Wednesday, February 21, 2024

Call for Papers: Workshop on CBR and LLMs

[Image: generated by Gemini]
Last year was the most remarkable year in AI that I can recall. Large Language Models (LLMs) like ChatGPT changed the public perception of AI, and what had previously seemed like science fiction became reality. I was only tangentially familiar with LLM research, having been working on emotion recognition in speech with a PhD student. Last year, however, I started diving into LLM research in depth, which, as one commentator put it, was like trying to drink from a fire hydrant, such was the volume of publications appearing in places like arXiv.

I view all problems through a lens coloured by case-based reasoning (CBR), my long-term AI research speciality. I quickly saw synergies between CBR and LLMs where both could benefit from each other's approaches, and I wrote up my initial thoughts and published them on arXiv.

CBR has an annual international conference, and I proposed the idea of a workshop on CBR-LLM synergies to some colleagues, who all thought it was a great idea and agreed to co-organise the workshop with me. The Case-Based Reasoning and Large Language Models Synergies Workshop will take place at ICCBR 2024 in Mérida, Yucatán, México, on July 1st, 2024. The Call for Papers can be accessed here, and submissions are via EasyChair.

Thursday, February 15, 2024

A Long-term Memory for ChatGPT

[Image: generated by Gemini]

In October last year, I published a short position paper, A Case-Based Persistent Memory for a Large Language Model, arguing that ChatGPT and other LLMs need a persistent long-term memory of their interactions with a user to be truly useful. It seems OpenAI was listening because, a couple of days ago, they announced that ChatGPT would retain a persistent memory of chats across multiple conversations. As reported in Wired, the memory will be used to add helpful background context to your prompts, improving their specificity to you over time. I argued in my October paper that the LLM community should look to the Case-Based Reasoning community for help with memory, since we are the discipline within AI that has been explicitly concerned with memory for decades. For example, we long ago realised that while remembering is vital, a memory must also be able to forget some things to remain functional. This is a non-trivial problem discussed in Smyth and Keane's 1997 paper Remembering To Forget: A Competence-Preserving Case Deletion Policy for Case-Based Reasoning Systems. The synergies between CBR and LLMs will be the focus of a workshop at ICCBR-24 in July in Mérida, Yucatán, México.
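To give a flavour of what competence-preserving forgetting means, here is a minimal, purely illustrative sketch in the spirit of Smyth and Keane's idea. Cases are hypothetical 1-D numbers, a case "solves" a target if it lies within an adaptation threshold, and the category names echo (but greatly simplify) the paper's coverage/reachability analysis; none of this is the paper's actual algorithm.

```python
# Illustrative sketch: delete the cases whose competence contribution is
# redundant, never the "pivotal" ones that nothing else can solve.
THRESHOLD = 2.0  # assumed adaptation limit: a case solves targets this close

def solves(a, b):
    """A case solves a target if it can be adapted within the threshold."""
    return a != b and abs(a - b) <= THRESHOLD

def coverage(case, base):
    """Cases in the base that `case` could solve by adaptation."""
    return {c for c in base if solves(case, c)}

def reachability(case, base):
    """Cases in the base that could solve `case` by adaptation."""
    return {c for c in base if solves(c, case)}

def classify(case, base):
    cov, reach = coverage(case, base), reachability(case, base)
    if not reach:
        return "pivotal"    # nothing else solves it: deleting loses competence
    if any(cov <= (coverage(r, base) | {r}) for r in reach):
        return "auxiliary"  # coverage fully subsumed by a neighbour: safe to forget
    return "spanning"       # partially overlapping coverage: riskier to forget

def forget(base):
    """Delete one case, preferring the least competence-critical category."""
    for category in ("auxiliary", "spanning", "pivotal"):
        for c in sorted(base):
            if classify(c, base) == category:
                base.remove(c)
                return c
```

With a tight cluster {1.0, 1.5, 2.0, 3.0} plus an outlier 10.0, the outlier is pivotal (only it can solve problems near 10), while the cluster members are mutually redundant, so `forget` removes one of those first.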

Thursday, January 4, 2024

Intelligent Agents: the transformative AI trend for 2024

[Image: generated by DALL-E]
As we move into 2024, the spotlight in AI will increasingly be on Intelligent Agents. As outlined in the influential paper by Wooldridge and Jennings (1995), agents are conceptualized as systems with autonomy, social ability, reactivity, and pro-activeness. Their evolution signifies a shift from mere tools to entities that can perceive, interact, and take initiative in their environments, aligning with the vision of AI as a field aiming to construct entities exhibiting intelligent behaviour.

The fusion of theory and practice in agent development is critical. Agent theories focus on conceptualizing and reasoning about agents' properties, architectures translate these theories into tangible systems, and languages provide the framework for programming these agents. This triad underpins the development of agents that range from simple automated processes to systems embodying human-like attributes such as knowledge, belief, and intention.
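The properties above can be caricatured in a few lines of code. This is a purely illustrative sketch with hypothetical names, not a standard API: a perceive-decide-act loop in which reactivity (responding to events) takes priority, pro-activeness (pursuing goals) fills the gaps, and autonomy lies in the loop choosing for itself.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    goals: list = field(default_factory=list)   # drives pro-active behaviour
    log: list = field(default_factory=list)     # record of actions taken

    def perceive(self, environment):
        # Reactive input: notice whatever event the environment presents.
        return environment.get("event")

    def decide(self, percept):
        if percept is not None:
            return f"react:{percept}"            # reactivity takes priority
        if self.goals:
            return f"pursue:{self.goals.pop(0)}" # pro-activeness: own initiative
        return "idle"

    def act(self, action):
        self.log.append(action)
        return action

    def step(self, environment):
        # Autonomy: one perceive-decide-act cycle, chosen by the agent itself.
        return self.act(self.decide(self.perceive(environment)))
```

Given a goal like "file expense report", such an agent reacts when something happens and otherwise gets on with its goals, which is exactly the theory-to-architecture translation described above, stripped to its skeleton.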

Ethan Mollick's exploration of GPTs (Generative Pre-trained Transformers) as interfaces to intelligent agents adds a contemporary dimension to this conversation. GPTs, in their current state, demonstrate the foundational capabilities for agent development - from structured prompts facilitating diverse tasks to integration with various systems. As envisioned by Wooldridge, Jennings, and Mollick, the future points towards agents integrated with a myriad of systems capable of tasks like managing expense reports or optimizing financial decisions.

Yet, this promising future has its challenges. The road to developing fully autonomous intelligent agents is fraught with technical and ethical considerations. Issues like logical omniscience in agent reasoning, the relationship between intention and action, and managing conflicting intentions remain unresolved. Mollick raises concerns about the vulnerabilities and risks in an increasingly interconnected AI landscape.

The explosion in agents will be fuelled (like throwing gasoline on a fire) by the opening of OpenAI's GPT Store sometime in early 2024. Many online pundits will think "agents" are a new thing! But as this post shows, the ideas and a vast body of AI research date back to the mid-1990s and early 2000s, as exemplified by the Trading Agent Competition.

Intelligent Agents represent a transformative trend in AI for 2024 and beyond. Their development, grounded in a combination of theoretical and practical advancements, paves the way for a future where AI is not just a tool but a proactive, interactive, and intelligent entity.