Monday, May 20, 2024

ChatGPT now has a memory - but it's naive

Screenshot: OpenAI
Last year, during the hype surrounding Large Language Models (LLMs), I published a position paper and wrote in this blog that LLMs like ChatGPT would need a persistent memory of their conversations to be most helpful. It's tough to converse intelligently with somebody who has no memory. We value old friends so much because we know they recall events relevant to us, both good and bad, going back many years or even decades.

However, managing that memory responsibly is a nontrivial task. Moreover, if virtual assistants based on LLMs become part of our daily lives, as it seems they will, their memory may have to be maintained over many years, perhaps even decades. I don't believe ChatGPT's memory management will be sufficient for this task.

My research has primarily focused on case-based reasoning (CBR), a memory-based method. Interestingly, as a research community we didn't initially consider how our case bases (the memory) should be maintained over time, because in the early years of the discipline we were focused on building systems. Only when those systems matured did we realise that our memories needed to be maintained. This happened in the late 1990s and centred on the work of Wilson and Leake, for example "Categorizing Case-Base Maintenance: Dimensions and Directions". That work sparked a new line of research within CBR, and "Remembering to Forget" became a memorable paper title.

Consider this scenario: you've asked ChatGPT to remember your partner's name and that they like dark chocolate. You subsequently break up and acquire a new partner who prefers milk chocolate. You later ask ChatGPT to advise on buying a present, and it recommends dark chocolate in a gift box. Its memory is out of date, and the recommendation is inappropriate. The event of breaking up with your previous partner should have triggered a memory management process. Triggers like these are part of the comprehensive framework for maintaining memories set out in Wilson and Leake's paper.
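To make the idea concrete, here is a minimal Python sketch of my own (not how ChatGPT actually works): each memory records which entities it depends on, and a life event triggers a maintenance step that retires the memories it invalidates, rather than leaving them to go stale.

from dataclasses import dataclass, field

@dataclass
class Memory:
    key: str
    value: str
    depends_on: set = field(default_factory=set)  # entities this memory is about

class MemoryStore:
    """A toy conversational memory with event-driven maintenance."""

    def __init__(self):
        self.memories = {}

    def remember(self, key, value, depends_on=()):
        self.memories[key] = Memory(key, value, set(depends_on))

    def on_event(self, departed_entity):
        # Maintenance trigger: retire every memory that depends on an
        # entity that is no longer current (e.g. an ex-partner).
        stale = [k for k, m in self.memories.items()
                 if departed_entity in m.depends_on]
        for k in stale:
            del self.memories[k]
        return stale

store = MemoryStore()
store.remember("partner_name", "Alex", depends_on={"partner:Alex"})
store.remember("chocolate_preference", "dark", depends_on={"partner:Alex"})

# The break-up is an event that should trigger maintenance,
# not just another note on the pad.
print(store.on_event("partner:Alex"))  # ['partner_name', 'chocolate_preference']

store.remember("partner_name", "Sam", depends_on={"partner:Sam"})
store.remember("chocolate_preference", "milk", depends_on={"partner:Sam"})

Even this toy version captures something a flat notepad lacks: memories know what they are about, so an event can find and retire the ones it invalidates.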

OpenAI describes ChatGPT's memory in an FAQ webpage, and the design is naive in its simplicity. The memory is described as a "notepad", with individual memories jotted down sequentially on it. Users can review and delete individual memories. But this is far too simplistic an approach for managing an AI assistant's memory that may have to span many years. An AI assistant's memory must be structured, and policies and procedures will be required to manage it. OpenAI and others who build AI assistants with long-term memories should draw upon the expertise of case-based reasoners, who have been managing memory for decades. Otherwise, they are in danger of reinventing the wheel.
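What might "structured" mean in practice? As a hedged illustration (my own sketch, not a design from OpenAI or a specific proposal from the CBR literature), each memory could carry provenance and freshness metadata, giving maintenance policies something to operate on:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class StructuredMemory:
    subject: str            # who or what the memory is about
    attribute: str          # e.g. "chocolate_preference"
    value: str
    source: str             # the conversation it was learned from
    last_confirmed: datetime

def needs_review(memory, max_age=timedelta(days=365)):
    # A periodic maintenance policy: flag memories that haven't been
    # confirmed recently, so the assistant can re-check rather than
    # silently act on stale information.
    return datetime.now() - memory.last_confirmed > max_age

m = StructuredMemory("partner", "chocolate_preference", "dark",
                     "chat of 2022-11-03", datetime(2022, 11, 3))
print(needs_review(m))  # True: confirm before recommending dark chocolate

A flat notepad can only be read or deleted; metadata like this is what makes policies for reviewing, refreshing, and forgetting memories possible at all.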

Coincidentally, I've just been reading Why We Remember by Charan Ranganath. This book provides a fascinating insight into how the brain processes memories and highlights how little we currently know about this crucial aspect of ourselves.
