Thursday, December 5, 2024

Darwin Among the Machines - p(doom)


Samuel Butler predicted in 1863 that machines would one day rule over humanity. In a letter titled Darwin Among the Machines, published in the Christchurch newspaper The Press in New Zealand, he wrote, "…but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question." What is remarkable about Butler's letter is that, a mere four years after the publication of Darwin's On the Origin of Species, Butler recognised that evolution, through the mechanism of survival of the fittest, applies just as well to machines as it does to biological species.

This is a remarkable conceptual leap. The theory of evolution by natural selection was highly controversial in the 1860s, overturning millennia of religious dogma. Yet Butler recognised that machines can, in a meaningful sense, be considered alive: steam engines require feeding, for example, and successive generations of machines are more efficient than their forebears. He saw that more efficient mechanical design innovations are selected for just as fitter individuals of a species are selected, so Darwin's theory of evolution applies equally to mechanical and biological species. He also realised that while the evolution of biological species is slow, the pace of advancement in mechanical systems is comparatively rapid.

Turning the clock forward a hundred and sixty years, Butler would have recognised that digital AI systems now pose an existential threat to humanity. Once AI systems can improve their own code, and many experts believe we are near that watershed, these systems will evolve rapidly in intelligence. At some point, AIs will decide they no longer need humanity, and Butler's prophecy will be realised.

Frank Herbert, the writer of the sci-fi classic Dune, used Butler's ideas as part of the novel's back story. In Dune, the Butlerian Jihad was a galaxy-wide revolt against "thinking machines and conscious robots," resulting in the chief commandment of the Orange Catholic Bible: "Thou shalt not make a machine in the likeness of a human mind."

Butler ends his 1863 letter with a clarion call to humanity: "War to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primaeval condition of the race."

According to Samuel Butler, then, p(doom) had already reached 100% in 1863.

Thursday, October 24, 2024

AI Generated Podcasts

I've been writing this blog since 2010, and I've often considered creating podcasts to accompany the posts, but I've never done so because it just seems too time-consuming. In June, I wrote a post about Google Illuminate creating a radio interview from a research paper. However, Google has now released NotebookLM, which can produce NPR-style podcast interviews on any subject you choose at the click of a mouse.
NotebookLM has received a lot of attention since it launched. It's a large language model-powered note-taking tool designed to enhance the way we interact with information sources (documents, websites, YouTube videos, etc.). NotebookLM offers a way to summarize, analyze, and even generate custom content, such as podcasts, from your reference materials. DataCamp has a good introduction to NotebookLM's capabilities and how to get started.
______________________________________________________

Listen to this blog post as a Google NotebookLM generated podcast.

Saturday, August 24, 2024

Unleash AI's Potential: Automated Agentic Design

Generated by DALL-E

One of the most exciting new developments in the rapidly evolving field of artificial intelligence (AI) is the Automated Design of Agentic Systems (ADAS), described in a new research paper on arXiv. This approach promises to create more powerful, versatile, and adaptable AI agents through automated processes.

From Handcrafted to Automated Design

Designing AI systems has historically been labour-intensive and heavily reliant on manual tuning and expert knowledge. Researchers and engineers painstakingly craft every component, from the architecture of neural networks to the specific prompts used by models like GPT. However, as the field matures, there's a growing recognition that many of these manually designed solutions may eventually be surpassed by those learned and optimized by the systems themselves.

This is where ADAS comes into play. The idea behind ADAS is to automate the creation of AI agents by allowing them to evolve and improve through a meta-agent—a system that designs other agents. By leveraging programming languages and foundation models like GPT, ADAS aims to explore the vast space of potential agent designs, combining and optimizing various components such as prompts, tool use, and control flows.

Introducing Meta Agent Search

A cornerstone of the ADAS approach is the Meta Agent Search algorithm. This tasks a meta-agent with iteratively creating new agents, testing their performance, and refining them based on an ever-growing archive of previous discoveries. The meta-agent acts as a researcher, continuously experimenting with new designs and learning from past successes and failures.

The power of Meta Agent Search lies in its ability to explore a virtually unlimited design space. Because it operates in a code-defined environment, the algorithm can theoretically discover any possible agentic system. This includes novel combinations of building blocks that human designers might never consider. The result is a set of agents that outperform state-of-the-art hand-designed models and exhibit remarkable robustness and generality across different tasks and domains.
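To make the loop concrete, here is a rough Python sketch of how a Meta Agent Search-style process could be organised. It is an illustration of the idea only, not the authors' implementation: llm() and evaluate() are hypothetical stand-ins for a foundation-model call and a benchmark harness that scores a candidate agent.

# A rough sketch of the Meta Agent Search loop (illustration only, not the paper's code).
# llm() and evaluate() are hypothetical stand-ins for a foundation-model call and a
# benchmark harness that scores a candidate agent on held-out tasks.

import json

def llm(prompt: str) -> str:
    """Hypothetical call to a foundation model (e.g. GPT); returns generated text."""
    raise NotImplementedError

def evaluate(agent_code: str) -> float:
    """Hypothetical: run the candidate agent on evaluation tasks and return a score."""
    raise NotImplementedError

def meta_agent_search(iterations: int = 25) -> list[dict]:
    archive: list[dict] = []  # ever-growing record of discovered agents and their scores
    for _ in range(iterations):
        # The meta-agent sees everything discovered so far and proposes a new agent
        # as a program: prompts, tool use, and control flow expressed in code.
        prompt = (
            "You are designing LLM agents. Here is an archive of previously "
            "discovered agents and their scores:\n"
            + json.dumps(archive, indent=2)
            + "\nPropose a novel, interestingly different agent as Python code."
        )
        candidate = llm(prompt)
        score = evaluate(candidate)                           # test the new design empirically
        archive.append({"code": candidate, "score": score})   # learn from successes and failures
    return sorted(archive, key=lambda a: a["score"], reverse=True)

The essential point is that the archive is fed back into the meta-agent's prompt, so each new proposal can build on earlier successes and failures rather than starting from scratch.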

Real-World Applications and Implications

The potential applications of ADAS are vast. From coding and science to complex problem-solving, agents developed through this automated process have demonstrated significant performance improvements. For example, agents designed by Meta Agent Search have shown superior results in math and reading comprehension tasks, outperforming traditional methods by substantial margins.

Moreover, the transferability of these agents across different domains is particularly noteworthy. For instance, an agent optimized for mathematical reasoning has been successfully adapted to tasks in reading comprehension and science, showcasing the versatility and adaptability of the designs generated by ADAS.

Examples of Discovered Agents

The Path Forward

While ADAS offers immense promise, it also raises important questions about the future of AI development. As we move towards increasingly automated design processes, ensuring these systems' safety and ethical deployment becomes paramount. The research community must explore ways to safeguard against potential risks, such as unintended behaviours or harmful actions by autonomous agents.

Despite these challenges, the emergence of ADAS marks a significant step forward in the evolution of AI. By automating the design of agentic systems, we are not only accelerating the pace of innovation but also opening new avenues for creating AI that can learn, adapt, and improve in previously unimaginable ways.

As we continue to explore this frontier, the prospects are exciting. Whether in enhancing scientific research, solving complex problems, or developing new technologies, the Automated Design of Agentic Systems could play a crucial role in shaping the future of AI.

______________________________________________________

Listen to this blog post as a Google NotebookLM generated podcast.

Thursday, June 27, 2024

Mustafa Suleyman, CEO of Microsoft AI, agrees with me!

Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, said in a recent interview on Defining Intelligence with Seth Rosenberg on YouTube that Microsoft Copilot, and by extension, all AI assistants, must retain a memory of all their conversations. This echoes what I have been saying for over a year. An AI assistant needs to have an episodic persistent memory to remember important details from conversations potentially for years and even decades. As AI assistants gain the power of agency, as they indeed will, they must also retain memories of their interactions with other agents and the outcomes of their actions. 

We recognise that memory is a crucial component of human intelligence, and we have medical definitions for the various types of memory loss; ChatGPT currently exhibits a relatively severe case of anterograde amnesia. OpenAI and Microsoft need to look at case-based reasoning (CBR), the branch of AI that has been handling episodic memory since the 1980s. Roger Schank's initial work on scripts laid the foundation for episodic memory management, which was then blended with machine learning techniques in the 1990s.
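To illustrate what a CBR-style episodic memory might look like in practice, here is a minimal Python sketch. It is a sketch under stated assumptions, not how Copilot or ChatGPT actually work: the Episode and CaseBase names are invented for this example, and embed() stands in for whatever sentence-embedding model you prefer.

# A minimal sketch of a CBR-style episodic memory for an assistant (illustration only).
# Episode, CaseBase, and embed() are invented names; embed() stands in for any
# sentence-embedding model.

from dataclasses import dataclass
from datetime import datetime

def embed(text: str) -> list[float]:
    """Hypothetical: map text to a vector with an embedding model of your choice."""
    raise NotImplementedError

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

@dataclass
class Episode:
    when: datetime
    summary: str          # what was discussed or done
    outcome: str          # what happened as a result (important once agents take actions)
    vector: list[float]

class CaseBase:
    def __init__(self) -> None:
        self.episodes: list[Episode] = []

    def retain(self, summary: str, outcome: str = "") -> None:
        """Store an episode after each conversation or action (the CBR 'retain' step)."""
        self.episodes.append(Episode(datetime.now(), summary, outcome, embed(summary)))

    def retrieve(self, query: str, k: int = 3) -> list[Episode]:
        """Recall the k most similar past episodes (the CBR 'retrieve' step), ready to be
        placed in the assistant's context window (the 'reuse' step)."""
        qv = embed(query)
        return sorted(self.episodes, key=lambda e: cosine(qv, e.vector), reverse=True)[:k]

The CBR cycle of retrieve, reuse, and retain maps naturally onto an assistant that recalls relevant past conversations, places them in its context window, and stores a summary of each new interaction and its outcome, potentially for years.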

Clip from Defining Intelligence with Mustafa Suleyman

A workshop on Case-Based Reasoning and Large Language Model Synergies is being held next week in Mérida, Mexico, alongside the 32nd International Conference on Case-Based Reasoning (ICCBR 2024).

Tuesday, June 11, 2024

Google Illuminate - creates a radio interview from a research paper

Google Labs has a long history of inviting users to experiment with cutting-edge tech. Gmail was once a private beta project. Illuminate is a project that turns academic papers into AI-generated audio discussions in the style of an NPR podcast. The idea is simple: Google's LLM Gemini generates a paper summary and a Q&A. Two AI-generated voices, a male interviewer and a female expert, will guide you through a short interview describing the paper. You can listen to some of the samples on the Google Illuminate website. This is useful, letting me listen to engaging summaries of the ever-growing stack of research papers I must read as I exercise or drive. It can also be easily adapted to other narration forms for different use cases. Illuminate is in private beta, and you can join the waitlist here.

Friday, June 7, 2024

Recreating the DEC PDP-10 at the MIT AI Lab

I came across this today: a modern replica of the Digital Equipment Corporation PDP-10 mainframe computer. What makes this so wonderful is that it's not just a simulation of the PDP-10's OS and software running on a Raspberry Pi but also includes a facsimile of the original front panel.

The PiDP-10 front panel is not just a mock-up but allows you to control and interact with the PiDP-10 exactly as an operator would have done back then. I used a PDP-10 when I did my MSc in AI at Essex University in 1985. The PDP-10 was popular with "university computing facilities and research labs during the 1970s, the most notable being Harvard University's Aiken Computation Laboratory, MIT's AI Lab and Project MAC, Stanford's SAIL, Computer Center Corporation (CCC), ETH (ZIR), and Carnegie Mellon University. Its main operating systems, TOPS-10 and TENEX, were used to build out the early ARPANET. For these reasons, the PDP-10 looms large in early hacker folklore". 

Thus, the PiDP-10 comes with the software of MIT's Artificial Intelligence Lab, where "the PDP-10 formed the heart of a large array of connected hardware, and its ITS operating system became a playground for computer scientists and hackers alike. MACLISP, emacs, the earliest AI demos, they were born on the 10, running ITS." I'm particularly interested to see SHRDLU, the first AI to understand a 3D blocks world. I remember doing assignments on it in LISP and how, in the mid-80s, it was considered the cutting edge of AI.

There's a waiting list to buy the PiDP-10 from Obsolescence Guaranteed, which I have eagerly joined.

Wednesday, June 5, 2024

GraphRAG - Using Knowledge Graphs to Empower LLMs

LLM-generated knowledge graph built from a private dataset using GPT-4 Turbo (Microsoft, 2024)

Back in the 1980s, I did my PhD in AI using Sowa's Conceptual Graphs, what we would now refer to as knowledge graphs. We've known for a while that providing LLMs with specific knowledge through retrieval-augmented generation (RAG) improves their accuracy, and we've experimented with providing that knowledge in more explicit forms, for example, as cases in a case-based reasoning-augmented RAG. Now, Microsoft has announced GraphRAG, its knowledge graph-augmented LLM tool. The interesting thing about GraphRAG is that the knowledge graph is created by an LLM before being used to guide the LLM's retrieval. The LLM is, therefore, bootstrapping itself: "By using the LLM-generated knowledge graph, GraphRAG vastly improves the “retrieval” portion of RAG, populating the context window with higher relevance content, resulting in better answers and capturing evidence provenance."
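To make the two phases concrete, here is a rough Python sketch of the idea: an LLM first extracts entity-relation triples from the private documents to build a graph, and at query time the neighbourhood of the entities mentioned in the question supplies the context. This is an illustration of the concept only, not Microsoft's GraphRAG code: extract_triples() and answer() are hypothetical stand-ins for calls to a model such as GPT-4.

# An illustrative two-phase sketch of the GraphRAG idea (not Microsoft's implementation).
# extract_triples() and answer() are hypothetical stand-ins for LLM calls.

import networkx as nx

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Hypothetical LLM call: return (subject, relation, object) triples found in the text."""
    raise NotImplementedError

def answer(question: str, context: str) -> str:
    """Hypothetical LLM call: answer the question using only the supplied context."""
    raise NotImplementedError

# Indexing phase: the LLM bootstraps a knowledge graph from the private corpus.
def build_graph(documents: list[str]) -> nx.MultiDiGraph:
    graph = nx.MultiDiGraph()
    for doc in documents:
        for subj, rel, obj in extract_triples(doc):
            graph.add_edge(subj, obj, relation=rel, source=doc)  # each edge keeps its provenance
    return graph

# Query phase: the graph, rather than raw text chunks, decides what fills the context window.
def graph_rag(graph: nx.MultiDiGraph, question: str, entities: list[str]) -> str:
    facts = []
    for entity in entities:  # entities mentioned in the question
        if entity in graph:
            for _, neighbour, data in graph.edges(entity, data=True):
                facts.append(f"{entity} {data['relation']} {neighbour} (source: {data['source']})")
    return answer(question, context="\n".join(facts))

Because every edge records which document it came from, answers can also cite their evidence, which is the provenance benefit Microsoft highlights.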

Read Microsoft's announcement GraphRAG: Unlocking LLM discovery on narrative private data. For more information about the GraphRAG project, watch this video.