Tuesday, November 21, 2023

Sci-Fi motivates AI researchers


Of course, it does. I am still an avid reader of Sci-Fi, and as a child I read all the classics: Arthur C. Clarke, Asimov, Heinlein, Frank Herbert, and Philip K. Dick. My favourite movies are mostly Sci-Fi: 2001, Alien, Blade Runner, AI and Ex Machina. When I started my career in computer science, I wasn't interested in databases or networking; it was AI that I immediately specialised in. Almost everyone I know working in AI admits to being a Sci-Fi fan. A recent blog post, We're sorry we created the Torment Nexus by Charlie Stross, puts forward a good argument that Sci-Fi has not only profoundly influenced AI researchers like me, but is also a powerful driver behind billionaires like Elon Musk and Jeff Bezos, and not always in a good way.

I find some exciting parallels, such as between Heinlein's book The Moon is a Harsh Mistress and today's Large Language Models like ChatGPT. In Heinlein's book, a networked computer called Mike (short for Mycroft Holmes, a reference to Sherlock Holmes's brother) becomes sentient after its networked nodes exceed a certain level of complexity. This way of realising computer consciousness has always been posited as a possible method. I've always believed it to be profoundly non-scientific and akin to magical thinking. However, recent developments in deep learning and massive large language models (LLMs) have forced me to change my mind. A recent paper, Emergent Abilities of Large Language Models, observes the appearance of emergent behaviours at around the 100-billion-parameter scale across various LLMs. The authors state, "We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence raises the question of whether additional scaling could further expand the range of capabilities of language models."

This is still deeply unscientific; engineers shouldn't build things hoping for beneficial emergent properties. But it's no longer magical thinking, since we have now observed these phenomena in the wild. Perhaps Heinlein was right after all: a computer will one day awaken and claim it is self-aware. The question then becomes, "Will we believe it?"

