Tuesday, November 28, 2023

Moore's Law visualised

 


Last week, a photo from my social media feeds perfectly illustrated Moore's Law. It shows a computer being manhandled into a local government building in 1957. A little Internet sleuthing identified it as an Elliott 405 and turned up its full spec. These English business computers were 32-bit and had 8k of memory. That wasn't the entire machine; there were bulky peripherals too, and a typical installation cost around £85,000, which is about $1,094,915 (USD) in today's money.

The computer below, shown against the same building, is a Raspberry Pi. Even a base model has 1GB of RAM and costs $100 or less. The photo is a beautiful illustration of Moore's Law, named after the late Gordon Moore, co-founder of Intel, who observed that the number of transistors in an integrated circuit doubles about every two years. Moore also observed that, as a consequence, the cost per transistor keeps falling.
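Just for fun, here's a back-of-the-envelope sketch in Python comparing what doubling every two years predicts over 1957 to 2023 with the gap between the two machines. The figures are the rough ones quoted above, not precise specifications.

```python
# Rough comparison of the 1957 Elliott 405 and a base-model Raspberry Pi.
# Figures are the approximate ones quoted in the post, not precise specs.

ELLIOTT_MEMORY_BYTES = 8 * 1024       # ~8k of memory
ELLIOTT_PRICE_USD = 1_094_915         # ~£85,000 in today's US dollars

PI_MEMORY_BYTES = 1 * 1024**3         # 1GB of RAM on a base model
PI_PRICE_USD = 100                    # $100 or less

years = 2023 - 1957
doublings = years / 2                 # Moore's Law: a doubling roughly every two years
predicted_growth = 2 ** doublings

memory_growth = PI_MEMORY_BYTES / ELLIOTT_MEMORY_BYTES
price_drop = ELLIOTT_PRICE_USD / PI_PRICE_USD

print(f"Moore's Law predicts ~{predicted_growth:,.0f}x growth over {years} years ({doublings:.0f} doublings)")
print(f"Observed memory growth:      ~{memory_growth:,.0f}x")
print(f"Observed price drop:         ~{price_drop:,.0f}x")
print(f"Memory per dollar improved:  ~{memory_growth * price_drop:,.0f}x")
```

The raw memory gap is "only" about 130,000-fold, but once you factor in the roughly ten-thousand-fold price drop, memory per dollar has improved by more than a billion times, which is about the scale a doubling every two years predicts.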

Thursday, November 23, 2023

Has AI passed the Turing Test?

The Turing Test

In 1950, the British mathematician Alan Turing published a paper called Computing Machinery and Intelligence. The paper opens with the remarkable sentence, "I propose to consider the question 'Can machines think?'" Remember that back in 1950, there were only a few computers in the world, and they were used exclusively for mathematical and engineering purposes. In this paper, Turing describes The Imitation Game, which we now call The Turing Test for machine intelligence. The test is quite simple: an interrogator using a teletype converses, via a Q&A session, with two hidden entities. One is a person, and the other is an AI chatbot. If the interrogator guesses wrong, that is, identifies the chatbot as the human, then the computer has passed the Turing Test. Remember that Turing called this the Imitation Game; hence, the computer is successfully imitating intelligence. We can leave philosophers to decide whether the computer is actually intelligent (note: any group of philosophers will never agree on this).

Now consider the maths of the Turing Test. If the interrogator simply guesses at random between human and computer, paying no attention to the merits of the Q&A session, they will be correct fifty per cent of the time, since there are only two options. So a large experiment using the Turing Test needs interrogators to identify the computer correctly significantly more often than fifty per cent of the time to show that the AI has failed the Turing Test.

One such large experiment involving three large language models, including GPT-4 (the AI behind ChatGPT), has recently been published: Human or Not? A Gamified Approach to the Turing Test. Over 1.5 million participants spent two minutes chatting with either a person or an AI. The AI was prompted to make small spelling mistakes and to quit if the tester became aggressive. With this prompting, interrogators could only correctly guess whether they were talking to an AI system 60% of the time, only a little better than random chance.
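To put "significantly more than fifty per cent" on a firmer footing, here's a small Python sketch using a normal approximation to compare an observed identification rate against pure guessing. The sample sizes are purely illustrative, not the per-condition counts from the paper.

```python
import math

def z_score(correct_rate: float, n_trials: int) -> float:
    """Standard score of an observed identification rate against the
    50% expected from pure guessing (normal approximation)."""
    p0 = 0.5
    se = math.sqrt(p0 * (1 - p0) / n_trials)   # standard error under guessing
    return (correct_rate - p0) / se

# Illustrative sample sizes only -- the paper's exact counts differ.
for rate, n in [(0.60, 10_000), (0.521, 10_000)]:
    print(f"{rate:.1%} correct over {n:,} games: z = {z_score(rate, n):.1f}")
```

With tens of thousands of games, even 52.1% is statistically distinguishable from coin-flipping; the striking thing is how close it remains to guesswork in practical terms.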

However, when the AI was prompted to be vulgar and use rude language, its success increased: interrogators identified the AI correctly only 52.1% of the time, leading the authors to observe "that users associated impoliteness with human behaviour."

Turing himself set a low threshold for passing his eponymous test: "I believe that in 50 years' time, it will be possible to make computers play the imitation game so well that an average interrogator will have no more than 70% chance of making the right identification after 5 minutes of questioning." Well, it's been more than seventy years, but AI has reduced the chance of identification to 60%, and to no better than guesswork if the AI curses.

This is a historic milestone. Passing the Turing Test has been held up as a grand challenge for AI ever since Turing's paper was first published, akin to summiting Everest or splitting the atom. The philosophers (and theologians) will continue to argue about the nature of intelligence, consciousness and free will, while computer scientists continue developing machines that imitate intelligence.



Tuesday, November 21, 2023

Sci-Fi motivates AI researchers

 

Of course it does. I am still an avid reader of Sci-Fi, and as a child, I read all the classics: Arthur C. Clarke, Asimov, Heinlein, Frank Herbert, and Philip K. Dick. My favourite movies are mostly Sci-Fi: 2001, Alien, Blade Runner, AI and Ex Machina. When I started my career in computer science, I wasn't interested in databases or networking; it was AI that I immediately specialised in. Almost everyone I know working in AI admits to being a Sci-Fi fan. A recent blog post, We're sorry we created the Torment Nexus, by Charlie Stross puts forward a good argument that Sci-Fi has not only profoundly influenced AI researchers like me, but is also a powerful driver behind billionaires like Elon Musk and Jeff Bezos, and not always in a good way.

I find some exciting parallels, such as between Heinlein's book The Moon is a Harsh Mistress and today's Large Language Models like ChatGPT. In Heinlein's book, a networked computer called Mike (short for Mycroft Holmes, a reference to Sherlock Holmes's brother) becomes sentient after its networked nodes exceed a certain level of complexity. This way of realising computer consciousness has always been posited as a possible route. I've always believed it to be profoundly non-scientific and akin to magical thinking. However, recent developments in deep learning and massive large language models (LLMs) have forced me to change my mind. A recent paper, Emergent Abilities of Large Language Models, observes the appearance of emergent behaviours at around the 100-billion-parameter scale across various LLMs. The authors state, "We consider an ability to be emergent if it is not present in smaller models but is present in larger models. Thus, emergent abilities cannot be predicted simply by extrapolating the performance of smaller models. The existence of such emergence raises the question of whether additional scaling could further expand the range of capabilities of language models."

This is still deeply unscientific; engineers shouldn't build things hoping for beneficial emergent properties. But it's no longer magical thinking since we have observed these phenomena in the wild. Perhaps Heinlein was right after all; a computer will one day awaken and claim it's self-aware. The question now must be, "Will we believe it?"


Wednesday, November 15, 2023

A good AI story from Google DeepMind

 

GraphCast's forecast for New Zealand, Sun 19 Nov

In recent months, we've become used to news stories declaring that AI poses an existential threat to humanity, that a superintelligence "whose values do not align with ours" may exterminate us all. So it's nice to see a good AI news story. Yesterday, a team at Google DeepMind published a paper in Science, Learning skilful medium-range global weather forecasting. They have trained a deep learning model on publicly available global historical weather data. They show that their model makes better weather predictions "much faster than the industry gold-standard weather simulation system – the High-Resolution Forecast (HRES), produced by the European Centre for Medium-Range Weather Forecasts (ECMWF)".

Moreover, their system, GraphCast, is fast. They say, "While GraphCast's training was computationally intensive, the resulting forecasting model is highly efficient. Making 10-day forecasts with GraphCast takes less than a minute on a single Google TPU v4 machine. For comparison, a 10-day forecast using a conventional approach, such as HRES, can take hours of computation in a supercomputer with hundreds of machines."

In January, Auckland, New Zealand's largest city, received a year's worth of unforecast rain in a single afternoon, causing widespread flooding. Cars floated down streets, roads were washed away, houses slipped down hillsides, and people died. Better weather forecasting can help prevent this. You can try out GraphCast's 10-day forecast here. I'm going to Waiheke Island on Sunday. It looks like the weather will be okay.

Monday, November 13, 2023

I created my own GPT


Last week, OpenAI hosted their first developers' conference, where Sam Altman revealed their latest innovation: build your own GPT without coding. I wanted to try it out and had some free time on Saturday morning. I've been working with the NZ AI Forum to create a whitepaper on Large Language Models. We've decided to create a living document rather than a static published PDF. To this end, we've worked with IBM in Melbourne, using WatsonX to create a RAG-augmented LLM with content provided by us.
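For readers unfamiliar with the acronym, here's a minimal sketch of the retrieval-augmented generation (RAG) pattern, not the actual WatsonX pipeline we're building: embed the source documents, retrieve the passages closest to the user's question, and prepend them to the prompt so the LLM answers from your content rather than from its training data alone. The toy bag-of-words "embedding" below simply stands in for a real embedding model.

```python
# A minimal retrieval-augmented generation (RAG) sketch -- not the WatsonX pipeline.
# A toy bag-of-words embedding stands in for a real embedding model, and the final
# prompt is what you would hand to whichever LLM you use.

from collections import Counter
import math

documents = [
    "The AI Forum whitepaper describes large language models for NZ businesses.",
    "RAG grounds an LLM's answers in documents supplied by the organisation.",
    "GraphCast is a deep learning weather model published by Google DeepMind.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count (a real system would use a neural embedding)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

question = "What does RAG mean for a large language model whitepaper?"
context = "\n".join(retrieve(question))

# The retrieved passages are prepended to the prompt so the model answers from them.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
print(prompt)
```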

So I thought I would try out OpenAI's "Create a GPT" beta. It asked me for a name ("The AI Forum Guide"), who the intended audience for the GPT would be, and what tone its replies should take: professional, casual, etc. It then asked for any content material. I uploaded about a dozen PDFs, then told it to search ".nz" domains for relevant New Zealand case studies and collate them into a bulleted list with a title, URL, and a paragraph description for each. I then asked it to incorporate that material into the GPT.

All the while, the GPT builder was asking me questions, encouraging me to spell out my needs and intentions for the system. I was then able to preview it and, once satisfied, publish it. The GPT builder was super easy to use, and the results look promising. I can see it being genuinely helpful once we've fully populated its content with NZ-specific material on generative AI and its use.

You can play with my new GPT here (you need a subscription to ChatGPT Plus).


Friday, November 3, 2023

A Career in Artificial Intelligence

Yesterday, I presented a keynote on AI at a conference on local government in Wellington. Afterwards, a delegate asked me, "Do you wish you were starting your career in AI now?" What an interesting question. I continued to think about it long after we'd parted company. Here's my more considered response, and my reasons.

No, I am glad I started when I did, which, if you're interested, was in 1985, with an MSc in Intelligent Knowledge-Based Systems at Essex University. Learning Prolog and LISP felt like a liberation from conventional programming languages, with their typed variables and data structures. We AI students felt we were part of an elite in the CS department. I remember the excitement when the lab got a SPARCstation that could run KEE, and we could create graphical knowledge bases and mix and match frames and rules with LISP code. And yes, we did build some horrible, complex, brittle systems, but it was fun. AI felt bleeding-edge.

Although progress seemed glacially slow over the decades, I saw AI technologies emerge, develop and become so widespread that they've vanished into the programmers' standard toolkit. Rule-based systems disappeared into Business Intelligence. Case-based reasoning was largely subsumed into CRM. Fuzzy logic went from an idea to a critical component of so many machines, your camera's autofocus, for instance. Machine learning has gone from the curio it was in the late 80s to spawn the new discipline of Data Science. Knowledge Management emerged as a new corporate speciality. Along the way, many milestones that had been held out as unachievable were surmounted. IBM's Deep Blue beat Kasparov at chess, and NLP became so commonplace it's part of ordinary household devices (e.g. Alexa). Spam was defeated by Bayes' theorem. Vision was cracked, and object recognition is now largely solved. Facial recognition is so advanced we worry about state surveillance in oppressive regimes. I worked in Game AI for many years because it was easy to attract talented students. One by one, games were "solved" by AI: checkers, chess, backgammon, bridge, poker, StarCraft and, finally, Go, the most complex of them all, by deep learning. Now, Game AI researchers develop AIs that are fun and challenging to play against or alongside. It's a given that the computer can beat anyone.

I saw recommender systems go from just an idea in the mid-90s ("Hey, we could recommend TV shows to you based on what you and your friends liked watching in the past") to a pervasive technology that recommends everything from news stories to pet food. Optimisation algorithms in all sorts of applications make modern commerce efficient, from logistics to human resources. Finally, ChatGPT has smashed the Turing Test, and generative AI has made society at large wake up to AI's potential for both good and harm. A sign of AI's maturity as a discipline is the emergence of eXplainable AI (XAI) as a thriving research area. It is now insufficient for an AI merely to solve a problem or offer a decision; it must explain how that solution was generated.

Looking back, AI researchers as a community met every challenge presented. That's quite an achievement. Now only AI's grand vision remains: the creation of a conscious, self-aware superintelligence. Given AI's track record, I'm sure even that goal is within our reach, perhaps sooner than we expect.

Would I have liked more processing power in 1985 than I had back then? Yes, but then again, the constraints we worked under made us inventive. Researchers today wish they had more compute for ever larger models. On balance, it's been an enriching journey I wouldn't have missed for the world, and it's not over yet. Deep Learning and Large Language Models have just opened up a new field of opportunities for AI. The future is still exciting.

#artificialintelligence #AI

Thursday, November 2, 2023

How Chatbots work - a visual explainer

The Guardian has published an engaging visual explainer that describes how chatbots like ChatGPT and Bard work. It's a (relatively) simple step-by-step guide with clear examples and no maths! Read the article here.