Saturday, December 23, 2023

AI is (not) a bubble


Image generated by DALL-E
2023 has been an unprecedented year for Artificial Intelligence (AI). I know this because I have worked in the field since 1985 and have never seen AI get so much attention in the media. This is due to the release of ChatGPT and other generative AI applications based on Large Language Models, which have captured the public's attention like never before. Predictably, many pundits are naysayers, declaring that AI is a bubble bound to burst, leaving fortunes in tatters and start-ups bankrupt. Undeniably, there is a small and finite market for apps that help students cheat on their essays or create the perfect dating-site profile. However, AI is not a bubble.

This blog post by Cory Doctorow, What Kind of Bubble is AI?, is typical, making the common error of conflating AI with Large Language Models (LLMs) like ChatGPT. ChatGPT is merely one type of AI, a field with a 70+ year research and development history. Your smartphone map app uses the A* algorithm to find your route from A to B; it was developed at the Stanford Research Institute (SRI) in 1968 (the same place that later spun out Apple's Siri). Fuzzy logic manages the autofocus in your phone's camera. Case-based reasoning supplies knowledge to the help-desk operator when you call 0800, and there are countless other examples of different AI methods embedded in all aspects of modern society. We AI people call Large Language Models "Foundation Models" because they provide a foundation on which other AIs can build a two-way multimodal conversational interface. Yes, they are expensive to build and train, but as the name suggests, you only need a few "Foundation" models to underlie a multitude of applications. This is a genuine breakthrough that will have a lasting impact on the uptake of AI once essay-cheating apps fall out of the public's focus.
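To give a feel for how venerable and well understood these techniques are, here is a minimal sketch of A* on a toy grid. The grid and coordinates are hypothetical, purely for illustration; real navigation apps run the same algorithm over road-network graphs with travel-time costs.

    # Minimal A* pathfinding on a 4-connected grid (illustrative only).
    # Cells marked 1 are blocked; the heuristic is Manhattan distance,
    # which is admissible here, so the path found is optimal.
    import heapq

    def a_star(grid, start, goal):
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_heap = [(h(start), 0, start, [start])]  # (f, g, node, path)
        best_g = {}
        while open_heap:
            f, g, node, path = heapq.heappop(open_heap)
            if node == goal:
                return path
            if node in best_g and best_g[node] <= g:
                continue  # already reached this cell more cheaply
            best_g[node] = g
            r, c = node
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                    nxt = (nr, nc)
                    heapq.heappush(open_heap,
                                   (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None  # no route exists

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(a_star(grid, (0, 0), (2, 0)))
    # -> [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]

Fifty-odd years on, the whole algorithm still fits on a page; what changed is the portable hardware to run it on.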

Cory Doctorow's blog post, for example, says that "Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments." This mistakenly assumes that medical image analysis uses the same techniques as LLMs like ChatGPT. It does not: medical image analysis is a mature application domain built on rigorously tested machine-learning algorithms that do not "guess" or "hallucinate". A recently published paper, Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging, by Reabal Najjar (Diagnostics 2023, 13, 2760, https://doi.org/10.3390/diagnostics13172760), details the development of AI-assisted medical imaging. The article clearly shows that AI is now a fixture in medical image analysis and diagnosis, although there is always room for improvement.
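To make "rigorously tested" concrete, here is an illustrative sketch of the standard validation discipline: a classifier's output is scored against radiologist ground truth on held-out scans, yielding fixed, auditable numbers rather than free-form guesses. The labels and predictions below are made up for illustration; they are not from Najjar's paper.

    # Illustrative sketch (hypothetical data): validating a diagnostic
    # classifier against ground-truth labels on a held-out test set.
    def confusion_stats(y_true, y_pred):
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        sensitivity = tp / (tp + fn)  # fraction of real masses caught
        specificity = tn / (tn + fp)  # fraction of healthy scans cleared
        return sensitivity, specificity

    y_true = [1, 1, 1, 0, 0, 0, 0, 1]  # radiologist ground truth (made up)
    y_pred = [1, 1, 0, 0, 0, 1, 0, 1]  # model output on the same scans
    print(confusion_stats(y_true, y_pred))  # -> (0.75, 0.75)

A deployed system is only accepted once numbers like these clear regulatory thresholds on large, independent test sets, which is precisely the kind of accountability a free-form chatbot answer cannot offer.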

AI is just coming of age. ChatGPT has focused a spotlight on AI, which is now mature enough, and has enough processing power in the cloud, to succeed. Why wasn't A* a thing in the 1960s? Back then, there simply wasn't enough portable processing power (or GPS). 2024 is going to be the year of "agents". OpenAI's release of its GPT Builder, and an app store for GPTs that can interact with a myriad of online resources and tools, will focus attention on the notion of intelligent agents. Many ill-informed pundits will think this is a brand-new invention, whereas, once again, intelligent agents are a mature discipline within AI dating back to the mid-1990s. The review paper by Michael Wooldridge and Nicholas Jennings, Intelligent Agents: Theory and Practice (Knowledge Engineering Review 10(2), 1995), is an excellent place to realise that agents won't be a flash in the pan either.
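For readers new to the term: Wooldridge and Jennings characterise an agent as something that perceives its environment and acts on it autonomously in pursuit of its design objectives. A toy sketch of that sense-decide-act loop follows; the thermostat example and all the names in it are mine, purely illustrative, not from their paper.

    # Toy sketch of the classic sense-decide-act agent loop (hypothetical
    # thermostat "environment"); the point is the loop, not the domain.
    import random

    def perceive(environment):
        return environment["temperature"]

    def decide(temperature, setpoint=20.0):
        # Reactive rule: act to move the environment toward the goal state.
        if temperature < setpoint - 1:
            return "heat_on"
        if temperature > setpoint + 1:
            return "heat_off"
        return "idle"

    def act(environment, action):
        if action == "heat_on":
            environment["temperature"] += 0.5
        elif action == "heat_off":
            environment["temperature"] -= 0.5
        environment["temperature"] += random.uniform(-0.2, 0.2)  # drift

    env = {"temperature": 16.0}
    for step in range(10):  # the agent's control loop
        action = decide(perceive(env))
        act(env, action)
        print(f"step {step}: {action}, {env['temperature']:.1f} C")

A GPT wired up to tools is this same loop with an LLM in the "decide" slot and web APIs in the "act" slot; the architecture itself is thirty years old.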

Undeniably, there is a lot of hype around AI, but within the bubble is a solid core of mature technologies ready to be exploited by people with knowledge and imagination. 
