Yesterday, I presented a keynote on AI at a conference on local government in Wellington. Afterwards, a delegate asked me, "Do you wish you were starting your career in AI now?" What an interesting question. I continued to think about it long after we'd parted company. Here's my more considered response, and my reasons for it.
No, I am glad I started when I did, which, if you're interested, was in 1985, with an MSc in Intelligent Knowledge-Based Systems at Essex University. Learning Prolog and LISP felt liberating after conventional programming languages with their typed variables and data structures. We AI students felt we were part of an elite in the CS department. I remember the excitement when the lab got a SPARCstation that could run KEE, and we could create graphical knowledge bases and mix and match frames and rules with LISP code. And yes, we did build some horrible, complex, brittle systems, but it was fun. AI felt bleeding-edge.
Although progress seemed glacially slow over the decades, I saw AI technologies emerge, develop and become so widespread that they've vanished into the programmer's standard toolkit. Rule-based systems disappeared into Business Intelligence. Case-based reasoning was largely subsumed into CRM. Fuzzy logic went from an idea to a critical component of so many machines (your camera's autofocus, for instance). Machine learning has gone from the curio it was in the late 80s to spawning the new discipline of Data Science. Knowledge Management emerged as a new corporate speciality. Along the way, many milestones that had been held up as unachievable were surmounted. IBM's Deep Blue beat Kasparov at chess, and NLP became so commonplace it's part of ordinary household devices (e.g. Alexa). Spam was defeated by Bayes' theorem. Vision was cracked, and object recognition is now largely solved. Facial recognition is so advanced we worry about state surveillance in oppressive regimes. I worked in Game AI for many years because it was easy to attract talented students. One by one, games were "solved" by AI: checkers, chess, backgammon, bridge, poker, StarCraft and, finally, Go, the most complex of them all, by deep learning. Now, Game AI researchers develop AIs that are fun and challenging to play against or alongside. It's a given that the computer can beat anyone.
I saw recommender systems go from just an idea in the mid-90s ("Hey, we could recommend TV shows to you based on what you and your friends liked watching in the past") to a pervasive technology that recommends everything from news stories to pet food. Optimisation algorithms in all sorts of applications make modern commerce efficient, from logistics to human resources. Finally, ChatGPT has smashed the Turing Test, and generative AI has made society at large wake up to AI's potential for both good and harm. A sign of AI's maturity as a discipline is the emergence of eXplainable AI (XAI) as a thriving research area. It is no longer enough for an AI merely to solve a problem or offer a decision; it must also explain how that solution was reached.
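That recommender idea is simple enough to sketch in a few lines of Python. Here's a toy illustration of user-based collaborative filtering in the spirit of that mid-90s pitch; the users, shows and function names are invented purely for the example:

from collections import Counter

# Each (made-up) user maps to the set of TV shows they liked.
liked = {
    "alice": {"Doctor Who", "The X-Files", "Star Trek"},
    "bob":   {"Doctor Who", "Star Trek", "Red Dwarf"},
    "carol": {"The X-Files", "Twin Peaks"},
}

def recommend(user, k=2):
    """Suggest unseen shows, weighted by how much this user's tastes overlap with others'."""
    seen = liked[user]
    scores = Counter()
    for other, shows in liked.items():
        if other == user:
            continue
        overlap = len(seen & shows)   # similarity: shows both users liked
        for show in shows - seen:     # only recommend shows the user hasn't seen
            scores[show] += overlap
    return [show for show, _ in scores.most_common(k)]

print(recommend("alice"))  # ['Red Dwarf', 'Twin Peaks']

Crude as it is, that is essentially the seed that grew into today's recommendation engines.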
Looking back, AI researchers as a community have met every challenge presented to them. That's quite an achievement. Now only AI's grand vision remains: the creation of a conscious, self-aware superintelligence. Given AI's track record, I'm sure even that goal is within our reach, perhaps sooner than we expect.
Would I have liked more processing power in 1985 than I had back then? Yes, but then again, the constraints we worked under made us inventive. Researchers today likewise wish they had more compute for ever larger models. On balance, it's been an enriching journey I wouldn't have missed for the world, and it's not over yet. Deep Learning and Large Language Models have just opened up a whole new range of opportunities for AI. The future is still exciting.
#artificialintelligence #AI