Magic Intelligence in the Sky
This week the FT reported on an interview with Sam Altman, CEO of OpenAI, the developers of ChatGPT. There were a few observations in it which we thought were interesting and worthy of further consideration.
Can machines think?
We have read a few pieces recently that talked to ‘what comes next’ in AI. The FT mined a similar vein. Altman stated that OpenAI’s single product is ‘magic intelligence in the sky’ (we are not sure if this idea is fun or creepy!) and the business is pushing ahead with its vision to create Artificial General Intelligence – AI which can learn to accomplish any intellectual task that human beings can perform and/or surpass human capabilities in the majority of economically valuable tasks. OpenAI articulates AGI as being “generally smarter than humans”.
AGI has always been the ‘holy grail’ of AI developers, but the world is split on the likelihood of this actually coming about. On whether it is even possible. Nir Eisikovits, professor of philosophy and director of the Applied Ethics Center at the University of Massachusetts, argues that people’s worries about a sentient AI are groundless. The argument is that ChatGPT and similar AI technologies are never going to be anything more than sophisticated stochastic language machines parroting back what we humans tell them – nothing more, nothing less. But as a counterpoint, Shane Legg, co-founder of Google DeepMind’s AI lab … reflecting Ray Kurzweil’s ‘The Age of Spiritual Machines’ … recently reiterated his belief that there is a 50% chance that AI will achieve artificial general intelligence by 2028 – a claim he originally made in 2011.
Whether or not we achieve AGI, just the journey towards it will be hugely impactful and deliver new capabilities that again redefine the human-technology relationship. AI may develop many of the cognitive abilities associated with the human mind but still not achieve sentience. Even without achieving AGI, the new models may well have capabilities that allow them to do even more knowledge work, and displace even more knowledge workers, than they currently do.
If it walks like a duck and quacks like a duck, it might not be a duck, but it is so ‘duck-like’ that in practice it makes no difference. So AGI might be a bit of a red herring! AI doesn’t need to have the intelligence of the smartest humans to completely reshape whole sectors of the economy. You don’t need to be a savant to perform well in whatever job you have; most of us are ‘smart enough’. Likewise, we don’t need to achieve AGI for artificial intelligence that is ‘just smart enough’ to displace millions of jobs.
Safety First?
Safety first … putting the condition of being safe from harm, injury or loss before other matters! Well, not so much when it comes to developing AI. AI’s evolution is outpacing our ability to consider its implications, and the destination is just not clear. What we do know is that we will experience discontinuous change and that AI will go through hyper-evolution, i.e. its evolutionary trajectory is unlikely to be smooth; there will be big leaps forward.
Sam Altman talked to OpenAI’s vision: make AGI, then figure out how to make it safe, and then figure out the benefits. Many will argue that he has his priorities back to front … figuring out how to make it safe should be number one – but there is already an AI ‘arms race’ being played out by technology businesses and countries … and while this headlong rush toward the next iteration of AI models makes us feel somewhat uneasy, we don’t see it slowing despite the assurances of the great and the good at Bletchley Park.
There was also no mention in the FT’s conversation with Mr. Altman of the downsides and their mitigation – is that for someone else to think about?
“While ChatGPT-5 is likely to be more sophisticated than its predecessors, Altman said it was technically difficult to predict exactly what new capabilities the model might have […] it’s like a fun guessing game for us. We’re trying to get better at it because it’s important from a safety perspective to predict capabilities.”
A fun guessing game?
It is interesting that back in May, AI pioneer Dr. Geoffrey Hinton made headlines by leaving Google so that he would have the freedom to openly express his concerns about the potential dangers of AI. His work laid the foundations for generative AI and has been instrumental in shaping the future of AI more generally. Despite this, he felt that companies were rushing headlong into new AI territory without due care and attention. His concerns revolve around the potential misuse of AI, the erosion of job opportunities, and the long-term risks to humanity. He acknowledges that his life’s work now brings him some regret, but he consoles himself with the thought that if he hadn’t pursued it, someone else would have.
This is not without historical parallel. “Now I am become Death, the destroyer of worlds” – so said J. Robert Oppenheimer. Oppenheimer affirmed that, had he been able to go back, he would have done everything exactly the same, but it’s hard to argue that his work has provided humanity with a net benefit.