‘Pipe Dreams’ … the future of work
Last week BBC Newsnight ran an interview with Geoffrey Hinton.
https://x.com/BBCNewsnight/status/1791587541721780400
His work on multi-layer neural networks from the mid-1980s onwards earned him the Turing Award and the sobriquet ‘the Godfather of AI’. After ten years at Google, he left in 2023 to give himself the space to question and challenge big tech’s approach to AI.
During the short discussion, Prof. Hinton draws a distinction between AI which exceeds human intelligence (not an ‘if’, “it’s just a matter of when”) AND AI which doesn’t need to be smarter than us to be hugely disruptive and/or dangerous.
Artificial General Intelligence, i.e. AI which exceeds human intelligence, has always been the ‘holy grail’ of AI developers, but the world is split on the likelihood of this actually coming about – on whether it is even possible. Nir Eisikovits, professor of philosophy and director of the applied ethics centre at the University of Massachusetts, argues that people’s worries about a sentient AI are groundless: ChatGPT and similar AI technologies are never going to be anything more than sophisticated stochastic language machines, parroting back what we humans tell them – nothing more, nothing less.
But as a counterpoint, and closer to Prof. Hinton’s point of view, Shane Legg, co-founder of Google’s DeepMind AI lab, echoing Ray Kurzweil’s ‘The Age of Spiritual Machines’, recently reiterated his belief that there is a 50% chance of AI achieving artificial general intelligence by 2028 – a claim he originally made in 2011.
Let’s tackle the existential threat first … as it stands today, Prof. Hinton believes AI is “a not very good expert at almost everything”, BUT there is a non-zero chance of some not-too-distant future manifestation of AI “going rogue and trying to take over”.
“My guess is in between 5 and 20 years from now there's a probability of about half that we'll have to confront the problem of them trying to take over” (Prof. Hinton)
Bend your mind around Nick Bostrom’s popular (2003) AI thought experiment – the Paperclip Maximiser – in which a seemingly harmless goal of maximising paperclip production leads to the end of all life on earth. Bostrom was pointing out the importance of incorporating ethics into AI, and illustrating the broader problem of managing powerful systems that lack human values. The scenario describes an advanced AI tasked with manufacturing paperclips. If it were not programmed to value human life, then, given enough sway over its environment, it would try to turn all matter in the universe, including all life on earth, into paperclips or into machines that manufacture more paperclips.
“Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans” Nick Bostrom
Prof. Hinton touched on the question of regulation and self-regulation. He made the point that it is good that a discussion has started, BUT most regulations have no teeth, and competition (between countries and companies) is going to drive rapid development without enough focus on the implications.
We have written before that 'move fast and break things' has become a widespread mantra in the technology industry, arguing the case for technological development at full speed, without impediment. Central to effective accelerationism is the belief that propelling technological progress at any cost is the only ethically justifiable course of action.
"'Move fast and break things' - if you never break anything, you're probably not moving fast enough […] a lot of times people are just, like, too careful. I think it's more useful to, like make things happen, and then, like, apologize later, than it is to make sure you dot all your i's now, and then, like just not get stuff done." Mark Zuckerberg.
Early Facebook investor Roger McNamee observed that “It wasn’t that they intended to do harm so much as they were unconcerned about the possibility that harm would result”.
While we do not for a moment dismiss the existential issues (too many people have called it a non-zero scenario for us to do that), let’s put aside the danger that AI will turn us all into paperclips … whether or not we achieve AGI, the journey towards it will be hugely impactful and will deliver new capabilities that once again redefine the human-technology relationship. AI may develop many of the cognitive abilities associated with the human mind and still fall short of AGI. It may walk like a duck and quack like a duck without actually being a duck – but if it is that ‘duck-like’, in reality it makes no difference.
So, AGI might be a bit of a red herring! AI doesn’t need the intelligence of the smartest humans to completely reshape whole sectors of the economy. You don’t need to be a savant to perform well in your job; most of us are ‘smart enough’. Likewise, artificial intelligence that is merely ‘smart enough’ could displace millions of jobs well before we ever reach AGI. These new models may well gain the capabilities to do even more knowledge work, and displace even more knowledge workers, than they currently do. It is this more prosaic threat that concerns us, and Prof. Hinton seems to share this view.
“I'm very worried about AI taking over lots of mundane jobs. That should be a good thing. It's going to lead to a big increase in productivity which leads to a big increase in wealth and if that wealth was equally distributed that would be great. But it's not going to be. In the systems we live in that wealth is going to go to the rich and not to the people whose jobs get lost and that's going to be very bad for society I believe. It's going to increase the gap between rich and poor which increases the chances of right-wing populists getting elected” (Prof. Hinton)
This is not without precedent. In their book ‘Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity’, Acemoglu and Johnson argue that American workers used to have access to good jobs which, in addition to paying decent wages, provided job security and career-building opportunities. But those jobs have largely disappeared for workers without a college degree, due in large part to earlier waves of technological advance – automation and computerisation. People without a college education in the US have seen their real earnings decline since 1980. They became the post-war dispossessed, and it has not been without consequences.
In 2016 Bill Clinton suggested that ‘Make America Great Again’, used as a campaign rallying cry, was a message that promised the dispossessed to "give you an economy you had 50 years ago, and ... move you back up on the social totem pole and other people down". And it is not restricted to the US: there is a rise in populism/nationalism in Europe, South America and beyond that taps into the same dissatisfaction. And let’s not talk about Brexit!
“There's no economic law that says that when technology advances everybody necessarily benefits; some people, even a majority of people, could be made worse off.” Erik Brynjolfsson (Professor and Senior Fellow at the Stanford Institute for Human-Centered AI)
In his interview Prof. Hinton suggests Universal Basic Income (UBI) could be part of the solution, but that it doesn’t tackle the loss of identity that would come with the loss of employment opportunities.
“I certainly believe in universal basic income [but] I don't think that's enough because a lot of people get their self-respect from the job they do and if you put everybody on universal basic income that solves the problem of them starving and not being able to pay the rent but it doesn't solve the self-respect problem” Prof. Hinton
Much has been said about the role of UBI … but we respectfully suggest you look at the math and decide what is possible. FYI, the UK’s Coronavirus Job Retention Scheme ran for about 19 months; 11.7 million employee jobs were furloughed for some period during that time, at a total scheme cost of £70 billion. We couldn’t afford to replicate this for another 6 months … let alone make it a permanent fixture. There was a recent UBI pilot in two areas of the UK in which some people were given £1,600 a month to live on. Even assuming this were paid only to households (not to individuals), making it truly universal across the UK’s 28 million-plus households would cost around half a trillion pounds a year. And of course you have a double whammy with UBI: with fewer people in work, you have a smaller tax base with which to cover the cost. Somehow you have to tax the AI, and the corporations that apply the AI – and we know how easy that can be. As a parallel, Bill Gates recently proposed that robots should be taxed like human workers, based on the value of their work. According to Gates, “it is the first time in history that technology may be eliminating jobs faster than it creates new ones. If so, then our economic and social systems will be remade potentially in a disruptive and destabilizing way”.[1]
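For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. It simply takes the figures quoted above – the £1,600 monthly pilot rate, roughly 28 million UK households, and the £70 billion / 19-month furlough scheme – as assumptions, not official statistics:

```python
# Back-of-envelope check of the UBI figures quoted in the text.
# Assumptions (taken from the article, not official statistics):
#   - payment of £1,600 per month, paid per household rather than per individual
#   - roughly 28 million UK households
#   - furlough scheme: ~£70 billion over ~19 months

monthly_payment_gbp = 1_600          # UBI pilot rate, per household per month
households = 28_000_000              # approximate number of UK households

annual_cost = monthly_payment_gbp * 12 * households
print(f"Universal household UBI: £{annual_cost / 1e9:.0f} billion per year")  # ≈ £538bn

furlough_total_gbp = 70_000_000_000  # total cost of the Coronavirus Job Retention Scheme
furlough_months = 19
furlough_annual = furlough_total_gbp / furlough_months * 12
print(f"Furlough scheme run-rate: £{furlough_annual / 1e9:.0f} billion per year")  # ≈ £44bn
```

On those assumptions the annual bill comes out at roughly £538 billion – more than ten times the furlough scheme’s annual run-rate – which is the ‘look at the math’ point.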
While the path forward is not (yet) clear, we do know that doing nothing is certainly not an option. So, what does the future hold for young people entering the workplace – where will the jobs be?
“I don't know because it's clear that a lot of mid-level intellectual jobs are going to disappear and if you ask me which jobs are safe, my best bet is plumbing because these things aren't very good at physical manipulation yet - that will probably be the last thing they're very good at so I think plumbing is safe for quite a long time” Prof. Hinton
This is not a new idea. In November 1954, Albert Einstein wrote a letter to The Reporter magazine in which he declared that, were he a young man again, he would not try to become a scientist: “I would rather choose to be a plumber in the hope to find that modest degree of independence still available under present circumstances.”
[1] https://www.futuresplatform.com/blog/robot-tax