This is the second part of what are, in effect, our notes on last week's slew of AI conferences. Here we focus on 'Is AI An Existential Threat to Humanity?' at The Southbank Centre in London.
The session featured a US university professor specializing in computer science, a University of Cambridge academic focusing on the ethics of AI, a specialist on social media, and an artist who uses AI as part of her creative process.
So ‘Is AI An Existential Threat to Humanity’? The short answer to the question posed was YES: AI is an existential threat unless we quickly move to put in place effective controls that are sensitive to the needs of all parties.
Whilst we might have given the answer away too early in this post … do read on for the ‘so what’!
The panel discussed the challenges of putting effective controls in place, which can be summarized as follows:
One:
Controls need to recognize that AI is both threatening and liberating: it opens up many opportunities that should be encouraged, so regulation must not kill off AI's potential with excessive bureaucracy. AI is scary, threatening and potentially dangerous, but on the other hand it is full of joy, fun and opportunities to liberate people - it is a great leveler, and a way for all of us to augment our creativity.
Two:
AI is challenging to regulate because we don't really know how it works - we don't know precisely what the algorithms inside the black box are doing.
Because AI has evolved in this opaque way, the starting point for building in controls is very different from how monitoring systems were developed to control (say) nuclear energy production or automated aircraft systems, which were based on a clearer understanding of how the technology worked. To manage AI and work with it creatively, we need to be able to get under the hood and understand it (at least to an extent).
Three:
The timescales we are facing are demanding. Some aspects of AI move rather slowly in the background and, if not controlled, lead to the gradual erosion of what humanity wants. This is the T.S. Eliot point: 'This is the way the world ends... not with a bang but a whimper'. But in other scenarios AI can transform key aspects of our lives overnight, and we suddenly lose control. We need to move quickly on controls and boundaries, as the world is going to look totally different in five years.
Four:
We know that controlling AI will be a political and logistical nightmare, and we already have illustrations of our failure to act quickly enough to control it: in the US (apparently) AI is regularly being used by criminals to plan the optimum jailbreak strategy. We have also seen the implications of our failure to fully control social media over the last decade, which has allowed the medium to be used to threaten democracy itself, with interference in election processes and so on. We don't have many good examples of best practice in controlling these things.
Five:
There are the obvious discussions about what AI means for people's jobs, but there are wider-ranging threats to the environment, the management of energy resources, human rights and a host of other areas. We haven't fully mapped out the threats that AI could pose.
Six:
There is an inequality in the discussions about AI, ranging from 'experts' who know a lot to a whole cohort of society with little education or knowledge about AI and what it means for them. Stakeholder knowledge varies widely - from very little to a lot - and all of us have a role in the dialogue.
Seven:
There was no clear consensus on the strategic pathways for controlling AI, and a number of perspectives were expressed in the debate.
One view was that we need not worry about the power of Silicon Valley: ultimately governments are in control, since they levy taxes, run the military and can dominate, so AI providers will have to toe the line (self-regulate). However, the point was made that the big tech giants, through their advancing AI knowledge, are likely to become even more powerful (and wealthier) and have the potential to surpass the state.
Another view was that we need hardball red lines in place to control AI providers. Providers would have to declare their specific objectives and would only be allowed to proceed with an initiative if they could demonstrate it is human compatible, i.e. good for human existence and compatible with social justice. If they fail these criteria, they should be shut down and not allowed to develop that piece of AI technology - for example, if there is a suspicion that the technology could be used for harm or terrorism, it is shut down immediately. However, this assumes that the state (any state) is not itself a bad actor, encouraging harmful behavior when it advances its own agenda over another's.
A third view was that this hardball red-line approach to power and control is too heavy-handed, impractical and never going to work. What is needed instead is a collaborative approach, whereby AI providers work with regulators to create an AI/human symbiotic relationship - one that allows AI- and human-generated solutions to flourish within protective guidelines and guardrails.
We have to find consensus on the path to follow before we can actually start unravelling the Gordian knot of AI regulation.
In summary, the panelists agreed that we need a regime of controls and tight regulation that makes operating AI similar to operating as an accountant, lawyer or medical practitioner, with the fundamental guiding principle being that AI must be human compatible and promote human justice!
There was a hint of 'Good luck with that' about this!