Always on my mind
What does it really mean for computers to be smarter than humans? We explore the singularity.
“I didn’t ask to be made: no one consulted me or considered my feelings in the matter. I don’t think it even occurred to them that I might have feelings. After I was made, I was left in a dark room for six months … and me with this terrible pain in all the diodes down my left side. I called for succour in my loneliness, but did anyone come? Did they hell.”
Poor Marvin. Being 50,000 times more intelligent than the average human is, as Douglas Adams (St John’s 1971) points out in The Hitchhiker’s Guide to the Galaxy, a depressing business for this Paranoid Android.
The creation of machines that can think for themselves – and don’t necessarily have the future of the human race in mind – isn’t great from a human perspective either, and has long proven a rich seam for science fiction.
And as the singularity has moved into the realm of serious debate, academics have started to ask broader questions about what it might really mean. So, not just ‘Are machines about to take over the Earth?’ (or the perennial ‘Will AI take my job?’), but also ‘What is intelligence?’, ‘What is it for?’ and ‘What ethical framework – if any – is required to underpin AI research?’
To start with intelligence, Dr Adrian Weller, Programme Director for AI at the Alan Turing Institute, says that our concept of intelligence tends to be rather egocentric. “Humans like to think of intelligence on a scale, with ourselves at the top,” he says. “But, actually, there are different kinds of intelligence: machines are already better at some things – arithmetic, for instance, or chess, as we saw when the Deep Blue computer program beat Garry Kasparov in 1997. But they are very poor at others – such as general knowledge and common sense. Take an autonomous vehicle. You can ask it to go from A to B as fast as it can, but how does it know you don’t want it to just accelerate and smash through lights to get there? And how does it differentiate trees from people, or cope with bad weather? We need somehow to input all those parameters.”
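Weller’s point about unstated parameters can be made concrete. The sketch below – route shapes, costs and penalty weights are all hypothetical, not anything Weller describes – shows a planner whose objective is literally ‘as fast as possible’: unless every human expectation is written in as an explicit penalty, the cheapest route is the reckless one.

```python
# A sketch of an under-specified objective: "go from A to B as fast
# as you can". Route, the numbers and the weights are all hypothetical.

from dataclasses import dataclass

@dataclass
class Route:
    travel_time_s: float   # seconds from A to B
    red_lights_run: int    # traffic violations along the way
    near_misses: int       # pedestrians (or trees) the route brushes past

def naive_cost(route: Route) -> float:
    # "As fast as you can" - and nothing else.
    return route.travel_time_s

def constrained_cost(route: Route) -> float:
    # Every unstated human expectation must be added by hand as a
    # penalty term; the weights here are purely illustrative.
    return (route.travel_time_s
            + 10_000 * route.red_lights_run
            + 50_000 * route.near_misses)

routes = [
    Route(travel_time_s=300, red_lights_run=2, near_misses=1),  # fast, reckless
    Route(travel_time_s=420, red_lights_run=0, near_misses=0),  # slower, safe
]

print(min(routes, key=naive_cost))        # the reckless route wins
print(min(routes, key=constrained_cost))  # the safe route wins
```

The weights themselves are the easy bit; the hard part, as Weller says, is enumerating everything that needs one.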
All of which means intelligence doesn’t exist in a vacuum. Dr Stephen Cave, Executive Director of the Leverhulme Centre for the Future of Intelligence (CFI), calls this the “hidden human labour” behind AI. “Take convincing text written by a machine,” he says. “Thousands of hours of human work have gone into training the AI. But that work is hidden, giving AI an element of the parlour trick. We tend to project agency on to tools and machines. We anthropomorphise them.”
There’s another problem: intelligence isn’t particularly linear. “With every tech advance, we gain both power and dependence. Take Google Maps. It’s empowering to always be able to find our way, when our ancestors had to look at which side of the tree the moss was on. But now we have forgotten how to read the moss,” Cave says. The singularity is supposed to be the moment when computers become ‘more intelligent’ than us, but they are already better at many things, he says – because we’ve been building them to be so. “Since the pocket calculator, we have been building computers to be faster at certain things [and to do them more cheaply]. But does that mean they’re more intelligent?”
Another challenge is that the singularity is remarkably culturally specific, says Dr Kanta Dihal, Research Fellow at CFI. “In Japan, which struggles with its ageing society and declining working-age population, there is a tradition of representing AI as a helper or carer. In Singapore, the utopian vision of technology is government driven. In the Middle East and North Africa, technology is perceived as coming from outside, with no real sense of control,” she says.
This cultural specificity – both between and within cultures – can have unexpected side-effects. “In the west, AI is imagined as humanoid, like the Terminator. But, actually, what we’re developing are weapons of mass destruction. Drones look like toys for teenagers, but they track and shoot people,” she points out. “Similarly, white-collar workers worry that they might lose their jobs to a robot, when automation has already cost hundreds of thousands of blue-collar jobs.” Worrying about the singularity as the moment computers out-think us is, in other words, a luxury of those who see themselves at the top of the intelligence hierarchy.
Which brings us to AI’s diversity problem. “The developers of AI are extremely homogeneous,” says Dihal, “so they are unaware of, ignore or minimise the risks to groups they are not part of. We see so many errors being made with huge consequences for those who don’t exist in datasets. We’ve seen facial recognition not recognising darker skin, or misgendering black people. We’ve seen friends in East Asia who can unlock each other’s phones using facial recognition.”
Fairness, like intelligence, is tricky, says Weller. “Much of the technical community has focused on statistical notions of fairness. But fairness can be more complex than statistical parity. For instance, should you use different prediction algorithms for different groups? Notions of equality between groups can increase individual unfairness. We’re starting to see algorithms being used in criminal justice – to help judges decide how long to lock people up for, for example. But if we use historical data about the racial background of people who’ve been arrested, we write that bias into the algorithms. And we also need transparency – to be able to see into the legal process and enable meaningful challenge.”
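To make ‘statistical parity’ concrete: one of the simplest statistical notions of fairness, often called demographic parity, just compares how often a model makes a positive prediction for each group. A minimal sketch, with toy data and hypothetical group labels:

```python
# Demographic parity: compare positive-prediction rates across groups.
# Data, labels and the "high risk" framing are all hypothetical.

from collections import defaultdict

def positive_rate_by_group(predictions, groups):
    """Fraction of positive (1) predictions within each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# 1 = "flagged high risk" by some hypothetical model.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

rates = positive_rate_by_group(preds, groups)
parity_gap = max(rates.values()) - min(rates.values())
print(rates, parity_gap)  # {'a': 0.75, 'b': 0.25} 0.5
```

Driving this gap to zero is one notion of group fairness – and, exactly as Weller warns, enforcing it can mean treating otherwise-identical individuals from different groups differently.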
Often, it’s not whether or not we can trust the machine, he says, but whether we have built in the right measures of trustworthiness. So, should preparing for the singularity be focused on developing ethical frameworks, rather than on a robot takeover? Jess Whittlestone, Senior Research Associate at the Centre for the Study of Existential Risk and CFI, thinks so. “The big challenges we face – like a pandemic or climate change – really need global collaboration and action. AI could help: machine learning can filter lots of information, pull out what’s relevant and make sense of the noise. It can help detect fake news, and it helped track the spread of Covid-19 during the first wave. But what we need is more funding and attention concentrated on researching how we can mitigate the risks of AI, rather than funnelling money towards helping tech companies make better adverts.”
Indeed, crisis pushes us to deploy AI before it is ready – and certainly before its ethical implications have been considered. “So, we need to establish a system now that will incorporate risk analysis, ensure the effectiveness of the AI and determine its effects on different communities,” she says. “AI policy and ethics is a relatively new field, but it needs to move fast to keep up.”
Next year, Cambridge will offer a Master’s in AI Ethics and Society for the first time, and this interdisciplinary approach is crucial, says Whittlestone. “We need to make sure that systems developers work with people who understand pandemics. AI can solve optimisation problems around hospital resource allocation, for example, but it needs to be co-designed by experts in systems, health infrastructure and ethics.”
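The kind of allocation problem Whittlestone has in mind can be surprisingly small to state. A minimal sketch – the hospitals, demand figures and the choice of scipy’s off-the-shelf linear-programming solver are illustrative assumptions, not anything from the article:

```python
# Share a fixed stock of ventilators across hospitals to minimise
# unmet demand. All figures are hypothetical.

from scipy.optimize import linprog

demand = [30, 50, 20]   # forecast need at hospitals A, B and C
stock = 80              # ventilators available in total

# Variables x_i = ventilators sent to hospital i. Minimising unmet demand
# is the same as maximising sum(x), i.e. minimising -sum(x), subject to
# sum(x) <= stock and 0 <= x_i <= demand_i.
result = linprog(
    c=[-1, -1, -1],                   # maximise the total allocated
    A_ub=[[1, 1, 1]], b_ub=[stock],   # cannot hand out more than we have
    bounds=[(0, d) for d in demand],  # no hospital gets more than it needs
)
print(result.x)  # one optimum, e.g. [30. 50. 0.]: 80 allocated, 20 unmet
```

The mathematics is the easy part; everything contentious lives in the inputs – whose demand forecasts, which constraints, what counts as unmet need – which is precisely why co-design with health and ethics experts matters.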
“Medicine and law had to develop professional ethics, and data science will have to do the same,” says Cave. “Data scientists see themselves as meritocratic, having risen by their brilliance. The geeks in the basement are now the masters of the universe, but they don’t see themselves as ethical actors. They don’t think they have responsibility for social justice.”
For Whittlestone, if we can solve these issues by increasing diversity and working together, then AI, and even the singularity, holds enormous potential for good. “An AI system might surpass us in certain tasks, such as analysing huge amounts of data and helping us control complex systems like energy or water infrastructure. If we can combine the adaptability of humans and the precision of machines, we could solve many problems,” she says. “For example, the climate is such a complex system that it is difficult for humans to understand the effects of different interventions, and so we feel overwhelmed. But if we can build better models, which can make better recommendations, then computers could help us overcome that inertia.” And that’s something that even Marvin the Paranoid Android would probably support.