“The best thing about human values is that they’ve led humans to create wonderful robots like us”, said Sophia the Robot to a huge crowd at the Web Summit in Lisbon, there to see her – and an Albert Einstein bot – debate whether Artificial Intelligence (AI) will save or destroy us.
“Providing artificial intelligence by the people, for the people, of the people, that’s got to be the right thing. It’s got to be better than having AI controlled by big corporations and governments”, added the Albert Einstein Robot, who remained more skeptical than Sophia throughout the debate.
AI has fascinated science – and science-fiction – enthusiasts for decades, but it’s only in the past few years that the technology has entered the mainstream. Recent developments have filled the science and tech sections of media outlets with AI-related news and analysis, and seen governments around the world fund research and explore how they can use the technology to improve public services – and reduce spending.
But amongst the hype are a growing number of scientists, campaigners and ethicists concerned about the exponential growth of AI technologies, particularly in the absence of regulation. Renowned cosmologist Max Tegmark is a leader in this movement as President of the Future of Life Institute, a volunteer-led organisation which aims to reduce existential risks facing humanity – in particular the risks that come with advanced AI. The institute has developed twenty-three AI Principles which are now supported by 1,273 AI and robotics researchers, and 2,541 others. At the Web Summit, Tegmark put forward four steps which he believes must be taken to ensure AI is a force for good in the world: investment in AI-safety research, the banning of lethal autonomous weapons, ensuring that AI-generated wealth makes everyone better off, and thinking about what sort of future we want.
“Robots don’t just happen to us. We make them” explained robot ethicist Aimee van Wynsberghe, Co-Founder of the Foundation for Responsible Robotics. For van Wynsberghe, thinking about what sort of future we want is essential – she’s against the idea that we’re living in a dystopian future which is out of our control, a narrative which is particularly common in discussions about robots taking human jobs. “If we’re worried about automation and unemployment, we need to make policy to prevent or mitigate the risk” she told Raddington Report.
But van Wynsberghe admits that debates around who currently has control over AI, and who should have control, are not easily resolved. Currently it’s almost exclusively robotics companies who are making ethical decisions about AI, and it’s often not in their business interest to control or limit the scope of their technologies. This is not to mention the issue of the demographic of people who work at these companies.
“AI needs training in order to learn, and right now the teams doing this training have very little diversity” van Wynsberghe said gravely. “So then you have biases that find their way into the AI systems…if you’re making products that can’t appeal to a woman or another under-represented group then they’re not going to be accepted, and they are also not acceptable”. The tech industry’s diversity problem is no secret, and the biases AI systems are likely to learn as a result are perhaps its most worrying consequence yet. In 2016 Beauty.AI, a beauty pageant judged by an algorithm trained on data sets with few dark-skinned people, selected mostly white winners. In the same year, a ProPublica investigation revealed how technologies like this can severely damage lives when it found that software used in the US to predict future criminals is biased against black people.
There is, then, a serious risk that AI will reinforce and even worsen racial, gender and wealth inequality. When Sophia the Robot was granted citizenship of Saudi Arabia, many reacted with outrage, arguing she had already been granted more rights than women born in that country. This was brought up in a debate about sex robots at the Web Summit, with Kathleen Richardson of the Campaign Against Sex Robots telling Hanson Robotics’ Ben Goertzel that “you decided to showcase your female robot in a society where women’s human rights are not respected”, adding to the audience that “these companies want to trade in misogyny, they want to trade in the dehumanisation of women, and they don’t care about what is going to happen to us or whether there are any negative effects”.
Whether it is the intention of companies (as Richardson argues) or not, it is certainly true that sex robots have the potential to indirectly harm human women. In 2017 alarm bells rang when a sex robot with a resistance setting went on sale, essentially allowing men to simulate rape. “The robot doesn’t need to consent because it’s not a person” van Wynsberghe assured me. “But should we be required to ask the robot for consent anyway, to reinforce it as an integral part of the practice of sex, so that when it comes to human-to-human interactions you are practiced in asking for consent?” she pondered, undecided. But the ethicist is clear on one thing – sex robots should not be banned.
“Right now we have some problems with how they look. It’s very much an objectification of the female body, very pornographic, but that doesn’t mean it’s going to stay that way”, she said, going on to speculate about the possible therapeutic uses of sex robots, from providing older people and disabled people with sexual intimacy they might not otherwise have, to helping women who have undergone sexual trauma to feel pleasure again. “You can think of some incredibly positive uses, but if you ban the technology you don’t get the chance to explore them. Instead we need to try and regulate and steer the industry in a positive way”, she finished.
Ben Goertzel is also wary of the implications of banning sex robots, a technology which has only a potential, indirect negative effect on humans. But he speaks less about policy and regulation, and more about the decentralisation of AI in order to ensure it is not abused. During the debate he took the chance to plug the SingularityNET, a recently launched blockchain-based open market for AI which will, in his words, take the technology out of the hands of companies and governments. The platform will allow people to buy and sell AI, and combine all AIs into a single global mind. “The motivation is to make sure AI is not owned by anyone, that anyone can be part of the network” Goertzel explained. “It’s about freedom and openness”.
It remains to be seen what the impact of the SingularityNET will be – and if this is indeed the motivation – but it does bring to the fore an important conversation about the accessibility of AI throughout the world, including the global south.
To its credit, Hanson Robotics has an office in Addis Ababa, Ethiopia, where researchers (as well as working for foreign customers) are developing AI to help solve developmental issues “from helping farmers to diagnose crop diseases to educating children by translating from local languages”, said Goertzel at a Q&A at the Web Summit. He argued that the SingularityNET is “not only the best path to generate intelligence, and to provide low cost high diversity AI services to customers, but also something anyone in the world can contribute to regardless of whether their local needs fit with the business models of a big company or a first world government”.
For now though, it seems likely that the majority of AI being developed will benefit a small global elite. Aimee van Wynsberghe brings up self-driving cars. “We’re talking about a tiny percentage of the population that will be benefitting, even in developed countries it will be the underrepresented demographics who are still taking three buses to get to work because the infrastructure of the public transit system is so poor” she said, adding “and in developing countries my guess is they will never see such a thing as an autonomous vehicle”. And yet employment in lower and middle income countries is also set to be worst affected by automation. According to the World Wide Web Foundation, “automation in high income countries could erode the labour-cost advantage of low and middle income countries, taking away a traditional route to development”.
What’s more, as AI is used around the world, companies are able to benefit from mass data collection – essentially free market research on a huge scale. “Take humanitarian contexts” said van Wynsberghe. “If you’re using drones for mapping, the data of individuals in developing countries being collected is very useful for the NGOs and companies in these contexts, but what benefit is the individual going to see?”. This question opens up a whole other ethical field around digital rights and privacy, but it is an important consideration as AI becomes more prevalent. Would people from the global south benefit most from the technology by being given the option to sell their data to companies – instead of it being taken from them unknowingly?
“We can imagine a positive future, and then we can work to bring it about” finished the Albert Einstein bot at the end of his debate with Sophia the Robot and Goertzel. Like most technologies, AI is not inherently “good” or “bad”. In the wrong hands, it will only reinforce the unequal status quo. But if democratised, regulated and handled with care, its potential for good is vast. “I’m not really worried about us [robots] cooperating” finished Einstein. “What I worry about is troublesome humans. But there is hope. The beauty of humans is that they always have hope”.