How do we ensure that AI benefits humanity?
Artificial intelligence has reached a cultural tipping point. Google is building DeepMind technology into Google Home, and Amazon is doing the same with Echo and Alexa. IBM's Watson AI platform is used in many applications, among them elevator control. In the automotive industry, Waymo took gigantic leaps in autonomous driving in 2017, thanks to the AI functions in its cars.
And these are just the consumer angles. AI will play an equally big, if not greater, role in industries and within systems such as mobile networks. Consider how AI can augment humans as they work on complex tasks using things like virtual or augmented reality. Yet for all its potential, this still means we’re giving up some control to machines, and that must both feel – and actually be – safe.
It then becomes very relevant to ask: How can we as a global society come together to ensure the responsible development of AI? How do we ensure that we don’t mistakenly build Skynet?
I am personally skeptical about some developments in AI, but far from all. For instance, in 2015 I wrote here about how we will need automation for operators and others to build complex systems for IoT.
So I’m on board. But the development in this area has been staggering since I wrote that post. And it’s hard not to be influenced when you read long stories about people like Elon Musk, who are extremely afraid of losing control over a future AI.
It’s like Tim Chang of the Mayfield Fund said at an AT&T Foundry Futurecast event last year in Silicon Valley, referring to the often dystopian science fiction TV show Black Mirror: “For me, it becomes very much a black mirror/white mirror question. If my life is impacted by how AI scores me, I have a right to know how that black box works, or my data needs to expire.”
These are legitimate conversations we need to have.
To stay on the science fiction theme, we need to be thinking about rules for AI, something along the lines of Isaac Asimov’s Three Laws of Robotics. If we are to do this, it becomes incredibly important for industries utilizing AI and connected technologies to take responsibility for their development.
There are concrete efforts being made along these lines. This includes groups like OpenAI, co-founded by none other than that same Elon Musk. The group has examined AI safety issues and the potential malicious uses of AI.
It also includes other industry-funded groups like the Partnership on AI, as well as the government of Canada, the Institute of Business Ethics in the UK, and many other public bodies. So there are a lot of smart people working on the topic. Their challenge? Keeping up with the speed of AI development.
Here at Ericsson, we’ve been putting a lot of resources into machine learning and AI, something we call Machine Intelligence (MI). A lot of our work is driven by the fact that current networks are so complex that while humans can manage them, they cannot optimize them without the help of machines. So future networks must be able to handle errors and faults themselves, either without the need for human intervention at all or by augmenting the ability of humans to make crucial decisions.
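To make the idea of a network handling faults itself a little more concrete, here is a minimal, purely illustrative sketch of one common building block: flagging anomalous readings in a stream of network measurements so that automated remediation (or a human decision) can be triggered. This is not Ericsson's actual MI implementation; the latency figures, the z-score threshold, and the remediation hook are all hypothetical assumptions for illustration.

```python
import statistics

def detect_anomalies(samples, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the series (a simple z-score test)."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [i for i, x in enumerate(samples) if abs(x - mean) / stdev > threshold]

# Hypothetical latency readings (ms) from one network cell,
# with a single fault-induced spike at index 5.
latency_ms = [12.1, 11.8, 12.3, 12.0, 11.9, 85.0, 12.2, 12.1]

for i in detect_anomalies(latency_ms, threshold=2.0):
    # In a real system this would kick off automated remediation
    # or escalate to a human operator with supporting context.
    print(f"sample {i}: {latency_ms[i]} ms -> trigger remediation")
```

Real network optimization uses far richer models than a z-score over one metric, but the loop is the same: observe, detect the deviation, and act without waiting for a person to notice.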
When you consider the amount of data that will run through a 5G network, managing this kind of complex system could be even more challenging than inviting AI to our homes through Alexa or Google Home.
I understand that AI is coming, and that we will see exponential innovation built on top of mobile technology. Ray Kurzweil, Google’s chief AI researcher, has said that by 2029 machines will be more intelligent than humans, and that by 2045 we will reach the singularity, when humans and machines merge. If the singularity happens in 2045, I will be 79 years old. At the very least, healthcare systems will be taking great advantage of AI by then, as will the transportation networks that get me around. I just want my kids to also feel safe in this future world, when the machines might be smarter than us.
Tough questions are starting to be asked and answered, but this needs to keep up with the leaps in the technology. It’s not a question of whether we make use of AI, but of how we best harness its power.
Like this post and the questions it raises? Be sure to check out this post from Rebecka Cedering Ångström, Director of Insights and Concept Creations at Ericsson Research, as she queries the impact of future AI in everyday life.
Want to know more about what Ericsson is doing with machine learning and artificial intelligence? Check out our latest work with Machine Intelligence and see how we are using tomorrow’s technologies to support humans and optimize networks.