What happens when we can’t understand how AI works?
But what about an artificial intelligence that no one understands? One that was designed by a machine and is essentially alien?
Ready for that?
Will we lose control of artificial intelligence?
This is a hot topic right now, highlighted recently by articles in Backchannel and MIT Technology Review.
It's a reasonable fear, too. During an interview at the AT&T Futurecast, Tim Chang of the Mayfield Fund said he looks specifically for ethical founders at AI companies, and he broke with Silicon Valley's usual wariness of regulation. Why? We need to know what's in the black box, he said, and it's too important to simply trust to the market.
This subject can go very broad very quickly – wrapped up in ecstasy about the Singularity or in despair about AI taking over the world (something that Wired co-founder Kevin Kelly dismantles reassuringly, also in Backchannel).
So let's keep it narrow. Let's assume that AI will stay under our control and be confined to, as Kelly says, "hundreds of extra-human new species of thinking, most different from humans, none that will be general purpose, and none that will be an instant god solving major problems in a flash."
But even if one AI system is confined to, say, diagnosing the common cold, what do we do when we can't look inside the black box and understand why it's right (or wrong)? What happens when AI is as inscrutable as the heavens or the weather were to early humans? What if we're creating unknowable gods of, say, climate control in office buildings?
Perhaps this is a good way to think about AI. Are we ready to create a god of networks?
Learning to trust our AI
Assuming this makes many people uncomfortable, what do we do? Do we give AI consciousness so it can explain itself properly? Do we legislate a "right to explanation," meaning an AI system must be able to explain its decisions, as the European Union will in 2018?
Maybe the answer is a bit of everything. And maybe the answer is also about time and trust.
I talked to two smart people at Ericsson about this. Geoff Hollingworth – a fellow Networked Society blogger – used the analogy of a calculator.
People thought for a long time that calculators would ruin our ability to do math. And while they may have eroded some basic arithmetic skills, they've also allowed more people to do higher math.
"It's a dependency," he says. "We don't know how it works. We have a basic premise that something is true only if proven true. When that breaks [with AI], do you still trust the truth?"
Geoff said that the core issue for him is governance. You have to be able to trust that the data that goes into the system and comes out is not compromised. If we know the data that the AI receives is the right data, we "have to get used to trusting the answer."
If you want to read more from Geoff, check out the paper he just wrote with cloud pioneer Jason Hoffman, Head of Product Area Cloud Systems, on Future Digital Infrastructure.
Building artificial intelligence in future networks
Ericsson's Chief Innovation Office is taking a laser focus on AI in terms of telecom networks. I talked with Manoj Prasanna Kumar, Head of Data Science, who pointed me to an article in VentureBeat by Diomedes Kastanis, Head of Technology and Innovation for the innovation office. In the article, Diomedes lays out the four ways that AI will lead to self-healing networks. (Diomedes will be speaking at an AI panel on Monday sponsored by Chetan Sharma at the Google Launchpad office in San Francisco. See below for info on catching the live stream of the event).
In the Chief Innovation Office, people like Manoj and Diomedes are focusing particularly on network management and performance tuning, one of those detailed AI use cases that Kelly talked about in his article quoted above.
Working with the stages described in the VentureBeat article, Manoj said we are in Stage 2 (predictive networks) at the moment, and this year we will be working on enabling the transformation from Stage 2 to Stage 3 (prescriptive networks) using AI. It will take two to five years for our customers to trust that AI can enable completely autonomous Stage 4 (self-healing) networks.
He acknowledged that this pace of advancement could make people nervous. Can we really trust AI to run the network that will power the entire connected world?
For now, people are still deeply involved in validating any actions and alerts. And they'll stay involved going forward. But, eventually, we'll have to start to trust the machines, he says. Because only then can we reap the full benefits of AI.
"We can't wait for people to act [to fix problems or maximize performance]," he says. "How flexible we can be with our trust depends on how much we can validate. But the AI needs to prove itself first. Let's say it is right 70 percent at the start. Over time, AI will learn from feedback. Once it is stable at 99 percent, we will trust the system more."
So maybe that's the answer to making AI like the now harmless calculator. We keep checks in place for as long as we need them. And then, when the machines have proven their trustworthiness, we let them go.
Information on AI panel
On Monday, May 8 at 6pm PT, tune into the livestream of "Artificial Intelligence: Underpinnings of a Disruptive Wave," a panel at Google Launchpad that will be attended by over 500 people. Diomedes Kastanis, Head of Technology and Innovation for the Ericsson Innovation Office, along with other industry leaders from Google and Optimizing Mind, will discuss how AI and ML techniques might be applied to real-life scenarios like IoT, mobile devices, personalized services, and more. Kastanis and the panel will also tackle pressing ethical questions around personalization and the use of AI, separate the hype from reality, and give viewers things to think about for their own businesses.
At the Ericsson Blog, we provide insight to make complex ideas on technology, innovation and business simple.