Taking artificial intelligence to the ethical edge at Futurecast
Tim Chang of the Mayfield Fund made this reference at the most recent AT&T Futurecast, held at the AT&T Foundry earlier this month. Black Mirror is the British science fiction television show that delves into the potentially dark sides of technology; "white mirror" would signify the opposite, a more hopeful scenario.
In a wide-ranging talk with host Andrew Keen and guests at the event, Chang repeatedly came back to the idea of AI identifying the "edge cases" in industries, the arts and business, even raising the question of whether we'll implode as a species or go up a rung on the evolutionary ladder (a question my colleague Christine Luby examined recently).
AI and edge cases
Humans can do pattern detection, and we’re great at narrative. But AI can see beyond our limited, if creative, horizons to the edge. It can see all stories and outcomes that are possible in a way that we can’t.
"What if it's like the Cambrian explosion?" Chang asked. "We can just test it on all sorts of stuff. But when tech is democratized to lots of little experiments, you will be pleasantly surprised … Human beings wield machines to create outcomes. AI can come up with edge cases we didn't look for."
Chang didn’t just stay out on the edge. He talked very practically about what machine learning and AI will do to industries in the immediate future. He said that, in essence, the business plans of the next 10-20 years would boil down to taking market X and workflow Y, adding machine learning and getting new outcome Z.
But his focus was clearly on the next level, where we will need thoughtfulness about how to design and regulate AI.
Why we need mindfulness in artificial intelligence
Chang is known for considering the ethics of AI when thinking about investments, and he made a point of talking about how his firm considers the intentions of startup founders:
"We think very carefully about what founders' motivations are around user data. Do they have a strategy, or are they on the slippery side, using hacks to resell (data)? I'm a big fan of radical transparency. Then you make money on irresistible convenience."
He also challenged the dominant Silicon Valley culture of "break stuff first, get approval later." Is it right to run a business like that when you're dealing with people's data, the "truest selfie there is"? Or is it time to approach regulators and start developing a data bill of rights?
"For me, it becomes very much a black mirror, white mirror question," he said. "If my life is impacted by how AI scores me, I have a right to know how that black box works. Or my data needs to expire. Regulators are so far behind, they don't know what questions to ask."
There was also a lot of good discussion about artificial intelligence talent and the lack thereof. But looking into a future where many of today’s jobs have been eliminated, Tim predicted not just a hot market for AI experts but a real need for people with minors in philosophy, poetry and the arts.
Because we need to be grounded in human values to examine the edge cases that AI unearths, he said. We need thoughtful people to create the rules, norms and regulations that are going to determine the value systems of the AI.
AI and entertainment
Howard Cooperstein works with the future of TV at Ericsson. He talked at Futurecast about how AI – particularly through Amazon Alexa – was pushing the TV interface to a new level, using the example of watching a basketball game.
"We can divide the world of intelligent agents into before and after Alexa," he told me. "You can use Alexa to replace the remote control but other things too – look at historical data, check stats, show highlights. That's not something you can do with a remote control. You need a voice interface where you can say things flexibly."
But as cool as this is, Howard says the future of AI is basically about reading our minds:
"Right now people spend 25 minutes picking a movie, checking across all apps, searching 'find me comedy movies.' What if AI could simulate your choice and your spouse's choice and give you three options? Current algorithms are not the same as really understanding me: what time of day it is, who's in the room, whether they're in a good mood."
The AI could know what kind of traffic there was on the commute home. It could tell if someone was less interested because they’re on an iPad. It could know things on the edge of your day that you never considered.
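To make Howard's scenario concrete, here is a minimal sketch of what such a context-aware recommender might look like. Everything here is illustrative and assumed, not anything Ericsson described: the signal names (`mood`, `commute_stress`), the weights, and the scoring rule are all hypothetical stand-ins for whatever a real system would learn from data.

```python
# Hypothetical sketch: score each candidate movie for everyone in the
# room using contextual signals (time of day, mood, commute stress),
# then surface the top three options. All fields and weights below are
# illustrative assumptions, not a real recommendation algorithm.

from dataclasses import dataclass

@dataclass
class Context:
    hour: int               # time of day, 0-23
    mood: float             # 0.0 (bad day) .. 1.0 (great day)
    commute_stress: float   # 0.0 .. 1.0, e.g. inferred from traffic data

@dataclass
class Movie:
    title: str
    genre: str
    intensity: float        # 0.0 (light watching) .. 1.0 (demanding)

def score(movie: Movie, viewers: list[Context]) -> float:
    """Average fit across everyone in the room: stressed or late-night
    viewers tolerate less intensity, relaxed viewers tolerate more."""
    total = 0.0
    for c in viewers:
        tolerance = c.mood * (1.0 - c.commute_stress)
        if c.hour >= 21:            # wind down later in the evening
            tolerance *= 0.5
        total += 1.0 - abs(movie.intensity - tolerance)
    return total / len(viewers)

def top_three(catalog: list[Movie], viewers: list[Context]) -> list[Movie]:
    """Return the three best-fitting movies for this room, right now."""
    return sorted(catalog, key=lambda m: score(m, viewers), reverse=True)[:3]
```

The point of the sketch is the shape of the problem, not the math: the ranking depends on who is in the room and what kind of day they had, which is exactly the data collection the black mirror version worries about.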
Black mirror, white mirror
This seems like the perfect example of the thoughtfulness we’ll need in AI. Because, yeah, Howard’s scenario sounds incredible. Sit down after a hard day, no arguing, eat some good food, watch the best possible movie.
That’s the white mirror version. In the black mirror version, you start to notice all the cameras involved, all the data collected. You start to wonder if someone is steering your choice. Who has control and who makes money in that scenario?
Since I want the white mirror version, I hope that voices like Tim's are heard: that we as a society think carefully about how we implement AI on the edge cases, that we put the right rules in place, and that we give our business to companies that deserve it. Then I can sit down and watch the perfect movie with my family, at peace.