Who decides the ethics when it comes to emerging technology?
Students are my favorite audience. They are not easily impressed by technology, but they are quick to challenge and engage when complex questions arise — like ethics in relation to emerging technology.
“What if you could alter the way you experience your physical surroundings? You could use Augmented Reality (AR) to highlight street signs or modify shop windows. Wouldn’t that be cool?”
“Yeah, I guess… I don’t know.”
“But what if you could edit out things you didn’t want to see, such as graffiti or garbage? Or alter the way you perceive people? For instance, making them look like characters from Lord of the Rings.”
“That would be kind of cool, but I honestly don’t care what other people look like.”
“OK, but what if someone could alter how they perceive ALL people? For instance, change the color of their skin to suit them. Or use that same technology to edit out people they don’t want to see, like homeless people?”
“No! That would be terrible!”
“Yes, I agree. But let’s assume that this will be possible. Whose responsibility is it to ensure that we stay on the ethical side when it comes to emerging technology?”
A couple of weeks ago I gave a presentation to a group of students. I had been asked to present the 10 Hot Consumer Trends for 2017, the last piece of work I did for ConsumerLab. I agreed, but given the opportunity, I decided to make a change. Rather than just presenting the trends, I connected each of them with an ethical dilemma, just to see where the discussion would take us.
Ethics is a hot topic, especially when it comes to Artificial Intelligence (AI). You can hardly read the technology section of a newspaper or magazine without finding an article on some ethical aspect of AI. The discussion revolves around the future of our jobs, the privacy and use of our personal data, and the risks and responsibilities of autonomous systems that malfunction.
But looking at other emerging technologies, most of them come with their own potential challenges that would benefit from an ethical discussion today: Virtual Reality (VR), wearables, IoT, nanotechnology, 3D printing, brain-computer interfaces — the list goes on.
Let’s take VR as an example. VR is said to boost our empathy by allowing us to experience anything from being a victim of war to being bullied in high school. It’s a great tool for teaching us to see others’ perspectives and to learn. But if it can put you in the shoes of the victim, it can just as easily do the opposite: let you explore the role of the perpetrator. What if VR lets you assault or hurt other human beings (real or virtual), just for fun and without any consequences? And since VR can be perceived as very real, what would be the difference from doing it in real life? Do we need to draw a line on what should be accepted? Or is this just like the ’80s debate on video violence and role-playing games?
Now you may ask why we should discuss ethics before the technology has become mainstream, or perhaps even possible. When I asked the students who they believed should own the responsibility, the answer became rather clear. They were quick to point out multiple stakeholders: from the educational system, teaching children from an early age, to legislation and societal institutions setting up rules and guidelines. They also thought that companies need to be part of the discussion — not only the companies that develop the technology but also those that use, facilitate, or enable it. By extension, that means that all of us, you and me included, may be challenged with handling these dilemmas in the future. And if we want to steer or influence development, we need to involve ourselves now.