
Product morals and artificial ethics (a few notes)


I'm collecting thought-seeds for possible future work related to the user experience of products containing artificial intelligence. Here is some thinking aloud after a little ad-hoc digging, triggered by reading about the Centre for the Study of Existential Risk (CSER), which got some attention in mainstream media this week, especially for the part of its research concerning artificial general super-intelligence.

So. First: morals and ethics as key topics. As well as security and trust. And sustainability (i.e. survival), if we go all the way. A couple of points:

Ethics is already a topic for smart products. Machines with artificial intelligence are bound to make decisions in unpredictable situations that will affect human lives. Think of a self-driving car that understands that a collision with pedestrians is 100% unavoidable, but can still choose whether to steer toward a single adult or toward two children.
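To make that concrete: somewhere in such a car's software, the dilemma becomes literal code. Below is a deliberately crude, hypothetical sketch (the Obstacle type, the harm_cost function and the example values are all invented for illustration, not taken from any real system) of how a collision-mitigation routine might rank unavoidable outcomes:

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """A hypothetical obstacle in one of the car's possible paths."""
    description: str
    people: int  # number of people in this path

def harm_cost(obstacle: Obstacle) -> int:
    # Toy policy: fewer people harmed is better. A real system would
    # weigh many more factors -- and *which* factors count, and how,
    # is precisely the ethical question.
    return obstacle.people

def choose_trajectory(options: list[Obstacle]) -> Obstacle:
    # The ethics live entirely in harm_cost(); the selection itself
    # is trivial.
    return min(options, key=harm_cost)

options = [Obstacle("single adult", 1), Obstacle("two children", 2)]
print(choose_trajectory(options).description)  # -> single adult
```

The point is not that anyone would ship something this naive, but that whatever policy ends up inside harm_cost() is, in effect, the product's moral stance, written down by its makers long before the accident happens.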

Pet robots are already used in elderly care, and sooner or later someone will propose using cuddly machines as a replacement for human social interaction in order to save money.

Many nations already have ethical councils that develop recommendations for bio- and nanotechnology. We will probably soon see AI added to the list; see, for example, the Danish Council of Ethics' Recommendations concerning Social Robots.

There are those who take the opposite point of view: the American Society for the Prevention of Cruelty to Robots argues that since other non-human entities, like companies and animals, have rights, self-aware robots should too.

Anthropomorphism as a part of the user experience of products is something we find interesting (see SWoT and somewhat similar thinking from our friends at BERG), but that is all a kind of play. Treating products as actual people, for real, seems to me a very different thing. But it also raises the question of how we define and understand life as such, which inevitably catapults us into philosophical and spiritual domains where anything could happen. Which is often in itself a good thing, I guess. (But robot rights? Seriously?)

Like today's smart products, their future descendants will probably not be separable as physical objects, unlike living creatures in nature, which are self-contained physical beings. Be they virtual agents or physical robots, their intelligence does not have to fit in a skull-sized box; their "brain" is distributed across "the cloud", which means they depend on networks in order to "think". The network, in other words, will be an integrated part of robots. Could there be situations where cutting off their network access would be the morally right thing to do?

And if the networks themselves are artificially intelligent, self-configuring and self-analysing, turning on and off automatically and able to prioritise, accelerate or even stop certain kinds of data, or users: could ethical issues similar to those of the self-driving car apply to the network too?
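As a parallel, equally hypothetical sketch (the Flow type and the ranking rules are invented for illustration), a self-managing network's traffic policy is encoded ethics in just the same way:

```python
from dataclasses import dataclass

@dataclass
class Flow:
    """A hypothetical traffic flow competing for capacity."""
    owner: str
    kind: str  # e.g. "emergency-call", "video", "bulk"

def priority(flow: Flow) -> int:
    # Toy policy: rank flows for when capacity runs out. That an
    # emergency call beats a bulk transfer is uncontroversial; deciding
    # whose traffic is stopped under congestion is an ethical choice.
    ranking = {"emergency-call": 0, "video": 1, "bulk": 2}
    return ranking.get(flow.kind, 3)

def admit(flows: list[Flow], capacity: int) -> list[Flow]:
    # Admit the highest-priority flows up to capacity; the rest are
    # slowed or stopped.
    return sorted(flows, key=priority)[:capacity]

flows = [Flow("a", "bulk"), Flow("b", "emergency-call"), Flow("c", "video")]
print([f.kind for f in admit(flows, 2)])  # -> ['emergency-call', 'video']
```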

So, no answers here, just a bunch of loose ends, but it seems likely that ethical product behaviour and machine code of conduct (ah, the pun…) could soon become a significant part of the user experience of future products and services.


Robot toy. Photo by Kevin Dooley, found on Flickr.
