Moral compasses and parallel universes — a few thoughts about technology and ethics
Should public safety trump civil liberty? Will cities (or our lives for that matter) become better if we make them more efficient? Does it matter if common technologies are indecipherable to most people? Is it always a given that the data generated by people’s use of products and services belongs to the ones providing those products and services?
These kinds of questions are a tacit or clearly stated part of most of our projects. Here are some thoughts about how we reckon morality works and how we relate ethics to technology.
First things first. The dictionary defines the term morality as "principles concerning the distinction between right and wrong or good and bad", which sounds relatively straightforward.
The notion of what constitutes morality, however, is not as simple. Just think about how we have to use metaphors when we try to talk about it. What kind of geography is, for example, a 'moral compass' operating in? I don't know, but in my experience people's moral compasses don't seem to point towards the same North and South Pole. In addition, the ethical poles seem to jump around depending on where people are, who they are, what they are doing and why they are doing it. Nor do the poles seem to sit at opposite ends of a straight axis.
To me, the 'moral compass' metaphor makes a lot more sense if I imagine that people are actually on different planets that have their axes tilted differently. The planets are co-existing in time/space in overlapping parallel universes, which by the way are fully transparent to each other so nobody can really figure out where one planet ends and the other begins.
So yeah, that's probably not quite straightforward. Let me try to illustrate:
Herbert Spencer — who extended Darwin's theories about natural selection into the fields of economics, sociology and ethics — used Darwin's ideas as support for his own theories about the economy. Spencer believed that systems of selfish individuals driven by competition would generate the most benefit for everyone, implying that competition and self-interest are good, in a moral sense. (Spencer was the one who coined the phrase "survival of the fittest", which has become a popular catchphrase for this kind of view.)
Charles Darwin himself on the other hand believed that morality, as a biologically evolved mechanism, has empathy and altruism at its core, i.e. quite the opposite of the idea of moral selfishness promoted by Spencer in a socio-economic context.
This is the glitch where we glimpse the two parallel universes in which Darwin's and Spencer's moral compasses have different norths and souths, despite their agreement on the principles. The principle of a biological mechanism ('natural selection') is morally neutral in nature. It just exists. But the same principle is morally biased when applied to technology and socio-economic systems. When we have choices, our moral compasses turn on, and that's when we may find that they are pointing in different directions. If we are not paying attention, the glitch passes unnoticed and the differences in the ethical dimension are lost in translation.
One way we make ethics part of our work is simply to pay constant attention to the glitch between the moral 'parallel universes'. It is really just about keeping an eye on the two following archetypal ethical orientations, which we call 'nature' and 'nurture':
Nature: Some are winners and others must be losers. Natural selection is just how things work and how things are supposed to be. Compliance with 'natural' processes like this is morally right, either because some divine figure has made it that way, or because that's the universal order of nature. This leads to the belief that evolution, in any context, should be allowed to take its natural course and not be disturbed. Interference would artificially maintain weaknesses otherwise bound to be discontinued by evolution. Since becoming the 'fittest' is a goal in itself, intervening in the natural processes is not only seen as foolish, but even immoral.
Nurture: Society and socio-economic systems are manmade concepts that we must regulate so they work for everyone. It's morally wrong to have systems where some people become 'unfit' and have to suffer because of the way the system works. This goes along with a belief that selfishness is suboptimal compared to altruism when it comes to generating the most benefit for most people. Happiness and fulfilment is the ultimate goal, which should be achievable for everyone, regardless of individual 'fitness'. Since a laissez-faire society doesn't support that goal, not having regulations is not only cruel, but even immoral.
George Lakoff describes this as a metaphoric system where morality is either understood as strength or as empathy.
The important thing to notice is that the 'parallel universes' share the same principles of rights and wrongs when we look at the principles individually. This is typically what most religions list as vices and virtues — things like altruism, authority, community (loyalty), cooperation, empathy, fairness, harm, purity and reciprocity.
They are universal because any sane person, from any place in any culture, agrees that it is generally wrong to harm another being, to betray someone, to not help someone in desperate need, and so on. But in a real situation people sort these universal moral principles in order of priority. Many people feel that harming someone is OK as punishment for wrongdoing, even if they would agree that it's wrong to harm people in general. People who reason this way rank obedience toward authority higher than not harming someone.
To make it even more complex, people often have different moral configurations for different aspects of society: work, politics, money, technology, food, love, religion, medicine and so on. Essentially, the principles are universal while their order of priority is relative.
Awareness of the universal moral principles and their relative order of priority can make it easier to decipher moral opinions that might seem strange at first — like why a prominent Iranian Ayatollah considers mobile broadband immoral. This is a religious conservative context where the highest regarded moral principle is authority (the religion), followed by purity and loyalty. Humans are seen as inherently weak/bad, and it is the religion that makes and keeps them strong/good. Since people are weak, it is important to keep their minds pure. Information, images, ideas and ideals from the outside world are seen as contaminating or infecting people's minds, which threatens the moral authority of the religion. Since mobile broadband gives people easy access to ideas from the outside, the technology itself is deemed immoral.
It's probably unnecessary to mention that we beg to differ.
Looking at ethics this way also reveals that Spencer's liberal views on the economy and the Ayatollah's conservative view on technology bear some resemblance. Both give high priority to authority and purity; the difference is that Spencer sees nature/science, not religion, as the moral authority, and 'artificial human interference', not foreign ideas, as moral impurity. Even someone who is a vegetarian for ethical reasons applies the same moral principle of purity to food: if even a small amount of beef bouillon gets into the soup, the soup is "contaminated" and eating it would be immoral. The big difference here, though, is that the vegetarian ranks purity as a supporting principle for not harming animals, which in turn supports an understanding of morality as empathy.
In our daily lives though, all of this is in a bit of a haze, hidden behind expressions like 'moral compass' and 'common sense'.
Technology is not morally neutral
Since technology is such an integral part of society, it is important that those who shape it can think and talk about its ethical aspects. This means being able to recognise that when people consider, for example, nano-technology or genetically modified food immoral, they may be concerned with the moral principle of authority; i.e. that technology can be seen as a threat to natural and/or divine orders. That it may even be considered wrong because of moral purity (mixing genes from different species), or because there is a chance it could turn out to be harmful. We should be able to see that the same principles apply to people's moral concerns about artificial intelligence and robots. The uncanny valley — the disturbing wrongness of very human-like robots, where the distinction between human and machine is blurred but still noticeable — can make people feel that they are wrong because of moral impurity (mixing human and machine). That Asimov's Laws of Robotics are about preventing harm to humans. That the EU cookie law is about fairness. Etcetera.
It's important to go a little beyond the "guns don't kill people, people kill people" argument, i.e. the idea that technology is neutral and that ethics only apply to how it is used. In our work we tend to view any topic or concept we are working with through a number of different 'lenses'. One such lens is to think of technology as analogous to language, as a set of 'ethical expressions'. The words and expressions we use both reflect and reinforce how we conceptualise whatever we are talking about, and in a similar fashion technology, like design, reflects and reinforces the morality of those who shape and use it.
Another 'lens' is to bring Actor-Network Theory and Latour's ideas about 'agency' into the design of products, services and technology. Viewing technology as a mediator or amplifier of intent makes it apparent that technology has some moral agency of its own, but without also having responsibility it exists somewhere between the morally biased and the morally neutral. (A philosophical question is then whether there can be any 'half-ethical' state at all, or whether morality must be binary.)
Yet another lens is to look at how ethics relates to trust. Trust is one of those vaporous matters that is crucial for making technology work well in the real, messy world. The trick is to view trust as an ethical 'credit rating', not as something we automatically get by making the most advanced or reliable technology.
Anyway. We are not aiming for consensus about what's right and wrong or good and bad. Quite often we are in different 'parallel universes' ourselves, and we disagree about many things most of the time. What is much more important is that constantly bouncing our ideas off the notions of morality and ethics as part of our creative process makes us ask and discuss more important questions — like the ones at the beginning of this post.
Top image by Joakim Formo, from flickr