Responsible AI – a human right?
In the midst of the excitement about emerging AI-enabled technologies, find out how Ericsson is working to safeguard the integrity of tomorrow’s consumers and guide us all to an age of responsible AI.
Artificial intelligence, once classed as "expert systems" back in the 1980s, will come of age in an era of 5G connectivity – making autonomous, disruptive technologies part and parcel of our everyday life. New business models will emerge, and change across our societies will accelerate. Most of this change will be for the good of humanity, such as enabling us to be more efficient, enhancing our senses and use of scarce resources, tackling climate change or generally just helping us to make better decisions. However, as is often reported in the media, there is also a risk that these AI-enabled systems could be misused, intentionally or unintentionally, to the detriment of humanity.
Get a taste of tomorrow's AI technologies in our 10 Hot Consumer Trends 2019 report.
Building trust in technology through responsible AI
That's why, at Ericsson, we are driving the notion of responsible AI. By this we mean that we need to be aware of the impact that AI-enabled systems might have when implemented in different contexts – but also that the AI systems themselves need to be programmed to act responsibly and fairly within their boundaries, for a sustainable and trustworthy outcome. In doing so, we aim to mitigate possible adverse effects of AI and help to build trust in the technology itself.
Today, we use machine learning and AI to support the operation and maintenance of our systems. Through new technologies, we can automate fault prevention and network optimization, leading to increased reliability and trustworthiness of the networks. By analyzing network traffic and mobility patterns, operators can better serve their subscribers with tailored services and products. Today's networks carry enormous amounts of data. In handling this data, we have a responsibility to make sure that it is accurate, that end-user privacy is preserved, and that it is safeguarded against threats. To do this we rely on a complex ecosystem of algorithms that must be designed for transparency and explainability, and trained so as to eliminate possible bias. These are just some of the things we, as a company, must continue to address when developing AI-enabled communication systems.
Find out how artificial intelligence is helping Ericsson to manage more intelligent network operations.
The challenges of artificial intelligence
In trying to understand the challenges, several companies, including Ericsson, are investigating both the potential of AI technologies and their possible unintentional effects. At a more overarching level, we at Ericsson have identified six major challenges in this area:
- Transparency and explainability: If AI systems are opaque and unable to explain how or why certain results are presented, this lack of transparency will undermine trust in the system. In which ways can autonomous systems explain themselves?
- Security and privacy: Access to vast amounts of data will enable AI systems to identify patterns beyond human capabilities. Here there is a risk that the privacy of individuals could be breached. How can we as individuals understand and control the use of data derived from our activities, online or in real life?
- Personal and public safety: Deploying autonomous systems (e.g. self-driving cars, UAVs or robotics) across public or industrial arenas could pose a risk of harm. How can we ensure human safety?
- Bias and discrimination: Even if technology is neutral, it will only do what we program (and teach) it to do. Thus, it will be influenced by human and cognitive bias or skewed, incomplete learning data sets. How do we make sure that the use of AI systems does not discriminate in unintended ways?
- Automation and human control: Trusting systems that support and offload our current work tasks risks eroding the very skills and knowledge they replace. This will make it more difficult to judge the correctness of these systems' output and, in the end, could make human intervention impossible. How can we ensure human control of AI systems?
- Accountability and regulation: With the introduction of new AI-driven systems, expectations on responsibility and accountability will increase. Who is responsible for use and potential misuse of AI systems?
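The bias challenge above is the most concrete of the six: a model trained on a skewed data set will faithfully reproduce that skew. As a minimal sketch (not Ericsson code – the data and function names here are invented for illustration), one simple pre-training check is to compare positive-outcome rates across groups in the labelled data, a so-called demographic parity check:

```python
# Minimal sketch of a demographic parity check: if the share of positive
# labels differs sharply between groups, a model trained on this data is
# likely to reproduce that skew as discriminatory behavior.

def positive_rate(records, group):
    """Share of records belonging to `group` that carry a positive label."""
    members = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in members) / len(members)

# Toy, invented data set: outcomes are skewed against group "B".
data = (
    [{"group": "A", "label": 1}] * 80 + [{"group": "A", "label": 0}] * 20 +
    [{"group": "B", "label": 1}] * 40 + [{"group": "B", "label": 0}] * 60
)

# A large gap between group rates flags the data set for review
# before any model is trained on it.
gap = positive_rate(data, "A") - positive_rate(data, "B")
print(f"parity gap: {gap:.2f}")  # 0.80 - 0.40 = 0.40 -> skewed data
```

This only catches one narrow form of skew, of course; in practice such checks are one input among many when auditing training data for unintended discrimination.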
Identifying pitfalls of AI
Looking at these challenges, it is easy to see that they need to be addressed from a more ethical point of view. What is right and what is wrong? Whatever our answer, the decision process includes not only information and knowledge but, to the same extent, our values and preferences. This means that the answer might not be the same for two individuals, yet AI-enabled decision support systems might be used across the world – adding to the complexity of judging the correctness of an outcome from a value perspective. How can we design normative applications that are truly global?
Going deeper into these challenges, we can already identify the potential pitfalls, but we do not yet know their weight and relevance. If we really intend to mitigate undesired outcomes and create new technologies that are truly resilient, it is important that we go deeper in exploring: control over technology; our understanding of how (and why) it acts; privacy and integrity on several levels; fair treatment; physical safety; unintentional adverse impact; intentional misuse; personal freedom; bias of developers and of learning algorithms; management of data and consent; and, finally, the accountability and liability of technology.
Artificial intelligence and human rights
As a company, Ericsson is committed to respecting the UN Universal Declaration of Human Rights and implementing the UN Guiding Principles (UNGP) on Business and Human Rights. This includes actively addressing the potential adverse impact of our technology. So, how does this commitment relate to the challenges described above? By examining the risks to human rights, we can see that the UN declaration already addresses several of the challenges for AI systems. These include harming an individual's right to life, eroding their right to dignity, intruding on the right to privacy, curtailing freedom of expression and thought, unfair treatment and unequal opportunities, discrimination due to bias, uneven distribution of benefit, and arbitrary interference in an individual's life. If we map these principles side by side with the challenges described for AI systems, we can see several similarities. Some correlate directly with the challenges described above, while others refer to possible effects of intentional misuse or unintentional effects of the technology.
In the light of this small exercise we can conclude that if we address the ethical challenges of artificial intelligence systems, our solutions will also respect human rights as declared by the UN. Not all of these challenges are relevant for a company like Ericsson. Nevertheless, our goal is to identify those that are, and carefully implement ways to address areas like transparency, explainability, bias in machine learning and data privacy in order to minimize any negative effects of using artificial intelligence in our systems and products.
Are you ready for automation at scale? Find out how artificial intelligence and machine learning will soon enable intelligent manufacturing and intelligent transport systems.