
AI bias and human rights: Why ethical AI matters

Recent examples of gender and cultural algorithmic bias in AI technologies remind us what is at stake when AI abandons the principles of inclusivity, trustworthiness and explainability. With AI becoming increasingly prevalent in our daily lives, the question arises: without ethical AI, just how at risk are our human rights?

Ever been to an interview and not got the job? If you have, then you’ve probably also been haunted by the ‘why’ question that follows.

Were you not qualified enough? Not good enough on the day? Or maybe it was something about you they just didn’t like – where you’re from, the words you use, or the clothes you wear.

Sadly, for many of us, those questions don’t end there. Today, reports still suggest that many are refused a job because of the color of their skin, the God they worship, their age, gender, sexual preference, social standing and more.

From human bias to AI bias

We live in a world of human bias. Each day, whether we like it or not, every decision we take is colored by our own biases, built up over years of our unique conditioning. These biases can muddy our ability to learn and reason in a way which is fair, non-discriminatory, and grounded in rationale. They can create a discriminatory chain reaction.

Today, as we move into a world where code is being embedded into many facets of our daily lives, and algorithms – not humans – hold more sway in deciding if we get the job, get the loan, get the scholarship, get arrested, get to travel, and get just about everything else – will the same risks of bias and prejudice remain? In other words, can emerging AI-powered systems finally liberate us from thousands of years of human bias?

The answer to this question depends on how the world chooses to develop and deploy AI technologies. Without the principles of ethical AI – which include aspects such as explainability, safety, security, privacy, fairness, and human agency and oversight – there is a strong risk that our future societies will not only continue to project historical human biases, but will exacerbate them. Why? Because AI technologies are ultimately modelled, specified, and overseen by people – with all their flaws. It is therefore inevitable that we unconsciously carry our biases into the systems we create.

While it may be impossible to rid AI systems of human bias entirely, we can take every precaution to minimize its effects, such as through careful selection of training data, conscious data governance, and a diverse workforce that covers a whole range of inputs and offers a fair representation of our social structures.
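To make this concrete, here is a minimal sketch of what a pre-training data audit could look like. The records, field names and numbers below are hypothetical and for illustration only; the point is simply that group representation and historical label rates can be checked before any model is trained on the data.

```python
# A minimal sketch of a pre-training data audit: before a hiring model is
# trained, check how each demographic group is represented in the data and
# how often it carries a positive label. Records and field names here are
# hypothetical, for illustration only.
from collections import defaultdict

def audit_representation(records, group_key, label_key):
    """Report each group's share of the data and its positive-label rate."""
    counts = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        counts[row[group_key]] += 1
        positives[row[group_key]] += 1 if row[label_key] else 0
    total = sum(counts.values())
    for group in sorted(counts):
        share = counts[group] / total
        pos_rate = positives[group] / counts[group]
        print(f"{group}: {share:.0%} of data, {pos_rate:.0%} labeled 'hire'")

# Hypothetical historical hiring records, skewed towards one group.
training_data = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
]
audit_representation(training_data, "gender", "hired")
# Large gaps in either number flag a dataset that needs rebalancing, or at
# least further governance, before it is used to train a model.
```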

Elena Fersman, Head of Ericsson’s Global AI Accelerator, sums this up brilliantly in her blog post on the importance of balance in AI: “One of the things that fascinate me most is that AI technology is inspired by humans and nature. This means that whatever humans found to be successful in their lives and in evolutionary processes can be used when creating new algorithms. Diversity, inclusion, balance, and flexibility are very important here as well, with respect to data and knowledge, and diverse organizations are for sure better equipped for creating responsible algorithms. In the era of big data, let's make sure we don't discriminate the small data.”

The root causes of AI bias

It sounds fairly straightforward, right? However, the harsh realities of today’s world can very often make the theoretical harder to achieve than it should be. The unfortunate truth is that automated systems can often be propped up by data sets built on thousands of low-paid labor hours and crowdsourced data – and as reports would suggest, very often by men. And without effective data governance or algorithmic hygiene, this can cause problems.

In 2017, AI researchers Kate Crawford and Trevor Paglen delved into the world of crowdsourced image labeling to explore if and how human bias was creeping into AI systems. In the process, their Excavating AI project, which examined how people were being labeled on the public image database ‘ImageNet’ – used as a dataset for many AI systems – found classificatory terms that were not only judgmental, but openly misogynist, racist, and ableist:

“You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a ‘slattern, slut, slovenly woman, trollop.’ A young man drinking beer is categorized as an ‘alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.’ A child wearing sunglasses is classified as a ‘failure, loser, non-starter, unsuccessful person.’ Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems?”

Read our AI ethics report

What does it mean to fully trust a technology? Read more on how fast-growing tech needs to align with humans’ ethical principles if it’s to be embraced by society.


AI bias examples

Let’s look at a real example of how AI bias can infringe – and has infringed – on our human rights, returning to our initial question of bias in hiring and recruitment.

Today, hundreds of blue-chip companies worldwide have turned to algorithm-based ‘emotional AI’ hiring platforms to augment their recruitment processes and lower their financial burden. While such AI-based systems may well offer a fairer, less biased means of recruitment, there have been some widely reported examples of what can happen when they go wrong.

This includes one particular example of women applicants being disproportionately rejected based on years of biased data in a male-dominated sector, as described by Noreena Hertz in her book ‘The Lonely Century’:

“In practice, stripped of my full, complex humanity I had to impress a machine whose black-box algorithmic workings I could never know. Which of my ‘data points’ was it focusing on and which was it weighting the most heavily? What formula was it using to assess me and was it fair? The challenge with machine learning is that even if the most obvious sources of bias are accounted for, what about less obvious, neutral-seeming data points that one might not even consider could be biased?”
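Hertz’s point about neutral-seeming data points can be made concrete with a small simulation. The sketch below is entirely synthetic: it assumes a hypothetical feature, years of uninterrupted employment, that correlates with gender in the historical data (through career breaks, say), and then applies a shortlisting rule that never looks at gender at all.

```python
# A minimal, synthetic sketch of proxy bias: a 'gender-blind' scoring rule
# still produces gendered outcomes, because one of its inputs is correlated
# with gender in the underlying data. All numbers are invented.
import random

random.seed(42)

def make_candidate():
    gender = random.choice(["male", "female"])
    # Hypothetical proxy: career breaks depress this feature for one group.
    years = random.gauss(8, 2) - (2.5 if gender == "female" else 0.0)
    return {"gender": gender, "years_uninterrupted": years}

candidates = [make_candidate() for _ in range(10_000)]

def shortlisted(candidate):
    # The rule never reads the 'gender' field, only the proxy feature.
    return candidate["years_uninterrupted"] >= 8.0

for group in ("male", "female"):
    members = [c for c in candidates if c["gender"] == group]
    rate = sum(shortlisted(c) for c in members) / len(members)
    print(f"{group}: shortlisted {rate:.0%}")
# Even with gender stripped from the inputs, the shortlisting rates diverge,
# because the proxy feature quietly encodes the historical bias.
```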

Addressing AI bias – where to begin?

AI technologies are maturing and increasingly being deployed across our societies. According to a recent Ericsson AI Industry Lab report, on average 49 percent of AI and analytics decision makers said they planned to complete their transformation journey to AI by the end of 2020. The same study made it apparent that the biggest obstacle to the deployment of AI technologies is not the technologies themselves, but rather people: 87 percent of respondents said they faced more people and culture challenges than tech or organizational challenges. Interestingly, of the top ten most critical challenges faced by organizations, more than half relate to people and culture. These include deterrents such as employees preferring to stick to tried and tested methods, employees fearing that they will lose their jobs, and many generally not understanding the technology or being open to change.

Essentially, it all comes down to a lack of understanding and a fear of relinquishing control. To overcome those misconceptions, we need AI which is transparent, understandable and explainable. We need AI which humans can trust. So how do we get from here to there?

1. Regulating a more ethical AI

In April 2021, the European Commission set a significant precedent in this area by launching its first ever legal framework on AI as well as a new Coordinated Plan with Member States which it says will “guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU.”

The new risk-based approach will set strict requirements for AI systems based on a pre-defined level of risk. It also places an immediate ban on AI systems which are considered to “be a threat to the safety, livelihood and rights of people” – including “systems that manipulate human behavior, circumvent users’ free will and allow social scoring by governments”.

Further adding to this momentum, in June 2021, Australia launched a similar AI ethics framework which it says will “guide businesses, governments and other organisations to responsibly design, develop and use AI.”

Yet while emerging ethical AI frameworks, such as those mentioned above, offer a robust and sustainable foundation for future AI development, they are not a silver bullet. Tech companies, governments, businesses and activist groups all have a role to play in developing and delivering AI which is ethical and inclusive. This was underlined in the European Parliament’s 2020 resolution on civil liability for AI, which states that it is never the AI system itself which is liable, but rather the range of actors across the whole value chain who create, maintain or control the risk associated with the AI system.

2. Company and organizational engagement

With frameworks in place, businesses that wish to adopt AI technologies must soon demonstrate that they can implement the necessary requirements in their day-to-day operations and products.

This Ericsson blog post on ethics and AI lays down seven steps for how organizations can begin to build trust in AI technologies while aligning with the necessary regulations and standards. These include methodologies such as cultural and educational programs, risk assessments and third-party audit programs which, as the author says, organizations should already be looking to roll out today: “The trajectory for this work is moving towards a prevention, detection and response framework similar to those already in place for other ethics and compliance programs, such as anti-corruption, and prevention of tax evasion frameworks. Trustworthiness is emerging as a dominant prerequisite for AI and companies must take a proactive stance. If they don’t, we face a risk of regulatory uncertainty or over-regulation that will impede the uptake of AI, and subsequently societal growth.”

3. Rights and activist groups

We’re at the start of a long journey into new forms of social machinery, where the relationship between people and technology will continuously be redefined. To make sure that our societies remain on the right side of the ethical compass, it is critical that AI remains human-centric, where human agency is guaranteed, and our fundamental rights remain sacrosanct.

Civil rights and activist groups will undoubtedly have a key role to play on that journey, to continuously challenge the discourse and amplify the voices of those who are most adversely affected by new technologies.

Joy Buolamwini, AI researcher and contributor to the recent Netflix documentary Coded Bias, says that a new politics of refusal is needed to help steer new technology in the right direction: “One of the questions we should be asking in the first place is if the technology is necessary or if there are alternatives, and after we have asked that if the benefits outweigh the harm, we also need to do algorithmic hygiene. Algorithmic hygiene looks at who these systems work for and who it doesn’t. There is actually continuous oversight for how they are used.”
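One way to read ‘algorithmic hygiene’ in engineering terms is disaggregated evaluation: reporting a model’s error rates per group rather than as a single aggregate number. The sketch below uses invented results purely to illustrate the idea.

```python
# A minimal sketch of disaggregated evaluation: compute false positive and
# false negative rates per group instead of one aggregate accuracy figure.
# The results below are hypothetical, for illustration only.
from collections import defaultdict

def error_rates_by_group(examples):
    """examples: iterable of (group, true_label, predicted_label) tuples."""
    stats = defaultdict(lambda: {"fp": 0, "fn": 0, "pos": 0, "neg": 0})
    for group, truth, pred in examples:
        s = stats[group]
        if truth:
            s["pos"] += 1
            s["fn"] += 0 if pred else 1
        else:
            s["neg"] += 1
            s["fp"] += 1 if pred else 0
    for group, s in sorted(stats.items()):
        fnr = s["fn"] / s["pos"] if s["pos"] else 0.0
        fpr = s["fp"] / s["neg"] if s["neg"] else 0.0
        print(f"{group}: false negatives {fnr:.0%}, false positives {fpr:.0%}")

# Hypothetical recognition results: (group, ground truth, model prediction).
results = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True), ("group_b", False, True),
]
error_rates_by_group(results)
# A large gap between groups is exactly the kind of finding that should
# trigger the continuous oversight Buolamwini calls for.
```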

AI bias and ethical AI – what next?

One of the players leading the research into ethical AI is Ericsson Research, particularly the components of AI explainability, safety and verification – as expertly summed up in this blog post on trustworthy AI.
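To give a flavor of what explainability can look like in practice, here is a minimal sketch of permutation importance, one common model-agnostic technique: shuffle one input feature at a time and measure how much the model’s accuracy drops. The model and data below are stand-ins invented for this example, not a description of Ericsson’s methods.

```python
# A minimal sketch of permutation importance, a model-agnostic way to ask
# 'which inputs does this model actually rely on?'. The model and data are
# stand-ins: any predict function over list-of-features rows would do.
import random

random.seed(0)

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features):
    base = accuracy(model, X, y)
    importances = []
    for j in range(n_features):
        column = [row[j] for row in X]
        random.shuffle(column)  # break the link between feature j and y
        X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
        importances.append(base - accuracy(model, X_perm, y))
    return importances

# Hypothetical data: feature 0 fully determines the label, feature 1 is noise.
X = [[i % 2, random.random()] for i in range(1000)]
y = [row[0] for row in X]
model = lambda row: 1 if row[0] > 0.5 else 0  # stand-in for a trained model

print(permutation_importance(model, X, y, n_features=2))
# Expect a large accuracy drop for feature 0 and roughly zero for feature 1:
# a human-readable signal of what the model actually depends on.
```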

The technology ecosystem is still taking its first steps on a long journey and, as always, the first step is the most defining. The choices and investments we make today – whether that be regulatory, research-based or across product development and deployment – could ultimately define the world we strive to create tomorrow. And as a technology which will invariably impact all of us, we all have a stake in how AI is designed, developed and deployed – from researchers to regulators, and activists to journalists.

Explore more

Ericsson has adopted the EU Ethics guidelines for trustworthy AI and implemented AI design rules to make sure that its AI is fully lawful, ethical and robust. Find out more on Ericsson’s AI in networks page.

Ericsson is on a journey to develop fully cognitive networks by 2030. Learn why and how in this recent opinion piece by Ericsson’s CTO Erik Ekudden: To develop cognitive networks, we are building human trust in AI

Read our ethical AI report: AI – ethics inside?

Read the Ericsson Tech Review AI special edition 2021

Read the Ericsson white paper: Explainable AI – how humans can trust AI
