
Ethical AI frameworks: A critical dimension for designing AI for telecom

For years, our industry has achieved unrivaled network resilience thanks to an inherent design process that puts security and privacy front and center. Could this innovation-centric approach also be the key to designing new AI technologies?
#design
"We can only see a short distance ahead, but we can see plenty there that needs to be done." 

Alan Turing, ‘Computing Machinery and Intelligence’, 1950 

The journey to artificial intelligence (AI) – or the thinking machine, as it was once called – began more than half a century ago. Yet while many of the technological questions have since been answered, many of the bioethical questions have not. 

Critical questions such as ‘can we guarantee that new technologies will always do good and never do harm?’ and ‘can we always ensure that they are just, fair, explainable, and accountable?’ will rightly and inevitably form the centerpiece of any discussion on future AI deployment. With new breakthroughs, new questions will be asked. It’s the price of progress.  

While these questions will always be important, they are also part of a much broader and more holistic conversation between society and technology itself, spanning many different industries. No single person, company, or even industry should bear the responsibility of answering those questions alone, but our combined commitment to 'doing things right' within our own spaces can help achieve the common goal. 

Within telecommunications and ICT, I believe we have already found our part of the 'answer', and it lies within our approach: to align core bioethical principles with generally accepted ethical frameworks as part of the design process, just as we do for the existing critical technology areas of security and privacy. For example, today, our Security Reliability Model enables a managed, risk-based approach to security and privacy implementation. 

Our approach is to also address ethical issues in AI development – by design. 

The need for risk-based regulation 

In recent decades, we have made significant and rapid advances in machine learning and machine reasoning. This has been paired with a ubiquitous surge in the deployment of data-driven digital technologies across all sectors of society. Yet the biggest breakthroughs are most likely still to come. 

As Alan Turing alluded to in his seminal 1950 Computing Machinery and Intelligence paper: “we can only see a short distance ahead”. This makes the task of regulatory governance not only decisive, but also acutely difficult. 

Over the last few years, a lot of effort has been put into ways to address the ethical challenges of AI development, operation and deployment. Today, at the latest count, there are 173 AI ethics frameworks and guidelines in place around the world. The recently proposed AI Act by the European Union is widely regarded as the first comprehensive legal framework for AI systems. This landmark act is quite possibly the furthest we have come in regulating AI and has the potential to create a new global model for the governance of AI. It builds on the earlier work of the European Commission’s High Level Expert Group on AI (EC HLEG on AI), including its ethics guidelines for trustworthy AI. While the EC HLEG on AI discontinued its work in July 2020, parts of it continue in the shape of the AI Alliance. 

However, is it enough? What is needed is not more guidelines, but an agreed, appropriate set of guidelines that are risk- and fact-based. These can then be subject to scheduled revisions. 

According to the findings in Ericsson’s latest AI ethics report, much more could still be done to “better support AI development for business, industry and society”, such as through “[improving] today’s guidelines, rules and regulations […] to cover more than consumer-focused variables.” 

While blanket guidelines, such as the European Commission’s ethics guidelines for trustworthy AI, offer a solid foundation for industries to assess the ethical dimension of their development and deployment of AI-enabled products, there is a great risk that they will not always be aligned with the target use and context of specific products. When used as a basis for regulatory governance, this misalignment could lead to either under- or over-regulation, with the potential to under- or over-estimate risks as well as impede the conditions for innovation and value creation within each industry. 

Virginia Dignum, Professor of Ethical and Social Artificial Intelligence at Umeå University and former member of the EC HLEG on AI, agrees that a distinction must be made on an industry level as to which guidelines are applicable: 

“[Today’s EC HLEG AI guidelines] don’t cover everything. So, for example, when we talk about AI systems that are embedded in telecom switches or used for determining network capacity, those are not the type of applications that the EC group were concerned about.”  

All development of new technology will have some effects when deployed, both positive and negative. However, as well as noting the risk of negative outcomes when using AI, there is also a need to consider the severity of the impact caused by a malfunction. For example, a failure of an AI system that controls the anesthetics during heart surgery will have far more severe consequences than a glitch in AI-enabled thermostats controlling the temperature in the restrooms of the same hospital. In this instance, a risk-based approach allows for proper prioritization of genuinely high-risk AI use over less harmful uses of similar AI technology. This approach is aligned with international human rights frameworks such as the UN Guiding Principles on Business and Human Rights. As an industry, telecom must take the lead in aligning and operationalizing the principles of trustworthy AI into our respective design processes – similar to what we have done with the principles of security and privacy – based on our industry’s unique risk landscape and the severity of potential adverse impacts. 
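To make the idea of severity-weighted prioritization concrete, the classic risk-matrix calculation (risk = severity × likelihood) can be sketched in a few lines. This is an illustrative sketch only; the use-case names, scales and scores below are hypothetical values chosen to mirror the hospital example above, not any real assessment.

```python
# Illustrative sketch: risk-matrix scoring of AI use cases.
# Severity and likelihood values are hypothetical, for this example only.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    severity: int    # 1 (negligible) .. 5 (critical) impact of a malfunction
    likelihood: int  # 1 (rare) .. 5 (frequent) chance of a malfunction

    @property
    def risk_score(self) -> int:
        # Classic risk matrix: risk = severity x likelihood
        return self.severity * self.likelihood


def prioritize(use_cases):
    """Return use cases ordered from highest to lowest risk score."""
    return sorted(use_cases, key=lambda u: u.risk_score, reverse=True)


cases = [
    AIUseCase("anesthesia control during surgery", severity=5, likelihood=2),
    AIUseCase("restroom thermostat control", severity=1, likelihood=3),
    AIUseCase("RAN power-saving optimization", severity=2, likelihood=2),
]

for case in prioritize(cases):
    print(f"{case.name}: risk {case.risk_score}")
```

Ranking by such a score is what lets a regulator or designer spend scrutiny where a malfunction would hurt most, rather than treating all AI uses identically.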

Read our AI ethics report

What does it mean to fully trust a technology? Read more about how fast-growing tech needs to align with ethical principles if it’s to be embraced by society.


AI in telecom – understanding the risks 

So, what are the risks when developing and deploying AI-based technologies in the context of mobile networks? 

Firstly, it’s important to make the distinction that mobile networks present a distinctly different risk profile compared to, for example, the digital applications and infrastructures that run on top of them. 

AI is primarily deployed in mobile networks to increase the network’s performance, resilience and efficiency, and to enable system automation – using numerical data from the network elements, all the while ensuring that the deployments are lawful, ethical and technically robust. 

Together, these deployment benefits can have a significant positive impact on the development of future mobile networks and, as a result, the wider world. They include: 

  • Ensuring superior network performance both from a consumer and a business perspective, including today’s and tomorrow’s industrial production processes  
  • Improving network energy efficiency and meeting the sustainability demands of businesses and the wider world 
  • Enabling greater efficiency and new business possibilities through data-driven and predictive operations, as well as enabling a dynamic focus on business KPIs 

AI technologies can also support network operations to become increasingly proactive, with the ability to predict potential faults through trend prediction, anomaly and root cause detection – improving the resilience and availability of the network and making it safer. 
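As a hedged illustration of the anomaly-detection idea mentioned above, the simplest possible approach flags KPI samples that deviate strongly from the series mean. Production systems use far richer models; the KPI values, function names and threshold below are assumptions made for this sketch.

```python
# Minimal sketch: z-score anomaly detection over a network KPI time series
# (e.g. periodic cell-throughput samples). Threshold is an assumed value.
from statistics import mean, stdev


def detect_anomalies(samples, threshold=2.0):
    """Return indices of samples deviating more than `threshold`
    standard deviations from the series mean."""
    if len(samples) < 2:
        return []
    mu = mean(samples)
    sigma = stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mu) / sigma > threshold]


# Hypothetical throughput samples with a sudden dip at index 5
throughput = [100, 102, 98, 101, 99, 40, 100, 103]
print(detect_anomalies(throughput))
```

Flagging such outliers early is what lets operations shift from reactive fault handling to proactive intervention before users notice degradation.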

Examples of the positive use cases we experience with AI to enhance network performance include how reinforcement learning (RL) is used to increase downlink user throughput by 12 percent with intelligent remote antenna operation. Additional important examples are found when AI is used for network energy efficiency to meet sustainability demand. In one case, we found up to 13 percent energy savings with intelligent RAN power-saving solutions. While highly valuable to the operations of telecom networks, both of these examples are of a low-risk nature. 

The hardware on which AI systems are deployed in mobile networks is governed by strict European safety regulations – in Ericsson’s case, the Radio Equipment Directive and the Low Voltage Directive – while all sensitive personal data is regulated by the General Data Protection Regulation (GDPR). 

By understanding any potential risks, the requirements for an AI implementation can be tailored to the target environment and demands. As with the methodology built on the Security Reliability Model, potential risks need to be addressed, but we also need to acknowledge differences in the severity of negative impacts. The risk landscape for AI in telecommunication systems, from an ethical and human rights perspective, is low and has recently been explored by Ericsson in our 5G Human Rights Assessment. 

The road to designing AI within ethical frameworks 

Telecom must introduce new methodologies to ensure that the continued deployment of AI systems in mobile networks remains lawful, ethical and robust. 

This also echoes the sentiments in the recent Ericsson white paper ‘AI in next generation connected systems’, which states: 

"The dependence on data, the complexity of algorithms, and the possibility of unexpected emergent behavior of the AI-based systems requires new methodologies to guarantee transparency, explainability, technical robustness and safety, privacy and data governance, nondiscrimination and fairness, human agency and oversight, and societal and environmental wellbeing and accountability. These elements are crucial for ensuring that humans can understand and — consequently — establish calibrated trust in AI-based systems." 

So, what are the next steps for us, and how can we ensure that we continue to stay ahead of the evolving risk landscape when it comes to AI-based systems? 

By introducing an ethical dimension to run parallel to areas such as security and privacy within the design process, telecom could replicate a sustainable and proven model which would enable it to address ethical risks early in the development cycle. Such a model could definitively ensure the compliance of network-based AI systems with the principles of trustworthy AI based on ethics by design. 

In practice, this could potentially be operationalized through a combined methodology of guidelines, questionnaires and software which, according to the expert contributors to our latest AI ethics report, could be used throughout the development cycle to assess, for instance, the explicability of an algorithm or the absence of bias in training datasets. 
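A minimal sketch of how such a questionnaire-driven gate might look in software: each trustworthy-AI principle maps to yes/no checks, and a release candidate passes only when every check is satisfied. The principle names echo the EU guidelines, but the specific check questions and function names here are hypothetical, invented for illustration.

```python
# Illustrative ethics-by-design gate. The checklist items are hypothetical
# examples, not Ericsson's actual assessment questions.
CHECKLIST = {
    "transparency": ["Is the model's decision logic documented?"],
    "fairness": ["Has the training data been reviewed for bias?"],
    "robustness": ["Has the system been tested against malformed input?"],
    "accountability": ["Is there a named owner for model behavior?"],
}


def assess(answers):
    """Return the principles whose checks are unmet.
    `answers` maps each checklist question to True/False."""
    failed = []
    for principle, questions in CHECKLIST.items():
        if not all(answers.get(q, False) for q in questions):
            failed.append(principle)
    return failed


def release_approved(answers):
    """A candidate ships only when no principle fails its checks."""
    return not assess(answers)
```

Embedding a gate like this at each stage of the development cycle is what turns ethics from a one-off review into a design-time property, parallel to existing security and privacy gates.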

By integrating such a combination of methodologies within the design process itself, telecom can deliver a sustainable, comprehensive, and novel framework that supports AI deployments which are lawful, ethical and robust – with the best possible conditions for innovation. 

Explore more 

Ericsson has adopted the EU Ethics guidelines for trustworthy AI and is working to implement AI design rules to ensure that its AI is fully lawful, ethical and robust. Find out more on Ericsson’s AI in networks page. 

Ericsson is on a journey to develop fully cognitive networks by 2030. Learn why and how in this recent opinion piece by Ericsson’s CTO Erik Ekudden: To develop cognitive networks, we are building human trust in AI 

Read our ethical AI report: AI – ethics inside? 

Read the Ericsson Tech Review AI special edition 2021 

Read the Ericsson white paper: Artificial Intelligence in next-generation connected systems 

Explore the relationship between 5G and AI

Find out more about the European Union’s proposed AI Act and its ethics guidelines for trustworthy AI

Read about 5G and human rights: what’s the connection? 

Explore the methodology we use for our Security Reliability Model  
