Ethics and AI: 8 steps to build trust in intelligent technology
Artificial Intelligence (AI) has already become an important technology across all sectors, enabling new capabilities that serve communities and companies alike. However, a lack of trust still hinders its uptake.
The potential risks of AI necessitate broad involvement of critical stakeholders across government and industry to ensure effective regulation and standardization. One example is the EU Commission's Ethics Guidelines for Trustworthy AI, which we recently adopted and will continue to align with where possible.
Beyond ensuring their AI activities align with regulations and standards, how else can organizations build trustworthy AI?
Below, I share the eight steps for building trustworthy AI, based on methodologies from leading ethics and compliance programs, and explain why it's essential for businesses to adopt them.
- Start at the top: Top-level management in global companies generally has a good awareness of typical ethical or compliance risks in their sector, but many leaders are still uninformed about how AI is developed and used within their organizations. Company leaders must be educated on the principles of trustworthy AI, which will enable them to take a clear stance on ethics and AI and ensure the company's use of AI complies with relevant laws and regulations.
- Conduct risk assessments: The relevant risks must be understood. AI is an emerging technology that goes by many names – such as machine learning and intelligent and autonomous systems – which means its definition in regulations and standards is vague and its risks are difficult to specify. A risk assessment framework will be needed to map high-risk activities and plan mitigation; a minimal sketch of such a framework appears after this list. The EU Ethics Guidelines for Trustworthy AI provide a list of questions to help companies define the risks associated with AI.
- Roles and responsibilities: Whilst companies are well versed in establishing an ethics & compliance program, few ethics & compliance officers will have the necessary understanding of AI. Therefore, new alliances will need to be forged between these professionals and their technology counterparts to agree on roles and responsibilities.
- Establish the baseline: The processes for trustworthy AI should be embedded in the company's management system. Policies and procedures need to be amended to convey the company's expectations for preventing AI solutions from having an adverse impact on human rights, and to help address any problems if they occur. A trustworthy AI ethics and compliance program will need to include a combination of non-technical and technical measures. Non-technical measures include initiatives to safeguard against discrimination and unconscious bias, whereas technical measures involve ensuring compliant algorithms, for example by testing models for biased outcomes (see the second sketch after this list).
- Drive company-wide awareness of ethics and AI: Companies need to educate their entire workforce on the societal, legal and ethical impacts of working alongside AI. The risks relating to AI should be explained, as well as any company policies for mitigating these risks. Training a multi-disciplinary workforce on trustworthy AI will require workshops on ethics and values, rather than a narrow focus on compliance guidelines. The Finnish initiative 'Elements of AI' provides a free online introductory course to demystify various aspects of AI.
- Monitor and control: Companies will be held accountable for their use, development and deployment of AI. Existing systems will need to be assessed and continuously improved, and new systems may be needed for producing and managing documentation that supports risk mitigation activities.
- Onboard third parties: Companies rarely develop AI-enabled products and services entirely on their own. They should obtain a reciprocal commitment from third parties involved in developing AI to ensure the technology is trustworthy and developed in accordance with the company's standards. Supplier audit programs will need to be expanded to include an evaluation of how suppliers address potential adverse impacts on human rights during the development of AI solutions.
- Create a speak-up culture: Finally, speak-up channels must be established so employees can raise concerns if they identify circumstances where AI systems may have an adverse impact on human rights. Some companies will also need to establish human rights grievance mechanisms in case harm occurs.
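To make the risk-assessment step more concrete, here is a minimal sketch of how such a framework could be encoded as a simple risk register. The screening questions, the scoring rule and the low/medium/high tiers are illustrative assumptions for this post, not the official EU assessment list; a real framework should be built from your own risk taxonomy.

```python
# A minimal risk-register sketch. The questions and tiering rule are
# illustrative assumptions, not the official EU assessment list.
from dataclasses import dataclass

# Screening questions; a True answer indicates elevated risk.
QUESTIONS = {
    "affects_people": "Does the system make or support decisions about people?",
    "personal_data": "Does the system process personal data?",
    "no_oversight": "Can the system act without human review?",
    "opaque_model": "Is the model's behaviour hard to explain?",
}

@dataclass
class Assessment:
    system_name: str
    answers: dict  # question key -> bool

def risk_tier(assessment: Assessment) -> str:
    """Map the count of elevated-risk answers to a coarse tier."""
    score = sum(bool(assessment.answers.get(q)) for q in QUESTIONS)
    if score >= 3:
        return "high"
    return "medium" if score >= 1 else "low"

# A hypothetical resume-screening system lands in the "high" tier,
# flagging it for mitigation planning and closer monitoring.
assessment = Assessment(
    system_name="resume-screening",
    answers={"affects_people": True, "personal_data": True,
             "no_oversight": False, "opaque_model": True},
)
print(assessment.system_name, "->", risk_tier(assessment))
```

Even a register this simple gives compliance officers and their technology counterparts a shared vocabulary: which systems exist, what makes them risky, and which ones need mitigation first.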
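And as one example of the technical measures mentioned under 'Establish the baseline', the sketch below compares a model's selection rates across groups and computes a disparate impact ratio. The binary predictions, the group labels and the 0.8 threshold (the 'four-fifths' rule of thumb from US employment guidance) are illustrative assumptions; production bias testing would cover more metrics and protected attributes.

```python
# A minimal bias-check sketch, assuming binary predictions and a single
# protected attribute. The 0.8 threshold is the common "four-fifths"
# rule of thumb, not a regulatory requirement.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += int(pred)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact_ratio(predictions, groups):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())

# Toy data: group "a" is selected at 0.75, group "b" at 0.25, so the
# ratio of 0.33 falls below the 0.8 threshold and triggers a warning.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
ratio = disparate_impact_ratio(preds, groups)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("warning: selection rates differ substantially across groups")
```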
The current regulatory landscape
Earlier this year, the EU Commission published its Ethics Guidelines for Trustworthy AI, which were accompanied by an assessment tool that is currently being piloted. The guidelines are likely to be followed by some form of EU regulation on ethics and AI during 2020. Ericsson has adopted these guidelines and is working towards addressing their requirements where relevant.
In the US, the Department of Commerce is working on a proposal to classify AI technologies that could become subject to export control, primarily to address security concerns. Meanwhile, the National Institute of Standards and Technology (NIST) has published a plan for developing technical standards and related tools for AI that includes standards for trustworthiness.
Finally, the Organization for Economic Co-operation and Development (OECD) has published its ‘Principles on AI’, promoting a global standard that aims to “foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values.”
In parallel with such governmental initiatives, several industry organizations have established working groups to define AI principles and publish sector-specific guidelines. The World Economic Forum's Responsible Use of Technology project, the Institute of Electrical and Electronics Engineers (IEEE) standards on ethically aligned design, and the International Telecommunication Union (ITU)'s backgrounder on Artificial Intelligence for Good are some examples of such activities.
Why businesses need to be proactive
We have now reached a global consensus on the principles for trustworthy AI, and regulators are currently drafting the requirements for achieving it. Naturally, companies will soon have to demonstrate how they implement those requirements in their day-to-day operations and products.
The trajectory of this work points towards a prevention, detection and response framework similar to those already in place for other ethics and compliance programs, such as anti-corruption and tax evasion prevention frameworks. However, few companies have yet addressed AI and ethics in their current compliance programs.
Trustworthiness is emerging as a dominant prerequisite for AI, and companies must take a proactive stance. If they don't, we face a risk of regulatory uncertainty or over-regulation that will impede the uptake of AI and, with it, societal growth. Start your journey to building trustworthy AI today.
Read more
To hear more insights about AI, visit the Ericsson podcast series where we explore the impact of AI on today’s telecom operations. Or for a more technical overview, I also recommend reading our Ericsson Research post on the technical challenges to integrating future network AI.
How could AI potentially impact human rights? In our blog post about responsible AI, we explore the ethical pitfalls and challenges of AI technologies.
This article was originally published on my LinkedIn page, which you can read here.