
Understanding The Layers Of Trustworthy AI

As organizations get more familiar with the use (or misuse) of AI, they are beginning to realize just how many ways AI projects can go wrong. On the one hand, we have challenges around deepfakes and the malicious use of AI for deception. On the other, we have biased datasets, the potential misappropriation of copyrighted and protected data, and the potential for AI systems to do real harm. In the push to make AI systems more ethical and trustworthy, organizations are realizing the broad scope of considerations that fall under the umbrella term “Trustworthy AI.”

The Basis For Trustworthy AI

As detailed in a recent Cognilytica AI Today podcast on this topic, anyone looking to develop or make use of AI systems needs to maintain trust, provide visibility and transparency, provide greater oversight and accountability, and create more explainability and understanding of how AI systems operate.

This is because organizations are making greater use of AI across a wide range of applications that are increasingly mission critical. These AI systems can have huge impacts on people’s daily lives and livelihoods. Therefore, we need Trustworthy AI to keep an organization’s customers, employees, users, partners, stakeholders, shareholders, and the organization itself safe.

Likewise, people have significant fears and concerns about AI, and we need to address them to build and maintain trust. Organizations shouldn’t spend time, money, and resources building AI systems only to have people distrust or feel uncomfortable with them. That is a very expensive failure.

Lack of visibility into AI systems is another source of concern. People are often asked to blindly trust AI systems without knowing what goes into them or how they were created. This also results in limited disclosure and limited consent when using AI systems.

In addition, we have the real possibility of bad actors doing bad things enabled by the power of AI. Machines can malfunction or behave in ways that cause real harm, and people can use AI to cause harm that is just as significant, or greater. We can gain the upper hand on these issues by imposing limits, controls, safeguards, and guardrails; by monitoring, managing, and testing AI systems; and, of course, by keeping a human in the loop.

All of these different aspects of Trustworthy AI need to be considered in a holistic way to avoid a piecemeal approach to safeguarding people and systems.

What are the layers of Trustworthy AI?

Rather than considering each aspect of Trustworthy AI separately, we can treat them as different layers of Trustworthy AI that can be addressed in a comprehensive manner. In 2020, research firm Cognilytica evaluated and analyzed over 60 different frameworks for ethical, responsible, and trustworthy AI from a wide range of organizations, nations, and corporations. Many of the concepts and terms in those frameworks were confusing, contradictory, and at different levels of detail.

In response, Cognilytica put together a comprehensive approach to Trustworthy AI that treats its various aspects as distinct layers, so they can be addressed consistently without leaving any aspect uncovered.

The Comprehensive Trustworthy AI Framework addresses five main layers:

  • Ethical aspects of AI: Guidelines for AI systems to participate in society in a positive manner, including principles of doing no harm and respecting human values, providing positive benefit to humans, addressing issues of bias, diversity, and inclusivity, and ensuring human control, freedom, and agency.
  • Responsible use of AI: Deals with the potential for misuse or abuse of AI, including aspects of AI safety and privacy, trust, human accountability, and reducing the risk that AI systems are used in harmful, inappropriate, or malicious ways.
  • Systemic AI transparency: Helps increase trust in AI systems by providing visibility into overall system behavior, data and AI configuration, disclosure and user consent, bias visibility and potential mitigation of that bias, and the use of open systems.
  • AI governance: Focuses on implementing processes, controls, and safeguards for AI systems, including AI system audit and monitoring and third-party regulation and certification of systems.
  • Algorithmic explainability: Approaches to reducing the “black box” nature of AI systems by providing means to understand how machines arrive at their conclusions, including guidelines for algorithmic explainability, interpretability, and understanding.

While there are many competing approaches to trustworthy, ethical, and responsible AI, a comprehensive approach that provides guidance to the widest possible constituency will help to achieve the aims of Trustworthy AI.

What Are The Characteristics Of Trustworthy AI To Put Into Practice?

Making AI trustworthy is not just an exercise for a few individuals to put some thoughts on paper. It must translate into real-world implementation that cuts across the entire organization. Regardless of your approach to Trustworthy AI, the key is to make it practical and implementable. This is the only way to keep the trust of your users, customers, employees, and all stakeholders in the AI world we’re living in now.

The five main layers outlined above provide the characteristics and guidelines to follow when implementing a comprehensive Trustworthy AI framework.

Disclosure: I’m a co-host of the AI Today podcast and managing partner at Cognilytica.
