
Why AI governance is critical: Ensuring compliance and ethical use of AI


Grant Newton, Lead Consultant


At ClearPeople we are minding the GAPS (Governance, Accuracy, Prompt controls, Sustainability) in AI. In this post we discuss the value of AI governance and how Atlas Fuse makes this happen.

Why is AI governance critical?

Artificial Intelligence (AI) has become a tool that people use in both personal and business settings, much as search engines became the default way to find and understand information. This freedom to discover and locate useful information without having to request it from specialists or trained researchers, or to access it via a subset of approved output, has increased the risk of incorrect guidance or advice being used and propagated more widely.

The purpose of any governance strategy is to provide principles that control appropriate use and minimize the risk of error.

Appropriate governance should never be a blocker and should always be considered an enabler to help people perform their tasks safely.

The purpose and criticality of AI governance are covered by many leading consultancies. For example:

  • Gartner: AI governance platforms promote responsible AI by enabling organizations to manage and oversee the legal, ethical and operational performance of AI [1].
  • IBM Consulting: AI governance refers to the processes, standards and guardrails that help ensure AI systems and tools are safe and ethical [2].
  • Deloitte: Effective AI governance is crucial for supporting board engagement and effective oversight of AI [3], [4], [5].


How to deliver AI governance?

To successfully deliver AI governance, two key touchpoints must be considered:

  1. It will form part of any existing Data and Information Governance strategy. Great AI responses rely on well-defined content that is understood and accurate.
  2. Integrating AI into a Knowledge Management (KM) framework is essential to its delivery. AI queries and responses work best when they are targeted; as with any research, having the right set of information provides better outcomes.

Knowledge Management (KM) and Data and Information strategies treat knowledge and data as strategic assets to inform decision making and improve organizational efficiency. Acknowledging the value of AI to provide knowledge to people means that the delivery of AI needs to become part of the KM and Data governance policies, processes and procedures.

To be delivered successfully, AI needs to be explainable: users need it to be transparent and understandable. An AI governance approach that leverages existing strategies and frameworks, defining the purpose, quality, consistency, relevancy, and recency of the content that AI accesses, makes it easier for users to understand how responses have been informed and improves their trust and confidence in the output.

For example, consider both permissions (access) and data classification (sensitivity). When AI uses content, it is important to know:

  • Who has access to the source content? Ideally everyone who is using a set of content should have the same access to the underlying content so that they get the same responses. 
  • What is the sensitivity of the content? If the content being used has the same sensitivity labels associated, then the decision-making process of how to use and share the output provided by AI is easier.

In the above examples, how does a person know what can and cannot be shared with different groups without clearly defined governance around the content?
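The two checks above, permissions (access) and data classification (sensitivity), can be sketched as a simple pre-filter that is applied before any content reaches the AI. All names, labels, and the ordering of sensitivity levels below are hypothetical illustrations, not part of the Atlas Fuse product:

```python
# Hypothetical sketch: only ground AI responses in documents the asking
# user can access, at or below an allowed sensitivity level.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    allowed_users: set   # who has access to the source content
    sensitivity: str     # e.g. "public", "internal", "confidential"

def grounding_set(docs, user, max_sensitivity,
                  order=("public", "internal", "confidential")):
    """Keep only documents the user can access, at or below the allowed sensitivity."""
    rank = {label: i for i, label in enumerate(order)}
    return [
        d for d in docs
        if user in d.allowed_users and rank[d.sensitivity] <= rank[max_sensitivity]
    ]

docs = [
    Document("Pricing policy", {"alice", "bob"}, "internal"),
    Document("Board minutes", {"alice"}, "confidential"),
    Document("Product FAQ", {"alice", "bob"}, "public"),
]

# Bob only sees content he is permitted to access, up to "internal".
print([d.title for d in grounding_set(docs, "bob", "internal")])
# -> ['Pricing policy', 'Product FAQ']
```

Because everyone querying the same filtered set sees the same underlying content, they get consistent responses, and the sensitivity label of the sources tells them how the output may be shared.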

To ensure a governance framework works, the processes that people must comply with need to be easy to apply and considered part of their ways of working. People cannot simply be dictated to; engagement is critical.

To deliver governance that provides users with the assurances they need and the flexibility to engage, Atlas AI leverages the Azure OpenAI framework and adds two additional administration layers that manage the underlying data and AI responses, increasing trust and removing obstacles to use.

This is part of the Atlas Fuse framework. Read more about Atlas Fuse here.

How does Atlas Fuse ensure compliance and ethical use of AI?

Atlas Fuse provides two layers of controls on top of Azure OpenAI to help deliver AI governance that is integrated into your Knowledge Management framework and forms part of an overall Data and Information Governance strategy.

1. Central controls

These settings enable overall governance guidelines to be set across all uses of Atlas AI. From a single interface it is possible to:

  • See all collections that have been created, what controls have been applied and who has been provided access.
  • Set ethical controls on how the AI responds, a system prompt that can be tailored for the organization.
  • Define what LLMs are available to collection owners.
  • Create grounding prompts, to provide
    • Strict control that responses can only use the content supplied, or
    • Non-strict control that allows the response to use content available from the LLM.
  • Create a response prompt to define the basic construct of how to format the response, such as always providing citations.
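To illustrate how these centrally set layers could fit together, the sketch below combines an ethical (system) prompt, a strict or non-strict grounding prompt, and a response prompt into one instruction. The function name, wording, and structure are illustrative assumptions, not Atlas Fuse's actual implementation:

```python
# Hypothetical sketch of layering centrally managed prompts.
# Everything here (names, wording) is illustrative only.
def build_system_prompt(ethical_prompt, response_prompt, strict_grounding, context):
    """Combine the central prompt layers into a single system message."""
    if strict_grounding:
        # Strict control: responses may only use the content supplied.
        grounding = ("Answer ONLY from the supplied context. "
                     "If the answer is not in the context, say so.")
    else:
        # Non-strict control: the model's own knowledge is allowed as fallback.
        grounding = ("Prefer the supplied context, but you may fall back "
                     "on the model's own knowledge.")
    return "\n\n".join([ethical_prompt, grounding, response_prompt,
                        f"Context:\n{context}"])

prompt = build_system_prompt(
    ethical_prompt="You are a helpful assistant for the organization. Decline harmful requests.",
    response_prompt="Always provide citations for the sources used.",
    strict_grounding=True,
    context="Q3 travel policy: economy class for flights under 6 hours.",
)
print(prompt)
```

The value of central controls in this shape is that collection owners never edit these layers directly; they inherit them, which keeps responses consistent and auditable across the organization.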

2. Democratized controls

Through the concept of AI Knowledge Collections, individual collection owners can apply more granular governance that empowers them to:

  • Define collections so that people understand their purpose and ensure they contain well-governed, relevant content of the same nature.
  • Select which data from the available sources the collection will contain, based on its defined purpose.
  • Maintain the collection so that its content remains current.
  • Apply prompt instructions specific to the collection, in addition to the centrally provided prompts, so that responses are consistent in style without users having to understand complex prompting.
  • Control who can access which collections.
  • Monitor reporting analytics based on direct user feedback to assess, per collection, where improvements may be required.
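The responsibilities listed above can be pictured as the fields of a collection definition that its owner maintains. This is a sketch only; the field names, the source path, and the access check are hypothetical, not the Atlas Fuse configuration model:

```python
# Hypothetical sketch of an AI Knowledge Collection definition as a
# collection owner might configure it. All field names are illustrative.
from dataclasses import dataclass

@dataclass
class KnowledgeCollection:
    name: str
    purpose: str            # so people understand why the collection exists
    sources: list           # which data from available sources it contains
    allowed_groups: list    # who can access this collection
    extra_prompt: str = ""  # applied on top of the central prompts
    last_reviewed: str = "" # supports keeping the content current

    def can_query(self, user_groups):
        """A user may query the collection if they share at least one group."""
        return bool(set(self.allowed_groups) & set(user_groups))

hr_policies = KnowledgeCollection(
    name="HR Policies",
    purpose="Answer employee questions about current HR policy only",
    sources=["sharepoint:/sites/hr/policies"],  # hypothetical source path
    allowed_groups=["all-staff"],
    extra_prompt="Answer in plain language and cite the policy section.",
    last_reviewed="2024-06-01",
)
print(hr_policies.can_query(["all-staff", "engineering"]))  # True
```

Keeping purpose, sources, access, and review date together in one place is what lets governance be checked and reported on per collection, rather than living only in a policy document.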

This approach ensures that governance is part of the process rather than a theoretical construct.

Empowering knowledge owners to decide what content is relevant to which queries increases uptake of, and compliance with, the governance strategy. Giving them the ability to engage with their users based on the feedback provided enables them to tailor the content and deliver suitable training on how best to succeed.

Atlas Fuse: The simplest, fastest route to precise AI responses for every team and department.

What does “good” AI governance look like and how to achieve it?

AI is evolving fast and will continue to do so. Each industry sector will no doubt approach governance in different ways, but the fundamental need to understand what data is being used, and how, is the same for everyone. To reduce the risk of compliance, regulatory, security or privacy infringements, understanding and transparency of what content was used to provide a response is critical.

Providing the ability to control what data is used for what purposes increases the level of AI governance maturity and decreases the risk of inappropriate use.

People want to know that they can trust the source content and that the responses provided are not subject to fabrication or copyright implications.

“Good” AI governance comes from having the right tools available to apply the guidance outlined in the governance documentation directly, so that every user can use AI effectively without having to refer back to a document.

Ultimately, validated sources, well-defined content, embedded governance tools and a clear framework will drive adoption and provide a safe, secure AI working environment.


Best Practices for Ensuring Data Quality & Relevancy for AI

Accurate, secure enterprise AI starts with data and people
