At ClearPeople we are minding the GAPS (Governance, Accuracy, Prompt controls, Sustainability) in AI. In this post we discuss the value of AI governance and how Atlas Fuse makes this happen.
Why is AI governance critical?
Artificial Intelligence (AI) has become a tool that people use in both personal and business settings, much as search engines became the default way to find and understand information. This freedom to discover and locate useful information without requesting it from specialists or trained researchers, or accessing it via a subset of approved output, has increased the risk of incorrect guidance or advice being used and propagated more widely.
The purpose of any governance strategy is to provide principles that control appropriate use and minimize the risk of error.
Appropriate governance should never be a blocker and should always be considered an enabler to help people perform their tasks safely.
The purpose and criticality of AI governance is covered by many leading consultancies.
To successfully deliver AI governance two key touchpoints must be considered:
Knowledge Management (KM) and Data and Information strategies treat knowledge and data as strategic assets to inform decision making and improve organizational efficiency. Acknowledging the value of AI to provide knowledge to people means that the delivery of AI needs to become part of the KM and Data governance policies, processes and procedures.
To successfully deliver AI, it needs to be explainable. Users need AI to be transparent and understandable. AI governance that leverages existing strategies and frameworks to define the purpose, quality, consistency, relevancy, and recency of the content AI accesses makes it easier for users to understand how responses have been informed, improving their trust and confidence in the output.
For example, consider both permissions (who can access content) and data classification (how sensitive it is). When AI uses content, it is important to know both.
Without clearly defined governance around the content, how does a person know what can and cannot be shared with different groups?
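As a minimal sketch of the idea, the snippet below filters source documents by both group permissions and a sensitivity ceiling before any content reaches an AI prompt. The names, labels, and helper function are hypothetical illustrations, not Atlas Fuse's actual API.

```python
# Hypothetical sketch: gate source content on permission AND classification
# before it is handed to an AI model. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    sensitivity: str          # e.g. "public", "internal", "confidential"
    allowed_groups: set[str]  # groups permitted to read this document

# Ordered sensitivity levels, most open first (assumed scheme).
CLEARANCE = {"public": 0, "internal": 1, "confidential": 2}

def usable_sources(docs, user_groups, max_sensitivity):
    """Return only documents the user can access and that are classified
    at or below the sensitivity ceiling for the intended audience."""
    ceiling = CLEARANCE[max_sensitivity]
    return [
        d for d in docs
        if d.allowed_groups & user_groups           # permission check
        and CLEARANCE[d.sensitivity] <= ceiling     # classification check
    ]

docs = [
    Document("Pricing playbook", "confidential", {"sales"}),
    Document("Product overview", "public", {"sales", "everyone"}),
]

# A salesperson drafting an external reply should only draw on public content.
print([d.title for d in usable_sources(docs, {"sales"}, "public")])
# → ['Product overview']
```

The point of the sketch is that both checks run before generation: when the answer's audience changes (internal note versus external email), the sensitivity ceiling changes, and the set of sources the AI may use changes with it.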
To ensure a governance framework works, the processes people must comply with need to be easy to apply and considered part of their ways of working. People cannot simply be dictated to; engagement is critical.
To deliver governance that provides users with the assurances they need and the flexibility for them to engage, Atlas AI leverages the Azure OpenAI framework and adds two additional administration layers that manage the underlying data and AI responses to increase trust and remove obstacles to use.
This is part of the Atlas Fuse framework. Read more about Atlas Fuse here.
Atlas Fuse provides two layers of controls on top of Azure OpenAI to help deliver AI governance that is integrated into your Knowledge Management framework and form part of an overall Data and Information Governance strategy.
1. Central controls
These settings enable overall governance guidelines to be set across all uses of Atlas AI.
From a single interface it is possible to
2. Collection controls
Through the concept of AI Knowledge Collections, individual collection owners can apply more granular governance that empowers them to:
This approach ensures that governance is part of the process rather than a theoretical construct.
Empowering knowledge owners to decide what content is relevant to which queries increases uptake of, and compliance with, the governance strategy. The ability to engage with their users based on the feedback provided enables them to tailor the content and offer suitable training on how best to succeed.
AI is evolving fast and will continue to do so. Each industry sector will no doubt approach governance differently, but the fundamental need to understand what data is being used, and how, is the same for everyone. To reduce the risk of compliance, regulatory, security, or privacy infringements, understanding and transparency of what content was used to produce a response is critical.
Providing the ability to control what data is used for what purposes increases the level of AI governance maturity and decreases the risk of inappropriate use.
People want to “know” that they can trust the source content and the responses provided are not subject to fabrication or copyright implications.
“Good” AI governance comes from having the right tools to directly apply the guidance outlined in the governance document, so every user can use AI effectively without having to refer to the document itself.
Ultimately, validated sources, well-defined content, embedded governance tools and a clear framework will drive adoption and provide a safe secure AI working environment.