When considering Artificial Intelligence (AI), I think we can all agree that it fosters a wide range of discussions - from the deeply philosophical, through science fiction, to the highly technical. It is a somewhat divisive topic, with a huge fanbase and an equally large number of concerned parties. Most, however, meet in the middle on a pragmatic stance: if we are to have AI - and we probably cannot avoid it even if we wanted to - then we need to wrap some governance, protective measures and controls around it.
Aside from all that, however, one aspect that has generally been overlooked is the rise of what I would call "Shadow AI" in organisations.
We have for years - in Enterprise IT - been talking about Shadow IT, meaning all the technology (apps and systems) used without the IT department's knowledge, consent or control. Typical examples include government officials messaging each other on WhatsApp, development teams signing up for a collaboration tool such as Slack without any IT governance in place, or employees in highly regulated organisations who forward confidential documentation to their own private Hotmail or Gmail accounts - because it is easier than accessing the internal systems while on holiday.
These Shadow IT scenarios - and many, many more - are what keep Chief Information Security Officers (CISOs) and IT Directors up at night.
I believe another problem has arrived - and has actually been here for a while - which is Shadow Artificial Intelligence, or Shadow AI.
You could define Shadow AI as AI solutions used within an organisation without the IT department's knowledge or control.
For several years already, savvy students and employees have been using technology to help them work faster (or perhaps to get a paper written by a deadline). Apps such as Jasper.AI, Rytr, HyperWrite and others have been around for a while, with both free and professional/premium offerings.
It is highly plausible that every organisation will see a rise in the use of "Shadow AI".
What should concern the CISO is this: if a consultant, advisor or anyone else representing your organisation uses one of these tools to generate content as part of their work, how are you, as an organisation, in control of the quality and integrity of that content - or of the legal exposure it may create?
The bottom line is that organisations will need to take advantage of AI rapidly and effectively, but without sacrificing quality and integrity or risking litigation.
We therefore expect that forward-thinking CISOs and IT Directors will want to provision "approved AI": a sanctioned capability with adequate levels of governance to mitigate a range of concerns and risks, while still allowing end-users to leverage the undeniable power of AI to streamline routine tasks.
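To make "approved AI" concrete, here is a minimal illustrative sketch in Python of what such a governed gateway might look like: every prompt passes through a single organisationally controlled service that checks an allow-list, writes an audit record (who, where, what) and only then calls the organisation's own Azure OpenAI deployment. This is not ClearPeople's implementation - the allow-list, audit file and deployment names are placeholder assumptions, and the calls follow the pre-1.0 openai Python SDK as it existed at the time of writing.

```python
# Hypothetical "approved AI" gateway: one governed door to the model,
# instead of a dozen unsanctioned consumer apps.
import datetime
import json
import openai

# Point the SDK at the organisation's own Azure OpenAI resource (placeholders).
openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = "YOUR-AZURE-OPENAI-KEY"

APPROVED_USERS = {"a.jones@example.com", "b.smith@example.com"}  # hypothetical allow-list
AUDIT_LOG = "ai_usage_audit.jsonl"                               # hypothetical audit store


def approved_ai_request(user: str, workspace: str, prompt: str) -> str:
    """Run a prompt through the organisation's governed AI endpoint."""
    # 1. Policy: only authorised users may reach the AI service at all.
    if user not in APPROVED_USERS:
        raise PermissionError(f"{user} is not authorised to use the AI service")

    # 2. Audit: record who used AI, where, and with what, before the call is made.
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps({
            "timestamp": datetime.datetime.utcnow().isoformat(),
            "user": user,
            "workspace": workspace,
            "prompt": prompt,
        }) + "\n")

    # 3. Call the organisation's own Azure OpenAI deployment, not a consumer app.
    response = openai.ChatCompletion.create(
        engine="YOUR-DEPLOYMENT-NAME",  # the Azure deployment name of the chat model
        messages=[{"role": "user", "content": prompt}],
    )

    # 4. Return the output as a *draft*: publication still goes through the
    #    organisation's normal review and approval process.
    return response["choices"][0]["message"]["content"]
```

The point is not the specific code but the pattern: AI usage flows through one governed, auditable endpoint that the organisation owns, which is what turns Shadow AI into approved AI.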
On March 30, 2023, ClearPeople released a new feature (codenamed "Prometheus") to select Atlas customers to address the problem of Shadow AI. The feature will be generally available in the coming months and will allow end-users to engage with an organisationally controlled and "context aware" integration with Microsoft Azure OpenAI ChatGPT. This capability will allow the organisation to put policies and governance in place around the use of AI, while giving end-users the freedom and power to be creative, innovative and more productive together with AI.
Organisations can now use AI while knowing who is using it, where it is used and how it is used - and can govern the process of authorising curated content before it is published under the company's logo.
The feature is in a very limited preview programme - but contact us if you are interested in knowing more.
Read my other recent articles on ChatGPT, Microsoft Business Chat and Microsoft Copilot.