As businesses increasingly adopt AI, the need for thoughtful, strategic implementation becomes critical. One promising approach gaining traction is composite AI, which offers practical solutions to some of the most pressing challenges associated with large language models (LLMs). From reducing energy consumption to minimizing biases and misinformation, composite AI provides a more controlled, efficient, and reliable way to leverage AI in business.
This blog covers:
- What composite AI is and the problems it addresses
- How controlling, validating and tagging your content reduces biases and hallucinations
- How citations and chunk-level transparency make output validation simpler
- The ongoing costs and reviews you should factor into any AI ROI model
In short, composite AI helps create more robust, ethical, and efficient AI systems by combining different approaches to achieve better results while mitigating some of the risks and limitations that can come with using a single AI model.
I've shared my thoughts on ethical AI in my previous blog "Ethical AI - some thoughts about how we ensure our use of AI is positive in all senses".
In this blog, we’ll explore how composite AI helps businesses reduce risks, improve accuracy, and ensure quality, all while making AI applications more sustainable and ethical. Let's dive into how integrating these practices can make a real difference for your organization.
Composite AI is a practical approach to some of the most pressing issues with AI. Most notably, it can reduce the power consumption needed to deliver quality results; it reduces the number of prompts needed to reach the right answer, further cutting energy use; it provides a way to validate the quality of responses, reducing hallucinations and misinformation; and it offers an easily understood way to control biases.
As a business implementing AI, it’s vital that you consider how your chosen large language model (LLM) has been trained. Biases are likely inherent in every LLM, and even the most rigorous attempts to mitigate or remove them are likely to be only partially successful. This can lead to outputs that seem rational and plausible but do not, in fact, reflect reality. The closer these outputs are to what might be perceived as a ‘perfect’ response, the greater the danger, particularly as we grow used to relying on AI to generate content for us and become less and less inclined to test the output.
The risk is that ingrained biases amplify the divisions that have arguably affected many western societies in recent years, often driven by social media’s curation of our news feeds to reinforce what we already believe. If our aim as businesspeople is a predictable economy, strong rule of law and educated employees, then this destabilization must be seen as wholly counterproductive to our future success. As such, it is incumbent on us to ensure we do not add fuel to the fire.
There are published analyses that attempt to demonstrate whether individual LLMs have managed to expunge the most obvious biases, and controls are applied to prevent misuse. But as we implement AI in ever more novel ways, we’re likely to uncover unexpected, insidious biases or plainly incorrect pattern matching that could have profound implications.
So, what can we do? A process of continuous validation is key to minimizing the risk of biases and outright errors affecting your business and, by extension, broader society.
Some basic analysis of the quality of your chosen LLMs must be the starting point, but in reality you’ll only learn so much from the published data. Controlling the content you feed your AI is an important first step. Limiting AI to analyzing documents you know are factually accurate and which are fully under your control means your validation processes can highlight issues faster and more clearly. Allowing AI to include content from uncontrolled sources, such as the internet or third-party document libraries, makes this task nearly impossible.
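To make this concrete, here is a minimal sketch of what such a control might look like in code. The source names, document fields and `select_for_ingestion` function are illustrative assumptions, not a real product API:

```python
# Hypothetical sketch: only ingest documents from sources we control
# and have already validated, rejecting anything of unknown provenance.

APPROVED_SOURCES = {"contracts-repo", "policy-library", "product-docs"}  # assumed names

def select_for_ingestion(documents):
    """Yield only documents from the approved allowlist that have passed validation."""
    for doc in documents:
        if doc.get("source") in APPROVED_SOURCES and doc.get("validated", False):
            yield doc
        # Internet or third-party content is skipped entirely, so any issue
        # found later can be traced back to content you fully control.
```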
We can go further. The work we’ve done testing LLMs on data sets that we know are accurate and whose content we have validated has highlighted that this alone is not enough to avoid biases and hallucinations in the AI’s output. We must take additional steps.
The use of composite AI is gaining momentum. Making it work requires much more effort than simply pointing your chosen AI at your content, and it will therefore demand more time and more resources. With our Atlas product we use metadata tagging of content to apply a very specific meaning that helps remove ambiguity. Not only can we ensure that the meaning of a document is clearly telegraphed by its associated tags, we can also ensure that only gold-graded content is considered when the AI is grounded on the morass of duplicated and modified content that is the reality of every business’s data estate.
We can combine those tags with some very specific prompt engineering to ensure that the AI’s output is properly grounded in high-quality content whose meaning is not open to interpretation.
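As a rough sketch of how tag-filtered grounding might be wired together: the `retrieve` and `complete` interfaces, the `grade` filter and the prompt wording below are all assumptions for illustration, not Atlas’s actual implementation:

```python
# Hypothetical sketch: ground the model only on gold-graded, tagged chunks.

def build_grounded_prompt(question, chunks):
    """Assemble a prompt that instructs the model to answer strictly
    from the supplied, tag-filtered context."""
    context = "\n\n".join(
        f"[{c['doc_id']} | tags: {', '.join(c['tags'])}]\n{c['text']}" for c in chunks
    )
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so rather than guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

def ask(question, index, llm):
    # 'index.retrieve' and 'llm.complete' are assumed interfaces.
    chunks = index.retrieve(question, filters={"grade": "gold"}, top_k=5)
    return llm.complete(build_grounded_prompt(question, chunks)), chunks
```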
We have proven that this composite approach, when applied to curated content of known provenance and quality, can reduce hallucinations almost to zero, removes copyright concerns relating to the content being analyzed, and enables us to remove or reduce the biases inherent in our own content (the biases of the LLM will remain, but are of less concern given the controls our engineered prompts place around them). What is concerning is that, even when using known content, the results of using AI without these composite features are decidedly random. Throwing AI at content without such controls leaves you open to considerable risk, which might first affect your business but which could, multiplied across thousands of businesses, affect broader society.
To make validation of the AI's output as simple as possible, we ensure that a full list of citations accompanies every response, and we even enable interrogation of the chunks used in creating the output. This last point is important because chunking the input data in a RAG model can lead to unpredictable errors where some of the input's context is lost from the output. The famous 'how many Rs are there in strawberry' failure is a classic example of how a model working on fragments it never sees whole (tokens, in that case) can produce obvious errors; chunk boundaries can lose context in much the same way.
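One way to make that interrogation possible is to return the supporting chunks and citations alongside every answer. A hedged sketch, with assumed field names rather than a real API:

```python
# Hypothetical sketch: package the answer with full citations and the raw
# chunks that grounded it, so reviewers can inspect exactly what was used.

def respond_with_citations(answer_text, chunks):
    return {
        "answer": answer_text,
        "citations": [
            {"doc_id": c["doc_id"], "location": c.get("location")} for c in chunks
        ],
        # Keeping the verbatim chunk text lets a reviewer check whether a
        # chunk boundary dropped context that changed the answer's meaning.
        "chunks": [c["text"] for c in chunks],
    }
```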
So, we have an approach that can help protect your business from the most harmful effects of applying AI without prior thought or preparation. It requires effort to implement and, importantly, a program of reviews and checks must be in place to continually assess the output of your chosen AI. It is likely that the tags you’ve applied as part of this composite approach will need to evolve and the engineered prompts will need continual refinement.
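One lightweight way to operationalize that review program is a small regression suite of validated questions, re-run whenever tags or prompts change. The case data and the `ask_fn` contract here are hypothetical:

```python
# Hypothetical sketch: flag drift by re-running known-good questions after
# every change to tags or engineered prompts.

GOLDEN_CASES = [
    {"question": "What is our standard notice period?", "must_cite": "hr-policy-001"},
    # ...extend with cases covering bias- or hallucination-prone topics.
]

def run_regression(ask_fn):
    """`ask_fn` is assumed to return a dict with 'answer' and 'citations' keys."""
    failures = []
    for case in GOLDEN_CASES:
        result = ask_fn(case["question"])
        cited_ids = {c["doc_id"] for c in result["citations"]}
        if case["must_cite"] not in cited_ids:
            # The answer is no longer grounded where we expect; review needed.
            failures.append(case["question"])
    return failures
```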
These costs and skillsets need to be factored into any AI ROI model, and too often they aren’t. It’s so easy to engage with AI models such as ChatGPT and DeepSeek that a whole new form of shadow IT is emerging, one that can undermine your business’s culture, legal controls and ethical boundaries while eroding your carbon-reduction commitments. Limiting the use of AI may prove impossible when it’s so easy to cut and paste content into a web-based AI, but it is incumbent on us to ensure we minimize its impact if we’re to be both successful businesses and part of a successful and stable society.
In the next blog I’ll try to unravel some more of the environmental challenges of using AI and identify ways we can minimize its consequences.