CMOtech UK - Technology news for CMOs & marketing decision-makers
Philip Miller

Retrieval-augmented generation can manage expectations of AI

Tue, 18th Nov 2025

AI adoption is accelerating across the economy, with 39% of UK organisations already using the technology. Across industries – from finance and healthcare to manufacturing and retail – AI is being integrated to drive efficiencies at scale. The debate is no longer whether to adopt it, but how quickly – and where.

Yet as implementation rises, so do expectations. Many assume AI should deliver flawless outputs every time – a standard we rarely apply to people. This double standard is damaging trust, slowing adoption and holding back innovation.

So how can organisations rethink how they use AI? This starts with focusing on small use cases, continually testing and avoiding overdependence on any single system. Retrieval-augmented generation (RAG) can add another layer of reassurance, grounding responses in verifiable data and producing outputs that are both relevant and trustworthy.

Changing perspectives

As AI becomes increasingly integrated into day-to-day operations, tools like RAG are vital for accuracy. Yet equally important is changing how we use the technology. When another employee makes a mistake, we see it as a vital part of the learning process. When AI delivers an imperfect answer, the majority assume the technology isn't ready for wider deployment. However, these errors aren't bugs in the system; they're an expected trade-off of models that work in probabilities. Expecting flawless performance is like hiring a new employee and expecting their work to be perfect every time.

Organisations need to stop thinking in binary terms – that AI must be either perfectly right or completely wrong. Instead, the focus should be on how the technology is used, the safeguards we put in place and how it combines with human insight. AI is an agile technology. These models can fail, learn and improve in days or even minutes, far faster than human learning cycles. Ultimately, our approach towards deploying AI should be equally flexible.

Organisations that pursue a multi-year, top-down transformation plan risk waiting for a 'perfect' version of AI that may never arrive. Instead, they need short-term, incremental projects that deliver value quickly, before scaling from there.

Responsible AI in practice

Adopting AI responsibly requires translating this mindset into concrete, manageable actions that deliver results. That approach should also be built on trust and a wider human-centric way of working.

While every organisation's journey is unique, there are a number of ways to accelerate adoption without compromising on accuracy or ethics. Focusing on achievable goals is key. By targeting use cases that can be delivered in weeks or months, organisations can generate wins early on that demonstrate tangible value and build confidence in the technology.

AI models are inherently imperfect, so each mistake should be treated as an important learning opportunity. Analysing errors, refining prompts or experimenting with different models are all crucial to improving performance over time. Small adjustments allow teams to continuously enhance results while keeping projects manageable.

Once initial use cases deliver tangible benefits, adoption can expand gradually across the wider organisation. Maintaining oversight and governance ensures outputs remain accurate, relevant and aligned with ethical standards, allowing organisations to scale AI effectively while minimising risk.

Building trust through RAG

One of the most effective ways to improve reliability is through RAG. Within a RAG framework, AI systems access relevant, up-to-date information from a variety of sources before generating a response. This ensures outputs are anchored in verified, contextually accurate data rather than relying solely on potentially outdated or incomplete patterns learned during training.
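To make the retrieve-then-generate flow concrete, here is a minimal, illustrative sketch in Python. It is not any vendor's API: documents are ranked by simple keyword overlap and stitched into a grounded prompt that a generative model would then answer from. Every name and document here is hypothetical; a production system would use a proper retriever (e.g. vector search) and a real model call.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share, highest first.

    A stand-in for real retrieval (embeddings, vector search, etc.).
    """
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_grounded_prompt(query: str, documents: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from sources."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, documents))
    return (
        "Answer using only the sources below; "
        "say so if they are insufficient.\n"
        f"Sources:\n{context}\n"
        f"Question: {query}"
    )


# Hypothetical internal knowledge base.
knowledge_base = [
    "Q3 revenue grew 12% year on year, driven by retail partnerships.",
    "The onboarding policy was updated in March 2025.",
    "Headcount in the Manchester office is 140.",
]

prompt = build_grounded_prompt("How did Q3 revenue change?", knowledge_base)
```

The design point is the separation of concerns: retrieval decides *which* verified material the model sees, and the prompt instructs the model to stay within it – which is what anchors outputs in current data rather than training-time patterns alone.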

By connecting AI to data in the right way, organisations can reduce hallucinations, deliver context-aware answers, and increase stakeholder confidence; all critical steps for responsible adoption at scale. Embedding a culture of careful, iterative AI use complements RAG, creating a continuous feedback loop that further strengthens trust and ensures insights are actionable and reliable across the organisation.

Final thoughts

Every organisation operating within the AI era faces the same challenges when trusting the technology. What separates success from failure is the ability to anticipate these errors, design ways of working that identify them quickly and adapt accordingly. AI is neither infallible nor magical, but it is a great resource. Organisations that balance ambition with realism will be the ones that succeed.
