
When AI sounds confident — but gets it wrong: A case study in project delivery


We’re living through a real Artificial Intelligence (AI) moment. Tools that can draft reports, summarise meetings and even write code are now part of everyday project delivery. But as much as AI can help us move faster, it can also trip us up — sometimes in spectacular fashion.

What happened and why it matters

Recently, a well-known professional services firm learned this the hard way. They used generative AI to help produce a client report. The AI’s output was polished, confident and sounded exactly like what you’d expect from a top-tier consultancy. There was just one problem: some of the references and sources it cited didn’t actually exist. Because the content “sounded right”, it wasn’t thoroughly checked before being sent to the client.

The fallout? Embarrassment, a costly refund and a dent in the firm’s reputation. It’s a classic example of what can happen when we trust technology a little too much.

This isn’t just a story about one company’s mistake. It’s a wake-up call for anyone using AI in their projects. AI is great at sounding sure of itself — even when it’s making things up. If we take its output at face value, we risk passing along errors, damaging trust, and, in some cases, facing real financial consequences.

So, what can we learn from this? Here are some practical takeaways for anyone working with AI in project delivery:

Key lessons from the case

1. Human oversight is non-negotiable

AI can help us work faster and smarter, but it’s not a replacement for human judgment. Always treat AI-generated content as a draft. Before anything goes out the door, give it a critical review — especially if it’s going to a client or stakeholder. We know time pressures and resource constraints can make thorough reviews challenging, but skipping them can lead to bigger issues later. Building in quick peer checks or allocating review time upfront helps counteract this.

2. Build AI know-how across your team

It’s not enough for just one person to understand how AI works (and where it can go wrong). Make sure your whole team gets some basic training. The more people know about AI’s strengths and weaknesses, the better equipped you’ll be to spot mistakes or “hallucinations” (when AI makes up facts).

3. Double down on quality checks

Don’t let AI outputs skip your usual quality assurance steps. In fact, consider adding extra layers of review for anything AI-assisted. Get subject matter experts (SMEs) or senior team members to sign off on key deliverables, just as you would have done before AI was introduced. SMEs play a critical role in identifying subtle inaccuracies or “false positives”, where content looks correct but is wrong. Assurance checks should combine human and AI capabilities:

  • SME review: Experts validate technical accuracy and context.
  • AI-assisted verification: Use AI to cross-check its own citations and flag unverifiable references before human review.
  • Always verify sources: Check metadata (author, publication date and link to source) and confirm that references actually exist; a minimal automated first pass is sketched below.
  • Contextual accuracy: Ensure outputs align with project requirements, not just generic statements.

Best practice: Always instruct AI to cite its sources and, where possible, to avoid producing a response if it cannot verify the information confidently. If something looks too good to be true, it might be.
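
To make the source-verification step concrete, here is a minimal sketch in Python of an automated first pass that flags citations whose links don’t resolve. The citation list and its field names are hypothetical, invented purely for illustration, and a link that resolves is not proof the source supports the claim — only that it exists.

    # A minimal sketch of an automated citation check (illustrative only).
    # Assumption: citations have been extracted from an AI-drafted report
    # into a simple list of dictionaries; this schema is hypothetical.
    import urllib.request

    def reference_resolves(url: str, timeout: float = 10.0) -> bool:
        """First-pass check: does the cited link actually resolve?"""
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "citation-checker/0.1"}
        )
        try:
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.status < 400  # 4xx/5xx responses raise HTTPError anyway
        except (OSError, ValueError):  # URLError and timeouts are OSError subclasses
            return False

    citations = [
        {"author": "Smith, J.", "date": "2023", "url": "https://example.com/report"},
    ]

    for c in citations:
        status = "resolves" if reference_resolves(c["url"]) else "FLAG for SME review"
        print(f"{c['author']} ({c['date']}): {status}")

A check like this belongs before SME review, so experts spend their time confirming that real sources say what the report claims, rather than chasing dead links.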

4. Keep up with best practices

AI is evolving fast, and so are the standards around using it responsibly. Stay plugged into professional bodies, industry blogs and training opportunities. The more you know, the less likely you are to get caught out by a new pitfall.

Make AI governance a habit, not a one-off task. Treat it as a continuous and progressive process. Regularly review how AI is being used in your projects, update guidelines and refresh team knowledge. Governance should be embedded into everyday workflows, not something done once and forgotten.

Moving forward: Professionalism in the digital age

AI is a powerful tool, but it’s not magic, and it’s definitely not infallible. The real value comes when we combine what AI can do with our own experience, judgment and common sense. By learning from real-world slip-ups (like the one above), we can make sure our projects stay on track, and our reputations stay intact.

So next time you’re tempted to copy-paste that AI-generated paragraph, take a moment to double-check. A little scepticism goes a long way.

 
