
Five tips to boost your AI-readiness


Who knows with any certainty how to be AI-ready? A packed session at APM’s Women in Project Management Conference delivered some great advice on this subject. Sarah Boutle, Deputy Director, Data and Insight, at the National Infrastructure and Service Transformation Authority (NISTA), and Nermeen Latif, Technical Director at WSP, took centre stage. Here are five highlights from their session:

1. Confidence and trust

We all use elements of AI throughout our days, from ChatGPT and Alexa to Google Maps. A recent report from the Tony Blair Institute found that over half of adults in the UK use generative AI. Compared to previous transformational technologies, like the internet or PCs, take-up of AI has been much faster, said Boutle. However, the take-up at work is much smaller – around 20% of people use generative AI weekly in their work. According to the research, trusting AI and using it go hand in hand.

“People who trust generative AI use it, and people who use generative AI trust it,” said Boutle. This sense of confidence builds a cycle of use and trust that can be a really useful thing, she added.

2. How to get started

How do you build trust in something you don’t know? You get to know it and understand it, was Boutle’s simple answer. Find a podcast that tells you something about what large language models (LLMs) do. The other thing you can do is create a bit of time and space for yourself to play with AI and practise it.

Boutle said that NISTA is often asked for examples of how AI will be used in project delivery, “but the only person who is going to know the best way to use AI in your own job is you,” she said. Other people in similar jobs have tried lots of things out, so learn from them, get talking and build networks, she suggested.

3. Try using the RACE framework

When writing an AI prompt, try using the RACE framework, which stands for: 

  • role (tell the AI what you do, e.g. ‘I am a risk analyst on a construction project’)
  • action (what do you want the LLM to do? e.g. ‘Identify the top five risks for the project’)
  • context (give some details of the project, e.g. ‘It’s a £200m road upgrade in a city’)
  • expectations (tell the AI what output you want, e.g. ‘Tell me in 150 words what the answer is’).

You can tell the AI additional things, said Latif, including not to make things up if it doesn’t know the answer, and to cite its sources. The framework gives plain and clear instructions for the LLM to follow.
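For readers who build prompts in code rather than typing them by hand, the RACE framework above can be sketched as a simple template function. This is an illustrative sketch only: the function name and the project details are invented for the example, and the closing instruction reflects Latif’s advice about not making things up and citing sources.

```python
# Hypothetical sketch of the RACE framework as a prompt template.
# The function name and project details are invented for illustration.

def race_prompt(role: str, action: str, context: str, expectations: str) -> str:
    """Combine the four RACE parts into one plain, clear prompt."""
    return (
        f"Role: You are a {role}.\n"
        f"Action: {action}\n"
        f"Context: {context}\n"
        f"Expectations: {expectations}\n"
        "If you do not know the answer, say so rather than inventing one, "
        "and cite your sources."
    )

prompt = race_prompt(
    role="risk analyst on a construction project",
    action="Identify the top five risks for the project.",
    context="It's a £200m road upgrade in a city.",
    expectations="Tell me in 150 words what the answer is.",
)
print(prompt)
```

The resulting text can be pasted into any chat-based LLM; the point of the structure is that each of the four parts is stated explicitly rather than left for the model to infer.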

4. What is agentic AI and how do you use it?

Agentic AI is a tool that can act on your behalf – you can think of it as a digital assistant, explained Latif. It can execute a number of tasks in sequence, so you could give it a workflow for something simple and repetitive. It combines the art of prompting and then the automation of tasks.

In project management, this could mean helping you with reporting, grabbing data and reminding people to give you updates. It could help you analyse risks and update schedules. But to make you more effective in your job, you need to ask yourself: what’s the task? What do you need from an AI? What would really help you in your daily work?
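The idea of giving an agent a simple, repetitive workflow to execute in sequence can be illustrated with a toy example. This is not any particular agent product’s API: the task functions, their names, and the data below are all invented placeholders, and a real agentic tool would call an LLM and external systems rather than hard-coded functions.

```python
# Illustrative toy "agentic" workflow: a sequence of simple project
# tasks executed in order, each passing its results to the next.
# All task functions and data here are invented for the example.

def gather_updates(state: dict) -> dict:
    """Collect status updates (a real agent might pull these from a tool)."""
    state["updates"] = ["Design review complete", "Groundworks 80% done"]
    return state

def draft_report(state: dict) -> dict:
    """Turn the collected updates into a first-draft report."""
    state["report"] = "Weekly report:\n" + "\n".join(
        f"- {u}" for u in state["updates"]
    )
    return state

def list_reminders(state: dict) -> dict:
    """Note who still needs chasing for their updates."""
    state["reminders"] = ["Remind finance team to submit cost data"]
    return state

WORKFLOW = [gather_updates, draft_report, list_reminders]

def run_workflow(tasks: list) -> dict:
    state: dict = {}
    for task in tasks:  # execute each task in sequence, carrying state forward
        state = task(state)
    return state

result = run_workflow(WORKFLOW)
print(result["report"])
```

As the session stressed, the output of a workflow like this is a first draft to be reviewed, not a finished deliverable.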

You can try creating your own ChatGPT agent, said Latif, who showed the room how she had done this herself.

“This is a great first draft, but you wouldn’t want to hand over your job completely to AI,” she said. It had given her the skeleton for her piece of work.

“As project managers, we need to do the work and review it and ask: is that any good? Do I agree with it? Do I need to ask it some extra questions?”

Examples of how the audience had experimented with AI included checking project schedules, writing resourcing profiles and monthly updates, and drafting first-pass explanations of a new project for a stakeholder.

5. Data and security

We must only use data that we're allowed to use, the audience was reminded.

“We must only use AI that’s approved for projects or our own companies,” said Latif. “Within my organisation, we have an AI working group and we have a responsible AI use framework.” There are general guidelines around data, checking with clients about what’s OK, and not automating decision-making, she said: “You could give some guidance, but you must be checking it.”

What does the future hold? Latif believes that AI is going to evolve.

“People very much know what the tools are and are experimenting seriously, but I don't think we’ve actually deployed it at scale.” 
