HR has been warned not to put too much faith in the answers generative AI offers up, but instead to question it, argue with it and demand more from it.
“AI lies convincingly,” warned Michael Wooldridge, professor of computer science at the University of Oxford and director of foundational AI research at The Alan Turing Institute, in his keynote speech at the CIPD Annual Conference in Manchester.
An AI researcher for more than 30 years, Wooldridge asked a generative AI tool to tell him about himself, and it came back saying he had studied at the University of Cambridge. Wooldridge did not study at Cambridge and has never had any affiliation with it.
False answers
AI generated this false answer because of the way it works, he said, likening it to “auto complete”.
“It’s not designed to tell you the truth, it’s designed to tell you the most plausible thing.
“It’s read, probably, thousands of biographies of Oxford professors, and studying at Oxford or Cambridge is very, very common. So in the absence of any information, it’s filling in the gap with what it thinks is the most plausible answer. And it’s very plausible, if you read that you wouldn’t have batted an eyelid. They lie convincingly a lot. They don’t know what the truth is, but the fact that they lie plausibly makes those lies particularly problematic.”
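Wooldridge’s “auto complete” analogy can be illustrated with a toy next-word predictor: given many example sentences, it simply returns the statistically most common continuation. The miniature corpus and function below are hypothetical, chosen only to show why the most plausible answer need not be the true one for any particular person.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "thousands of biographies of Oxford
# professors" the model has read. These sentences are invented for illustration.
corpus = [
    "professor studied at Oxford",
    "professor studied at Cambridge",
    "professor studied at Oxford",
    "professor studied at Oxford",
]

# Build bigram counts: for each word, tally which word follows it.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

def most_plausible_next(word):
    """Return the most frequent continuation seen in training -- plausibility, not truth."""
    return follows[word].most_common(1)[0][0]

# The "model" fills the gap with the most common continuation in its data,
# regardless of whether it is true for the individual being described.
print(most_plausible_next("at"))  # -> Oxford
```

The point of the sketch is that nothing in the prediction step consults facts about the world: the output is driven entirely by frequency in the training data, which is exactly why a confident, plausible answer can still be false.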
Critical HR skill
Learning to question and argue with generative AI is a critical skill for HR as this powerful technology is “capable of remarkable things”, Wooldridge said.
The technology to sift through job applications, shortlist candidates and even decide which employees to make redundant is already here.
But Wooldridge said: “You shouldn’t trust AI, you should learn to question it. You should learn to demand more. Just because the computer says, ‘no, we shouldn’t shortlist this’, you need more than that.”
Auto complete, not a super brain
He cited the example of Durham police force, which introduced an AI program called the Harm Assessment Risk Tool (HART). It advises custody officers on whether or not to detain someone in a cell overnight.
“It was trained on previous cases where people have been brought in and whether they’ve been released and something bad has happened, or whether things have been okay. Basically, you keep somebody in a cell if you think they’re going to harm themselves or somebody else, or they could reoffend.
“This particular program was carried out with due diligence, but my worry about it is that in 10 years’ time, people don’t question it. So [the question would be] ‘do we keep them in the cell? What does the AI say?’
“You’ve got to think of a reason to argue with it and we find that tiring. It’s difficult for us to do.
“I think probably the single most important skill is not treating the AI as if it is some kind of super brain that’s guaranteed to give you the right answers, because it’s not. It’s just doing an auto complete. Being prepared to question what it gives you is incredibly important.”
Productivity sweet spot
The technology’s sweet spot is creating text, which has big implications for workforce productivity.
Enormous numbers of people are employed in the global workforce doing routine intellectual activities, he explained.
“People, for example, whose role is basically to take two documents, collate them into a third document, and then pass that to somebody else who takes two documents and collates them into a third document. The UK government employs hundreds of thousands of people in roles like that.”
AI is really good at summarising, collating, translating and extracting key points, which could help with the UK’s productivity problem, he said.
“The UK government is very excited about the possibility that generative AI might be of assistance.”
AI can also assist with creativity: ask it for five advertising slogans, for example, and it will produce them. The results might not be great, but you can ask for another five, and another five, adding requests such as emphasising the sweetness of a drink or the healthy qualities of a product, and it will do it.
‘Weird beasts’
By questioning and arguing you can tease better answers from AI, something that has caused the role of ‘prompt engineers’ to “blow up in the last 18 months”, he said.
“It turns out that these weird beasts [AI] are very sensitive to the way that you frame questions. You give it a problem, and just by saying, ‘I would like you to think about this very carefully’, it can come out with a better answer. Why should that be? Honestly, we don’t really know, but being able to formulate the questions in the right way, to formulate our problems in the right way, turns out to be really quite an important skill.”
But Wooldridge said: “I’m not a super pessimist around employment. For most of us, we’re going to find AI is just another tool that we use in our working lives, like we use web browsers or computers, but we are starting to see the effects of generative AI.”