Generative AI is being increasingly integrated across organisations to enhance efficiency, personalisation, and operations, but concerns remain about potential biases, job impacts, burnout, and the need for ethical AI use.
This was the main conclusion from a Benefits Expert panel discussion on generative AI with Caroline Hopper, lead consultant at Quietroom; Josephina Smith, director of reward at British Airways; and Ella Vize, global head of learning and experience at Allbright.
The discussion focused on how AI can help HR through better personalisation, nudging, and faster analysis, while identifying key priorities and addressing the latest regulatory and ethical risks. It also emphasised human-AI collaboration as a route to greater efficiency and productivity.
Vize highlighted how AI is driving improvements in member experiences but noted mixed reactions to its adoption internally. She emphasised that leadership must ensure AI is used ethically, with responsible adoption serving to enhance service, market positioning, and client relationships.
Smith shared that British Airways is exploring AI opportunities for both customer and employee experiences, with cybersecurity as a priority. AI tools have improved efficiency, but concerns about job impacts remain. The company is in the early stages of fostering an AI culture, focusing on equipping employees with the right tools and aligning AI with strategic objectives.
Smith said: “From an AI perspective, using it in our operations, we’ve got a system that can virtually see anything on the terminal that’s also been powered by AI, which we’ve never had before. From an employee perspective, we’re just recently sort of exploring, particularly in HR, what sort of tools we would be using from an AI perspective. But one of the things we’ve realised is how efficient some of those tools can be in helping with productivity.”
Hopper explained that AI has transformed Quietroom's operations, from note-taking bots to pension data analysis. The firm now tests that its communications are accurately interpreted by AI tools such as ChatGPT and Google's Gemini, to prevent bad advice and improve user experiences.
She said: “We’re now making sure that whatever content we create, we test it using loads of different forms of AI to make sure that no matter what answer a user gets, no matter how it’s been interpreted by AI, it is something that we’re comfortable for them to see, and it’s not going to give them any bad advice or result in any bad outcomes.”
Vize emphasised the importance of understanding different levels of AI comfort within an organisation and engaging employees who feel disconnected from AI usage. She stressed that market forces are driving AI integration, with consumers expecting AI in digital platforms. She said HR must identify those at risk of being left behind and consider the impact of AI on skills, career progression, and wellbeing.
She also highlighted the risk of burnout among employees slower to adopt AI, and the need for HR to stay mindful of digital trends affecting the workforce.
Vize said: “HR teams have an essential understanding of who is at risk of getting left behind and then understanding the impact that this increase in AI-driven workforces is going to have on human resources. We know it’s going to impact skills and career progression, but there are other considerations around burnout. It’s really important to be considerate of these digital trends that we’re seeing.”
Smith noted the need for human intervention, particularly in decision-making, and for an ethical framework around AI. She observed that the framework in place in the UK is less robust than the EU's, leading to concerns about biases and potential claims.