The adoption of AI by pension schemes is “already reducing costs” and improving communications, but greater use of the evolving technology also raises significant concerns, the Pensions and Lifetime Savings Association (PLSA) has warned.
The warning came as the workplace pension and savings body submitted its written evidence to the Treasury Committee’s AI in Financial Services inquiry.
A survey of PLSA members showed that they expect pension funds to have widely adopted AI by 2035. Nearly four-fifths (79 percent) said the change would enhance member engagement and communication strategies, three quarters (75 percent) said it would help detect and prevent fraud, while 72 percent said it would improve data security.
Further survey results showed that nearly two-thirds (63 percent) expect AI to support personalised retirement planning (including advice and guidance), and 59 percent expect the technology to enable customisation of investment strategies.
In its inquiry evidence, the PLSA stated that the most compelling use for AI in pensions is to improve communication and engagement between pension schemes and their members.
It said that AI tools could use pension scheme and member data to create personalised communications for scheme members about their retirement savings. AI chatbots could also help provide accessible and affordable financial guidance to scheme members.
The evolving technology can also improve administrative efficiency, for example by taking minutes at meetings and summarising documents. It could also enhance trustee decision making by providing data analysis to support investment strategies, as well as training for trustees in other areas of their role.
The PLSA said that AI’s potential to support better scheme engagement with members, drive efficiency and reduce costs means that more pension schemes will look to integrate it.
However, adoption of this fast-moving technology is not without risk, the pensions body warned. It said schemes must ensure they have robust processes and strict protocols to minimise the risks of data breaches, cyber-attacks, failure to comply with regulatory requirements, and potential financial losses.
It added that the UK’s strong regulatory environment requires human accountability through robust governance mechanisms, so AI is unlikely to be solely responsible for end-to-end decision making. Humans are “likely to remain central to decision making across the industry”, the body said.
Zoe Alexander, director of policy and advocacy at the PLSA, said: “The adoption of AI throughout the pensions industry should be viewed as a positive development. AI is already reducing costs for schemes and members by increasing efficiency, improving communications and member engagement – to the ultimate benefit of savers. However, there are significant cyber security, fraud and data privacy concerns that any scheme adopting AI must look to mitigate.”
The PLSA plans to draw up guidance on the risks and opportunities of AI for its members.