More than half of audit and governance workers view data privacy as the biggest concern when using generative AI at work, a survey has found.
The survey of more than 500 industry professionals, conducted by CareersinAudit.com in partnership with Barclay Simpson, examined the impact of GenAI at work and was released as Prime Minister Keir Starmer unveiled a 50-point AI Opportunities Action Plan to “turbocharge” UK productivity.
Data privacy was the biggest ethical concern, and risk, when using GenAI tools in the workplace, cited by 53 percent of respondents. Accountability was the second-largest concern, highlighted by 36 percent, followed by AI bias at 35 percent. The results suggest that trust in AI remains a major barrier in this sector.
Almost a third (32 percent) flagged transparency as an ethical issue when using AI, while 30 percent raised concerns about job displacement.
In spite of this, more than two-fifths (41 percent) said their employer has already implemented guidelines or conducted risk assessments for using GenAI at work. Researchers said this shows employers are willing to overcome the barriers that have stopped the technology from playing a more prominent role in the workplace.
Simon Wright, director of CareersinAudit.com and the Careers in Group specialist job boards, said: “The concerns around using GenAI tools in the workplace are completely understandable.
“Whenever a new technology comes along, there is always natural scepticism, particularly when there is a lot of media speculation about jobs becoming automated.
“But it is very encouraging to see guidelines and processes are being implemented to harness the power of AI in the workplace, as it should be seen as something that will boost productivity, not harm it.
“Once businesses have the necessary processes in place, they should see the benefits the tools can bring, such as automating time-consuming tasks and effortlessly analysing huge amounts of data.”