Artificial intelligence (AI) and chatbots use computer software and pre-programmed algorithms that aim to mimic human responses and thought. While AI use will no doubt increase as the technology improves, what issues should HR and businesses be thinking about now when it comes to harnessing the technology for good?
The potential uses of AI in people management
AI will speed up and, in some cases, replace current human-operated functions. The same is true for HR and people management operations. Despite the inevitable concerns about jobs, there are real benefits to using AI technology to achieve greater efficiencies and better time management. So, what could AI do for HR?
AI can assist with comparing documents and with reviewing and sifting information such as performance data. It can draft policies, letters or contracts of employment using its database and search functions. You can ask an AI chatbot a question and it will provide an answer drawn from its vast databases of knowledge. AI can be used to translate documents; it can spot patterns or be used for marketing and business development purposes. It could assist with recruitment by drafting job descriptions, benchmarking salaries or sifting applications – it could even shortlist candidates. Data management tasks such as producing absence or holiday reports, staff surveys and employee engagement projects can all be run and collated using AI tools.
It can be used to develop and manage learning and development projects and programmes. You can use its analysis tools to plan your workforce and even run programmes to tell you who should be promoted within an organisation. Want to know which employees might leave the business? AI can use predictive analytics to identify those most likely to go.
In fact, the future possibilities are endless; however, its use is not without issues and considerations.
Knowing whether ChatGPT or any similar AI bot is being used to create content is important. Without that knowledge, employers are in a difficult position when it comes to the risks their employees create by using such tools. AI use may creep into an organisation without the business necessarily understanding or appreciating the risks, or having adequate policies and protections in place.
Copyright and intellectual property infringement
One of those risks is the use of others' data or copyrighted material. ChatGPT produces and creates documents from information stored on the internet or in its own repositories. Some of that information may be subject to copyright. The difficulty for a business is that it may not know it has infringed, or is even likely to have infringed, another's copyright if it does not know that ChatGPT is being used. Even if it is aware of its use, it is unlikely to know whether any material used by ChatGPT is subject to copyright, because it will not know what source information has been used.
Confidentiality and data protection
Under its terms of use, users agree that ChatGPT can use any input data and the output produced to "develop and improve" the system unless a specific opt-out is exercised. Everything that is asked or uploaded, and every answer or document produced or reviewed, will be stored and may be reused whenever and wherever, unless the opt-out is used. If HR is uploading a question about an employee, or an employee's CV for review, be careful: this could breach confidentiality as well as data protection rules.
Accuracy and loss of skills
AI is not foolproof, and information generated by AI may not be accurate, as chatbots can provide "plausible sounding but incorrect or nonsensical answers". Humans can, of course, do the same! However, if the bot is being used as a shortcut, those using it may well assume the answer produced is accurate without any further investigation or enquiry. More importantly, ChatGPT may not be fully up to date, as it can only work with information it has access to, which may mean that some of its responses lack the most recent information, necessitating further work by those posing questions to it. There could also be concerns about how junior employees develop if they become over-reliant on AI tools.
Ethical issues – loss of the “human touch”
AI is not a human; it does not do feelings or empathy. It does not have human intuition; it cannot replace human judgment. It should be remembered that AI is pre-programmed computer software built on algorithms, models and data sets. AI does not know your business's culture or its way of working. It cannot offer a personal solution or look at problems from different perspectives. It will not look for nuances or consider mitigation, and it cannot act sensitively or fairly. It can also generate biased responses (remember, it has been programmed by a human or according to a tech company's programming policies). This could limit its usefulness when analysing sickness absence or as a recruitment tool.
What should HR do?
There is no doubt that ChatGPT and similar AI bots are here to stay. The possibilities for automating HR tasks are exciting; however, employers and HR need to think carefully about how ChatGPT is used. Pretending that ChatGPT is not being used in the workplace is pointless; it has been downloaded millions of times, it is being talked about and people are curious about it. Businesses need to consider the risks and benefits of using this particular type of AI, ensure they have clearly communicated their position on its use to employees, and make sure employees are aware of the risks and consequences for them and the business.
For HR, the possibilities in terms of spotting patterns in data and analysing information are exciting; however, be careful that AI does not take the "human" out of Human Resources.
Emma O’Connor is legal director at Boyes Turner LLP