The launch of ChatGPT has been a game-changer for efficiency and productivity, both in personal life and at work. But when it comes to professional use, such as writing in the workplace, the boundaries can be blurry. Should you declare it?
Many companies are still figuring out policies on the workplace use of generative artificial intelligence (AI) tools such as ChatGPT, says Ms Evelyn Chow, managing director of strategic human resources (HR) consultancy DecodeHR.
Nearly half of over 100 HR leaders polled by consultancy Gartner in June reported that they were still working on guidelines for employees on using these aids. “What these policies will look like eventually may vary widely. Some firms may embrace such tools, while others may ban their use,” she adds.
In the absence of specific guidance from employers, whether employees should declare their use of ChatGPT depends on their role and industry.
“Certain industries, such as the finance, healthcare and legal sectors, have strict regulations regarding data privacy, information security and communication practices,” says Ms Chow. “In these cases, declaring AI use can help manage potential risks and ensure that industry guidelines are complied with.”
In her firm’s view, companies should consider urging employees to make such declarations, which would help managers assess the output.
Mr Akshay Mendon, Singapore head of executive search firm EMA Partners, says that in his firm’s experience, workers largely use ChatGPT to generate templates for plans or reports, or generate initial drafts of complex e-mails.
“In both cases, as long as the employee is spending time and mental effort to customise the content, it should be totally acceptable,” he says, advising employees against copying ChatGPT’s output wholesale without attribution.
Tools such as ChatGPT may also automate tasks, including note-taking, summarising team discussions or generating ideas. However, there are many instances in the workplace where using ChatGPT would be highly inappropriate, Ms Chow says. For one, feeding sensitive data into the tool may compromise confidentiality.
She also discourages the use of ChatGPT for legal documents and contracts that require accuracy, precision and “careful consideration of certain terms and language use”. “In addition, if we rely on ChatGPT for communications, particularly in nuanced situations, our responses would come across as insincere and stilted.”
A human touch is necessary when dealing with sensitive or personal communications that require empathy, such as conducting performance evaluations. This is especially important when dealing with external stakeholders.
“Of course, the stakes are generally lower when communicating internally. When communicating with external stakeholders, we need to exercise more caution and ensure that there’s sufficient rigour and quality.”
She advises managers who think an employee is using ChatGPT but has not declared it to speak to the employee privately and non-confrontationally. “Try to find out more about their writing process, and provide guidance and practical tips on how to use AI tools responsibly.”
In the absence of clear rules, managers can shape the responsible use of ChatGPT and other tools by ensuring that AI is used as a digital assistant, with the employee remaining in control of the task. This includes sharing potential risks, sources of error and consequences of misuse to help employees better navigate this new landscape, says Ms Chow.
“If employees are aware of ChatGPT’s limitations, they can take the right steps to intentionally check the output for facts, legality and compliance. This would help minimise the reputational, legal and other risks for a company.”