By Howard Levitt
AI poses risks to employers
Companies across Canada are rushing to bring artificial intelligence into their workflows. But what risks does AI pose to employers, and what should they do to protect themselves?
1. Loss of confidential information
If employees enter company information into public AI systems such as ChatGPT and its equivalents, or into any AI whose inputs can be accessed by or are already used by others, they implicitly breach their confidentiality obligations, and the employer potentially loses its IP and other previously protected confidential information. It might even lose patent protection through this inadvertent disclosure.
As you can imagine, smart competitors are looking for just this sort of misadventure and will quickly seize upon and use what you are providing free of charge.
As AI powers payroll, customer information, corporate record-keeping and more, there will be troves of information ripe for plunder.
While chatbots are wonderful for streamlining routine HR transactions, they are easy targets for hackers. Sharing sensitive personal or company data leaves you open to cyberattacks and identity theft.
AI also increases the speed of work beyond a human’s ability to follow along. Exploitation of AI systems through automated phishing attacks, which can spread malware quickly, may be difficult to uncover.
Ensure that you have employees who understand these risks and how to implement your AI so as to minimize, if not avoid, them.
In addition, strict IT security, rigorous policies as to what information can be provided, and discipline and discharge for anyone who transgresses must become de rigueur. These measures increasingly are being incorporated into the employee contracts and policies we are drafting.
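On the technical side, such a policy is easier to enforce when a checkpoint sits between employees and any outside AI service. What follows is a minimal sketch, in Python, of that idea; the patterns, the `screen_prompt` helper and the blocked categories are hypothetical illustrations, not a complete data-loss-prevention system.

```python
import re

# Hypothetical patterns a company might treat as confidential.
# A real deployment would use a proper DLP tool and a vetted pattern list.
BLOCKED_PATTERNS = {
    "SIN": re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b"),      # Canadian Social Insurance Number
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marking": re.compile(r"(?i)\b(confidential|trade secret|internal only)\b"),
}

def screen_prompt(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons). Block the prompt if any pattern matches."""
    reasons = [label for label, pattern in BLOCKED_PATTERNS.items()
               if pattern.search(text)]
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    allowed, reasons = screen_prompt(
        "Summarize this CONFIDENTIAL merger memo for me..."
    )
    if not allowed:
        # In practice the request would be logged and refused here,
        # creating the audit trail that discipline decisions depend on.
        print("Blocked before reaching the external AI service:", reasons)
```

The particular patterns matter less than the principle: a confidentiality clause in an employment contract is far easier to enforce when a technical control logs and blocks the breach at the moment it is attempted.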
2. Unapproved use by employees
Many employees will be replaced by AI. Others are using it for their jobs, and should. But do their managers know to what extent?
AI has the potential to allow employees to essentially vacate their functions, using it so extensively that they spend little time doing their jobs. Unless their managers are fully familiar with the programs themselves, they would never know, particularly with employees working remotely.
Resolving this risk requires frank discussions with employees about their use of AI, documenting those conversations and having knowledgeable individuals verify what they have said. Theft of time, after all, is serious misconduct, so it is essential that employees be honest about how much of their time is actually spent working now that AI is doing much of their former jobs. Lying during an investigation is viewed by the courts as particularly serious misconduct. So is lying about what work employees are actually performing.
3. Inaccurate information
There have already been cases of U.S. lawyers disciplined for citing fictitious cases in court that were provided to them by AI research tools. It is tempting for many to overuse AI without being able to vouch for the accuracy of its work product. Needless to say, if a client gets the notion that the work product was largely an AI creation, that company will not likely obtain further assignments, or even be paid for the project. After all, customers are paying for original work that they cannot perform themselves.
Internally as well, employers must maintain control over how much AI is being used, by whom, and what checks are in place to ensure its accuracy and its compatibility with the organization’s other intellectual property.
4. Bias in the algorithms
Algorithms built on decisions previously taken by an organization can mimic human biases, for example previous discriminatory hiring and firing practices. And since AI algorithms are built by humans, bias can also be introduced, intentionally or inadvertently, by those who build them.
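To make the risk concrete, here is a minimal sketch, with invented numbers, of the kind of audit an employer might run on an AI screening tool’s output. It computes the ratio of selection rates between two groups, a common first check (U.S. regulators have long used a four-fifths ratio as a rule of thumb); the counts and group labels are hypothetical.

```python
# Hypothetical outcomes from an AI resume-screening tool.
# All counts are invented for illustration only.
outcomes = {
    "group_a": {"advanced": 45, "screened_out": 55},   # 45% selected
    "group_b": {"advanced": 27, "screened_out": 73},   # 27% selected
}

def selection_rate(group: dict) -> float:
    total = group["advanced"] + group["screened_out"]
    return group["advanced"] / total

rates = {name: selection_rate(g) for name, g in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())

print(f"Selection rates: {rates}")
print(f"Impact ratio: {ratio:.2f}")   # 0.60 with these invented numbers
if ratio < 0.8:
    # Below the four-fifths rule of thumb: the tool's output warrants
    # investigation before the employer relies on it for hiring decisions.
    print("Warning: possible adverse impact; audit the model and its training data.")
```

If the historical data the tool learned from reflects discriminatory practices, a simple audit like this is often where the problem first surfaces.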
5. Defamation risk
If false information impugning another’s character is created by the AI and distributed to anyone, it can give rise to a defamation case. Similarly, if false information is used in your marketing, you could be liable under consumer protection legislation for misleading consumers.
6. Regulatory questions
The use of AI may be subject to regulatory requirements depending upon your industry. It is important to understand the legal environment surrounding the technology’s use to ensure compliance. This requires oversight and supervision of your employees.