The arrival of artificial intelligence (AI) has long been heralded, and businesses are increasingly using it to streamline and automate processes.
AI technologies, such as natural language processing, machine learning, and automation tools, have immense capabilities, but their use in the workplace presents a number of ethical, legal, and operational challenges. As uptake grows, it’s clear every employer needs to develop a policy that governs AI use at work.
An effective AI in the workplace policy should aim to ensure you can harness the benefits artificial intelligence can bring to your business, while assessing and minimizing risks to employees, the company, and customers.
In this article, we explore why it is critical for employers to adopt a robust AI policy that supports both business objectives and human interests.
It’s difficult to get a precise picture of AI use by Canadian organizations. Research in 2024 by the Canadian Chamber of Commerce reported that just 14% of businesses were using generative AI (tools such as ChatGPT, Copilot, and Gemini), while a study by KPMG found 61% of Canadian organizations were using the technology.
Despite this disparity, the studies agree that the use of AI by Canadian companies is growing rapidly, and that there is a pronounced gap between adoption of the technology and employers’ regulation of its use.
KPMG’s research found 37% of employees using generative AI were unaware of any employer controls over its use and many admitted to entering their company’s proprietary or private financial data into public AI tools.
Given that aspects of your workstreams may already be done with the assistance of AI technology - whether you are aware of it or not - your organization should clearly define acceptable use and create a formal policy to ensure responsible, compliant usage by your team.
Let’s explore the reasons why.
It’s important to recognize that publicly accessible generative AI tools (including ChatGPT and Claude) are trained on enormous datasets and, once released to the public, many continue to “upskill” using the data users input.
This continuous improvement cycle is one of the reasons these tools are so compelling, but there are concerns about a lack of transparency around where a user’s data goes and whether that data is used by the AI platforms to train their models or shared with third parties.
Without a formal AI policy, employees could inadvertently share sensitive or confidential data, which could then be used to train the system. You don’t want to run the risk of your data popping up in answers generated for a user in another organization.
There is also a real risk that sensitive data that isn’t properly secured could be exposed through cyberattacks such as phishing or malware.
Having a comprehensive AI policy helps establish protocols for handling customer and company data securely and keeping it safe. This could include putting limits around the input of sensitive information and promoting data anonymization.
It should also ensure that the AI tools employees use are designed with built-in privacy protections, offer controls for opting out of having input data used for model training, and follow ethical data-usage standards.
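To make that more concrete, here’s a minimal sketch (in Python) of the kind of guardrail such a protocol might mandate: stripping obvious personal identifiers from text before it’s sent to a public AI tool. The patterns and names below are illustrative assumptions only - a real deployment would rely on a vetted PII-detection library.

```python
import re

# Illustrative-only patterns - a real deployment would use a vetted
# PII-detection library and cover far more identifier types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
    "SIN": re.compile(r"\b\d{3}[ -]\d{3}[ -]\d{3}\b"),  # Canadian Social Insurance Number layout
}

def redact(text: str) -> str:
    """Replace anything matching a known pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft a reply to jane@example.com (416-555-0199), SIN 046 454 286."
print(redact(prompt))
# Draft a reply to [EMAIL REDACTED] ([PHONE REDACTED]), SIN [SIN REDACTED].
```

Even a simple filter like this, mandated by policy, narrows the window for the kind of inadvertent disclosure KPMG’s research identified.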
The regulatory landscape around AI is still developing. In Canada, there is no AI-specific law yet; however, a robust workplace AI policy should ensure your organization stays compliant with existing legislation that applies to all information storage and sharing, including through the use of AI in the workplace - such as the Personal Information Protection and Electronic Documents Act (and provincial privacy laws), the Canadian Human Rights Act (and provincial human rights laws), and anti-discrimination laws.
Under the Personal Information Protection and Electronic Documents Act (PIPEDA), every private-sector organization that collects, uses, or discloses personal information in the course of a commercial activity must comply with 10 fair information principles to protect the personal information it holds.
Companies must also develop and implement personal information policies and practices and appoint someone to be responsible for PIPEDA compliance. We recommend taking a ‘privacy by design’ approach: carry out a privacy impact assessment to understand how AI tools use personal information, and use the results to inform your AI policy.
More broadly, your AI policy should include guidelines for monitoring and auditing AI systems to ensure they meet current legal requirements - for example, checking recruitment or performance management processes for potential bias or discrimination, as sketched below.
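As one illustration of what an audit step could look like, the sketch below (in Python, with made-up data) compares selection rates for an AI-assisted screening tool across applicant groups. The 0.8 threshold is the widely cited ‘four-fifths’ guideline from US practice, used here purely for demonstration - a Canadian human-rights analysis doesn’t reduce to a single ratio.

```python
from collections import Counter

# Hypothetical screening outcomes: (applicant_group, advanced_by_ai_tool)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of applicants in each group that the tool advanced."""
    advanced, totals = Counter(), Counter()
    for group, was_advanced in records:
        totals[group] += 1
        advanced[group] += was_advanced
    return {group: advanced[group] / totals[group] for group in totals}

rates = selection_rates(outcomes)
benchmark = max(rates.values())  # compare each group to the highest-rate group
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"  # illustrative four-fifths guideline
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

A flagged result wouldn’t prove discrimination; it would trigger the human review your policy defines.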
At all times, your existing employment obligations to your employees apply. If the adoption of AI technology causes changes to roles and duties (for example, you reduce your administration headcount from three employees to two, because of efficiency gains as a result of AI), you are required to handle the process fairly and transparently.
If you no longer require a person to perform a role, then you must follow proper termination procedures, e.g. evaluating whether the redundancy is genuine, and providing notice (or pay in lieu).
Ensuring you identify and mitigate risks to employee health and safety, e.g. increased workloads or stress, is another statutory duty.
If you establish and adhere to clear policy guidelines (and keep them up to date as new legislation - e.g. the Digital Charter Implementation Act, 2022 - is enacted), you can minimize legal risks and any potential damage to your business’s reputation.
Along with the benefits the use of AI can bring, there are also concerns about how it might cause job losses, role changes, or increased surveillance.
In a recent report by the global professional services company Accenture, 58% of workers said generative AI was increasing job insecurity, 60% worried it would increase stress and burnout, and 53% were concerned about the quality of its output.
An effective AI policy can help alleviate these concerns by clarifying how your organization uses AI, how it will affect employees, and how you will help them adapt, e.g. with skills training. It gives you the opportunity to work with your people on establishing clear boundaries regarding the use of AI and how it can augment, rather than undermine, human capabilities.
It can also ensure that AI tools are used in ways that align with the company’s values, adhere to ethical standards, and produce outcomes that serve the broader goals of the business.
Fostering an environment of trust and collaboration is key. Employees will feel more secure knowing that they have some input into the use of AI and that there are guidelines to make sure its use is ethical and responsible, rather than being a tool for exploitation or manipulation.
It’s not only employees who want to understand how AI is affecting workplace systems; your customers and other stakeholders also need to understand and trust your use of AI.
Research shared by the World Economic Forum found many consumers trust people more than they do AI, with legitimate concerns about data security, completeness, and accuracy.
An effective AI policy can explain how and when the company uses AI, how AI tools influence decisions, and what security measures protect data, as well as defining how people oversee, and are accountable for, its use.
Additionally, the policy should outline procedures for employees or customers to query or appeal AI-driven decisions.
Ensuring transparency helps cultivate a real sense of fairness and confidence, which is critical to maintaining a positive relationship with all stakeholders (not only your employees).
AI is a powerful tool that can drive innovation and increase efficiency across business functions. However, to realize this potential, AI must be integrated into the organization in a deliberate, structured way.
An effective AI policy can provide a framework for identifying areas where AI can improve operations - whether that’s in customer service, supply chain management, or data analysis - while avoiding unnecessary risks.
There is typically a balance to be struck between what could be automated or assisted by AI technology and what should be.
By setting guidelines for the development and implementation of AI initiatives, you can help ensure you stay ahead of competitors while fostering a culture of innovation that positions the company as an industry leader.
We realize there is a lot to understand when it comes to the use and regulation of AI in your organization. If you need any assistance developing and implementing a policy covering its use, reach out to MyHR.