Why every business needs an AI in the workplace policy

By Sylvie Thrush Marsh, Chief Evangelist

The arrival of artificial intelligence (AI) has been heralded for a long time, and businesses are increasingly using AI to streamline and automate their processes.

AI technologies, such as natural language processing, machine learning, and automation tools, have immense capabilities, but their use in the workplace presents a number of ethical, legal, and operational challenges. As uptake grows, it’s clear every employer needs to develop a policy that governs AI use at work.


An effective AI in the workplace policy should aim to ensure you can harness the benefits artificial intelligence can bring to your business, while assessing and minimising risks to employees, the company, and clients.

In this article, we explore why it is critical for employers to adopt a robust AI policy that supports both business objectives and human interests.

Do I need to regulate the use of AI in my workplace?

In 2024, a report by the AI Forum NZ found 67% of New Zealand organisations are using AI, while Microsoft’s Work Trend Index found 84% of NZ employees use generative AI (tools such as ChatGPT, Copilot, and Gemini) at work.

The Microsoft report estimated that globally, nearly 80% of AI users are using the technology independently, a trend the survey calls “Bring Your Own AI (BYOAI)”.

Other research has found a pronounced gap between the adoption of AI technology and employers regulating its use. Datacom's 2024 State of AI research found only 48% of Kiwi businesses that use AI have staff policies for its use, only 13% have audit assurance and governance frameworks, and just 33% have awareness training for employees.

Given it’s likely that aspects of your workstreams are already being done with the assistance of AI technology - whether you are aware of it or not! - your organisation should clearly define its use and create a formal policy to ensure responsible and compliant usage by your team.

Let’s explore the reasons why.

Protecting privacy and security

It’s important to recognise that publicly accessible generative AI tools (including ChatGPT and Claude) are trained on enormous datasets and, once released to the public, continue to “upskill” themselves using the data users input. This continuous improvement cycle is one of the reasons these tools are so compelling, but there are concerns about a lack of transparency around where a user’s data goes, whether AI platforms use it to train their models, and whether it is shared with third parties.

Without a formal AI policy, employees could inadvertently share sensitive or confidential data, which could then be used to train the system. No one wants to run the risk of their data popping up in answers generated for a user in another organisation.

There is also a real risk that sensitive data that isn’t properly secured could be exposed to scams and cyberattacks, such as phishing or malware.

A comprehensive AI policy helps establish protocols for handling customer and company data securely. This could include putting limits around the input of sensitive information and promoting data anonymisation.

It should also ensure that the AI tools employees use are designed with built-in privacy protections, offer options to turn off the use of data for model training, and follow ethical data usage standards.

Navigating legal and compliance issues

The regulatory landscape around AI is still developing, and New Zealand has no legislation that deals specifically with AI. However, a robust workplace AI policy should ensure your organisation stays compliant with existing legislation that applies to all information storage and sharing - such as the Privacy Act, the Human Rights Act, and anti-discrimination laws - including when AI is used in the workplace.

Under the Privacy Act, every organisation has a legal duty to ensure there are reasonable safeguards in place to prevent loss, misuse, or disclosure of personal information. The Privacy Commissioner recommends that every organisation carry out a privacy impact assessment to understand how AI tools use personal information, and to use the results to inform your AI policy.

More broadly, your AI policy should include guidelines for monitoring and auditing AI systems to ensure they meet all current legal requirements, for example, avoiding potential for biases or discrimination in recruitment or performance management processes.

At all times, your existing employment obligations to your employees apply. If the adoption of AI technology causes changes to roles and duties (for example, you reduce your administration headcount from three employees to two, because of efficiency gains as a result of AI), you have a legal obligation under the Employment Relations Act to consult with affected employees before you make a final decision about their employment.

Ensuring you identify and mitigate risks to employee health and safety, e.g. increased workloads or stress, is another statutory duty.

If you establish and adhere to clear policy guidelines (and keep them up to date as legislation develops), you can minimise legal risks and any potential damage to your business’ reputation.

Enhancing employee engagement and trust

Along with the benefits the use of AI can bring, there are also concerns that it might cause job losses, role changes, or increased surveillance.

So far, AI automation in Aotearoa has yet to cause mass layoffs - 92% of businesses surveyed in the AI Forum NZ research said AI hadn’t replaced any workers and only 29% said AI had resulted in reduced need to hire employees.

However, as uptake grows and AI becomes more ingrained in workplace systems, members of your team are likely to have reservations about the growing role of AI.

An effective AI policy can help alleviate these concerns by clarifying how your organisation uses AI, how it will affect employees, and how you will help them adapt, e.g. with skills training. It gives you the opportunity to work with your people on establishing clear boundaries regarding the use of AI and how it can augment, rather than undermine, human capabilities.

It can also ensure that AI tools are used in ways that align with the company's values, adhere to ethical standards, and provide outcomes that serve the broader goals of the business.

Fostering an environment of trust and collaboration is key. Employees will feel more secure knowing that they have some input into the use of AI and that there are guidelines to make sure its use is ethical and responsible, rather than being a tool for exploitation or manipulation.

Ensuring transparency and accountability

It’s not only employees who want to understand how AI is affecting workplace systems; your customers and other stakeholders also need to understand and trust your use of AI.

Research shared by the World Economic Forum found many consumers trust people more than they do AI, with legitimate concerns about data security, completeness, and accuracy.

An effective AI policy can explain how and when the company uses AI, how decisions are being influenced by AI tools, your security measures around data, as well as defining how people oversee, and are accountable for, its use.

Additionally, the policy should outline procedures for employees or customers to query or appeal AI-driven decisions.

Ensuring transparency helps cultivate a real sense of fairness and confidence, which is critical to maintaining a positive relationship with all stakeholders (not only your employees).

Promoting innovation and competitiveness

AI is a powerful tool that can drive innovation and increase efficiency across business functions. However, to realise this potential, AI must be integrated into the organisation in a deliberate, structured way.

An effective AI policy can provide a framework for identifying areas where AI can improve operations - whether that’s in customer service, supply chain management, or data analysis - while avoiding unnecessary risks.

There is typically a balance to be struck between what could be automated or assisted by AI technology and what should be.

By setting guidelines for the development and implementation of AI initiatives, you can help ensure you stay ahead of competitors while fostering a culture of innovation that positions the company as an industry leader.

We realise there is a lot to understand when it comes to the use and regulation of AI in your organisation. If you need any assistance developing and implementing a policy covering its use, reach out to MyHR.
