Why every business needs an AI in the workplace policy

By Sylvie Thrush Marsh, Chief Evangelist

The introduction of artificial intelligence (AI) has been heralded for a long time, and businesses are increasingly using AI to streamline and automate processes.

AI technologies, such as natural language processing, machine learning, and automation tools, have immense capabilities, but their use in the workplace presents a number of ethical, legal, and operational challenges. As uptake grows, it’s clear every employer needs to develop a policy that governs AI use at work.

An effective AI in the workplace policy should aim to ensure you can harness the benefits artificial intelligence can bring to your business, while assessing and minimising risks to employees, the company, and clients.

In this article, we explore why it is critical for employers to adopt a robust AI policy that supports both business objectives and human interests.

Do I need to regulate the use of AI in my workplace?

In 2024, Datacom’s AI Attitudes in Australia research reported 72% of Australian businesses are using some form of AI, while Microsoft’s Work Trend Index found 84% of Australian employees use generative AI (tools such as ChatGPT, Copilot, and Gemini) at work.

The Microsoft report estimated that globally, nearly 80% of AI users are using the technology independently, a trend the survey calls “Bring Your Own AI (BYOAI)”.

The Datacom research also found a pronounced gap between the adoption of AI technology and employers regulating its use, with only 52% of businesses having staff policies for its use, 51% providing awareness training for employees, and just 39% having audit assurance and governance frameworks.

Given it’s likely that aspects of your work streams are being carried out with the assistance of AI technology - whether you are aware of it or not! - your organisation should accurately define its use and create a formal policy to ensure responsible, compliant usage by your team.

Let’s explore the reasons why.

Protecting privacy and security

It’s important to recognise that publicly accessible generative AI tools (including ChatGPT and Claude) are trained on enormous datasets and, once released to the public, continue to “upskill” themselves using the data users input. This continuous improvement cycle is one of the reasons these tools are so compelling, but there are concerns about a lack of transparency around where a user’s data goes and whether it is used by AI platforms to train their models or shared with third parties.

Without a formal AI policy, employees could inadvertently share sensitive or confidential data, which could then be used to train the system. No one wants to run the risk of their data popping up in answers generated for another user in another organisation.

There is also a real risk that sensitive data that isn’t properly secured could be exposed to scams and cyberattacks, such as phishing or malware.

A comprehensive AI policy helps establish protocols for handling customer and company data securely and keeping it safe. This could include putting limits around the input of sensitive information and promoting data anonymisation.

It should also ensure that the AI tools employees use are designed with built-in privacy protections, offer options to opt out of having data used for model training, and follow ethical data usage standards.

Navigating legal and compliance issues

The regulatory landscape around AI is still developing. In Australia, there is no specific legislation that deals with AI. However, a robust workplace AI policy should ensure your organisation stays compliant with existing federal legislation - such as the Privacy Act and the Fair Work Act - and state regulations that apply to all information storage and sharing, including through the use of AI in the workplace.

The Privacy Act applies to all uses of AI involving personal information, and every organisation with an annual turnover of more than $3 million must ensure personal information is protected from theft, loss, misuse, unauthorised access, modification, or disclosure.

The Office of the Australian Information Commissioner recommends that every organisation take a ‘privacy by design’ approach, carry out a privacy impact assessment to understand how AI tools use personal information, and use the results to inform its AI policy.

More broadly, your AI policy should include guidelines for monitoring and auditing AI systems to ensure they meet all current legal requirements, for example, avoiding potential for biases or discrimination in recruitment or performance management processes.

At all times, your existing employment obligations to your employees apply. If the adoption of AI technology causes changes to roles and duties (for example, you reduce your administration headcount from three employees to two, because of efficiency gains as a result of AI), most modern awards require you to consult with affected employees before you make a final decision about their employment.

Some states and territories have regulations around workplace surveillance that require notice, employee consent, or other transparency measures be put in place.

Ensuring you identify and mitigate risks to employee health and safety, e.g. increased workloads or stress, is another statutory duty.

If you establish and adhere to clear policy guidelines (and keep them up to date as legislation changes), you can minimise legal risks and any potential damage to your business’ reputation.

Enhancing employee engagement and trust

Along with the benefits the use of AI can bring, there are also reservations about how it might cause job losses, role changes, or increased surveillance.

The recent Australian AI Sentiment Report by professional services firm EY found 55% of people surveyed were worried AI could lead to job losses, 40% were uneasy about AI being used to monitor employee behaviour, and 49% were uneasy with AI analysing performance and influencing performance reviews.

An effective AI policy can help alleviate these concerns by clarifying how your organisation uses AI, how it will affect employees, and how you will help them adapt, e.g. with skills training. 

It gives you the opportunity to work with your people on establishing clear boundaries regarding the use of AI and how it can augment, rather than undermine, human capabilities.

It can also ensure that AI tools are used in ways that align with the company’s values, adhere to ethical standards, and deliver outcomes that serve the broader goals of the business.

Fostering an environment of trust and collaboration is key. Employees will feel more secure knowing that they have some input into the use of AI and that there are guidelines to make sure its use is ethical and responsible, rather than being a tool for exploitation or manipulation.

Ensuring transparency and accountability

It’s not only employees who want to understand how AI is affecting workplace systems; your customers and other stakeholders also need to understand and trust your use of AI.

Research by data and analytics technology group YouGov found 87% of Australians believe it’s important to disclose AI usage, and the World Economic Forum found many consumers trust people more than they do AI, with legitimate concerns about data security, completeness, and accuracy.

An effective AI policy can explain how and when the company uses AI, how decisions are being influenced by AI tools, your security measures around data, as well as defining how people oversee, and are accountable for, its use.

Additionally, the policy should outline procedures for employees or customers to query or appeal AI-driven decisions.

Ensuring transparency helps cultivate a real sense of fairness and confidence, which is critical to maintaining a positive relationship with all stakeholders (not only your employees).

Promoting innovation and competitiveness

AI is a powerful tool that can drive innovation and increase efficiency across business functions. However, to realise this potential, AI must be integrated into the organisation in a deliberate, structured way.

An effective AI policy can provide a framework for identifying areas where AI can improve operations - whether that’s in customer service, supply chain management, or data analysis - while avoiding unnecessary risks.

There is typically a balance to be struck between what could be automated or assisted by AI technology and what should be.

By setting guidelines for the development and implementation of AI initiatives, you can help ensure you stay ahead of competitors while fostering a culture of innovation that positions the company as an industry leader.

We realise there is a lot to understand when it comes to the use and regulation of AI in your organisation. If you need any assistance developing and implementing a policy covering its use, reach out to MyHR.

Related Resources

3 company policies you should have in writing - Nick Stanley, 10 Feb 2021
Putting important company policies and procedures in writing makes good sense.

What’s the difference between restructuring and redundancy? - Nick Stanley, 01 Apr 2021
People often use the terms restructuring and redundancy to refer to the same thing, but there is a difference. So let's take a look.

5 tips for running a good performance review - Jason Ennor, Co-founder and CEO at MyHR, 23 Sep 2020
Good performance reviews work. What’s more, they can help businesses of all sizes achieve results.