If you work in EHS, you have probably heard of ISO 45001, ISO 9001 and ISO 14001. You may not have heard of ISO 42001 yet, but it is heading in your direction.
ISO/IEC 42001 is the world's first international standard for AI management systems. Published in December 2023, it provides a framework for organisations that develop, deploy or use AI systems to do so responsibly, transparently and with proper governance in place.
At first glance, an AI standard might seem like a concern for IT teams rather than EHS. But AI is already embedded in the tools many safety teams use every day, from predictive risk analytics and automated incident classification to AI-assisted audit scheduling and hazard identification. If your organisation is using or considering any of these capabilities, ISO 42001 is relevant to you.
Here are five things every EHS team should have on their radar.
If your organisation already holds certification to ISO 45001, 9001 or 14001, the structure of ISO 42001 will look familiar. It follows the same Annex SL high-level structure, with clauses covering context, leadership, planning, support, operation, performance evaluation and improvement.
This is significant because it means ISO 42001 is designed to integrate with your existing management systems rather than sit alongside them as a separate compliance exercise. The policies, risk assessment processes, internal audit programmes and management review cycles you already run can be extended to cover AI governance rather than duplicated from scratch.
For EHS teams already managing integrated management systems, this is a practical advantage.
There is a common assumption that ISO 42001 only applies to organisations building AI products. It doesn't. The standard applies equally to organisations that use AI systems, even if those systems are provided by a third-party vendor.
If your HSEQ platform uses AI to classify incidents, predict risk trends, recommend corrective actions or automate workflow routing, your organisation is using AI in its safety management. ISO 42001 provides the framework to govern how those AI capabilities are assessed, monitored and managed over time.
For industries like construction, mining, manufacturing and energy, where AI-powered safety tools are becoming more common, this is an area to start thinking about now rather than later.
This is not a future problem. NSW has already passed the Work Health and Safety Amendment (Digital Work Systems) Act 2026, making it the first Australian state to explicitly regulate AI, algorithms, automation and online platforms under WHS law. The Act creates a new duty requiring businesses to ensure that the health and safety of workers is not put at risk by digital work systems. Specifically, businesses must consider whether their digital systems create excessive workloads, unreasonable performance monitoring, excessive surveillance or discriminatory outcomes.
The definition of "digital work system" is broad: any algorithm, artificial intelligence, automation or online platform. That means AI-powered scheduling tools, automated incident classification systems, algorithmic risk assessments and performance tracking software all fall within scope. Penalties for non-compliance reach nearly $70,000 for corporations. WHS entry permit holders will also gain new powers to inspect digital systems where a WHS contravention is suspected.
Commencement dates for the core provisions are yet to be proclaimed, pending the publication of SafeWork NSW guidelines. But the legislation has passed and received assent, and other states are expected to follow.
At the federal level, mandatory requirements under the updated Policy for the Responsible Use of AI in Government take effect from 15 June 2026, with Privacy Act reforms introducing new automated decision-making transparency obligations landing in December 2026. The Australian AI Safety Institute is already operational.
For EHS teams, the NSW reforms are particularly significant because they sit directly within WHS law, the same legal framework your safety management system already operates under. Organisations that can demonstrate structured governance of their digital work systems through a framework like ISO 42001 will be better positioned as these obligations commence and other jurisdictions follow.
ISO 45001 manages occupational health and safety risks. ISO 14001 manages environmental risks. ISO 9001 manages quality risks. None of them were designed to address the specific risks that come with AI: algorithmic bias, opaque decision-making, data quality issues, model drift over time and the challenge of maintaining human oversight over automated processes.
ISO 42001 fills that gap. It requires organisations to conduct AI-specific impact assessments, maintain transparency about how AI systems make decisions, ensure human oversight mechanisms are in place and establish processes for monitoring AI system performance on an ongoing basis.
For EHS teams, this is particularly relevant where AI is being used in risk assessment, incident investigation or compliance decision-making. If an AI system recommends a control measure or classifies an incident severity, you need to be confident that the system is accurate, explainable and regularly reviewed.
If your organisation has an established safety management system built on ISO 45001, you already have many of the foundational elements ISO 42001 requires: a risk management framework, document control processes, internal audit programmes, management review cycles and a culture of continuous improvement.
The work involved is extending those existing processes to cover AI-specific considerations rather than building a separate governance system from the ground up. Start by identifying where AI is currently being used across your safety and compliance workflows. Map those systems against the ISO 42001 requirements. Use your existing risk register to capture AI-specific risks. Build AI governance into your next internal audit cycle.
The organisations that will find this easiest are the ones whose management systems are already centralised, well-documented and digitally managed, because the evidence trail ISO 42001 requires is the same kind of evidence your existing ISO certifications already demand.
ISO 42001 is not something most EHS teams need to act on tomorrow, but it is something to start understanding now. The organisations that build AI governance into their existing management systems early will be better positioned when that governance shifts from optional to expected, and in NSW, that shift has already begun.
If you want to understand how your current ISO certifications fit together and where ISO 42001 sits in that picture, our guide to ISO certifications breaks down ISO 9001, 14001 and 45001 with practical compliance steps for each.
Download: A guide to ISO certifications →
If you are already thinking about how your HSEQ platform needs to support AI governance alongside your existing safety management, we can show you how it works.
Disclaimer: This article is intended to provide general information on the subject matter. This is not intended as legal or expert advice for your specific situation. You should seek professional advice before acting or relying on the content of this information.