The Health and Safety Executive (HSE) is working to improve understanding of how Artificial Intelligence (AI) is being used across industries it regulates. This involves documenting real-world AI use cases and identifying the associated occupational health and safety implications.
AI is advancing rapidly, and while its applications are evolving, the HSE has identified four key areas where AI is currently being used in ways that could affect health and safety:
- Maintenance systems: advanced inspection, failure monitoring, and decision support.
- Safety management: assisting with risk assessments, incident analysis, and generating training content.
- Control of machinery and industrial processes: autonomous vehicle operation, robotic systems, and data analytics.
- Workplace monitoring: AI and computer vision used to monitor workplace safety and observe worker behaviour.
AI and HSE’s Regulatory Remit
The HSE regulates AI in ways that support its mission to prevent work-related death, injury, and ill health. This includes regulating AI when its use has implications for health and safety in workplaces where the HSE is the enforcing authority. It also includes oversight of AI in the design, manufacture and supply of machinery, equipment, and products for the workplace under the Product Safety framework, as well as AI’s impact on areas such as building safety, chemical regulation, and pesticides.
Health and Safety Law
The legal framework for most of the HSE’s enforcement activity stems from the Health and Safety at Work etc. Act 1974. This legislation is goal-setting in nature, meaning it sets out the outcomes that must be achieved without prescribing how they should be met. Because of this flexible approach, the law applies regardless of the technologies used, including AI. This ensures that employers remain responsible for managing risks, even as new technologies emerge.
Assess and Manage Risk
Under health and safety law, the primary duty to manage risks lies with those who create them. Employers and those in control of workplaces must assess the risks associated with AI and implement appropriate control measures to reduce them, so far as is reasonably practicable. This includes accounting for cyber security threats. The HSE aims for AI-related risks to eventually be treated like any other workplace risk, integrated into standard risk management practices rather than being seen as novel or exceptional.
Regulatory Principles
The UK government’s white paper on a “Pro-innovation approach to AI regulation” sets out cross-sectoral principles to guide regulators in managing AI risks. Regulators are expected to apply these principles contextually within their sectors. For workplace health and safety, the relevant principles include ensuring safety, security, and robustness; promoting appropriate transparency and explainability; and establishing clear accountability and governance frameworks.
Understanding Risks from AI in the Workplace
AI brings the potential to improve workplace health and safety, but it can also introduce new and complex risks. The HSE has long supported industry through technological changes and will apply its risk-based, proportionate approach to AI. It will work collaboratively with industry to understand how AI affects health and safety, considering both the benefits and challenges of this evolving technology.
AI-related health and safety risks fall into several categories. Human factors include risks such as over-dependence on AI, which could reduce worker vigilance and erode safety culture, and the potential for deskilling workers whose tasks are automated. Algorithmic management of workers can cause work-related stress, and frequent alerts may cause warning fatigue, making it more likely that critical notifications are missed. From a safety perspective, AI systems may provide inaccurate assessments, function without appropriate human oversight, or operate unpredictably if used outside their design parameters, all of which could lead to dangerous situations or physical harm.
Technical risks also pose a challenge. Security breaches could result in loss of control over AI systems, and biased or flawed data may cause unsafe decisions or missed hazards. There are also data privacy concerns, particularly when AI systems collect personal data from monitoring workers or incidents. Additionally, complex AI decision-making processes can be difficult to explain, making it harder to understand or prevent failures, especially if the system encounters scenarios that were not included in its training data.
Developing HSE’s Regulatory Approach to AI
To effectively regulate AI, the HSE is expanding its internal capabilities and external partnerships. It has formed an internal AI common interest group to coordinate work, share knowledge, and identify emerging issues. The HSE is also working with other government departments to help shape national approaches to AI regulation, and actively contributes to standards development through engagement with international bodies such as BSI, IEC and ISO.
The HSE is also strengthening relationships with industry and academic stakeholders to gather insights into AI use cases and their health and safety implications. Collaborative efforts with other regulators (via forums like the AI Standards Forum, the Information Commissioner’s Office AI Regulators Forum, and the UK Health and Safety Regulators Network) are helping promote a consistent regulatory stance.
Additionally, the HSE conducts horizon scanning to monitor AI developments both in the UK and globally. The organisation is investing in its internal expertise in AI across scientific and specialist domains and supports research that advances the safe use of AI. One example is the trialling of an Industrial Safetytech Regulatory Sandbox, which aims to identify and remove barriers to AI and safety technology adoption in the construction industry.
Future Work to Develop Our Regulatory Approach
The HSE will continue to evolve its regulatory approach as AI technologies develop. This ongoing work includes engaging with stakeholders to assess emerging challenges and opportunities, and applying the HSE’s established regulatory expertise to ensure that AI is used safely and responsibly across all sectors under its remit.