
Game changer? How AI could transform workplace safety


Louis Wustemann looks at the potential impact that advances in artificial intelligence could have on health and safety.

In 1863 the novelist and critic Samuel Butler wrote an article suggesting that machines could evolve with human help to the point where they might be able to think for themselves. “We are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race,” Butler suggested.

That “self-acting power” was a long way off in the steam age, but in recent years advances in computing have produced software that can ingest large volumes of data and analyse them so quickly that we have given it the collective name of artificial intelligence (AI). The “intelligence” part of the name reflects the sophistication of the filters such software can apply to information and the fact that it can be taught to improve the way it works through feedback on the value of its outputs, “learning” from mistakes and successes. But what does its spread mean for the people tasked with making sure workers go home safe and healthy?

AI that the public can interact with is currently mostly restricted to text-based research and communication: chatbots that range from the customer service agents which pop up in the corner of commercial web pages offering to answer questions, to the large language models (LLMs) such as ChatGPT or Bing, which can trawl the internet and synthesise the results in many text formats.

So-called agentic AI, the next development, combines these contained systems based on large language models with functional software and hardware, allowing the AI to make decisions and put them into effect. The agentic model holds out the prospect of further savings in human effort and time – asked to create a marketing campaign, for example, it could handle everything from campaign scheduling through copywriting to sending hundreds of thousands of emails and monitoring responses. But this potential for AI to control physical systems and to take the initiative has led to calls for greater regulation before it is widely distributed, because of the risk of its use by bad actors or of unintended consequences of its actions.

Pattern spotting

The potential for AI to improve occupational health and safety comes from its capacity to scan massive volumes of information, picking out the important data and flagging up patterns. An AI program trained on all your organisation’s near-miss observations and accident reports from previous years would be likely to yield valuable insights, spotting trends that a health and safety practitioner might have to spend hundreds of hours to find. In a recently announced study, researchers at Lund University in Sweden found that using AI to look for signs of cancer in breast screening tests provided the same degree of accuracy as analysis by skilled radiologists – based on 40,000 tests carried out by AI and the same number by humans. Using the AI to flag up abnormalities to a specialist saved around five months’ work by a radiologist.
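As a rough illustration of the pattern spotting involved, the minimal sketch below clusters free-text near-miss reports by their wording so that recurring themes surface without anyone reading every entry. It assumes reports are held as short text snippets; the sample data, the choice of three clusters and the use of scikit-learn are all illustrative assumptions, not a description of any particular product.

# Minimal sketch: grouping near-miss reports into recurring themes.
# The reports, cluster count and model choice are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

reports = [
    "Forklift reversed close to pedestrian near loading bay door",
    "Pallet truck nearly struck operative at loading bay entrance",
    "Worker on mezzanine moving stock without a harness",
    "Operative seen without hard hat below crane lift",
    "Slippery floor by wash station, employee lost footing",
    "Spilled coolant near wash station caused a slip",
]

# Represent each report as a TF-IDF vector so similar wording lands close together.
vectoriser = TfidfVectorizer(stop_words="english")
vectors = vectoriser.fit_transform(reports)

# Group the vectors into a handful of themes (k=3 chosen by eye here).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

for theme in sorted(set(kmeans.labels_)):
    print(f"Theme {theme}:")
    for report, label in zip(reports, kmeans.labels_):
        if label == theme:
            print("  -", report)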

A 2021 policy briefing by EU-OSHA, the EU’s safety and health agency, noted that “new forms of AI-based monitoring of workers may also provide an opportunity to improve OSH surveillance, reduce exposure to various risk factors, including harassment and violence, and provide early warnings of stress, health problems and fatigue”.

An example of improved surveillance is commercially available software that plugs into an organisation’s closed-circuit TV system and uses AI to identify hazardous events and patterns, such as near-misses involving pedestrians and vehicles, individuals missing their personal protective equipment and poor manual handling practice.
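To make that concrete, here is a toy sketch of the kind of pipeline such software might run: a pretrained object detector scans recorded footage and logs any frame where a pedestrian and a vehicle come close together. The model, the COCO class numbers, the footage filename and the pixel threshold are all assumptions for illustration, not details of any vendor’s system.

# Toy sketch of CCTV hazard flagging: detect people and vehicles in footage
# and log frames where they come too close. Model, filename, class IDs and
# the pixel threshold are illustrative assumptions.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # small pretrained detector
video = cv2.VideoCapture("warehouse_cam.mp4")   # hypothetical recorded footage
PROXIMITY_PX = 150                              # illustrative "too close" distance

def centre(xyxy):
    x1, y1, x2, y2 = xyxy
    return ((x1 + x2) / 2, (y1 + y2) / 2)

frame_no = 0
while True:
    ok, frame = video.read()
    if not ok:
        break
    frame_no += 1
    result = model(frame, verbose=False)[0]
    people, vehicles = [], []
    for box in result.boxes:
        cls = int(box.cls[0])
        point = centre(box.xyxy[0].tolist())
        if cls == 0:                            # COCO class 0: person
            people.append(point)
        elif cls in (2, 5, 7):                  # COCO classes: car, bus, truck
            vehicles.append(point)
    # Flag the frame if any pedestrian is within the threshold of any vehicle.
    for px, py in people:
        for vx, vy in vehicles:
            if ((px - vx) ** 2 + (py - vy) ** 2) ** 0.5 < PROXIMITY_PX:
                print(f"Frame {frame_no}: pedestrian within {PROXIMITY_PX}px of a vehicle")

video.release()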

In an ideal safety culture, at the advanced stages of the various cultural models such as the Bradley curve and the Hudson safety ladder, such events would naturally be brought to light and even corrected by employees themselves. But few safety professionals would claim their organisations had reached such an enlightened state and, in the meantime, the ability to identify hazard hotspots gives them a richer dataset without the impossible task of personally scanning hundreds of hours of footage.

Cultural sensitivity

However, where such monitoring takes place day-to-day, what the organisation does with the new data is a sensitive issue. The EU-OSHA policy briefing warns that “ethical decisions and effective strategies and systems are needed for handling the large quantity of sensitive personal data that can be generated” and that “it is important to ensure transparency in collecting and using such data, and workers and their representatives should be empowered through the same access to information.” The makers of surveillance-based software stress that AI should not be employed to support a “blame culture” that penalises individuals, but to find ways to improve arrangements or training and to involve the workforce in solving the problems the AI highlights.

Bridget Leathley, a writer, consultant and trainer who studies safety and health technology, agrees that though real-time monitoring could be useful for urgent interventions in particularly hazardous situations, its data may more often give clues to underlying problems that safety practitioners can tackle. “Is it picking up an employee on day one doing something risky, so you can say ‘stop doing that’?” asks Leathley. “Or is it picking up that the reason the employee is doing that is that all his colleagues are doing it in that area, so it’s a design problem or a supervision problem, not a problem with the employee?”

Chat-based LLM engines such as Microsoft’s Bing, Google’s Bard and OpenAI’s ChatGPT are trained on large volumes of data from hundreds of millions of publicly available web pages. Little of this material contains detailed information about controlling hazards, and the AI cannot adjust for local circumstances beyond its database, so it would be inadvisable to try to use it to write, say, a method statement for working on a particular fragile roof. This is especially true as chat-based AI is programmed to sound convincing however little data it has to draw on and will not offer a potential margin of error for any information it serves up. This unreliability of chatbots in providing risk management information means that though AI could replace some administrative jobs, safety and health practitioners are unlikely to see their work taken over by software in the foreseeable future.

For the time being, chat-based AI may be most useful to OSH professionals in gathering tips on so-called non-technical competences, such as influencing skills and building a convincing business case for a finance director. These softer skills are often promoted as important for practitioners wishing to have a greater impact in their organisations, and there is plenty of this sort of general management information in the pages chat-based AI draws on.

Advances in software (AI) could combine with developments in hardware (such as robots) to improve the automation of hazardous tasks. A UK government-funded programme backed by more than £26 million is currently under way to investigate the use of robots to replace humans in hazardous environments – in confined spaces, at height and underwater – mending welds on pipelines and wind turbines, for example. AI’s capacity to make decisions and learn from experience would allow such machines to work largely unsupervised and to improve their own performance incrementally.

Bigger data

Another future development, though one that would take serious investment from a software provider and the cooperation of groups of corporations, would be AI that could examine multiple databases in a company – safety, HR, inventory management and facilities systems, for example – to look for patterns that might coincide to increase risk levels: a rapid increase in flammable raw materials stored at a site where half the fire marshals had left the organisation, for instance.
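A minimal sketch of that kind of cross-database check appears below, with two made-up tables standing in for extracts from inventory and HR systems. All the table contents, column names and thresholds are invented for illustration.

# Sketch of a cross-database risk check: flag sites where flammable stock has
# jumped while fire marshal cover has fallen. All data here is invented.
import pandas as pd

inventory = pd.DataFrame({
    "site": ["Derby", "Leeds", "Cardiff"],
    "flammables_tonnes_prev_qtr": [12, 30, 8],
    "flammables_tonnes_this_qtr": [35, 31, 7],
})
hr = pd.DataFrame({
    "site": ["Derby", "Leeds", "Cardiff"],
    "fire_marshals_prev_qtr": [6, 4, 3],
    "fire_marshals_this_qtr": [3, 4, 3],
})

merged = inventory.merge(hr, on="site")
# A site is flagged when flammable stock rose by more than half while the
# number of trained fire marshals fell.
risky = merged[
    (merged["flammables_tonnes_this_qtr"] > 1.5 * merged["flammables_tonnes_prev_qtr"])
    & (merged["fire_marshals_this_qtr"] < merged["fire_marshals_prev_qtr"])
]
print(risky["site"].tolist())   # -> ['Derby']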

Bridget Leathley suggests another potential use of AI would be to defend against the loss of organisational memory that can result in important safety controls being diluted or removed because those who knew why they were adopted have moved on or retired. “Computers, allegedly, don’t forget,” she says. “So, if you’re removing a safety measure like training because it feels unnecessary, the AI could check and say ‘this thing happened before and we said we must not leave refresher training longer than six months’.”
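In code, that safeguard could be as simple as a registry that keeps the recorded rationale for each control and raises it whenever someone proposes retiring one, as in the sketch below. The control names and rationales are invented for illustration.

# Sketch of an "organisational memory" check: before a safety control is
# retired, surface the recorded reason it was introduced. Contents are invented.
controls = {
    "refresher_training_6m": (
        "2019 forklift collision investigation: refresher training must not "
        "be left longer than six months."
    ),
    "hot_work_permit": "Introduced after a 2017 roof fire during welding work.",
}

def request_removal(control_id: str) -> None:
    rationale = controls.get(control_id)
    if rationale:
        print(f"WARNING: '{control_id}' has a recorded rationale:")
        print(f"  {rationale}")
        print("Removal needs a documented review, not a feeling it is unnecessary.")
    else:
        print(f"No recorded rationale for '{control_id}'; apply the normal review.")

request_removal("refresher_training_6m")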

Whatever purposes we imagine for AI will be outstripped by the reality as developers spot the niches where its capacity for synthesising huge volumes of information yields valuable data to make businesses more efficient. We can only hope that some of those developers are mindful of the potential benefits to worker protection the technology can offer.
 

  
Louis Wustemann


Louis Wustemann is a writer and editor on sustainability and health and safety. He was previously Head of Regulatory Magazines at LexisNexis UK, publishing IOSH Magazine, Health and Safety at Work magazine and The Environmentalist among other titles. He is a trustee of the One Percent Safer Foundation.

 
 
 
