Key takeaways
- AI differs from older computer programs in its ability to learn, adapt and respond with a degree of autonomy.
- Safety and health applications of AI include robotic exoskeletons to prevent musculoskeletal injuries in workers performing heavy lifting, and virtual reality safety training.
- Barriers to implementation include cost, data quality and worker resistance.
Since OpenAI released ChatGPT in November 2022, the buzz around artificial intelligence has reached fever pitch. AI's potential seems limitless, evoking a range of reactions from astonishing optimism to apocalyptic nightmares (especially after hundreds of technology leaders signed a public statement in May 2023 warning that AI is an existential threat to humanity).
In reality, AI isn't just a tool; it's a general-purpose technological advance like electricity or the internet, with the same potential to change the world, says Cam Stevens, CEO of the Pocketknife Group, a consulting firm focused on the intersection of technology and workplace safety.
“AI is an umbrella term for a field of computing dedicated to creating systems that can perform tasks that typically require some form of human intelligence,” Stevens explains. “AI is one of the technology megatrends that will shape the future of work.”
This includes the future of workplace safety.
How is AI being used today?
What distinguishes AI from older computer programs is its ability to learn, adapt, and respond with a degree of autonomy.
Still, AI isn't all that new: For decades before ChatGPT exploded into popularity, AI had been quietly helping us plan driving routes with GPS, secure our smartphones with facial and fingerprint recognition, and correct the spelling in our texts and emails.
But in recent years, rapid advances and heavy investment in AI have led professionals in every field, including occupational safety and health, to explore how the technology could revolutionize their jobs.
This has produced a wealth of innovative health and safety applications, such as robotic exoskeletons to prevent musculoskeletal injuries, smart helmets that monitor vital signs and working conditions, and virtual reality safety training. For now, though, most of these applications remain experimental or are used only on a small scale.
“There's a lot of promise in emerging technology areas, but generative AI is the form that's accessible to everyone and is what's primarily being used in workplaces,” says Jay Vietas, chief of NIOSH's Emerging Technologies Branch. “You can ask an AI to create a health and safety plan for your area. For example, you can ask it what the electrical safety risks are, how to design a lockout/tagout program, and so on.”
“Some will argue that this is just a beefed-up Google search, but health and safety experts should go back and make sure the information provided is appropriate and applicable to their specific location.”
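For readers who want to see what that looks like in practice, here is a minimal sketch of such a request made through the OpenAI Python client. The model name and prompt wording are illustrative assumptions rather than a recommendation of any particular service, and the output is only a first draft for a qualified professional to verify.

```python
# Minimal sketch: asking a generative AI service for a first-draft
# lockout/tagout program outline. The model name and prompt are
# illustrative assumptions; any comparable service would work.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice of model
    messages=[
        {"role": "system",
         "content": "You are assisting a certified safety professional."},
        {"role": "user",
         "content": ("Draft an outline for a lockout/tagout program for a "
                     "small metal-fabrication shop, and list the standards "
                     "a reviewer should check it against.")},
    ],
)

# As Vietas stresses, a qualified person must still review the draft for
# accuracy and local applicability before anyone relies on it.
print(response.choices[0].message.content)
```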
Getting up to speed on AI
If you feel like you're lagging behind when it comes to artificial intelligence, you're not alone. As Cam Stevens, CEO of the Pocketknife Group, a consulting firm focused on the intersection of technology and workplace safety, points out, the technology advances so quickly that academic papers on AI are often outdated before they're even published.
Jay Vietas, chief of NIOSH's Emerging Technologies Branch, advises safety professionals to arm themselves with a working knowledge of how AI systems function and best practices for designing, implementing and maintaining them.
“The more we understand how these systems work and how they translate to improved workplace safety, the more effective they will be,” Vietas says.
Books
“Co-Intelligence: Living and Working with AI” by Ethan Mollick
“Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI” by Reid Blackman
“Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence” by Kate Crawford
Government Resources
National Institute of Standards and Technology AI Risk Management Framework
“Blueprint for an AI Bill of Rights”
“EU AI Act: First Regulation on Artificial Intelligence”
European Agency for Safety and Health at Work report
Email newsletters
Charter Work Tech
One Useful Thing
Superhuman AI
What's on the horizon?
Other AI applications that Stevens expects to become more widely used include computer vision and natural language processing.
Computer vision can leverage existing closed-circuit television cameras to monitor safe work procedures and alert workers to hazards such as potential human-forklift contact on the factory floor.
“We train machine learning algorithms that essentially identify the same patterns that a human would look for, but without the human having to be there,” Stevens said. “The machine learning algorithms are applied to thousands of hours of footage to identify patterns and provide insights that we can use to take action.”
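To make the idea concrete, here is a minimal sketch of a person-forklift proximity alert built on an off-the-shelf object detector (the Ultralytics YOLO library). The weights file, video file, class names and pixel threshold are hypothetical placeholders; a real system would be trained and tuned on the site's own footage, as Stevens describes.

```python
# Minimal sketch: flagging frames where a worker gets close to a forklift
# in CCTV footage. Assumes a YOLO model fine-tuned on site footage with
# "person" and "forklift" classes (the weights file is hypothetical).
from ultralytics import YOLO

model = YOLO("site_safety.pt")      # hypothetical custom weights
ALERT_PIXELS = 150                  # proximity threshold; tune per camera

def centers(result, wanted):
    """Return (x, y) centers of detections whose class name matches `wanted`."""
    points = []
    for box in result.boxes:
        if result.names[int(box.cls)] == wanted:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            points.append(((x1 + x2) / 2, (y1 + y2) / 2))
    return points

# Process the recording frame by frame (a live camera stream works the same way).
for frame_result in model("warehouse_cam.mp4", stream=True):
    people = centers(frame_result, "person")
    forklifts = centers(frame_result, "forklift")
    for px, py in people:
        for fx, fy in forklifts:
            if ((px - fx) ** 2 + (py - fy) ** 2) ** 0.5 < ALERT_PIXELS:
                print(f"ALERT: worker within {ALERT_PIXELS}px of a forklift")
```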
Natural language processing has a wide range of uses that can benefit safety professionals, including recording meetings and coaching conversations (with consent, of course) and providing summaries, notes, interpretation of tone and dynamics of dialogue, and on-the-fly translation.
“For organizations with a multilingual workforce, the ability to provide real-time language translation of health and safety or work-related information, typically via smartphone, is crucial,” Stevens says.
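As a rough illustration of the translation piece, here is a minimal sketch using an open English-to-Spanish model from Hugging Face (Helsinki-NLP/opus-mt-en-es). The notice text is invented, and a real deployment would add speech recognition, more language pairs and human review.

```python
# Minimal sketch: translating a short safety notice into Spanish with an
# open Hugging Face model (requires the transformers and sentencepiece packages).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

notice = (
    "Lockout/tagout is required before servicing the conveyor. "
    "Report any missing machine guards to your supervisor immediately."
)

# The pipeline returns a list of dicts with a "translation_text" field.
print(translator(notice)[0]["translation_text"])
```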
What does the future hold?
The potential applications of AI in safety are many and varied, making them difficult to predict. “When the internet first came out, it was very hard to predict exactly how it would be used,” Stevens points out.
He believes future AI solutions will allow individual workers to receive “hyper-personalized” safety training in a format that optimizes their learning (for example, a comic book in Spanish), allowing them to make informed safety decisions.
“I think the real power will come when we can put artificial intelligence solutions into the hands of frontline workers,” Stevens says, “to help them have the right information at the right time, and have everything they need at their fingertips to enhance their decision-making.”
What are the barriers and risks?
“AI has a wide range of safety applications, from faster and better analysis of the workplace, ergonomics and hazards to continuously monitoring and adjusting work interfaces and day-to-day decision-making,” said John Dony, vice president of workplace strategy at the National Safety Council.
So what barriers and risks stand between safety professionals and these possibilities?
Cost: One reason generative AI is so widely used among safety professionals is that it’s typically low-cost or free, Vietas said. The issue isn’t just the monetary cost of other AI tools, but also the resources needed to program, customize and implement them, as well as training workers to use them.
“But as computing power continues to grow and investments in artificial intelligence systems increase, I believe the costs will become more reasonable in the near future,” Vietas argues.
Lack of high-quality data: In computer science, they say “garbage in, garbage out,” and unfortunately, much of the health and safety data currently available to train AI systems is of low quality, Stevens noted.
“It's usually imperfect,” he continues. “It may have pretty significant biases. It may not be robust, accessible, preserved, secure, properly protected, anonymized, or private. No matter how sophisticated these AI technologies and tools become, it's foolish to think that you're going to get good results if you use poor quality health- and safety-related data or work data to feed and train these tools.”
Cybersecurity and privacy: “Many of the freely available AI tools also raise security concerns for organizations with proprietary or personally identifiable information,” says Dony. “Currently, only larger organizations or those that are early adopters of the technology are willing to purchase secure, in-house versions.”
Potential for bias and unfairness: Because generative AI relies on existing datasets, it increases the risk that it will reflect the biases and stereotypes of the humans who created that content, resulting in unfair outcomes. “If you design an AI system for one environment, for one group, it might work really well,” Vietas notes. “But if you decide to deploy it in a new environment with a completely different workforce, you shouldn't expect to get the same results.”
Worker backlash: Uneasiness about AI in the workplace stems from a few sources:
- Lack of knowledge about the technology and how it works
- Fear among workers of being replaced by AI technology or pushed into meaningless roles
- Anxiety about learning to use new tools and keeping up with technological changes
- Concerns about privacy violations, “Big Brother”-style surveillance and how data is used
Stevens says transparency and worker involvement can ease resistance.
“Organizations need to be clear about what AI means for their business: what applications (in simple terms) these solutions are being used in, how those tools are being trained, how employees are expected to interact with them, and how their work is expected to change because of it,” he adds. “And employees need to have a say in designing the implementation and adoption strategy for those tools.”
NSC and AI
Read more about artificial intelligence, technology and the future of work from the National Safety Council.
Do we still need humans? (Yes)
Ultimately, the real dangers of integrating AI into workplace health and safety lie not in the technology but in the humans who (mis)use it, especially if they don’t recognize that AI still requires significant human direction, training and oversight.
“There's a danger in relying too heavily on any tool or system, no matter how powerful, and the same is true with AI,” Dony says. “Organizations and people need to find a mutual balance and comfort zone where they view AI tools as reliable and effective (but not foolproof) guidance, and use them to act strategically and tactically more quickly and thoroughly than ever before.”
“Once this balance is achieved, there is a great chance that AI will have a real and lasting impact on safety – a true enabler of a future where no one loses their life in the workplace.”