Cyber market must watch AI regulation closely

By Michael Loney

May 23 - (The Insurer) - The growing amount of proposed AI-related regulation could increase exposure for buyers of cyber insurance and their carriers.

A recent Arthur J Gallagher article written by John Farley, managing director of the broker’s cyber liability practice, said that AI-specific regulatory proposals at the state, federal and international levels bear watching.

Nearly half of U.S. states have proposed or adopted AI governance legislation.

They include Colorado, which passed the first omnibus-style AI regulation, the Colorado Artificial Intelligence Act, taking effect in February 2026.

The article said that this act is likely to set the tone for other states considering similar regulation.

The Colorado legislation is aimed at regulating high-risk systems and preventing algorithmic discrimination, with distinct responsibilities for developers and for deployers of AI systems.

It focuses on AI systems that influence consequential decisions with a material impact on employment, financing, health services, housing or insurance.

AI developers will be required to maintain accountability for known or foreseeable risks within their AI systems, and to report to the Colorado attorney general and known deployers within 90 days of discovering, or being made aware of, algorithmic discrimination.

AI deployers will be required to exercise reasonable care when using AI systems, implement risk management programmes, conduct periodic risk assessments, provide individuals with mechanisms to contest decisions and meet certain reporting requirements.

The Gallagher article outlined four general trends across the other states' bills: consumer protections when AI is used for profiling and automated decisions; the use of AI in hiring and employment contexts; deceptive media or "deepfakes"; and the formation of AI task forces or groups devoted to understanding AI's impacts.

“Ultimately, we expect the trajectory of AI regulation to mirror the evolution of recent data privacy laws across the U.S.,” the article said.

More than 100 AI-related bills have been introduced in Congress, mostly focusing on transparency and accountability aimed at consumer protection, with some targeting specific industries, including marketing, healthcare and education.

The Federal Trade Commission has issued guidelines on AI transparency and accountability that stress the need for clear documentation and consumer consent.

In addition, the Gallagher article highlighted that the National Institute of Standards and Technology continues to play a pivotal role in developing AI governance and technical standards, including guidelines for privacy-enhancing technologies.

The article also highlighted regulation that has been introduced internationally.

It said the EU AI Act “is one of the few comprehensive AI laws and has set the global benchmark to focus on preventing AI risks and harms”. The act classifies various AI systems based on the level of risk and imposes specific obligations accordingly.

The EU's General Data Protection Regulation has had a greater influence, however, with many countries opting to amend existing laws or adopt frameworks for AI governance.

In addition, the Organisation for Economic Co-operation and Development has established international principles for AI that serve as a framework for member countries to develop regulations. These principles focus on transparency, accountability and human rights.

AI BRINGS EMERGING RISK FOR CYBER CARRIERS

Talking to Cyber Risk Insurer, Farley said there are more than 200 AI-related claims being litigated in the courts. He added that AI is already leading to some losses for the cyber market.

Farley said that any new technology brings emerging risk. With AI, data bias claims may emerge where there is unintentional discrimination against a demographic based on the output of an AI platform.

“You're allowing a machine to make decisions for your organisation, but that machine is an agent of your organisation and therefore you may be subject to a discrimination lawsuit,” he said. “If that's the case, you could see class actions.”

Farley said that it is not just cyber insurance that could be affected.

A wide variety of losses can arise from AI systems, potentially triggering cyber and tech E&O, employment practices liability, product liability, medical malpractice and D&O policies, among others.

Farley gave the example of a product liability policy potentially coming into play if a manufacturer relies on AI to help produce products and the output is flawed.

“So it transcends beyond cyber, and a risk manager really has to look at it with a pretty wide lens if they're going to engage it and they'll need to implement robust policies around the usage of AI, who can access it and for what purposes, and whether it's embedded into the overall data governance plan,” Farley said.

When asked whether cyber carriers are taking underwriting actions in response, Farley said: “I think they're at the beginning of that. They may start to ask more questions, but it's not something that's as focused on as, say, certain cybersecurity controls like MFA, backups and patch management. Those are the questions you're going to get almost every time.

“AI is sort of trailing there, but I think we'll see more of it as time goes on.”

CARRIERS MAY REDEFINE COVERAGE PARAMETERS

The Gallagher article highlighted that organisations may face challenges in securing cyber insurance coverage for AI-related regulatory claims as regulations evolve.

“Insurers may need to redefine coverage parameters to address AI-specific risks, such as algorithmic discrimination and high-risk system failures,” the article said.

Heightened regulatory risk has already spurred some cyber insurers to take steps to limit cascading losses from regulatory exposure around the use of technology, and the Gallagher article said that “AI will only elevate that focus”.

Some carriers have already modified cyber insurance policy language to restrict or even exclude coverage for certain incidents that give rise to costs incurred for regulatory investigations, lawsuits, settlements and fines.

Gallagher also said that coverage terms and claims adjudication could be affected by how liability is apportioned between developers and deployers of AI systems.

It added that policies may adapt to cover some costs for compliance with new AI regulations, including AI risk assessments and reporting requirements.
