Repairer Driven News

P&C executives discuss AI use, regulation in insurance with Triple-I and SAS


A new whitepaper from the Insurance Information Institute (Triple-I) and SAS takes the stance that property and casualty insurers are uniquely positioned to provide guidance to regulators and advance the conversation for ethical uses of artificial intelligence for all businesses.

“Pioneering Ethical AI: The Crucial Role of Property and Casualty Insurers” states that, as a first step, insurers should be expected to lead by example by developing detailed plans to deliver ethical AI in their operations.

“This will position them as trusted experts to help lead the wider business and regulatory community in the implementation of ethical AI,” the paper states.

According to the report, AI regulation is currently “fractured” and geographically based, mostly varying by state, but also by country.

“The need for leaders who understand risk and regulation is imperative and it’s a role for which P&C insurers are uniquely qualified,” Triple-I and SAS said in a news release. “Insurers leverage data but also understand the importance of imagination in navigating a hard-to-predict world. Insurers anticipate risks that have yet to materialize. Their forward-looking approach is crucial in the emerging AI landscape.

“Insurers understand the power of data… They also understand data’s limitations, and they have strategies for when these limits are reached.

“Insurers navigate a complex web of regulatory environments… This knowledge is vital in shaping AI regulations that are effective, adaptable and implementable.”

Peter Miller, president and CEO of The Institutes (of which Triple-I is an affiliate), added, “Insurers are uniquely positioned to help people and businesses maximize the opportunities of AI while safeguarding against the risks. A forward-looking approach is essential as we navigate this transformative landscape, ensuring that AI benefits society as a whole.”

During a panel discussion webinar on the whitepaper and ethical AI, Miller stated that the technology “provides immense opportunities and significant risks.”

“Generative AI is enabling us to move from detecting and repairing after a loss occurs to being able to predict and prevent the loss from ever happening,” he said. “In general, AI is enabling efficiencies across all aspects of the risk management and insurance value chain. Currently, at The Institutes, we have more than 40 AI projects underway stretching across our entire value chain and affecting every level of the organization. Most of those are designed, initially, to make administrative tasks more efficient.”

Miller said projects include automating some software creation operations via AI and using generative AI to apply bill payments to the correct invoices.

Panelists included Jennifer Kyung, USAA P&C underwriting vice president; Iris Devriese, Munich Re underwriter and AI liability lead; and Matthew McHatten, MMG Insurance president and CEO.

“Information ingestion and digestion is a major issue for our industry — a lot of read-only data, significant medical and legal documents, and looking through that for salient information so you send that out into customer service transactions and underwriting transactions,” McHatten said.

“There’s a lot of additional time in nearly everything we do; a lot of back and forth, which is very frustrating and will be increasingly less tolerated by the consumer as these applications get better and better. In terms of what the consumer is faced with otherwise outside of insurance, they’re going to hold us to the same standards, so we need to catch up.”

McHatten added that insurance company employees can benefit from AI by using it to efficiently gather more information from customers without duplication.

“There’s significant potential there for consistency, accuracy, and a better customer experience,” he said.

Kyung said USAA is using AI to help underwriters identify exposures, for example, by looking at aerial images.

“AI can take a look at those aerial images and interpret them and determine if there’s a potential condition concern for our members,” she said. “If so, we can trigger an inspection or we can reach out to those members and have a conversation around mitigation or inquire around that.”
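The triage flow Kyung describes — a model scores an aerial image, and the score determines whether to trigger an inspection or start a mitigation conversation — can be sketched in simplified form. This is an illustrative sketch only, not USAA's system: the model call is stubbed out, and all names and thresholds are invented.

```python
# Hypothetical sketch of routing an aerial-image risk score to follow-up.
# The vision model itself is stubbed; scores and thresholds are invented.
from dataclasses import dataclass


@dataclass
class RoofAssessment:
    policy_id: str
    condition_risk: float  # 0.0 (sound) .. 1.0 (severe concern), from an aerial-image model


def route_assessment(a: RoofAssessment,
                     inspect_at: float = 0.8,
                     outreach_at: float = 0.5) -> str:
    """Map a model score to a human follow-up action (hypothetical thresholds)."""
    if a.condition_risk >= inspect_at:
        return "trigger_inspection"
    if a.condition_risk >= outreach_at:
        return "member_outreach"  # conversation around mitigation
    return "no_action"


assessments = [
    RoofAssessment("P-1001", 0.92),
    RoofAssessment("P-1002", 0.61),
    RoofAssessment("P-1003", 0.12),
]
for a in assessments:
    print(a.policy_id, route_assessment(a))
```

The point of the sketch is the division of labor: the model only produces a score, and a transparent rule decides which human workflow (inspection or outreach) follows.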

USAA also uses AI to read call transcriptions, identifying and categorizing attributes such as tone and empathy, and uses the results to improve quality and service for its members, Kyung said.

“As we go forward, it [AI] really will help us look at much broader and larger swaths of unstructured data so where there’s many notes in the claims file, we can take a look at those and determine if there’s underwriting actions or alerts that we need to follow up on,” she said. “Or similarly with inspections, we can read through large amounts of data with the AI that will help the underwriter as an aid. I think it’s also really important to think about AI truly as an aid or additional insight, as opposed to a black box.”
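The workflow Kyung describes — surfacing underwriting alerts from unstructured claims notes while keeping the system an aid rather than a black box — can be sketched with a deliberately simple, transparent stand-in for the AI she mentions. Every flag carries the matched text so the underwriter can verify it; the alert types and patterns here are invented for illustration.

```python
# Hypothetical sketch: flag underwriting alerts in free-text claim notes.
# Pattern-based on purpose, so each alert is traceable to the exact phrase
# that triggered it — an aid to the underwriter, not a black box.
import re

ALERT_PATTERNS = {
    "unpermitted_work": r"\bunpermitted (?:addition|work|renovation)\b",
    "prior_water_damage": r"\bprior water damage\b",
    "vacancy": r"\bhome (?:is|was) vacant\b",
}


def scan_notes(notes: str) -> list[tuple[str, str]]:
    """Return (alert_type, matched_text) pairs for an underwriter to review."""
    hits = []
    for alert, pattern in ALERT_PATTERNS.items():
        m = re.search(pattern, notes, flags=re.IGNORECASE)
        if m:
            hits.append((alert, m.group(0)))
    return hits


notes = ("Adjuster observed an unpermitted addition; "
         "claimant mentioned prior water damage in 2021.")
print(scan_notes(notes))
```

A production system would replace the keyword patterns with a language model, but the output shape — an alert plus the evidence behind it — is what keeps the tool an "additional insight" in the sense Kyung describes.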

In a recent survey of insurance leaders by SAS, 60% reported that their organizations had already begun using generative AI, and 90% said they plan to invest in GenAI in the next fiscal year.

“Insurance plays a crucial role in protecting lives, livelihoods and businesses around the world,” said Reggie Townsend, SAS data ethics vice president, in the release. “The global nature and influence of the sector positions insurers to model practices that emphasize responsible innovation. This is why trust is so essential, and the deployment of AI in a trustworthy manner is critical.”

Triple-I and SAS’ joint report recommends that, in the near term, insurers should strive to:

    • Implement an ethical AI framework
    • Cultivate AI literacy in their executives
    • Educate their employees on the proper use of AI, including privacy protection and systems security
    • Become active in ethical AI initiatives
    • Implement risk management consulting with policyholders around AI risk
    • Communicate their actions both internally and externally

“The accelerating adoption of AI presents an opportunity for P&C insurers to once again lead in a time of technological disruption,” said Michael “Fitz” Fitzgerald, SAS insurance industry advisor, in the release. “As regulation will undoubtedly evolve, insurers can leverage their unique risk insights, regulatory expertise, and historical data proficiency to lead all industries to develop, deploy and use AI in ways that are ethical, trustworthy and transparent.”

The panel agreed that regulators, in addition to P&C company staff and executives, should be educated on AI so that effective AI regulation can be developed.

“If we don’t educate, the tendency will be to overregulate,” McHatten said. “I think we all need to ask ourselves the question relative to that risk of what else could be regulated? Much of it has been pricing but could it be process? There’s a tremendous amount of downside here just by virtue of how rapidly this will be adopted and then the upstream of how fast it will change things… regulation could be added upon what we already have which obviously makes it onerous for us to do business and impacts, ultimately, the value we’re trying to deliver to the consumer.”

According to the National Conference of State Legislatures, at least 45 states, Puerto Rico, the Virgin Islands, and Washington, D.C. have introduced bills regarding AI ethics and specific uses this year. Thirty-one states, Puerto Rico, and the Virgin Islands have also adopted resolutions or enacted legislation. NCSL has a separate database that tracks legislation on AI in relation to autonomous vehicles.

A Sept. 12 article by law firm partners of Goodwin Procter states that Colorado became the first state to pass a comprehensive piece of AI legislation earlier this year. Unlike most other states and territories, Colorado’s law applies across industries and to algorithmic discrimination in several areas such as employment, insurance, and housing.

More specifically, the new law aims to protect consumers from bias and discrimination in AI systems but it won’t take effect until 2026. Colorado’s act resembles the EU AI Act, which is known as the world’s first AI regulation, the Goodwin Procter article states.

“Both laws are broad, applying to developers and deployers of AI systems used for a range of purposes. Like the EU AI Act, the Colorado AI Act focuses on regulating high-risk AI systems… It mandates that deployers inform consumers when high-risk AI systems are used to make decisions that might, for instance, affect consumers’ ability to get a loan.

“The Colorado AI Act’s emphasis on regulating high-risk AI systems could guide other states in identifying which AI applications to prioritize for regulation. Indeed, though the Colorado AI Act is the first state-level comprehensive AI law, it likely won’t be the last, particularly amid an absence of federal AI legislation.”

A bill passed by California’s legislature in August, reported by Reuters as “hotly contested,” requires Gov. Gavin Newsom’s signature or veto by Sept. 30.

“Tech companies developing generative AI, which can respond to prompts with fully formed text, images, or audio as well as run repetitive tasks with minimal intervention, have largely balked at the legislation, called SB 1047, saying it could drive AI companies from the state and hinder innovation,” the Reuters article says.

The bill would enact the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” which would, among other actions, require developers to comply with various requirements before beginning to train a covered model. The proposed requirements include implementing the capability to promptly shut down the model and maintaining a separate, written safety and security protocol.

Some U.S. congressional Democrats have opposed the bill, while proponents include Tesla CEO Elon Musk, who also runs the AI firm xAI.

Images

Featured illustration: Poca Wander Stock/iStock

Graph created by NAIC (Webinar screenshot)
