Repairer Driven News

Connecticut’s major AI bill passes Senate, has little time to pass House

Legal

Connecticut’s Senate recently passed what the Associated Press is calling “one of the first major legislative proposals to rein in bias” in AI decision-making in the nation.

The bill passed the Senate 24-12 on April 25, leaving little time for the House to review the controversial measure before the legislature adjourns May 8.

Under SB2, developers of “high-risk” AI in the state would be required to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination, the bill says.

High-risk AI is defined as an AI system that has been specifically developed and marketed, or intentionally and substantially modified, to make or be a controlling factor in making a consequential decision. 

A consequential decision is defined as a decision that has a material legal or similarly significant effect on any consumer’s access to, or the availability, cost, or terms of, any criminal justice remedy; education enrollment or opportunity; employment or employment opportunity; essential goods or services; financial or lending services; essential government services; health care services; or housing, insurance, or legal services.

Developers also would be required to make a general statement of the AI’s intended use and to provide documentation to any deployers disclosing any known or reasonably foreseeable limitations, including but not limited to discrimination. The documentation would be required to include the AI’s purpose, intended benefits, training data, evaluation results, data governance practices, intended outputs, mitigation measures, and how the system will be monitored.

The bill also requires developers to make certain information public, such as how the developer manages known or foreseeable risks of discrimination. 

Developers also would be required to disclose all deployers of the AI to the state’s attorney general. Other information, including known or foreseeable risks, also will need to be provided to the attorney general. 

The bill also requires deployers of AI to meet a number of requirements, including an impact assessment, a statement disclosing the AI’s purpose and intended use, an analysis of whether the AI poses any known or foreseeable risks, an overview of the data categories the AI uses, the metrics used to evaluate its performance, and post-deployment monitoring.

Deployers also would be required to review their AI annually, along with notifying consumers in plain language about the AI’s purpose and how it is used in decision-making. 

The bill includes numerous other steps deployers must take if using a high-risk AI. 

“I think that this is a very important bill for the state of Connecticut. It’s very important I think also for the country as a first step to get a bill like this,” Sen. James Maroney, the bill’s author, told the AP. “Even if it were not to come and get passed into law this year, we worked together as states.”

Senate Minority Leader Stephen Harding expressed concerns to the AP that the bill could be detrimental to businesses and people. 

“I think our constituents are owed more thought [and] more consideration to this before we push that button and say, ‘This is now going to become law,'” he said.

The bill was created out of a two-year AI task force and a year’s worth of collaboration among bipartisan legislators, the AP says. 

Washington state recently passed a bill creating a 19-member task force to study AI, which is required to make legislative recommendations by July 1, 2026.

The bill, SB5838, was proposed by Attorney General Bob Ferguson and sponsored by Sen. Joe Nguyen (D-District 35). It passed the Senate Feb. 8 with a 31-18 vote. It passed the House 68-28 and was signed by Gov. Jay Inslee March 18. 

“AI is becoming a part of our daily lives, and it’s our duty to immediately begin working in a thoughtful way to ensure we protect Washingtonians against this technology’s risks while maximizing its benefits,” Ferguson said in a press release. “I appreciate the Legislature’s partnership, and I look forward to launching an inclusive task force that will develop recommendations to guide public policy in this important arena.”

Another AI bill, SB972, proposed earlier this year in Florida by Sen. Joe Gruters (R-District 22) won’t be moving forward. The bill would have designated the Department of Management Services to oversee an Artificial Intelligence Advisory Council.

The Florida bill died in the Governmental Oversight and Accountability Committee March 8.

The National Association of Insurance Commissioners (NAIC) late last year approved a model bulletin on the use of AI by insurance companies during the association’s 2023 Fall National Meeting.

It outlines the need for processes and controls to prevent possible AI inaccuracies, discriminating biases, and data vulnerabilities.

A Collision Industry Conference (CIC) committee recently shared practical uses for the technology in collision repair shops, such as assistance with legal questions, customer experience, marketing, and grammar.
IMAGES

Photo courtesy of Sean Pavone/iStock
