Friday, April 10, 2026

OpenAI backs Illinois bill to shield AI firms from harm lawsuits

by admin7


An Illinois state bill that limits when AI developers can be sued over catastrophic harm has gained a notable backer: OpenAI, according to Wired. Under the measure, liability protection applies only to companies that neither intentionally nor recklessly caused the harm in question and that have made safety and transparency reports publicly available.

SB 3444, known as the Artificial Intelligence Safety Act, defines “critical harms” as events such as the death or serious injury of 100 or more people, at least $1 billion in property damage, or a bad actor using AI to develop a chemical, biological, radiological, or nuclear weapon. Coverage under the bill is tied to a model’s training expense: any system built on more than $100 million in compute qualifies as a frontier model, a bar that Wired reports would rope in the country’s biggest AI developers, among them OpenAI, Google, Anthropic, xAI, and Meta.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses — small and big — of Illinois,” OpenAI spokesperson Jamie Radice said in a statement to Wired.

In testimony supporting the bill, OpenAI’s Caitlin Niedermeyer argued against a “patchwork of inconsistent state requirements” and called for a federal framework instead. The bill itself would cease to apply if Congress enacts overlapping federal rules.

AI companies have poured significant resources into shaping AI policy at both the state and federal levels. OpenAI, Meta, Alphabet, and Microsoft collectively spent $50 million on federal lobbying in the first nine months of 2025, according to Issue One, a nonpartisan group that tracks money in politics. OpenAI has said it will open its first Washington, D.C., office at the start of 2026.

No federal law has yet resolved who bears responsibility if an AI system triggers a large-scale disaster, and Congress shows little sign of closing that gap anytime soon. States including California and New York have passed laws requiring AI developers to submit safety and transparency reports, and lawmakers across the country continue to advance competing regulatory frameworks in the absence of federal action.



