Anthropic is opposing an Illinois bill that would grant AI labs substantial liability protections, even in cases of critical harm such as more than 100 deaths or over $1 billion in property damage. The legislation, backed by OpenAI, highlights a divide in the AI industry, with safety-focused companies like Anthropic pushing back against broad exemptions. Separately, Anthropic has recently taken steps to advance AI safety, including signing a memorandum of understanding with the Australian government to support safety research.
OpenAI: OpenAI is a research organization developing advanced AI models including ChatGPT and GPT series to ensure artificial general intelligence benefits humanity. It supports safety initiatives such as the recently launched Safety Fellowship for alignment research. OpenAI testified in favor of an Illinois bill limiting AI developer liability unless models are directly instructed to cause harm.
Anthropic: Anthropic is an AI safety and research company building reliable, interpretable, and steerable AI systems such as the Claude models. It emphasizes mitigating AI risks through efforts like government collaborations on safety and contributions to open-source security. Anthropic opposes an Illinois bill backed by OpenAI that would shield AI labs from liability even for critical harms like mass casualties.
```json
{
  "Bill Provisions": "The proposed Illinois legislation aims to exempt AI firms from liability for model-induced harms, except in cases of explicit promotion of misuse.",
  "Industry Divide": "Leading AI companies have differing views on liability protections, with some labs prioritizing safety and opposing broad exemptions.",
  "Recent Safety Efforts": "Anthropic has recently engaged in a collaboration with the Australian government to promote AI safety research."
}
```
