Sam Otto, business administration junior and co-founder of the vibecoding club, finds artificial intelligence fascinating. When he took BUS 404, Governmental and Social Influences on Business, he learned about a new state bill, the Transparency in Frontier Artificial Intelligence Act, or SB 53.

Signed into law by Gov. Gavin Newsom, SB 53 will require the largest AI companies, such as Meta AI and OpenAI, to publish safety frameworks and transparency reports. Companies must post statements on their websites about catastrophic risks or critical safety incidents caused by their models, and the law includes whistleblower protections, as previously reported by CalMatters.

Catastrophic risks are defined as an AI model contributing to the death or serious injury of 50 or more people, or to more than $1 billion in damages. Developers must report incidents within 15 days, or within 24 hours, depending on their severity.

By analyzing the bill’s requirements, Otto has come to understand the legal side of AI. Experiences in and out of the classroom have led him to hackathons at Duke University and to his position as vice president of external relations for the vibecoding club. The club teaches students from different majors how to create AI projects tailored to their career fields.

“There can be issues when these companies have unrestricted liberties with AI,” Otto said. “You need to have more controls in place in order to prevent misinformation and disinformation.”

Frank Kopcinski, who teaches BUS 404, is a civil rights and landlord/tenant attorney. By applying Cal Poly’s Learn by Doing motto in his lectures, he helps students like Otto better understand legislation.

“Our society right now is so pervasive with Artificial Intelligence,” Kopcinski said. “It’s really surprising to me that these laws have just come into effect a few months ago.”

California is one of a handful of states, including New York, Arkansas and Oregon, to impose AI safety regulations. More states have attempted to pass AI regulation laws, according to the National Conference of State Legislatures.

State Sen. Scott Wiener, a Democrat, drafted SB 53. The bill comes in response to increasing AI usage and the uncertainties of its future. AI developers surpassing an annual revenue threshold of $500 million will be required to follow the highest safety protocols. Developers that violate the regulations listed in the bill can face fines of up to $1 million per violation, according to SB 53.

“I think it’s an impactful bill, but it’s focused on transparency around safety protocols,” Wiener said in an Assembly hearing. “If someone who’s working at an AI lab sees something happening with these models that is dangerous to public safety, they should be able to say something.”

There are currently no federal laws regulating AI across the board. According to President Donald Trump, state-by-state regulations can make it more difficult for companies to comply fairly. 

“Until such a national standard exists, however,” Trump said, “it is imperative that my administration takes action to check the most onerous and excessive laws emerging from the states that threaten to stymie innovation.”

“My students confirmed my concern of there being a lack of federal regulation,” Kopcinski said. “We have an AI action plan but we are still waiting on federal regulation on AI.” 

AI company Anthropic released a statement in September of last year endorsing the bill, saying it can assist with “thoughtful” AI development. Anthropic is currently suing the Trump administration after failed negotiations to keep the company’s AI models out of the U.S. Department of War, according to court documents. Under the agreements, the department would be able to use Anthropic’s models for “lawful purposes.”

“While we believe that frontier AI safety is best addressed at the federal level instead of a patchwork of state regulations, powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in its endorsement announcement.

Professionals debate SB 53

Paul Lekas, senior vice president of global public policy at the Software and Information Industry Association, opposed the bill due to fears of a “burdensome framework” and pushed for federal regulation instead.

“We believe AI safety is best advanced through federal coordination and flexible standards rather than fragmented state rules that weaken innovation,” Lekas said in a statement. 

Although critics of the bill fear a decline in innovation, some argue that these regulations can create more jobs in the future, and not just in the tech industry.

Neela Paykel, a fractional general counsel, has represented medical tech companies that use AI in their products or business models, advising on business dealings, health regulations and revenue generation. She currently has two children attending Cal Poly.

“I think what’s really going to happen is that career opportunities are going to expand,” Paykel said. “I think it’s going to start expanding into law, public policy and teaching. We know AI is here to stay.” 

She said she believes the bill was designed to regulate the biggest AI companies.

“What’s interesting about it is it’s one of the first laws to come out and say AI is really important, AI is everywhere and people need to deploy AI responsibly,” Paykel said.

Christopher Lupo, director of Cal Poly’s Noyce School of Applied Computing, said that, as of now, the bill will have no impact on AI research at Cal Poly.

“I don’t anticipate this bill will have any impact on what we do here,” Lupo said in an email to Mustang News. “We generally don’t work with sensitive/protected data (like FERPA or HIPAA), and this new system will be fully compliant with Cal Poly’s information security protocols.” 

The school will build its own computing system in partnership with NVIDIA that students and faculty can use in the future.

READ MORE: Cal Poly to build $3M AI Factory with NVIDIA partnership