In his latest posts on artificial intelligence, Dean Ball, a board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI), writes that the recent, widely reported disbanding of the “superalignment” safety team at OpenAI is a good thing. Passage of sweeping AI regulatory legislation by the California State Senate this week is not. Indeed, Mr. Ball observes, the state Senate has passed legislation that would serve as a “silly and sad milestone.”

“Defenders of the bill have written that it ‘doesn’t ban open-source AI,’” Mr. Ball writes of SB 1047, which cleared the Senate with just one vote in opposition and now goes to the state Assembly. But although no current model would be illegal, “any future model at or near the frontier of performance … would be.”

Among the very troubling things about SB 1047, in Mr. Ball’s view: The state government is “gearing up for the ‘increased incarceration costs’ and other costs associated with criminalizing AI development if this bill passes.” The bill gives regulatory officials wide latitude in defining the grounds on which artificial intelligence developers can be charged with perjury for misconduct by users of their models; developers can, among other things, face perjury charges for insufficient “rigor” and “detail” in their safety protocols. Officials would also be able to “change the threshold” that determines which models come under the new law, widening their jurisdiction over time.

“The security and safety best practices within AI are live fields of science,” Mr. Ball notes, and are therefore “subject to dispute and debate by world-class practitioners. We don’t even know the security and safety practices of top-tier AI companies like OpenAI, DeepMind, Meta, and Anthropic. Does this sound … like the limited, unambiguous statute Senator Wiener [its sponsor] makes it out to be?” The post gives extensive details on the bill.

Mr. Ball’s comments on the disbanding of OpenAI’s high-level safety team are more favorable. The team’s leaders, Ilya Sutskever and Jan Leike, left the company, and Mr. Leike explained on X (formerly Twitter) that his decision resulted from disagreements with management about how best to ensure safety in AI development. Some observers, Mr. Ball notes in a separate piece for his online newsletter Hyperdimensional, praised Mr. Leike “as a hero whose revelations … underscore the urgent need for regulation of frontier AI companies,” while others said OpenAI is full of “counterproductive safety extremists.”

The latest news is positive, Mr. Ball suggests, since “in high-stakes engineering and research, it is generally better—for safety—not to rely solely” on an assigned team. With a specific internal watchdog or regulator, “it is easy for an ‘us vs. them’ dynamic to develop,” potentially causing “a lack of collaboration and innovation, or even an instinct by product-focused engineers and results-oriented management to discount legitimate warnings from safety engineers.”

This dynamic “has played out time and time again. When the post-mortem reports were written about … the Columbia and Challenger Space Shuttle disasters, investigators found that safety-focused teams had raised concerns about the problems that led to the crashes … they were ignored, in part because safety was viewed as a discrete box to be checked rather than a holistic responsibility” of the entire organization.

Because a safety unit is necessarily removed from many parts of a complex enterprise, it “can also become myopic,” adds Mr. Ball, an AI-focused research fellow at George Mason University’s Mercatus Center and former manager of the Hoover Institution’s State and Local Governance Initiative. “The Deepwater Horizon oil spill and the Fukushima accident are … examples of safety teams that had become overly focused on regulatory compliance and policing the conduct of employees,” rather than “holistic risk assessments and proactive identification of problems.”

Industry leaders Meta and Google/DeepMind have, Mr. Ball adds, “made similar moves to disband discrete safety functions and instead integrate them into product-focused research or engineering teams.”