California legislators are continuing to push legislation that threatens to create major problems for artificial intelligence, warns Dean Ball, board member for The Alexander Hamilton Institute for the Study of Western Civilization (AHI), in his latest piece for his online newsletter Hyperdimensional. An "alumnus" of the AHI, having been one of its undergraduate fellows, Mr. Ball served as director of the Hoover Institution's State and Local Governance Initiative. Earlier this year he joined the Mercatus Center at George Mason University, where he specializes in AI issues.

AB 3211, which has unanimously passed the California state Assembly, "starts with good intentions," but "by focusing on one goal" to the exclusion of all others it would create "headaches for everyone," Mr. Ball predicts, "while only questionably improving the problem it purports to address. Sadly, that sounds about right for AI policy these days."

SB 1047, the state-level legislation Mr. Ball considers the most consequential in the United States and has criticized in recent pieces, is also nearing adoption in California. Yet in some ways, he writes, AB 3211 is "considerably more aggressive."

The bill requires implementation of standards that would allow internet users to tell which content is AI-generated and which is created by humans. It places an extensive list of major requirements on all AI developers, including small startups, as well as on large websites and apps.

For example, providers must “identify their products as AI models and receive the user’s affirmative consent to speak to an AI model every time it is used,” Mr. Ball notes. This proposed policy would mean “every time you start a new chat with ChatGPT, or ping Siri on your phone, you will have to acknowledge … you are aware that you are interacting with an AI system. I do not see how anybody benefits from this.”

Among AB 3211's dangers is the prospect that it might "make the problem of AI deepfakes and other deceptive media worse, by creating a false sense of security about what is and is not synthetically generated." Such a false sense of security, he warns, "could actually help bad actors while burdening good actors."