Dean Ball, a board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI), writes on new developments in artificial intelligence (AI). He recently considered the problem of highly plausible fakes, known as “deepfakes,” that AI systems can now produce.

For the average person, he says, deepfakes are probably the most tangible risk from AI. Several states have already passed laws to regulate or restrict them, and more will probably follow soon, but those laws generally do not say how to determine what is or is not fake.

It would make more sense, Mr. Ball writes in “Deepfakes and the Art of the Possible” in his online newsletter Hyperdimensional, to approach the problem from the opposite end: focus on how to tell which content is human-generated, since that kind will ultimately be scarcer. Which photos, for example, were actually taken in the real world?

A technical standard for establishing which content is human-made, put forward by the Coalition for Content Provenance and Authenticity (C2PA), “seems to be gaining momentum” but unfortunately has major flaws, according to Mr. Ball, an Alexander Hamilton Institute “alum” and former manager of the Hoover Institution’s State and Local Governance Initiative.

“In its current form, it’s not obvious to me that C2PA [also the term for the coalition’s proposed standard] would do much of anything to improve our ability to validate content … It seems designed with a series of well-intentioned actors in mind: the freelance photojournalist using the right cameras and the right editing software … It is far less clear … that C2PA can remain robust when less well-intentioned or downright adversarial actors enter the fray.”

The challenges are formidable:

“Smartphones and other cameras would need to be updated so that they can automatically sign the photos and videos they capture. Media editing software … would need to be updated to be able to cleanly [present] data about their edits … viewing software would need to build new user interfaces to give consumers visibility into all this new information.”
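
To make those mechanics concrete, here is a minimal sketch in Python of the kind of signing-and-verification chain such a system implies. It is an illustration only, not the C2PA specification: the actual standard relies on certificate-backed signatures and structured manifests, whereas the sketch below stands in for them with a keyed hash, and the device key, field names, and function names are all hypothetical.

    import hashlib
    import hmac
    import json

    # Hypothetical device key. A real C2PA-capable camera would hold a
    # certificate-backed private key in secure hardware, not a shared secret.
    DEVICE_KEY = b"example-camera-secret"

    def sign_capture(image_bytes: bytes, device_id: str) -> dict:
        """Bind a provenance claim to the captured bytes at the moment of capture."""
        claim = {
            "device_id": device_id,
            "content_hash": hashlib.sha256(image_bytes).hexdigest(),
            "assertion": "captured_by_camera",  # i.e., not AI-generated
        }
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    def verify_capture(image_bytes: bytes, claim: dict) -> bool:
        """Roughly what viewing software would check before showing a provenance badge."""
        unsigned = {k: v for k, v in claim.items() if k != "signature"}
        payload = json.dumps(unsigned, sort_keys=True).encode()
        expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, claim["signature"])
                and claim["content_hash"] == hashlib.sha256(image_bytes).hexdigest())

    photo = b"...raw sensor data..."
    manifest = sign_capture(photo, device_id="camera-001")
    print(verify_capture(photo, manifest))         # True: untouched capture
    print(verify_capture(photo + b"!", manifest))  # False: any change breaks the chain

In the real standard, editing software would extend rather than replace this record, appending its own signed claim describing each edit, which is why every camera, editor, and viewer in the chain has to cooperate for the provenance to survive.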

Despite the “strong social, economic, and legal incentive to get this right,” and the fact that the technology industry “has gotten much better … at technical transitions of this kind” over the years, these efforts won’t be of much help “if the C2PA standard does not prove robust.”

For a good understanding of the challenge, Mr. Ball recommends some “outstanding” research on C2PA’s weaknesses. Among its main conclusions: “Metadata can be easily removed … eliminating the provenance information” and can be “forged using open-source tools to reassign ownership, make AI-generated images appear real, or hide alterations.”
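
The first of those weaknesses is easy to see in miniature. In the toy continuation below (an illustration in the same hypothetical style as the sketch above, not the researchers’ actual demonstration), the provenance manifest travels beside the image bytes as ordinary metadata, so a screenshot, a re-encode by a social platform, or a hostile tool can simply drop it, leaving pixels that look no different from content that never carried provenance at all.

    # A toy asset in the spirit of the earlier sketch: the provenance
    # manifest sits alongside the image bytes as ordinary metadata.
    signed_asset = {
        "image": b"...pixels...",
        "manifest": {"assertion": "captured_by_camera", "signature": "..."},
    }

    def strip_metadata(asset: dict) -> dict:
        """Roughly what a screenshot, re-encode, or hostile tool does to metadata."""
        return {"image": asset["image"]}  # the pixels survive; the provenance does not

    laundered = strip_metadata(signed_asset)
    print("manifest" in laundered)  # False: nothing is left to verify

Forgery is the mirror-image problem: if viewing software accepts whatever manifest it finds without tracing the signature back to a trusted issuer, an attacker can attach a plausible-looking manifest of his own, which is the scenario the researchers describe when they speak of reassigning ownership or making AI-generated images appear real.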

But an industry-based process for setting anti-deepfake standards should be allowed to “play out for a while longer” before government tries to formulate them itself, urges Mr. Ball, a research fellow focused on AI at George Mason University’s Mercatus Center. For one thing, it’s not clear that government even has such a capacity.

“Unfortunately,” he concludes, “we will have to accept that some amount of fake content will be part of our digital lives.”