Dean Ball serves as a board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI). In “Science, Standards, Laws,” a recent piece in Hyperdimensional, an online newsletter about emerging technology and its relation to governance, he warns that general regulation of artificial intelligence (AI) would be counterproductive and unfair at this time because we do not yet know enough about the technology.

Except for narrowly targeted laws about carefully specified aspects of AI, a regulation-first approach is “backwards,” Ball writes. It would mean “regulation that outlaws or punishes developers for the [potential] bad outcomes before we have reasonable technical standards for what constitutes a properly functioning AI model.” This approach creates the risk “that any misuse of an AI model will be considered the fault of the … developer, even if they … did nothing wrong.” Such an approach “is not how liability works for any other product” and, in addition, “could result in a highly constrained AI industry.”

Before proceeding to general regulation of artificial intelligence based on realistic and appropriate standards of care, Ball urges, we must learn enough “about the uses of AI, the failure cases, and even the mechanisms of AI models.”

Ball is also the manager of the Hoover Institution’s State and Local Governance Initiative and a 2014 graduate of Hamilton College.