Dean Ball, a board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI), addresses a major fear about artificial intelligence in his new online newsletter Hyperdimensional. In "AI Biorisk: A Dose of Reality," he stresses that software, and thus the AI side of any biological or chemical weapons development by would-be terrorists, is inherently hard to control.

Mr. Ball, senior program manager of the Hoover Institution's State and Local Governance Initiative, argues that it might be better to focus on the concrete rather than the conceptual side of the process: its "physical constraints," including the specialized equipment and experimentation needed to produce such weapons. AI can accelerate those steps, but it is "far from clear that it can radically simplify the process."

Amid signs of a nascent regulatory response to the risk of biological and chemical weapons, it is not evident, Ball writes, how AI could simplify the necessary experimentation and production to such an extent that "untrained individuals or groups" would suddenly gain the capability to develop them. And in any case, limiting the spread of "knowledge and software," including AI, is "fiendishly difficult."