In recent pieces for his online newsletter Hyperdimensional, Dean Ball, a board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI), challenges prevailing criticism about developments in artificial intelligence (AI). In “On Technological Destiny,” Mr. Ball stresses that questions as to where “we” are going with AI, or whether “we” want such transformations to occur, are misplaced. “If humanity were a singular mind,” he writes, “these kinds of questions would be legible. But … the world is not governed by a singular mind, and for that reason ‘the world’ is not governed at all in any meaningful sense. There is no ‘we’ who ‘decides’ questions such as these … History does not unfold by show of hands.”

But this does not mean people have no say: “Because there is no destination and no way to set one, individual efforts can meaningfully impact our direction. In this sense, ‘you’ are more powerful than the ‘we’ we all imagine … You can build tools, develop ideas, and persuade others to do the same. You can create value, and thanks to technology, your ability to do so is increasing by the month. Each of us can do that hard work, and together, ‘we’ can bring about a better future.”

Mr. Ball’s latest piece, “Open-Source AI Has Overwhelming Support,” covers the results of a public-comments solicitation from the National Telecommunications and Information Administration, issued last fall in response to President Biden’s Executive Order on AI. The solicitation asked “a broad range of good questions about this vital topic.”

Comments came from more than 300 sources, including industry groups, academic institutions, think tanks, corporations, and individuals. Using a cutting-edge AI program, Mr. Ball was readily able to analyze the more than 1,000 pages (more than half a million words) of submissions.

He reports: “the comments overwhelmingly support maintaining [public] access to open models … The vast majority of respondents articulate the benefits of open models to science, business, competition, transparency, safety, and freedom of speech. Almost everyone acknowledges that there are risks … A minority of respondents believe these risks merit policies to ban or severely restrict the development of open models. Almost all [such] organizations … are AI safety-focused non-profits. Most other people and groups think the risks are worth it.”

Allowing AI to reach its potential is not the only issue, Mr. Ball adds. Another concern is keeping government from exerting too much power over it: “Policies that ban open-source AI altogether or forbid it above a certain [computational or capability] threshold require a staggering level of government control over human beings’ use of computers and a fundamental reshaping of our norms about digital freedom and privacy. The burden of proof, therefore, is high.”