Dean Ball, a board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI), specializes in the study of artificial intelligence at George Mason University’s Mercatus Center. Last week Mr. Ball published an overview titled “Where I Stand on the Biggest AI Issues,” sharing his thoughts on artificial intelligence (AI) as they have continued to develop since he began writing about the subject for an online audience several months ago.

Mr. Ball, a 2014 graduate of Hamilton College and an “alumnus” of the AHI, where he served as one of its undergraduate leaders, thoughtfully applies his liberal arts education to the broader concerns raised by this anxiety-provoking topic. “At its core,” he reflects, “AI is a philosophical endeavor. Nearly all our current debates about AI—even ostensibly prosaic and technocratic matters—boil down to one’s beliefs about the nature of reality.”

The beginning of wisdom about artificial intelligence, he suggests, is to recognize that its future, the focus of most hopes and fears about AI, simply “does not exist. We have to get from here to there,” and we can therefore discuss it only in terms of probabilities and possibilities. Citing a concept stressed by the 20th-century English philosopher Michael Oakeshott, a favorite of Mr. Ball’s even in his college years, he writes:

“Life … is a ceaseless improvisatory adventure.” With AI in particular, “We are in an endless and bottomless sea. There is no anchor. So if you want to achieve something—AI safety, say—you have to build it. You cannot just declare it from the top-down … You can try to cast an anchor into the bottomless sea, but the ship will just start sinking, inch by inch.”

Mr. Ball also suggests that we may be led astray by an unrealistically unitary, rather than pluralistic, vision of what AI will become. A plausible, though far from certain, scenario for its eventual role in our lives runs something like this: “You’ll talk to an orchestrating agent [AI assistant or servant], and other agents will be deputized on the fly to do research, write code, etc. … It will be possible to ask a question, and, in essence, create a company of digital minds exclusively for … solving your specific task. The company will exist as long as it needs to: a day, a week, indefinitely.”

As for what worries him most, Mr. Ball writes: “In my view, the biggest threat from AI … is not AI-generated deepfakes or bioweapons, not war with China, and not the loss of the ‘human element.’ It is a debt crisis provoked by this surge in interest rates.” But in any case, he adds, “there will be … some scarce resource, over which competition is waged. And that, too, will slow down the ‘superintelligence.’”

The piece, published in Mr. Ball’s online newsletter Hyperdimensional, also summarizes his perspective on AI regulation.