“AI’s Black Boxes,” the opacity, or lack of transparency, of some artificial intelligence systems, concerns Dean Ball, board member of The Alexander Hamilton Institute for the Study of Western Civilization (AHI). It is important, Mr. Ball explains in his recent article at Hyperdimensional, to be “realistic about how ‘explainable’ or ‘auditable’ AI systems will ever be.”
Mr. Ball is a former AHI undergraduate fellow, later a key staff member with the Hoover Institution, and now a research fellow specializing in artificial intelligence at the Mercatus Center at George Mason University. AI systems may never “be perfectly explainable,” he writes, “in a way that satisfies human rationality.” That is because AI systems, or “neural networks,” arise, like some other things with which we are more familiar, without being intentionally organized or designed by humans. They grow but are not planned. A philosophical term for such phenomena is “emergent orders.”
These developments could have major implications for governmental AI policy: “Regulators’ desire for perfect, rational explainability may come into tension with the reality of emergent orders.”
Recalling his education at both Hamilton College and AHI more than ten years ago, Mr. Ball notes that he was introduced to the idea of emergent orders through the work of Friedrich Hayek, largely in the context of free markets. “But almost immediately, I had the intuition that emergent orders described far more than just free market economies.”
Emergent orders are “computationally irreducible,” a concept used most notably in the work of the computer scientist Stephen Wolfram. Here’s how Wolfram describes it (emphasis added): “In traditional science it has usually been assumed that if one can succeed in finding definite underlying rules for a system … ultimately there will always be a fairly easy way to predict how the system will behave. But now computational irreducibility leads to a much more fundamental problem … it implies that even if in principle one has all the information … to work out how some particular system will behave, it can still take an irreducible amount of computational work actually to do this.”
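To make the idea concrete (the following sketch is illustrative and not drawn from Mr. Ball’s or Wolfram’s text): Wolfram’s Rule 30 cellular automaton is a textbook case of computational irreducibility. The update rule fits on one line, yet the only known general way to learn what the pattern looks like after n steps is to actually run all n steps.

```python
# Illustrative sketch (not from the article): Rule 30, a standard example
# of computational irreducibility. Each cell's next value is
# left XOR (center OR right) -- a trivial rule, but there is no known
# shortcut formula for the state after n steps; you must simulate them all.

def rule30_step(cells):
    """Advance one row of 0/1 cells by a single Rule 30 step (zero-padded edges)."""
    padded = (0,) + tuple(cells) + (0,)
    return tuple(
        padded[i - 1] ^ (padded[i] | padded[i + 1])
        for i in range(1, len(padded) - 1)
    )

def state_after(cells, n):
    """The only general route to step n is to compute steps 1..n in order."""
    for _ in range(n):
        cells = rule30_step(cells)
    return cells

if __name__ == "__main__":
    width = 41
    row = tuple(1 if i == width // 2 else 0 for i in range(width))  # single live cell
    for _ in range(16):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)
```

A neural network is vastly more complicated than this toy automaton, which is the point as it bears on AI policy: demanding a tidy, step-skipping explanation of such a system’s behavior may be demanding something that does not exist.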
The implication of Wolfram’s work, Mr. Ball remarks, is that we can make AI systems fairly predictable, though never perfectly so.