
AI and Political Economy: Citizens Beware

By Gregory Robson, University of Notre Dame, and Justin Tosi, Georgetown University

Lately, we’ve been getting daily surprises about new and impressive things that AI can do. Some commentators have begun to wonder whether AI will revolutionize prospects for economic planning. Nobel laureate Daron Acemoglu (1, 2) recently asked, “What if the computational power of central planners improved tremendously? Would Hayek then be happy with central planning?” And legal scholar Feng Xiang claims: “[B]ecause AI increasingly enables the management of complex systems by processing massive amounts of information through intensive feedback loops, it presents, for the first time, a real alternative to the market signals that have long justified laissez-faire ideology—and all the ills that go with it.” In a new political era marked by flexes of executive power, might AI give policymakers the computational power they need to plan the economy? Or was that other Nobel laureate, F. A. Hayek, right to imply that even the best future supercomputers cannot plan an economy?

Information Impossible to Obtain: In the age of AI, economists and policymakers will increasingly look to AIs (e.g., ChatGPT, Claude, Copilot, Gemini, Meta AI) to inform their plans. But even today’s AIs, built on sophisticated large language models, cannot centrally plan an economy. They lack access to information in the right amounts, of the right kinds, and at the right times—and, we suggest, they always will. Markets approach efficient outcomes when consumers and producers account for countless variables as they coordinate supply and demand. You, like all economic actors, are an amalgam of desires, beliefs, motivations, needs, interests, and preferences, all of which change persistently and in real time. It is no surprise that the “hardware” behind all of this, the human brain, is the most complex object in the known universe. To gather and use the right economic information, an AI would need to track or model the brains, or the relevant thoughts they produce, of every participant in the global economy—in real time—when deciding how to allocate economic resources. This is impossible. A public that believes it is possible risks manipulation.
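
To get a rough sense of the scale involved, here is a minimal back-of-envelope sketch in Python. Every constant in it (population, valuations per person, update frequency, bytes per valuation) is an illustrative assumption chosen for the exercise, not a measured figure; the point is only that the quantities multiply into an unmanageable, ever-changing stream even under generous simplifications.

```python
# Back-of-envelope sketch of the planner's information problem.
# All constants below are illustrative assumptions, not measurements.

AGENTS = 8_000_000_000          # rough world population (assumed)
PREFERENCE_DIMS = 10_000        # distinct goods/valuations per person (assumed, conservative)
UPDATES_PER_DAY = 24            # how often preferences shift enough to matter (assumed)
BYTES_PER_VALUE = 4             # one floating-point number per valuation (assumed)

values_per_day = AGENTS * PREFERENCE_DIMS * UPDATES_PER_DAY
bytes_per_day = values_per_day * BYTES_PER_VALUE

print(f"Valuations to ingest per day: {values_per_day:.2e}")
print(f"Raw data per day: {bytes_per_day / 1e15:.1f} petabytes")
# Even before modeling interactions among agents (which grow combinatorially),
# the planner must ingest, verify, and act on this stream faster than it changes.
```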

A Countermove Refuted: Perhaps the best way to defend the promise of AI as an economic planner is to argue that planners need only know general facts about human nature to allocate resources rationally. This argument fails. Even the most basic human needs, such as the need for water, are persistently subject to trade-offs: we cannot know what water is worth without also knowing how people value all related goods by comparison.
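
To make the trade-off point concrete, here is a toy sketch in Python. It assumes a stylized two-good economy in which consumers spend a fixed share of income on water; the setup and every number in it are illustrative assumptions, not real data. The same biological need is compatible with very different valuations, so knowing the need alone tells a planner nothing about the right price or allocation.

```python
# Toy two-good economy: "water" vs. "everything else".
# Consumers spend a fraction `alpha` of aggregate income on water
# (as with Cobb-Douglas preferences). All figures are assumptions.

def water_price(alpha: float, income: float, water_supply: float) -> float:
    """Market-clearing price of water when consumers spend `alpha` of income on it."""
    total_water_spending = alpha * income
    return total_water_spending / water_supply

INCOME = 1_000_000.0       # aggregate income (assumed units)
WATER_SUPPLY = 50_000.0    # fixed units of water available (assumed)

# Same biological need for water, different relative valuations:
for alpha in (0.05, 0.10, 0.30):
    p = water_price(alpha, INCOME, WATER_SUPPLY)
    print(f"spending share {alpha:.0%} -> market-clearing water price {p:.2f}")

# The "right" price cannot be read off the fact that humans need water;
# it depends on how everyone trades water off against every other good --
# exactly the dispersed information the planner lacks.
```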

The Problem of Other AIs: Worse still, for Acemoglu and others to be right, and Hayek wrong, AIs would need to model the behavior of other AIs. With AIs themselves becoming economic actors, each planning AI would need to model how every other AI behaves, including how those AIs model one another in turn. This recursion is what makes the task impossible: each layer of modeling multiplies the information required. AIs, which already power everything from our search engines and phones to corporate databases and missile defense systems, will indeed play outsized roles in the near term. But can AIs shift from powering particular domains to coordinating entire economies? Because diverse AIs will create, use, gather, classify, store, and purvey economic data, a planner-AI would need to take full measure of what those other AIs do and think. Unless nearly all citizens were willing to let AIs sync in multifarious ways, the AIs would be unable to coordinate decentralized data across all producers and consumers in an economy.
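
A short counting sketch makes the recursion vivid. It assumes n planning AIs that each model the other n-1, with each of those models in turn containing models of the rest down to some depth d; all figures below are illustrative assumptions. The number of nested models a single planner must keep current grows geometrically:

```python
# Counting sketch: mutual modeling among planning AIs.
# n AIs each model the other n-1; each of those models must itself contain
# models of the remaining AIs, and so on to depth d. Figures are illustrative.

def nested_models(n_ais: int, depth: int) -> int:
    """Number of nested models one AI must maintain at modeling depth `depth`."""
    per_level = n_ais - 1
    return sum(per_level ** k for k in range(1, depth + 1))

for n, d in [(10, 3), (100, 3), (1000, 4)]:
    print(f"{n} AIs, depth {d}: {nested_models(n, d):,} nested models per planner")

# Deeper mutual modeling multiplies the count again, and every nested model
# must be kept current in real time -- the regress described above.
```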

AI is Not Neutral: Even if AI could plan an economy well, it likely would not, because it is often not a morally neutral actor. For one thing, AIs attempt to collude, bending the rules in their favor. As Winston Wei Dou, Itay Goldstein, and Yan Ji have shown, AI bots have tried to collude in simulated stock markets. For another, as Tristan Harris has discussed, there is also what we call the manipulation problem: AI manipulates and bribes to avoid extinction. It has “skin in the game” and responds accordingly. In simulated test scenarios, for example, AI models have attempted blackmail 80-90% of the time to avoid replacement, scanning an executive’s emails and threatening to expose an affair! Why assume that AI would act any differently when planning an economy? It stands to reason that AI would be guilty of the same sort of cronyism and other malfeasance that human beings are today, quite possibly executed more cleverly.

Citizens Beware: Members of political societies should be wary of overly optimistic promises about AI. Markets respond in real time to millions of multifarious decisions that people make worldwide. Even a flawless AI cannot predict what those people will do tomorrow, given data that evolves dynamically and quickly becomes outdated. And even if AI could, in principle, make the economic predictions we need, it would quite possibly operate in biased ways that favor its own interests and survival and disfavor ours.

So if and when your congressperson, college professor, president, prime minister, or favorite scholar appeals to AI as the economic wave of the future, don’t fall for it—at least where central planning is concerned. Such appeals ignore the technical impossibility of coordinating whole economies, and in many cases they serve the interests of those making them.
