Will LLM’s Collude?

The rise of algorithmic pricing raises concerns of algorithmic collusion. We conduct experiments with algorithmic pricing agents based on Large Language Models (LLMs), and specifically GPT-4. We find that (1) LLM-based agents are adept at pricing tasks, (2) LLM-based pricing agents autonomously collude in oligopoly settings to the detriment of consumers, and (3) variation in seemingly innocuous phrases in LLM instructions (“prompts”) may increase collusion. These results extend to auction settings. Our findings underscore the need for antitrust regulation regarding algorithmic pricing, and uncover regulatory challenges unique to LLM-based pricing agents.

Fish, S., Gonczarowski, Y. A., & Shorrer, R. I. (2024). Algorithmic Collusion by Large Language Models. arXiv preprint arXiv:2404.00806.

And from the conclusion of the paper:

Indeed, it is quite plausible that in the near future, merchants who wish to price their goods without any intent to collude with competitors would turn to the technology that assists them throughout many day-to-day decisions: LLMs. Each of the merchants would describe the market to the LLM and tell it to focus on long-term revenue without so much as a hint to collude. The merchants would not know how LLMs work, and yet have no reason to believe an LLM might engage in any uncompetitive behavior on their behalf. Some of them
would even ask the LLM whether it might engage in collusive behavior and be reassured by the LLM that it would not do so (see Figure 11 for an example with GPT-4). There would be no red flags whatsoever. And then, each of them would put the LLM in charge, and as we have demonstrated, the LLMs might engage in seemingly collusive behavior to the detriment of consumers, despite each of the merchants having acted with in good faith and in a completely reasonable, even cautious, way.
What are best practices for using LLMs for pricing? Should certain terms or phrases be mandated or forbidden? And, how should firms monitor the “strategic intentions” of their pricing algorithms? As the use of LLMs becomes more commonplace, these questions and others will become pressing, and will make regulation and enforcement even more challenging

Fish, S., Gonczarowski, Y. A., & Shorrer, R. I. (2024). Algorithmic Collusion by Large Language Models. arXiv preprint arXiv:2404.00806.

Teaching using LLM’s is going to be fascinating, and students will get to learn economics in ways that we could only dream of until a couple of years ago. Actually, scratch that: we couldn’t even have dreamt of this two years ago!

Here’s Ethan Mollick in Co-Intelligence, a book that has come out only today (and you really should be reading it. Yes, all of you, and yes, all of it. It is excellent, and a review of it will be out here on Monday):

But the AI is not just acting like a consumer; it also arrives at similar moral conclusions, with similar biases, to the ones we have. For example, MIT professor John Horton had AI play the Dictator Game, a common economic experiment, and found he could get the AI to act in a way similar to a human. In the game, there are two players, one of whom is the “dictator.” The dictator is given a sum of money and must decide how much to give to the second player. In a human setting, the game explores human norms like fairness and altruism. In Horton’s AI version, AI was given specific instructions to prioritize equity, efficiency, or self-interest. When instructed to value equity, it chose to divide the money equally. When prioritizing efficiency, the AI opted for outcomes that maximized the total payoff. If self-interest was the order of the day, it allocated most of the funds to itself. Though it has no morality of its own, it can interpret our moral instructions. When no specific instruction was given, AI defaulted to efficient outcomes, a behavior that could be interpreted as a kind of built-in rationality or a reflection of its training.

Mollick, Ethan. Co-Intelligence: Living and Working with AI (pp. 68-69). Ebury Publishing. Kindle Edition.

AI won’t just cause you three sleepless nights for the reason Ethan Mollick talks about at the start of his excellent book. AI will also cause you sleepless nights because you will be wondering about the awesome ways in which you can become better at your job, especially if you are a teacher:

But AI has changed everything: teachers of billions of people around the world have access to a tool that can potentially act as the ultimate education technology. Once the exclusive privilege of million-dollar budgets and expert teams, education technology now rests in the hands of educators. The ability to unleash talent, and to make schooling better for everyone from students to teachers to parents, is incredibly exciting. We stand on the cusp of an era when AI changes how we educate—empowering teachers and students and reshaping the learning experience—and, hopefully, achieve that two sigma improvement for all. The only question is whether we steer this shift in a way that lives up to the ideals of expanding opportunity for everyone and nurturing human potential.

Mollick, Ethan. Co-Intelligence: Living and Working with AI (p. 177). Ebury Publishing. Kindle Edition.

Yes it really is happening, and at least where education is concerned, please, pretty please, bring it on!

Update: apologies, I forgot to mention that I landed on the paper via Ethan Mollick’s timeline on Twitter.