… which, if you haven’t heard about it yet, can be found here.
Here’s the key paragraph:
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
There are lots of different ways to think about this, and as always, the truth lies somewhere in the middle. But set the arguments aside for a moment: proverbs from a variety of languages go a long way towards helping you understand that this is a (mostly) lost cause. You can talk of hungry sparrows in the field in Hindi, or you can talk about getting the genie back into the lamp in English. You might as well talk about saying "statue!" to a tsunami, and you might actually have better luck with that plan.
But LLMs are here, they're about to get better capabilities, and they will be used for good and for bad.
That’s it. C’est tout.
As with everything else, there is a lot to read about this issue, but there are two pieces in particular that I enjoyed reading. The first is a piece written by Sayash Kapoor and Arvind Narayanan in their excellent newsletter, AI Snake Oil. Worth subscribing to, if you ask me.
Begin with this framework, taken from their post, and please read the piece in full.
The letter positions AI risk as analogous to nuclear risk or the risk from human cloning. It advocates for pausing AI tools because other catastrophic technologies have been paused before. But a containment approach is unlikely to be effective for AI. LLMs are orders of magnitude cheaper to build than nuclear weapons or cloning — and the cost is rapidly dropping. And the technical know-how to build LLMs is already widespread.
https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci
LLMs are already in the wild, they can now run on devices manufactured three years ago, the models will likely become more efficient over time, and hardware capabilities will keep improving. It's all well and good to want to pause, but I don't think the letter spends nearly enough time asking "how", let alone answering the question.
Speaking of omissions from the letter:
Is there any mention of public choice/political economy questions in the petition, or even a peripheral awareness of them? Any dealing with national security issues and America's responsibility to stay ahead of potentially hostile foreign powers? And what about the old DC saying, running something like "in politics there is nothing so permanent as the temporary"?
https://marginalrevolution.com/marginalrevolution/2023/03/the-permanent-pause.html
Might we end up with a regulatory institution as good as the CDC?
You know you’re in trouble when Tyler Cowen decides you’re worthy of some gentle trolling.
But on a more serious note, the meta-lesson here is that if you are going to recommend a particular policy, you’d do well to ask how feasible it is in the first place. There is always the temptation to imagine the end-state Utopia when you make a recommendation. Fixating on that Utopia often distracts us from asking which route to take to reach said Utopia. And every now and then, one realizes that there isn’t any route available at all.
Outcomes over intentions!
One final point: I mentioned that the truth lies somewhere in the middle. In the context of this post, what does this mean, exactly? Should we stop or not? Well, as I've explained, I don't think we can stop, but there is merit to the idea of proceeding cautiously.
Festina lente ("make haste slowly") remains good, underrated advice: