by Malcolm Murray

Aella, a well-known rationalist blogger, famously claimed she no longer saves for retirement, since she believes Artificial General Intelligence (AGI) will change everything long before retirement becomes relevant. I’ve been thinking lately about how one should invest for AGI, and I think it raises a bigger question: how much should one, and how much can one actually, act in accordance with one’s beliefs?
Tyler Cowen wrote a while back that he doesn’t believe the AGI doomsters actually believe their own story, since they’re not shorting the market. When he pushes them on it, their mental model seems to be that the arguments for AGI doom will never get any better than they already are. Which, as he points out, is quite unlikely. Yes, the market is not perfect, but for there to be no future information that could convince anyone more than they already are would require a very strong combination of conditions. We need “foom” – the argument, discussed by Yudkowsky and Hanson, that once AGI is reached, there will be so much hardware overhang, and things will happen on timescales so far beyond human comprehension, that we go from AGI to ASI (Artificial Super Intelligence) in a matter of days or even hours. We also need extreme levels of deception on the part of the AGI, which would have to hide its intent perfectly. And we would need a very strong inside/outside divide on knowledge, where the outside world has very little comprehension of what is happening inside AI companies.
Rohit Krishnan recently picked up on Cowen’s line of thinking and wrote a great piece expanding this argument. He argues that perhaps it is not a lack of conviction, but rather an inability to express this conviction in the financial markets. Other than rolling over out-of-the-money puts on the whole market until the day you are finally correct, perhaps there is no clean way to position oneself according to an AGI doom argument.
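To make the cost of that strategy concrete, here is a minimal back-of-the-envelope sketch in Python; the 1.5% quarterly premium and the ten-year horizon are my own illustrative assumptions, not figures from Krishnan’s piece:

```python
# A rough sketch (hypothetical numbers) of why "roll puts until doom arrives" is painful:
# paying a premium every quarter for out-of-the-money index puts bleeds capital
# long before the thesis can ever pay off.

quarterly_premium = 0.015   # assumed ~1.5% of notional per quarter for deep OTM puts
years = 10

capital = 1.0
for _ in range(years * 4):
    capital *= (1 - quarterly_premium)

print(f"Capital left after {years} years of rolling puts: {capital:.0%}")
# -> roughly 55% of the starting stake, with nothing to show for it unless the crash comes
```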
I think there is also an interesting problem of knowing how to act on varying degrees of belief. Outside of doomsday cults, where people do sell all their belongings before the promised ascension, very few people have such certainty in their beliefs (or face such social pressure) that they go all in on a bet. Outside of the most extreme voices in the AI safety community, like Eliezer Yudkowsky, whose forthcoming book literally has in its title that we will all die, most do not put a greater than 90% probability on AI doom. What makes someone an AI doomer is rather that they have considered AI doom at all and given it a non-zero probability.
However, a non-zero belief that falls below 90% is hard to know how strongly to act on. So let’s assume we do believe most of the AGI hype: how should we then decide how to invest, how much to put aside for retirement, or even where to focus our careers? To make some progress on this, I would suggest the following playbook: disentangle the various related beliefs from each other, create scenarios for each of them, and put a distinct probability on each scenario.
The first question needs to address how transformative AI will be, distinct from whether its effects will be positive or negative. Here, Nate Silver, in his latest book, offers a useful scale for the impact of a technology – the Technology Richter scale. It ranges from a minor improvement to a niche process all the way to civilization-wide change. We can adopt this and, for simplicity, condense it to four scenarios:
- AI is a complete dud, a fad that will be forgotten in a few years.
- AI is a “normal technology” as Narayanan and Kapoor recently called it. A useful productivity enhancement, perhaps on par with earlier automation technologies like Robotic Process Automation (RPA).
- AI is a General-Purpose Technology that will have an impact similar to that of the internet.
- AI is the General-Purpose Technology, comparable only to fire or writing, and will change absolutely everything.
I think everyone should try to put their own rough credence on each of these. For me, the probabilities would be ~5%, ~25%, ~60% and ~10%. Typically, in a risk or value calculation, we would then turn to estimating an impact for each scenario and multiplying the two. In this case, however, I would advise sticking to probabilities only, to avoid Pascal’s Wager dynamics. Scenario 4, where everything changes, holds practically infinite value (or risk), and multiplying by infinities is problematic: even scenarios with the tiniest probabilities end up dominant if their impact is large enough.
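To see why, here is a minimal sketch using my probabilities above; the impact numbers are made-up placeholders purely to illustrate the dynamic:

```python
# A minimal sketch (with invented impact numbers) of why multiplying probability by
# impact breaks down here: any scenario with quasi-infinite impact swamps the rest,
# no matter how small its probability.

scenarios = {
    "dud":               {"p": 0.05, "impact": 0},
    "normal technology": {"p": 0.25, "impact": 1},
    "like the internet": {"p": 0.60, "impact": 10},
    "like fire/writing": {"p": 0.10, "impact": 1e9},  # stand-in for "practically infinite"
}

# Sanity check: the credences should sum to one.
assert abs(sum(s["p"] for s in scenarios.values()) - 1.0) < 1e-9

for name, s in scenarios.items():
    print(f"{name:>18}: p = {s['p']:.0%}, p x impact = {s['p'] * s['impact']:,.1f}")
# The last row dominates the expected value entirely, which is why I recommend
# reasoning with the probabilities alone rather than with the products.
```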
Once you have your scenarios and probabilities for each in place, you have the beginning of a platform from which to position yourself, whether in terms of investments, savings or career capital. Some scenarios can be addressed directly, while others require further decomposition. Scenario 1, for example, can be addressed directly: if it is your top scenario, you just short Nvidia and don’t waste any time learning how to use LLMs. Scenario 4 is also straightforward, but for the opposite reason. Since there is no way of foreseeing what will happen if AI is on par with fire, you can’t take any specific action – we cannot see anything beyond the singularity. Ironically, this means business as usual in terms of decision-making.
In scenario 2, AI will be valuable, but not spectacularly transformative. This is likely a call for general optimism about future economic growth: you can buy the market as a whole, assuming most companies will see efficiency improvements. You should also make sure you learn to use AI; not knowing how to in this scenario would be like not knowing how to use Microsoft Office.
Scenario 3, however, requires further breakdown for investment purposes. One way to cleave it is along the question of whether AI will, on balance, be a force for good or for ill. If AI is mostly a force for ill, that could mean investing in companies that deliver security of different kinds. The cybersecurity industry might boom as malicious actors use AI to build offensive cyber weapons. Anthropic’s latest model, Opus 4, was just announced as potentially capable of enabling inexperienced actors to create CBRN (chemical, biological, radiological and nuclear) weapons. This would likely lead to more asymmetrical threats and could benefit defense and protection stocks of various kinds.
Another way to slice this scenario is to decide what you believe about whether AI will lead to winner-takes-all dynamics. This determines whether AGI investing should be about picking winners or about making broad-based bets on whole sectors. Earlier technology innovations, such as the internet, have contributed to a widening gap between leading and lagging firms within each sector, as reflected in rising market concentration metrics such as the Herfindahl-Hirschman Index. The effects of AI could follow that same pattern, but they could arguably also go the other way. Several early studies of AI’s effects on white-collar work suggested that it was leveling the playing field, bringing mediocre performers up to par rather than turbocharging the stars even more. This might apply at the company level as well, where “intelligence on tap” benefits smaller companies disproportionately.
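For readers unfamiliar with the index, a minimal sketch (with made-up market shares) shows what “rising concentration” means in HHI terms:

```python
# The Herfindahl-Hirschman Index is the sum of squared market shares (in percentage
# points), so a rising HHI means a more concentrated, more winner-takes-all sector.
# The market shares below are invented for illustration.

def hhi(shares_pct):
    """HHI on the conventional 0-10,000 scale from market shares given in percent."""
    return sum(s ** 2 for s in shares_pct)

broad_sector = [10] * 10           # ten firms with 10% each
concentrated = [60, 20, 10, 5, 5]  # one dominant winner

print(hhi(broad_sector))  # 1000 -> unconcentrated
print(hhi(concentrated))  # 4150 -> highly concentrated
```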
Most importantly, one should stay fluid, revisit one’s beliefs often, and track indicators – as forward-looking as possible – that would suggest moving probability between scenarios. In a recent paper, I outlined a tentative set of “TAI KRIs” – Transformative AI Key Risk Indicators. These indicators, spanning a broad set of categories, could be used to get early warnings about whether AI will truly transform the economy.
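As one illustration of what that tracking could look like in practice, here is a minimal sketch of a Bayesian-style reweighting of the four scenarios; the indicator and the likelihood numbers are purely hypothetical and are not taken from the paper:

```python
# One simple way to "move probability between scenarios" as indicators come in:
# treat each indicator reading as a likelihood ratio over the scenarios and renormalize.
# All numbers below are invented for illustration.

priors = {"dud": 0.05, "normal tech": 0.25, "internet-scale": 0.60, "fire-scale": 0.10}

# Hypothetical indicator: measured AI-driven productivity gains come in strong.
# How much more likely is that observation under each scenario? (assumed ratios)
likelihoods = {"dud": 0.2, "normal tech": 0.8, "internet-scale": 1.5, "fire-scale": 2.0}

unnormalized = {k: priors[k] * likelihoods[k] for k in priors}
total = sum(unnormalized.values())
posteriors = {k: v / total for k, v in unnormalized.items()}

for k, v in posteriors.items():
    print(f"{k:>15}: {priors[k]:.0%} -> {v:.0%}")
```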
