Along with a few internet blogger friends, inspired by Erik Hoel’s post (parts of which I disagree with, including the headline), I’ve decided to make predictions for 2050. The rest of this post discusses some meta aspects of making these predictions and then lists out the predictions. In case you’re waiting for my (hopefully) less speculative 2022 annual predictions, fear not! Those will be coming in the next few weeks.
Here is a running list of other people’s 2050 predictions:
- Slime Mold Time Mold
- Jehan at Atoms vs Bits
- Rohit at Strange Loop Canon
- Roger’s Bacon
- Sasha Chapin
- Max at My Super Secret Diary
- Experimental History
Slime Mold Time Mold’s post is also keeping track of a probably more up-to-date list than mine.
Making these predictions was harder than I initially expected. I started out trying to make very precise quantitative predictions, similar to what I did for my 2021 predictions post. However, I quickly realized this was way too hard. First of all, even superforecasters struggle to make accurate predictions more than a year out (for geopolitical events, but I suspect it applies more widely) (source). So it seems quite unlikely I can make particularly accurate quantitative predictions 28 years into the future. Second, my quantitative predictions often rely on having a consistent, reliable source of data I can look at. Given link rot base rates, almost any source I pick will no longer exist in 2050.
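To make the link-rot point concrete, here's a back-of-the-envelope sketch. The 5% annual rot rate is my own rough assumption, not a measured base rate:

```python
# Chance a link cited today still resolves in 2050, assuming a constant
# annual link-rot rate (the 5% figure is a rough guess, not a measurement).
annual_rot_rate = 0.05
years = 2050 - 2022
survival = (1 - annual_rot_rate) ** years
print(f"P(link alive in 2050) = {survival:.2f}")  # prints 0.24
```

Even under that fairly gentle assumption, roughly three out of four sources are gone by resolution time.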
For these reasons, I’ve stuck with qualitative predictions for the post. Of course, this puts me at severe risk of punditry, since qualitative predictions are notoriously easy to equivocate about after the fact (see commandment 6 here for discussion of this). I’m trying to mitigate this in three ways. First, I’m trying to include at least one relatively unambiguous sub-prediction in each overall (potentially vaguer) prediction. Take the example of my military AI prediction:
> AI systems are (believed to be) making US military decisions with limited to no human-in-the-loop. This isn’t as apocalyptic as things like the slaughterbots video make it out to be but does lead to mistakes (in hindsight) and exacerbates the trend of the average person being emotionally distanced from war and conflict.
While the degree of apocalyptic-ness is hard to measure, the first sentence is hopefully not too subject to interpretation. If the US military is (for example) known to be using drones piloted by AIs without human-in-the-loop for kill decisions in 2050, this unambiguously resolves as “yes”.
The second way I intend to avoid devolving into punditry is by putting a stake in the ground that if I’m predicting something, it means I think it’s more likely than not. As discussed, resolution will be hard but at least this will ensure that if much less than half of the things I predict come true, my “track record” should take a hit. This also means I will entertain bets related to these predictions. All that said, if, in 2050, you see me citing this post as evidence for me having a good track record, I give you permission to cite this sentence that these predictions mostly shouldn’t count towards my track record.
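To illustrate why "much less than half" is the right bar, here's a sketch under the (unrealistic) assumptions that the predictions are independent and each held at exactly 50% confidence:

```python
from math import comb

def p_fewer_than(k, n, p=0.5):
    """P(fewer than k of n independent predictions resolve true),
    assuming each was genuinely a coin-flip at probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

# With e.g. 40 predictions, each at exactly 50% and independent,
# fewer than 10 coming true would be damning:
print(p_fewer_than(10, 40))  # ~0.0003 -- strong evidence of miscalibration
```

Real predictions are correlated (many share an "AI progress continues" assumption), so the true bar is fuzzier, but the direction of the inference holds.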
Another challenge I faced in making qualitative predictions was that I put low-but-not-nearly-as-low-as-I’d-like probabilities on certain catastrophic outcomes that would dramatically change my predictions for 2050. Any of: an engineered pandemic that wipes out a significant fraction of the world population, misaligned AGI(s) being developed, nuclear war, non-nuclear world war, US government downfall, or the Yellowstone supervolcano exploding would basically obviate all of the predictions I’ve included below. For probabilities on each of these, I don’t have super strongly formed views and defer to others who I trust. That said, I view all the risks I mentioned as big enough to be worth explicitly trying to prevent and not for Pascal’s wager-y reasons. Because writing about these risks is important but too big a job for this post, all my 2050 predictions are conditioned on no existential catastrophes. If there’s popular demand for a version of this post where I just predict all the ways in which we might all die before 2050, I suppose I could write that too…
On the flip side, I also put some probability on hopefully positive, societally transformative scenarios occurring before 2050. This includes things like aligned superintelligent AI or human brain uploads (more recently, “digital people”). My predictions are also conditioned on these things not happening.
Put a different way, these predictions are describing my modal future rather than the distribution of futures I think are possible.
Finally, contrary to my natural tendency to wildly speculate about topics I don’t know about, for my predictions I decided to focus on areas where I think I’m at least above average informed. This means my predictions focus on AI, bio(logy|tech), health, and technology more broadly. I also made a few predictions about cryptocurrency because I couldn’t resist. If I’m much more wrong on these than in other areas, it’ll be a humbling learning experience. If you’re interested in seeing which topics I explicitly decided not to predict about, see the Things I decided not to predict about section.
NB: I use AI as a stand-in for ML here and in subsequent discussions. I generally prefer the term “ML” to “AI” because the latter’s more hype-y, but since the world increasingly seems to prefer “AI”, I’m making a concession here.
Overall Worldview #
At a high level, my perspective on AI progress is informed by a worldview based around three qualitative predictions. If any of these are wrong, I suspect the more specific predictions throughout this section will end up being wrong in highly correlated ways. These three qualitative predictions are:
- AI is ubiquitous in the same way the internet is ubiquitous now.
- We don’t have artificial general intelligence, and domain-specific progress continues to be a function of: ability to describe and evaluate goals, thinking time horizon required, and data availability (in that order).
- Automation still follows colonization wave dynamics.
Detailed Predictions #
- Language model-based chat bots are widely used for applications like tech support and Siri & “Hey, Google” style assistants. These assistants are capable of tasks such as booking a flight or ordering an item on Amazon but struggle with anomalies encountered during these tasks (the hardest 5-10%). In other words, they’re still worse than an especially competent human assistant.
- Robotics has finally started to become useful as well, with robots increasing manufacturing productivity. That said, robots still aren’t capable of “messier” tasks at the level required for widespread adoption. For example, >95% of hedge clipping is still done by humans.
- AI systems are (believed to be) making US military decisions with limited to no human-in-the-loop. This isn’t as apocalyptic as things like the slaughterbots video make it out to be but does lead to mistakes (in hindsight) and exacerbates the trend of the average person being emotionally distanced from war and conflict.
- Because of the synergies with VR, having a personal avatar that’s based on images & motion tracking of yourself + AI-aided customization is 100X more common than currently. In general, art becomes even more democratized with basically anyone able to produce expert-level art using generative models. This will include videos in addition to images.
Biology (research) #
- Progress on GWAS continues apace. Problems with non-European ancestry groups are mostly solved, at least for other large (non-European) populations. Polygenic risk scores’ predictive power continues to grow for complex diseases, improved by a combination of the growth of eQTL-style strategies and at least one biobank with >2x the number of individuals in the UK Biobank (UKBB).
- ML models see widespread adoption as another tool in the biology toolkit essentially as described in this awesome blog post.
- Protein engineering becomes much easier due to a combination of improved standardized workflows, better ML-enabled tools, and cheaper synthesis and (perhaps even protein) sequencing.
- AI does and doesn’t “transform” drug development. It does in the sense that: 1) AI is nearly as ubiquitous in drug development as it (currently) is in technology companies and 2) there is at least one new big pharma (top 10 by market cap) company that based its early thesis around AI. It does not in the sense that for Amdahl’s Law reasons, drugs still take years to develop and clinical trials are still risky and expensive. Also, AI will continue to be more helpful in the earlier stages of drug development than in the later ones.
- Eroom’s Law (the decades-long trend of inflation-adjusted R&D cost per approved drug doubling roughly every 9 years) breaks but doesn’t reverse.
- Producing biologics and other non-small molecule drugs is >10x cheaper, more reliable, and more accessible than currently. (Technically this could also fall under “Synthetic Biology”.) Non-small molecule drugs – gene therapy, cell therapy, etc. – gain major (>20% from ~2% currently) market share in disease areas outside of cancer. (Small molecules still constitute the majority of prescriptions though.) Awareness of these alternative modalities is also higher, with informed non-experts knowing about them in the same way they currently know what (e.g.) chemo is.
- Gene therapy provides (approved) cures for 10s or even 100s of currently untreatable genetic diseases. Several gene therapies are also approved for common severe diseases such as cancer, heart disease, Alzheimer’s, and Parkinson’s.
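As background for the polygenic risk score predictions above (and the embryo selection one below), here's a toy sketch of what such a score computes. The variant IDs and effect sizes are invented for illustration:

```python
# Toy polygenic risk score: a weighted sum of risk-allele counts.
# In practice the weights (betas) come from a GWAS; these are made up.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(genotype):
    """genotype maps variant ID -> risk-allele count (0, 1, or 2)."""
    return sum(beta * genotype.get(variant, 0)
               for variant, beta in effect_sizes.items())

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person))  # 0.12*2 - 0.05*1 + 0.30*0 = 0.19
```

Real scores sum over thousands to millions of variants and are calibrated against a reference population; the prediction above is about how much better those calibrated scores get.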
Healthcare & healthtech #
- ‘Omics screens combined with biomarker models see broader, although still not widespread, adoption. See current aging clocks for an early peek at both what this looks like and the challenges associated with them. This continues to drive a divide in the US where the healthiest people become even healthier and more health conscious but people who don’t actively pursue health improvements don’t benefit.
- AI systems will routinely be used by pathologists and radiologists but will not replace them entirely. This means I think the number of radiologists will not decrease by more than 1/2 from its current number.
- Electronic health records are still a balkanized, painful-to-interact-with mess. (This one’s especially vague but I just wanted to register pessimism here.)
- Direct-to-consumer personal health tracking has become even more common. Smart watches or their next iteration include glucose monitoring and other currently unavailable metrics. Outside of ‘omics screens, the healthcare system continues to mostly not integrate this finer-grained, time-series data into its decision-making apparatus, except for athletes, high-net-worth individuals who pay for personalized care, and individuals with specific conditions (diabetes, cancer).
- Whole-genome sequencing continues to grow, although not as fast as its biggest proponents (me included) would like. Say between 5 and 10 million (was ~1 million as of 2018) people have had their whole genomes sequenced. (See next section for my take on sequencing & synthesis costs in general.) Tech-forward doctors and healthcare systems use genomic risk scores more than they do now but still mostly for a relatively small set of diseases like breast cancer.
- The obesity epidemic has plateaued or started to reverse but, if the latter, only slightly. This is in large part due to continued progress on next generation weight loss drugs. E.g., maybe obesity drops back below 40% but remains above 30%.
- The war on cancer isn’t “won” in the sense that >1 million people worldwide still die of cancer every year. On the other hand, cancer rates continue to go down, with deaths at least halving from their current rate (~10 million per year globally). Unfortunately, heart disease is also still a significant cause of death in the US, with >300,000 people still dying of heart disease every year. (In some sense, both of these are quite optimistic predictions, since 1 million cancer deaths is ~1/10th of the current number.)
- Longevity continues to become less esoteric and fringe as both a research area and industry. The industry has a combined market cap of $1B+. Qualitatively, working on healthspan extension is considered “reasonable but a little out there” in the same way working on making more general artificial intelligence is considered “reasonable but a little out there” currently.
- The max verified lifespan of a human will still be under 125 years. I less confidently predict that this is because adoption of the highest-potential interventions is still concentrated among people who are currently young to middle-aged (and, later, middle-aged to older), so the oldest cohorts won’t have benefited from them.
- The number of people who use IVF to have kids continues to grow from slightly more than 1% to somewhere in the range of 3-5% (although possibly even higher).
- In vitro gametogenesis is possible and has been used to produce human babies. However, it’s still quite expensive and much less popular than classic IVF.
- Polygenic scores for at least one common complex disease risk (heart disease, diabetes, etc.) are used for embryo selection by ≥5% of people who get IVF.
Synthetic Biology #
- Synthetic biology toolkits mature enough to make interesting DIY bio projects possible. 100X more people than now are involved in DIY bio. An individual working out of a community biolab can do projects like making plants fluorescent as a side project. At least one successful synthetic biology startup is founded by people who started working out of an accelerator or community lab rather than academia.
- Unfortunately, sequencing and synthesis cost reductions continue to underperform what would be predicted by a naive Carlson Curve but also don’t plateau at their current price (potentially due to increased adoption of nanopore-like technologies combined with computational techniques for reliable base-calling). As a result, commercial whole genome sequencing for an individual still costs at least $10 but less than $100. Projects like the one proposed by GP-write that involve synthesizing whole genomes from scratch become feasible but remain expensive (in the millions of dollars range).
- De novo design & engineering of complex organisms (as distinct from de novo synthesis of modified versions of existing genomes) mostly remains science fiction. Companies still mainly focus on biomanufacturing of homogeneous materials and compounds or mammalian cell engineering for therapeutic applications.
- Biomanufacturing has had several (~5) major successes. An example of what I’d consider a major success would be >5% of global palm oil being produced via fermentation.
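As a sense check on the sequencing-cost prediction above, here's what a truly naive Carlson-curve extrapolation gives. Both the starting cost and the halving time are my rough assumptions:

```python
# Naive constant-halving-time extrapolation of whole-genome sequencing cost.
cost_now = 600.0           # rough retail WGS cost in USD, early 2020s (assumed)
halving_time_years = 2.0   # assumed; the historical pace has varied a lot
years = 2050 - 2022

naive_cost_2050 = cost_now * 0.5 ** (years / halving_time_years)
print(f"${naive_cost_2050:.2f}")  # prints $0.04 -- pennies per genome
```

A 2050 price in the $10-$100 range is therefore a bet that cost declines fall far short of this naive extrapolation, not merely a little short.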
Cryptocurrency #
- People routinely use decentralized smart contract-based applications – that aren’t exchanges – or cryptocurrencies (on a blockchain) in their daily lives.
- The market cap of the top blockchain tokens/cryptocurrencies is larger than the market cap of the top precious metal (in 2050).
- At least one country has adopted a Central Bank Digital Currency.
Things I decided not to predict about #
As I was going through this exercise, I realized there were several topics about which I don’t think I’m equipped to make predictions. I’ve included some examples of these topics/predictions to make my perceived ignorance explicit.
- Impact of AI on law, consulting, & finance. Initially, I had something about how AI would make a dent here but not as much as you might naively expect. It turns out this is incredibly hard to operationalize and I’m not knowledgeable enough to succeed at doing so.
- Is the Great Stagnation ending? I was tempted to predict that the Great Stagnation would end, but then realized I’m not informed enough about how economic productivity even works to make good predictions here.