AI Omens: Signs of AI acceleration

A friend recently asked me what my “AI timelines” were. Especially with recent progress in image generation, language modeling, code generation, robotics, protein folding, and other areas, this has become an increasingly popular question amongst those who are paying attention.

However, I find trying to answer this question pretty frustrating. There already exist long, thorough reports (another) which build quantitative models and carefully define terms to leave as little ambiguity as possible. These reports then get carefully debated (here, here and here). I genuinely admire the authors of these reports and critiques for putting so much time and energy into answering this question, but as someone who doesn’t work directly on advancing “pure” AI, I find it emotionally difficult to invest the time required to develop an opinion on them and then have them inform my own mythical personal timelines estimate. I recognize that shouldn’t stop me, because the question is important, but I suspect some of my aversion also comes from feeling like the exercise is overrated as an activity, especially by certain sects of the internet.

On the other hand, even if it’s not worth putting that much time into or trusting that much, it’s probably worth having a repository of thoughts I can point to as my current views & confusions. Furthermore, as a fan of predictions and bets, I feel like a total shithead just saying that I don’t have any clue whatsoever and am not gonna think about it. (Yes, this is of course what a well-adjusted person would do, but I’ve never claimed to be even close to well-adjusted!) Up until now, I’ve solved this by saying that I think it’s likely (>=60%) that we’ll have transformative AI in 50 years but unlikely (<=20%) in 10. At the prodding of the same friend, I recently realized that although the prospect of refining this into a probability distribution makes me want to vomit, I do have a fairly clear idea of the signs that would tell me AI is progressing faster than I currently expect (and my expectations are already pretty fast relative to the average person’s). “Done is better than perfect,” as they say, so I decided to try to write these up under the title of “omens”.

The rest of this post describes a set of “omens” that would constitute faster-than-expected (by me) AI progress. I’ve grouped these omens under three headings: Rapid economic growth, Demonstrations of formidable intelligence, and Progress towards open-ended and longer time horizon reasoning plus continual learning. (I may add a fourth, Miscellaneous research progress, in the future, but for now I’m going to leave it out.) These groupings capture the different buckets in which accelerated progress would lead me to expect faster overall AI progress. There’s some overlap between them, but I still found it helpful to break things down this way. In each section, I first give some brief context on why I think accelerated progress in this bucket would lead me to expect faster progress overall, and then describe a set of concrete short term (0-5 year) and medium term (5-15 year) omens. Where possible I’ve linked to prediction markets or other means of tracking predictions. I’ve also tried to give credit in cases where someone else first came up with an “omen”, but my memory’s not perfect, so I apologize if I failed to credit you (and please email me if that’s the case). I expect to come up with more omens after writing this post, through (hopefully) discussion of it with others, watching AI continue to advance, and doing the magical think-real-hard thing. I plan to add these omens to the appropriate section with an “(Added: )” addendum to differentiate them from the initial set of omens.

As always, if you have ideas for more omens, disagree with me, or have any other feedback on the post, don’t hesitate to email me or comment. A big motivation for writing this post is to take advantage of the power of the internet to correct my naive views.

Omens

Rapid economic growth

I expect that in the lead-up to AI having a transformative impact on the world, we’d see some sort of economic acceleration. I’m not sure exactly how much, so this omen is asymmetric: the more we see accelerating AI-driven economic growth, the more I’d update towards faster timelines.

I also think it mostly makes sense to break this down into smaller pieces, both by industry category and into more specific predictions. E.g., if we start seeing LLMs automate all non-government-captured white-collar work but not replace lawyers and doctors because it’s illegal to do so, we can just update on that even if the GDP numbers don’t look wild.

One flaw in this view is that we’ve made GDP-growth-driving changes illegal in tons of industries. (In a comment on a draft of this post, Will Hickman said: “I think there’s a less intense version of this which is just that there’s an insane amount of inertia inside a lot of big companies and the process of identifying, isolating, and automating workflows is much harder in the core of a company than it is on the fringes (fringes being stuff like customer support, warehouse workers).”) For example, if I had to guess, radiologists and pathologists may not get replaced until we have AGI, even though we could potentially replace them now if we had sane data sharing, didn’t let asymmetric concerns about AI mistakes (without accounting for human mistakes) dictate our policies, and constructed active learning loops for them that accepted that they’d occasionally get things wrong.

Another flaw in this view is that cost disease, combined with serial processes being bogged down by their slowest step, means that automating one step in a complex process may not massively increase its productivity. This is especially true in business areas that are zero-sum. E.g., finance pays people a ton of money to basically get on the phone with clients, which AI isn’t going to do any time soon, so if AI automates other aspects of finance, people will just spend more on selling to clients.
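To make the bottleneck half of that argument concrete, here’s a toy back-of-the-envelope sketch (my own illustration; the step names and hours are made up): when one slow, hard-to-automate step dominates a serial process, even a 10x speedup on the other steps barely moves end-to-end throughput.

```python
# Toy, Amdahl's-law-flavored illustration of the serial-bottleneck point.
# All step names and hours below are made up for the example.

step_hours = {
    "intake": 1.0,
    "analysis": 4.0,       # automatable
    "drafting": 3.0,       # automatable
    "client calls": 20.0,  # the human-heavy step AI won't do any time soon
    "filing": 2.0,
}

total_before = sum(step_hours.values())  # 30.0 hours

# Suppose AI makes the automatable steps 10x faster.
after = dict(step_hours)
after["analysis"] /= 10
after["drafting"] /= 10

total_after = sum(after.values())  # 23.7 hours

# End-to-end speedup is ~1.27x, nowhere near the 10x on individual steps,
# because the slow client-calls step still dominates the serial chain.
print(f"end-to-end speedup: {total_before / total_after:.2f}x")
```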

Short term (0-5 years) omens

On a draft of this post, Will Hickman pointed out that you can imagine senior members of banks, law firms, etc. not wanting to use these tools to replace analysts, because it would suggest that the tools are better at analyst work than those senior people were back when they were analysts themselves. I think both Will and I expect factors like this, combined with general inertia that wouldn’t appear in naive economic models, to slow adoption much more than AI hardliners seem to expect.

Medium term (5-15 years) omens

Demonstrations of formidable intelligence

AI systems beating highly skilled humans at more diverse, open-ended intellectual tasks, or just pulling off feats I consider especially intellectually impressive, would update me towards AI being on track to become superhumanly intelligent. To the surprise of my six-months-ago self, there are already signs that certain intellectual tasks for which we have ample data, such as math, may be easier for AIs to solve than we thought.

However, one thing that still seems quite far away is AI systems being able to invent new concepts in the way the best scientists and inventors do. For that reason, signs that AIs may also be close to doing this would be especially strong omens for me.

Short term (0-5 years) omens

Medium term (5-15 years) omens

Progress towards open-ended and longer time horizon reasoning plus continual learning

An additional axis along which AI progress would push me towards anticipating transformative AI earlier is open-ended, longer time horizon reasoning in messier domains. Right now, AIs tend to be best at system 1, shorter horizon tasks like perception or short(er) text generation, but struggle to maintain coherence when reasoning over longer time horizons. I believe that in order to solve hard problems and operate in messy domains, AI systems will need to be able to maintain some sort of coherence over longer time horizons. In humans, this ability seems to be partially entangled with the ability to do system 2 reasoning (plus memory and some other things). I suspect that before AI systems become truly powerful, they will have to expand their reasoning time horizon well beyond the sub-second level.

I also suspect that for AI systems to operate effectively in the real world, they’ll need to do some sort of continual learning. While I don’t know if this will look exactly like it does for humans, I expect it to be visible as AI systems updating based on experience on a shorter time scale than they do now (i.e. without full re-training).

Unfortunately, relative to the other sections, I’m less sure what omens would signify progress in this direction. So as of writing this, I have fewer things under the short and medium term sub-sections than I do for other directions. If you have more ideas on how to crystallize open-ended reasoning, longer time horizons, and/or continual learning into omens, please let me know!

Short term (0-5 years) omens

Medium term (5-15 years) omens

Acknowledgements

Big thanks to Alexey Guzey and Rohit Krishnan for convincing me to actually write up my thoughts on the topic. Also a big thank you to Will Hickman for his thoughtful comments on a draft, several of which are included as sidenotes in this post.