Stephen Malina

This is my blog. There are many others like it but this one is mine.

AI Omens: Signs of AI acceleration

A friend recently asked me what my “AI timelines” were. Especially with recent progress in image generation, language modeling, code generation, robotics, protein folding, and other areas, this has become an increasingly popular question amongst those who are paying attention.

However, I find trying to answer this question pretty frustrating. There already exist long, thorough reports (another) which build quantitative models and carefully define terms to leave as little ambiguity as possible. These reports then get carefully debated (here, here and here). I genuinely admire the authors of these reports and critiques for putting so much time and energy into answering this question, but as someone who doesn’t work directly on advancing “pure” AI, I find it emotionally difficult to invest the time required to develop an opinion on them and then have that opinion inform my own mythical personal timelines estimate. I recognize that shouldn’t stop me, because it’s important, but I suspect some of my aversion also comes from feeling like the whole exercise is overrated as an activity, especially by certain sects of the internet.

On the other hand, even if it’s not worth putting that much time into or trusting that much, it’s probably worth having a repository of thoughts I can point to as my current views & confusions. Furthermore, as a fan of predictions and bets, I feel like a total shithead just saying that I don’t have any clue whatsoever and am not gonna think about it. (Yes, this is of course what a well-adjusted person would do, but I’ve never claimed to be even close to well-adjusted!) Up until now, I’ve solved this by saying that I think it’s likely (>=60%) that we’ll have transformative AI in 50 years but unlikely (<=20%) in 10. At the prodding of the same friend, I recently realized that although the prospect of refining this into a probability distribution makes me want to vomit, I do have a fairly clear idea of signs that AI is progressing faster than I currently expect (which is already pretty fast relative to the average person). “Done is better than perfect,” as they say, so I decided to try and write these up under the title of “omens”.

The rest of this post describes a set of “omens” that would constitute faster-than-expected (by me) AI progress. I’ve grouped these omens under headings: Rapid economic growth, Demonstrations of formidable intelligence, and Progress towards open-ended and longer time horizon reasoning. (I may add a fourth, Miscellaneous research progress, in the future, but for now I’m going to leave it out.) These groupings capture the different buckets in which accelerated progress would lead me to expect faster overall AI progress. There’s some overlap between them, but I still found it helpful to break things down this way. In each section, I first give some brief context on why I think accelerated progress in this bucket would lead me to expect faster progress overall, and then describe a set of concrete short term (0-5 year) and medium term (5-15 year) omens. Where possible, I’ve linked to prediction markets or other means of tracking predictions. I’ve also tried to give credit in cases where someone else first came up with an “omen”, but my memory’s not perfect, so I apologize if I failed to credit you (and please email me if that’s the case). I expect to come up with more omens after writing this post through (hopefully) discussing it with others, watching AI continue to advance, and doing the magical think-real-hard thing. I plan to add these omens to the appropriate section with an “(Added: )” addendum to differentiate them from the initial set of omens.

As always, if you have ideas for more omens, disagree with me, or have any other feedback on the post, don’t hesitate to email me or comment. A big motivation for writing this post is to take advantage of the power of the internet to correct my naive views.


Rapid economic growth

I expect that leading up to AI having a transformative impact on the world, we’d see some sort of economic acceleration. I’m not sure exactly how much, so this is asymmetric in the sense that the more we see accelerating AI-driven economic growth, the more I’d update towards faster timelines.

I also think it mostly makes sense to break this down into smaller pieces both by industry category and more specific predictions. E.g., if we start seeing LLMs automate all non-government captured white collar work but not replace lawyers and doctors because it’s illegal to do so, we can just update on that even if the GDP #s don’t look wild.

One flaw in this view is that we’ve made GDP-growth-driving changes illegal in tons of industries. (In a comment on a draft of this post, Will Hickman said: “I think there’s a less intense version of this which is just that there’s an insane amount of inertia inside a lot of big companies and the process of identifying, isolating, and automating workflows is much harder in the core of a company than it is on the fringes (fringes being stuff like customer support, warehouse workers).”) For example, if I had to guess, radiologists and pathologists may not get replaced until we have AGI, even though we could potentially replace them now if we had sane data sharing, didn’t let asymmetric concerns about AI mistakes (without accounting for human mistakes) dictate our policies, and constructed active learning loops for them that accepted that they’d occasionally get things wrong.

Another flaw in this view is that cost disease, combined with serial processes being bogged down by their slowest step, means that automating one step in a complex process may not massively increase its productivity. This is especially true in business areas that are zero-sum. E.g., finance pays people a ton of $ to basically get on the phone with clients, which AI isn’t going to do any time soon, so if AI automates other aspects of finance, people will just spend more on selling to clients.

Short term (0-5 years) omens

  • Self-driving car deployment continues apace and even accelerates, starting to replace a meaningful (10%) fraction of the ride share market.
  • At least one LLM-driven product such as Copilot has 1 million unique active users. One benchmark here is Figma, which according to this article from June, 2022, has 4 million unique users. There’s also a subtlety here that’s a little hard to deal with, which is that AI tools like DALL-E 2 get a ton of sign-ups but then experience rapid user drop-off. I’m just going to have to use my judgement here, since I’m looking for widespread and persistent usage. I’d buy even faster acceleration if white collar AI tools routinely start getting used to make spreadsheets, powerpoints, etc. for businesses, but I suspect this is unlikely even in a relatively short timelines world due to structural factors.

On a draft of this post, Will Hickman pointed out that you can imagine senior members of banks, law firms, etc. not wanting to use these tools to replace analysts because it would suggest that these tools are also better at analyst work than they themselves were as analysts. I think both Will and I expect factors like this, combined with general inertia that wouldn’t appear in naive economic models, to slow adoption much more than AI hardliners seem to expect.

  • AI manufacturing tools from companies like Covariant start achieving real market penetration.
  • Radiologists and pathologists at least become more productive as a result of AI tooling. I controversially do think they could probably be replaced in the medium term if not the short, but I highly, highly doubt they will be for regulatory capture + extreme precautionary principle reasons.
  • AI systems start seeing slow but real adoption in food industry tasks like making burgers and taking orders. Or for another example, automated coffee machines like Cafe X (which I recently used on a layover in SF) spread to more airports or similar settings.
  • LLM companies’ products achieving promising uptake.
  • Signs that robotics systems that help humans are on the cusp of being useful. E.g., Google or Everyday Robotics demonstrates an AI janitor that actually operates autonomously in their or another office.

Medium term (5-15 years) omens

  • Self-driving cars have nearly entirely replaced ride share drivers, have taken a serious chunk out of the truck driving market, and are increasingly common amongst car owners. For a close-to-home example of what faster-than-expected progress could look like here for me personally: if I have children by then (likely), we’d trust self-driving cars to drive them places without us in the car.
  • White collar businesses start reducing the # of junior employees because mid-level people can “just ask” for what they want.
    • If you want a concrete example, imagine an “AI analyst” that replaces what a bunch of my friends did for the first two years out of school in an investment bank. Also see the sidenote above for why I expect this to be slower than it would be if it were purely a question of AI capabilities.
  • At least one LLM-driven product reaches $1B in annual revenue or an LLM company becomes a deca-unicorn. As a benchmark, JetBrains’ annual revenue was $360M in 2020 and GitHub’s was supposedly $200-300M in 2018.
  • Entirely automated factories are in operation for at least a small set of goods or maybe Amazon warehouses reduce their employee count by a factor of 10 or more.
  • Robotic helper systems achieving market penetration. Examples:
    • People use AI cleaners that actually navigate their homes/apartments and can do multiple tasks.
    • AI nurses assist doctors or in senior care homes.
    • AIs replace people who clean sidewalks, empty garbage cans, etc.
  • Radiologists and pathologists replaced at least for certain situations. (Unlikely for reasons mentioned above.)
  • At least one type of surgery can be performed totally automatically (even if a human oversees it).

Demonstrations of formidable intelligence

AI systems beating highly skilled humans at more diverse, open-ended intellectual tasks, or just accomplishing feats I consider especially intellectually impressive, would update me towards AI being on track to become superhumanly intelligent. To the surprise of my six-months-ago self, there are already signs that certain intellectual tasks for which we have ample data, such as math, may be easier for AIs to solve than we thought.

However, one thing that still seems quite far away is AI systems being able to invent new concepts in the way the best scientists and inventors do. For that reason, signs that AIs may also be close to doing this would be especially strong omens for me.

Short term (0-5 years) omens

  • AI wins a medal in the IMO by 2025
  • AI proves an important, broad theorem that humans have been unable to prove up until now
    • Examples: Riemann Hypothesis
  • AI programming model outperforms best competitive programmers (Manifold questions: 1).
  • AI programmers start replacing junior programmers in industry. Example: Can I use an AI system to take a first pass at designing a web app that I’ve described in text/maybe mock-ups?
  • AI programming system can automatically and reliably perform large scale refactorings. Examples include migrating a codebase from one framework to another, “remove huggingface from this modeling script and replace it with pytorch”, and other similar tasks.
  • AI research assistants helpful enough for analyzing papers that I actually trust their responses to questions vs. just using them as glorified search.
  • AI systems beat humans at forecasting medium to long term events.
  • AI engineering systems design new materials or artifacts that unblock a key bottleneck in some engineering process. Examples:
    • Nanosystem that has a previously unseen capability designed primarily by an AI system.
    • New material that’s actually usable and better than a material that’s had a lot of human effort put into improving it, such as carbon fiber.
  • Models continue to knock down biological prediction tasks. Examples:
    • Protein complex structure prediction for large protein assemblies vs. small sets of monomers. As a close to home example, if a model could predict the entire structure of the AAV capsid, that would be very impressive.
    • In a similar vein, if structure models become able to predict variant effects accurately, that would constitute greater than expected progress.
    • Accurately predicting time series dynamics vs. static structures.
    • Improved receptor binding prediction, with bonus points for if it’s clearly getting used to design drugs more quickly/effectively.
  • Various forms of progress towards the world described in A Future History of Biomedical Progress in the next 5 years. Examples:
    • Models that learn from scientific papers and writing actually doing something useful. (I will put myself out there for this one and say that I think this is very unlikely (<10%).)
    • Models drive automated robotic laboratories that both run experiments and automatically decide which experiments to perform next.

Medium term (5-15 years) omens

  • AI makes a truly novel scientific discovery. (Note that Alphafold or a similar system “discovering” a new protein wouldn’t count unless the human direction was at the level it would be for e.g. a graduate student.) Examples include:
    • AI invents an entire new class of useful protein or molecule akin to mRNA therapies or CRISPR given a text description of a goal.
    • AI invents an entire nano-system for accomplishing a goal that proves possible to create.
    • AI discovers some entirely new biological principle or structure. Think something on the order of the DNA helix, the central dogma, etc. Specifically, this would mean the AI crystallized the concept vs. just assisting in the discovery.
    • AI discovers a new physical law or somehow contributes to resolving the problems with the Standard Model of Physics.
  • AI designs entire engineering systems, either software or physical, based on design documents & Q&A with humans.
  • AIs not only prove theorems but also make novel mathematical conjectures and invent new mathematical objects.
  • AIs can understand facets of entire fields of research and pass reasonable tests of knowledge about them.
  • Entire robotic labs in operation running in closed loop for meaningful periods of time.

Progress towards open-ended and longer time horizon reasoning plus continual learning

An additional axis along which seeing AI progress would push me towards anticipating transformative AI earlier is open-ended, longer-horizon reasoning in messier domains. Right now, AIs tend to be best at system 1, shorter horizon tasks like perception or short(er) text generation but struggle to maintain coherence when reasoning over longer time horizons. I believe that in order to solve hard problems and operate in messy domains, AI systems will need to be able to maintain some sort of coherence over longer time horizons. In humans, this ability seems to be partially entangled with the ability to do system 2 reasoning (plus memory and some other things). I suspect that prior to AI systems becoming powerful, they will have to expand their reasoning time horizon beyond the sub-second level.

I also suspect that for AI systems to operate effectively in the real world, they’ll need to do some sort of continual learning. While I don’t know if this’ll look exactly like what it looks like for humans, I expect it to be visible in terms of AI systems updating based on experience on a shorter time scale than they do now (i.e. without full re-training).

Unfortunately, relative to the other sections, I’m less sure what omens would signify progress in this direction. So as of writing this, I have fewer things under the short and medium term sub-sections than I do for other directions. If you have more ideas on how to crystallize open-ended reasoning, longer time horizons, and/or continual learning into omens, please let me know!

Short term (0-5 years) omens

  • Reinforcement learning agents can beat humans at hard exploration games like Montezuma’s Revenge with a comparable amount of gameplay time.
  • Large language model context lengths get long enough that models can read and discuss entire books.
  • Dialogue systems can maintain a dialogue for >5 minutes while keeping track of things previously discussed.
  • Robotics systems learn in real-time from their mistakes and continue to do so after being deployed.

Medium term (5-15 years) omens

  • AI systems can navigate and operate semi-autonomously in helper roles like nurse, aide, house cleaner, etc.
  • Dialogue systems learn and remember facts about their individual users and use these facts to be more helpful to them.


Big thanks to Alexey Guzey and Rohit Krishnan for convincing me to actually write up my thoughts on the topic. Also, a big thank you to Will Hickman for his thoughtful comments (often included as sidenotes in this post) on a draft of this post.
