Reflections on 2022

2022 was another exciting, fun year. To round it out, I wrote up some scattered reflections. Enjoy, or if you aren’t a fan of this genre of post, don’t! Also, each section is mostly self-contained and they’re not organized in any special order, so don’t hesitate to skip a section if it sounds boring.

Writing #

2022 felt like a productive year for blogging even though my post count dropped from 11 in 2021 to 6. I'm thrilled with how my and Austin's decentralization post came out. The topic's something I'd thought about a lot but been unable to put into writing until Austin and I hashed it out together. Working with Austin also reminded me how much I enjoy collaborating. Collaboration is also a strength of the medium, so I hope to do more of it in 2023.

My other favorite post from this past year is my review of Elmer Gates and the Art of Mind-using. While this post may appeal to a more niche audience, it helped me finally collect my thoughts about the book after years of mulling it over and struggling to wrap my head around it. The book’s also obscure enough that I suspect my counterfactual impact in terms of exposing people to its ideas is high.

Looking ahead to 2023: this past month, I've been rediscovering the joy of dense writing. My writing of the past few years has heavily skewed towards a Frankensteinian patchwork of the plain style and a related style I associate with the SlateStarCodex sphere. While the Platonic ideal of this style (which my writing doesn't live up to) achieves valuable clarity and precision, my heart longs to produce writing that achieves aesthetic goals in addition to utilitarian ones. I'm not yet sure what form my attempts at this will take, but I intend to do something.

Ironically, I've also been feeling a desire to write more pedagogical material about some combination of ML and bio. While the internet is filled with ML tutorials, bio has much less of a culture of posting explanations online. As a result, topics at the intersection of ML and bio lack clear tutorials, and good information on even relatively basic bio methods can be hard to find. In contrast to exploring a different style of writing, I'm not sure whether I'll actually act on this desire. Explaining things trades off more directly with trying to learn new things (unless I'm explaining a thing I'm learning, which isn't a bad idea), and learning new things is something I want to be more deliberate about this upcoming year.

Zooming out, I feel like the benefit of writing on the internet is massive (see Ben Kuhn’s recent post) but can take some time to manifest. For me, 2022 was definitely the year in which the promised benefit of my internet presence fully manifested. New friendships based on initial internet interactions solidified, collaborations happened, and ongoing correspondence led to new ideas. I’m incredibly grateful for all these things and continue to think that, for all its issues, the internet is still underrated.

Work #

This was another great year for work. My job at Dyno continues to be the best job I've had, and I remain passionate about the mission and team. While most of my work isn't shared publicly, I played a small role in a paper we published and presented at both MLCB and the NeurIPS LMRL workshop.

The biggest shift in the latter half of the year was moving from being an individual contributor to being a manager. I feel green enough at it that I don't think I have novel things to say yet, although I hope to in the future. I'm learning a ton and enjoying it quite a bit, while also experiencing some of the classic struggles (such as the urge to solve problems by coding them myself).

At the company and industry level, reading business books, talking to people at similar companies, and observing our own challenges have all strengthened my conviction that large chunks of software wisdom rely on an implicit presumption of fast feedback loops. Ideas like getting things in front of customers quickly and optimizing for cycle time may still apply in the world of therapeutics, but they at least require a lot of rejiggering to deal with the reality of biological experimentation. So far, my thinking about this has been scattered and has led me to explore ideas from other areas such as manufacturing (see the section on reading and learning), but I hope to write more about this in the future.

AI #

I've been working in ML (for biology) for over 3 years now and, since ~2020, had kept track of work on scaling laws and other signs suggesting AI progress wasn't slowing, yet I was still surprised by AI progress this year. The capabilities unlocked by combining pre-trained language models with fine-tuning shocked me, in particular for areas like math, where early signs suggested scaling might provide less of an advantage but subsequent results showed that the right fine-tuning dataset combined with prompting techniques unlocks one. In image generation, where I'd been paying less attention, the rapid transition of models from amusing toy to potentially transformative tool snuck up on me.

Something I haven't seen discussed as much is how the second half of the year's progress has been a triumph for innovation in the Matt Ridley sense. As more models have transitioned from research artifact to product, many discoveries about how to deploy them have come from outside the traditional research community. On the LLM side of things, while credit attribution often hides or minimizes this, many prompting improvements either originated or were refined on Twitter by people like Riley Goodside and only later got packaged into papers or other legible artifacts. I expect this trend to continue as companies built on LLMs refine prompting strategies in the gauntlet of the consumer marketplace, although many of these hard-won learnings will likely remain secret. On the image generation side, while all three of DALL-E 2, Midjourney, and Stable Diffusion enabled an explosion of public experimentation with prompting tricks, only Stable Diffusion allowed for tool-building on top of it.

In terms of my own state of mind on macro questions around AI: like anyone who's not bullshitting, I don't know what's going to happen. I'm unconvinced by the harshest critics who declare that deep learning is fundamentally flawed and doomed to hit a wall, but I'm also not yet convinced AI is on the precipice of catalyzing rapid economic change on the scale of the industrial revolution. My views also shift around depending on who I've read or talked to in the past week or which new results just came out. So, consider the rest of this a snapshot of rapidly evolving views. I expect my views to have changed dramatically, and hopefully solidified, within the next 6 months and certainly within the next year.

I'm unconvinced by the critics because I've observed them move the goalposts as large language models continue to trample previously pointed-to benchmarks (such as Winogrande). I'm also just not convinced by their arguments. While I suspect LLMs on their own still lack some key ingredients of "general" intelligence, any ingredient I can point to has multiple groups of researchers actively working on it. For example, I pointed to long-term memory and continual learning as capabilities to watch out for in my omens post. Both of these areas have seen significant active work in the past year (Memorizing Transformers is an example of the first) and are discussed as areas for improvement by the community. On top of this, seeing how pre-trained models enable reinforcement learning both for language generation and for open-world game tasks (VPT, MineDojo, Building interactive agents in open world), I have a spidey sense that scaling combined with architectural innovation may have been more of a bottleneck to breakthrough applications than some of these other missing ingredients.

On the other hand, while AI developments are already improving translation, chip design, ETA prediction, and many other areas beyond language, I'm still not sure how much the AI developments we're seeing will translate into broadly accelerating economic growth in the next 5-10 years. Like Mulder in The X-Files, I want to believe, but seeing how previously heralded technological breakthroughs failed to achieve the promised level of transformative impact and reading things like Testing the Automation Revolution Hypothesis have made me wary of prematurely assuming impressive-looking technologies translate to real-world impact. Especially when it comes to impact in the world of atoms rather than bits, I have the sense that regulatory barriers combined with reality's surprising detail place a speed limit on how quickly AI can transform industries.

Playing my own devil's advocate, one thing I can see accelerating economic growth would be continued progress in robotics. Broadly applicable, easy-to-steer robotics seem special to me because so much of the economy appears to bottom out in humans understanding commands and doing some amount of physical work. The messy interplay between these two requirements, understanding instructions and deploying physical dexterity, has made me skeptical of rapid robotics deployment in the past, but seeing systems like PaLM-SayCan, RT-1, and VIMA enable robotics to ride on the coattails of scale and LLM improvements has me wondering if robotic workers may come sooner than I previously expected. Even with this, though, I put some weight on the views of people like Rodney Brooks, who emphasize the gap between impressive demos like SHRDLU and real-world deployment.

Of course, the possibility of artificial general intelligence (AGI), transformative AI, or PASTA looms over all of this like a spectre. To this point, much of my discussion has assumed that AI will behave like a transformative, albeit "normal," technology (such as electricity or mechanization) to which we can apply relatively standard economic reasoning. All of this goes out the window if you assume we're going to develop AI that exceeds human intelligence and/or multiplies the amount of human-level intelligence by a factor of 10-1000. In that world, the entire economy could change overnight as entirely new fields of science and technology emerge at the speed of melting GPUs, or it could be our doom. My thinking on this topic is even more confused than on the rest of this, so I'm going to declare "Mu" here.

Jumping back to the more personal, and related to the above, part of me is definitely drawn to the areas in which AI progress seems to be limited only by the number of GPUs. One of the challenges of working with the biological world is that experiments take time. Given that, I can't help but occasionally feel jealous seeing how people working with language models and image generation can iterate on prompts, models, and other things at the speed of code. On the other hand, I remind myself that this exact psychological advantage of the world of pure software may partially explain (money was also a factor) why tech failed to influence the world of atoms for so long, and that building things requires dealing with the messiness of the real world (at least for now).

Going a bit more meta, I've also been thinking about how to continue refining my views on AI. One fear I have is falling prey to the trap Gwern describes here. While I think I avoid the false certainty described there, I am definitely prone to the status quo adaptation he describes. ("The voice utters simple arguments about why the status quo will prevail, and considers only how the wild new idea could fail (and not all the possible options).") I can already feel the sense of surprise slowly fading as I quickly adapt to the new normal in which an AI assistant helps me code better and can help me rewrite text or find answers to questions. To try and combat this pundit-style fading of surprise and goalpost shifting, I've been aggressively capturing my thoughts and predictions on my blog and on Manifold. Most of these predictions won't resolve until well into the future, but by the end of 2024 I expect to have a clearer sense of whether my current expectations around the pace of progress are calibrated. However, I worry that making predictions and occasionally writing about them isn't enough to keep my views on AI evolving as things develop. If any readers have thoughts on other things I can or should be doing, I'd love to hear them.

Wrapping up this section, I feel incredibly grateful to be (broadly) part of a field that's clearly going through a scientific and technological renaissance. In addition to the ML researchers who came before, I'm especially grateful to the unsung heroes who toiled to maintain Moore's Law long enough to make all this possible. I look forward to seeing what the next year(s) bring.

Learning and reading #

I failed to stick with my plans for structured technical learning in 2022. Amid a storm of faltering discipline, work competing for energy (I use "energy" not "time" for a reason), and distraction by AI progress/current events/drama, my technical learning efforts followed a consistent pattern of starting a textbook or course and then stalling out a few chapters/lectures in. The one counterexample to this was Immune, which I highly recommend but which borders on popular science in its readability. One takeaway from these repeated failures is that it's better to make slow, consistent progress vs. set unrealistic expectations and then fail to achieve them at all. I'd rather have made it through my entire plan for any one of reinforcement learning, intermediate cell biology, or organic chemistry vs. the 20-30% of each I ended up going through.

On the flip side, I enjoyed reading about management and strategy much more than I expected. While I'm wary of business books in general, I got lucky with the books I either found myself or had recommended to me. On the management side, I read High Output Management and An Elegant Puzzle.

For a book written before email was widespread, High Output Management impressed me with its relevance. Grove's writing and thinking have a groundedness that many other business authors' lack, which I suspect comes from his thinking being clearly connected to his observations in the trenches at Intel. When Grove tells a story about Cindy, who's balancing the needs of her functional and business vertical managers, you can tell that while the names may be changed, the example is both real and not dumbed down. Grove also balances an analytic perspective with a real understanding of human behavior and incentives better than any other management author I've read, without falling prey to idealistic or ideological thinking.

Relative to High Output Management, An Elegant Puzzle had more relevant things to say about the nuts & bolts of managing software teams, which I found helpful for getting a sense of best practices beyond what I've seen firsthand and heard about from friends. Will Larson, the author, shares Grove's pragmatism and understanding of trade-offs, which shines through in his discussion of the challenges managers face. Related to my point in the work section, reading this book fed into my realization that software wisdom feels ecologically adapted to environments with fast feedback loops from reality and the ability to draw clear abstraction boundaries. This means that figuring out which of the book's ideas I can adapt to my own circumstances, and how, requires creativity, but realistically that's always partially true of advice anyway.

On the strategy side, I read Good Strategy, Bad Strategy at the recommendation of multiple coworkers and reread Working Backwards on a whim. Good Strategy, Bad Strategy crystallized the idea of strategy for me. This is self-indulgent, but it feels a bit like a more thoughtful version of "Ideas not mattering is a psyop," with actual evidence backing it up. I've had notes for a review sitting around for a few months now but haven't gotten around to converting them to prose. If you're interested, I'm happy to share them in their current form.

Working Backwards seems like one of the best books in the genre of “this is how a company actually functions with some but minimal sugar-coating or hindsight bias.” This is an amazing genre and I wish there were way more books like this. That said, it’s important to recognize that this genre is not the same as the hypothetical genre of “these are all good ideas for your company.” I don’t feel the need to write a review of this book because Cedric Chin has already written a better one than I could.

Circling back to technical stuff: while my structured efforts failed, I still managed to learn a lot via more haphazard means such as discussions with coworkers and friends, journal clubs, and Twitter/arXiv trawling. Through a slow process of accretion and mystical background-subconscious mental jigsaw puzzle pieces falling into place, I feel much better about my ML knowledge than I did a year ago in terms of my ability to understand the core of results in different areas. As an example, it took me a long time to really grok what the hell probabilistic ML people were talking about, but I now feel like I can generally follow when people start going on about posteriors in the context of methods that weren't originally probabilistic (such as neural networks). One obvious lesson I seemingly need to have hammered into me again and again is that learning can be a lot more pleasant when it involves other people, even if that comes at the expense of some structure. I'd like to find a way to integrate this insight into more structured learning efforts in 2023 if I can.
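To make that last point concrete, here's a minimal sketch of one way "posteriors" show up for neural networks: the MC dropout trick, where leaving dropout enabled at inference time lets you treat repeated stochastic forward passes as rough samples from a posterior over functions. This is my own toy illustration (untrained model, made-up shapes), not something from any of the sources mentioned above.

```python
import torch
import torch.nn as nn

# Toy, untrained network with dropout. With MC dropout, each stochastic
# forward pass acts like a draw from an approximate posterior over weights.
model = nn.Sequential(
    nn.Linear(1, 64),
    nn.ReLU(),
    nn.Dropout(p=0.1),  # the source of stochasticity at inference time
    nn.Linear(64, 1),
)

x = torch.linspace(-2, 2, 50).unsqueeze(-1)  # 50 inputs, shape (50, 1)

model.train()  # unlike model.eval(), this keeps dropout active
with torch.no_grad():
    draws = torch.stack([model(x) for _ in range(100)])  # 100 stochastic passes

predictive_mean = draws.mean(dim=0)  # approximate posterior predictive mean
predictive_std = draws.std(dim=0)    # spread across passes ~ model uncertainty
```

On a trained model, the spread across passes gives a (crude) uncertainty estimate; here the point is just the mechanism of wringing a distribution out of a method that wasn't originally probabilistic.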

On the leisure reading side, I'm no longer embarrassed to admit that a highlight of my year was discovering progression fantasy. The genre has a lot of crap but pushes all my buttons by focusing on characters who continuously grow. My two favorites so far are the Cradle series by Will Wight (who is also a writing machine) and Mother of Learning. Azalea Ellis's A Practical Guide to Sorcery comes in close behind but has the advantage of weekly chapter releases if you support Ellis on Patreon like I do. I've been disappointed by some of the other books I've tried in the genre, such as Virtuous Sons, so I'd definitely recommend trying before you buy if you dip your toes in.

As always, I read a fair bit of science fiction this year. In the past week, I've received some great recommendations from Twitter, which I've started working through over the break. Other highlights include the hokey but memetically-fit-for-my-exact-brain Upgrade, Singularity Sky and Iron Sunrise by the consistently excellent Charles Stross, and House of Suns by Alastair Reynolds (hat tip: Twitter again).

To close this section, a harder-to-place urge I’ve been feeling over the past month – maybe related to reading David Goggins’s new book – is the desire to conquer more concrete challenges in 2023. This may mean going through a hard course or textbook, working on a more defined side project, or something else. I’m not sure, but if you have ideas or interest, let me know!

Predictions #

As discussed in my 2021 predictions post, I decided not to make annual predictions in 2022. In hindsight, I made the right decision by instead focusing on making more predictions on Metaculus and Manifold. I ended up spending most of my time on the latter, making what looks like >100 predictions over the course of the year. Based on profit alone, I'm happy with the result, given that I took a major loss on a not-that-representative doubled-down bet about how much people would donate to charity and had to climb my way back. In line with the post, I tried to focus my prediction efforts on areas like AI, general tech, and longevity where I have more inside knowledge. I've been happy with this decision both because I've had more predictive success here and because I think it has helped, and will continue to help, me adjust my views on AI more rapidly as things evolve.

I've also enjoyed creating new markets on Manifold and letting others bet on them. Creating precise but still interesting questions is surprisingly hard, and some of my markets (example) sparked helpful discussion pointing out ambiguities in the resolution criteria.

Focusing more on Manifold did come with the downside that I made many low-effort, intuitive predictions but fewer predictions where I put effort into building an actual model. I'm not sure 1) how much I should care about making more of the latter type and 2) whether this is a me problem, the incentive gradient of Manifold, or both. Regardless, in 2023 I may try to push myself to hit some minimum budget of higher-effort markets/predictions.

Indulgent musings #

I have a sense that in 2022 I settled into, or maybe came to terms with, which parts of myself I will or won't try to change in the future, and I feel more equipped to focus on the former vs. agonizing over the latter. This may not be obvious from what I've written above, but from my perspective it feels entangled with a lot of it. Accepting my desire to take on more ambitious learning challenges without letting it mislead me towards idealism. Moving into a management role vs. previously obsessing over becoming less technical. These both have subtle but real ties to previous personality knots that now feel less tangled.

On a totally different note, although it often stresses me out, I feel blessed to live in interesting times. I remember becoming disillusioned during college when I realized that all the exciting developments I'd read about on Phys Org and Singularity Hub didn't seem to be having much of an impact on the tech I actually used. Seeing the deployment of novel vaccines, exciting research into and investment in reversing aging, ML, cheap solar panels, and much more gives the younger version of me warm fuzzies. While the COVID pandemic (obviously) came with devastating downsides, it partially catalyzed a bipartisan movement around abundance and progress, which I'm very excited about. Of course, I wish the above didn't come with war and an increasingly tense geopolitical situation, but even there I find pockets of things to be optimistic about.

Here's to another year of doing, thinking, learning, screwing up, and having fun with all of it! I know it's popular to hate on the internet and social media, but I for one look forward to another year online.