State of my Mind
Things I've changed my mind about and things I haven't made up my mind about.
This page tracks things about which I’ve changed my mind over time and things about which I’m still confused. For some things, there was one moment or event that caused me to revise my view. For others, it was just a slow process of evidence accumulating and then me retroactively realizing I no longer believed what I once did. Interestingly enough, none of the mind shifts listed here came about through me setting out to change my view. The only other trend I’ve noticed is that in most cases, I didn’t go from one extreme view to another. Instead, I mostly started at a more extreme view and drifted to a more nuanced one.
The second section of this page tracks things about which I’m still confused and would like to become less confused. In some cases, I’m confused because there’s some unresolvable question which I believe society as a whole doesn’t know the answer to. In others, I suspect the answers are out there, I’m just not willing to put in the work to understand them myself… yet.
Each sub-section includes a status subtitle that describes some combo of how much work I've done to understand the issue being discussed and how likely I think it is that I'll put in more work to understand it in the future.
Changed My Mind About #
Does industry drive scientific and technological progress more than academia? #
Last Updated: 2018-10-12
In college, I had the naive view that most progress in modern-day society was coming from startups and companies. I now realize that most pioneering research still happens in universities and that industry and academia work symbiotically to advance science and technology.
Is the paleo diet an optimal human diet? #
Last Updated: 2018-10-12
Status: Consider this all a long-winded recommendation to donate to SENS or the Longevity Research Institute!
In college, I was convinced that the paleo diet maximized human health and longevity. I still think paleo (as a placeholder for a cluster of diets, not the strawman "live like a caveman" version) gets a lot right in terms of maximizing health. However, I'm no longer convinced that a reasonable paleo diet is enough to get longevity gains. I now expect that non-trivial longevity gains require a more tailored approach based on some combination of a restrictive diet, periodic methionine restriction, circadian rhythm management, and targeted supplementation.
I've also become more skeptical of nutrition science and of the idea that prosaic interventions like diet can significantly extend lifespan.
Still Confused About #
Would science and technology progress faster if all intellectual property laws were eliminated? #
Last Updated: 2018-10-12
Status: Pretty confused but would like to learn more.
On one hand, a well-functioning intellectual property system should allow innovators to reap more of the social benefit that comes from their innovations than they would otherwise. I remember a study by recent Nobel Prize winner William Nordhaus, introduced on Don Boudreaux's blog, which estimated that innovators reap only 2.2% of the total social benefit of their innovations. Assuming that number's anywhere close to correct, helping innovators reap a larger portion of the social benefit of their innovations seems good to the degree it incentivizes more innovation rather than just regulatory capture of the intellectual property system. On the other hand, the key word in my argument is "well-functioning". The little bit I've read about our intellectual property system as it relates to software, and the bit I've seen first-hand, has led me to the view that our patent system encourages hoarding more than true innovation. Also, to the degree a good intellectual property legal apparatus would reward innovation, it would also discourage imitation. I tend to assume that imitation precedes innovation, so discouraging the former would lead to less of the latter, all else equal.
Studies that somehow compare the amount of innovation between states with and without strong intellectual property laws could cause me to update in either direction, but even here I worry that there’s too much noise. Taking a degenerate example, I doubt weak intellectual property enforcement in Somalia accounts for the lower level of innovation there. Maybe economists are better at controlling for these sorts of variables than I realize?
I’d also be interested in studies that compare the amount of innovation between companies with transparent and more siloed cultures. The current business book dogma seems to be that the more open you are the more innovation you get, but this is non-obvious to me. Apple has the reputation for internal and external secrecy and they certainly innovated a lot at some point. Also, a lot of the early successful industrial research labs like PARC and Skunkworks siloed themselves from the rest of their parent organizations.
Should we prioritize research that aims at specific goals? Does this close us off from serendipitous discoveries? #
Last Updated: 2020-03-03
Status: I’m slowly moving towards the view that goal-driven research with the right goals is underrated.
I worry that the current version of this question munges together a few distinct questions:
- How important is basic research for technological progress?
- Are long-term research goals better achieved by pursuing them directly or by slowly accumulating incremental successes that build towards the bigger goal?
- To what degree are theoretical advances, in particular in fields like math and computer science, the bottleneck for further progress?
That said, I’m starting to think that goal-driven research with good goals is underrated having now spent some time back in academia.
For now, only scattered thoughts and a repository of links and quotes that seem relevant follow.
The Wrong Question? #
Perhaps my original question needs to be dissolved rather than answered. A salient example of where this question breaks down is for a discovery like the Turing machine.
Ed Boyden on Rationally Speaking #
Julia Galef brings up this question in her discussion with Ed Boyden on her podcast. Galef asks Boyden,
I want to ask you about this ongoing friendly dispute I have with some of our mutual friends, about which approach to progress is more promising? I’m gonna call the two approaches the “rationalist” approach and the “Hayekian” approach. You could also maybe name it after Michael Polanyi, if you’ve read him. Those are just my shorthand labels for them. So the rationalist approach to progress would basically be: Identify which problems would be most impactful to solve, most important for understanding or global well-being, and then strategize how best to solve them. The Hayekian, or Michael Polanyian approach, would say that instead, important progress is more likely to result not from intentionally pursuing progress and optimizing for progress, but instead, from smart and creative people playing around with ideas that catch their fancy. Some of which ultimately spark discoveries, but in ways that we could never have predicted in advance.
Now, it certainly sounds, from talking to you, like you lean more towards the rationalist approach, but is that correct?
I like Galef’s terms and will use them throughout the rest of this discussion.
Boyden gives a wishy-washy answer but subsequently replies,
But the problem is meant to be a deep enough problem that it underlies a lot of other problems. It’s a foundational problem. So as I mentioned, the two problems that I often thought about the most over the last 20 years were: How do we see everything, and how do we control everything.
So is that a problem first? It’s not a problem the way that, let’s say a classically trained physician might want to tackle tuberculosis or brain cancer, right? I said we’re trying to dig one level deeper and think about, what’s the underlying problem of biology. And as I mentioned earlier, I trained in physics and chemistry. The way I think about things is, in physics and chemistry, you have a small number of things, like protons and electrons, and a small number of ways that they interact. Like electromagnetism and the laws thereof. And of course the laws of quantum mechanics. Now the problem in biology is you have a lot of stuff, and a lot of ways they interact. We don’t even know how many cell types there are in the human body, much less the molecules within, right? Maybe there’s millions and millions and millions of variants that we haven’t yet described. So in some ways, when I look at all the struggles of biomedicine and how very little’s been really cured in the last several decades in terms of major diseases… And look at brain diseases and cancers and aging related diseases, and the list goes on and on… What’s the underlying problem, that if we solved it, might help clear up all the downstream problems? So, I feel like there’s an element of the latter, in the sense that you have to quest for the right problem. And maybe, once you of course find the right problem, then you should go after it full force. And I think very often the problem is in finding the problem.
As I understand it, Boyden’s saying the key to doing goal-directed research well is not settling on the first problem you find. When I first read this, I concluded that Boyden was too much of a rationalist (as Galef uses it) to engage with the Hayekian approach. But then, Boyden, in another interview, says the following, which I view as a steelman of the Hayekian approach.
The rush to get a short-term treatment, I worry, can sometimes cause people to misdirect their attention from getting down to the ground truth mechanisms of knowing what’s going on. It’s almost like people often talk about we’re doing all this incremental stuff, we should do more moon shots, right? I worry that medicine does too many moon shots. Almost everything we do in medicine is a moon shot because we don’t know for sure if it’s going to work.
People forget. When they landed on the moon, they already had several hundred years of calculus so they have the math; physics, so they know Newton’s Laws; aerodynamics, you know how to fly; rocketry, people were launching rockets for many decades before the moon landing. When Kennedy gave the moon landing speech, he wasn’t saying, let’s do this impossible task; he was saying, look, we can do it. We’ve launched rockets; if we don’t do this, somebody else will get there first.
Moon shot has gone almost into the opposite parlance; rather than saying here is something big we can do and we know how to do it, it’s here is some crazy thing, let’s throw a lot of resources at it and let’s hope for the best. I worry that that’s not how “moon shot” should be used. I think we should do anti-moon shots!
In other words, people now use the term moonshot to mean, "do this thing we're not sure is possible", but Boyden's more interested in understanding the phenomena enough that it's clear what moon shots are and aren't possible. I imagine in Boyden's own field of neurotechnology, an example of the former approach would be a naive entrepreneur saying, "we don't know if we can build brain-computer interfaces that actually allow you to control stuff with your brain, but let's just figure out a way to get them implanted and then see what we can do." My model of Boyden would reply, "no, first we build the tools that allow us to understand and control the brain enough that we have a good model for what circuits a consumer BCI would listen to and/or modify, and then entrepreneurs will do the hard but tractable work of industrializing and commercializing them."
Cryptography: A Triumph of the Hayekian Approach? #
A representative and oft-cited example of the Hayekian approach is number theory proving useful for crypto. Number theorists spent decades finding and proving new theorems, and then one day computer scientists realized they could use those theorems to construct cryptographic algorithms with provable security guarantees. Without the number theorists exploring the least applied of disciplines, we might have missed out on one of the key enabling technologies of the internet and online payments.
On the other hand, I've read elsewhere that a lot of the number theory used in foundational crypto is fairly basic and could've been invented just-in-time had it not existed. In addition, cryptocurrency seems to be a good example of the pull-based approach to research working well. For example, zero-knowledge proofs are becoming popular, and now there's a ton more research into them.
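To make the number-theory-to-crypto connection concrete, here's a toy RSA round trip in Python. The primes, exponent, and message are made-up toy values chosen for readability; treat this as a sketch of the underlying math, not a real implementation (real RSA uses primes hundreds of digits long plus padding schemes).

```python
# Toy RSA: encryption and decryption built directly on modular arithmetic.
p, q = 61, 53                  # two small primes (real keys use huge ones)
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, chosen coprime to phi
d = pow(e, -1, phi)            # private exponent: inverse of e mod phi (Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
plaintext = pow(ciphertext, d, n)  # decrypt: c^d mod n
assert plaintext == message        # the round trip recovers the message
```

The round trip works because Euler's theorem guarantees m^(e*d) ≡ m (mod n) whenever e*d ≡ 1 (mod φ(n)), which is exactly the kind of "pure" result number theorists proved long before anyone had a practical use for it.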
How hard would it be to modify the genes of a human in vivo? #
Last Updated: 2018-10-12
Status: Hoping to learn more about this during my internship at Dyno this summer.
People discuss the promise and perils of genetics in the context of modifying the next generation. Is it significantly harder to modify the genes of already-born humans (and have those modifications propagate into their phenotypes) than it is to modify those of embryos?
George Church’s views #
George Church seems to think it’s safer than germline modification and is founding companies based on this belief.
Are things more partisan than ever before? #
Last Updated: 2019-03-24
Status: The Coddling of the American Mind and conversations with politically informed friends have pretty much convinced me that things are the most partisan they've been in recent history in the United States.
How solved is physics? How likely is another major paradigm shift in physics? #
Last Updated: 2018-10-12
Status: Uncertain, willing to read non-technical material about it. Very interested in the more general notion of anticipating scientific paradigm shifts, but it seems unlikely I'll ever resolve the physics version of this question unless some physicist literally writes up the evidence in a post or quantum computers suddenly break RSA cryptography and the entire internet. Reading the comments on Scott Aaronson's blog convinced me that the level of understanding required to answer this question from a technical perspective is far above what I'm interested in or willing to achieve.
When I read about the early 20th century's series of major physics paradigm shifts (relativity, Heisenberg's uncertainty principle, all of quantum mechanics) and compare what happened then to what I know of recent developments in physics, it sure seems like more was happening then than now. I'm skeptical that in 30 years people will look back on the physics done between 1980 and 2018 with the same sense of awe with which we now look back on the 30-40 year period leading up to the Manhattan Project. On the other hand, maybe that's because science biographers haven't yet written the biographies of more recent physicists like string theory pioneer Ed Witten (besides this article in the Baltimore Sun) that would allow readers like me to appreciate their discoveries.
The whole thing’s also quite hard to measure. If empirical evidence substantiates string theory, that will be a big development for physics but won’t, as far as I can tell, have an immediate application in the way atomic theory made nuclear weapons and power possible.
But then there's quantum computing. First of all, what era even gets credit for that? Feynman proposed the idea of exploiting quantum effects for computing, but Shor's algorithm wasn't invented until 1994. David Deutsch is also often credited as the inventor. Second, how big a deal is it? Working non-trivial quantum computers don't exist yet (as far as I know) but quantum supremacy is here.
Another edge case, the Many Worlds Interpretation was only invented in the 1950s and seems like a pretty big deal.
Not As Confused About Anymore #
What’s the moral status of non-human animals? #
Last Updated: 2018-10-12
Status: Comfortable with my own views on the matter but not proselytizing.
In terms of traditional perspectives, my view on this (not all moral issues) is closest to a weak form of negative utilitarianism where we should seek to reduce suffering of non-human animals but not prioritize doing so at the same level as reducing human suffering. In my ideal world, we’d either only eat animals raised on humane farms where the animals live good lives or not eat animals at all. I value some version of Earth’s current ecology enough that I wouldn’t eliminate all predation in this ideal world even if I could but would genetically modify all animals to not experience pain above a certain threshold. The pain animals experience after a fatal wound and while being eaten is an evolutionary artifact that adds no value as far as I can tell.
Side note: I don't subscribe to David Pearce's view that we should eliminate all physical suffering. I value the increased pleasure that I get from doing something hard, maybe experiencing some pain (think running or working out at my limit), and then relaxing, and I don't think different levels of pure pleasure would provide the same satisfaction. On the other hand, I do support placing an upper bound on the pain animals (including humans) experience. I'm not sure what that level would be, but headaches, papercuts, and burns all exceed the maximum level of pain I'd want to experience given the choice, and I don't buy any counter-argument along the lines of "maybe the animals value the pain".
I'm still uncertain about my view of the relative weight of animal suffering against human preferences. For example, I'm not sure which option I'd choose if I were given the choice of feeding 1 billion more humans with factory-farmed chickens and cows or not having the 1 billion more humans at all.