There is a microbe, discovered in 1958 on Sapelo Island in Georgia, that divides every 10 minutes. This is the fastest division time ever observed in any bacterial species, and it belongs to Vibrio natriegens. (It divides so quickly, in part, because its genome is split across two chromosomes, each of which gets copied simultaneously.)
Some idealistic researchers in George Church’s lab at Harvard spent several years making genetic tools for Vibrio. These scientists thought that if they made it simple to work with this microbe — engineer it, grow it, that sort of thing — then everyone would surely use it instead of E. coli. After all, Vibrio divides twice as fast as E. coli (10 instead of 20 minutes), meaning that you can get visible colonies on a plate in four hours instead of eight. This, in turn, means that common experiments become much quicker. Church’s group released these genetic tools in 2019, waited for others to use them, and quickly became disappointed (I’d imagine).
Few researchers read the article and switched over to Vibrio. And if you go to most academic laboratories today, researchers will likely admit they know about Vibrio but then gleefully point you to their massive stockpile of E. coli strains instead. If you ask why they don’t use Vibrio, which would speed up their experiments, they will usually answer with some variation of, “Well, there’s little difference between 10 and 20 minutes. And besides, I just grow my E. coli cells overnight and continue my work the next day. Faster division times don’t really make a difference for me.”
Humans can reasonably make this argument! But it will, I think, become an increasingly moot point in a world where AI tools propose hypotheses. As intelligence becomes cheaper, the bottleneck for discoveries will increasingly shift toward wet-lab experiments. And when wet-lab experiments become a bottleneck, more money will flow toward automating experiments. Human scientists can certainly design experiments around their own schedules, but robots don’t sleep or eat lunch. When experiments run fully automated, then, a time savings of ten minutes per cell division becomes a big deal indeed.
DNA cloning is a method scientists use in nearly every biotechnology experiment. (And, unlike most other methods, researchers can actually automate it fairly easily.) Cloning GFP into a plasmid, for example, has a handful of basic steps. First, scientists use PCR to amplify the GFP gene (~3 hours). Second, they digest and ligate GFP into the plasmid (~2.5 hours). Third, they transform this DNA into bacterial cells (~1 hour). Fourth, they grow the transformed cells on agar plates and wait to see whether colonies grow. (It takes at least 24 cell divisions to see a colony, and each division takes 20 minutes in E. coli, so this step alone takes 8 hours.) Finally, researchers pick these colonies and send them for sequencing, which can take a day or more.
If all these steps happen sequentially, without any breaks, the workflow from PCR through colony picking takes a minimum of about 15 hours. Note that more than half of this time goes to waiting for cells to grow! Cell division times literally limit the rate of DNA cloning, an experiment that scientists collectively run millions of times each year.
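The sum above can be written out explicitly. Here is a minimal sketch in Python, using only the step durations stated in the text (the variable names are my own):

```python
# Step durations for one E. coli cloning cycle, using the times stated in the text.
DIVISION_MIN = 20          # E. coli doubling time, in minutes
DIVISIONS_TO_COLONY = 24   # divisions needed before a colony is visible

steps_hours = {
    "PCR amplification": 3.0,
    "digestion + ligation": 2.5,
    "transformation": 1.0,
    "colony growth": DIVISIONS_TO_COLONY * DIVISION_MIN / 60,
}

total = sum(steps_hours.values())
print(f"colony growth: {steps_hours['colony growth']:.1f} h")   # colony growth: 8.0 h
print(f"total before sequencing: {total:.1f} h")                # total before sequencing: 14.5 h
```

The colony-growth step alone accounts for 8 of the 14.5 hours, which is why swapping the organism matters more than optimizing any other step.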
If scientists swapped E. coli for Vibrio, this DNA cloning workflow would shrink by four hours, to about 11 hours in total. (Visible colonies of Vibrio appear on an agar plate in about four hours.) That is a time savings of roughly 27 percent, which may seem small. But again, imagine repeating this for thousands of plasmids across years of work. If a hypothetical scientist had one robot cloning one gene at a time, around the clock and autonomously, then switching to Vibrio would buy roughly three months of “free” productivity in the first year.
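To see where the "three months" figure comes from, here is a rough sketch of the comparison, assuming the article's numbers and its one-robot, round-the-clock thought experiment (the function name is illustrative):

```python
# Compare full cloning cycles when only the colony-growth step changes.
# Doubling times (20 vs. 10 minutes) and 24 divisions-to-colony come from the text.
HOURS_FIXED = 3.0 + 2.5 + 1.0   # PCR + digestion/ligation + transformation
DIVISIONS = 24

def cycle_hours(doubling_min: float) -> float:
    return HOURS_FIXED + DIVISIONS * doubling_min / 60

ecoli = cycle_hours(20)   # 14.5 h per cycle
vibrio = cycle_hours(10)  # 10.5 h per cycle
savings = (ecoli - vibrio) / ecoli
print(f"savings per cycle: {savings:.1%}")   # savings per cycle: 27.6%

# One robot running back-to-back cycles, 24/7, for a year:
cycles_per_year = 365 * 24 / ecoli
hours_saved = cycles_per_year * (ecoli - vibrio)
print(f"months saved per year: {hours_saved / (24 * 30):.1f}")  # months saved per year: 3.4
```

Roughly 600 cycles per year, each four hours shorter, works out to a bit over three 30-day months of recovered robot time.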
Even if you don’t think that AI will generate meaningful hypotheses, progress in biotechnology remains broadly limited by wet-lab capabilities — specifically, the speed and cost of experiments. Many PhD students aim to change the world by inventing a useful new method or medicine, but they ought to consider how even a marginal improvement to a ubiquitous method can have far more consequential effects, mainly by speeding up discoveries at scale.
(Thanks to Henry Lee for inspiring this article.)