How Autonomous Labs Will Transform Scientific Research: Ginkgo Bioworks’ Jason Kelly
Jason Kelly founded Ginkgo Bioworks in 2008 with a simple but radical idea: DNA is code, and cells are programmable. Sixteen years later, AI is finally making that vision real in ways that could reshape science itself. Jason describes a landmark collaboration with OpenAI in which a reasoning model with access to a robotic lab beat the state of the art in biochemistry by 40% – not by being smarter than scientists, but by running experiments 24 hours a day and sharing data across a hundred parallel hypotheses simultaneously.
Summary
Autonomous labs shift cost structures from overhead to usage-based pricing: By automating lab work, Ginkgo is flipping research economics from 95% overhead (people, space, equipment) to 90% reagents—the actual cost of doing experiments. This enables 10x more data per dollar and creates true usage-based pricing for science.
AI reasoning models can already outperform humans at experimental design: In their OpenAI partnership, reasoning models beat state-of-the-art by 40% in cell-free protein synthesis optimization after six rounds. The breakthrough wasn’t superior intelligence—it was the ability to run experiments 24/7 and iterate continuously without human bottlenecks.
Solving the “high-mix, low-volume” automation problem unlocks autonomy: The hard part isn’t automation (which already exists for repetitive tasks) but achieving flexibility. Like Waymo did for transportation, Ginkgo is solving for variability—integrating thousands of different lab instruments and handling complex liquid handling scenarios that previously required human judgment.
Information sharing between AI scientists creates unfair advantages: Instead of labs sharing findings every 1-2 years through papers, autonomous labs could run 100 AI scientists pursuing different hypotheses simultaneously, sharing raw experimental data daily. Failed experiments from one hypothesis might solve another—creating a collaborative intelligence impossible in traditional research.
Dogfooding on real research accelerates development: Ginkgo’s advantage is having ~50 scientists actively using their 50+ robots for actual research projects, not demos. This creates the equivalent of Waymo’s early engineers sitting in cars grabbing the wheel—they discover edge cases and usability issues (like sealed plates sent to pipetting robots) that drive product improvements like natural language interfaces via Claude/Codex.
Transcript
Jason Kelly: All of the previous revolutions in tech—internet, social media, like, whatever, have been totally meaningless to biotechnology and biopharma. Like, yeah, it’s nice. We communicate slightly better or whatever. It’s just some, like, back office IT crap, right? Like, nothing. This is actually gonna change the fundamentals of how we do science, and our big science industries like biopharma are gonna get disrupted. I really believe that. And that’s not been true for the last 30 years of tech.
Sonya Huang: We are thrilled to have Jason Kelly, founder and CEO of Ginkgo Bioworks, with us today. Thank you for joining us.
Jason Kelly: Yeah. Thanks, Sonya.
Sonya Huang: So you started Ginkgo Bioworks in 2008 with the goal of making biology programmable. And “programmable” has taken on a completely different meaning in the era of AI. So I’m very excited for the conversation today. Maybe tell us about the journey so far.
Jason Kelly: I mean, I’ll do the Ginkgo journey in short, right? So yeah, we started in 2008, but we didn’t actually raise any capital until 2014. So we bootstrapped for four or five years, which, like, if you’re not a bio person, this doesn’t make sense. But in biotech VC, they really don’t like, like, young people, for example. So we had started the company straight out of grad school. It was 2008. We weren’t trying to make a drug. So we were, like, totally uninvestable.
Pat Grady: Were you full-time focused on the company for those first six years?
Jason Kelly: Oh yeah.
Pat Grady: Okay.
Jason Kelly: Oh yeah, yeah. We were basically, like, applying—we did government grants and service business. It was a pretty brutal start. And then summer ‘14, Sam Altman, now Mr. Famous, writes this blog post because he just took over YC and he’s like, “Hey, I think the Silicon Valley model can work for, like, deep tech, you know, nuclear fission, biotech, material science.” And so I wrote him an email. I was like, “Oh man, like, thank you for—” I mean, we’re, like, five years old. I got 15 people in a lab in Boston. We don’t make any sense for a YC, but this is like an oasis in the desert, you know? Like, nobody will invest in weird companies like this. And he’s like, “No, you got to meet me.” So I flew out to San Francisco, met him, and he’s like, “You should do YC.” I was like, “I should do YC.”
Pat Grady: [laughs]
Jason Kelly: So then we did YC. So we kind of—that was sort of when, you know, if you really want to mark Ginkgo from, like, having capital, it was sort of in 2014.
Sonya Huang: And how has the product changed since 2014?
Jason Kelly: Well, so the mission hasn’t changed, but the product has gone all over the place, different, different roads. So we’ve always wanted to make biology easier to engineer. That was the idea. And so if you’re—you know, this is very much …
Pat Grady: Hang on, I remember “Make biology programmable.” Have the words always been “Make it easier to engineer?” Because I feel like that’s a slight downgrade.
Jason Kelly: When I was talking to Sequoia, it was always like this, the computer science wrapper on “Make biology easier to engineer.” But yeah, that was always our mission.
Pat Grady: Okay, got it.
Jason Kelly: But the analogy is solid, right? So, you know, DNA is code, right? It’s A, T, C, and Gs, not zeros and ones. Our only other, like, coded product besides computers is really biotechnology. And so the core idea of Ginkgo was, well, if you could design DNA code, you can program cells to do things. And cells are—you know, they’re programmable like computers, right? But unlike computers, which just move information around, cells move atoms around. So you can build whatever you want, and that’s, we think, ultimately going to be a huge market, a huge opportunity. But the challenge is our ability to program cells today is really bad. And so how could you fix that? That was the core idea behind Ginkgo.
Sonya Huang: And how has the product itself changed over time?
Jason Kelly: Yeah, so the way we went to market at first was we’re going to try to build what we called “foundries.” Which was sort of a centralized laboratory that would kind of automate the lab work associated with doing biotech. And the reason, again, if you’re a computer scientist, the way to think about this is if you want to compile and debug DNA code, that’s a physical process.
Pat Grady: Yeah.
Jason Kelly: Right? So you’re like A-T-C-G-G-G, like we have to do phosphoramidite chemistry, got to build the piece of DNA you want and then put it into a cell, grow the cell and test the cell. And that’s your kind of compile-debug cycle. Does that make sense? And so one half of what we worked on technologically was how do we make that cheaper, right? Because if you want to get better at doing this, you got to do more of it faster and for less expense. And then the second thing we worked on was how do you get better at your programming? In other words, that design you choose to test in the lab, how do you improve the odds that it does what you want? So sort of like get better at designing the biology and make it cheaper to try things were, like, basically for the last 15 years, the twin activities. And in an era of AI, you see opportunities on both sides for that today. And we’ve shifted a little bit over the years in terms of how we do it. But does that make sense? The two kind of halves?
Pat Grady: And roughly speaking, the design piece sounds like it’s a bit more software. The testing piece sounds like it’s a bit more hardware controlled by software.
Jason Kelly: Very much. Yeah, that’s exactly right. And today, the folks leading on the design side, you might see companies like Chai Bio, for example, with their protein models, or Boltz. The folks at Arc Institute just came out today with a paper on Evo 2, which is like a genomic model. There’s a whole community of people now trying to solve the problem of designing biology with AI. The big change at Ginkgo over the last two years is I’ve kind of stopped working on that problem. We had our own approach to solving it. It’s hard, right? Like, designing cells is tricky. Instead, we’re going to try to solve the other half of the problem, which is how do you make it cheaper and faster to try things in the lab? And how can you—we can talk a little bit more, we did a project with OpenAI—how could you have AI models help you do that?
Pat Grady: Does anybody else focus on the back end, so to speak? I kind of think about design as the front end of the process and testing as the back end. And when I say “anybody else,” obviously people do this. Is anybody else with a similar approach focused on that part of the market?
Jason Kelly: There’s some new companies, right? So there’s companies like Meddra out here is one that’s doing it with, like, robotic arms trying to accelerate it. You have the life science tools industry, but I would say it does not have a Silicon Valley attitude about things in the sense that they’re not really trying to change the fundamentals of how you do it. They’re sort of just providing the next tool to people doing it the way they’ve always done it. And so we’ve always been this kind of unique force trying to say, hey, is there a new platform, right? Is there something like the jump to planar semiconductor manufacturing in electronics at the beginning of Intel? You know, like, is there some way we should just do all this stuff differently that could make it way better in the future? And that’s always been the Ginkgo—like, what I think is unique about what we’ve been doing the last 10 years.
Pat Grady: What catalyzed the focus on this part of the business?
Jason Kelly: On this half of the house?
Pat Grady: Yeah.
Jason Kelly: I think this is an engineering problem and I think this is a science problem.
Pat Grady: Okay.
Jason Kelly: And I went after both initially, and I kind of took my licks for that. And I think the good thing about an engineering problem is you can ultimately render it to dust, right? A science problem, I think, is great if you hit it, but it’s much more unpredictable. And so in this era of Ginkgo, I’ve got the resources marshalled at this point to go after this and kind of see it through. And so that’s why you see me pointing in that direction.
Pat Grady: And with the efforts that we see with the Arc Institute and others of that ilk, what inning are we in, so to speak? Like, is the ecosystem around the science problem going to start producing meaningful results soon other than papers?
Jason Kelly: It’s a good question. I think the hard part about designing biology—so biology is amazing, by the way, right? Like, just as a substrate again, right? Like, if you think about what’s happening inside of a cell, it is producing, you know, Intel or now Nvidia, TSMC-level caliber atomic placement basically for free, right? So it’s able to do molecular assembly, it self-repairs, it self-replicates. Like, as a physical substrate, it’s insane. It is the product of four billion years of evolution, so the complexity embedded in a cell is actually a lot bigger, I think, than people give it credit for. And so there’s a mark there. Now that said, you know, more than half of your drugs today are produced by biotechnology. We cure cancer, we do this. We have huge value coming out even with the limited tools we have today.
So you don’t have to solve the whole problem over here to create a lot of value. You just have to be better than how we do it today. Does that make sense? And so I think they have really good opportunities already in the near term with all the protein models. You’re seeing that, right? Like, I just did a big deal with Lilly. Like, there’s real opportunities there, I think, right in the near term. Does that make sense?
Pat Grady: Mm-hmm.
Sonya Huang: So speaking of OpenAI and Mr. Altman …
Jason Kelly: Yeah.
Sonya Huang: … you recently announced a partnership research result with them. Can you say more about that?
Jason Kelly: Yeah. Okay. So this is pretty exciting. I think for the folks that are following AI, it’s pretty neat. So basically what we did was we took our—we call it “autonomous lab,” right? And so I can talk more about this, but the short answer is if you really want to drive efficiency on the lab side, you need to get the human beings off of the lab bench, right? So the way we do science today—and this is true in biotechnology, and kind of true for science broadly—95 percent of science is the stuff that’s not theoretical. So not, you know—everybody’s working on math, like, let’s get a Terence Tao in a box or whatever, right? And the reason is you could just simulate all that stuff on a computer. Let’s play chess. Yeah, no kidding, right? But if you look at the majority of what we spend money on in the United States and just generally across the world in science, it’s largely on experimental work.
Pat Grady: Yep.
Jason Kelly: And the reason is if you want to learn something new about the world, which is what science is fundamentally, you have to go out usually and, like, poke it. You have an opinion, a hypothesis, but you got to go test it to actually figure it out. So it’s experimental science that moves the needle in my view. And so the question was: Could a reasoning model do the work of experimental science if you gave it a robotic lab? That was the question. And the answer was, yeah, it’s actually pretty damn good. So we did—basically the way the project worked was we had—there’s a biochemistry problem called cell-free synthesis.
Pat Grady: Okay.
Jason Kelly: So you take a piece of DNA—ATCGGG, right? If you were to put it in your cell right now, remember, like, central dogma in high school, right? Like, it’s like DNA makes RNA makes a protein, right? And so you put that DNA into a cell and it’ll make a protein. Well, you can do a thing called “cell-free” where you pop a cell open, take the guts, put it in a test tube, and then add the DNA to that. And because the guts are still there, it makes the protein.
Pat Grady: Mm-hmm.
Jason Kelly: So this is kind of like—it’s like the world’s smallest 3D printer or something, right? Okay, and so scientists use this. They try to optimize. It’s very expensive usually. And so there was a paper that came out of Stanford from Mike Jewett’s lab in August that set the benchmark for, like, how cheap people had been able to do cell-free protein synthesis. And so we said, all right, let’s try to optimize that with the model. And so we gave the model—we did each round, we would do a hundred 384-well plates, okay? So each well in a plate is like a little kind of cup of liquid, and you can do an experiment in there. And so we gave it 30,000 experiments to run, and after it would run those experiments, gets the data back and designs another set. So after four rounds of that, we beat state-of-the-art. And after six rounds, we beat it by 40 percent.
Sonya Huang: Wow.
Jason Kelly: And so that was a—I think it’s the most interesting sort of model doing experimental work result that’s been shown to date by a lot.
Pat Grady: And the 40 percent was a function of what, just faster cycle time, or more intelligent experiment design?
Jason Kelly: Yeah. Like, how did it beat the state of the art? So this is my point. This is, I think, my larger point about science, because I think we’re going to do science differently in the future, in my view, based on what I’m starting to see here. So what does a scientist do when they’re doing experimental work? They’re coming up with an idea, and then they’re trying to design an experiment to ask a question about that idea. And then they’re going to run the experiment, take the data back, interpret it, and then poke again based on what they learned. And they’re going to go through that process a few times to, like, resolve something. Oh, this is how you know, whatever, this cancer works. This is how this piece of material, you know, works. This is this, this is that. And so that cycling is just logic.
Pat Grady: Yeah.
Jason Kelly: Right? And so it doesn’t require you to model biology or simulate anything. It’s not that half of the house. It just requires you to be almost like a programmer. Like, you need to be logical, run through a set of things, do data analysis and draw conclusions. And so that—like, that’s all it has to do. Does that make sense?
Pat Grady: Yeah.
Jason Kelly: And so we didn’t do anything other than that.
Pat Grady: Okay.
Jason Kelly: Right? What really let it break through wasn’t that it was so smart. It was that it could run experiments. And the question was just: could it design them like a scientist could? And the answer was yeah. Hell yeah, it could. And so now I think that opens a real interesting question about, like, how we do science in the United States.
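The loop Jason is describing (propose an experiment, run it, read the data back, propose again) is simple enough to sketch in code. Everything here is illustrative: a toy quadratic yield surface stands in for the cell-free chemistry, and a simple refine-the-best-hit heuristic stands in for the reasoning model. It is not Ginkgo's or OpenAI's actual system.

```python
import random

def run_plate(conditions):
    """Stand-in for the robotic lab: score one recipe. A toy quadratic
    surface replaces real cell-free protein synthesis chemistry."""
    mg_concentration, energy_mix = conditions
    return 100 - (mg_concentration - 12) ** 2 - (energy_mix - 30) ** 2 / 4

def propose_designs(history, n=96):
    """Stand-in for the reasoning model: pure logic over past data, no
    biology simulation. Round 1 explores; later rounds refine the best hit."""
    if not history:
        return [(random.uniform(0, 25), random.uniform(0, 60)) for _ in range(n)]
    (best_mg, best_energy), _score = max(history, key=lambda h: h[1])
    return [(best_mg + random.gauss(0, 2), best_energy + random.gauss(0, 4))
            for _ in range(n)]

def autonomous_campaign(rounds=6):
    """Design -> run -> learn, with no human in the loop between rounds."""
    history = []
    for _ in range(rounds):
        designs = propose_designs(history)
        history += [(d, run_plate(d)) for d in designs]
    return max(score for _, score in history)
```

The point of the sketch is Jason's point: nothing in the loop models biology. The "model" only needs to reason over the data coming back from the lab each round.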
Pat Grady: Well, and it’s easy to imagine a version of the future in which the scientific method, the design and the hypothesis testing and all that is done by reasoning models of some sort, and the actual testing is done by autonomous labs.
Jason Kelly: Yeah.
Pat Grady: What’s wrong with that vision of the future? And if that is the right vision of the future, how far out is it?
Jason Kelly: I mean, I think this is how it’s going to happen. I really do. I mean, I’m probably substantially more aggro on this than the average scientist today, but I’ll just explain why I think that, if you had a heads-up competition—which I want to do, right?
Pat Grady: And then tell me how the average scientist would push back. And say, like, “No, no, no, it’s not going to happen that way because …”
Jason Kelly: Right. So I think what the scientists will push back on is, like, “This thing can’t just be as creative as me” or something. Right? Which, actually, I’m sympathetic to.
Pat Grady: Yeah.
Jason Kelly: I’m not saying it’s going to be more creative. I’m saying …
Pat Grady: It’s going to be way more creative.
Jason Kelly: I’m saying you don’t—yeah.
Pat Grady: [laughs]
Jason Kelly: No, not in Silicon Valley. I live in Boston. I’m saying it can run a lab 24 hours a day.
Pat Grady: Yeah.
Jason Kelly: I’ll give you another example. Like, the way science works today is you would have a lab, you’d have a lab, I’d have a lab. We’re all working on the same area. Let’s say we’re working on, like, Alzheimer’s. You have a hypothesis, you have a hypothesis, I have a hypothesis. We each kind of pursue it. We’re collecting data over the course of a year or two, and then based on what I see at the end, I write a paper. And when it comes out in the published literature, you get to read it and you get to read it. And you’re all doing the same thing. So we’re kind of like exchanging information every year or two. And I’m not getting to see every experiment you did, by the way. I’m getting, like, the distilled output of what you think you saw over two years. Does that make sense?
Pat Grady: Yeah.
Jason Kelly: All right. So let’s contrast that to, like, what I think should start to happen now based on what I saw with this OpenAI project. What I think should happen now is you should have a robotic lab that has every piece of equipment that we all have in our labs. So it can run any experiment you want. We can talk in a minute—that’s actually a pretty technically difficult problem, but let’s just wave that away. Okay. Solved. All right, great. So then I’m going to put a hundred AI scientists on top of this thing. Each one is going to pursue a different hypothesis for Alzheimer’s. All right, great. And they’re going to run their experiments just like you would in your lab that day. But at the end of the day, they’re going to pass the data on those experiments, like, what experiment they ran and the raw data that came off it to the other hundred AI. Daily. Every fucking day, okay? And so they’re going to learn from each other. Like, even though your hypothesis is different, we’re working in the same area. So your failed result might—like, for example, say your experiment went the wrong way from your hypothesis, that data might be relevant to my hypothesis and I would never see that normally. Does that make sense?
Pat Grady: Yep.
Jason Kelly: And so that’s all just chugging along. And every week it dumps a lab notebook entry or like a mini paper, like a conclusion about what the hundred of them have figured out that week that we can all read and see and use that. We can direct, we can say this, “Hey, cut that line of research” or whatever. And so, like, that’s number one. I think the information sharing and, like, the ability to handle really broad context across a lot of projects for the AIs is just better than—it’s just socially different even than how we do it today. Does that make sense? That’s unfair advantage number one.
Pat Grady: Yep.
Jason Kelly: Okay. Unfair advantage number two. If you look at how we spend money in science, remember all this stuff, like the NIH was like, what’s up with the indirect rates at the academic universities, and all this hullabaloo, right? Well, what’s an indirect rate? Well, it’s basically paying for manual laboratories. That’s what it pays for. Okay? You’ve got these labs and they’re there 24/7, but they’re used five days a week.
Pat Grady: Yep.
Jason Kelly: Okay? They have equipment, but every lab—all three of our labs—we have copies of the same equipment, because we all got to do the same work. We don’t share each other’s—no, no, no. I have, like, a door in my lab. Only my lab gets to use it. Your lab uses yours. So we all have low utilization rates on our equipment. That’s just how it works, okay? And so you have a very inefficient—like, if you look at the spending on research—and this is true—the $60 to $80 billion a year that biopharma spends or the $40 billion that NIH spends, less than five percent is on the reagents. Everything else is overhead—the people, the regulatory, the lab space.
If we were running it efficiently, you would budget a research program at the NIH not on indirect and heads and everything else, but just on the reagents, because that’s, like, the usage-based pricing of science. Because to actually do experimental work, I have to consume some chemicals. I have to consume a piece of plastic plateware. Like, whatever the hell it is. Like, I’m actually doing atoms in the physical world. I gotta burn some stuff up. That should be the dominant cost. It’s the opposite right now. It’s less than five percent.
So the other advantage those AIs will have is, if they’re able to run robotic labs, now they’re operating in a world where 90 percent of the cost of a research project goes to the reagents.
Pat Grady: Yes.
Jason Kelly: Oh my God. Right? So that’s like a 10x increase in the amount of data per dollar that you’re getting compared to how we do it today. So I think you combine those two things, but without the AIs even being smarter, right? They can even be dumber than the scientists. I think they win. I really think they win.
And so I think we got to reevaluate, like, how we fund, what we fund with the NIH. I think every biopharma head of R&D needs to care about this. And, like, I think there’s a blind spot. By the way, like, we did YC, I know all the tech people and the tech, you know, CEOs. I’ve always been adjacent to this stuff, right? All of the previous revolutions in tech—internet, right? Like, social media, like, whatever, have been totally meaningless to biotechnology and biopharma. Like, yeah, it’s nice. We communicate slightly better or whatever. It’s just some, like, back office IT crap, right? Like, not this. This is actually going to change the fundamentals of how we do science, and our big science industries like biopharma are going to get disrupted. I really believe that. And that’s not been true for the last 30 years of tech.
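The arithmetic behind the "10x more data per dollar" claim can be checked directly from the two shares quoted above. At those shares the ratio is actually 18x, so "10x" is the conservative round number. The $1M program size below is made up; only the ratio matters.

```python
budget = 1_000_000                  # hypothetical research program size

manual_reagent_share = 0.05         # "less than five percent is on the reagents"
autonomous_reagent_share = 0.90     # "90 percent of the cost ... goes to the reagents"

# Dollars that actually buy experiments under each model.
manual_experiment_dollars = budget * manual_reagent_share
autonomous_experiment_dollars = budget * autonomous_reagent_share

multiplier = autonomous_experiment_dollars / manual_experiment_dollars
print(multiplier)  # 18.0
```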
Pat Grady: Our partner Konstantine has a good framework for that. He talks about how there are revolutions in computation and revolutions in communication.
Jason Kelly: Yes.
Pat Grady: Communication is about the distribution of information. Computation is about the processing of information.
Jason Kelly: Yes.
Pat Grady: And what you’re talking about here is just a different way to process the information. And so the last several revolutions have been about the distribution side of the equation, which doesn’t get to the core of what it is you’re doing.
Jason Kelly: Completely agree. And that’s, I think, fundamentally true. And so again, I think the leaders of biopharma companies, and also the leaders of research universities and these people that are in the business of doing science to produce either products or for the government cannot ignore this. They cannot ignore AI. It is just different. And I’m telling you, I’m a person who has been adjacent to this crap for 15, 20 years now. And this is the first time I’m like, “The tech guys finally did something cool.”
Sonya Huang: Yeah. And just so I understand, so you think AI is the catalyzing force behind—you know, cloud labs should be a thing. Nobody really ever moved to them but, like, AI will be the reason they move.
Jason Kelly: Yeah. Well, we can talk about cloud labs. Yeah. So let’s talk about autonomous labs, and then I’ll explain the cloud. All right. So why has it been hard, right? Like, the average tech person looks at how science is done, and you have literally PhD-trained people. These are brilliant people, paid a decent amount of money, standing at the bench. I did a PhD at MIT in bioengineering. It’s five years of moving liquids around the lab bench by hand. I swear to God it is. Like, my friends from undergrad just—they would never, right? Like, you know, right? Like, it’s ridiculous you would do, like, manual labor, right? Like, absurdity, right? But that’s what you have to do. If you want to play at the edge of science, you got to do physical work. And so that’s what you learn to do. Okay.
And so it’s like, well, everyone in Silicon Valley is like, “Well, just automate it, bro.” Like, you know, like, why don’t we just do that? Okay, so why is it hard? And the reason it’s hard is it’s—like the technical, automation-y term is like, high mix, low volume work.
Pat Grady: Yeah.
Jason Kelly: Okay? And this is true at places like Hadrian today, for example, that are working on this on the manufacturing side, industrial. High mix, low volume is hard to automate historically, all right? And my transportation analogy that I’ve been giving to people in bio is like, okay, so imagine on the y-axis you have, like, level of automation. And then on the x-axis, you have, like, flexibility, like that variability, the mix in what you’re asking it to do. So in transportation, low mix, high automation, that’s like a subway. Sit down, takes you away, right? Like, you know, maybe you don’t have to do anything, but you got to want to go to one of the stops on that subway line.
Pat Grady: Yep.
Jason Kelly: Low automation, high variability. That’s a car. Right? Hands on the wheel, foot on the pedal, take you right to your house or the grocery store. And that’s what the transportation system looked like for the last hundred years until, thank you very much, you know, Google, we got Waymo up here in the corner, where you get the automation of a subway but the flexibility of a car, okay? And it’s striking that we don’t even call it “automation” anymore. We make up a new word. We call it “autonomous.” It’s an autonomous car. Because the way I look at it is, since the Industrial Revolution, we’ve basically been automating everything that is low mix, low variability, from the loom on, okay? Right? And we just hit a wall on the flexibility of what you can do, and AI pushes us past it. Every part of our physical infrastructure post-Industrial Revolution has to get looked at again with that lens as we move up the variability axis. Does that make sense? And so that’s the tricky bit. In lab land, we actually have automation. But it’s like subways: it just repeats the same experiment. At a diagnostic company like Quest, they have automation. If you’re doing high-throughput screening in pharma, there’s automation, but it can’t handle the variability. So just like 99 percent of miles traveled are in cars, 99 percent of the lab work is still at the frickin’ bench. And that’s what you gotta fix.
Pat Grady: And so, like, the Waymo analogy is an interesting one, because it’s now such a magical thing that so many people have gotten to experience. And in that case, you know, you kind of had your sensor suite, you know, you have your radar and your LiDAR and your cameras. You know, then you had your software suite with the perception and the planning and the actuation, which then had to tie back into, you know, whatever vehicle manufacturer you’re working with. And then there are a gazillion corner cases that you have to simulate because you can’t get enough of them in the real world.
Jason Kelly: Mm-hmm.
Pat Grady: Like, what would that set of words be for your world? Like, what are all of these specific things that are hard to get right?
Jason Kelly: A hundred percent. And I think if you’re trying to generalize the problem of bringing automation to autonomy, it’s gonna be different in every domain. So for cars, the hard part is the physical world is changing. Every mile you drive, “Where—oh my God, I’m in a new place,” right? Like, this one has a cone, it’s raining, like, whatever, right? Like, that’s not the problem in the lab at all. Lab, I can make it, it’s my lab every day. It’s the same fucking room that the robots are in. Nothing is changing. The physical environment’s not changing at all. So, like, it is not at all the same stack that brought you autonomous cars that will bring you autonomous labs. That’s not my problem.
So in my world, it’s the variability in what the scientist is asking for that makes it hard, right? So they’re like, “I want to use this piece of equipment, I want to use that piece of equipment. This is my combination of things.” Right? And so one of your big problems is getting a thousand long tail pieces of third party—like, you can’t believe the software on these things—benchtop lab equipment integrated into one big system so that they all can be controlled by your software. That’s, like, problem number one. It’s integration of benchtop equipment.
Problem number two, what are we doing with our hands when we do science? Largely in bio anyway, it’s liquid handling. So you pick up a pipette, which is like—if you didn’t do this in high school, it’s like the world’s fanciest straw, and you suck up a little bit of liquid and you squirt it out in the right spot. But the thing is if the liquid is viscous, like, it’s like syrup, then you naturally with your thumb, like, adjust the pressure of the straw, because you can see with your eye if it’s working or not. So actually liquid handling turns out to be a trickier problem than you think. And you are doing some work as the human to, like, manage that. So you have two big buckets. One, solve liquid handling. Two, send samples to a thousand different pieces of equipment. Does that make sense? If you nail those two, you’re done.
Pat Grady: That sounds tractable.
Jason Kelly: It is fucking tractable. Yeah, I agree. Yeah, it’s totally tractable. And so it’s just a lot of work.
Pat Grady: Where are you guys in working through that?
Jason Kelly: I think we basically have it working. Yeah, that’s the honest truth. I mean, we basically …
Pat Grady: What’s the last major hurdle?
Jason Kelly: One reason it’s hard for people to do this technically is they want to build the hardware, but they don’t do research. So they kind of have to go to a customer and be like, “Hey, you want to use my robot?” Customer’s like, “No,” right? Like, they’re at the bench, they’re like, “No, you know, I don’t want to try.” So there’s like an adoption issue that I think has made it really hard. And so we have this advantage that because we have a research—we still have, like, our original business, which was research partnerships, which we still do. It means I have a bunch of scientists employed at Ginkgo. These scientists are basically like, remember the Google engineers that, like, sit with their hands like this next to the wheel five, seven years ago in Palo Alto, like, grabbing it if, like, the Waymo drove into a mailbox or something, right? Like, that’s my scientist today.
Pat Grady: Yeah.
Jason Kelly: So they are dogfooding on our—we have, like, 50 robots, we’re going to 100 in our big lab in Boston. And so they’re the ones trying it out and breaking it. Things that have broken: running a bunch of work in parallel across that system is a scheduling challenge. So you have to be able to manage all that. So just handling the scheduling with tight timing on experiments is algorithmically tricky, and we’ve had to figure out a bunch of stuff. Getting the equipment to work all—like, when it works all day long, making that reliable compared to it being barely used at the lab bench, that’s tricky, right? So there’s just these things that we have to keep knocking down, but they’re engineering at this point.
Sonya Huang: So have you solved pipetting and liquid handling?
Jason Kelly: Yeah. The good thing is there is a whole industry that’s worked on that problem, like, liquid handling robotics. It’s just a matter of having all the different liquid handlers. And if you have them all, and you know your liquid class you’re dealing with, you can manage it. Oh, one other one. Big one. Scientists don’t code.
Pat Grady: Yep.
Jason Kelly: Okay. So, “Oh cool. Use my robot.” This is what everybody’s done. They’ve made, like, visual programming languages. Like, if you’re a scientist, there’s a thing called LabVIEW. It’s complete trash, but, like, it’s, you know, make a flowchart, right? Because you can’t write Python or whatever. Horrible. Okay? Like, even that they hate, okay? Right? And so no one will program shit.
And so we ran into this issue where we now have all these scientists using the automation directly. So, like, I don’t know, three weeks ago or something, we had two instances where we sent a plate. So, again, this is kind of a bunch of little wells with liquids in them. And we seal the plate when we put it in storage so it doesn’t evaporate. So they sent a sealed plate to the pipetting robot. And the pipette comes down and it gets stuck in the seal. And people on our Slack are like, “Hey, what the hell? You know, like, deseal the plate before you send it to the liquid handler, dumdum,” right? And this is horrible for a scientist who is basically an expert at liquid handling at the bench and now they’re, like, making, you know, basic mistakes here and they feel horrible. That’s a bad UI. Okay. And I was just like, this is nonsense. From now on, the way we’re going to interact with writing the code is through Claude Code or Codex.
Pat Grady: Yeah.
Jason Kelly: Like, you will now submit a written protocol of what you want and the model will figure it out. And if the model sends a plate sealed, we will update the skills file and it will never do it again, and we will get through this, okay? And so what’s happening with Claude Code and Codex is a big win for usability of robotics for scientists. Does that make sense?
Pat Grady: That does make sense.
Jason Kelly: So, like, thank you. So these are the things that had to get knocked down, too. And so all this is in flight. But right now in Boston is a very unique experiment happening where we have 50 scientists submitting jobs into one big robotic setup. That exists nowhere else on the planet right now. And so it’s pretty neat to watch it.
Sonya Huang: Do you see a future for humanoids?
Jason Kelly: No.
Sonya Huang: Really?
Jason Kelly: In the lab?
Sonya Huang: Because the best argument I’ve heard for humanoids is, like, the world, the physical world was designed for them. And, like, I would think that the existing labs are made for humans walking around, pipetting things, walking between different machines. And so you could try to create a robotic arm that’s able to orchestrate all this, or you could just have a humanoid do what a human lab scientist does.
Jason Kelly: Yeah. So again, the primary thing that the human is doing is moving samples around the environment.
Sonya Huang: Yeah.
Jason Kelly: There are much better ways to do that than, like, walk them bipedally among things. You just put them on a track. Like, our system has a nice little track, and the plates move with extremely high reliability. They get delivered with micron specificity to where they are. The arm picks them up. It’s like, you know, that problem just disappears, okay? And then the other reason is in the long run, the humans are the limitation, right? It’s not like, oh, are humanoid robots gonna disrupt TSMC? Are they going to go in there and etch the fucking chips? Like, obviously not, you know, right? Like, no, it’s a microscopic discipline. Like, biology is a microscopic discipline. Like, these things are—no, they make no sense.
Pat Grady: How does it change the unit of scale for a lab? Like, I’m imagining that labs in the future are going to be these enormous things the way the data centers have become these enormous things.
Jason Kelly: So it’s actually going to make them smaller.
Pat Grady: Really?
Jason Kelly: Yeah.
Pat Grady: Okay.
Jason Kelly: Yeah. Because again, you’ve probably not seen this, but if you were to walk through Merck’s campus, you’ll see a million square feet of laboratory benches across a bunch of different buildings everywhere or whatever, right? And they’re set up for basically humans to be able to walk in and find a piece of equipment—again, underutilized, but basically available whenever they need it to run whatever experiment they’ve come up with by thinking over the last two weeks and not even working in the lab.
And so that kind of like cycling is kind of how it operates. And you need it local. Like, if you have a team now in this new place because you bought this company, they need a lab. So you replicate another lab, right? The labs have to go wherever your scientists are. Well, let’s now instead imagine the scientists are ordering all their experimental work through computers, and it’s going to some centralized autonomous lab. You can think of that like a local cloud if you want, okay? Well, now you don’t need a lab where the scientists are. So you got rid of all the duplication that you have because of physical people. You also get wildly better utilization of the benchtop equipment. Like, we’re talking going from, like, sub-20 percent utilization at the bench to, like, 70 percent. So now you need less equipment. And then, assuming you didn’t decide to have humanoids, like, you can just jam it all in around a track system, so it’s actually a lot tighter.
Pat Grady: Yeah.
Jason Kelly: Yeah, so we have, like, a major space reduction at the moment. That’s one of the big savings. Like, we just sold 97 robots to the Department of Energy for this, like, Genesis mission—this is like the AI for science thing that Trump’s doing. And, like, that is basically going to be ultimately much more dense than the equivalent set of labs would have been that would have, like, housed that otherwise. And that’s part of the sales pitch. It’s like you could have less spending. Remember I told you earlier, like, the spending is not on the reagents, it’s on basically, like, roof space, like, laboratory space and people.
Sonya Huang: What is the unit of work? You said you sold 97 robots. Is that 97 boxes?
Jason Kelly: Yeah. So our particular device, we call it a RAC, like a reconfigurable automation cart. It’s basically like a box that has a piece of benchtop equipment in it, a six-axis robotic arm. This is like nothing special for labs. This is like tradition. This is like coming out of manufacturing tech. And then a piece of maglev track. And what you do is you Lego block the carts. So we have, like, 50 of them all together in our lab in Boston. And then a sample can move on the track, and in front of every piece of equipment is an arm, and the arm picks it up and puts it on the equipment.
Sonya Huang: Got it.
Jason Kelly: Okay? Does that make sense?
Sonya Huang: Yeah.
Jason Kelly: Yeah, and so our unit is we sell the box, we sell a subscription, basically like service fee plus software subscription per box. And then eventually what I want to sell is, like, automation-friendly reagents, because that’s kind of like the usage pricing. So that would be my—that’s, like, one half of my business now is, like, I’ll build you an autonomous lab, you know, Pacific Northwest National Lab, DOE, or Merck or whoever. Does that make sense? And then the other half is what you said. I’ll run my lab in Boston as a cloud, and you could just order from it.
Sonya Huang: Yeah, totally. Can we talk about training? Like, a lot of what we’ve been talking about to me seems like inference. Like, it’s a use case of the reasoning. It seems to me that you are generating an enormously helpful dataset here that should be used to backprop into some model?
Jason Kelly: Yeah.
Sonya Huang: How’s that going to play out?
Jason Kelly: Good question. I don’t know yet. I think there’ll be one training—so there’s two different levels of challenges. One was the thing I mentioned earlier. Like, you’re submitting a piece of physical work. And again, I think this applies to labs. This will also apply in, like, light manufacturing, right? Like, you want to do a—like, oh, you’re like a prototyping shop or whatever, any place where there’s, like, variability, okay? And so I see a lot of variable requests from scientists. I see all the edge cases of how it breaks the physical equipment.
Pat Grady: Yeah.
Jason Kelly: That, you can’t compile out. Oh, make a digital twin. No. Like, you know, right? Like, you have to, like, actually do this stuff. Like, I do not think—like, I don’t buy it, right? Like, a lot of it’s edge cases, you know, liquid classes. You can’t really pick it up on a camera. There’s just, like, all this stuff that are like the edge cases that Waymo had to see driving around are edge cases we see by doing a lot of variable work on the same system. That is one type of training. I don’t know that it’s fully like model training as much as it probably is just like a giant file or something. But, like, that’s one.
Sonya Huang: Okay.
Jason Kelly: Does that make sense?
Sonya Huang: Yep.
Jason Kelly: The bigger one is sort of the model’s ability to take your intent and turn it into an experimental plan. And that’s interesting. That’s back to, like, could these things just, like, blow science out of the water? And that, I think, you could have a really cool loop that has more to do with the results of every experiment. Like, this OpenAI project. Like, as we saw what experiments worked and didn’t, you could theoretically then teach the model to be a better scientist.
Sonya Huang: Exactly.
Jason Kelly: Oh my God. You know, right? And everyone forgets this, but, you know, human progress, like, is basically science.
Sonya Huang: Yeah.
Jason Kelly: Sorry. You know, right? Like, yeah, this political guy, he went to war with him or whatever. Who gives a fucking shit, right? It’s like Newton, Einstein. Like, you know, like, these are the people that move the species forward at the end of the day, right? Like, everything else is kind of noise. Like, we’re just running around in circles, you know? Like, the Romans did it, you know? Right? Like, we’re just doing the same stuff, right? The Greeks. But except for science, right? Does that make sense? And so I do think this is like, if you crack that nut, like, if models plus—again, I think you’ve got to have the experimental work, if that really 10x or 100x the speed we do scientific discovery, like, that 10x or 100x is the progress of the species in my head.
Sonya Huang: Yeah. It just seems to me that the results of what you’re generating in the lab need to feed back into updating the model weights somehow. Otherwise we’re not going to get to that point.
Jason Kelly: I agree, but I think that’s totally doable. I think that loop—and I think this is something that the frontier models are starting to care about. Like, getting better at that, I do think, is, in my view, one of the most important parts of human intelligence: our ability to push the frontier of knowledge. I mean, spreadsheets are cool, too. You know, right? Like, doing back office for a dentist, also fine. You know, right? Like, you can make money with that but, like, if we really want them to be smart, this is where it matters the most.
Sonya Huang: I agree.
Pat Grady: You mentioned Project Genesis earlier.
Jason Kelly: Yeah.
Pat Grady: What is it? Why does it matter?
Jason Kelly: Yeah. So this came out of the White House and OSTP—the Office of Science and Technology Policy, like Mike Kratsios there. And so it’s run by the Department of Energy. And that’s in part because, like, the Department of Energy is—if you’re a science nerd, that’s where we do, like, our big science projects.
Pat Grady: Okay.
Jason Kelly: So, like, starting with the Manhattan Project, but also, like, the Human Genome Project actually was like a Department of Energy project.
Pat Grady: Okay.
Jason Kelly: So when it’s project-based, it kind of tends to live in the Department of Energy, and, like, more open-ended science lives at the National Science Foundation. Okay, so DOE is running it. So it’s a project. And so they’re like, “All right, great. We’re going to have a list.” And they actually put out a list of, like, these are areas where we would like to see breakthroughs that are relevant for the American public.
Pat Grady: Yep.
Jason Kelly: Okay? And a couple of them are bio-related, but there’s other stuff, too—material science, new energy, all these different things, right? And then what we want to do is bring AI models into the national labs, which is where we do a lot of our big science in the country, and accelerate them. And their target is to double the acceleration of science in the next few years, all right? So that’s the idea. And so one way you’re going to do it is they want to take the existing data that is at the national labs and, like, basically feed it into models and see if you could find new stuff from data we’ve already collected. But then the other way they want to do it is to have autonomous labs that can generate new data in the direction of models. And so when we did that deal with the Department of Energy, the Department of Energy Secretary Wright and I, like, ribbon-cut the first 18 robots up in Washington and, like, he signed it. It was really cool. And so, like, I think that’s a—to me, again, as a science nerd, like, that’s a good push for us. And I think you want to see some good results soon, because ultimately Congress has to get excited about this. You would need a bigger bolus of money in the future to really make it a big deal. But I like what they’re doing.
[CROSSTALK]
Pat Grady: In the dream scenario, what does that do for America?
Jason Kelly: So I chaired this National Security Commission on Emerging Biotech in DC for, like, two years, and Senator Young in Indiana chairs it now. And there’s a similar one on AI that Eric Schmidt chaired seven years ago. So it was super fascinating to see and learn more about DC, number one, and Congress, but also, like, the point is how does the US stay competitive in technology areas that are strategic? And so there’s one for cyber, like, 15 years ago or something, and there was AI, and then this one’s on bioengineering, biotech. We have had an unfair advantage since the Soviet Union fell, basically, where we were automatically at the lead in science. Like, there was no one else that had the money to spend on science, basically. And because science is like, it’s not just spending on the science, you also need scientists. So you have to have research universities to train these people. Like, it is esoteric. So that was true. That was true. And now with China’s rise, it’s not true anymore, right? Like, they’re actually—like, if you look at the number of, like, scientific papers published, it’s more from China now. If you look at, in my world, like biotech drugs—not manufacturing drugs, like making new drugs—the way it works in our industry is, like, a startup discovers a drug and then they try to sell it to Merck or Pfizer, and they’re like a kind of go-to-market channel. And Merck or Pfizer buys it for $2 billion or $5 billion or $10 billion depending on where it is in, like, clinical trials.
Pat Grady: Yep.
Jason Kelly: Great. Three years ago, less than five percent from China. Last quarter? Forty percent-plus.
Pat Grady: Wow.
Jason Kelly: Okay? Yeah. And that’s innovation. That’s, like, discovery, right? And why is it? Why is that growing so fast in China? They have just as many scientists as us. They’re just as smart as our scientists. They get paid less. And remember, it’s like hands in the lab. You got to do—science is driven by experimental work. So if you now have more experimentalists in China, and you get more research per dollar, I don’t see why they don’t win in research.
Pat Grady: Yeah.
Jason Kelly: And so, from my standpoint, we need to make this change both in how we do experimental work and bring in the AIs to just increase our amount of intellectual horsepower if we’re going to keep up in science. And you don’t want to be surprised, right? If you know DARPA, like, thanks for the internet, DARPA, right? The founding point of DARPA was after Sputnik, when the Russians were the first ones to put a satellite up, and it was created to say, like, we will not be technologically surprised again. It’s a behind-the-scenes thing, but it is very important, right? It’s really scary if you get technologically surprised. And so I think that’s why it’s important from a national security standpoint, important for the country, important for the species: the rate at which we get scientific discovery. Does that make sense?
Pat Grady: Yep.
Jason Kelly: Other thing I’m curious—I want to ask you guys about. I mean, God bless Sam and this freakin’ OpenAI thing but, like, you know, when they started that, it was like a pie-in-the-sky research project.
Pat Grady: Yeah, yeah.
Jason Kelly: Right? This is almost like Bell Labs-y, kind of, you know, like, whatever, like, go for it. Everyone’s like, “It’s bullshit. Oh, what is this nonprofit?” But here we go. It’s now worth what? What was the last round done at? Half a trillion or something?
Pat Grady: $830 billion.
Jason Kelly: Okay, great. $830 billion. All right. So to me, it signals fundamental research in industry can be valuable.
Pat Grady: Yep.
Jason Kelly: And I think that’s also a thing that we’ve kind of forgot about over the last 30, 40 years, because so much of, like, where the money was was just like engineering, engineering, engineering, and it wasn’t really about trying something that was just like, “This is probably not gonna work, but we should give it a try.” Pharma’s been like that, but lots of the rest of the economy has not. Does that make sense?
Pat Grady: Yep.
Jason Kelly: And I wonder if that’s going to change. Like, I’m curious if you think, like, I don’t know, every big industry, like, you know, the chemical industry, like, should everyone just be like, well, based on these models and some acceleration in science, like, actually the most valuable thing Dow Chemical could do would be some fucking crazy breakthrough, not, you know, let’s do another chemical plant or run the numbers on putting something in Louisiana. But, like, actually, like, no, no, we’re going to, like, go for it. Do you see that at all? Like, I don’t know. But that would be, like, one of my naive hopes is the industrial side of the house wakes back up on doing research.
Pat Grady: Yeah. Sonya, I know you have a point of view on this.
Sonya Huang: I would say we’ve seen a few of these. I mean, you mentioned Chai earlier.
Jason Kelly: Yeah.
Sonya Huang: I think it’s likely to come from researchers on the research side of the house that are fundamentally rethinking, for example, the protein design process. I think it’s—my gut instinct is more likely to come from folks like that who are just taking really big swings. And the tricky thing is just the song and dance of how they get funded, especially if we’re still in biotech winter. Right? How do they even prove out to the world that, hey, we have better discovery engines, we have better candidates. Because I think a real problem in the biotech world is asymmetric information. You just can’t tell.
Jason Kelly: Yeah.
Sonya Huang: Google with Isomorphic feels like the closest, because it’s actually got deep pockets behind it to actually prove that story out. But yeah, my bet would be a vertically-integrated research team taking big swings. The challenge is, like, funding these companies all the way through.
Pat Grady: Well, I think to the point on should there be more breakthroughs, more fundamental research? I think the answer is unequivocally yes. Like, that is one of the wonderful dividends, so to speak, that’s going to come out of this whole AI wave. And I think phase one is sort of becoming human-level intelligent across a bunch of different categories. And I think that will largely go to just doing the things that we do today better, faster, cheaper. I think phase two is going to be becoming super intelligent across specific categories one at a time.
Jason Kelly: Yeah.
Pat Grady: And that’s where all the breakthroughs are going to come from. And I feel like we’re kind of in this transition phase where we’re getting to the point of human-level intelligence across a bunch of different things. We’re about to start being super intelligent in a bunch of different things, and we’re about to start seeing a bunch of breakthroughs. And so I think it’s like we’re within months or years of it becoming really interesting.
Sonya Huang: Let me give you an example. We just backed this team that did the AlphaChip project at Google. So they’re using AI systems to actually just design chips that perform better than what human chip designers can do. And so, like, I think we’re going to see these pockets of superintelligence in different corners of industry.
[CROSSTALK]
Pat Grady: What was the AlphaGo move? 38 or whatever?
Sonya Huang: 37.
Pat Grady: Yeah. I always forget the number. I know. I thought it was, like, 83. 37? Great. Like, move 37.
Jason Kelly: Right? Yeah. I mean, if that’s true, right? Like—and I think there’s two ways to get at it, by the way. My secret on this would be one is the thing you’re saying, like, it’s going to be superintelligence, right? It’ll intuit something that a person wouldn’t have. And then I really think if you can solve the problem of greatly accelerating the experimental work, it’s almost the same.
Pat Grady: Yeah. The combinatoric approach.
Jason Kelly: A lot of the physical world stuff is not simulatable, right? And so it’s just can you run that machine faster? And then importantly, like you’re saying, can you feed it back and make it smarter based on what it’s actually seeing in the world? And then, okay. And then I think if you start to believe you can get those breakthroughs, I do think you have to ask, how do you commercialize? Like, what’s venture look like in that scenario? Right? Because you’re like, “Oh, crazy, I got this insane break. Yeah, we got a room-temperature superconductor, right?” Or, you know, right? Like, now what? Right? And is it like you go to the big guys, you do it yourself, right? Like, there’s a whole thing there. I think it’s fascinating.
Pat Grady: Yeah, we had this conversation the other day on will the venture capital industry shrink because the cost of producing everything gets cheaper because it becomes so much more efficient?
Jason Kelly: Yeah.
Pat Grady: And if we analogize back to the cloud transition, when all of a sudden you didn’t have to build your own data centers, you could just spin up a cloud service, you might have thought the same thing would happen, but actually the opposite happened. There was this explosion in creation, which made distribution that much more competitive. And so there were a million different companies, but then all of them had to fight so hard to, like, break through the noise.
Jason Kelly: Yeah.
Pat Grady: And I would guess the same thing happens, which is the cost of creation goes down and down and down. The cost of distribution goes up, because there are just so many different things out there.
Jason Kelly: Yeah, that could be right. Yeah.
Sonya Huang: Do you think language-based foundation models are the right substrate, or do you think somebody’s got to train, like, you know, an ACTG-native model?
Jason Kelly: Yeah. So, like, Arc’s Evo is an ACTG-native model, right? So it’s trained on a trillion bases of DNA. And I think that’s awesome, right? I think it’s super exciting. I think it will apply. It’ll be like a tool available to the reasoning model to do its job. That’s already the case, right? Like, the reasoning model working on, say, even our OpenAI project could go access AlphaFold, design a protein, get it synthesized, add that as a reagent into the project. Like, that’s allowable, right? And so I think those will end up being powerful tools. But I still think the reasoning models are really—they can do the job of an experimental scientist. And so now we have, like, a thousand experimental scientists in a box. That’s already true. I don’t need, like, a miracle, right?
Sonya Huang: So here’s a question: AlphaFold, all these papers have come out on how AI is changing drug discovery. Do you think the pace of drug discovery has actually accelerated or not?
Jason Kelly: Okay. Yeah, let’s nerd out now on, like, what the hell you can actually do with bio, which is like a pile of trash all the time, right? Okay. So, like, I basically—you know, Jurassic Park came out when I was 13. All I want to do in life is, like, make Jurassic Park.
Pat Grady: [laughs]
Jason Kelly: Genetic engineering is awesome, right? Like, that’s why I’m doing this.
Sonya Huang: Did you see that one company that brought back the woolly mammoth?
Jason Kelly: Yes. Yeah, I know them well. Colossal. Yeah, that’s great. I love that stuff. They haven’t brought back the woolly mammoth yet, but they brought back a dire wolf.
Sonya Huang: Oh, a dire wolf. Sorry.
Jason Kelly: Mammoth is coming, I’m sure. But yeah, okay. Right? But really what I’m excited about is the ability for, you know, kids someday to design biology like they program computers, right? Like, that is what I want. Like, that’s the world I want to exist. And so the question is like, how the hell do you get there? And one of the issues we have in programming biology, designing DNA, genetic engineering, whatever you want to call it, is that the only working app ecosystem is therapeutics. There is, like, 85 percent of the market for biotechnology is therapeutics, and then there’s, like, 10 percent is ag. Remember Monsanto? Boogity boogity, right? Like, that’s genetic engineering in plants. And then there’s, like, five percent that’s, like, industrial, like you and your cold water laundry detergent. That’s a product of biotechnology. There’s enzymes in there that break up dirt without you having to make your laundry hot. And so that’s five percent.
Sonya Huang: Okay.
Jason Kelly: That is the totality of apps we’ve come up with so far for programmable matter compilers, i.e. cells. It’s embarrassing. Okay, right? Like, the fundamentals of this—but again, like, let’s imagine computers. If the only application for computers was drug discovery, we’d all be like, “They’re so—you know, man, such a pain in the ass with these computers,” right? And so there is, like, a distinction between, like, what we’re really working on at Ginkgo and other places like us, which is like, how do you really make it easier and faster and cheaper to just design biology, make it do new things?
Pat Grady: Yeah.
Jason Kelly: And then the fact of the matter being that the only apps that really, like, have ROI are drugs.
Sonya Huang: Yep.
Jason Kelly: Okay. And drugs have, like, annoying features. The biggest one being, like, the time it takes from, like, inception of the drug to making money is a killer. And it has to do with regulatory, basically. Like, it’s like, well, we don’t like to stick things in people. You got to be careful. Like, all that stuff is fine and we can get faster on that. Like, China is also eating our lunch. Like, you might have seen this, but they can do a trial in six months. It takes us, like, two and a half years, like, for phase one. It’s crazy. Australia is actually eating our lunch. I think our FDA will just match Australia soon, which is great. So we will get faster, but it’s still not like launching a phone app, okay? Does that make sense?
Sonya Huang: Yeah.
Jason Kelly: So that does remain, like, a problem, I would say. We tried—there’s been a bunch of other attempts at other things, you know, like animal-free meat and, like, it’s never quite been good enough to, like, disrupt another industry yet.
Sonya Huang: Yeah.
Jason Kelly: I would love to see that happen. That would be a big accelerant, not just for whatever industry, but ultimately for genetic engineering. And then if you accelerate genetic engineering, you try to create that flywheel that we got in computers where it keeps going and going. Now that said, to pick our app of drugs, it has gotten more expensive to develop drugs, not less, year over year for the last 25 years. So that’s not great. That’s the opposite of what should be happening, right? And why is that? Because we do it manually. That’s my opinion. We have not—and it’s like Baumol’s cost disease or whatever. Like, these scientists are getting more expensive. The rent is getting more expensive. That’s it. They’re actually more productive. Like, we give them new tools. Like, we are getting slightly better, but the majority of the cost is manual work.
Pat Grady: Yeah.
Jason Kelly: And that shit does not get cheaper. And so that is what I think is the root of it, actually. And that’s why you see me 15 years into this being like retrenching to solve that problem first, because I don’t believe we really get out of the mud until we’ve got the people out of the lab. Then from that base, we can start to climb out, right? Now it’s fully automated. You can do all kinds of crazy stuff, and eventually it looks like chips someday, right? It’s like alien technology. You know, you’re doing it in some way that humans never—remember before chips was vacuum tubes. It was like human-scale electronics. And then we were like, okay, cool. We saw the curve. That will happen for lab work and genetic engineering, I promise you. But the first step is put down the vacuum tubes, right? Get onto some system that does not need people in the middle. Does that make sense?
Pat Grady: How does the application space change?
Jason Kelly: I don’t know. That’s very unpredictable. Well, I’ll tell you some things I think I’m excited about in the near term. You’re familiar with the GLP-1 drugs, right? Lilly’s worth close to $1 trillion. That’s great. That is, in my opinion, like a consumer product. I’m on the GLPs. It’s awesome! It is like the best thing since the iPhone. You don’t think about food. It’s like you get to spend your day thinking about work or kids, whatever else you want to do. You don’t have to think about, oh, I gotta intermittent fast through lunch or I’m going to be obese, right? You can just get your willpower back. It’s awesome. But to me, the reason it’s worth so much money is because it’s not treating a disease, right? Like, the biotech industry, the therapeutics industry today is really the disease industry.
Pat Grady: Yeah.
Jason Kelly: Right? And how much of your life—and again, depends on the person—but how much of your life do you have a disease? It’s like a small amount of the time. How much of your life do you want to, like, weigh 15 pounds less? How much of your life do you want to sleep better? How much of your life do you want to have more muscles? How much of your life do you want to feel better? Like, the pricing for biotech applications in the consumer space is bananas. Oh, it adds two years to your lifespan. What’s that worth? Like, what is the value of a biotech product that adds five years to lifespan? This is Sequoia Capital. Throw a number at it.
Pat Grady: Depends on your customer.
Jason Kelly: $50 trillion? Like, it’s infinity. There’s no limit on the value of something that would, like, stack extra years of healthy life onto people’s lives. Like, that’s nuts, right? Like, that’s effectively what our healthcare system is trying to do. And think of, like, the total consumption cost of that. So if you could have that in a pill, in a shot?
But right now, today, we don’t even have a good pathway to get something like that approved, right? Because all of the regulatory, the FDA and everything, is oriented around treating disease.
Pat Grady: Yep.
Jason Kelly: And this is actually where all the people are like, “Oh, MAHA, blah, blah, blah.” But I think, like, that line of the MAHA thing, which is like, hey, actually it’s not just about disease, but about being healthy when you don’t have disease, I think is really good. Like, I think that’s a really good thing for the industry. And so I do think you’ll see that set of things happen.
And so that’s one half. It’s like new drugs there. And then the other one, our first investor out of YC, you know who it was? Our angel? Mr. Bryan Johnson.
Pat Grady: No kidding!
Jason Kelly: Oh yeah. Back when he was like, pudgy VC Bryan.
Sonya Huang: That’s awesome. No way!
Jason Kelly: Oh yeah, of course. Yeah. Uh-huh. Not like, you know, longevity, like, you know …
Sonya Huang: Did you just call him pudgy VC?
Jason Kelly: Yeah. Back when he was like a normal person, right? Yeah, not, like, jacked. He’s awesome now, right? But, like, what he’s done, what’s interesting about what Bryan’s done is he has normalized the monitoring. Like, because I asked him, I was like, “How are you—like, these are, like, all these interventions.” Like, you’re a—like, well, you know, like, Bryan’s got a good life, you know? Like, would you be like, oh, you’re taking some random thing and trying it out? Like, isn’t that scary? And he’s like, “Well, I’m monitoring all the time,” right? So, like, every week he’s, like, taking all these tests and everything else. He’s the most measured person, $2 million a year of diagnostic stuff, right? Like, that is the other area. So like, oh, we all love our Oura Rings and everything. This is pathetic, okay? Right? Like, it’s great. Yeah. Heart rates. Like, it’s like telling you nothing, all right? Like, the real—and I love Oura, by the way. I’ve had this thing for 10 years. But, like, the real meat of what’s going on inside your body is molecular.
Pat Grady: Yeah.
Jason Kelly: So what we really should be doing is, like, taking a blood sample every week and giving you, like, a whole readout of a ton of stuff, like, longitudinally over time, so that you can try different interventions for you and see how it affects you molecularly, because that’s what actually matters. Like, molecularly, like aging is molecular, right? It’s not your fricking—whatever. And so that whole world, stuff like Function Health getting going, doing Quest tests. I mean, my God, I did it over Christmas, but it’s like 10 vials. It’s like the worst experience in the entire world, right?
Sonya Huang: There’s the at-home stuff now too, right?
Jason Kelly: Yeah, but it’s so early, right? That’s my point. So I think that line is another place that could be a big—if you’re asking about near-in apps for biotech, that one is the other one, right? And so I think you could see that. I think you could see other things like the GLP-1s. Those are ones I’m excited about.
Pat Grady: Awesome.
Jason Kelly: And then I’m always hopeful …
Sonya Huang: Jurassic Park?
Jason Kelly: Something like that. Yes! You know, right? Like, that there’ll just be some other weird thing. And we did just launch a CloudLab service at Ginkgo where you can—like, we have experiments as cheap as $39 that you can just run. And we don’t send you anything. Like, we won’t send you a sample, but we will send you back data. So it’s like you do the experiment, we run the experiment for you, you get the data back, right?
And so my last point on this one is, like, I think, like, science is thought of as this very, like, precious genius thing, but really what it is is, like, formalized human curiosity. It’s like a process by which humans, of which all of us are curious about things, like, do curiosity right, like, really try to answer our curiosity. Does that make sense?
Pat Grady: Yep.

Jason Kelly: And I think everybody’s curious. And so I believe that if you drop the cost of—like, a lot of what blocks people from science is actually not the esotericness of it. It’s, in my view, the lab.
Pat Grady: Yeah.
Jason Kelly: That is brutal, right? Like, you don’t have access. A) It is totally gatekept. Like, you cannot get access to one. Like, almost legally you can’t get access to one, right? And so there’s just this whole thing, and I’m like, well, what if that went away? What if people, everyday people, could order an experiment? What if the model would help them design the experiment to ask a question that they have about the world? Would they suddenly ask questions and do these experiments? Would they be—would everybody—would millions of people want to be scientists?
Pat Grady: Yeah.
Jason Kelly: And I know that sounds like, well, that’s nuts, blah, blah, blah. But, like, if you rewind the clock—God bless Silicon Valley and the computer industry—to the 1960s when it was IBM and it was mainframes and you told people that kids would program computers, they would say you’re fucking insane. And so I believe if you do manage to drop the cost on all this stuff, you may have kids and everybody else wanting to just ask original scientific questions and being able to do it. And that would be a cool market, right? And so anyway, all this stuff I feel is on the other side of getting this AI for science stuff working, but I’m excited about it.
Pat Grady: Extremely cool vision for the future.
Sonya Huang: Great note to end on. Thank you.
Jason Kelly: Yeah.
Sonya Huang: Very inspiring.
Jason Kelly: Thanks for having me on.