The AI Product Going Viral With Doctors: OpenEvidence, with CEO Daniel Nadler
Training Data: Ep32
OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities. Founder Daniel Nadler explains his team’s insight that training smaller, specialized AI models on peer-reviewed literature outperforms large general models for medical applications. He discusses how making the platform freely available to all physicians led to widespread organic adoption and strategic partnerships with publishers like the New England Journal of Medicine. In an industry where organizations move glacially, 10-20% of all U.S. doctors began using OpenEvidence overnight to find information buried deep in the long tail of new medical studies, to validate edge cases and improve diagnoses. Nadler emphasizes the importance of accuracy and transparency in AI healthcare applications.
Summary
Daniel Nadler, the founder of OpenEvidence, shares insights on how his company has leveraged reliability in AI to improve medical decision-making. By treating doctors as consumers rather than going through traditional healthcare system gatekeepers, OpenEvidence has achieved remarkable adoption by helping physicians navigate the overwhelming volume of medical research.
Maintain a relentless focus on accuracy and quality: In medicine, there is no room for error or hallucination. OpenEvidence achieves its high standard by only training on peer-reviewed medical literature, using an ensemble of specialized models rather than a single large model, and allowing full transparency into source citations.
Build infrastructure for reliability at scale: As AI applications become mission-critical, robust infrastructure becomes as important as the AI capabilities themselves. Systems need to be reliable and available when users depend on them, requiring significant investment in traditional software engineering.
Target real, acute user needs: Rather than building technology in search of a problem, focus on solving genuine pain points. For physicians, the challenge of keeping up with exponentially growing medical knowledge while handling complex patient cases creates an urgent need for better information access.
Create virtuous cycles with stakeholders: By sending millions of physician visits to medical journal pages and making it easier for doctors to discover relevant research, OpenEvidence built symbiotic relationships with publishers that improved the product further through expanded access to premium content.
Attract elite talent through elite challenges: The most capable people want to work with other high-performers on difficult, meaningful problems. Build a team of individuals with high “neuroplasticity”—the ability to rapidly learn and adapt to new information and situations.
Transcript
Daniel Nadler: One of the things we hear so frequently from doctors about OpenEvidence is, you know, I used it to look up this thing for a patient case that is maybe a patient case that I would have seen one or two times in my career.
And then the same doctors are saying that about a different patient case, and then about a different patient case, and then about a different patient case. And then you realize just how long the tail of this thing is, where it’s like, well, if the majority of your uses of OpenEvidence are patient cases that you would see once or twice in your career, then that really captures what the thing is doing.
Pat Grady: One of the questions on everybody’s mind as it relates to AI is will this actually be good for humanity? One of the use cases where AI is inarguably good is medicine. We have Daniel Nadler, co-founder of OpenEvidence, on the show today. OpenEvidence is trained on peer-reviewed medical literature to provide an AI copilot that helps doctors make better decisions at the point of care. This is inarguably good for humanity. It doesn’t take crazy assumptions to believe that OpenEvidence will save a million lives over the next decade. Today, we’ll hear from Daniel on how they built the product, what’s different versus some of the other application layer AI products out there, and some real life examples of how it’s being used in the field to the benefit of everybody. We hope you enjoy.
Daniel, welcome to Training Data. Thanks for coming on the show.
Daniel Nadler: Thanks Pat, for having me.
Doctors are consumers
Pat Grady: All right, how many doctors will use OpenEvidence today?
Daniel Nadler: Today probably over 100,000 physicians in the United States, and more globally.
Pat Grady: And what would that number have been about a year ago?
Daniel Nadler: A thousand? Fewer than that, close to zero. You know, look, most people aren’t aware of this, but there’s only about a million doctors in the United States serving a population of over 340 million people, which is itself an issue that we’ll probably talk about later on. But there’s only a population of about a million doctors, and today maybe 100,000 of them are going to use OpenEvidence on a monthly basis. About 300,000-400,000 monthly active users touch our system, including over 200,000 that log in and ask questions. So, you know, you’re talking about 10, 15, 20 percent, maybe even 25 percent of doctors in the United States that are using OpenEvidence in some form or another.
Pat Grady: And we’re not healthcare investors, we’re technology investors here at Sequoia, but my understanding is that it’s not normal for 100,000 doctors to be using something, you know, overnight. Like, my understanding is that in healthcare it normally takes a lot longer to get to that sort of scale. So what did you guys do right?
Daniel Nadler: Well, that’s why we get along so well, because you’re not healthcare investors, and our approach has not been a healthcare approach. What we got right is we realized that doctors are people, too. Doctors are consumers. In fact, everyone’s a consumer. I think that’s what you get right in your investment strategy, which is you don’t sort of think about consumer internet applications as one category, and then a bunch of stuff over here as its own kind of weird, quirky, opaque, siloed industry categories over there. You think about everything in terms of consumer internet growth curves. And that’s exactly how we thought of it. You know, obviously if you try to go top down—so everyone says, you know, it’s such a refrain, you know, healthcare is impossible to break into. Healthcare is so hard. Don’t start a healthcare company.
Pat Grady: And I think the evidence generally supports that.
Daniel Nadler: Of course, of course. Because they’re all trying to do the same—they’re all trying to bang their head against the wall in the exact same way. They’re all trying to go to the top of some integrated delivery network which is like a large network of hospital systems, score a meeting with the CMIO or the CTO or all these acronyms. It’ll take them three months to four months. Even if they’re properly networked and connected, it would take them three or four months to get meeting one on the calendar. Meeting one will be great. They’ll high five each other. They’ll be like, “That was a great meeting! We got great feedback!”
What’s the follow up? Well, they’re going to schedule a follow-up meeting which will probably be with a responsible AI committee. That will take three months after the first meeting. Then you’ll present to the responsible AI committee, you know, at that hospital. Along the way the committee members will have changed their strategy around, responsible AI will have changed, maybe the presidential administration will have changed, and now JD Vance is saying actually you have to change your approach to responsible AI. So that gets delayed, and you’re a year into it before you’re, you know, in meeting three or meeting four. And by the way, no doctors there along the way are getting the benefit of using the application. I had a lot of experience—you know, this wasn’t my first rodeo. I had sold a very successful AI company before starting OpenEvidence.
Daniel Nadler: Kensho. And so I was very familiar with how corporate America works and how large organizations work. It’s not specific to the healthcare system, it’s just called large organizations. And I was very familiar with that. I had pattern recognition to that. And so I realized that that wasn’t really a viable path forward for us, especially since this was my second company, and it was very mission driven, very impact driven. But there would not have been much sense of accomplishment for me to have nominally started a second company that was mission driven or impact driven in healthcare, but then have no doctors using it because we’re waiting for meeting number six or seven or eight with the hospital system.
So long story short, we just took a radically different approach. Doctors are people, too. Doctors are consumers, too. And if you make something awesome that is life changing and game changing and profession changing for knowledge workers, and it’s really good enough and you put it out on the app store, it sounds 101, but it really works, people. You make something awesome, you put it out on the app store for free, and it turns out that without fancy marketing campaigns or large marketing budgets or anything like that—all of our growth is word of mouth, doctor to doctor to doctor to doctor—people will discover it and then they’ll start using it. And when they start using it and if it’s really awesome, they’ll tell other people about it. And then you get just sort of the network effects that, you know, you got with Tesla early on. Tesla early on famously spent, you know, almost no money on marketing. They didn’t—you know, as I understood it, car commercials were one of the largest categories of advertising.
It was taken as dogma that if you wanted to have even a remotely successful automobile, you needed to spend enormous sums of money on advertising those cars. And Tesla early on said, “We’re just going to make a product that’s so awesome that people will tell other people about it. Someone will drive it, they’ll be like, ‘Oh my God, this is so much better than other cars,’ and tell other people about it.” And that sort of word-of-mouth network effect that Tesla had happened with OpenEvidence with doctors, where doctors download it from the app store. As I said, a year ago it would have been, I don’t know, somewhere between zero and a thousand doctors using it. Today it’s hundreds of thousands. It’s 10, 15, 20, maybe 25 percent of active physicians in the United States. The denominator depends on how you calculate it, because there are more physicians that have a medical license than are necessarily active at any one time, but somewhere between 10 percent and 25 percent of active physicians in the United States use OpenEvidence today. And all of that is just this sort of Tesla logic of, like, make an awesome car. It’ll spread through word of mouth.
Pat Grady: And I think it’s intuitive to people what it means to have an awesome car. It’s probably not intuitive to people what it means to have an awesome app for doctors.
Daniel Nadler: Yeah.
Pat Grady: It’s like, what are doctors actually doing in this app? Why do they like it so much?
The firehose of medical knowledge
Daniel Nadler: So I think something is awesome if first and foremost it solves a real need and a real pain point, right? So much of technology is a solution in search of a problem. It’s important to build toward problems that are real and to have a solution that addresses a real problem. So you’ve got to begin from, well, what’s hard about being a doctor? One of the hardest things about being a doctor, besides the hours, having to go through medical school and all the rest, and the fact that they’re stretched too thin and there are too few of them for the population, is that they’re expected to keep up with a fire hose of medical information.
So this is really not appreciated by people who are not doctors, but there are two new medical papers published every minute, 24 hours per day, seven days a week.
There was a study published in Nature that said that medical knowledge doubles every 73 days. That methodology was probably a little aggressive. We did our own internal study at OpenEvidence of the rate of doubling of medical knowledge, and we came up with a much more conservative number, but that’s still only five years: by our conservative numbers, medical knowledge doubles every five years. There’s a lot of methodology in how you think about counting, you know, all citations. If it’s all citations, yeah, it doubles every 73 days. But not all citations in medicine are equal.
Even if you really just count what you should count, which is the stuff that doctors really need to know, like top-tier journals that have the highest impact factors, let’s say the top third of journals, something like that, even with that very conservative methodology, you’re talking about medical knowledge doubling every five years. So if you think about the math of that for a second: in 1950, medical knowledge doubled every 50 years. Today it doubles every five years. What that means is that if you graduated medical school in 1950, think about the half-life of what you learned in medical school, meaning the time it takes for half of what you learned in medical school to kind of be out of date—and we’re not talking about anatomy or cellular biology; that stuff doesn’t change, or doesn’t change that quickly, but the rest of it actually does change.
And so in 1950, the doubling of medical knowledge, meaning new treatments that doctors needed to be aware of, took 50 years, which means the half-life of what they learned in medical school nicely overlapped with the length of their career. So by the time they retired, maybe half of what they learned in medical school in terms of the efficacy of new treatments and the new treatments that were available was kind of out of date, but they were retiring, so that’s kind of okay, right? And they tried to keep up along the way, and it was easier to keep up along the way because it was so much slower.
Today, if you think about that same framework, by the time they’re, like—depending on their specialty—going through their residency and fellowship, half of what they learned in medical school is now out of date in terms of new treatments, the efficacy of those treatments and so on. So it’s impossible for doctors to keep up with that, because medical school, which is what most people who are not doctors assume is the information transmission mechanism for medical knowledge, only sustains a doctor for a couple of years these days in terms of the half-life of their knowledge.
You have a patient coming to the dermatologist who has psoriasis, and who, say, also has MS, multiple sclerosis. Okay, fine, you might think: just read up on the new classes of biologics and pick one based on efficacy and safety. But in this example you see that the specialization of medicine becomes very challenging, because the dermatologist maybe is keeping up with the dermatology journals, and all the new biologics for psoriasis that are published in the dermatology journals. But MS is a neurology condition, and it’s ludicrous to expect the dermatologist to also read every single page of every single neurology journal, let alone the border region or the interaction between those two specialties.
So the dermatologist in that situation, in that scenario, is in a real pickle because they don’t want to make the MS worse. They don’t want to just send the patient away and say, “Hey, everything’s too risky, so we’re not going to treat your psoriasis,” because that’s a quality of life issue for the patient. And they need to sort of figure out what is the latest evidence—hence OpenEvidence—what is the latest evidence on the efficacy of, yes, IL-17 inhibitors and IL-23 inhibitors, but specifically with a lens to what is the efficacy and safety of those inhibitors for patients who also have the comorbidity of MS?
And that information hunting-and-gathering exercise, in the traditional, you know, pre-OpenEvidence way of doing things, is just very painful. You’d go on Google and try to Google this. You’d go on PubMed. None of those searches work particularly well. They’ll sort of give you the titles of the articles. But this is such a specific question. This is not a generic article title, like “What is the Efficacy of IL-17 Inhibitors?” This is a very specific question: what is the safety of IL-17 inhibitors versus IL-23 inhibitors for a patient with both psoriasis and MS?
And that was a real need, to come back to your question: What makes something awesome for doctors? Not just this one specific case, but as you can imagine, for every example you can give like this, of, you know, a comorbidity, psoriasis, MS, seeing a dermatologist. The surface area of medicine is so enormous that you have millions and millions and millions of cases like this. You know, engineers that are listening to this can immediately understand this. Everything is an edge case, everything is a corner case.
Pat Grady: Yeah.
Daniel Nadler: Right? So from an engineering perspective, the way to think about medicine is that the surface area is not truly infinite, but for all intents and purposes, enormous. And everything is an edge case, and everything is a corner case. And you’re trying to always look up the edge case or the corner case. That’s the experience of being a doctor put in engineering terms. And so if you can solve the lookup for that edge case and that corner case—meaning go find the reference somewhere in a peer-reviewed, top-tier medical journal that answers the question of the comparative safety of IL-17s versus IL-23s in a patient with psoriasis and MS, which is never going to be in the title, which is what PubMed or Google could find; it’s always going to be buried, you know, on page five or six or seven of that 30-page medical journal article—then you have made the experience of being a doctor that much better. You have made the lives of doctors that much better, and most importantly, you’ve improved the life of the patients that they’re treating, because then you’ve prevented a scenario where the MS is getting worse because the doctor didn’t know that, hey, IL-17s are very promising generally for psoriasis, but if a patient has MS, IL-23s are actually a lot safer, right?
If the doctor didn’t know that, there’s no reason for them to know, because given the average age of a doctor, neither of those two things existed when they went to medical school. It’s not even a matter of studying harder: they could not have learned that in medical school. IL-23 inhibitors came out in 2017, 2018, 2019. Even if a doctor were my age, and I’d be a young-looking doctor, since most doctors you see are older than I am, I am old enough that were I a doctor, I would not have learned that in medical school. So there’s no way a doctor can learn that in medical school. They would need to sort of keep up with it post medical school. And because it’s an edge case or corner case, and because for every one example like that, there’s 10,000 other examples that they would also need to stay on top of, they probably would have had a hard time keeping up with that data point pre-OpenEvidence, and that would have resulted in a worse outcome for patients.
And one of the things we hear so frequently from doctors about OpenEvidence is, you know, “I used it to look up this thing for a patient case that is maybe a patient case that I would have seen one or two times in my career.”
And then the same doctors are saying that about a different patient case, and then about a different patient case, and then about a different patient case. And then you realize just how long the tail of this thing is, where it’s like, well, if the majority of your uses of OpenEvidence are patient cases that you would see once or twice in your career, then that really captures what the thing is doing. It’s essentially running search and discovery and knowledge retrieval on a tail that is—nothing is infinitely long—but is so long that, for all intents and purposes, to a wet human brain it might as well be infinitely long.
What is the evidence?
Pat Grady: Yeah. Yeah, the data on the rate at which medical knowledge is increasing, it’s a very positive data point, right? Like, it’s great that medical research and medical knowledge is increasing so quickly. It’s like this keg being filled with potential energy that hasn’t converted into kinetic energy, because we have this choke point, which is a human’s ability to ingest and make sense of all this information. So now that AI is here, and AI is great at looking over enormous amounts of text and doing reasoning across enormous amounts of text, it makes sense that AI is a little bit of an unlock to sort of convert that into kinetic energy with a system-level view. We talked a bit about some of this evidence that’s going into the system, so decompose the name of the company real quick: OpenEvidence. Why is being open important? And what exactly is all the evidence that’s going into the system?
Daniel Nadler: Sure. So the evidence is peer-reviewed medical literature. And it’s most important to say what it’s not. One of the reasons you got so many egg-on-face situations from large publicly-traded tech companies that put out AI systems in the area of medicine—I won’t name specific examples, but I think we all know what they are—is because they were doing retrieval across the public internet, which means they’re doing retrieval across health blogs on the public internet. They might have been doing retrieval at the time on Twitter, before Elon cut off access to that. There’s a lot of stuff on Twitter that you wouldn’t want to be the basis for a doctor’s decision. But even if you just take the health blog thing, it’s amazing how many health blog writers there are on the internet. I only discovered that when I sort of started working on this problem.
And most people assume that the people writing health blogs are doctors or at least have some connection to medicine. That’s actually not the case. These are not bad people. They’re really well-meaning people, but what you’ll find is a lot of these people are only part-time health blog writers and they’re also part-time travel writers or they’re part-time cooking recipe writers. They’re basically—you know, they’re sort of journalist bloggers where their core skill set is writing. They’re very good at writing. They don’t have any domain expertise in medicine. They don’t know any more about medicine necessarily than they know about planning an itinerary for a trip to Mexico.
And that’s what goes into the training data; the thing is literally called training data. And then we’re shocked when, in the early days of large language models, they said all sorts of crazy things. Well, they didn’t say crazy things, they regurgitated what was in the training data. And those things weren’t intended to be crazy, but they were just not written by experts. So all of that’s to say, where OpenEvidence really—right in its name, and then in the early days—took a hard turn in the other direction is that we said all the models that we’re going to train do not have a connection to the internet. They literally are not connected to the public internet. You don’t even have to go so far as, like, what’s in, what’s out. There’s no connection to the public internet. None of that stuff goes into the OpenEvidence models that we train. What does go into the OpenEvidence models that we train is the New England Journal of Medicine, which we’ve achieved through a strategic partnership with the New England Journal of Medicine.
Pat Grady: Say a word about that for a minute. Because my understanding is that New England Journal of Medicine doesn’t just give all of its research to everybody for free to go train on.
Daniel Nadler: They don’t. To the best of my knowledge, we’re the only AI company that they’ve done this with. And without getting into the specifics, we’re not the only AI company that’s asked. They’ve said no many times.
Pat Grady: Why do they seemingly trust OpenEvidence when they don’t trust other folks? What makes you particularly appealing to them?
Daniel Nadler: Well, without getting into the exact play by play, what happened was a number of other well-known AI companies showed up at their door and said, “Can we train on the New England Journal of Medicine?” And the New England Journal of Medicine said no. I won’t even get to the reasons of why they said no, and I’m not them, so I don’t speak on their behalf. But they said no. In our case, we didn’t show up at their door. A number of the very senior people on the editorial board of the New England Journal of Medicine were power users of OpenEvidence, and they wanted their content to show up in the thing that they were using.
Pat Grady: [laughs]
Daniel Nadler: So it’s beautiful, right? And so they came to us, and then we spent a lot of time really getting right a framework for cooperation and collaboration which prioritizes and privileges the importance of their brand and the sanctity of their brand. And, you know, they’re the pinnacle. They’re the apex medical journal, published by the Massachusetts Medical Society, and they’re nonprofit, they’re not commercially motivated. There’s no amount of money you could throw at them that’s going to make them make a perfectly mercenary decision. And in fact, without getting into the specifics of it, some of these really well-funded AI companies threw enormous amounts of money at them and they said no. If they were a private company, they probably would have said yes, but they’re a nonprofit, so they said no, because the Massachusetts Medical Society, which is a nonprofit organization, cared more about the sanctity and the pristineness of their mission as a nonprofit than they did about trying to score some sort of quick commercial contract.
In the case of OpenEvidence, again, it was beautiful. Very senior people there were users. And then this sort of circles back to what we talked about right at the start, which is had we waited, had we taken a top-down approach, had we taken an enterprise SaaS approach and been in waiting mode for meeting number 17 with the hospital system and no one was using it, well, if no one was using it, that would include no one at the New England Journal of Medicine using it, which would have meant that they wouldn’t have fallen in love with it, which meant that we would have never had an opportunity to strike a content partnership with them that would have been—you know, that in turn made the whole thing that much better and more awesome to the people using it. So you get into vicious cycles versus virtuous cycles.
In our case, the whole thing was a virtuous cycle. We put it out there, people downloaded it for free. Some of those people included very senior people at the New England Journal of Medicine. They started using it, they fell in love with it, they reached out to us, we did this deal, and now the thing is 10,000 times better because it’s trained on the full text of the New England Journal of Medicine, which no AI today in the market—I can tell you with certainty, no AI today in the market is trained on the full text of the New England Journal of Medicine other than OpenEvidence.
What does “open” mean?
Pat Grady: So we talked about the “evidence” part. Let’s talk about the “open” part a little bit. What does the “open” piece of OpenEvidence mean and why is that part important?
Daniel Nadler: “Open” meant a lot of things to me in the early days. One of the things that it meant was capturing that go-to market strategy that we talked about, which is for me, “open” was almost a reminder to myself that this was not an enterprise SaaS company. My first company was an enterprise SaaS company. Those can be great businesses. I don’t need to tell you, you’ve had phenomenal success with enterprise SaaS companies. They can be great businesses.
But for my second company, I didn’t want to just have a mission-driven, impact-driven company. I also wanted to have a company that was different from my first company in every respect, because I don’t like repeating myself. Specifically, I wanted it to be sort of direct to consumer, or in our case, direct to prosumer. So the “open” for me sort of symbolized that: that we would go directly to doctors, that we wouldn’t go to their gatekeepers, we wouldn’t allow people to be gatekeepers to doctors. We would go directly to doctors, and we’d appeal directly to the pain points that they experience on a daily basis, which is that they’re overstretched and overworked, there are not enough of them, they’re stretched far too thin in terms of the number of patients that they need to see and treat, but also that they’re forced to drink from a medical information fire hose. And we were going to make that better, we were going to help tame that medical information fire hose. And we were going to make that appeal directly to doctors as consumers and as people, specifically. And that was a big part of what “open” meant.
You know, there’s enormous inequality in the healthcare system in the United States—as there is in everything in the West. And there’s the haves and there’s the have nots. And the best hospital systems in the United States, which have enormous endowments and unlimited funding, can afford to buy not just, like, one tool or two tools or three tools, they can afford to buy every tool in the category and try them all out, and they can afford to have, like, doctors not really use most of them. And that’s okay because, you know, if DOGE or Elon went and did an audit of the SaaS spend of some of these really well-endowed hospitals, he would have a field day because what he would find is they’re buying everything and they’re using almost nothing.
Pat Grady: Yep.
Daniel Nadler: And that’s happening over here, while over here you have doctors in rural parts of the country, or urban parts of the country that are just socioeconomically disadvantaged, or you have doctors that are in private practice or in groups of, you know, 10 or fewer—practices of 10 or fewer doctors. Most people don’t realize that a lot of doctors are small business owners. They don’t work for these enormous hospital systems with a lot of money. A lot of doctors are sole proprietors, they’re small business owners, they have a practice. It’s kind of almost like the 1950s, but it persists. They have a practice. They might have one or two people helping them in an administrative capacity, or a secretary or something like that, but they’re the principal, and they have to deal with all the administrative stuff plus see patients, plus, plus, plus, plus. And they don’t have these enormous technology budgets. They certainly don’t have university-style endowments like some hospitals do. And they can’t afford to go pay $10,000 for an enterprise SaaS subscription to some software product.
So all of that is what “open” meant. We got a letter from a doctor in Albany, Georgia, who said he’s the director of a cancer center there, in a community oncology practice, and that OpenEvidence has become a lifeline to his daily practice in his cancer center and has been a game changer in treating his cancer patients. And like most people, I didn’t know much about Albany, Georgia, so I looked it up, and very quickly on Wikipedia I discovered that Albany, Georgia, is in southwestern Georgia. It’s 75 percent African American. It has a median household income of $43,000 a year.
And I started to piece together the situation. You know, this doctor’s probably the only oncologist in a 50-mile radius. Maybe there’s a second one. And they’re serving an enormous geography of fairly poor people. And there’s no way that this doctor has the resources to pay $10,000 or $20,000 SaaS subscription software rates for anything. And that to me is what “open” means. It means the doctor in Fairbanks, Alaska, who wrote us a letter saying that she practices, again, in a community setting in Fairbanks, Alaska, with very limited access to subspecialists. And OpenEvidence has been a game changer for her in allowing her to access subspecialty-level medical knowledge without direct access to human subspecialists in Fairbanks, Alaska.
Again, if you look up that situation or just even think through that situation, she’s not paying—you know, she’s in a community practice. She doesn’t work for a hospital that can afford this kind of stuff. So that’s what “open” means. It’s every doctor in the country. We’re really proud that we’re used not just at the Mayo Clinic—we love the Mayo Clinic. We were partly incubated at the Mayo Clinic. We love these elite hospital settings. We have a lot of users at these elite hospital settings that also use it for free. But we’re not just used at the Mayo Clinic. We’re not just used at the Cleveland Clinic. We’re used across the country. We’re used in the middle of the country. We’re used at Walter Reed, where our nation’s warriors and veterans are treated, without the government having to go through a three-year procurement process to decide whether they want to use OpenEvidence.
You know, that’s another example. So one of the biggest health systems is the VA. To me, that’s one of the most important health systems because it treats our warriors and our veterans. For the VA to decide to do anything is probably a three-year exercise, and were OpenEvidence not open—to answer your question—we would probably still be in year 1.5 of three to four years of a government procurement process to decide whether doctors in the VA could treat their patients, who are warriors and veterans, with the benefit of OpenEvidence. And thankfully, we didn’t go that route, and we’re getting letters from doctors in the VA talking about using OpenEvidence to make a treatment decision at the point of care for a wounded warrior. It just energizes me waking up every day. So that’s what “open” means.
How’d you build it?
Pat Grady: Okay. So you guys, you’ve built something of a killer app for medicine, and it’s working really well. And a lot of the people who listen to the show are themselves trying to build killer apps of some sort using AI. So I’m curious, how did you build it? Like, is this a wrapper on top of GPT-3 or GPT-4? What’s going on under the hood? How’d you build it?
Daniel Nadler: So I’m going to have two parts to what I say. I’m going to talk about what we did, and then talk about what is applicable for a lot of people listening to us. Now, I’m guessing a lot of people listening to us are building applications that don’t necessarily have the same requirements that medicine has, so I do want to address that. In the case of medicine, the way we attacked the problem is by bringing together a team of PhD-level scientists who were working in the field. My co-founder, Zachary Ziegler, is a brilliant computer scientist from Harvard who studied with Alexander Rush in his Natural Language Processing lab at Harvard, which was one of the leading labs before ChatGPT even came out. Evan Hernandez comes from Jacob Andreas’s lab at MIT. And I could go on. Eric Lehman from MIT as well.
I mean, we put together a team of elite scientists who were working at the frontier of language models at two of the top three, maybe two of the top two, labs at the time in the country—in the world, actually. And we needed to do that because we’re trying to solve medicine, and we’re trying to solve the application of language models in medicine, and that was a very high standard and a very high bar, and it hadn’t been solved. And what the very large consumer internet companies were putting out was creating all sorts of embarrassing moments in medicine for those companies at the time—recent enough that we probably all remember this.
And so we needed sort of that level of almost the intersection of academia and engineering to go after this problem. And we needed to, in our case, produce original research and original knowledge. So we attacked the problem in a different way. Everyone at the time was trying to focus on scaling these language models larger and larger and larger. And we were very, very early—now this is sort of consensus with DeepSeek and all this stuff, but rewind, this is 2022—we were very early to the insight that smaller, highly specialized models overtrained on in-domain data would outperform much larger models on those in-domain tasks. They were very rigid; they wouldn’t write you a poem. They’d fall over very quickly the second you went outside the domain, but in the domain they were beautiful. They outperformed.
And we published our work. We approached this very academically. My background is academic as well—I did my PhD at Harvard. We were all academics by background, and we published our work in this paper, “Do We Still Need Clinical Language Models?” which was awarded the best paper in machine learning in 2023 at the leading conference in machine learning in healthcare, and attracted a lot of attention. It was really the first paper in the field altogether that showed that in medicine the best way to attack the problem was these smaller, more-specialized models. Again, that’s become consensus today, but you have to sort of pretend you’re not listening to this today. Pretend you’re listening to this in 2022, pre-ChatGPT. It wasn’t obvious, because what was coming out at the time was the Chinchilla paper from DeepMind, and everything was about larger, larger, larger, scale, scale, scale. And we just took this very different approach. And with the benefit of hindsight, everything’s obvious—it’s kind of obvious, right? If you go and listen to Jensen’s interview of Ilya, you know, Ilya uses the metaphor of jpeg compression, basically. It’s like these language models are basically a jpeg compression of the world. And that’s—okay, well, if they’re a jpeg compression of the world, what’s the world? What’s the world that you’re compressing? And it goes back to what we talked about: the public internet.
Pat Grady: Yeah.
Daniel Nadler: If you’re doing a jpeg compression of the public internet—which is what these large language models focused on scale are doing, because basically they’re token limited. They’re like, “Give me as many tokens as possible for me to train on.” And so, well, where do you find all the tokens in the world? You find them on the public internet. Well, to go back to Ilya’s point, what are you compressing then? What are you jpeg compressing? You’re compressing the public internet, and then you get all these embarrassing outputs that were of that late 2022, early 2023 vintage.
In our case, we sort of said, “Let’s make a jpeg compression of medicine.” And so let’s overtrain on, again, peer-reviewed medical science, stuff that comes out of the FDA, the CDC. We had the advantage—this is way before the New England Journal of Medicine partnership—that under copyright law, anything created by the U.S. government is public domain. That’s how Wikipedia does a lot of what it does. So in the early days, we started with sort of Creative Commons, public domain stuff that was available. And we were very lucky that in medicine—this wouldn’t work in every field, because in fields like law or accounting or tax, there’s a lot of stuff that’s behind walls—it turned out that a lot of the really great stuff was created by the U.S. government, in the form of what the FDA and the CDC had put out. So we overtrained on that stuff and solved the copyright issue that way early on, which then allowed us to bootstrap something awesome enough that people could download it, which then won over users at places where there were copyright considerations, like the New England Journal of Medicine. That started the flywheel of them reaching out to us, and now of us having the benefit of the stuff that is under copyright of the New England Journal of Medicine. But that was our approach, and it was very technical and very academic and very scientific, because accuracy mattered that much given our domain in medicine.
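The corpus-bootstrapping idea Nadler describes can be sketched in a few lines. Everything here is illustrative: the source taxonomy and the `build_corpus` helper are hypothetical, not OpenEvidence’s actual pipeline. The point is simply that training data is admitted by provenance (public-domain U.S. government works, permissively licensed literature) rather than scraped wholesale from the public internet.

```python
# Illustrative sketch: keep only documents whose provenance makes them
# license-clear to train on. The tags below are a hypothetical taxonomy.
TRAINABLE_SOURCES = {"fda", "cdc", "public_domain", "cc_by"}

def build_corpus(documents):
    """Filter a raw document pool down to license-clear training data."""
    corpus = []
    for doc in documents:
        if doc.get("source") in TRAINABLE_SOURCES:
            corpus.append(doc)
    return corpus

docs = [
    {"id": 1, "source": "fda", "text": "Drug label: ..."},
    {"id": 2, "source": "paywalled_journal", "text": "..."},
    {"id": 3, "source": "cdc", "text": "Guideline: ..."},
]
print([d["id"] for d in build_corpus(docs)])  # → [1, 3]
```

Later licensing partnerships, like the one with the New England Journal of Medicine, would amount to adding new entries to the allowed-sources set.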
Pat Grady: Yeah, I was going to ask you about that. So, you know, you have doctors using OpenEvidence to make clinical decisions at the point of care.
Daniel Nadler: Yeah.
What about hallucinations?
Pat Grady: In most applications, a hallucination is annoying. With OpenEvidence, a hallucination could be literally life-threatening, given how it’s being used. So how do you deal with hallucinations in particular?
Daniel Nadler: Well—and just to reiterate the first thing you said. And maybe this is advice for entrepreneurs or engineers who are listening to this. There are scenarios where hallucination is not even annoying. There are scenarios where hallucination is a feature. One of my favorite applications is Midjourney.
Pat Grady: Yeah.
Daniel Nadler: Hallucination is a feature of Midjourney. Maybe one takeaway is to find applications where the biggest hesitation actually gets judo-moved into a feature as opposed to a limitation. And just a total aside, an example—and someone should go start this company. Again, my first company was in finance, and I feel like I kind of grew up on Wall Street and I think that way. If I weren’t running OpenEvidence—I wouldn’t run another company, because I’m going to work in medicine for the rest of my career because of the impact. But if I were pressed on what I would do in finance now that large language models exist, I would actually begin from thinking about hallucination as a feature as opposed to a limitation. So where is hallucination a feature in finance? Well, it’s certainly not in, like, doing retrieval on the PE ratio. You need that to be right. Well, where else could it be used? What about risk management?
A lot of finance is about figuring out what the black swans are. A lot of finance is, what could go wrong? And then at the extreme tail, there’s a lot of money at stake. What is very unlikely, but could go really, really wrong? The first two questions most people can reason through without the aid of computers: what can go right and what can go wrong? You can kind of reason through that with your wet brain. But it’s harder for most unaided brains to imagine what could go really, really, really wrong at the level of the 2008 financial crisis.
But these language models are pretty good at hallucinating those things. I’ve actually, as an experiment, done this in my own portfolio management: I used these language models in a way where the hallucination is a feature. I give the model certain details of my portfolio, I talk about the company, I give it enough information in the context window so that it understands what the company does and so on. And it comes up with all sorts of scenarios that are on the long tail of what could go wrong. And I’m like, “Huh! I love Nvidia, but I never thought about that happening to Nvidia. That’s interesting.” So at a high level, I think there’s enormous opportunity. I think we’ve captured, like, one percent of the market today in 2025 in terms of applications built that even begin to think about hallucination or riffing as an advantage as opposed to a limitation. So that’s for all the entrepreneurs listening. That 99 percent is still up for grabs.
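As a sketch of the experiment Nadler describes, using hallucination deliberately for tail-risk brainstorming, the framing might look like the code below. `call_llm` is a stand-in for any model API, and the wording and company details are invented for illustration; what matters is that the prompt rewards creative generation rather than penalizing it.

```python
# Sketch of "hallucination as a feature": ask a language model to invent
# long-tail risk scenarios for a portfolio holding.

def tail_risk_prompt(company, context):
    """Build a prompt that explicitly invites speculative tail scenarios."""
    return (
        f"Here is what {company} does: {context}\n"
        "Ignore the obvious bull and bear cases. Invent ten highly "
        "unlikely but catastrophic scenarios for this company, on the "
        "order of a 2008-style tail event. Speculation is encouraged."
    )

def brainstorm_tail_risks(company, context, call_llm):
    # call_llm: any callable mapping a prompt string to model text
    # (a placeholder, not a real API).
    return call_llm(tail_risk_prompt(company, context))

# With a canned stand-in for the model:
fake_llm = lambda prompt: "1. A rival lithography breakthrough..."
print(brainstorm_tail_risks("Nvidia", "designs GPUs", fake_llm))
```

The same function with a real model behind `call_llm` would produce the kind of long-tail "what could go really wrong" scenarios Nadler mentions testing in his own portfolio management.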
Yes, in my specific domain, medicine, none of that is fair game. And so the way we had to deal with that is by not connecting OpenEvidence and the models that we trained to the public internet, and by only training on peer-reviewed medical knowledge. To go back to Ilya’s point, consider the jpeg compression that we made in our models, in our smaller specialized retrieval models and ranking models—you know, we don’t use one model. Without getting into our trade secrets, it’s an ensemble architecture. There are multiple models that do different things, half a dozen models, and they hand off tasks to each other. You can’t get the accuracy level of OpenEvidence by training one large language model. Under the hood, it’s going to be this sort of cooperative ensemble architecture, made up largely of smaller models that do very specialized things like retrieval and ranking and other things. And for those models, the jpeg compression is exclusively of peer-reviewed medical literature.
So it’s never going to be at risk of regurgitating or surfacing something that is not in the peer-reviewed medical literature, which is, as they say in GI Joe, more than half the battle, right? And then the other half of the battle is allowing transparency and interrogation of the answer. And we were very early to that. I’ve seen that ChatGPT and others have started to do that, but we were probably the first application—I would go out on a limb and say we’re, like, the first application—that grounded our answers in references that you could drill down and drill through to see the underlying sources of. We did that in early 2023, long before ChatGPT and others started to come out with similar features.
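The pattern Nadler outlines, specialized stages handing off to each other with every answer grounded in drill-down citations, can be caricatured in a few lines. This is a toy sketch under stated assumptions (keyword-overlap retrieval, a pass-through ranker, invented reference strings), not the actual OpenEvidence architecture, which he notes is a trade secret.

```python
# Toy ensemble: retrieve over a closed, peer-reviewed corpus, rerank,
# then compose an answer that keeps every source inspectable.
CORPUS = [
    {"ref": "NEJM 2021;384:123",
     "text": "biologic therapy psoriasis multiple sclerosis contraindication"},
    {"ref": "JAMA 2020;323:45",
     "text": "hypertension first line treatment thiazide"},
]

def retrieve(query, corpus, k=5):
    """Stage 1: cheap recall over the licensed corpus only (keyword overlap)."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d["text"].split())), d) for d in corpus]
    return [d for s, d in sorted(scored, key=lambda x: -x[0]) if s > 0][:k]

def rank(query, candidates):
    """Stage 2: a separate (here trivial) model reorders the candidates."""
    return candidates  # a real ranker would re-score with a stronger model

def answer(query, corpus):
    """Stage 3: compose an answer where every claim carries a citation."""
    sources = rank(query, retrieve(query, corpus))
    return {
        "answer": f"Found {len(sources)} relevant source(s).",
        "citations": [d["ref"] for d in sources],
    }

print(answer("psoriasis biologic multiple sclerosis", CORPUS))
```

Because the pipeline can only surface documents that exist in the closed corpus, an answer with no supporting source simply comes back empty rather than invented, which is the structural property the transcript is describing.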
And that’s how we won over users in the early days, because doctors saw that not only is this not saying anything egregiously wrong by regurgitating something from the public internet, but it’s also, even within the domain of the answers that it gives based on peer-reviewed medical literature, allowing me as the physician to go interrogate where it’s getting the thing that it’s saying, where it’s getting that source of information from, and drill down all the way to the reference and then go and read the reference. Which, by the way, also created a beautifully symbiotic relationship with the publishers, because instead of just compressing all their knowledge and then giving it away, as some folks did, we, as a result of just trying to make this as accurate as possible, inadvertently stumbled upon a model of cooperation with publishers that ended up being very good for them, too.
Because we send an enormous amount of traffic to medical journals. We send tens and tens of millions of visits from doctors to medical journal pages hosted by those journals, including traffic they might not otherwise have gotten, because the doctor is going to that journal for some detail deep in the methodology section that the doctor would never have known to go to that journal for. It’s really a virtuous circle all around.
And then we had medical societies that write guidelines reaching out to us, saying, “Hey, we noticed that you index this other society’s guidelines. Can you go index our guidelines? Because we want the traffic.” It’s beautiful, right? So you get accuracy, you get a symbiotic and mutually beneficial relationship with medical journal owners. But critically, you get the right information back to the doctor using OpenEvidence, who then will make a better decision for their patient at the point of care as a result of having better information.
What’s changed about starting an AI company?
Pat Grady: And it’s been a decade or so since Kensho got going, and obviously there’s been a lot of progress in the field of AI and machine learning since then. If we were to inspect the underlying architectures of Kensho and OpenEvidence, how much is the same and how much is different? And I guess the question behind the question is: How much of what goes into making an AI application that actually works is recent breakthroughs, and how much of it is more classical engineering and machine learning principles?
Daniel Nadler: Kensho was pre-large language models, pre-language models, pre-small language models, pre-BERT, pre-anything, almost pre-fire.
Pat Grady: [laughs]
Daniel Nadler: So it’s hard to compare, right? I mean, Kensho was very early NLP. Not when I sold the company; by the time I sold the company it was much more sophisticated. But I’m talking about when I founded Kensho in 2013. So it’s very different today. What they have in common is that there’s an enormous infrastructure component to building this stuff. So, you know, we train our own models—and I talked about that over the last few minutes. But even if you’re not training your own models, even if you’re just using an API to one of the usual suspects, that’s going to fall over at some point if you’re successful. And you want to be successful, and you want to get to the point where it falls over. And it will fall over. And at that point you need all the things that you have in traditional software engineering, like really good infrastructure.
And that is very similar to Kensho, because both were critical systems. You know, in the case of finance, there’s enormous amounts of money being moved around on the basis of this information. You can’t have it just stop or fall over in the middle of a trade. And I think that’s a good thing. I think that one of the things that everybody was concerned about post the ChatGPT moment was that all the rules of the game have changed.
Pat Grady: Yep.
Daniel Nadler: And, you know, I’m here to tell you that they haven’t. Yeah, the technology is better, but that’s a continuum. The technology has always been getting better. The technology was better from 1982 or ’83 to 1987, right? And from ’93 to ’97, right? The technology has always gotten better. Yes, there’s a step function now. Yes, there’s nonlinearity. Yes, there’s an exponential rate of increase. Yes, everything Ray Kurzweil said turned out to be correct.
But it’s a continuum. You know, even Ray Kurzweil thinks about this stuff as a continuum. And when you think about something as a continuum, it’s a relief in a way, because a continuum is something where the laws of physics aren’t changing along the continuum. Even if you think about the metaphor of travel toward the speed of light: yes, the technology that would get us from one-tenth the speed of light to one-half the speed of light in a spacecraft is highly non-linear in its sophistication, but the laws of physics in that acceleration are not changing. The technology is on a non-linear continuum, but it’s still a continuum. And that same continuum—non-linear, but a continuum—exists in engineering and entrepreneurship more broadly, but specifically in AI, where everything that mattered at Kensho continues to matter today. The intelligence level of the people that you’re bringing to bear on the task matters. You know, Kensho and OpenEvidence are identical in that we were able to be successful because we brought people with really high IQs to bear on the task.
Hire for neuroplasticity
Pat Grady: Let’s talk about that. Yeah, you mentioned Zach and Evan and Eric and Micah. How do you attract people like that? Why do they choose—for all the options they have, why do they choose to work on OpenEvidence?
Daniel Nadler: It’s impossible for me to answer that without just repeating what Steve Jobs has said, which itself has been repeated so many times, but I don’t have a better way of phrasing it. A players want to work with A players. It’s that simple. Elite people want to work with elite people. A lot of people who sign up to BUD/S, which is the sort of screening process for Navy SEALs, do so because they just want to see if they can keep up with the other people that are doing it. They want to test themselves. They want to see what their limits are. That’s as old as Achilles. That’s not new. It doesn’t matter whether it’s warfare or engineering or sports or any other domain, finance, the very best people in the world want to see just how good they are. They want to see what they’re made of, and the only way to do that, the only way to learn that, is to put yourself around other elite people and see how you stack up against those people.
So that’s the common denominator to what I did at Kensho—that worked very well—to what I’m doing here that’s working out very well, which is, you know—and it’s kind of controversial, or at least was controversial for a minute. You know, you couldn’t talk about IQ. You couldn’t say out loud for a while, like, “I just want people with really high IQs. I don’t care about anything else. That’s—I don’t care who you are, what your background is, what you look—I just want someone with a really, really high IQ.” But that’s the honest truth. I just don’t know—I don’t know how to sugarcoat it. I don’t know how to say it differently. I don’t know how to talk around that fact.
And so, you know, if you think about the people on—you know, the first four or five people: Zachary Ziegler, Jonas Wolf, Evan Hernandez, Eric Lehman, Micah Smith, that came together, sort of senior people on my team initially, yeah, every one of them, if I have to sort of classify this way, you know, came from a PhD program at Harvard or MIT. But that’s not because I’m like, I’m only going to recruit from Harvard and MIT, it was because I had the Kensho experience, and I learned from that experience that if you bring very high IQ people with very high velocity of learning to bear on a very difficult problem, they make more progress far more quickly than a team a hundred times that size that’s a more normal team.
And I think the really reassuring thing for everybody listening in this moment is the rules of the game haven’t changed. The physics haven’t changed. All the things that used to matter still matter: an elite team, high IQ people, high velocity people, hungry people, very motivated people, people with very high neuroplasticity. And by the way, when I say high IQ, what I mean is high neuroplasticity. I mean something very neurologically specific. I don’t mean speed at solving a Rubik’s Cube, which actually doesn’t necessarily correlate very highly to IQ.
Pat Grady: It’s the François Chollet definition of the ability to efficiently acquire new skills.
Daniel Nadler: Absolutely. It’s the rate at which you can learn completely new information and assimilate that new information. That’s what I mean by very high IQ. And guess what? That mattered a thousand years ago, and it mattered 3,000 years ago. The domain was different. It showed up in warfare and tactics and Sun Tzu and those sorts of things. But whatever humans were doing at any moment in history, what mattered was neuroplasticity. You know, I spend a lot of time in my personal life reading von Clausewitz and Machiavelli and Sun Tzu and the history of warfare. It’s a subject I’m very interested in. And history, of course: Napoleon and Alexander the Great and these folks. And it’s all just neuroplasticity. If you had to say in a few words what differentiated these people, I mean, none of these people were the physically largest people in their armies. Not even close.
What they all had in common is the facts on the ground could change very rapidly, as tends to happen in war, and their thought, not just their decisions, but their entire frameworks for thinking, would just, like, adapt. That’s the quality that, like, a Napoleon or an Alexander the Great had, which is, you know, yeah, they over prepared for the battles that they went into, and thought through every single thing that the adversary could do. But then none of that preparation would exactly match to what happened in the battle. And what differentiated them from even very good generals or very good military leaders is they would just completely adapt their way of thinking about the battle to the facts on the ground that were developing in real time in the battle.
And the standard way of describing that, at least in some branches of cognitive science, is neuroplasticity. These were just very high-neuroplasticity individuals. So what humans have been doing over the last, let’s say, 3,000 years has changed a lot, right? I mean, unfortunately there are elements of what humans were doing 3,000 years ago that persist to this day. War does persist. But not everybody today in 2025 is engaged in warfare in the way that might have been the case during the time of the Greek city-states. You have people today, thankfully, that are engaged in things other than city-state warfare. But what hasn’t changed are the neuroprocesses and cognitive qualities that are required for outlier success.
Lightning round
Pat Grady: Awesome. Let’s jump into a lightning round.
Daniel Nadler: Sure.
Pat Grady: Okay, question number one. I know this number is impossible to measure because it requires a counterfactual, but we suspect that the way OpenEvidence is being used, it’s saving lives. Like, it’s helping doctors make better clinical decisions. On what date will we be able to say that OpenEvidence has saved a million lives?
Daniel Nadler: A million lives? Well, in a way, this feels like a McKinsey interview, because you’ve got to reason through it: “Well, if you have 150,000 or 500,000 doctors using it, and those doctors each see a certain number of patients, what percentage of those patients are in life-threatening situations?” I’m doing the sort of …
Pat Grady: Can I tell you my math?
Daniel Nadler: Yeah. Yeah, you tell me your math.
Pat Grady: My math is, you know, it depends on where you look, but roughly 300,000 to 800,000 lives per year lost due to just straight-up medical mistakes. Not all of those are going to be attributed to doctors making decisions at the point of care. There could be other things that happen. But let’s take the low end of that: 300,000. Let’s cut it in half. That’s 150,000. That says it’s about six and a half years, you know, until you get there. And that’s a fully ramped OpenEvidence, so we’ll give you a couple years to keep growing. I don’t know, maybe eight or nine years from now. So we’ll call it November 4th, 2034.
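Pat’s back-of-envelope math, written out. Every input below is his stated assumption from the conversation, not a measured figure.

```python
# All numbers are Pat's assumptions from the conversation above.
deaths_per_year_low = 300_000           # low end of annual U.S. medical-error deaths
attributable = deaths_per_year_low / 2  # assume half trace to point-of-care decisions
years_at_full_ramp = 1_000_000 / attributable
print(round(years_at_full_ramp, 1))     # ≈ 6.7 years, his "about six and a half"
ramp_years = 2                          # allow a couple of years of continued growth
print(round(years_at_full_ramp + ramp_years, 1))  # ≈ 8.7 years out, i.e. roughly 2034
```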
Daniel Nadler: I’m going to use this as an interview question.
Pat Grady: [laughs] All right. All right.
Daniel Nadler: The only thing I would add is that maybe in the 2030s you have a million lives saved through the use of OpenEvidence, but what that doesn’t count is the patient today whose MS didn’t get worse because the dermatologist didn’t use the wrong biologic. And that’s happening today, right? And for every black-and-white life saved, you know, there are those examples. There was a doctor who uses OpenEvidence in Rhode Island who wrote us that he saved his patient’s life by using OpenEvidence to reason through whether the patient’s presentation of symptoms was consistent with a pulmonary embolism. He literally used OpenEvidence like a curbside consult to reason through that patient’s presentation, realized that it was indeed consistent with a pulmonary embolism, rushed the patient back to the emergency room, and saved that patient’s life.
So lives have already been saved through the use of OpenEvidence, and we know that because doctors tell us that. But for every one of those, there’s the MS not getting worse, or some comorbidity not getting worse. That’s on the order of millions today.
Pat Grady: Yeah. Yeah, yeah. Broad-domain, general-purpose foundation models: they are commoditizing. Yes or no?
Daniel Nadler: I think they’re getting better and better. I think the costs are coming down. I think everything Ray Kurzweil says is typically right. So, you know, the frontier of the frontier doesn’t get commoditized. It’s always the frontier. But, you know, yeah, the costs are going to—the cost of the wow factor that the first ChatGPT produced?
Pat Grady: Yes.
Daniel Nadler: Are going to converge to zero. Which is why I think, from a business perspective, all the interesting stuff is going to be at the application level. There’s still phenomenally interesting work being done at the foundation model layer, intellectually, academically, scientifically. But from a business perspective, so much of the interesting work, the great companies, to be blunt, are going to be at the application level.
Pat Grady: Yeah, well put. Well put. AGI. On what date did we or will we reach AGI?
Daniel Nadler: I think we’ve already reached it. We keep moving the goalposts. We’ve reached AGI. We’ve passed the Turing test. So we just keep moving the goalposts on what AGI is. What people really mean when they talk about AGI is consciousness. And they don’t know how to say that, so they try to say, “Well, it’s this thing or it’s this other thing,” and then the AI does that thing. Well, it’s the ability to have high school-level expertise in multiple different fields. Okay, it reaches that. Fine, fine, it’s not that. AGI is college-level expertise. And then it reaches that. And now it’s, you know, PhD-level expertise in everything from coding to medicine. That would be AGI. And then it reaches that. What they really mean is consciousness. That’s what people, I think, underneath it all, mean. When do you get what you get in the movies, when an AI becomes aware and conscious? I’m not sure that ever happens, because I don’t know that consciousness is an emergent property of sufficient density of a neural network. That’s a philosophical question.
Pat Grady: Yep. Yep. For AI founders, AI builders, AI fans, other than this podcast, what one piece of content should they consume?
Daniel Nadler: Ted Chiang’s novella Understand.
Pat Grady: Okay. Tell us why.
Daniel Nadler: I want you to have the joy of experiencing it. I won’t spoil the story. Ted Chiang’s one of the great science fiction authors of all time. He wrote “Story of Your Life,” which became the major motion picture Arrival. And the novella Understand was written in the early ’90s, and it is the best encapsulation—without any spoilers—of what non-linear acceleration of intelligence looks like.
Pat Grady: Hmm.
Daniel Nadler: It’s my touchstone for everything that I do. Most people expect a nonfiction answer. There’s great nonfiction. There’s tons of nonfiction reading, including, just like, go read the Chinchilla paper. Great paper. But if you want a touchstone for what it is that’s happening in our civilization right now, it’s this Ted Chiang story, Understand, because it captures, narratively, the feeling of non-linear acceleration.
Pat Grady: Yes. Awesome. Love it. All right, last question: What is the most optimistic or positive thing that you can imagine AI bringing to the world in the next couple of decades? How will all of our lives be better thanks to AI?
Daniel Nadler: I have to go with an extrapolation of a field that I’m kind of obsessed with but which is not truly possible today, which is personalized medicine. By which I mean—so personalized medicine has been just over the horizon. It’s kind of like quantum computing. It’s kind of like fusion. You know, it will happen on some civilizational scale. It’s been just over the horizon for a very long time, and what it means changes. In a way, we’ve been talking about it this whole time, because using OpenEvidence to say that, hey, if you have psoriasis and MS, you should use this biologic versus this other biologic—that is the beginning of personalized medicine. That’s personalized to your comorbidity versus any other person with psoriasis. But that’s just scratching the surface of what personalized medicine can be.
I think 10 years from now, whether it’s OpenEvidence or a constellation of these types of AIs, the exact fact pattern of your specific medical case is going to be matched to everything that is known in the entirety of medical knowledge that is relevant to your case, and a plan of care is going to be formulated that is hyper-tailored to everything that is specific to you and your case. And to me, that’s just enormously exhilarating. I think that is just over the horizon, but it is feasible, and it will change everything.
That’s how you really start to push the ceiling on life expectancy. That’s how you start to get into maybe 120, 130 is no longer the ceiling anymore, and you get into these sort of ancient Greek metaphors and paradoxes of Theseus’s ship, and replacing every plank on the ship to the point where there’s no plank in the ship anymore that was the original plank, but you’re walking around and you still have your memory of your wife and your child and your relationship to them, and you’re still alive to watch your own child turn 100 and those things.
And I’m an optimist in that regard. I have an atomistic view of human biology, and I think the Theseus ship approach to human biology is just over the horizon. And a lot of it turns on the sort of personalized medicine stuff.
Pat Grady: Awesome. Daniel, thanks for joining us.
Daniel Nadler: Thanks, Pat.
Mentioned in this episode
- How will artificial intelligence change medical training?: 2021 Nature paper that says medical knowledge is doubling every 73 days.
- Do We Still Need Clinical Language Models?: Paper from the OpenEvidence founders showing that small, specialized models outperformed large models for healthcare diagnostics.
- Chinchilla paper: Seminal 2022 paper about scaling laws in large language models.
- Understand: Ted Chiang sci-fi novella published in 1991.