
LangChain’s Harrison Chase on Building the Orchestration Layer for AI Agents

Last year, AutoGPT and BabyAGI captured our imaginations: agents quickly became the buzzword of the day, and then things went quiet. AutoGPT and BabyAGI may have marked a peak in the hype cycle, but this year has seen a wave of agentic breakouts on the product side, from Klarna's customer support AI to Cognition's Devin. Harrison Chase of LangChain is focused on enabling the orchestration layer for agents. In this conversation, he explains what has changed to let agents improve performance and find traction. Harrison shares what he's optimistic about, where he sees promise for agents versus what he thinks will be trained into the models themselves, and discusses the novel kinds of UX that he imagines might transform how we experience agents in the future.

Transcript


Harrison Chase: It's so early on, there's so much to be built. Yeah, you know, GPT-5 is going to come out, and it will probably make some of the things you did irrelevant, but you're going to learn so much along the way. And this is, I strongly, strongly believe, a transformative technology, and so the more that you learn about it, the better.

Sonya Huang: Hi, and welcome to Training Data. We have with us today Harrison Chase, founder and CEO of LangChain. Harrison is a legend in the agent ecosystem, as the product visionary who first connected LLMs with tools and actions. And LangChain is the most popular agent-building framework in the AI space. Today, we're excited to ask Harrison about the current state of agents, the future potential and the path ahead. Harrison, thank you so much for joining us, and welcome to the show.

Harrison Chase: Of course, thank you for having me.

What are agents?

Sonya Huang: So maybe just to set the stage, agents are the topic that everybody wants to learn more about. And you’ve been at the epicenter of agent building pretty much since the LLM wave first got going. And so maybe first just to set the table. What exactly are agents?

Harrison Chase: I think defining agents is actually a little bit tricky. And people probably have different definitions of them, which I think is pretty fair, because it's still pretty early on in the lifecycle of everything LLM- and agent-related. The way that I think about agents is that it's when an LLM is deciding the control flow of an application. What I mean by that is, if you have a more traditional RAG chain, or retrieval augmented generation chain, the steps are generally known ahead of time: first, you're going to maybe generate a search query, then you're going to retrieve some documents, then you're going to generate an answer, and you're going to return that to the user. It's a very fixed sequence of events.

And when I think about things that start to get agentic, it's when you put an LLM at the center of it and let it decide what exactly it's going to do. So maybe sometimes it will look up a search query. Other times it might not; it might just respond directly to the user. Maybe it will look up a search query, get the results, look up another search query, look up two more search queries and then respond. So you have the LLM deciding the control flow.
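
To make that distinction concrete, here is a minimal Python sketch contrasting a fixed RAG chain with a loop where the model chooses its own next step. The `llm` and `retrieve` helpers are placeholders for illustration, not LangChain APIs; a real implementation would wire in an actual chat model and retriever.

```python
from typing import Callable, List

# Placeholder stubs for illustration only; swap in a real model call and retriever.
llm: Callable[[str], str] = lambda prompt: "ANSWER: ..."   # e.g. a chat-model call
retrieve: Callable[[str], List[str]] = lambda query: []    # e.g. a vector-store search

def rag_chain(question: str) -> str:
    """Fixed control flow: the same steps run in the same order every time."""
    search_query = llm(f"Write a search query for: {question}")
    docs = retrieve(search_query)
    return llm(f"Answer {question!r} using these documents: {docs}")

def simple_agent(question: str, max_steps: int = 5) -> str:
    """Agentic control flow: the LLM decides at each step whether to
    search again or answer the user directly."""
    scratchpad: list = []
    for _ in range(max_steps):
        decision = llm(
            f"Question: {question}\nResults so far: {scratchpad}\n"
            "Reply 'SEARCH: <query>' to look something up, or 'ANSWER: <answer>' to finish."
        )
        if decision.startswith("SEARCH:"):
            query = decision.removeprefix("SEARCH:").strip()
            scratchpad.append((query, retrieve(query)))
        else:
            return decision.removeprefix("ANSWER:").strip()
    return llm(f"Best-effort answer for {question!r} given: {scratchpad}")
```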

I think there are some other, maybe more buzzworthy, things that fit into this. Tool usage is often associated with agents, and I think that makes sense, because when you have an LLM deciding what to do, the main way it decides what to do is through tool usage. So those kinds of things go hand in hand. There's some aspect of memory that's commonly associated with agents, and I think that also makes sense, because when you have an LLM deciding what to do, it needs to remember what it's done before. And so tool usage and memory are kind of loosely associated. But to me, when I think of an agent, it's really having an LLM decide the control flow of your application.

Pat Grady: And Harrison, a lot of what I just heard from you is around decision making. And I’ve always thought about agents as a sort of action taking. Do those two things go hand in hand? Is agentic behavior more about one versus the other? How do you think about that?

Harrison Chase: I think they go hand in hand. I think like a lot of what we see agents doing is deciding what actions to take, for all intents and purposes. And I think the big difficulty with action taking is deciding what the right actions to take are. So I do think that solving one kind of leads naturally to the other. And after you decide the action as well, there’s generally the system around the LLM that then goes and executes that action and kind of feeds it back into the agent. So I think that, yeah, so I do think they go kind of hand in hand.

Sonya Huang: So Harrison, it seems like the main distinction, then, between an agent and something like a chain is that the LLM itself is deciding what steps to take next, what action to take next, as opposed to these things being hard coded. Is that like a fair way to distinguish what an agent is?

Harrison Chase: Yeah, I think that's right. And there are different gradients as well. As an extreme example, you could have basically a router that decides which path to go down. So there's maybe just a classification step in your chain, and the LLM is still deciding what to do, but it's a very simplistic way of deciding what to do. At the other extreme, you've got these autonomous agent type things. And then there's this whole spectrum in between. So I'd say that's largely correct, although I'll just note that there's a bunch of nuance and gray area, as there is with most things in the LLM space these days.
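
As a sketch of that "simplest gradient" end of the spectrum, a router gives the LLM exactly one decision, which branch to take, and everything after that stays hard coded. This reuses the placeholder `llm` and `rag_chain` from the sketch above; the route labels are made up for illustration.

```python
def routed_app(question: str) -> str:
    """The LLM only classifies; the control flow after that is fixed code."""
    route = llm(
        f"Classify this question as 'docs' or 'chitchat': {question}\nReply with one word."
    ).strip().lower()
    if route == "docs":
        return rag_chain(question)   # fixed retrieval pipeline
    return llm(question)             # just respond directly
```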

What is LangChain’s role in the agent ecosystem?

Sonya Huang: Got it. So like a spectrum from control to like fully autonomous decision making and logic. Those are kind of on the spectrum of agents. Interesting. What role do you see LangChain playing in the agent ecosystem?

Harrison Chase: I think right now we're really focused on making it easy for people to create something in the middle of that spectrum. For a bunch of reasons, we've seen that that's the best spot to be building agents in at the moment. We've seen some of these more fully autonomous things get a lot of interest and prototypes out the door, and there are a lot of benefits to fully autonomous things, which are actually quite simple to build. But we see them going off the rails a lot. And we see people wanting things that are more constrained, but a little bit more flexible and powerful than chains.

And so a lot of what we're focused on recently is being this orchestration layer that enables the creation of these agents, particularly these things in the middle between chains and autonomous agents. And I can dive into a lot more about what exactly we're doing there. But at a high level, being that piece of orchestration framework is where we imagine LangChain sitting.

Sonya Huang: Got it. So there’s chains, there’s autonomous agents, there’s a spectrum in between, and your sweet spot is somewhere in the middle, enabling people to build agents.

Harrison Chase: Yeah, and obviously that's changed over time. So it's fun to reflect on the evolution of LangChain. When LangChain first started, it was actually a combination of chains, and then we had this one class, this AgentExecutor class, which was basically this autonomous agent thing. And we started adding a few more controls to that class.

And eventually, we realized that people wanted way more flexibility and control than we were giving them with that one class. So recently we've been really heavily invested in LangGraph, which is an extension of LangChain that's really aimed at customizable agents that sit somewhere in the middle. And so our focus has evolved over time as the space has as well.

Are agents the next big thing?

Sonya Huang: Fascinating. Maybe one final setting-the-stage question. One of our core beliefs is that agents are the next big wave of AI, and that we're moving as an industry from copilots to agents. I'm curious if you agree with that take, and why or why not?

Harrison Chase: Yeah, I generally agree with that take. I think the reason that's so exciting to me is that a copilot still relies on having this human in the loop. And so there's almost an upper bound on the amount of work that you can have done by another system. It's a little bit limiting in that sense.

I do think there's some really interesting thinking to be done around what the right UX and human-agent interaction patterns are. But I do think they'll be more along the lines of an agent doing something and maybe checking in with you, as opposed to a copilot that's constantly in the loop. I just think it's more powerful and gives you more leverage the more that they're doing, which is also a bit paradoxical, because the more you let it do things by itself, the more risk there is that it messes up or goes off the rails. And so striking the right balance there is going to be really, really interesting.

The first autonomous agents didn’t work, why?

Sonya Huang: I remember back in, I think, March-ish of 2023, there were a few of these autonomous agents that really captured everyone's imaginations, like BabyAGI and AutoGPT. And I remember Twitter was very, very excited about them. And it seems like that first iteration of an agent architecture hasn't quite met people's expectations. Why do you think that is? And where do you think we are in the agent hype cycle now?

Harrison Chase: Yeah, maybe thinking about the agent hype cycle first. I think AutoGPT was definitely the start, and I mean, it's one of the most popular GitHub projects ever. So I'd say that hype cycle started in spring 2023 and ran through summer 2023-ish. Then I personally feel like there was a bit of a lull, slash downtrend, from the late summer to basically the start of the new year in 2024. And starting in 2024, we've started to see a few more realistic things come online. I'd point out some of the work that we've done at LangChain with Elastic, for example; they have an Elastic Assistant and an Elastic Agent in production. We saw the Klarna customer support bot come online and get a lot of hype. We've seen Devin, we've seen Sierra, these other companies start to emerge in the agent space.

And so with that hype cycle in mind, talking about why the AutoGPT-style architecture didn't really work: it was very general and very unconstrained. And I think that made it really exciting and captivated people's imaginations. But practically, for things that people want to automate to provide immediate business value, it's actually a much more specific thing that they want these agents to do. And there are really a lot more rules that they want the agents to follow, or specific ways they want them to do things.

And so in practice, what we're seeing with these agents is that they're much more what we call custom cognitive architectures, where there's a certain way of doing things that you generally want an agent to follow. And there's some flexibility in there, for sure; otherwise you would just code it. But it's a very directed way of thinking about things. And that's most of the agents and assistants that we see today. That's just more engineering work, more trying things out and seeing what works and what doesn't, and it's harder to do, so it just takes longer to build. And I think that's why that didn't exist a year ago or so.

What is a cognitive architecture?

Sonya Huang: Since you mentioned cognitive architectures, and I love the way that you think about them, can you just explain: what is a cognitive architecture? And is there a good mental framework for how we should be thinking about them?

Harrison Chase: Yeah, so the way that I think about a cognitive architecture is basically: what's the system architecture of your LLM application? What I mean by that is, if you're building an application, there are some steps in there that use LLMs. What are you using those LLMs to do? Are you using them to just generate the final answer? Are you using them to route between two different things? Do you have a pretty complex architecture with a lot of different branches, and maybe some cycles repeating? Or do you have a pretty simple loop, where you basically run the LLM in a loop? These are all different variants of cognitive architectures, and cognitive architecture is just a fancy way of saying: from the user input to the user output, what's the flow of data, of information, of LLM calls that happens along the way?

And what we've seen more and more, especially as people are trying to get agents actually into production, is that the flow is specific to their application and their domain. So there are maybe some specific checks they want to do right off the bat, then maybe three specific steps it could take after that, and then each one maybe has an option to loop back, or has two separate sub-steps.

And so, if you think about it as a graph that you're drawing out, we see more and more basically custom and bespoke graphs, as people try to constrain and guide the agent along their application. The reason I call it a cognitive architecture is just that I think a lot of the power of LLMs is around reasoning and thinking about what to do. So I would have a cognitive mental model for how to do a task, and I'm basically just encoding that mental model into some kind of software system, some architecture, that way.
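
A toy illustration of a custom cognitive architecture as a graph: each node is a step, each node returns the name of the next node, and some transitions are fixed while others are chosen by the LLM. This is plain Python for clarity rather than the LangGraph API, it reuses the placeholder `llm` and `rag_chain` from the earlier sketches, and the node names and guardrail are made up.

```python
from typing import Callable, Dict

State = dict  # carries the question, intermediate results, and the final answer

def check_input(state: State) -> str:
    """A domain-specific check that runs right off the bat."""
    if "password" in state["question"].lower():
        state["answer"] = "Sorry, I can't help with that."
        return "done"
    return "route"

def route(state: State) -> str:
    """The LLM picks one of the allowed branches."""
    choice = llm(f"Pick 'docs' or 'respond' for: {state['question']}").strip().lower()
    return choice if choice in ("docs", "respond") else "respond"

def docs(state: State) -> str:
    state["answer"] = rag_chain(state["question"])
    return "done"

def respond(state: State) -> str:
    state["answer"] = llm(state["question"])
    return "done"

NODES: Dict[str, Callable[[State], str]] = {
    "check_input": check_input, "route": route, "docs": docs, "respond": respond,
}

def run(question: str) -> State:
    state: State = {"question": question}
    node = "check_input"
    while node != "done":
        node = NODES[node](state)   # each node names the next node, which allows loops
    return state
```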

Is bespoke and hard coded the way the world is going, or a stop gap?

Pat Grady: Do you think that's the direction the world is going? Because I kind of heard two things from you there. One was, it's very bespoke. And second was, it's fairly brute force; it's fairly hard coded in a lot of ways. Do you think that's where we're headed? Or do you think that's a stopgap, and at some point more elegant architectures, or a series of default sort of reference architectures, will emerge?

Harrison Chase: That is a really, really good question, and one I spend a lot of time thinking about. At an extreme, you could make an argument that if the models get really, really good and reliable at planning, then the best thing you could possibly have is just this for-loop: call the LLM, decide what to do, take the action, and loop again. And all of these constraints on how I want the model to behave, I just put in my prompt, and the model follows them explicitly. I do think the models will get better at planning and reasoning, for sure. I don't quite think they'll get to the level where that will be the best way to do things, for a variety of reasons. One is efficiency: if you know that you always want to do step A after step B, you can just put that in order. And two, reliability as well. These are still non-deterministic things we're talking about, and especially in enterprise settings, you'll probably want a little bit more comfort that if it's always supposed to do step A after step B, it's actually always going to do step A after step B. I think it will get easier to create these things; they'll maybe start to become a little bit less complex.

But actually, this is maybe a hot take, or an interesting take that I have: the architecture of just running it in a loop, you could think of as a really simple but general cognitive architecture. And then what we see in production is custom and complicated cognitive architectures. I think there's a separate axis, which gives you complicated-but-generic cognitive architectures, as opposed to complicated-but-custom ones. That would be something like a really complicated planning step and reflection loop, or a tree of thoughts, or something like that. And I actually think that quadrant will probably go away over time, because I think a lot of that generic planning and generic reflection will get trained into the models themselves. But there will still be a bunch of non-generic planning, non-generic reflection, non-generic control loops that are never going to be in the models, basically, no matter what. And so those two ends of the spectrum, I'm pretty bullish on.

Sonya Huang: I guess you can almost think of it as like the LLM does the very general agentic reasoning. But then you need domain specific reasoning. And that’s the sort of stuff that you can’t really build into one general model.

Harrison Chase: 100%. I think a way of thinking about the custom cognitive architectures is that you're basically taking the planning responsibility away from the LLM and putting it onto the human. Some of that planning will move more and more towards the model and more and more towards the prompt, but I think a lot of tasks are actually quite complicated in some of their planning. And so I think it will be a while before we get things that are just able to do that super, super reliably off the shelf.

We’ve simultaneously made a lot of progress but still have a ton of room to go

Sonya Huang: It seems like we've simultaneously made a ton of progress on agents in the last six months or so. I was reading the Princeton SWE-agent paper, where their coding agents can now solve 12.5% of GitHub issues, versus I think 3.8% when it was just RAG. So it feels like we've made a ton of progress in the last six months, but 12.5% is not good enough to replace even an intern, right? And so it feels like we still have a ton of room to go. I'm curious where you think we are, both for general agents and also for your customers that are building agents. Are they getting to, I assume not five nines of reliability, but the thresholds they need to deploy these agents out to actual customer-facing deployments?

Harrison Chase: Yeah, so SWE-agent is, I would say, a relatively general-ish agent, in that it's expected to work across a bunch of different GitHub repos. I think if you look at something like v0 by Vercel, that's probably much more reliable than 12.5%, right? And so that speaks to the fact that there are definitely custom agents that are not at five nines of reliability, but that are being used in production. So Elastic, I think we've talked publicly about how they've done multiple agents at this point. And I think this week is RSA, and I think they're announcing something new at RSA that's an agent. I don't have the exact numbers on reliability, but they're reliable enough to be shipped into production. General agents are still tough. This is where longer context windows, better planning and better reasoning will help those general agents.

Focus on what makes your beer taste better

Sonya Huang: You shared with me this great Jeff Bezos quote, "focus on what makes your beer taste better." I think it's referring to the fact that at the turn of the 20th century, breweries were trying to generate their own electricity. I think a similar question a lot of companies are thinking through today is: do you think that having control over your cognitive architecture really makes your beer taste better, so to speak, metaphorically? Or do you cede control of the model and just build UI and product?

Harrison Chase: I think it maybe depends on the type of cognitive architecture that you're building. Going back to some of the discussion earlier, if you're building a generic cognitive architecture, I don't think that makes your beer taste better. I think the model providers will work on this general planning, and I think we'll work on these general cognitive architectures that you can try off the bat. On the other hand, if your cognitive architecture is basically you codifying a lot of the way that your support team thinks about something, or internal business processes, or the best way that you know to develop a particular type of code or a particular type of application, yeah, I think that absolutely makes your beer taste better, especially if we're going towards a place where these applications are doing work. Then the bespoke business logic, or the mental models, and I'm anthropomorphizing these LLMs a lot right now, but the models for these things to do the best work possible, 100%, I think that's the key thing that you're selling in some capacity. I think UX and UI and distribution and everything absolutely still play a part. But yeah, I draw this distinction between general versus custom.

Pop up a level, so what?

Pat Grady: Harrison, before we get into some of the details on how people are building these things, can we pop up a level real quick? So our founder, Don Valentine was famous for asking the question, “so what?” And so my question to you is, so what? Let’s imagine that autonomous agents are working flawlessly. What does that mean for the world? Like how is life different if and when that occurs?

Harrison Chase: I think at a high level, it means that, as humans, we're focusing on a different set of things. There's a lot of rote, repeated work that goes on in a lot of industries at the moment. And the idea of agents is that a lot of that will be automated away, leaving us to think at a higher level about what these agents should be doing, and maybe leveraging or building upon their outputs to do more creative, higher-leverage things.

And so you could imagine bootstrapping an entire company where you're outsourcing a lot of the functions that you would normally have to hire for. You could play the role of a CEO with an agent for marketing, an agent for sales, something like that, and basically outsource a lot of this work to agents, leaving you to do a lot of the interesting strategic thinking and product thinking. Maybe this depends a little bit on what your interests are. But I think at a high level, it will free us up to do what we want to do and what we're good at, and automate a lot of the things that we might not necessarily want to do.

Where are agents getting traction?

Pat Grady: Are you seeing any interesting examples of this today, sort of live and in production?

Harrison Chase: I think there are two categories, or areas, of agents that are starting to get more traction: one's customer support, one's coding. Customer support is a pretty good example of this. Oftentimes people need customer support; we need customer support at LangChain. And so if we could hire agents to do that, that would be really powerful.

Coding is interesting, because I think there are some aspects of coding, and this is maybe a more philosophical debate, that are really creative and do require lots of product thinking, lots of positioning and things like that. There are also aspects of coding that get in the way of a lot of the creativity that people might have. So if my mom has an idea for a website, she doesn't know how to code that up, right? But if there was an agent that could do that, she could focus on the idea for the website, and basically the scoping of the website, and automate that part.

And so I'd say customer support, absolutely, that's having an impact today. Coding, there's a lot of interest there; I don't think it's as mature as customer support. But in terms of areas where there are a lot of people doing interesting things, that would be a second one to call out.

Pat Grady: Your comment on coding is interesting, because I think this is one of the things that has us very optimistic about AI. It’s this idea of sort of closing the gap from idea to execution or closing the gap from dream to reality, where you can come up with a very creative, compelling idea. But you may not have the tools at your disposal to be able to put it into reality and AI seems like it’s well suited for that. I think Dylan at Figma talks about this a lot too.

Harrison Chase: Yeah, I think it goes back to this idea of automating away the things that get in the way of making, and I like the phrasing "from idea to reality", it automates away the things that you don't necessarily know how to do or want to think about, but that are needed to create whatever you want to create. One of the things that I spend a lot of time thinking about is: what does it mean to be a builder in the age of generative AI and in the age of agents? Being a builder of software today means you either have to be an engineer or hire engineers, right? But being a builder in the age of agents and generative AI allows people to build a way larger set of things than they could build today, because they have at their fingertips all this other knowledge and all these other builders they can hire and use for very, very cheap. I mean, some of the language around the commoditization of intelligence speaks to this, as these LLMs are providing intelligence for free. I think this does speak to enabling a lot of these new builders to emerge.

Reflection, chain of thought, other techniques?

Sonya Huang: You mentioned reflection and chain of thought and other techniques. Maybe can you just say what we've learned so far about what some of these cognitive architectures are capable of doing for agentic performance? And I'm curious which you think are the most promising cognitive architectures.

Harrison Chase: Yeah, maybe it's worth talking a little bit about why the AutoGPT things didn't work, because a lot of these cognitive architectures emerged to counteract some of that. Way back when, there was basically the problem that LLMs couldn't even reason well enough about what they should do as the first step. Prompting techniques like chain of thought turned out to be really helpful there: they basically gave the LLM more space to think, step by step, about what it should do for a specific single step. Then that actually started to get trained into the models more and more, and they did that by default. Basically everyone wanted the models to do that anyway, so yeah, you should train that into the models.

I think then there was a great paper by Shunyu Yao, called ReAct, which was basically the first cognitive architecture for agents, or something like that. What it did was, one, it asked the LLM to predict what to do, that's the action, but then it added in this reasoning component, so it's kind of similar to chain of thought. It put that in a loop and asked the model to do this reasoning step before each action. And that explicit reasoning step has actually become less and less necessary as the models have it trained into them, just like they have chain of thought trained into them.

So if you see people doing ReAct-style agents today, they're oftentimes just using function calling, without the explicit thought process that was actually in the original ReAct paper. But it's still this loop that has kind of become synonymous with the ReAct paper. So that covers a lot of the initial difficulties with agents, and I wouldn't entirely describe those as cognitive architectures; I'd describe those as prompting techniques.
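
For reference, here is a sketch of what that ReAct-style loop tends to look like in modern, function-calling form, without the explicit "Thought:" string. It reuses the placeholder `llm` from earlier; the `TOOLS` registry and the JSON reply convention are illustrative assumptions, not the original paper's prompt format.

```python
import json
from typing import Callable, Dict

# Hypothetical tool registry; a real agent would register its own tools here.
TOOLS: Dict[str, Callable[[str], str]] = {
    "search": lambda q: f"(search results for {q!r})",
    "calculator": lambda expr: str(eval(expr)),  # toy only; never eval untrusted input
}

def react_style_agent(question: str, max_steps: int = 8) -> str:
    """Run the LLM in a loop: at each step it either calls a tool or finishes."""
    history: list = []
    for _ in range(max_steps):
        decision = llm(
            f"Question: {question}\nSteps so far: {history}\n"
            f"Available tools: {list(TOOLS)}\n"
            'Reply with JSON: {"tool": <name>, "input": <string>} or {"final": <answer>}.'
        )
        action = json.loads(decision)   # with real function calling this arrives already structured
        if "final" in action:
            return action["final"]
        observation = TOOLS[action["tool"]](action["input"])
        history.append({**action, "observation": observation})
    return "Stopping: step limit reached."
```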

But okay, so now we've got this working. Now, what are some of the issues? The two main issues are basically planning, and then realizing that you're done. By planning, I mean: when I think about how to do things, subconsciously or consciously, I put together a plan of the order that I'm going to do the steps in, and then I go and do each step. Models struggle with that; they struggle with long-term planning, with coming up with a good long-term plan. And then if you're running it in this loop, at each step you're doing a part of the plan, and maybe it finishes, or maybe it doesn't. So if you just run it in this loop, you're implicitly asking the model to first come up with a plan, then track its progress on the plan and continue along it.

So some of the planning cognitive architectures that we've seen are basically: okay, first let's add an explicit step where we ask the LLM to generate a plan, then let's go step by step through that plan and make sure that we do each step. It's just a way of enforcing that the model generates a long-term plan and actually does each step before moving on, and doesn't just generate a five-step plan, do the first step and then say, okay, I'm done, I finished.

And then, I think, a separate but related thing is this idea of reflection, which is basically: has the model actually done its job well? So I could generate a plan where I'm going to go get this answer. I could go get an answer from the internet. Maybe it's just completely the wrong answer, or I got bad search results or something like that. I shouldn't just return that answer, right? I should think about whether I got the right answer, or whether I need to do something again. And again, if you're just running it in a loop, you're asking the model to do this implicitly. So there have been some cognitive architectures that have emerged to overcome that, which basically add that in as an explicit step: do an action or a series of actions, and then ask the model to explicitly think about whether it's done it correctly or not.
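
A minimal sketch of those two ideas made explicit, planning as its own step and reflection as a final self-check, again using the placeholder `llm`; the prompts and the single REDO pass are illustrative choices, not a prescribed architecture.

```python
def plan_and_execute(task: str) -> str:
    """Generate an explicit plan, execute it step by step, then reflect once."""
    plan = [s for s in llm(f"Write a short numbered plan for: {task}").splitlines() if s.strip()]

    results: list = []
    for step in plan:
        results.append(llm(f"Task: {task}\nDone so far: {results}\nNow do this step: {step}"))

    draft = llm(f"Task: {task}\nStep results: {results}\nWrite the final answer.")

    # Explicit reflection: ask the model to judge its own output and retry once if needed.
    verdict = llm(f"Task: {task}\nAnswer: {draft}\nIs this correct and complete? Reply OK or REDO.")
    if verdict.strip().upper().startswith("REDO"):
        draft = llm(f"Task: {task}\nThis answer was judged insufficient: {draft}\nTry again, fixing the issues.")
    return draft
```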

And so planning and reflection are probably two of the more popular generic cognitive architectures. There are a lot of custom cognitive architectures, but those are all super tied to business logic and things like that. Planning and reflection are generic ones, and I'd expect these to become more and more trained into the models by default. Although I do think there's a very interesting question of how good they will ever get inside the models, but that's probably a separate, longer-term conversation.

UX can influence the effectiveness of the architecture

Pat Grady: Harrison, one of the things that you talked about at AI Ascent was UX, which we normally think about as being on the opposite end of the spectrum from architecture; the architecture is behind the scenes, the UX is the thing out in front. But it seems like we're in this interesting world where the UX can actually influence the effectiveness of the architecture, by allowing you, for example with Devin, to rewind to the point in the planning process where things started to go off track. Can you just say a couple words about UX and its importance in agents, or LLMs more generally, and maybe some interesting things that you've seen there?

Harrison Chase: Yeah, I'm super fascinated by UX, and I think there's a lot of really interesting work to be done here. I think the reason it's so important is that these LLMs still aren't perfect, still aren't reliable, and have a tendency to mess up. And I think that's why chat is such a powerful UX for some of the initial interactions and applications: you can easily see what it's doing, it streams back its response, you can easily correct it by responding to it, you can easily ask follow-up questions. And so chat has clearly emerged as the dominant UX at the moment. I do think there are downsides to chat. It's generally one AI message, one human message. The human is very much in the loop; it's very much a copilot-esque type of thing. And I think the more you can remove the human from the loop, the more it can do for you, the more it can work for you. And I just think that's incredibly powerful and enabling.

However, again, LLMs are not perfect, and they mess up. So how do you balance these two things? Some of the interesting ideas that we've seen, talking about Devin, are this idea of having a really transparent list of everything the agent has done. You should be able to know what the agent has done; that seems like step one. Step two is probably being able to modify what it's doing or what it has done. So if you see that it messed up at step three, you can maybe rewind there, give it some new instructions, or even just edit its decision manually and go from there.

I think there are other interesting UX patterns besides this rewind and edit. One is the idea of an inbox where the agent can reach out to the human as needed. So you've maybe got ten agents running in parallel in the background, and every now and again one maybe needs to ask the human for clarification. And so you've got something like an email inbox where the agent is sending you, "help me, I'm at this point, I need help," and you go and help it at that point.

A similar one is reviewing its work, and I think this is really powerful. We've seen a lot of agents for writing different types of things and doing research, research-style agents; there's a great project, GPT Researcher, which has some really interesting architectures around agents. And that's a great place for this type of review: you can have an agent write a first draft, and then I can review it and leave comments, basically. There are a few different ways that can actually happen. The least involved way is I just leave a bunch of comments in one go, send those off to the agent, and then it goes and fixes all of them. Another UX that's really, really interesting is collaboration at the same time, so like Google Docs, but a human and an agent working at the same time: I leave a comment, the agent fixes that while I'm making another comment, or something like that. I think that's a separate UX that is pretty complicated to think about setting up and getting working, and yeah, I think that's interesting.

There's one other UX thing that I think is interesting to think about, which is basically: how do these agents learn from these interactions? We're talking about a human correcting the agent a bunch or giving feedback. It would be so frustrating if I had to give the same piece of feedback 100 different times, right? That would suck. And so, what's the architecture of the system that enables it to start to learn from that? I think that's really interesting. And, you know, I think all of these are still to be figured out; we're super early on in the game for figuring out a lot of these things. But this is a lot of what we spend time thinking about.

What’s out of scope?

Pat Grady: Actually, that reminds me. You are, I don't know if you know this or not, but you're sort of legendary for the degree to which you are present in the developer community, paying very close attention to what's happening there and the problems that people are having. So there are the problems that LangChain directly addresses and you're building a business to solve. And then I imagine you encounter a bunch of other problems that are just sort of out of scope. So I'm curious, within the world of problems that developers who are trying to build with LLMs, or trying to build in AI, are encountering today, what are some of the interesting problems that you guys are not directly solving, that maybe you would solve if you had another business?

Harrison Chase: Yeah, I mean, I think two of the obvious areas are the model layer and the database layer. We're not building the vector database; I think it's really interesting to think about what the right storage is, but we're not doing that. We're not building a foundation model. And we're also not doing fine-tuning of models; we want to help with the data curation bit, absolutely, but we're not building the infrastructure for fine-tuning. There's Fireworks and other companies like that. I think those are really interesting, and they're probably at the immediate infra layer in terms of what people are running into at this moment.

I do think there's a second question, a second thought process there, which is: if agents do become the future, what other infra problems are going to emerge because of that? And I think it's way too early for us to say which of those we will or won't do, because, to be quite frank, we're not at the place where agents are reliable enough for this whole economy of agents to emerge.

But, you know, identity verification for agents, permissioning for agents, payments for agents. There's a really cool startup for payments for agents; actually, it's the opposite, agents paying humans to do things. And so I think it's really interesting to think about: if agents do become prevalent, what is the tooling and infra that is going to be needed for that? Which I think is a little bit separate from the things that are needed in the developer community for building LLM applications, because LLM applications are here. Agents are starting to get here, but not fully here. And so I think it's just different levels of maturity for these types of companies.

Fine tuning vs prompting?

Sonya Huang: Harrison, you mentioned fine-tuning, and the fact that you guys aren't going to go there. It seems like the two approaches, prompting and cognitive architectures on the one hand and fine-tuning on the other, are almost substitutes for each other. How do you think about the current state of how people should be using prompting versus fine-tuning, and how do you think that plays out?

Harrison Chase: Yeah, I don't think that fine-tuning and cognitive architectures are substitutes for each other. The reason I don't think they are, and actually I think they're complementary in a bunch of senses, is that when you have a more custom cognitive architecture, the scope of what you're asking each agent, or each node, or each piece of the system to do becomes much more limited. And that actually becomes really, really interesting for fine-tuning.

LangSmith and LangGraph?

Sonya Huang: Maybe actually, on that point, can you talk a little bit about LangSmith and LangGraph? Pat had just asked you what problems you're not solving; I'm curious what problems you are solving. And as it relates to all the problems with agents that we were talking about earlier, the things that you're doing to make managing state more manageable, to make agents more controllable, so to speak, how do your products help people with that?

Harrison Chase: Yeah, so maybe even backing up a little bit and talking about LangChain. When it first came out, I think the LangChain open source project really solved and tackled a few problems. One of them is basically standardizing the interfaces for all these different components. So we have tons of integrations with different models, different vector stores, different tools, different databases, things like that. That's always been a big value prop of LangChain and why people use LangChain.

In LangChain, there are also a bunch of higher-level interfaces for easily getting started off the shelf with RAG or SQL Q&A or things like that. And there's also a lower-level runtime for dynamically constructing chains, and by chains I mean, we can call them DAGs as well, directed flows. I think that distinction is important, because when we talk about LangGraph and why LangGraph exists, it's to solve a slightly different orchestration problem: you want these customizable and controllable things that have loops. Both are still in the orchestration space, but I draw this distinction between a chain and these cyclical loops.

With LangGraph, when you start having cycles, there are a lot of other problems that come into play, one of the main ones being this persistence layer, so that you can resume, so that you can have them running in the background in an async manner. And so we're starting to think more and more about deployment of these long-running, cyclical, human-in-the-loop type applications, and we'll start to tackle that more and more.
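
A minimal LangGraph sketch of that shape: a graph with a cycle, plus a checkpointer so state persists per thread and the run can be paused and resumed. It assumes a recent LangGraph release (`StateGraph`, `MemorySaver`); import paths and method names have shifted between versions, and the `write`/`review` nodes are stubs standing in for real model calls.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END
from langgraph.checkpoint.memory import MemorySaver

class DraftState(TypedDict):
    question: str
    draft: str
    revisions: int

def write(state: DraftState) -> dict:
    # stand-in for an LLM call that writes or revises the draft
    return {"draft": f"draft #{state['revisions'] + 1}", "revisions": state["revisions"] + 1}

def review(state: DraftState) -> str:
    # loop back to `write` until some check passes (here, a simple revision cap)
    return "write" if state["revisions"] < 3 else END

builder = StateGraph(DraftState)
builder.add_node("write", write)
builder.set_entry_point("write")
builder.add_conditional_edges("write", review)

# The checkpointer persists state per thread_id, which is what lets a
# long-running, cyclic, human-in-the-loop graph be resumed later.
graph = builder.compile(checkpointer=MemorySaver())
result = graph.invoke(
    {"question": "write a post", "draft": "", "revisions": 0},
    config={"configurable": {"thread_id": "example-thread"}},
)
```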

And then the piece that spans across all of this is LangSmith, which we've been working on basically since the start of the company. That's observability and testing for LLM applications. Basically, from the start, we noticed that if you're putting the LLM at the center of your system, and LLMs are non-deterministic, you've got to have good observability and testing for these types of things in order to have confidence to put them in production. So we started building LangSmith. It works with and without LangChain. There are some other things in there, like a prompt hub so that you can manage prompts, and a human annotation queue to allow for this human review, which I actually think is crucial. In all of this, it's important to ask: what's actually new here? And I think the main thing that's new here is these LLMs, and the main new thing about LLMs is that they're non-deterministic. So observability matters a lot more, and testing is a lot harder; specifically, you probably want a human to review things more often than you'd want them to review software tests. And so a lot of the tooling in LangSmith kind of helps with that.

Where will existing observability tools work for LLMs vs needing a new architecture/approach?

Pat Grady: Actually, on that, Harrison, do you have a heuristic for where existing observability, existing testing, existing fill-in-the-blank will also work for LLMs, versus where LLMs are sufficiently different that you need a new product, a new architecture, a new approach?

Harrison Chase: Yeah, I've thought about this a bunch on the testing side. On the observability side, I feel like it's almost more obvious that something new is needed. And maybe that's because with these multi-step applications, you just need a level of observability to get these insights. A lot of the existing products, like Datadog, have this great kind of monitoring, but for specific traces, I don't think you get the same level of insight that you can easily get with something like LangSmith, for example. And a lot of people spend time looking at specific traces, because they're trying to debug things that went wrong on those traces, because there's all this non-determinism that happens when you use an LLM. So observability has always felt like there's something new to be built there.

Testing is really interesting, and I've thought about this a bunch. I think there are maybe two new, unique things about testing. One is basically this idea of pairwise comparisons. When I run software tests, I don't generally compare the results; it's either pass or fail for the most part. And if I am comparing them, maybe I'm comparing latency spikes or something, but it's not necessarily pairwise comparison of two individual unit tests. But if we look at some of the evals for LLMs, the main eval that's trusted by people is this LMSYS chatbot-arena-style thing, where you literally judge two things side by side. So I think this pairwise thing is pretty important, and pretty distinctive from traditional software testing.

Another component is that, depending on how you set up evals, you might not have a 100% pass rate at any given point in time. So it actually becomes important to track that over time and see that you're improving, or at least not regressing. And I think that's different from software testing, because there you generally have everything passing.
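
Two tiny sketches of those ideas, a pairwise judge and a pass rate logged across runs, reusing the placeholder `llm` from earlier. Neither is LangSmith's API; they just show the shape of the technique.

```python
import datetime
from typing import Dict, List

def pairwise_judge(question: str, answer_a: str, answer_b: str) -> str:
    """Arena-style eval: judge two outputs side by side instead of asserting pass/fail."""
    verdict = llm(
        f"Question: {question}\nAnswer A: {answer_a}\nAnswer B: {answer_b}\n"
        "Which answer is better? Reply 'A', 'B', or 'TIE'."
    )
    return verdict.strip().upper()

def log_pass_rate(results: List[bool], run_name: str, history: Dict[str, dict]) -> float:
    """The suite rarely hits 100%; what matters is the rate not regressing across runs."""
    rate = sum(results) / len(results)
    history[run_name] = {"rate": rate, "at": datetime.datetime.now().isoformat()}
    return rate
```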

And then the third bit is just the human-in-the-loop component. I think you still want humans to be looking at the results; "want" is maybe the wrong word, because there are a lot of downsides to it, like it takes a lot of human time to look at these things, but those reviews are generally more reliable than having some automated system. If you compare that to software testing, software can test whether two equals two just as well as I can tell that two equals two by looking at it. And so figuring out how to put humans in the loop for this testing process is also really interesting and unique and new, I think.

Lightning round

Pat Grady: I have a couple of very general questions for you.

Harrison Chase: Cool, I love general questions.

Pat Grady: Who do you admire most in the world of AI?

Harrison Chase: That's a good question. I mean, I think what OpenAI has done over the past year and a half is incredibly impressive. So I think Sam, but also everyone there; across the board, I have a lot of admiration for the way they do things. I think Logan, when he was there, did a fantastic job of bringing these concepts to folks. Sam obviously deserves a ton of credit for a lot of the things that have happened there. Lesser known, but David Dohan is a researcher that I think is absolutely incredible. He did some early model cascades papers, and I chatted with him super early on in LangChain, and he's been incredibly influential in the way that I think about things. So I have a lot of admiration for the way that he does things. Separately, and I'm touching on all different possible answers for this, but I think Mark Zuckerberg and Facebook are crushing it with Llama and a lot of the open source. And I also think, as a CEO and as a leader, the way that he and the company have embraced that has been incredibly impressive to watch. So I have a lot of admiration for that.

Pat Grady: Speaking of which, is there a CEO or a leader, who you try to model yourself after? Or who you’ve learned a lot about your own leadership style from?

Harrison Chase: It's a good question. I definitely think of myself as more of a product-centric kind of CEO, and so I think Zuckerberg has been interesting to watch there. Brian Chesky, I saw him talk, or listened to him talk, at the Sequoia Base Camp last year, and really admired the way that he thought about product and thought about company building. So Brian's usually my go-to answer for that. But I can't say I've gotten incredibly into the depths of everything that he's done.

Pat Grady: If you have one piece of advice for current or aspiring founders trying to build in AI, what would it be?

Harrison Chase: Just build. Just try building. It's so early on, there's so much to be built. Yeah, you know, GPT-5 is going to come out, and it will probably make some of the things you did irrelevant, but you're going to learn so much along the way. And this is, I strongly, strongly believe, a transformative technology, and so the more that you learn about it, the better.

Pat Grady: One quick anecdote on that, just because I got a kick out of that answer. I remember at our first AI Ascent in early 2023, when we were just starting to get to know you better, you were sitting there pushing code the entire day. People were up on stage speaking, and you were listening, but you were sitting there pushing code the entire day. So when the advice is "just build," you're clearly somebody who takes your own advice.

Harrison Chase: Well, that was the day OpenAI released plugins or something, so there was a lot of scrambling to be done. I don't think I did that at this year's Sequoia Ascent, so I'm sorry to disappoint you in that capacity.

Sonya Huang: Thank you for joining us. We really appreciate it.
