
Why IDEs Won’t Die in the Age of AI Coding: Zed Founder Nathan Sobo

Nathan Sobo has spent nearly two decades pursuing one goal: building an IDE that combines the power of full-featured tools like JetBrains with the responsiveness of lightweight editors like Vim. After hitting the performance ceiling with web-based Atom, he founded Zed and rebuilt from scratch in Rust with GPU-accelerated rendering. Now with 170,000 active developers, Zed is positioned at the intersection of human and AI collaboration. Nathan argues that, despite the rise of terminal-based AI coding tools, visual interfaces for code aren’t going anywhere, and that source code is a language designed for humans to read, not just machines to execute.


Summary

Nathan believes the IDE is more relevant than ever in the age of AI:

The IDE must evolve into a collaborative, metadata-rich environment: Nathan envisions transforming code from a static artifact into a “metadata backbone” where conversations, edits, and context are permanently anchored to specific locations. Instead of Git’s snapshot-based model, he’s building fine-grained tracking that records every edit, allowing developers to ask “what conversations led to this code?” and creating a richer collaboration layer between humans and AI agents that preserves the full context of how code evolved.

Code remains the best interface for understanding code: While terminal-based “vibe coding” tools are popular, Nathan believes developers will always need to look at source code directly, especially when reviewing AI-generated changes. As Harold Abelson said, “programs should be written for people to read and only incidentally for machines to execute.” When an agent makes edits, seeing ten-line excerpts in a terminal isn’t enough—you need the full visual context that only an IDE can provide.

LLMs excel at knowledge extraction, not novel problem-solving: AI is highly effective when generating code that’s “in distribution”—borrowing from well-known patterns like Tailwind CSS or API bindings. Nathan used LLMs to generate Rust procedural macros and Cloudflare API wrappers by feeding in documentation, treating the model as a “knowledge extruder” that shapes existing knowledge into exactly what he needs. However, when working on novel systems like Zed’s Delta DB, where the challenge is figuring out what to build rather than how to write it, LLMs provide little help with the actual coding.

Democratize agents through protocols, own the interface: Rather than build a single proprietary agent, Zed created the Agent Client Protocol (ACP), analogous to the Language Server Protocol, allowing any agent to connect to any IDE. This lets specialist agents compete and evolve while Zed focuses on delivering the best human interface. JetBrains has already adopted ACP, and multiple agent developers are integrating, allowing Zed to align with rather than compete against the broader agent ecosystem.

Make conversations editable, not just readable: The next evolution of IDEs isn’t just chat panels bolted onto traditional editors. Nathan is reimagining the conversation itself as an interactive document where code snippets are live-editable, context can be expanded inline, and developers can navigate with vim bindings. Instead of context-switching between a chat window and file tabs, the conversation becomes a new kind of editor that pulls code toward you and lets you manipulate it directly.

Transcript

Introduction

Nathan Sobo: It just doesn’t make sense to me that human beings would stop interacting at all with source code until we get to, like, AGI, I guess, where human beings aren’t going to be doing a lot of different things. But until then, I think we need to. We need to look at code. And so then the question is: what’s the best user interface for doing that?

Sonya Huang: Today we’re talking with Nathan Sobo, founder of Zed, who spent nearly two decades building IDEs, first building Atom at GitHub and now Zed. Zed is a modern IDE written in Rust, used by more than 150,000 active developers, and also creates and maintains ACP, the Agent Client Protocol, which connects different coding agents to different coding surfaces, including Zed.

Nathan shares a contrarian take. Despite all the hype around chat- and terminal-based AI coding tools, he argues that source code itself is a language meant for humans to read, and that we’ll always need visual interfaces to understand what AI agents are doing. We dig into whether LLMs can actually code, what the richer collaboration layer between humans and AI might look like for coding, and Nathan’s vision for turning code into a metadata backbone where conversations, edits and context all hang together. Enjoy the show.

Main conversation

Sonya Huang: Nathan, thank you for joining us here today.

Nathan Sobo: Yeah, thanks for having me on.

Sonya Huang: I want to start with a hard-hitting question. There’s a lot of internet talk, chatter, about: is this the death of the IDE? So if you roll back two years, everyone was coding primarily in the IDE.

Nathan Sobo: Right.

Sonya Huang: And now it seems like as people move towards the terminal and more of these conversational experiences, there’s a question in the air: is this the death of the IDE?

Nathan Sobo: Yeah. And I’ve actually asked myself that question at different times and in different states of anxiety: is it the death of the IDE? I’ve spent my entire life grinding toward building the ultimate tool for this, and is it not going to matter? Are these people right? But after mulling it over seriously—because I definitely don’t want to be gold-plating a buggy whip—I think that those takes are not realistic.

It is mind blowing that you can sit down at a terminal and speak English with a script, talking to an LLM, and make real progress on a code base. And there are millions of people doing that apparently—including me on occasion. But the problem I ran into whenever I used—you know, Claude Code was the thing I think I spent the most time with—is when it wants to show you what it just did and you’re reviewing it, you sort of view it through this ten-line-or-so tiny little excerpt in the terminal. And as soon as you want to see more, what do you do?

And so I think if you believe that the IDE is going to die, then I think that requires you to believe that human beings are not going to need to interact with source code anymore. Like, I don’t need to take a look and see the context of this edit that the agent just made and all the different things that it’s connected to, and understand that and load that into my brain. And I just fundamentally think that source code is a language, just like natural language is a language.

So we have this revolutionary new tool for processing natural language that we’ve never had. But it’s not like source code is binary, that we feed to a processor, right? Like, it is intended for human consumption. Like, one of my heroes that I learned a lot from, Harold Abelson, he’s a computer science professor, I think he was at MIT. He has this great quote that I’ve always loved, which is like, “Programs should be written for people to read and only incidentally for machines to execute.”

Pat Grady: Hmm.

Nathan Sobo: Which is an extreme stance, because it’s like why are you writing the program if you don’t want a machine to execute it? There are, like, a lot of people that program in Haskell and stuff that seem to not actually care about what gets done with all of this programmatic machination. I tend to, but I also see a lot of wisdom in that, and that fundamentally programs are about us expressing some abstract process in a very precise way, and there is no better language for talking about a lot of different sort of Turing complete programmatic concepts than source code itself.

So it just doesn’t make sense to me that human beings would stop interacting at all with source code until we get to, like, AGI, I guess, where human beings aren’t going to be doing a lot of different things. But until then, I think we need to. We need to look at code. And so then the question is: what’s the best user interface for doing that?

Sonya Huang: And is the best user interface a GUI then?

Nathan Sobo: I think so. There are a lot of different ways of representing an interface to code. Does it need to be graphical? I mean, there are a lot of people that are using Vim, for example. Vim’s not a graphical user interface, but is an interface that’s optimized around presenting source code, navigating source code, and yes, sometimes editing that source code manually. Because I think in the same way that the best way to understand software is often looking at the software, looking at the best human-engineered synthetic language we can derive for expressing this abstract process, sometimes the best way of expressing it is just to write it directly.

And I’m not here to say that I’m particularly a big fan of grinding through repetitive work, or having something that could be written by an LLM. I have no desire to write that necessarily, but I do think that there are oftentimes still in software where the clearest way to articulate something is just to write the code, define a data structure. I could write a sentence to an LLM describing “I want a struct with the following four fields,” or even zoom out and describe that on a more abstract level. But if I know what I want to express, sometimes source code really is the most efficient way to do it. And in that world, a tool designed for navigating and editing source code still seems like a really relevant tool. And I have a feeling that even people that are heavily vibe coding with a tool in the terminal are probably running an editor alongside that tool to inspect what’s going on.

Sonya Huang: You mentioned at the start that you’ve been working on IDEs for your whole career. You are a legend in the IDE space. Just maybe for our listeners, just say a word on your background.

Nathan Sobo: When I graduated from college, I decided I wanted to build my own editor. That was 2006, the year after I graduated college. And I’ve been working my whole career to build the editor I envisioned, which was always sort of—at the time, TextMate was a really popular tool. I learned about TextMate from DHH demoing Rails or whatever. It was just lightweight, simple, approachable, fast. I’d used Emacs, I’d used Eclipse, I’d used the JetBrains products, which are still really powerful. And all of them brought something to the table in terms of either extensibility or responsiveness or feature richness, but none of them synthesized all those things into one package.

And so it was, like, 2006 when I decided I want to build an editor that has the same power as, or more power than, the most capable IDEs that take 10 years to start up and feel kind of sluggish under my fingertips, but then has the same kind of responsiveness as a TextMate or a Vim, but is also really extensible, without having to be extensible in this arcane Vim script language where I’m having to have this, like, pet that I’m feeding every weekend or whatever in my spare time to make sure that my Vim configuration doesn’t break. I wrote Atom in Vim. So anyway …

Sonya Huang: Talk about Atom, actually, and then lessons from that, and then why you started Zed.

Nathan Sobo: Yeah. The first attempt at delivering this IDE of my dreams was Atom. And I joined GitHub as one of the first two engineers to work on that project. And we wanted it to be extremely extensible, and so we decided, why don’t we build it on web technology? So in the process of creating Atom, we built the shell around Atom, which we ended up naming Electron. And Electron went on to be the foundation …

Sonya Huang: That’s clever. I just caught that reference.

Nathan Sobo: Yeah. That was Chris Wanstrath’s idea, actually, but not mine. Yeah, and so what we did is we sort of married Node.js, which was getting really popular at the time, with Chrome, and then delivered this framework that kind of let you build a webpage that looked like a desktop app.

Pat Grady: Hmm.

Nathan Sobo: And it went on to be really successful. Atom had its day in the sun. Then Microsoft kind of copied our idea, took Electron, took code they already had that was running on the web, and moved it over, and then the rest was history. I mean, VS Code went on to take over the industry.

But at some point I had kind of gotten to the point where I felt like Atom had run its course. I had learned some hard lessons there. Some of them were just about, like, how do you design a data structure to efficiently represent large quantities of text and edit it? Which is a language-neutral lesson, honestly, to some extent. And some of the lessons were about not inviting people to bring code onto the main thread and destroy the performance of the application by just running random extension code. We made it very extensible. We were very open, which sort of made it very popular quickly. And then we drowned under the promises we had made that were kind of premature.

But one of the things was I was just sick of web technology. Like, I remember opening up the performance profile that was built into Electron—Chrome’s dev tools, basically—and just looking at something that I was trying to optimize, and I’m just like, I need to get inside of whatever these little lines are in this flame graph and figure out what’s going on inside there. And I just hit the ceiling, I guess, on how fast I could make this thing.

And so it was yeah, I think, 2017 that I decided we need to start over, that a web browser is actually not a suitable foundation for the tool that I really wanted to build. Which had a lot to do with performance, which sounds like no big deal. I mean, but performance is not a feature that you can really go back and add later. If you’ve chosen an architecture, you’re going to accept the performance capabilities of that architecture. And the web wasn’t it for me.

Sonya Huang: You built Zed originally to make it easier for humans to pair program with other humans.

Nathan Sobo: Right.

Sonya Huang: That ended up being very convenient as AI agents came about and humans started to need to collaborate with AI agents. Talk about that dynamic a bit.

Nathan Sobo: I think the whole industry has this idea of how we should all be collaborating together. And I was actually at GitHub when the current way that we collaborate became popular, and it’s all about being asynchronous: you kind of go off in your corner and do a bunch of work, take a snapshot of that, upload it, and then in a web browser someone writes comments on your snapshot, and then maybe an hour or maybe a day later you reply. And it’s this very, like, email-oriented, asynchronous experience, which when you’re all on the same page, or maybe when you’re working on Linux, which is what Git was designed for originally, and there’s people all over the world working on these very disparate things, maybe that’s an appropriate modality.

But I always believed that the best way to collaborate on software was including a lot more times where we’re in the code together, writing code together, or talking through code together, and getting on the same page in a format where we’re actually talking to each other and can interrupt each other, and also relate to each other as human beings in a way that I just don’t see happening on top of the Git-based flow.

We use Git all the time, and we don’t do as much code review as a lot of teams because what we prefer to do is just talk to each other in real time in the code. But there just wasn’t a good tool that enabled that. You could use screen sharing, but the problem with screen sharing is one person is very much in the passenger seat, because you get round-trip keystrokes. And so yeah, the two big problems—not knowing that AI was coming, right? The two big problems I wanted to solve at the outset were fundamentally better performance. You know, when you type a key, I want pixels responding to you on the next sync of the monitor, so there’s zero perceptible lag. We’re pretty close. I can’t say we’re 100 percent perfect, but we’re a hell of a lot closer than you could ever get in a web browser.

Pat Grady: And can you say a word about how you’ve achieved that?

Nathan Sobo: Yes, I’m on a digression here, but …

Pat Grady: Go ahead and say your second thing, and then say a word about how you achieve performance.

Nathan Sobo: But then the other big pillar other than performance at the outset was changing the way that developers collaborate on software. And so to do that, I really feel like we need to bring the presence of your teammates into the authoring environment itself in much the way that Figma did for designers. Now designers didn’t have a lot of good options. They didn’t have anything as good as Git, for example, as a compelling alternative. But I still think that vision of you’re in the tool, looking at the actual thing you’re creating and there are other people there with you, was something that even before I saw Figma, I wanted to bring to the experience of creating software. And so that’s why it felt appropriate to own the UI on this deep level.

Now onto the rest of your question about what are the implications of that for AI? The vision with Zed was always, I want to link conversations to the code in the authoring environment where the code’s being written. And so I actually think that conversations in the code, that used to be kind of a weird idea, right? Because, oh, why would you need to have a conversation in the code? You write it by yourself and you push a snapshot, and then we’ll have a conversation on a website about the code you wrote.

But it’s starting to feel a lot more relevant in a world where you’re having this conversation all the time with this, like, spirit being or whatnot, right?

Sonya Huang: Ghost?

Nathan Sobo: [laughs] Yeah. All of us, even me included as a big fan of this more synchronous mode of collaboration, are having a lot more conversations about code in the code. And that’s where I see sort of this snapshot-oriented paradigm really breaking down. Like, when I’m interacting with an agent, and it goes off and makes some changes and I want to give it feedback on those changes, ideally, I want to kind of permanently store the feedback that I gave on those changes and have a record of that. With what tools? There’s no sort of Git for that, if that makes sense, right? I’m not going to commit on every token the thing emits and then, like, do a pull request review on that.

And so to be real, Zed is very much a work in progress. And I think to earn the right to deliver this experience, we first just had to build a capable editor that someone would just want to use to create software on their own. I think we’ve made a ton of progress there, and are now starting more earnestly on this phase two of this fine-grained tracking mechanism that’s sort of the equivalent—it’s not exactly how it works, but it’s kind of the equivalent of having a commit on every keystroke, or a commit on every edit that the agent streams in, and then being able to anchor interaction or conversation directly to that.
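Delta DB’s internals haven’t been published, so the following is only a rough sketch of the shape such a fine-grained tracking layer might take under the “commit on every edit” analogy Nathan uses: an append-only edit log plus stable anchors that conversations can hang off. Every name here is hypothetical, not Zed’s actual API.

```rust
// Hypothetical sketch of a fine-grained edit log with stable
// anchors; illustrative only, not Zed's actual Delta DB code.

/// A globally unique identifier for a single edit (the
/// "commit per edit" in Nathan's analogy).
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct EditId {
    replica: u32, // which collaborator or agent produced the edit
    seq: u64,     // per-replica sequence number
}

/// One fine-grained edit, recorded as it streams in.
struct Edit {
    id: EditId,
    buffer: u64,                   // which file/buffer was edited
    range: std::ops::Range<usize>, // byte range replaced
    new_text: String,
}

/// A stable anchor: a position expressed relative to a specific
/// edit, so it survives later edits, unlike a raw byte offset.
struct Anchor {
    edit: EditId,
    offset: usize, // offset within that edit's inserted text
}

/// Conversation turns hang off anchors instead of snapshots,
/// enabling "what conversations led to this code?"
struct ConversationNote {
    anchor: Anchor,
    author: String, // a human or an agent
    text: String,
}

struct DeltaLog {
    edits: Vec<Edit>,
    notes: Vec<ConversationNote>,
}

impl DeltaLog {
    /// Record an edit as it occurs (the "commit on every edit").
    fn record(&mut self, edit: Edit) {
        self.edits.push(edit);
    }

    /// Notes anchored at or after a given sequence number
    /// (simplified: ignores replicas and merge ordering).
    fn notes_since(&self, id: EditId) -> impl Iterator<Item = &ConversationNote> {
        self.notes.iter().filter(move |n| n.anchor.edit.seq >= id.seq)
    }
}
```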

So that’s something that—the tech we’re building I think is something maybe we could have built in isolation, but then the problem is well, what experience do you deliver on top of that? And I always thought the best possible experience would be this vertically integrated—we designed the UI and all the infrastructure top to bottom to deliver this immersive ability to interact directly in the code with another being.

Sonya Huang: And so you’ve made the choice to make the IDE almost this Switzerland for humans to collaborate with different AI agents.

Nathan Sobo: Right.

Sonya Huang: Talk about the role that agentic coding protocol—is that what it is?

Nathan Sobo: Yeah.

Sonya Huang: ACP plays in that vision?

Nathan Sobo: I really view our job as to provide the ultimate interface between the human being, the source code, other human beings, or other artificial human beings, basically. And we built a first-party agent earlier this year, but as we’re doing that—and it’s quite challenging, tuning the prompts—none of it feels, like, challenging in the same ways that building an IDE is in terms of algorithmic complexity and getting the data structures just right, and making sure that things are performant. The actual Turing-complete software parts of that are fairly easy. The hard parts are the AI parts. And that’s still something that I think we’re learning as a team. We come from a very different perspective.

Meanwhile, I see all these teams that all seem to be quite well funded from some of the big AI labs like Anthropic and Google. Google with Gemini CLI—they were the first people that we integrated with. Claude Code. Everyone’s building an agent, it seems, and all these agents are rendering what I consider to be a fairly impoverished kind of terminal-based experience that would need to be supplemented with an editor.

So the thought is okay, we’ve got a great editor, and all these people are trying to solve this problem. Like, what needs to happen here is the same thing that the Language Server Protocol did. So one great thing that Microsoft did with VS Code is they took all the intelligence of the IDE that was typically bundled in, JetBrains-style, where the IDE comes preconfigured knowing everything, and they moved it out to the community. So PHP has a language server now, and there’s the TypeScript language server, et cetera.

We wanted to do the same thing with agents. The thought being there’s probably going to be different kinds of agents experimenting in different domains. Maybe there’s certain agents that are optimized for particular problems, there are agents competing with each other, so sometimes one will be the best only to be leapfrogged by another. Externalizing all that and trying to democratize that and say, “Hey, whatever agent you want to use, we want to deliver a great UI for you to interact with that agent and your software.” That was the thinking behind it.
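Mechanically, ACP follows the LSP playbook: the editor launches the agent as a subprocess and the two sides exchange JSON-RPC messages over its stdio. The sketch below is only illustrative—method and field names are simplified rather than taken from the normative spec—and assumes the serde_json crate.

```rust
// Rough illustration of an editor-to-agent exchange under an
// ACP-style JSON-RPC protocol. Method/field names are simplified,
// not the normative ACP spec. Requires the `serde_json` crate.
use serde_json::json;

fn main() {
    // The editor spawns the agent as a subprocess and speaks
    // JSON-RPC over stdin/stdout, much as an LSP client would.
    let initialize = json!({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": { "protocolVersion": 1 }
    });

    // A user prompt forwarded to whichever agent is plugged in;
    // any agent that speaks the protocol can answer it.
    let prompt = json!({
        "jsonrpc": "2.0",
        "id": 2,
        "method": "session/prompt",
        "params": {
            "sessionId": "sess-1",
            "prompt": [{ "type": "text", "text": "Add a unit test for parse()" }]
        }
    });

    println!("{initialize}\n{prompt}");
}
```

Because the editor only depends on the protocol, not on any one agent, specialist agents can compete and be swapped out without changes to the IDE—the same decoupling LSP gave language tooling.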

And so far it’s working out better than I might have expected, actually. Like, I didn’t really know how many people are going to resonate with this idea, but JetBrains got on board. And so that I think is really exciting. They’re theoretically a competitor, but it’s nice to have someone on the other side. And now there are a bunch of different agent developers that are getting on board on the other side. We’re going to continue working on our own agent, but it’s nice to be aligned with all that effort instead of competing with it.

Sonya Huang: Do you vibe code?

Nathan Sobo: Sometimes. [laughs] But yeah, I mean, I have. So one successful case of vibe coding is we had some very old, like, server-side infrastructure that needed to be replaced. And so I decided to move all of our server-side infrastructure to Cloudflare, and I vibe coded a simulator for the Cloudflare API. And so basically we have a trait in Rust that abstracts away everything Cloudflare can do. And I had an afternoon, basically, and an idea. And that was an amazing use case of agentic coding of just like—pop pop pop pop pop.

Pat Grady: What did you use for it?

Nathan Sobo: I just described the idea to the agent. I think I fed it some API docs from Cloudflare’s JavaScript APIs. And I said, “I want to build Rust bindings to these APIs, but then I want to build an abstraction that sort of lets me then plug in a simulator for these APIs as well.” And I knew exactly what I wanted. I had a vision, and I didn’t have a ton of time to express that vision. So in the past, maybe I would have either done it myself—which I definitely didn’t have time for, there were other things going on at the time—or written some amorphous document, or tried to explain to engineers on my team what I had in mind. But what this vibe-coding session enabled me to do was get somewhere in between, if that makes sense.
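The pattern he describes is a classic trait-based seam: one trait for the Cloudflare surface the app needs, with a production implementation and an in-memory simulator behind it. A minimal sketch, using a KV-style store as the example surface—all names hypothetical, not Zed’s actual code:

```rust
// Hypothetical sketch of a trait abstracting a Cloudflare-style
// surface, with a simulator implementation for local testing.

/// The subset of cloud behavior the app depends on, e.g. a
/// KV-style store.
trait CloudStore {
    fn put(&mut self, key: &str, value: Vec<u8>);
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

/// The production implementation would call the real Cloudflare
/// API (bindings generated from its JavaScript API docs).
struct CloudflareStore { /* HTTP client, credentials, ... */ }

/// Simulator: same trait, backed by a HashMap, so server code can
/// be exercised without touching Cloudflare at all.
#[derive(Default)]
struct SimulatedStore {
    data: std::collections::HashMap<String, Vec<u8>>,
}

impl CloudStore for SimulatedStore {
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.data.insert(key.to_string(), value);
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }
}

// Application code is generic over the trait, so it runs
// unchanged against either backend.
fn store_session<S: CloudStore>(store: &mut S, id: &str, body: &[u8]) {
    store.put(id, body.to_vec());
}
```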

So I apologetically handed this pile of generated code to the guys working on cloud and was like, “I generated this. This is directionally the way that I want to go, don’t judge me too hard if you find some weird stuff in here that doesn’t quite make sense. Like, it’s generated. Just so you know.” That was a huge success, though. I mean, they were able to run forward with it. Yeah, I think I maybe avoided the, like—I never want to be like, you know, the boss, the vibe-coding boss or whatever, you know?

Sonya Huang: Why not?

Nathan Sobo: [laughs] What I mean is I want to be the vibe-coding boss that’s doing it well, but I don’t want to be the vibe-coding boss who’s sort of clueless and thinks that they’ve solved 95 percent of the problem when really they’ve solved five percent of the problem and they’re just deluding themselves. I want to do it in an aware way and make sure that I’m actually moving the ball forward and not being annoying, handing off a big mess of slop to somebody and being like, “Here you go, clean it up, my great idea.”

Sonya Huang: Actually, speaking of—so your user base is on the order of 100,000 active developers in Zed.

Nathan Sobo: It’s 170, I think. Well, anyway, depends how you measure.

Sonya Huang: Okay, 170,000 active developers, then. And they tend to be pretty hardcore engineers, like, elites who have been coding for a long time. What is your user base’s overall perspective on AI, and are they embracing it? Are they using—I just listened to the Karpathy interview where he uses autocomplete but doesn’t really use the agentic loop as much. What are your users doing in terms of adopting AI?

Nathan Sobo: Based on the metrics we have, which are not perfect because it’s an open source IDE, it makes it very easy for people to opt out, about half of the people using Zed are using our edit prediction capability, which is, you know, I’m coding along and it suggests the next thing. So very much programmer in the driver’s seat. And about a quarter of our active users are using agentic editing in some shape or form.

You know, we had some haters in the crowd, and I think as we began to embrace AI, there were definitely people who let us know what they thought about that, you know, that we weren’t—whatever. But I don’t care about that. It’s like, hey, this is happening. Something’s happening here. Like, we’re not going to just not go toward that. Like, I’m not like that. And so if they signed up for Zed for being a Luddite or head in the sand or, like—or I don’t know, just clinging to tradition with all our might, like, we’re not on board. Like, I want to move toward the future.

We attract a more professional—yeah, like you’re saying, hardcore audience. Because I think, at least at the moment—and again, the full vision isn’t built yet—one of the things we have to offer is this extreme performance: with every passing day, the same features as the other tools, but just much better performance.

And so I think as a developer becomes more seasoned, they start to care about the tactile experience of the tool they’re using under their hands. You use something 40 hours a week, it starts to bother you when it’s dropping frames or just not being able to keep up with your hands, basically.

Pat Grady: Yeah.

Nathan Sobo: So just the kind of people that tend to gravitate towards Zed now are the kind of people that just care about a really well-crafted, fast tool. My daughter goes to school with a girl whose mom is a dentist, and she’s vibe coding some software for her dental practice right now, right? Does she know that she needs a fast editor that feels good under her fingers? I don’t know. I’d like her to. It’s kind of my job to. I think we have bigger plans, and there are going to be things that speak to that wider audience over time, but for now the people that really care tend to be people that are quite experienced.

Sonya Huang: I remember one of your engineers, Conrad, wrote this article, and you texted it to me almost, like, sheepishly or apologetically. It was number one on Hacker News: “Why LLMs Can’t Build Software.” What is your kind of mental model for what LLMs are good at in software, versus where they’re lacking? And how quickly do you think that’s changing?

Nathan Sobo: I’m less convinced than Conrad that LLMs can’t build software. I think his mental model of the things they’re incapable of is maybe better than mine, or he’s just more confident than I am in his take. I’m less convinced of what they’re not ever going to be capable of doing, but I can tell you what they’ve worked well at in my own experience, and where things get frustrating or go wrong for me.

I mentioned generating the Cloudflare bindings earlier. Another, earlier experience that I had, pre-agentic, pre-agent anything, just with GPT-4: I generated a new backend for the graphics framework we had to write. So you were asking earlier about how we achieve our performance. One of the things we do is the entire Zed application is organized around delivering data to shaders that run on the GPU and render the entire UI in much the same way that a video game would render frames of its experience. We’re rendering a 2D UI with some of the same techniques that video games use to render their 3D worlds.

Pat Grady: Interesting.

Nathan Sobo: So anyway, I was rewriting the graphics backend, and though I did write the original graphics backend, I was rewriting it. The old one was working well enough, but it wasn’t in the shape I quite wanted. And so I was able to just generate a rendering pipeline that configured the GPU and all the different stages, and all these things that I would have in the past been searching around on Stack Overflow or digging around in obscure documentation to do, I was able to just go—voop! Because all that knowledge, I didn’t have it, but it’s definitely out there in the distribution of these models.
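To make the video-game analogy concrete, here is a simplified sketch of that approach: UI elements are lowered to flat, GPU-friendly primitive structs that get uploaded to an instance buffer and rasterized by shaders in one draw call. The types are illustrative, not GPUI’s actual API.

```rust
// Simplified sketch of feeding UI primitives to GPU shaders, in
// the spirit of Zed's renderer but not its actual code.

/// One rectangle ("quad") instance, laid out to match what a GPU
/// instance buffer expects: plain floats, no pointers.
#[repr(C)]
#[derive(Clone, Copy)]
struct Quad {
    origin: [f32; 2],     // top-left position in pixels
    size: [f32; 2],       // width, height
    background: [f32; 4], // RGBA color
    corner_radius: f32,
}

/// A frame is just the flat list of primitives to upload; the
/// vertex/fragment shaders do the per-pixel work.
struct Frame {
    quads: Vec<Quad>,
}

impl Frame {
    fn push_quad(&mut self, q: Quad) {
        self.quads.push(q);
    }

    /// Upload point: a real renderer would copy `quads` into a
    /// Metal/Vulkan instance buffer here and issue one instanced
    /// draw call covering all of them.
    fn byte_len(&self) -> usize {
        self.quads.len() * std::mem::size_of::<Quad>()
    }
}
```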

Another thing I did during that same rewrite, or fundamental change, of our UI framework was I wrote a bunch of procedural macros. And macros in Rust are really powerful: you can just put a little annotation on top of something, and it’ll generate all this code behind the scenes before the compiler runs. But I never really learned to write Rust macros, definitely not these procedural macros that I needed to write in order to basically pull all the ideas from Tailwind CSS into our Rust graphics framework, which really delighted me, this idea that, like, we’re pulling these pop-culture kind of Tailwind web ideas into the systems programming language and combining these two things together.

But Tailwind’s definitely in the distribution of the LLM. It knows Rust well enough, and it knew how to generate these procedural macros that I didn’t know how to generate. And so faster than—I never would have even attempted to do what I did of, like, okay, I’m going to write some macros that generate a method for every single Tailwind class, but by kind of feeding some docs into it here, I view it as like a knowledge extruder of, like, there’s all this sort of generalized knowledge out there. And sure, I could go read about it and learn about it, but no, I want it to be, like, squeezed out in exactly the shape that I want it.

And so that was like a perfect use case for it, I think, of it was all pretty well known, standard stuff, but I just didn’t have it in the shape that I needed. So that was like—I think what they’re really good at is it’s like copy and paste, but way, way beyond. But you’re still sort of borrowing from the global knowledge set.
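Nathan’s real implementation used procedural macros; as a compact illustration of the same “one method per Tailwind class” idea, here is a declarative macro_rules! sketch instead. All names are illustrative, not GPUI’s actual code.

```rust
// Declarative-macro sketch of generating Tailwind-style builder
// methods; Nathan's actual version used procedural macros.

#[derive(Default, Debug)]
struct Style {
    padding: f32,
    flex: bool,
}

macro_rules! tailwind_methods {
    ($( $name:ident => $field:ident = $value:expr ),* $(,)?) => {
        impl Style {
            $(
                /// Generated builder method, e.g. `.p_4()` or `.flex()`.
                fn $name(mut self) -> Self {
                    self.$field = $value;
                    self
                }
            )*
        }
    };
}

// One line per "class"; the macro expands into a method for each,
// the way the proc macros generated a method per Tailwind class.
tailwind_methods! {
    p_2 => padding = 8.0,
    p_4 => padding = 16.0,
    flex => flex = true,
}

fn main() {
    let style = Style::default().flex().p_4();
    println!("{style:?}"); // Style { padding: 16.0, flex: true }
}
```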

Sonya Huang: Yeah.

Nathan Sobo: Where I’ve gotten more frustrated is, like, right now we’re working on this Delta DB system, which is trying to do fine-grained tracking and real-time syncing of individual edits as they occur, layered on top of Git. It’s fun because it’s been a while since I had some of these moments of just sitting in front of a problem and struggling to load all the different constraints that need to be solved simultaneously in my head and hold them all in there long enough. LLMs have not been that helpful, at least not in writing the code, because the code is not the constraint when we’re solving this. Like, yes, we’re writing code, but much more important is the thinking going on behind what is actually not that many total lines of code; they’re just the right lines.

And I’m proud of how few lines there are. It’s like people get excited about how many lines of code they’re generating, and for different kinds of software, maybe that makes sense. But I’m still using LLMs, I’m just not using them to write the code. I’m using them to go explore an idea or even have it generate some code that I never intend to run, to just see how it feels, quickly. So it’s not that I don’t use them as an inherent part of my process, it’s just like depending on what I’m doing, I’m not always so sure it’s going to make me faster writing code.

Sonya Huang: So just to read that back to you, when the LLMs are doing something that’s kind of in distribution of their training data, and where you have a good abstract model of almost the pseudocode of what you want it to do, the models are actually very good at implementing the code.

Nathan Sobo: Yeah.

Sonya Huang: Once you sort of go out of distribution, or when writing the code is not actually the task at hand, it’s actually the thinking of what you want the code to accomplish, then LLMs are just not there yet.

Nathan Sobo: I think that’s right. And another thing I’ll say is I think I’m excited for LLMs to get faster. I don’t know that Haiku, for example, is intended to be used just directly. Like, we have some work to do as tool builders to figure out when to switch to the faster model or the smarter model, but in general, I think the faster they can go whilst not being totally silly, unintelligent, that’s the trick.

But I’m excited for being able to kind of conjure a diff on demand, because I think some of it is just like, if I’m asking the agent to do this thing, I have a couple different choices. I can kind of sit there and watch it, which can sometimes be helpful and sometimes is important because I’m like, “Stop! Whatever you’re doing right now, like, writing tests I didn’t ask you to write that you make pass while the tests I want to make pass are still not passing.” I can go make a coffee or go take care of some other task, or maybe context switch to some company-level concern or whatever. I could go try to compete with it and let it do its thing. But all of them are a little annoying.

Again, I’m someone who’s obsessed with getting fast feedback. I literally engineered the tool to give me keystroke feedback on the next frame. And so it’s that waiting that I think has been frustrating for me. But I think if I could get something almost correct in a tenth of the time, okay, maybe we’re talking about a shift again. That’s why the jury is very much still out for me around where it’s all going.

Sonya Huang: What do you think is the vision for how the IDE ultimately will evolve? We’re going to have lots of these agents, they’re going to become increasingly capable. How does the GUI evolve?

Nathan Sobo: So there’s a couple different pieces of it. I mean, for me, the deeper piece is this notion of treating the IDE as this fundamentally collaborative environment. And honestly, what’s deployed in Zed today is still pretty alpha quality on that front. But we’re taking all those lessons and have very good progress on a new way of representing collaboration. And the IDE is going to be the place where that’s all surfaced. So an inherently collaborative experience. To me, multiple humans and multiple agents, the idea that when you’re having a conversation with an agent, that’s potentially something you could pull other humans into and use the conversation with the agent as background context or as a fast track to getting all the relevant code you want to talk about or the problem you want to discuss loaded up in a very easy-to-digest way to then pull in a teammate and have that conversation.

And then the idea of permanence, being able to reference locations in the code in a stable way, having a continuous representation of the evolution of the code, rather than this punctuated snapshot-based representation, to me is going to be a fundamental abstraction that we’re going to need to build any kind of interaction with the LLMs around so that we can remember everything, and so that an LLM will be able to ask about a section of the code, what are all the conversations that happen behind this code, to go plumb that context.

And I guess the idea is having the code base be this, like, backbone on which all the data related to the code can hang in a way that it just isn’t today. You can have comments in your code and you can have stuff tied to snapshots of your code in GitHub, but for the most part, the code itself is devoid of metadata.

Pat Grady: Yeah.

Nathan Sobo: And so really unlocking that, turning the code into this metadata backbone in the UI is a big piece of it. Some of the stuff I was showing you is asking ourselves the question that I think a lot of people are asking. IDEs have looked the same for who knows how long, right? Like, it’s the typical thing: you got the tree on the left, the tabs in the middle, maybe some Git stuff on the right, or maybe the agent panel—that’s where we have our agent panel today. And it’s this very—it’s evolved over time to solve kind of one human in one working copy, solving one problem manually, maybe with some edit prediction in the middle.

But if someone really is working agentically as their primary means of working, what does it mean to put the conversation front and center? And we’re not the only ones thinking about this, obviously. There are other tools that are also exploring this. The cool thing about us though is we are a full-blown IDE. And so I really view it as there’s a place for all these different ways of working, and I definitely think that there’s a very long-lived place for this traditional way of what’s going on in this one working copy, and I need to evolve the state of this copy forward until it solves my problem. There’s a place for that, but then there is a place, I think, for all right, you’ve got several potential conversations, maybe on different projects going on at the same time. Maybe that becomes less of an issue when the LLMs get faster, but even so, I think, then you’re going to be wanting to do even bigger things.

And so once you have this process that takes time, there’s this natural desire to multitask. And so there’s that, like, how do we manage multiple conversations with multiple agents, and then how do we make that conversation more valuable? And so that’s really what we’re pushing on in the mockups we’ve been doing. And right now I think we model these conversations as very much like a chat. There’s more we can do, let’s put it that way, in the sense that as this conversation’s evolving, you could view it as a chat, but it’s also sort of a document that’s evolving over time. We model it as a conversation, but you could also view it as a document. And so inside that document there are all these references that are being injected from different spots in your code base, edits are occurring. And that’s all kind of getting unrolled over time as this log.

What I really want to do is make that document surface less of a read-only artifact, if that makes sense. Like, more useful as a primary editing surface, where you could move your cursor up out of the “what do you want to say to the agent next” box, and up into the previous conversation to do useful things.

So one of the useful things I want to do is, right now, when we’re rendering code, it’s read-only. What we’re working on now is: when we render a window into the code and that code’s fresh, you should be able to edit right there, have that synchronized with the actual location, and expand the context like you can on a GitHub pull request, for example.

So in Zed, we have this concept of a multi buffer, which is taking little pieces of the code from all over your code base and combining them together into one user interface that you can edit as if it’s a single buffer, basically. And so I’m really intrigued at the idea of to what extent is this conversation that I’m having pulling code toward me, potentially making some edits, and why can’t I just reach out and interact with the code directly inside the conversation? And then also, like, when I select between two points, can I review the changes that occurred in that period of time?
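As a rough sketch of the multi-buffer idea—types are hypothetical, not Zed’s actual implementation—excerpts from many files are stitched into one editable surface, and an edit in the combined view is mapped back to its source location:

```rust
// Hypothetical sketch of a multi-buffer: excerpts from many files
// presented as one editable surface. Illustrative only.

/// A slice of one underlying file, identified by path and line range.
struct Excerpt {
    path: std::path::PathBuf,
    lines: std::ops::Range<usize>,
    text: String,
}

/// Many excerpts presented as if they were a single buffer; edits
/// made here are written back to the source locations.
struct MultiBuffer {
    excerpts: Vec<Excerpt>,
}

impl MultiBuffer {
    /// Map a line in the combined view back to (excerpt, file line),
    /// so an edit made inside the conversation lands in the right file.
    fn resolve_line(&self, mut line: usize) -> Option<(&Excerpt, usize)> {
        for excerpt in &self.excerpts {
            let len = excerpt.lines.len();
            if line < len {
                return Some((excerpt, excerpt.lines.start + line));
            }
            line -= len;
        }
        None
    }
}
```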

So it’s really like trying to make that conversation more than just a chat, and more keyboard navigable in a way that someone with their Vim bindings could just, like, quickly navigate up through the conversation and make it feel kind of like an editor, a new kind of editor. Having built this thing from scratch and having deep control of all the primitives, an opportunity I’m excited to go grab is, like, how can we have this new kind of editor? It’s not just showing you a file in your code base, but it’s showing you a conversation and pieces of all these different files. And you can just reach out and interact with that as you could in an editor.

Sonya Huang: Super cool. Thanks, Nathan. You’ve been on a quest to build the perfect tool for your craft for a long time now, and it’s exciting to see what you’ve done with Zed. And I can’t wait to see what you do with agents in the interface and with Delta DB.

Nathan Sobo: Thank you.

Sonya Huang: Thanks for joining us today.

Nathan Sobo: Yeah, you’re most welcome. Yeah, really had fun.
