
Snowflake CEO Sridhar Ramaswamy on Using Data to Create Simple, Reliable AI for Businesses

Snowflake, a leader in data, is one of the most important enterprise software companies of our generation. What role does it have to play in the world of AI? Snowflake is integrating AI into its core to transform unstructured data, democratize data access, improve functionality, and create reliable AI applications. In this episode, CEO Sridhar Ramaswamy describes how the company has been able to build reliable talk-to-your-data business applications with 90%+ accuracy, whereas even frontier models achieve ~45% off the shelf.

Summary

Sridhar Ramaswamy, CEO of Snowflake and former head of Google’s advertising business, on why he thinks data is key to creating reliable AI for business use cases.

Reliability and precision are critical for enterprise AI applications. Even with advanced models like GPT-4, out-of-the-box solutions often struggle with accuracy, achieving only about 45% reliability for tasks like answering questions about company data. Snowflake has pushed this to over 90% for their talk-to-your-data applications by treating it as a software engineering problem rather than purely an AI model problem. This approach involves carefully constraining the problem space and applying rigorous software development practices.

AI founders should focus on making complex tasks simple for end users. Ramaswamy emphasizes that most enterprise customers want to solve business problems, not grapple with complex technology. By turning multi-week software engineering projects into simple commands that analysts can execute in hours, founders can create significant value. This approach of “making the hard simple” can be a powerful differentiator.

Effective AI implementation requires robust software engineering practices. Reliable AI products demand careful problem definition, comprehensive testing frameworks, and ongoing performance measurement. Founders should focus on building “safety nets,” regression tests, and constraining the problem space to ensure their AI solutions maintain high reliability.

To increase product velocity, start with robust risk management strategies to prevent issues while allowing for rapid innovation. Implementing automated experimentation frameworks enables quick, controlled rollouts of new features. Improving inner loop productivity and making systems extensible are crucial for speeding up individual changes. Above all, leadership must provide clear direction and foster accountability, balancing quality with speed.

AI has the potential to dramatically increase access to software creation and consumption. Ramaswamy sees AI as a new interface layer between humans and software, as well as between different software systems. This shift could democratize both the development and use of software, putting powerful tools in the hands of far more people. Founders who can harness AI to make complex functionality more accessible to a broader audience will have an advantage. 

Transcript


Sridhar Ramaswamy: The product that makes even the people that go, “I have GPT-4, I have an army of software engineers,” the thing that even they struggle with is things like a reliable talk-to-your-data application, because even with GPT-4 out of the box, you end up getting 45-odd percent reliability, meaning it gets half the questions wrong when it tries to answer it. We are well in the 90s, and we are racing to get to 99 percent reliability on talk-to-your-data applications. Obviously, we restrict the domain and turn this into more of a software engineering problem than just like a pure AI model problem, but that’s the thing that makes every Snowflake customer perk up and go, like, “I want that,” because even the people with the money and the resources to spend on software engineering teams very quickly realize that this is a wall that they are likely not going to break through.

Pat Grady: Today we’re excited to welcome Sridhar Ramaswamy, CEO of Snowflake. Snowflake is one of the most important enterprise companies in the public markets. It’s the default cloud data platform. But today, the question of what role does Snowflake have to play in the world of AI looms large.

Sridhar is somebody we’ve known for a couple decades. He actually started on the very same day as our partner Bill Coughran at Google back in April of 2003. We backed Sridhar and his own startup, Neeva, which was an AI-driven search engine. Snowflake acquired Neeva, which is how Sridhar became the successor to Frank Slootman.

Rarely have we encountered somebody who is as in the weeds on the technology, but also as commercially savvy as Sridhar, and he will join us today to talk about what AI means for Snowflake, the importance of safety nets, the open source community, the competitive landscape, and the practical applications of AI that he’s seeing in the enterprise through his lens as CEO of Snowflake. We hope you enjoy.

Primer on Snowflake

Pat Grady: All right, Sridhar, we’re excited to have you here with us today. You’re a technologist by trade. You’ve spent a lot of time in the consumer world, and you are now at the helm of one of the most important enterprise companies of our generation. We have a lot that we want to know about enterprise AI, what Snowflake is up to, and some of your predictions on the world of AI. Before we jump in, though, just to level set, can you give us a couple words on your personal background? And then, just for fun, for people who aren’t familiar, which is probably not a lot: what’s Snowflake? So who’s Sridhar? What’s Snowflake? Let’s start there.

Sridhar Ramaswamy: That’s great. Pat, Sonya, super excited to be here at iconic Sequoia, home to many, many legends I admire. Yeah, I’m a computer scientist by training. Early career as an academic. I joke to people that I’m a reformed academic because I was like, I wanted to do things with more impact. Super lucky to be an early part of Google, where I joined one of the greatest businesses ever invented by humanity, which is the search ads business. I ran that for close to a decade, all of ads and commerce at Google for five years, and helped grow that business from a billion and a half to over $120-billion in revenue.

And then, funded by Sequoia, did an ambitious startup called Neeva, which wanted to modestly rethink what ‘search’ meant before getting acquired by Snowflake and becoming its CEO. And Snowflake is the AI data cloud. Our core thesis is that a cloud computing platform that puts data at its center is going to be way better for enterprise customers to act on data than a generic cloud. And AI, of course, we think of as a transformational technology that is going to change every aspect of how data is stored, how it gets moved around, and of course, how it’s accessed. We have over 10,000 customers, made $2.6-billion last year, but at the center of everything enterprise and data. That’s a super quick blurb.

Pat Grady: Perfect, thank you. And so you have 10,000 or so customers. I know you’ve met at least a hundred, probably hundreds of them since you took over.

Sridhar Ramaswamy: I’ve met hundreds of them by now.

What’s happening in the world of enterprise AI?

Pat Grady: So there you go. So I’m guessing you have a pretty decent read on what’s going on in the world of enterprise AI. So maybe we’ll just start there. What’s going on in the world of enterprise AI? What are you seeing at your customers?

Sridhar Ramaswamy: First of all, people get that this is going to be transformational. You know, lots of technologies have skeptics. I’m sure you’ve run into folks who are like, “Ah, mobile. It’s not gonna be a thing. This browser, like, so lame.” It takes a while for people to absorb. I think what’s different about AI, first and foremost, is people are like, “I get what this can do.” I think some of the power is just honestly looking at the magic that ChatGPT is. Anyone that has interacted with it, asked it to write a poem, asked it to create an image, knows like, wow, this is something that’s very special. So the level of awareness is incredibly high.

And we have thousands of customers that are in various stages of implementing AI solutions. They span the gamut from people like Bayer that are very excited by the idea of giving business users access to business data without going through like an elaborate, you need an analyst, you need a BI tool, you need blah, blah, blah, you need a week before a change can be made. They’re like, “I just want to put data into the hands of people that need it right now.”

But we also have dozens of people that are using AI as a transformation engine. So, for example, if you have unstructured data, whether it’s an image or, let’s say, a transcript, previously you had to run a software engineering project to figure out what’s this image about. Now you feed it into a model, ask it a question, and you get the answer. And so people are super excited by things like that. We have a product called Document AI, which extracts structured information from documents, say, like contracts. All of us have contracts sitting around in our company folders that have all kinds of magic numbers that ideally you want to do analysis on. So there’s a wide variety of cases that people are implementing and sending into production. But I would say stuff at the bottom, which is how do you transform data more effectively, more flexibly? And stuff at the top, which is how do you make data easily available to all kinds of business users in new ways, in interactive ways? I would say that’s the sandwich in terms of what people are wanting to do with data.

Pat Grady: Mm-hmm. And so can you say a couple words on Snowflake’s right to win? So some of the things you mentioned like data transformation, for example, feels like that is very close to the core business of Snowflake. But then there are some things that are maybe a bit further afield. You know, if somebody wants to deploy an enterprise agent of some sort, they can use Snowflake to do it, but what’s Snowflake’s right to win in that situation? So can you just say a couple words about how Snowflake fits into this overall landscape and sort of the right to win?

Sridhar Ramaswamy: So first and foremost, the basic approach that we took to AI, sort of enabling or infusing AI into Snowflake, is it should be an accelerant for everything that you do with Snowflake. That’s what Cortex AI is. It’s a model garden, but it’s more than that. Snowflake prides itself on super tight integration of its various product features. And this is not another service that’s part of Snowflake—it’s built into Snowflake. This means that any analyst that has access to SQL has access to AI. So it’s a massive democratizing mechanism.

And then the early applications that we have built, like Document AI, are a very natural next step in the progression of what people want to do, which is, “Hey, I want to act on the data that is within Snowflake’s purview.” By both expanding the data that Snowflake has access to via things like Iceberg, which is basically an interoperable storage format for cloud storage, but then providing things like Document AI, we just turn a whole bunch of AI applications that previously used to be software engineering projects into two commands that an analyst can issue.

And so our first lens very much is that AI should become easy, trivial for data that is sitting in Snowflake. One hundred percent. There are going to be applications that are cutting edge, are going to involve many, many different services, but the angle that we bring to all of those customers is we make reliable AI. And that’s a topic that we can get into. So for example, I tell people you have no business believing the raw output of a language model for anything. You can’t actually do any business with that because it’s ungrounded. It doesn’t understand truth from falsehood, doesn’t understand authority. So we make things like, you know, creating a grounded chatbot. Again, as I said, two commands, not a software engineering project.

Similarly with Cortex Analyst, which is our talk-to-your-data API, we bring the full power of we know everything about the schema, all the queries that have been run on the schema, the semantic context on the schema. We can produce a reliable application that others are going to struggle to create. So we are leveraging our strengths in data to make AI products better. Are there going to be specialist applications that can only be done with GPT-4o and a custom integration with a bunch of other stuff? Absolutely. But that’s not what we are after. The bulk of our customers want to get work done. They’re not in the business of doing research with AI.

Sonya Huang: And are you seeing customers bring net new data that maybe didn’t sit inside Snowflake historically into Snowflake because of your AI services? And how do you think about your right to win as it comes to the data that’s not in Snowflake yet?

Sridhar Ramaswamy: This is a broader question. I think one of the things that I’ve actually been a good part of is expanding the lens of data that Snowflake should play in. Snowflake, as you know, is, first of all, closed-source software for the most part. The code engine is closed source, just like search, but we also had a proprietary storage format where data was ingested into Snowflake and kept in this format.

But what we consistently heard from customers—and I’m sure like you hear all the time—is there is 100 or 1,000 times more data sitting in cloud storage than there is inside a specialized player like Snowflake. And more and more, industry trends have been towards interoperable data. People want their data to be accessible from multiple places. So for example, if they want to write their own bespoke applications—most people don’t want to do that, but the biggest ones do—they want the data to sit in cloud storage, where yes, Snowflake perhaps can write it and read it, but other applications should also be able to read it.

So we made a big push around Iceberg, which is the interoperable format. We also announced a cloud catalog recently. The idea is that in 10 years, data is going to be sitting mostly in the cloud, mostly in cloud storage, which is very cheap, mostly in interoperable formats accessible via open catalogs. And this is the place where we see there being so much more access to data from Snowflake, so everything from data engineering and AI now comes into our purview. We have customers that, for example, are doing things like, oh, let’s run a video model using Snowflake’s container services on data that is sitting in S3, extract transcripts and stick it into Snowflake. So it’s just a very different world we are playing in.

How customers are using Snowflake vs other AI services

Sonya Huang: Makes sense. And then let’s say for the data that’s currently sitting in one of the hyperscalers, for example. You started the conversation by saying the core tenet of the company is that when you build your infrastructure all around the processing of data, you can do better things. What are some of the ways that you’re able to offer better AI services around the data that doesn’t currently sit in Snowflake, but that you’re hoping customers will bring in, versus what the hyperscalers are doing already?

Pat Grady: Yeah, and can I add onto that real quick? Because one of the things that we have heard from customers is at either end of the spectrum, you’ve got at one end of the spectrum, work directly with OpenAI, send your data into their cloud, and maybe have some nervousness around whether that data is going to leak into the model or whether they have the right security and privacy sort of governance around it. At the other end of the spectrum, you can just do everything yourself, grab a model off a Hugging Face, build it internally, super safe, super secure, but pretty painful to do all that. And then the middle ground, you’ve got Amazon Bedrock or you’ve got a Snowflake. And they both have a value prop of best of both worlds—we’re going to make it easy for you, but it’s also safe and trusted and secure and all that good stuff. And so I think my angle on Sonya’s question is like, for somebody who’s making a practical decision about sort of what should I build in Snowflake versus what should I build on Bedrock or a comparable cloud service, what leans people in the direction of Snowflake?

Sridhar Ramaswamy: It’s the fact that everything that you want, whether it is data security, data governance, ease of use, all comes out of the box, along with the incredible power of Snowflake’s core platform, including things like collaboration and third-party applications. We make AI simple. One hundred percent, there are those people that will say, “I want to take data that’s sitting in cloud storage, or even in another application, I want to bring it into cloud storage. I want to recreate ACLs, access control lists, and then I want to create a vector index using a bespoke vector indexing solution. And then I will stitch together. I’ll figure out which model that I want to use, whether it’s an API or something that I host myself. And then I will use LangChain and write custom routing logic for my application.”

I can assure you that 99.9 percent of our customers want no part of this. You know, that’s just the reality. All those poor people wanted was a chatbot to run on 100,000 docs that they have so that they can replace the annoying search box for FAQs on their site with here’s a solution that just works. And our take is yes, whatever governance you’ve had before works out of the box, and your data does not go anywhere else. You have the same rock solid guarantee that Snowflake will never use your data to train any cross-customer model. And we will be very efficient and cost effective from just like the overall cost of running the solution.

But Snowflake’s magic, honestly, is we make the hard simple and it’s things like total cost of ownership. Many of our customers are banks, they are healthcare institutions, they are finance or other kind—like we play a lot in the media space as well. Most of our customers want to solve problems, not solve technology for the sake of technology.

You know, we have a foundation model team. They’re very focused on things like how do we get models that have better grounded generation? How do we get them to follow directions well? How do we get them to say no to questions that they should not be answering when it comes to, let’s say, talk-to-your-data? So we focus on specialized areas like that, but the biggest reason to use Snowflake for a lot of our customers is that a 10-person software engineering project with a whole lot of risk about data and security and what else can happen turns into six hours of work for an analyst. We are good at that. We are proud of that.

Pat Grady: So it sounds like the one liner might be it’s kind of the level or the layer at which you’re intersecting these products. If you’re working with one of the public clouds, you’re still very much at the infrastructure layer, building a lot yourself. Snowflake, you’re at the platform layer. A lot of the hard work’s been done for you.

Sridhar Ramaswamy: And our long term bet, Pat and Sonya, is that ecosystems move upstream. There was a time not so long ago where, I don’t know, our parents, our grandparents knew every part of a car. They’re like, “Oh, so manly to change a carburetor and get oil in between your nails.”

Pat Grady: I gotta be honest with you, I’m still impressed every time my dad knows exactly what is wrong with the car.

Sridhar Ramaswamy: Yes. You know, while I’m willing to go to strength training every day, getting oil in between my fingers with my car does not sound so attractive anymore. And so, one hundred percent, you can work with CSPs and you can be like, “I have a model garden here, I have a caching service there, I have a database here. I will stitch all of this together.” As I said, everything turns into a software engineering project. For us, we’re like, no, that’s just a little data pipeline that you set up, and here is a beautiful UI that you get if you want a chatbot. Obviously you can do more, but you don’t have to.

Pat Grady: Yeah.

Sonya Huang: What are your customers building on Snowflake, and are there certain types of AI applications that are better suited to be built on Snowflake than others?

Sridhar Ramaswamy: As I said, the categories of AI applications come naturally from the kind of data that are already there. I would say the broadest, broadest use case is really using Cortex AI via SQL in either interactive queries and dashboards or in jobs that people are running. And so these span the gamut from, oh, let’s do sentiment detection with a small model. It doesn’t really have to even be that expensive. So that’s just like, literally, it’s one function call. Or let’s do other kinds of data extraction where, as I said, you have things like a transcript or maybe clinician notes, you take that out and you get structured data from it.

Or the other thing that I talked about, Document AI, which is you extract structured data from things like receipts, from contracts, so on and so forth. That’s kind of our sweet spot. But I have to say, the product that makes even the people that go, “I have GPT-4, I have an army of software engineers,” the thing that even they struggle with is things like a reliable talk-to-your-data application, because even with GPT-4 out of the box, you end up getting 45-odd percent reliability, meaning it gets half the questions wrong when it tries to answer it. We are well in the 90s, and we are racing to get to 99 percent reliability on talk-to-your-data applications. Obviously, we restrict the domain and turn this into more of a software engineering problem than just like a pure AI model problem, but that’s the thing that makes every Snowflake customer perk up and go, like, “I want that,” because even the people with the money and the resources to spend on software engineering teams very quickly realize that this is a wall that they are likely not going to break through.
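The reliability numbers Ramaswamy cites imply a measurement harness: a fixed golden set of question-and-answer pairs replayed against the system after every change. As a rough illustration of the idea only, not Snowflake’s actual tooling, a minimal version might look like this (the toy system and all names are hypothetical):

```python
# Illustrative sketch only: measuring talk-to-your-data reliability by
# replaying a golden set of question -> expected-answer pairs.
# The system under test and all names here are hypothetical.

def normalize(answer):
    """Canonicalize answers so formatting differences don't count as errors."""
    return str(answer).strip().lower()

def measure_reliability(system, golden_set):
    """Return the fraction of golden questions the system answers correctly."""
    correct = 0
    for question, expected in golden_set:
        got = system(question)
        if got is not None and normalize(got) == normalize(expected):
            correct += 1
    return correct / len(golden_set)

# Toy stand-in for a real query-answering system.
def toy_system(question):
    canned = {
        "total revenue last quarter?": "$1.2M",
        "active customers?": "10,000",
    }
    return canned.get(question.lower())

golden = [
    ("Total revenue last quarter?", "$1.2M"),
    ("Active customers?", "10,000"),
    ("Churn rate in EMEA?", "3.1%"),  # toy_system can't answer this one
]

print(measure_reliability(toy_system, golden))  # 2 of 3 correct
```

Run after every change, a harness like this doubles as a regression test: a drop in the score blocks the release.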

Sonya Huang: And how do you accomplish that? Maybe peel back for us how you’re able to get to the 90s percent. Are you training your own models? Just tell us about how this all becomes possible.

Sridhar Ramaswamy: It’s systems design.

Sonya Huang: Okay.

Sridhar Ramaswamy: Just like the magic of how you make a coding agent—or less a coding agent, more an effective copilot—work in practice, it’s not always the giant models. It is carefully breaking problems down so that you present the right context to the model. It’s in deciding things like, “Oh, I see. The problem of whether to answer a question is different from how to answer the question.” So you can specialize and have different models for these different subtasks.

And also, what’s the—basically, I call this a problem definition, a product structure question. We structure the product of Cortex Analyst so that it is more restricted than a free-flow domain. What I mean by that is schemas are weird things. People do random stuff. They have horrible column names that mean completely the opposite. Every company has its own definition for revenue. And if you, like, take the best model on the planet and let it loose on an arbitrary schema, the likelihood that it’s actually going to understand the nuance of what’s in there is close to zero. Like in our big deployments, for example, our customers have 200,000 tables, and you can bet that there are several tens of thousands of tables with the word ‘revenue’ in it. They just don’t have the same meaning.

So it’s really like problem definition to me. By the way, this goes back to the magic of product. I think of any amazing founder, any amazing product manager as someone that can visualize what’s the right tradeoff to be making in order to create something that has broad applicability. And that’s the thing that we have done here. We constrain the problem. But as I said, we also explicitly train for things like when to refuse questions, as opposed to trying to pretend that you can answer every question. But obviously, there’s a precision-recall tradeoff there. You can get 100 percent precision by answering no questions. That’s not the goal. You want to be useful, but still be precise. But it’s a lot of software engineering.
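The split Ramaswamy describes, deciding whether to answer separately from how to answer, and trading precision against recall with explicit refusals, can be sketched as a confidence-gated router. This is an illustrative toy, not Cortex Analyst’s implementation; the scoring function and all names are invented:

```python
# Illustrative toy (not Cortex Analyst's code): separate "should we answer?"
# from "how do we answer?", with a threshold that trades recall
# (answering more questions) against precision (fewer wrong answers).

def answerability_score(question, known_topics):
    """Toy stand-in for a model that scores whether a question is in-domain."""
    words = set(question.lower().split())
    return len(words & known_topics) / max(len(words), 1)

def route(question, known_topics, threshold):
    """Refuse out-of-domain questions instead of guessing."""
    if answerability_score(question, known_topics) < threshold:
        return None  # explicit refusal beats a confident wrong answer
    return f"answer({question})"  # hand off to the answering subsystem

topics = {"revenue", "customers", "churn"}
print(route("total revenue by region", topics, threshold=0.2))  # answered
print(route("write me a poem", topics, threshold=0.2))          # None: refused
```

Raising the threshold toward 1.0 recovers his degenerate case: 100 percent precision by answering no questions at all.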

Pat Grady: I want to go in a slightly different direction.

Sridhar Ramaswamy: Sure.

How companies can maximize their product velocity

Pat Grady: Okay. That reminded me of this, and I don’t know why, but the product velocity at Snowflake seems to have inflected to the positive.

Sridhar Ramaswamy: Yeah.

Pat Grady: Even in the last six months or so. And we’ve worked with a lot of founders where, you know, the bigger the company gets, the slower and slower the velocity becomes. And so I guess I’m curious, what have you guys done to positively inflect product velocity? Because that’s hard to do when you’re dealing with an organization at the scale of Snowflake.

Sridhar Ramaswamy: I’ve done this many times before, and the formula is always roughly the same, which is first and foremost, you make sure that you have a safety net that you believe in, which is you have, like, regression tests, so you don’t blow up big functionality. But if you’re pushing hard enough, you will make mistakes. And so you have to distinguish between different kinds of mistakes. For a database company, there are catastrophic mistakes. Like, if you write data badly, it’s going to take you months to get out of that. So you need to understand what the risks are. And then you build a safety net for things like, as I said, detecting problems before they happen, and, in case you do have problems, for getting out quickly.

At Google, for example, we built auto experiment scaling frameworks. Basically, you would come up with a new experiment. All changes went through this experiment framework, and this thing would automatically say, “I’m going to run this on a machine, watch it for 15 minutes, make sure that the machine doesn’t crash, and then roll it out to 0.1 percent, 1 percent, 10 percent, with measurement all along the way.” All of a sudden, you have velocity, because people can design a whole bunch of experiments and they sort of now get pushed out automatically.
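The auto experiment framework he describes can be sketched as a staged ramp with a health-check gate at each traffic fraction. Again a hypothetical sketch of the idea, not Google’s or Snowflake’s actual system:

```python
# Hypothetical sketch of the staged-rollout idea described above: a change
# advances through increasing traffic fractions only while a health check
# passes; any failure halts the ramp and rolls back.

RAMP = [0.001, 0.01, 0.10, 1.0]  # 0.1% -> 1% -> 10% -> 100%

def rollback(change, fraction):
    print(f"rolling back {change!r} from {fraction:.1%}")

def staged_rollout(change, health_check, ramp=RAMP):
    """Ramp `change` up gradually; return the traffic fraction it safely reached."""
    reached = 0.0
    for fraction in ramp:
        # In a real system the change would be deployed to `fraction` of
        # traffic here and watched for a soak period before advancing.
        if not health_check(change, fraction):
            rollback(change, reached)
            return reached
        reached = fraction
    return reached

# A change whose metrics go bad once it sees 10% of traffic:
halted_at = staged_rollout("new-query-planner", lambda change, f: f < 0.10)
print(halted_at)  # 0.01 -- the ramp stopped at 1%
```

The point of the gate is exactly what he says next: the safety net is what makes the speed possible, because a bad change is caught at 0.1 or 1 percent instead of at full traffic.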

So as I said, the first part is the safety net, and so we spend a lot of time on that. The second part is the inner loop productivity, which is how quickly can you get a single change in? Because ultimately it ends up being the decider for how many changes you are going to get through. On the systems design side, Snowflake actually went through a process that predates me, starting about two years ago, of how to make the system extensible. As I said, at Snowflake, we are very proud of the single unified product, but that can become something that gets in the way of speed. And so you have to design carefully for how do you make things extensible. So things like AI basically took advantage of that framework.

And then to a certain extent, to be honest with you, it is also the focus that leadership needs to bring on what is important. How do you drive clarity? At all times with all teams, there is an infinity of work to be done.

Pat Grady: Yeah.

Sridhar Ramaswamy: And driving that clarity, driving a sense of accountability. With the AI team, for example, I forced every team to make promises for, yes, the next three months, but also for what they are going to do for the next two weeks, and to calibrate themselves: did you deliver on the things that you said you were going to do? It’s pretty much, in my mind, that if you want to get better and better, life boils down to: say what you’re going to do, do what you said you would do, and examine and make things better.

And so it’s a bunch of things that have been there that I’ve been building up at Snowflake, but certainly I bring this sense of quality and speed are both requirements in what we do. It’s a change, but people like the idea of just getting more things done.

Pat Grady: Yeah.

Sridhar Ramaswamy: Like, you and I have never met a software engineer that says, “Yep, I want to release the day after tomorrow.” It’s like, no, you want to get it done today. And so that itself builds momentum. When you release a bunch of products and you have a lot of customers that are using it, that becomes positive energy for the team to build on the good behavior that kind of got you there. And so I would say the team has responded very, very well. I tell them, “Hey, listen, this is the world of AI. Stuff changes every week, and you need to build with that speed.” I’m very happy with how the team has responded.

Snowflake’s accomplishments in AI

Pat Grady: Is there anything in particular that you’re most proud of in terms of what you guys have done in AI thus far?

Sridhar Ramaswamy: I’ll say Cortex Analyst is probably the hardest product that we have designed and launched. Things like Cortex AI, which is like our platform layer, I am proud of, but, you know, it is predictable infrastructure work, even though there’s a lot underneath in terms of, hey, should you use vLLM or something else? How do you optimize for inference? How do you get capacity in this annoyingly crazy world where it’s very hard to get your hands on GPUs? There’s a bunch of stuff. But to me, things like that, things like Document AI, are a unique combination of our strengths being applied to new areas in ways that can make a big difference to our customers. But you also know, Pat, that there’s a little bit of who’s your favorite child?

Pat Grady: [laughs]

Sridhar Ramaswamy: So I can’t really do that. And so there’s a bunch of stuff. Like, even if you take Polaris, which is our cloud catalog, you know, done in a matter of three months. And so I think there’s a lot of energy within the team because it’s a slow message, but it’s getting through that you can have speed and quality—they’re just different aspects of the same problem. And my firm belief all through my life is that virtuosity trumps strategy all day long.

Pat Grady: What does that mean?

Sridhar Ramaswamy: Your speed of execution, your speed of reacting to situations is going to trump strategy very, very quickly. Yes, you need strategy, but life is never about fixed strategy because we live in a very, very dynamic world. It’s hard to predict which product is going to be wildly successful, what your competitor is going to do. Like, we’re going to talk about, like, GPT-5. It’s like, it’s a big unknown whether it’s going to come out and what impact that’s going to have. So I place a huge amount of emphasis on, you just need to be really, really quick at what you do. And I would say, like, that’s the message that I’m trying to convey to the team.

Pat Grady: That’s very—I see this continuity from the Slootman era into the Sridhar era, because I know I’ve heard Frank say at least a few times the General Patton quote, “A good plan executed violently today is better than a perfect plan tomorrow.”

Sridhar Ramaswamy: One hundred percent. One hundred percent. And it’s that adaptability—Napoleon has a famous quote which roughly—I mean, it’s not his, it predates him. It roughly translates into, you know, “I commit and I adapt.” Which is you go into an important area knowing that you’re not going to know everything, and then you’re adaptive to the situation that actually presents itself.

Pat Grady: Yeah.

Sonya Huang: Are there any misconceptions about Snowflake and AI that you want to debunk?

Sridhar Ramaswamy: We are a real player. Snowflake used to be thought of as somebody that didn’t really get AI. But, like, early on, we relied on more of a partnership-oriented strategy for AI. But my big sort of observation, realization, is that AI is a platform change in the sense that it is a new way in which you and I and everybody else in the world is going to get to software, is going to get to applications. And so once we had that realization, out came a bunch of product consequences, which is AI needs to be central to Snowflake. We need to make it super easy to both build applications, but also build the most important applications ourselves.

Cortex Analyst, for example, is a direct-to-business-user application. We have never really done things like that before. It is driven by a strong belief that AI is going to disrupt how information is going to be consumed very, very broadly. And I am proud of having a world class team from bottom to top, from foundation models to inference experts to product engineers that integrate the AI, plus also the product engineers that are creating applications on top of AI. That combined with things like broad data access, which is Polaris and Iceberg, I think puts us in a very, very good position.

The future of AI

Sonya Huang: Can we zoom out and ask a little bit about your—I guess, your hypotheses and your hot takes on the future of AI?

Sridhar Ramaswamy: Absolutely.

Sonya Huang: I just think you are so well positioned. You probably built one of the first, if not the first kind of LLM-native consumer applications at Neeva. And now obviously from your seat at Snowflake, you see so much. Maybe first on the LLM kind of race to scale, what do you think about all that? Are we reaching the limits of scale? What’s next for those guys?

Sridhar Ramaswamy: I mean, obviously this can go in a couple of different directions. I talk to a lot of experts, and there is a collective belief that there is a GPT-5 on the horizon. What I don’t think anyone has a clear bar for is what that’s going to represent.

Pat Grady: Yeah.

Sridhar Ramaswamy: GPT-4o was very cool, much faster. It also integrated multimodality natively in a way that’s pretty amazing. But when you think about reasoning capabilities, the ability to come up with plans for how to execute stuff, it didn’t feel like it represented a step change. And while agents are very hot, similar to Cortex—until Cortex Analyst came along, people didn’t really believe that you could build reliable talk-to-your-data applications. They were always kind of hit and miss. And remember the bar is very high. If you’re giving data to a business user, like, 75 percent accuracy is one out of four wrong.

And so I think the big unknown is whether these models are going to represent a big step forward in things like multi-step reasoning. And if they can, they’re going to unleash, like, a whole new class of applications that you and I just cannot imagine right now.

On the other hand, I think when it comes to driving broad adoption, there is a lot that can be done with the existing models. So many things that are useful for you and me every single day, whether it’s a piece of mail that we are looking at or looking through a PDF, just think about all the tedium that all of us have to go through. And so I think there is huge impact to be had simply in AI technology just permeating software as we know it, especially the user input part of software. So unlike other technologies, I think there is enough that AI has already delivered that is going to have a meaningfully large impact on society. It’s just going to take a while to run out. You know, I sincerely hope we don’t get to a phase where you need a billion dollars to train a great new model. I actually think that while what that model can do is cool, I think it also reduces the number of people that can have models like that to a very small number, and I think competition is just overall healthy. But it’s very hard to make a call.

Pat Grady: You mentioned this a little bit, but I’m curious to get your take on it a bit more. If GPT-5 is delayed or not a big step up or whatever the case might be, or if you just imagine a world in which the current capabilities of the foundation models, that’s what we’ve got, and it comes down to how do we implement those? How do we optimize those, how do we tune those? One of the things that we hear from a lot of people building in AI: the first couple of weeks are like magic. Everything is amazing. This is great. And then the next few months are pretty painful. Oh, shoot. It can’t do this corner case. It can’t do that corner case. It’s not quite accurate enough. And people get really frustrated, and sometimes they can engineer their way out of it, sometimes they can’t. But sometimes it leaves people feeling kind of disillusioned, like this stuff’s not as good as I thought it was. Maybe the time’s not right.

And so I’d love to get your take. If we froze the capabilities of the foundation models today, what sort of changes will we see in the enterprise landscape over the next handful of years? What sort of stuff will we not see because we’re just not ready for it yet?

Sridhar Ramaswamy: To me, this is honestly the magic of software engineering. Part of what I feel we have implicitly accepted with ChatGPT is its omniscience. You’re like, it can do everything. They don’t say it. In fact, they take pains to not say it. But just like Google search never tells you that’s a dumb query.

Pat Grady: [laughs]

Sridhar Ramaswamy: Think about it, right?

Pat Grady: It’d be kind of fun if it did.

Sridhar Ramaswamy: If it did. Right. Start to laugh at dumb queries that people type into it. Google’s like, “Oh, yeah.”

Pat Grady: I type lots of dumb queries.

Sridhar Ramaswamy: They’re like, “Oh, here are 100 million pages on the web, and here are the 10 best pages for you, Pat, for your dumb query.”

Sonya Huang: [laughs]

Sridhar Ramaswamy: And so I think it’s like some of it is good old fashioned AI enthusiasm, it can do everything. But some of it is just also plain dumb. You should not be doing that. To me, this is where things like, okay, let’s actually make grounded chatbots the norm for interacting with information. The model is, you know, this application should tell you where it got the information from. It should be very easy for you to verify said piece of information and feel good that you’re actually getting something.

Similarly, you need a test framework. You know, like, Harrison talked about, an observability framework to do this on an ongoing basis. But I think sometimes when it comes to things like chatbots, people forget, wait, there is such a thing as a set of regression tests. There is such a thing as acceptance criteria for software. Everything that we have—like, if somebody were to build a new application, like one of your founders, your expectation is that they got their clue together and are actually testing stuff before they give it to customers.

Pat Grady: Yeah.

Sridhar Ramaswamy: And somewhere in the world of AI, we’re like, no, no, no, it doesn’t matter. And these models react pretty violently to the addition of a period in a prompt. And so I think there needs to be this idea that you need good old fashioned software engineering, and you need to measure the performance of these things. And so I think this is where it goes away from these are hobby projects that can be hit or miss to here’s somebody that can actually software engineer this for you. And we think of that as a core strength of what we bring to the table, which is like, you should be able to have a predictable way to say this chatbot is going to work, or this agent-like application. This is the success rate that it’s going to have, or this is what Cortex Analyst is going to do for you in your domain, so that you’re like, “Okay, I feel good about deploying it.” So even if GPT-5 did not happen, I think there is a lot of magic to be done, but it’s also just work.
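The regression-testing discipline Ramaswamy describes can be sketched in a few lines. This is a minimal illustration, not Snowflake’s actual tooling: `ask`, the canned answers, and the golden set are all hypothetical stand-ins for whatever chatbot is under test.

```python
# Minimal regression harness for a talk-to-your-data app: replay a
# golden set of (question, expected answer) pairs and gate the release
# on an accuracy threshold. `ask` is a placeholder for the real model.

def ask(question: str) -> str:
    # Placeholder model: in practice this calls the chatbot under test.
    canned = {
        "total revenue last quarter": "$12.4M",
        "top region by sales": "EMEA",
    }
    return canned.get(question, "unknown")

GOLDEN_SET = [
    ("total revenue last quarter", "$12.4M"),
    ("top region by sales", "EMEA"),
    ("orders shipped late in June", "312"),
]

def run_regression(threshold: float = 0.9) -> bool:
    passed = sum(ask(q) == expected for q, expected in GOLDEN_SET)
    accuracy = passed / len(GOLDEN_SET)
    print(f"{passed}/{len(GOLDEN_SET)} passed ({accuracy:.0%})")
    # Below the threshold, the build fails: no "hit or miss" releases.
    return accuracy >= threshold

run_regression()
```

Run on every prompt or model change, this is exactly the “safety net” of regression tests and acceptance criteria that conventional software takes for granted.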

Pat Grady: Yeah. Yeah, yeah. Yeah, well put. Well, what’s the—I forget who said it. There’s a quote that we use every now and then. “People miss most great opportunities because they tend to be wearing coveralls and they look like work.” You know, I think this is one of those where like anything else, if you want it to be great, you got to work pretty hard on it.

Sridhar Ramaswamy: You got to sweat it out. And to me, this is also the place where the thinking of recall as something that you should tune, thinking of recall as an important part of how you think about these applications. Any ML engineer worth their salt will promptly come and tell you it’s like, “Okay, I have an AUC curve for you.” What are they trying to say? They’re basically trying to say there is a trade off between how much you squeeze the model to do and how good it is. There’s no perfect answer. That’s really what the AUC curve represents. And the more we think of AI applications as also having this AUC curve, there are trade offs to be made between reliability and ability to respond. And that’s a very conscious factor in how you should think about things, I think the better off we are going to be in terms of where can they deliver value.
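The trade-off behind that AUC curve can be made concrete with a toy example: let the application abstain below a confidence threshold and watch accuracy rise as coverage falls. The confidence/correctness pairs below are made up for illustration.

```python
# Sketch of the coverage-vs-accuracy trade-off an AUC curve represents:
# the more you "squeeze the model to do" (high coverage), the lower the
# accuracy, and vice versa. Data points are invented for illustration.

predictions = [  # (model confidence, was the answer correct?)
    (0.95, True), (0.91, True), (0.85, True), (0.80, False),
    (0.75, True), (0.70, True), (0.60, False), (0.55, True),
    (0.45, False), (0.40, True),
]

def operating_point(threshold: float) -> tuple[float, float]:
    # Only answer when confidence clears the threshold; abstain otherwise.
    answered = [correct for conf, correct in predictions if conf >= threshold]
    coverage = len(answered) / len(predictions)
    accuracy = sum(answered) / len(answered) if answered else 1.0
    return coverage, accuracy

for t in (0.0, 0.5, 0.75, 0.9):
    cov, acc = operating_point(t)
    print(f"threshold={t:.2f}  coverage={cov:.0%}  accuracy={acc:.0%}")
```

Choosing where to sit on this curve—answer everything at 70 percent accuracy, or a fifth of queries at 100 percent—is the conscious reliability decision Ramaswamy argues every AI application needs.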

Pat Grady: Yeah. Yeah.

Sonya Huang: I’m going to go back to the point you said a little bit earlier about reasoning and kind of delivering the next big leap, hopefully for GPT-5 and Claude, et cetera. It seems like the approach that most folks are taking is kind of bringing in search at inference time, and so a lot of more inference-time compute and kind of this AlphaGo style search stuff. I’m curious, just given you are one of the best people in the world at search, like, do you think that is the path to the promised land on the research side for bringing reasoning into these general models?

Sridhar Ramaswamy: Give me a little bit more context. Like, I can certainly see how search plays a role in how these models operate, but can you just tell me a little bit more?

Sonya Huang: Yeah. So if you take the example of if you take AlphaGo and you’re trying to decide what move to do next, if you can kind of create a branching tree of here are all the possible moves from here and do a search kind of over that of, like, here’s what move I should do next. I think people are trying to bring that logic out of the gaming world and into domains like—I don’t know if you saw Cognition’s Devin.

Sridhar Ramaswamy: Yeah.

Sonya Huang: Where they’re effectively searching over different things that you can do in your coding as well. And so just like at inference time, just giving the model kind of the ability to search possible paths to decide what to do.

Sridhar Ramaswamy: Yeah, there have been a number of papers on this. I think even NeurIPS had a bunch of papers about searching over domains as you come up with a plan. What I don’t have—to me, it’s important to understand—I’m forgetting the name of the NeurIPS paper, but it also had the same problem—they were doing tree search—is that they fundamentally rely on a model, typically a neural network, being able to do things like grade a particular point in a state space.

Pat Grady: Yeah.

Sridhar Ramaswamy: Basically AlphaGo, for example, has pretty solid ideas about what is an advantageous position versus what is not, and the search is guided by that. What isn’t clear in sort of very open-ended questions is, as you come up with alternatives for the search space, can you actually grade them effectively if it’s an open-ended plan?

Certainly a number of these techniques work well for games that have structure in which you can actually learn what does optimal mean, and you can begin to optimize towards it. What I don’t have as good a feel for is let’s take something as simple as cooking. You would think it’s simple, but if you take, I don’t know, 10 ingredients and 20 steps that you can take along the way, and various things that you can do in each of these 20 steps, and the steps themselves can be short, they can be long, you quickly end up with, like, this crazy combinatorial explosion of different ways of doing things, and yet there is just one perfect recipe or two or three.
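The dependency Ramaswamy points at can be sketched with a toy beam search over plan steps. Everything hinges on `score()`: in AlphaGo that role is played by a learned value network over board positions, but for an open-ended plan like a recipe no comparably reliable grader exists. All names and the "ideal order" scorer below are purely illustrative.

```python
# Toy value-guided beam search over plan steps. Pruning with score()
# is what tames the combinatorial explosion (5**5 = 3125 full plans
# here); without a trustworthy grader of partial states, the search
# has nothing to steer by.

STEPS = ["chop", "saute", "season", "simmer", "plate"]

def score(plan: tuple[str, ...]) -> int:
    # Stand-in value function: reward matching an "ideal" step order.
    # In a game this would be a learned evaluation of the state.
    ideal = ("chop", "saute", "season", "simmer", "plate")
    return sum(a == b for a, b in zip(plan, ideal))

def beam_search(depth: int = 5, beam_width: int = 3) -> tuple[str, ...]:
    beam = [()]  # start from the empty plan
    for _ in range(depth):
        # Expand every plan in the beam by every possible next step...
        candidates = [plan + (step,) for plan in beam for step in STEPS]
        # ...then keep only the best-scoring partial plans.
        beam = sorted(candidates, key=score, reverse=True)[:beam_width]
    return beam[0]

print(beam_search())
```

With a consistent scorer the search recovers the one good plan; swap in a noisy or uninformative `score()` and the same code wanders the full space, which is the open-ended-domain problem in miniature.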

That’s the part, honestly, I don’t have a good feel for in terms of, like, how do you even begin to measure the jump in terms of cognitive ability? It’s easy in structured environments, but out in the real world, where you’re trying to do some pretty complex things, I think it becomes trickier. We’ve built prototypes for basically, like, agent analysts, but it’s again, a structured space. So what we do—one thing, I’ve done numbers, like, pretty much all my life. I used to do, whatever, household finances for my dad when I was 10. Like, we did in a notebook. And over the past 20 years, every day I get this email that tells me how my company did the previous day. Used to be called Bean Counters at Google. Every day you got a report card.

Every few weeks something would go wrong. Like, you made less money somewhere. And we would start this predictable problem—like, predictable exercise of some poor analyst would go drill down into a bunch of different things, blah, blah, blah, blah, blah, look at sliced stuff. And then they would come back with like, “Oh, Sridhar. It was, like, Easter in Germany and Ascension Day in Brazil. And that’s why our numbers were off.”

And it took like a decade to model all of these complex things in the world into, like, a prediction model. So you’re like, okay, I can begin to predict. But if you think about it, the analysis that they do is constrained. It’s pretty much if a metric is wrong, go slice it by 10 different dimensions, go look at the results, see where likely the problem is. Certainly we have built prototypes of these AI analysts that can remove 60, 70 percent of the work that is needed in actually diagnosing problems.

It’s pretty free form, but you can tell a language model, “These are my attributes. Oh, go call Cortex Analyst with all of these parameters, get the output, take a look at it and then tell me what I should do next.” So you can begin to automate some of it so that this is actually useful. So you can do things like that, but a much more open-ended problem of here are 100 different things, incomparable things you can do and how do you judge and how do you prune, I think that’s the part I honestly don’t have good intuition for.
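The constrained “slice it by 10 dimensions” exercise is easy to make concrete. The sketch below compares a metric across two days for each dimension value and surfaces the slice with the biggest drop; the data and dimension names are invented for illustration, and a real agent would hand this step to something like Cortex Analyst.

```python
# Sketch of the constrained slice-and-diff analysis an AI analyst
# automates: for each dimension value, total the metric change from
# one day to the next, then surface the slice that dropped the most.

from collections import defaultdict

yesterday = [
    {"country": "DE", "device": "mobile", "revenue": 100},
    {"country": "DE", "device": "desktop", "revenue": 80},
    {"country": "BR", "device": "mobile", "revenue": 90},
]
today = [
    {"country": "DE", "device": "mobile", "revenue": 60},  # Easter in Germany
    {"country": "DE", "device": "desktop", "revenue": 78},
    {"country": "BR", "device": "mobile", "revenue": 88},
]

def worst_slice(before, after, dimensions=("country", "device")):
    deltas = defaultdict(float)
    # Subtract yesterday's revenue and add today's, per slice.
    for rows, sign in ((before, -1), (after, +1)):
        for row in rows:
            for dim in dimensions:
                deltas[(dim, row[dim])] += sign * row["revenue"]
    # The most negative delta is the likeliest culprit.
    return min(deltas.items(), key=lambda kv: kv[1])

print(worst_slice(yesterday, today))
```

Because the search space is just dimensions times values, the problem stays structured, which is exactly why this kind of agent analyst is tractable while fully open-ended planning is not.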

Sonya Huang: Totally. I’m gonna ask about search in a different sense, if that’s okay. You obviously have an incredible point of view on search, given your time at Google and at Neeva. And it seems like right now the consumer world is watching excitedly and nervously about, you know, is there going to be a new kind of search king crowned? I’m curious your take on the whole kind of AI search space right now.

Pat Grady: How about a hot take on Perplexity? Do you have a hot take on Perplexity?

Sonya Huang: [laughs]

Sridhar Ramaswamy: Look, I’m happy for Perplexity. And it reminds you again that right time, right place matters a lot. At Neeva, which converged onto a view of what search should be that was very similar to Perplexity, we were just two, three years early. And timing ends up being everything. You can think of Perplexity as like a consumer manifestation of how we want to deal with information. Let’s face it, I want to look through an eight-page doc to find the two lines that I really care about said no one. But that’s search.

And so in that sense, it’s absolutely the right place. I think the more important question is the business of search, which is carefully preserved with business contracts, not with consumer choice. Consumer choice is fiction in a whole bunch of things that we do. We eat what’s put in front of us and we will search with the default search engine that came in our browsers. We might resist it, but in aggregate as humanity, that’s the reality of the world. And so I would say that that is the bigger challenge, because search is mostly locked up by a few players that control the entry points. But I think that’s the fundamental problem, which is it is very difficult to break into the business of search. Consumers don’t like doing stuff.

Pat Grady: And this also gets to one of the kind of broader questions in the world of AI right now, which is incumbents versus startups. And historically, the battle is can the incumbents with distribution build cool products before the startups with cool products build distribution? And I think search is a great example of that. You might have the coolest products in the world. It’s awfully hard to change consumer behavior.

Sridhar Ramaswamy: That’s right.

Pat Grady: AI is an interesting test case for this, because so much of the coolness of the products is available through the open-source world or through third-party models. And so it feels like it might be a scenario in which incumbents are advantaged versus the startups. But do you have a point of view on that?

Sridhar Ramaswamy: I would take two different lenses to this one. One is what you said about models, open-source models plus players like Meta that basically have infinite budgets and are willing to open-source models. I think the world of creating models from scratch, unless you have an attached hyperscaler or an attached business, looks very, very hard.

Pat Grady: Yeah.

Sridhar Ramaswamy: And you know, so I think, as I said, I hope this doesn’t go to, like, only three GPT-5-class models that the world has, because I think that’s a bad ending for the world. So I would definitely say that foundation model companies need a strong business to accompany them. It can be a product; like, I think OpenAI has created a pretty solid product. It’s not just a foundation model.

Pat Grady: Yeah.

Sridhar Ramaswamy: I think that’s one thing to keep in mind. I’d answer your second question of sort of disruption/innovation from a historical lens. I think of every generation of Silicon Valley companies as learning from the previous ones. They are smarter, they know the ways in which things can be disrupted, and they lean in pretty heavily.

We all know, for example, the IBM to the mid-range computer sort of disruption, and then the DECs and SGIs of the world then getting disrupted by the Microsofts of the world. And then the web coming along, leading to the rise of companies like Google, or mobile. I would say that in each and every one of these transitions, powerful incumbents with very large pockets have shown an ability to lean in sooner, lean in faster.

At Google, for example, when I was there, we leaned in very heavily into home assistants because Alexa was going to take over the world. That was going to be the way in which you and I and everybody else searched. We were terrified, and we put a pile of money into it and nothing came of it. And it didn’t matter. Why? Because the cost of a disruption is way higher than the amount of investment that you have to make.

I would say now this is generation five or something to that effect. I’d say all the incumbents are very aware of what can be disrupted and they lean into it. There’s a bunch of strategic thinkers. As I told you, I think of AI as basically shuffling the tiles on enterprise software. And a part of me goes, like, “No way. Snowflake is going to be leading the charge when it comes to AI, not waiting for it to develop.” But I think you see every enterprise AI company lean in the same way. And so this to me would be the question about how much disruption is AI going to drive in consumer software. Certainly there’ll be new categories. To me, if I were a startup, I’d feel a lot more comfortable that I’m creating a new category. Image creation, like, done at a mass scale, clearly amazing. But the same goes for videos, same goes for voice. There’s a bunch of specialization that you can do here, adapt them to marketing. New things feel like a much safer bet in the AI world than take your pick. I can do XYZ faster because I am AI enabled. I don’t think of that as having a whole lot of legs.

Pat Grady: Yeah.

Sonya Huang: Do you think ChatGPT has a chance of becoming the next Google? And to your point on consumer choice being a mirage and business deals are where this stuff gets locked down, like, I’m curious what you think of the Apple-ChatGPT deal.

Sridhar Ramaswamy: I think ChatGPT—I mean, the phone is a pretty interesting place. To me the phone, because it’s a controlled environment, actually offers enormous potential for consumers. I tell people something as ridiculous as copying, I don’t know, an address, like, from your calendar or a piece of email over to Uber. So dumb, so hard. You know, like, you would think Siri would, like, do this, copy the address from this email from Pat and stick it into Uber so I can get an Uber.

So to me, I think there’s a huge amount of potential again, in mundane applications, and because the mobile ecosystem is a pretty closed one where Apple can mandate things like, you must have APIs that make it possible to access your functionality using language models, or else you might not get any traffic, that sounds like a pretty good incentive for everybody to kind of get in line. So I think there’s a huge amount of potential there. I honestly wish there was more innovation in this space because again, all of this is super doable technology. You and I can argue about, should this be done in the cloud, what can be done on the phone? But, like, as a consumer, do you care? Like, we have great connections. I’m kind of like, if this thing works only when I’m connected to the internet, I’ll take it. And so to me, those are sort of—those are details.

I actually think ChatGPT is an amazing product. There’s underlying technology, but in so many different ways, they’ve actually created a stunningly beautiful product experience that spans the gamut from—you know, they’ve turned pretty much, like, visually illiterate people like me into budding artists. I tell people it’s like, “I’m good with words. I can talk all day long, I can write all day long.” And the magic that I can do with ChatGPT is truly amazing.

That or even, even things like I, you know, for example, like, I’m on this language kick, I’m learning Hindi. And at some point I was like, “Oh, I’m struggling with these numbers,” but off comes a prompt that says, “Hey, I want a CSV that translates numbers, just a string of numbers to Hindi. And can you do that? Can you just give me a CSV file that I can import into Quizlet that literally is faster for me to type than to describe to you?” I type it in, out comes a CSV file in 10 seconds, I download into Quizlet, I have a quiz.

And so pretty much everything that I used to do with Python scripts on structured data, I just do, like, with English. You just upload the CSV file and you’re like, “Oh, add these two columns, do this other thing, format it into this nice table and get it out for me.” It’s magic. So I think there’s absolutely a there there in terms of, like, is it a great product and a great business. But, you know, being the king of search is like, a few more zeros. They don’t come easy to people.

Closing questions

Pat Grady: [laughs] Yeah. All right, should we close with a couple of quick fire questions? Rapid fire questions?

Sridhar Ramaswamy: Yeah, let’s do it.

Pat Grady: Okay. Who do you admire most in the world of AI?

Sridhar Ramaswamy: Who do I admire most in the world of AI? I admire the people that are, you know, like, working on things like foundation models that are able to do it on the cheap without the infinity of resources. So for example, people like Arthur or Danny, I think they’ve gotten—Danny [inaudible]—I think they’ve gotten just, like, a remarkable amount of things done. Or from our own team, folks like [inaudible]. To me, they represent so much creativity because I go and tell them, “Limited budget, and what can you—you know—what can you do?” I think there are a set of just like amazing, earnest people that are driving research under tight constraints. So there’s obviously lots and lots of people, but it’s the doers that are doing the work imagining our future that I’m a huge fan of.

Sonya Huang: What’s your favorite AI application?

Sridhar Ramaswamy: ChatGPT, by far.

Pat Grady: Easy one.

Sridhar Ramaswamy: Easy one. Just the utility that I get from it day in and day out is just truly remarkable.

Sonya Huang: Okay, follow up then. What’s an AI app that you wish existed?

Sridhar Ramaswamy: Like, an actual talk-to-your-phone that can actually mediate between apps. That would be super cool, because remember as I said, just flipping between applications, doing very little things, such a pain.

Sonya Huang: Yeah.

Pat Grady: All right, we’re gonna end on an optimistic question. What is the best thing that can happen in the world of AI over the next five or ten years? What would you be most excited to see coming out of the world of AI?

Sridhar Ramaswamy: Software, which you can think of as encoding our thinking, capturing our ability to think and act in real-world situations, clearly has been transformational over the past 50-plus years. To me, AI as an enabler of access both to the act of creating software and to using software, for all of the people in the world, would be a significant step up.

And as I said, I don’t think it’s like lots of fancy new technology that you need. The newer technology can certainly help, newer classes of applications. You know, I was very proud of the fact that we put Google search, thanks to things like Android, into the hands of pretty much every human being on the planet. You know, you can be cynical about technology, but it’s a genuine step forward for humanity. To me, just AI models as the new layer between humans and software, and software and software, is actually a significant step forward just in having this functionality be vastly more accessible to lots more people. As I said, both in the creation aspects, but also in the consumption aspect. I think that’s a pretty cool thing to look forward to.

Pat Grady: Awesome. Thank you, Sridhar. Thanks for doing this.

Sridhar Ramaswamy: Thank you, Pat. Thank you, Sonya.

Sonya Huang: Thank you.

Mentioned in this episode

Cortex Analyst: Snowflake’s talk-to-your-data API

Document AI: Snowflake feature that extracts structured information from documents