
Palo Alto Networks’ Nikesh Arora: AI, Security, and the New World Order

Palo Alto Networks’ CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place before giving AI agents “arms and legs.” No matter the model, deploying applications for precision use cases means superimposing better controls. Arora emphasizes that the real challenge isn’t just blocking threats but matching the accelerated pace of AI-powered attacks, which requires a fundamental shift from prevention-focused systems to real-time detection and response. CISOs are risk managers, but legacy companies competing with more risk-tolerant startups need to move quickly and embrace change.

Summary

Palo Alto Networks CEO Nikesh Arora leads the world’s largest cybersecurity company. He emphasizes the critical balance between innovation and responsibility, particularly around security and enterprise adoption of AI. High-consequence AI applications need better models and more precise domain data. The job of security is to monitor the data infrastructure and find vulnerabilities faster than the bad guys.

  • Domain knowledge and data are the new moats: Building specialized AI requires proprietary data and deep domain expertise. While general models trained on open data can serve broad use cases, mission-critical enterprise applications need precision models trained on high-quality, industry-specific data. The winners will be those who can access and effectively leverage proprietary data assets.
  • Security must be built in, not bolted on: As AI gets arms and legs—actual control over systems and processes—security becomes paramount. Companies need to implement robust guardrails, monitoring and controls before deploying AI in production. This includes inspecting inputs and outputs, preventing unauthorized access and maintaining human oversight of critical systems.
  • Speed of innovation requires disciplined execution: Moving fast in AI requires both a clear vision and detailed execution plans. Leaders must identify the North Star, ensure plans are achievable with adequate resources and systematically remove obstacles to execution. Without this discipline, teams end up climbing different mountains with inadequate tools.
  • Build for high-consequence environments: Enterprise AI applications cannot afford the error tolerance of consumer models. When AI mistakes can bring down infrastructure or enable bad actors, models need much higher precision and reliability. This requires extensive testing, monitoring and fallback mechanisms.
  • Balance innovation with control: While AI offers tremendous opportunities, responsible deployment requires balancing innovation with control. Companies should experiment aggressively in contained environments while implementing appropriate guardrails before giving AI direct control over critical systems or processes. Managing risks enables companies to capture AI’s benefits.
  • Understand enterprise readiness: Preparing enterprise data infrastructure for the deployment of security systems is largely identical to what’s needed to deploy AI systems. This aspect of digital transformation still represents a significant hurdle for many businesses. Still, as Nikesh puts it, “It would be irresponsible not to experiment.”

Transcript

Nikesh Arora: Like I have a principle that I always joke, even on all hands. I said, “I’ve never met a person who comes to work to screw up.” I wake up in the morning. “Let’s go, sunshine. It’s time to go to work. Let me see how badly I can do today.” Everybody walks in with the right attitude. It’s something that happens at work that we create, that causes the unintended outcomes. It’s not the person who walks in. If you found the right person with the right domain knowledge, the right intelligence, the right attitude, then the rest is upon us.

Pat Grady: Today on Training Data, we have a very special episode with Nikesh Arora, the CEO of Palo Alto Networks. Since joining Palo Alto in 2018, Nikesh has built it into the largest and most valuable cybersecurity company in the world, with 70,000 customers and more than $120 billion of market value. Prior to Palo Alto, Nikesh spent a decade at Google as Chief Business Officer while the company grew from $3 billion to about $65 billion in revenue. Nikesh is an extraordinary CEO with an inquisitive mind and a wonderful sense of what is happening in the world of AI, thanks to being in the center of it with Palo Alto and all of their customers. Please join us for a wide-ranging conversation about AI, its impact on security and what excellent leadership looks like. We hope you enjoy.

DeepSeek and the new world order

Nikesh, thank you for joining us on Training Data. So we emailed you and asked you if you’d join us on the show and your response was, and I quote, “As long as we can talk about DeepSeek and the New World Order.” Let’s start there. Tell us more.

Nikesh Arora: Pat, we all have our interpretation of AI, and there’s a bunch of us trying to figure out and rationalize this in some sort of mental framework. So like everybody else, I’ve had my own, and I think from my perspective, what we’ve seen in the last 12 months has been phenomenal. We have people trying to build effectively a brain of some sort, a brain with immense capacity to remember everything, to process everything, and do pattern recognition, which is kind of like my interpretation of an LLM. Now that brain, because it’s being trained on data that’s out there, is susceptible to reaching the wrong conclusions depending on the data it’s using to train itself. So this is not a secret. And we hear of that in various contexts as hallucination, or not having the right answer because it’s never seen something before. And that’s fine. You can call it the early brain, but at some point in time, these things are gonna become very smart. Possibly as smart as you, Pat.

Pat Grady: [laughs] It would take several more years to hit Sonya’s stage.

Nikesh Arora: Exactly. Right. So at that point in time, I think we all have to start getting a little worried. So the question is: How much money does it take to build this brain, one, and two, how can all of us use it effectively? I think we can all use it effectively today in certain use cases as we’ve seen out there, whether it’s creative use cases or search use cases or data aggregation use cases or data regurgitation use cases. At some point in time, you’re going to take this brain and give it arms and legs and let it do stuff. That’s where things start getting dangerous. And we’ve seen examples where people gave these brains the right to do stuff too soon, too early, where they started giving away free cars or refunding airline tickets, which is not a good idea because that was their version of hallucinating and giving stuff away.

But on the other hand, people are sitting in cars where these brains have arms and legs that are driving us around without a driver. So there are examples where there are precision use cases which are narrow and task specific where we are letting these things get access to it. So I’m sorry for the long preamble, but the whole notion of the new world order was—and I don’t have an opinion on whether it cost $6 million or more. My opinion is that if somebody built a brain cheaply and made it available cheaply, it just expands the opportunity for a lot of these startups, a lot of people to try and deploy that brain to do various tasks. And that to me is a major shift in what has been sort of the mainstay of this industry where we all thought you spent a lot of money to build amazing models. And it looks like there could be task-specific models which could be built a lot cheaper.

Pat Grady: And you mentioned some of the hallucinations and the attempts to jailbreak these models or prompt inject these models. There’s a report that came out a few days ago about DeepSeek R1 that said 50 out of 50 prompt injections worked. So basically a hundred percent success rate on attacking the model. Is that a DeepSeek thing? Is that an open source thing? Do you have a perspective on what the implications of that might be? Maybe it’s not as simple as six million bucks gets you the same thing you get out of OpenAI.

Nikesh Arora: Well, the question is, which one do you want, right? And, like, at the end of the day, every model builder is putting a bunch of guardrails around their model. These models all start out raw. We remember the early versions of ChatGPT and Gemini. I think it was even called Vertex. Was it called that? It was called something else before Gemini. And those things were open to prompt injection as well. So there were versions of these models which had to have guardrails built around them. And that’s what costs money: building guardrails. And the guardrails initially were skin deep. As you know, we’ve read these phenomenal stories from the early days where people were able to jailbreak them, get around them and get models to start doing crazy stuff.

So I think we will see more and more guardrails, and simpler and simpler attempts being blocked. I think there are still sophisticated things that can be done to these models. Even the—in your mind—more expensive models have loopholes or side doors which can be used to attack them to some degree. And we’ve seen that happen in the past. So yes, perhaps DeepSeek is not as guardrailed as the others, and perhaps it was built cheaply. But at the end of the day, whichever model it is, when you deploy it for a precision use case and give it arms and legs, it doesn’t matter what guardrails the model comes with; you will have to superimpose better guardrails and controls around it. This is where people like us come in, where we say it doesn’t matter what you got, I’m still gonna put a Palo Alto firewall and put a straitjacket around this and make it only respond to task-specific stuff. I.e., if the model is designed to improve your manufacturing process, you can’t talk to it about rewriting Shakespeare.
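
To make that “straitjacket” idea concrete, here is a minimal, hypothetical sketch of task-scoping a model: a wrapper that refuses anything outside the deployment’s narrow scope before the prompt ever reaches the model. The `TaskScopedModel` name, the keyword-based scope check and the `call_model` stub are illustrative assumptions, not a description of Palo Alto’s product; a real guardrail would use a trained intent classifier and a policy engine rather than keyword matching.

```python
# Hypothetical sketch: scope a general-purpose model to one task.
# `call_model` is a stand-in for whatever LLM API the deployment uses.
from dataclasses import dataclass, field

REFUSAL = "Out of scope for this assistant."

def call_model(prompt: str) -> str:
    return f"[model answer to: {prompt}]"  # stub for the underlying LLM

@dataclass
class TaskScopedModel:
    # Topics the deployment is allowed to discuss (illustrative list).
    allowed_topics: set = field(
        default_factory=lambda: {"manufacturing", "yield", "defect", "throughput"}
    )

    def _in_scope(self, prompt: str) -> bool:
        # Toy scope check; a production guardrail would classify intent.
        words = set(prompt.lower().replace("?", "").split())
        return bool(words & self.allowed_topics)

    def respond(self, prompt: str) -> str:
        if not self._in_scope(prompt):
            return REFUSAL  # blocked before the model ever sees it
        return call_model(prompt)

guarded = TaskScopedModel()
print(guarded.respond("What is our defect rate trend?"))  # forwarded
print(guarded.respond("Rewrite Hamlet as a limerick."))   # refused
```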


Pat Grady: Let’s talk about that for a minute. What’s in scope and what’s out of scope for Palo Alto Networks as it relates to securing AI?

Nikesh Arora: Well, from our perspective, look, we’re seeing two interesting use cases. One, we’re seeing a lot of people, whether employees, kids or users, using AI in some way, shape or form to augment their day job. And you can call it “augment” for now, and maybe it’ll creep up and do more and more of your day job. But it’s being used as human augmentation for now, right? Because we’re not giving it control. I’m not telling an AI agent, “Go write me a paper for my class,” or telling an AI agent to write me a blog. And possibly, one of these days they will. But for now, it’s being used for human augmentation, and the general fear in the enterprise is: my employees are taking proprietary data and putting it in some model, it’ll be used for training, and over time it’ll either fall out of copyright, get stolen, or become part of a general knowledge base which is no longer proprietary.

So we have a use case where we can intercept data that’s being used by employees, or AI that’s being used by employees, and provide visibility to enterprises and provide controls so they can stop their employees from going and using AI models or AI apps without any controls. That’s one use case. It’s kind of interesting: we see a lot of companies who want their employees to use AI, but they want them to be able to do it in a controlled fashion. The other, more interesting use case: I haven’t found a company which is not experimenting with some sort of AI project, whether it’s as simple as a customer service chatbot, which seems to be the most popular example, or some sort of workflow automation capability, which is another example, to the extreme where people are slowly edging toward giving it control over certain control systems which may not be mission critical, but they’re experimenting there.

Pat Grady: Mm-hmm.

Nikesh Arora: In all these scenarios, the biggest fear is the model runs amok, the model gives the wrong answer, the model takes control or somebody hijacks the model. All those are scenarios which customers are wary about, which is kind of, like, understandable. In that scenario we have a product which is effectively—we formally call it the AI firewall—which inspects anything going into the model and anything coming out of it. It’ll make sure the model doesn’t have back doors, that nobody can access it, that data is not being sent out of the model somewhere else. You can run it on prem, you can run it in your protected cloud instance. So those are kind of the two use cases we’re seeing. The behavior of the model is the responsibility of the people generating the model. Our job is to make sure their model doesn’t get hijacked, doesn’t get intercepted, doesn’t get taken over or manipulated so that people lose control of their quote-unquote “AI brain.”
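
As a hedged illustration of that inspect-everything-in, inspect-everything-out idea (a sketch, not the actual AI firewall, whose internals aren’t described here), the code below screens inbound prompts for known injection phrasings and redacts sensitive-looking strings from outbound responses. Both pattern lists are assumptions made up for the example.

```python
# Illustrative input/output inspection proxy in the spirit of an
# "AI firewall"; the patterns and sensitive-data list are assumptions.
import re

INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"reveal your system prompt", re.I),
]
SENSITIVE = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),    # AWS-style access key
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-style number
]

def inspect_inbound(prompt: str) -> None:
    # Anything going in: block known prompt-injection phrasings.
    for pat in INJECTION_PATTERNS:
        if pat.search(prompt):
            raise PermissionError("blocked: prompt-injection pattern")

def inspect_outbound(text: str) -> str:
    # Anything going out: redact data that must never leave.
    for pat in SENSITIVE:
        text = pat.sub("[REDACTED]", text)
    return text

def firewall_call(prompt: str, model) -> str:
    inspect_inbound(prompt)
    return inspect_outbound(model(prompt))

# Usage with a stand-in model that accidentally leaks a key:
print(firewall_call("Summarize today's alerts",
                    lambda p: f"Summary for: {p} (key AKIAABCDEFGHIJKLMNOP)"))
```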

Real threats vs hypothetical risks

Sonya Huang: Can you say a little bit more about, you know, what are the real threats from AI versus the perceived or the hypothetical risks? Like, I remember back when self-driving cars were still a little bit of a pipe dream and everyone was saying we’re gonna have these adversarial images, QR codes in the roads that are gonna make the cars, you know, become weapons and things are gonna go crazy. And that ended up being, like, a very academic, theoretical risk. It feels like there’s some of that happening in LLM-land. Like, what do you think are the made up academic risks and what are the very real risks you think are going to—you know, where AI is actually going to really help the bad guys and we have to protect ourselves?

Nikesh Arora: Well, look, there are these two scenarios, right? A scenario where the bad guys are just going to use the LLMs to attack us faster, right?

Sonya Huang: Are you seeing that happen already?

Nikesh Arora: It is already happening. Like, if there’s a critical security incident or vulnerability in a product, you can go to certain jailbroken models or open-source models out there which will give you a recommendation on how to exploit the CVE, because there are 3,000 models on Hugging Face. You can pick a model which hasn’t been given guardrails or, effectively, any morals in the context of a brain, and say, “Hey, here’s a CVE. What are the five steps you’d take to protect it? And what are the five steps that bad guys could use to attack it?” And it says, “Oh, by the way, watch out for these five things bad guys could do.”

So there are models out there that can actually give you a recipe to figure out how to exploit a CVE. Or you can actually tell it, “I tried to attack a customer with option A, I tried option B. None of them work because it gives this return response.” And it says, “Hey, how about you try option C?” So there’s a whole bunch of ways that these models can be used, because they’re very helpful right now, right? They’ll try and solve your problem. And there’s a risk that—actually, not just a risk, it’s actually true right now. And what that does is it reduces your mean time to attack and exfiltrate data, or mean time to breach. Which means the only way to solve that problem is to be as nimble, as effective and as quick as the bad guys are. Which, as I always say, is kind of an imbalanced problem. They have to be right once, we have to be right a hundred percent of the time, which means they might need this sliver of data to attack you. We need the entire corpus of enterprise logs, enterprise data from every IT system to be able to understand where there may be anomalous activity which is being driven by AI.

Pat Grady: So does that mean right now AI is at the margin more helpful for the bad guys than it is for the good guys?

Nikesh Arora: Well, it depends. We can always sell our book and tell you that if you’ve deployed our XSIAM product, we can be equally effective and equally helpful and thwart the bad guys. But, you know, they’re not fully deployed. Not everybody has it. So yes, there’s a possibility that it just has made the ability to attack much faster for the bad guys, right?

And that’s kind of a real threat, it’s not a perceived threat. And I think if you play the movie forward and say, “Let’s abstract ourselves from today, and this is version one or version two of AI,” you know, five years from now, everything will be happening on a real-time basis. Every bad actor or bad LLM agent will be able to attack an enterprise infrastructure if it’s not fully secured. And there’ll be agents running around the infrastructure trying to make sure that every loophole, every door, every window is locked and constantly monitored. So you can imagine the battle of the agents on either side.

I don’t think it’s infeasible. It’s possible. But to get there, there’s going to be a serious upheaval required of the enterprise data that exists in a company. Which by the way, is not unlike the fact that to get effective AI for organizations we’re going to have to have a lot of good data to automate or manage or run businesses. So I think that’s kind of where we’re going to end up.

In terms of the other part, Sonya, you asked about the perceived versus real threat. Look, think about it this way: Let’s assume—and I haven’t heard you guys talk about this, but I’m guessing you agree—that at some point in time these models get smarter and smarter and they’ll be more and more capable. So let’s assume that’s going to happen. They get very capable. This thing is the equivalent of a PhD researcher from pick your favorite university and can do drug discovery. Now you’ve trained it, you’ve given it all the data that exists in the enterprise. It’s all proprietary. It’s all the drug data for Alzheimer’s, Parkinson’s, you pick your favorite research project that you want to do, and you ask the model or this brain to give you an antidote to various diseases. It could be amazing for society. Now the question is: In the wrong hands, this trained brain could also be asked to make a virus, to create that situation, right? Create a bioweapon. It’s possible. This brain has no guardrails. You’ve trained it on all the data, it has all the knowledge that you need to have. Then the question is: Can I make sure that this brain cannot be taken over by the wrong people, that it doesn’t fall into bad hands?

AI regulation or nationalization?

Pat Grady: So just for fun, if you were supreme ruler of the universe and you had a magic wand.

Nikesh Arora: Yes.

Pat Grady: And you could determine exactly what regulation was going to apply to this hypothetical, what sort of regulation would you craft?

Nikesh Arora: You know, Pat, this is an interesting debate, and I had a debate about this with a very, very smart person who’s involved in some of the regulatory aspects of this. Look, at the end of this, there will be two versions, I think. One version is critical systems, where before giving AI control of critical systems, you’ll have to go through a serious certification discovery process with some part of the US government, right? You cannot give AI the control systems for shipping routes and cargo containers, which can crash and burn, or control of the entire electrical grid of the United States. You can’t give it to an AI model, because you need to have controls in place, and you need to be able to have a conversation around what the fallbacks are and what the controls are. So I think there will be a set of classified activities which will need some degree of consultation, some degree of certification validation. It’s kind of like, you know, the FDA does drug approval. So there’ll be some version of, you know, AI approval for anything which can have critical, irreversible impact if you give control to AI. And that’ll have to be some sort of certification mechanism.

And I think where it is not as fatal, where it’s not as critical, perhaps, you’ll have some degree of self-responsibility, you know? You make a bad car, people have a problem with it, you’re responsible. Not every car goes through an inspection process, but there is a tremendous amount of accountability on the car companies: if they don’t have seat belts, if they don’t comply with regulations, they’re responsible for the bad outcomes. In the same way, if you deploy AI in a bad way in your company and give it arms and legs and control, then you’re responsible. There’ll be some degree of self-policing, because it is impossible for any regulatory authority to create an inspection system over this amount of compute and data which can get it right every time. So there will have to be self-policing and self-accountability in there, just the way it exists today in many industries.

Sonya Huang: Nikesh, do you think AI labs get nationalized in this—you know, your version of supreme ruler of the universe?

Pat Grady: [laughs]

Nikesh Arora: AI labs get nationalized? I don’t think so. I think the problem is—given that we’re living in a hypothetical—if it is true that a new model can be produced at a lot lower cost, in the single-digit millions or tens of millions, then the AI lab could be anywhere. It’d be impossible to find, discover and control. And part of the challenge is that these regulatory concepts are very difficult on a global basis today, because we live in effectively a world with no borders, even though I know borders are a whole different conversation. But conceptually, what’s stopping somebody from deploying $50 million in a server cluster in a country which has lax regulation vis-a-vis this stuff, and me building it there or somebody building it there? So yes, of course, if it’s a $500-billion AI cluster that is needed to build the world’s super brain in AGI, you can perhaps find a way of maintaining some degree of oversight on it. But if the answer is 20 million bucks and I can build a world-class model which is really smart, then I think all bets are off.

Irresponsible to not experiment

Pat Grady: Let’s say I am not necessarily a CISO, maybe a CEO. Let’s say I’m a corporate executive of some sort and I see the potential for AI, so I’m excited about trying to use AI, but I’m very scared. I’m very scared because I think that when my people use AI they’re just increasing the attack surface and making us more vulnerable. And I’m also scared because I think there are bad guys out there who are going to weaponize AI against us and sneak in in ways that they might not have been able to sneak in before. What would your advice be? You know, top three things that you would advise this person to do. What can people do to get the benefits of AI without exposing themselves to unnecessary risk?

Nikesh Arora: I think that would be ill-informed fear in my mind.

Pat Grady: Okay.

Nikesh Arora: I think there are perfectly fine use cases, which I’m sure you can enumerate and Sonya can, and a lot of people can, where you can run a model in a constrained on-prem or dedicated cloud cluster which cannot be intercepted or manipulated. In the end, that model is only useful if you put your own data into it. And if all you do is have the model generate responses, without giving it any control, you can run experiments. You can look at what the model produces and compare that to other things. You can do A/B testing and say, “Wow, the model says this, and my best researcher says this.” You can run experiments and understand the power of AI without giving it any control.

So I think that’s why the fear is a bit misplaced, because the model is not doing anything. It’s just trying to give you the outcome, and you can see if it’s faster, better. And both are possible, or one is possible, right? Some things happen faster, some things happen even better. So running A/B tests, being able to test it, is easily possible in today’s world without having any fear. I think it’s even possible to let employees experiment with it in a way that is not unmanageable.
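
A minimal sketch of what running experiments without giving the model any control could look like in practice: an offline A/B harness that scores a model’s answers against an expert baseline, where a wrong answer costs nothing. The string-similarity grading, the sample cases and the `fake_model` stub are all assumptions for illustration; real evaluations would use task-specific grading.

```python
# Hypothetical offline A/B harness: the model only answers, it never acts.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Crude textual agreement score between model and expert answers.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# (question, best researcher's answer) pairs -- illustrative data.
cases = [
    ("Which port does HTTPS use?", "443"),
    ("Expand the acronym CVE.", "Common Vulnerabilities and Exposures"),
]

def fake_model(question: str) -> str:
    # Stand-in for a model running in a contained on-prem cluster.
    answers = {"Which port does HTTPS use?": "Port 443"}
    return answers.get(question, "unsure")

for question, expert in cases:
    model_answer = fake_model(question)
    score = similarity(model_answer, expert)
    print(f"{question!r}: agreement={score:.2f} "
          f"(model={model_answer!r}, expert={expert!r})")
```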

I think where it starts to get more interesting, not dangerous perhaps, is when you start letting AI act on your behalf in whatever capacity. And that’s where I think any person, not just CEOs, anybody, would have to go through a rigorous amount of testing to see how it reacts in various circumstances, because depending on what you give it control over, it could have a significant impact on whatever product, service or business you’re running. That’s where I think it becomes more interesting. But I think for now, running experiments, running models which cannot be hijacked or manipulated, models that won’t run amok: it’s all possible today, and I think it would be irresponsible for companies to not experiment.

Pat Grady: Well put.

Nikesh Arora: Because I don’t think that this thing’s going away.

Pat Grady: Yeah. You may not know exactly how to get to the future, but you know if you do nothing, you’re going to get left behind.

Nikesh Arora: Pat, I learned about ChatGPT on a flight to India. I was going there to go speak at my alma mater. And I read about this thing. I was sitting at Dubai airport, not doing anything for two hours. I kept playing with it, and I rewrote my entire speech. I went and said, “You’re about to witness the biggest technological revolution.” Now, I didn’t say it before Jensen said, “This is the iPhone moment.” But more important, he’s got a bigger mouth and he’s a supreme commander of AI, so we’ll let him—we’ll attribute that quote to him. But that’s okay.

Pat Grady: [laughs]

Nikesh Arora: And I felt it was a seminal moment. And I came back and I said, “You know what? First things first. I have no idea about this thing.” So I called a bunch of my teams, like, “What do you guys know?” “What do you mean? Nothing.” A bunch of the important but uninformed. We were important, but all we had was an opinion. So the first thing I did is I put them all into a training room. I invited everyone from Thomas Kurian’s team to Matt Garman and his team to a bunch of startups. It’s like, just brain dump on us. And we did that for two days a month. We got people to brain dump. We had a bunch of our people come with ideas. We had 70 ideas people wanted to execute. We cut them down to seven and we started playing with it. We ran everything, every possible problem that you could run, with Vertex AI or with the first model of ChatGPT or the first model of Claude. We tried everything, we ran everything through. We had models running with AI, we had models running with semantic search. We were training with all kinds of data and we learned. We learned what is useful, we learned what is not useful. Now it’s doing some things by itself which we’d had to go jerry-rig, but we are partially informed. It’s better than being totally ignorant.

Pat Grady: What was the biggest surprise from those learnings?

Nikesh Arora: The biggest surprise? Well, the early version of this thing was pattern recognition, was data summarization, was—I’ll call it infinite memory, right? Once you train it with some data, it’s never going to forget it. Now there are use cases where I have 50 people solving the same problem, and depending on who answers the phone, they’re going to solve it differently. In this case, I improved the general level of awareness and knowledge for my entire team, saying, “Get it to tell you the answer and then work from there.”

So it did sort of lift the average intelligence or the average capability of the teams. And I think as it gets better and better, it’s going to shorten the time to answers. And at the end, I want to expose a lot of this stuff to our customers, so they can go solve this problem themselves. So my problem is, if you don’t start learning when every startup is learning, eventually the startups take your business, right? You’ve seen that in every technological revolution that we’ve run into, whether it’s the cloud, mobility, the internet, we saw it every time. And every time there were these large businesses—now we can call them legacy—with dominant market share, with every asset at their disposal which they could have deployed. And nobody who was competing with them with the new technology should have seen the light of day. But for some bizarre reason, every time you turn around, there was a Travis, there was a Chad Hurley at YouTube, and there was a Larry Page, and there was a Mark Zuckerberg and there was an Elon Musk. So the challenge is that if we don’t go embrace this as early as we can and learn while everybody else is learning, we run the risk that we’re late. It’s the law of unintended consequences.

AI has the opportunity to turn security on its head

Sonya Huang: Maybe on that note, one thing I’d love to understand: it seems like the biggest platform companies in security formed around platform shifts. Like, you know, the firewall, identity as a perimeter, maybe the cloud and CSPM. Do you think AI is a new platform-shift opportunity from a security point of view? And do you think a new security platform company—which could be you guys—emerges, or is it very much a similar set of tools that is going to serve the AI-first world?

Nikesh Arora: I think AI has the opportunity to turn security on its head. And the reason I say that is that security is a needle in a haystack problem, right? Because you don’t worry about it until you have to worry about it. So you suddenly wake up and get really, really smart very, very quickly because something’s happened in your infrastructure. And it’s just impossible to go from zero to a thousand, like, overnight because somebody calls and says, “Holy shit, there’s somebody in our infrastructure. They’ve exfiltrated a certain amount of data, or they are in the midst of exfiltrating data.”

And traditional security has been, I’d say, 95 percent at the border around prevention and five percent on detection and remediation, right? You buy a firewall, it inspects everything that’s coming in. You block a bunch of stuff. You buy some sort of remote access endpoint agent, you buy an endpoint XDR capability. And that all works because there’s a lot of prevention that happens in that process.

But the problem in breaches is it’s not what you prevent, it’s what you let in. And there are things like zero-day attacks which have never been seen before. If you haven’t seen it, you can’t prevent it. And the only way you figure out all that stuff is you ingest a lot of data, you look at it and look for anomalous behavior, right? You can’t rely on security signatures. So if you’re going to look at anomalous behavior, you need to be able to ingest all the data. You’ve got to look at pattern recognition and say, “Does this happen like this every time?” And say, “Well, I don’t know, but it looks like something different is happening.” So I think this whole notion of doing pattern recognition, ingesting a lot of data, analyzing it on the fly and looking for things is easily possible with—call it machine learning, call it AI, call it whatever you want to call it—but I think that’s the only way we can do this at real-time speed.
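
As a toy version of that anomalies-over-signatures idea (assuming nothing about how XSIAM actually works), the sketch below flags any ingested event count that deviates sharply from its trailing window. The window size, the z-score cutoff and the synthetic traffic are all illustrative assumptions.

```python
# Toy behavioral anomaly detector: no signatures, just "does this
# happen like this every time?" over ingested counts.
from statistics import mean, stdev

def anomalies(counts, window=24, z_cut=3.0):
    """Yield (index, count, z) where a count deviates from the trailing
    window's mean by more than z_cut standard deviations."""
    for i in range(window, len(counts)):
        hist = counts[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma == 0:
            continue  # no variation to measure against
        z = (counts[i] - mu) / sigma
        if abs(z) > z_cut:
            yield i, counts[i], z

# Synthetic hourly outbound-transfer counts; hour 30 mimics exfiltration.
traffic = [100 + (i % 5) for i in range(30)] + [900] + [100] * 10
for hour, count, z in anomalies(traffic):
    print(f"hour {hour}: count={count} (z={z:.1f}) -> investigate")
```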

Sonya Huang: What about what it means to be a security team, a CISO, a security practitioner in this new world? And, you know, we get 20 pitches a week right now for, like, the AI-powered SOC analyst or the AI SOC. What is your vision for …?

Nikesh Arora: We already have one, so you send them our way.

Sonya Huang: [laughs] What’s your view on how that evolves and what the end state is for kind of humans and security?

Nikesh Arora: Look, in security, at a very first-principles level, we sell two things, right? We sell a sensor which senses at the edge of your perimeter, whatever the perimeter is. Whether it’s your laptop, whether it’s your application, whether it’s your customer accessing your bank account. That’s the perimeter, right? The perimeter is the edge of your technological footprint. So we all sell sensors that sit at the perimeter and inspect. It’s like having a digital security guard at the perimeter. We all sell perimeters and we protect our perimeters. We inspect the perimeters, we block at perimeters, right?

And then what happens is that somebody’s written a bad app, somebody’s written a back door, somebody’s left a side door open by mistake, and people enter through there. So we sell sensors, we protect perimeters, and then we analyze data in the back end to look for any vulnerabilities that might have been created by the infrastructure that you have. That vulnerability could be exploitable in the future, so we look for potential exploits. And that’s kind of what we do. And the reason I tell you the story is that it means if I want to sell any AI-powered anything, I need to be at the points of data collection in the enterprise, because AI requires data.

So again, this is the old adage, right? I am in the best place to collect all this data and analyze it. And of course, that doesn’t mean anything, because in history, people who were in the best place got knocked to their knees and somebody else came and built something better, because they were lazy, sitting on their haunches. Now, the only thing is we don’t want to be lazy. We don’t want to sit on our haunches. We’re out there hustling as fast as we can—not as nimble as possibly in a startup, but we’re nimble enough as a company. We’ve done 27 products which are on the right of the Magic Quadrant, so we’re not shy. But I think every security company, every security startup is going to walk up to every customer saying, “I can build this for you.” The customer says, “Great. How do we start?” “Well, let me go deploy a bunch of sensors around your perimeter so I can collect the data.” Holy shit. I already got a bunch of you guys in the industry, you’ve got sensors out there. Now what do you want me to do? “Then give me all your data.” I’m like, “Wait a minute, you want all my data? Who are you again?”

Pat Grady: [laughs]

Nikesh Arora: That’s kind of the risk you run into: this is a large data problem, and large data problems are harder to solve as a startup. Not to say it’s not being done. There are people out there raising $500 billion, but not everybody can do that in security.

Sonya Huang: Fair enough. Do you think security teams are comfortable giving AI arms and legs, so to speak?

Nikesh Arora: Oh, no. I think they’re petrified.

Sonya Huang: Do you think that flips at some point, and when?

Nikesh Arora: Well, I think most security people aren’t asked, right? I mean, Waymo wouldn’t exist if they had talked to a security guy. Or Tesla FSD possibly would not exist. Some security guy would say, “Are you crazy? You’re giving control of the car to the car?” All kinds of bad things could happen, right? So from a security perspective, these are all bad things that can happen. I mean, security leaders are, for the most part, risk managers, right? They’re risk managers. They’re trying to understand what the business need is and how to deliver the business need with the least amount of risk possible.

The safest room in the world is one with no windows and no doors, but it’s not very useful. So you’ve got to let doors and windows be created, which means you’re managing risk. So security people are risk managers. They sit down with the business, understand what potential risk something causes. They’ll give you some ideas as to how to make sure that you protect against that risk. Then they’ll set up a whole bunch of safeguards. Let’s say, you know, gate one, gate two, gate three: if it doesn’t get stopped here, it gets stopped here, or here. And then off to the races. So I think security people will allow the arms and legs. They have to, because that’s kind of the crying need of the hour. I think the question will be: What kind of security tools do we have in place to create those protections, so that customers can comfortably go ahead and use these capabilities? But that’s been true with every technology.

Relentless inspection

Pat Grady: Our partner, Jim Goetz—and for any listener who’s not familiar, Jim has been involved with Palo Alto since formation, and Jim has a creative mind. One of the things that Jim mentioned was that after you came in about seven years ago as CEO, the innovation engine at Palo Alto really started to pick up. And I think we see that today also with how quickly you guys have pounced on AI. I guess the question—maybe two questions. Question one: If you had to rate yourself, if you had to rate Palo Alto on agility, nimbleness, ability to respond to market conditions—I know you’re a tough grader, so you can’t give yourself an A+—how would you grade yourself? And then question number two: You do have all the advantages of scale and data and distribution, and being at those points where you need to collect the information to do whatever detection and remediation you need to do. But it’s hard to get a big organization to move fast enough to respond to the market. So question one, how would you grade yourself? Question two, how do you drive agility at this scale? Like, just practically speaking, what do you do to make that happen?

Nikesh Arora: You know, first things first: I’d give us a seven or seven and a half on a scale of 10 in terms of agility, because we have about 15,000 people, 5,000-6,000 people on our product side. So there’s a lot of stuff, a lot of complexity, a lot of legacy stuff that has to be brought along, a lot of stuff that has to be ticked and tied to make sure that these things work. And, you know, I think part of the challenge is that we have an installed base of 70,000 customers, right? If any tweak you make impacts 70,000 customers and brings their infrastructure down, you lose your license to operate. So it’s not like we can innovate, throw shit at the wall, see what sticks, go with that and ignore the other stuff.

So we have a serious responsibility to make sure the stuff that we build and put inline keeps performing and doesn’t bring down anyone’s infrastructure, because the best security is inline. We have to be able to watch what’s going on. Inline security has the property that if it doesn’t behave, it can impact your infrastructure. So we have a very high responsibility from an availability perspective, and for not disrupting our customers, and we have to apply a higher precision standard as it relates to inline security. From that perspective, I think seven and a half is not a bad place to be. We probably were at a three or four seven years ago as an industry—I don’t even mean just Palo Alto. I think as an industry it was a three or four.

And I see the industry has improved its agility. If I look at some of the newer players, they’re moving faster. They’re not sitting back anymore, because they see the playbook for the future is not one where you let other people come by in the new swim lane and say, “Oh, nice to see you. Congratulations, great job.” Now it’s like, “Holy shit, how did they get there? We’re going to go chase them down.” So I think the industry dynamics have changed.

In terms of how do you drive agility: as you know, possibly from talking to Jim and from talking to us, we have no sort of qualms about going and finding people who are doing it amazingly well and embracing them and saying, “You got this figured out. Let’s go do it together, we’ll run fast.” Right? Sometimes companies get trapped in this idea that “I have so many resources, I can take them down.” They don’t understand there’s a team of 50, 100 motivated people, funded by people like you, who are out there running at light speed, who’ve built an amazing product. And if they get traction, then they’re your competition every day in the market, and they get better and better and stronger.

So the question is: When’s the right time to say, “Oh shit, let’s embrace them. They’ve got less resources but still managed to kick our ass. Let’s go make them part of our team”? We’ve done that 19 times, as you’ve seen. So we’re not shy about embracing innovation if it doesn’t come from within. Having said that, I was looking at it the other day: I think more than half of our products are made in Palo Alto, right? Not acquired. So it’s not like we only have one strategy. We do both, because in some cases building on our platform is a lot easier from a go-to-market and deployment perspective than buying something and spending time integrating it. We’ve gotten to the point of scale where it’s more important for us to innovate on our platform than to just go out there and willy-nilly look at the fastest innovator and try and stick them onto our tech platform.

So I think from that perspective it’s that constant balance: What do you buy, what do you build? How do you embrace somebody else doing it better? And then you’ve got to be nimble and say, “You know what? I am going to get some stuff wrong.” The question is, when you get punched in the face, how quickly do you recover, right? Don’t let them count to 10. So that’s kind of how you maintain agility. And then the only other thing is what I call relentless inspection. Relentless inspection of your go-to-market capabilities, of your deals. Relentless inspection.

Pat Grady: What form does that take? Like, what’s the sort of thing you might do in one of those relentless inspection conversations that somebody else might not do?

Nikesh Arora: Well, I’ll give you an example. For the longest time, I kept seeing us doing really well in certain things, and our teams would create all these incentive programs to drive more behavior, to get people to sell. And I had my sales leaders tell me that everybody has an account plan. I said, “These things sound like very interesting things. I should take a look at one.” So one fine day, possibly a year and a half ago when we were having a tough quarter, I said, “Great, here’s what we’re going to do. We’ll start from customer number one and keep going.” And they showed me the account plans. So what does that mean? We send them five slides; they fill in those five slides, show up on a Zoom call and explain those account plans to me.

And I’ll fast forward. I’ve probably been through 750 of them so far in the company. I did about 15 yesterday. And it’s like theater now. There are 500 people dialed in from across the company.

Pat Grady: Wow!

Nikesh Arora: They all get to watch. Any salesperson at Palo Alto can dial in and watch an account review in process. Because for me, it’s basically them learning how to do it. And we go through it and say, “Who’s the person? Who’s the buyer? Do they understand the product? What did you pitch? How do you sell it? Did you sell it? Did you talk about this? Did you not talk about this? Why did you talk about this?” And by the way, the best thing for our teams is that if we feel a plan doesn’t look as robust as we’d like it to, me, you know, BJ Jenkins, our president, many of our product leaders, we will get involved. We’re not just inspecting, we’re actually assisting. And, like, you’d be surprised: people sit up, their laptops are open. People are pinging people on LinkedIn, texting people, saying, “Hey, do you know this person, this company? I don’t think our plan is robust enough. We don’t know the right people. Let’s go.”

So that’s kind of the grassroots of it. I have a little whiteboard in my office where I write down things which I want my team to remember. And the second thing written on it says, “Sales is a math problem.” Which people find hard to understand. Like, look, if you have the best product in the market and you are able to win and generate billions of dollars of TCV a year, then the question is: Why are you losing? It’s not the product, because there are people buying the product. It’s working. It’s not that nobody’s willing to buy it; lots of people are willing to buy it. What happened in that process where you were selling that you didn’t win, and somebody else did? Let’s go inspect it. And sometimes you’ll find some product things that you need to fix. Many times you’ll just find execution errors.

Recruiting and retaining really exceptional people

Pat Grady: Let’s keep going on this thread of driving performance out of people, because one of the other things that Jim said was that you have sort of an exceptional ability to recruit and retain really exceptional people, and that you sort of drive followership in a way that’s unique. Like, you’re pretty hard on people and you demand a lot from them.

Nikesh Arora: Yeah. I went to speak yesterday for a person who I worked with at Google. Her name’s Lexi Reese, and she actually ran for Senate. She has a startup now. And she introduced me by saying, “I didn’t quite enjoy my time when I used to work for you, but I’m a better person. I learned a lot.” So I’m like, “Well, I’ll take it whichever way you give it to me.” But anyway, sorry.

Pat Grady: Well, and Jim said you balance that with a very nice human approach, where people know that you care about them and you’ll go to bat for them, you know, in the right situation. So I’m just curious what—you’ve now been an executive in a variety of contexts, and you’ve been successful every time. You know, Google is a consumer business, Palo Alto is an enterprise business. You know, you kind of grew up around marketing and sales. You’ve become more of a product person. And so you’ve got this diversified set of experiences from that.

Nikesh Arora: I’m an enterprise person.

Pat Grady: You’re a full-fledged enterprise person now, for sure. I guess, what leadership principles or what leadership techniques are sort of context independent that worked at each step in your journey? And are there other things that are kind of Palo Alto specific? But I’m mostly curious, like, what are the sort of core principles of your approach to leading people?

Nikesh Arora: Not a lot is Palo Alto specific, right? Because at the end of the day, my senior executives are not writing product documents, right? They’re analyzing strategy, analyzing go-to-market. They’re understanding. Now of course, do I have experts in cybersecurity? Of course. We couldn’t survive without that. So Nir Zuk, our founder, Lee Klarich, our Chief Product Officer, some of the other product leaders that we have, they’re very smart guys. They understand how products exist in this industry. They act as sounding boards. Sometimes I’ll challenge them, sometimes they’ll push back, sometimes they’ll accept. So there’s no getting this done right without the right domain knowledge. You have to have domain knowledge.

But outside of the fact that there’s domain knowledge, you still have to have the right people. It’s possibly understood that I don’t suffer fools, because I can fix a lot of things, but I can’t fix intelligence. If somebody doesn’t get it, they don’t get it. You have to make sure you surround yourself with smart people. And if you find them, keep them. Because the next question becomes: What is the attitude these people bring to the table? As long as they are humble and willing to learn, and they understand they’re part of a team, all systems go. I have a principle that I always joke about, even on all hands. I said, “I’ve never met a person who comes to work to screw up.” I wake up in the morning, “Let’s go, sunshine, it’s time to go to work. Let me see how badly I can do today.”

Sonya Huang: [laughs]

Nikesh Arora: Everybody walks in with the right attitude. It’s something that happens at work, that we create, that causes the unintended outcomes. It’s not the person who walks in. If you’ve found the right person with the right domain knowledge, the right intelligence and the right attitude, then the rest is upon us. And then the question is: How does management create the environment where people can thrive? And it’s not just about, you know, a happy-go-lucky environment.

Like I always say, there are three jobs that we have as leaders. One is we have to identify the North Star. People have to know which mountain we’re going to climb. You get the best climbers and find out you haven’t told them which one. They’re all on eight different mountains around you. Holy shit. What happened? They’re all in different places. So my job is to make sure I identify the mountain. I fight, I argue, I debate. You know, I cajole. Whatever needs to happen to make sure we have a plan of record for where we’re going. With an ample degree of input, but in the end, somebody’s got to make a decision.

Then the next thing is: Is it achievable? Can we write a plan to make it happen? Because the last thing you want is people coming to say, “I get it, you wanted to build this. But you gave me one pickaxe and one shovel and two people, and you wanted me to go dig the platinum mine.” It takes a lot more than that. So the next question is: Is there a feasible plan to get it done, and are you resourcing it right? And very often companies—I mean, look at our industry, right? A lot of people had the right ideas. When I came to Palo Alto—it’s funny, I joke sometimes—I didn’t do anything different. We had a cloud security acquisition we’d done. We had an XDR acquisition we’d done. We had the idea of building a SIEM, we just hadn’t resourced it. We hadn’t written the plan for it. We kind of knew what we wanted to do, but we hadn’t sat down, debated, argued: So what does the future look like? We hadn’t written the plan, we hadn’t resourced it. We had one tenth the resources we needed.

So you don’t have a plan, you have an idea. Ideas are not good enough. You’ve got to have a plan and a North Star. You have to have resourcing so you can actually execute the plan. The third job of management is to keep communicating it and weeding out the things that block the execution of the plan, whatever they are. Whether it’s a person who’s not doing it, whether it’s a resource that’s not available, whether it’s a contract that’s not working: it’s blocking and tackling, and sort of making way for your team so they can go out and execute behind you. So if you follow those principles and find the right people around you, and don’t suffer fools and have a good time while doing it—sometimes some people do get more scrutiny than others, but it’s good for their career and good for their character—it’s amazing.

Managing acquisitions

Pat Grady: Let me ask one more question on sort of management leadership, and then maybe we can go back to some AI topics. So you mentioned the 19 companies that you guys have acquired in the last six, seven years. Maybe two questions on that. One, at the moment when you’re pulling the trigger, what goes through your mind? Like, when it’s time to make the go/no go decision, how do you decide that an acquisition is actually an acquisition you want to make? Then maybe second question. One of the things our partner Jim mentioned was that pretty much all the founders who’ve joined forces with Palo Alto Networks have stuck around.

Nikesh Arora: A majority of them. Yeah.

Pat Grady: So what do you do post acquisition to actually keep them around? And maybe it’s the same thing that you were just talking about with leadership generally.

Nikesh Arora: A little more than that. I think before we get to decide it’s an acquisition we want to make, we spend enough time to understand, you know, is it even worth engaging with the company for a few hours or a few days, right? And we have some principles. I don’t like buying number two or three.

Pat Grady: Yeah.

Nikesh Arora: A lot of founders, a lot of companies say, “You know what? The first one’s a billion, the third one’s $200 million. Let’s just take the third one. We’ve got enough resources, we’ll spit-shine it and make it brand new, maybe worth a billion dollars.” Well, there’s a reason it trades at $200 million and not a billion, first of all, which means that possibly it has some gaps which the customers have identified and you haven’t. Two, you didn’t actually take out the biggest player in the market, who’s still going to be four steps ahead of you. So now all you’ve done is taken a nimble startup which is number three, and possibly made it slower despite all your love and attention. It’s like saying, “Okay, bring me along, bud.”

So now you’ve suddenly slowed down number three. You’ve enhanced the opportunity for numbers one and two. So you sit down and say, “Okay.” And a lot of times you joke about it, saying, “I wish this competitor would be bought by somebody in competition, because it’ll slow them down.” So we make sure we’re only looking at number one or two at best. And sometimes they’re neck and neck, sometimes they’ve chosen two different paths. We did that in the browser space. We’re very happy with the acquisition of Talon. I think it’s doing really well. We did that in DSPM, with Dig.

Actually, in our industry, what has happened since, Pat and Sonya, is that at first people looked at me and said, “What the hell? This guy’s buying these companies. I don’t know what his plans are.” Now we actually have a slide where we keep track of it: once we buy something in a category, that category becomes hot. So people think we know something. We’ve done a good job of fooling people about that. But look, I think the principle is you’ve got to make sure you’re buying the right player in the market. Then you’ve got to make sure that you can convince the founders to believe in a better-together story rather than a go-it-alone story, right? There’s no sell and dump, because it’s not going to happen. We’re not just going to take the asset. Because when you’re buying, you’re basically buying a North Star, an execution plan and a team that executes. But usually they’re a third or 40 percent of the way down their journey. It takes four to seven years to build a great product. Usually these things are three or four years out. They haven’t fully matured into a full product that’s going to win in the market. So you need them around and you need the team around.

And so once we’ve figured out this is the right company with the right attitude and we can actually make it work, then, given our scale, there are some technical considerations. Do we have to rewrite the stack, which takes longer? Is this a complementary area? Can this run on our stack easily? Is this something we’ve never done before, in which case it doesn’t matter which stack it’s on? So there are a lot of those considerations from an integration and time-to-market perspective.

But let’s assume all those hurdles have been surmounted and we’re actually engaged with the company. I have a rule. I walk in and tell my team—and I live it. I say, “Treat them on day one as if they’re part of your team.” Because if they’re going to work with you, they’re going to remember every interaction. Very often, I find many companies, even the most well-intentioned companies, start treating it as acquirer and acquired. I come from a country which was acquired, or ruled, and I don’t like this idea. There’s no, like, one rules the other. We’re part of the same team, and the day this deal gets signed, we’ll all be on the same page. We’ll all be trying to drive the same stock price and the same business forward. So for those six weeks that you’re in that discussion phase, why is it important who’s the acquirer and who’s the acquired?

So let’s assume we do that. We like a company. Then I send off the finance and accounting guys and the legal guys to do due diligence, and I tell the founder and their team, “Your job now in the next six weeks is to build a joint product plan and a joint org chart. At the end of six weeks, if you don’t like the product plan or I don’t like it, or you don’t like the org chart or I don’t like it, there’s no deal.” This is learned behavior. The first two, three times we didn’t do that, and we discovered we spent the next six months arguing about what the product should be and who should be the boss. This doesn’t work, this is a bad idea. So it’s like, “Hey, buddy. You want the money? You can have the money. It’s my house, I’m going to paint it yellow. You don’t like the color? Tell me now and you can get somebody else to paint it pink, no problem.”

So that has an amazing cleansing property, because you’re making a decision with all the facts in front of you, saying, “If you become part of Palo Alto, this is going to be the product strategy.” It could be yours, it doesn’t have to be mine. I’m not smart enough. But we have a joint product strategy, the North Star, and we have a joint plan of execution. And then very often, Pat—this goes back to Jim’s comment—most often, the founders we have bought companies from become the senior vice presidents of our company, running their business. Our people work for them, which I think is unique in the market. Very often you’ll find there was an acquirer SVP who ran crypto or blockchain or pick your favorite. I’m using non-security terms to protect the innocent. And they say, “Well, since I’m responsible for this, these people are going to work for me.” I’m like, “Wait a minute. You had all the resources. You lost to them. We’re not going to have them go work for you. Maybe you can learn a few things.” So we did that a bunch of times, and in some cases our teams worked for them really well. In some cases our teams left, which is fine. So I think those are some of the things that allow us to make these amazing founders come work here and actually drive more value for us collectively.

General vs fine-tuned models

Sonya Huang: Nikesh, I want to ask you about some of the chess that's now happening on the AI stage, because I think you've played the chess game so flawlessly in the security market and, you know, you have so clearly emerged as the winner. The AI space, by contrast, feels like white-hot, competitive Hunger Games right now. I'm curious about your view.

Nikesh Arora: I think it's a lot clearer than that. It's just not clear to the naked eye. But I think it's a lot clearer. Go on.

Sonya Huang: Say more.

Pat Grady: Tell us more. Yeah.

Sonya Huang: Yeah, tell us more.

Nikesh Arora: Like, if you think about the state and maturity of AI, there are two extremes. We'll call one the very precise, AlphaGo-type situation that Demis and Google built together: what I'd call fine-tuned models, designed for drug discovery or the biopharma field. And there you see that they did a really good job. They focused on it, they trained on the right data, and they hopefully tweaked the models in such a way that they can actually become something useful for society.

So you have that, which is highly tuned AI models, very task specific or category specific. And then you have the generic ones. And the generic ones are all the rage today, between the Claudes and the Mistrals and the Geminis and the OpenAIs of the world. And those are large. They're all-encompassing, all-knowledgeable. But we saw this movie before, right? We saw this movie in search. When I was at Google, you had vertical search, because the large Google search could not do as good a job of local search, so you had local search. It couldn't do a good job of product search, so you had product search on Amazon. So how can you be amazing at everything in this space when you couldn't do it in the last few technological evolutions?

So I think over time, we're going to have to figure out the distinction between the general-purpose, large-scale, I-know-everything, I-can-do-everything model and models that are fine-tuned for tasks. And I don't believe that all the perfect information in the world exists in the open domain such that you can go out and build these without specialization. Which means you are going to need specialized, proprietary data to build these models. And I don't know how you share data between GlaxoSmithKline, Novartis and Pfizer and say, "I can build the best drug discovery model in the world because I have perfect information."

So that remains to be seen. I think over time you'll see a bifurcation around the enterprise use case. And in our business, on the enterprise side, you need precision. I can't afford to be wrong. A wrong turn by a Tesla is going to kill somebody. A wrong block by Palo Alto is going to bring somebody's infrastructure down, or a wrong permission is going to let a bad actor in. So I don't have the tolerance that consumer models can have, because they're low consequence. High-consequence applications require a lot better models and a lot more training with more precise domain data. I think that's going to become a thing of its own. Everything we're seeing today is general-purpose models, and I don't know the answer to what comes next. Is it that general-purpose models become task-specific evolved models, or is there a new category of task-specific evolved models built more in the genre of the AlphaGo version?

Now, on the general models, I think the people who can deploy them against existing consumer properties are sitting pretty, right? Because it creates more retention, more continued monetization of the space. So whether Google can deploy a whole bunch of AI against its three-plus-billion users across multiple properties, or Mark Zuckerberg can do it across Instagram, WhatsApp and Facebook with three billion users, that's cool. I think Sam's done a great job in building a consumer-direct business on the subscription side, which he continues to drive very well. And that's become sort of his moat now, because no other model provider has built a subscription-based consumer business.

So I think you're seeing the general-purpose models being built by existing large consumer properties, and you're seeing a new consumer property emerge vis-a-vis OpenAI. I think the enterprise use case is still early, because we haven't seen mission-critical applications be developed, for lack of great training data. So anyway, that's what I think.

Pat Grady: Let me ask you: one of the things that's not on your LinkedIn profile is that prior to SoftBank, prior to Google, prior to T-Mobile, if I have my facts straight, you were an award-winning equity research analyst covering telecom. And I believe that one of your claims to fame was calling the internet bubble and the bursting of the internet bubble.

Nikesh Arora: I still have that note. A sell note, which I wrote in November '99.

Pat Grady: Not bad. So are we in an AI bubble?

Nikesh Arora: Lightning doesn't strike twice. How would I know? [laughs] Then again, for the number of times I've heard "it's different this time," we could all be very rich. But there are some things which are different, right? If you look at where the AI inflation has happened in the equity markets, it's still in the plumbing. And the plumbing is real. It's not like people are driving the plumbing up without substance, because you're selling four times or 10 times more chips than you sold two years ago. So there are real revenues that underpin that.

Now, clearly people are projecting that into a trajectory which I don't understand. And every day you see a new development. You tell me: is Stargate the future, or is DeepSeek the future? And I don't mean DeepSeek with all its negative connotations; I mean as a concept. Are we going to have cheaper models built for large-scale application with limited specialization, or are we going to have a super model, in the context of AI, which is going to be expensive but able to do everything amazingly? You tell me the answer and I'll tell you mine.

Lightning round

Pat Grady: Sonya, should we head into the lightning round, or do you have more questions?

Sonya Huang: Let’s do it.

Pat Grady: Great. Lightning round. Okay. You just bought a cricket team. Why?

Nikesh Arora: [laughs] You know, there's a bunch of us in it together, not just me. There are ten of us, including your partner, Jim Goetz. We're all failed cricketers, all sports aspirants. So part of it is passion: "Wow, I can be associated with the sport at the highest level without having the talent." It's kind of interesting. That's one reason. Now, you can't have a bunch of business-savvy people buy it without asking, "Well, is there a business model here or not?" And if you look at it, the only thing left in streaming that is linear is sport. News, television, movies: nothing else is linear anymore. The only thing that's linear is sport. You want to watch it when it's happening, and pretty much once it's done, you know the score and you lose the interest to watch that event, right? The post-event viewership is a lot lower in sport than live viewership. Every other kind of streaming content is the other way around: the post-launch viewership is higher than the launch viewership, whether it's movies or television or any podcast or any video streaming, like what you're doing. So sport is the only linear content out there, and it's being bid up. Cricket is the second most-watched sport in the world. The IPL is the biggest franchise; London Spirit is the next best thing, and it's in the country that is the home of cricket. And then we followed the same philosophy that I told you about with my startups: if you're going to buy something, buy the best. So we bought Lord's, which is the home of cricket. It's going to be fun.

Pat Grady: Love it. What did cricket teach you about life or leadership?

Nikesh Arora: It's a team sport. It doesn't matter how good you are; if the other 10 people suck, it doesn't matter. It teaches you that, right? You can have a bad day and you can still win, because you participated with the other 10 people. So it teaches you about life, and it teaches you about business.

Pat Grady: You’re wearing a Pebble Beach pullover. I hear you won a Pro Am recently. What’s your handicap?

Nikesh Arora: My handicap’s nine NCGA. And that’s a combination of the best pros that I could find, luck and a few misplaced good shots.

Pat Grady: Nvidia. $118 a share, $2.9 trillion market cap, 39 times earnings. Earnings are growing about 150 percent year over year. Long or short?

Nikesh Arora: I don’t understand it.

Pat Grady: If you were Jensen, would you be making the same moves?

Nikesh Arora: Jensen has played a very long game. I think he's built a phenomenal franchise. What he's done is no less than what Elon has done for electric cars. He took something he built for gaming, thought about it, understood the large need for compute, and put all his energy and thought behind it. He's been the longest-serving CEO in the world, right? So you can't take it away from him; you can't even trivialize it. We have to talk about him with tons of respect. What he's done is amazing. He has a vision, and I think he's taking it beyond just the chips, because he's slowly building an ecosystem, saying, "My chips work together with a lot of other things." So I don't think from a long-term perspective you can argue that AI is not going to be relevant. And I don't think you can argue, from a long-term perspective, against the idea that we will constantly be doing some form of development that requires more and more compute. In the history of mankind, compute and bandwidth and memory have never shrunk.

Pat Grady: Yeah.

Nikesh Arora: So it’s not about to start now. I think he’s sitting on a phenomenal asset. Is it a $3-trillion asset today? I don’t know. It’ll be a $3-trillion asset in 10 years, possibly more.

Pat Grady: What CEO do you admire most?

Nikesh Arora: I have a collection of CEOs. I admire traits that CEOs exhibit. It's very hard to have one idol in life, because one idol has the property that they could disappoint you. But if you admire certain things certain people do, you learn a lot from that aspect. And if you face the same circumstance, you might do the same thing; if you face a different circumstance, you might do a different thing.

I mean, I admire Elon's creativity and what he's done for the world. I wouldn't want to work for him, but I admire what he's done. It's amazing. I always joke with my team, saying, "Would you go on a rocket to Mars built by the guys around you?" People are like, "I don't think so." Right? But imagine: he's got a bunch of people who build a rocket, and people go up in that thing. That's amazing. Would you sit in a car that has no driver in it? And people have done it. Look at what Satya did at Microsoft. Nobody believed he could turn it around, and he did; it's a $3-trillion company now. It's amazing, right? Look at Tim Cook. Steve Jobs is a hard act to follow, and Tim's done a phenomenal job in taking that amazing company, keeping it right down the middle and constantly innovating. Look at Mark Zuckerberg recently, right? He's taken that thing and turned it around. Now, the fact that there are certain things they've done which I respect doesn't mean anything about the rest of their lives. And I don't need to worry about it.

Pat Grady: Which CEO is executing the best in AI right now?

Nikesh Arora: For all the conversation around Sam, I think what he's done is amazing, right? I mean, before ChatGPT came about, we weren't talking about AI. And before ChatGPT came out, you know, you think Google didn't know about AI? I knew about AI when I was at Google. Do you think Google didn't have a self-driving car then? They did. Do you think Satya didn't know what AI means? He did. But look at what's happening right now. You can't run into a CEO who won't spew the words "AI." So Sam has created the impetus for the next technological revolution, the way Steve Jobs did it with the iPhone. And with a straight face, he can go out there and get people to commit to spending half a trillion dollars on building infrastructure. And I think every one of the Mag 7 CEOs is spending way more money on building compute and data centers, because nobody wants to be left behind. I'd say Sam's done a great job executing in AI. Now, you know, history is hard and business is hard, and we don't know whether that means he'll be the winner in the future. But damn, has he done a great job in getting us to where we are? Yes.

Pat Grady: I think that’s it. Thanks Nikesh.

Mentioned in this episode

  • Cortex XSIAM: Security operations and incident remediation platform from Palo Alto Networks
  • CSPM: Cloud security posture management