Greetings, Earthlings: Philip Johnston of Starcloud on Data Centers in Space
Episode 81 | Training Data


Philip Johnston, founder and CEO of Starcloud, explains why space will become the primary location for AI compute infrastructure within the next decade. After witnessing SpaceX’s massive manufacturing scale at Starbase, Philip realized that declining launch costs would make space-based data centers cheaper than terrestrial ones. He breaks down the physics of heat dissipation in vacuum, the economics of solar power without atmosphere, and why the marginal cost of space infrastructure decreases while Earth-based costs increase. Philip previews a future where close to a trillion dollars per year in CapEx flows to space compute. And, yes, we get his take on aliens.

Summary

Space will soon be cheaper and more scalable than Earth for data centers: Falling launch costs and rising terrestrial energy and land constraints mean that, within a few years, manufacturing and deploying data centers in space will outcompete building on Earth—potentially creating a trillion-dollar annual CapEx market.

The two hardest problems are heat and radiation, not cold: Despite the perception that space is cold, the vacuum makes heat dissipation extremely difficult, requiring novel radiator designs. Radiation also threatens chip reliability, so extensive particle accelerator testing and shielding innovations are essential.

Inference workloads are the first and biggest opportunity: Due to latency and bandwidth realities, AI inference—not training—will dominate space-based compute early on. Space-based processing also uniquely benefits satellite and government workloads with massive data collection needs.

Space “real estate” and regulatory dynamics are wide open but evolving: Currently, orbital slots are first-come, first-served, but as space gets crowded, early movers will gain valuable, tradable positions. Filing with agencies like the FCC and ITU is key, and these rules may change as the space economy matures.

Push teams toward maximum AI utilization as a forcing function for innovation: Philip demands engineers spend $10,000 monthly on tokens, deliberately using an extreme metric to eliminate hesitation around AI tool costs. This cultural norm drives aggressive experimentation and ensures the team fully leverages AI capabilities rather than self-limiting based on perceived expense.

Transcript

Full conversation

Philip Johnston: The problem with doing this build-out on Earth is that the marginal cost on every additional data center goes up every time you add one, because we’re using all the easy places to build energy projects.

Pat Grady: Yeah.

Philip Johnston: In space, the marginal cost goes down for every additional unit, because you’re now manufacturing at rate, and the more Starships you fly, the cheaper it gets and all the rest of it. And so there comes a crossover point where it just makes zero sense to continue building things on Earth. So I think it will be close to a trillion dollars per year of CapEx spend within 10 years being deployed in space, so by far the largest market opportunity ever.

Sonya Huang: We are thrilled to have with us today Philip Johnston, founder and CEO of Starcloud. You were the first to put up data centers in space, and just a few months ago, your first data center, Starcloud-1, sent back the message to Earth, “Greetings, Earthlings, or as I prefer to think of you, a fascinating collection of blue and green.” What a poetic thing to think about—AI in space looking back at us. Congratulations on what you’ve done. I’m excited to ask you all about data centers in space for this episode. Maybe first to get started, why build data centers in space?

Philip Johnston: Yeah. So firstly, thanks so much for having me. It’s freaking awesome to be here. So a quick background on myself. I mean, I’ve been interested in space my whole life. I actually spent a few years at McKinsey working with the space agencies of different governments around the world. And that’s where I started to notice that launch costs were very rapidly coming down. And so three years ago, I kind of randomly on a weekend decided to take a trip down to Starbase, Texas, where SpaceX is building the Starship launch vehicle. And it just blew me away, the scale of the new sort of gigafactories they’re building. I think they’re planning to build three Starships per day or something on that order. And so the coming capacity and the potential launch cost is, you know, orders of magnitude off where it is today. And so, you know, I started thinking about, okay, well, what is that going to enable? What new businesses will that enable?

And with my co-founder Ezra—and Ezra and I go way back. We grew up in the same place in the UK—we started looking at the concept of space-based solar, which is where you have these huge solar panels in space, and then you somehow beam that power down. It’s not really a new idea. I mean, people have been looking at this since, I think Isaac Asimov in the ‘40s was talking about it. The problem with space-based solar is you lose most of the energy in transmission from space to Earth. And we very quickly realized, okay, well, once we get that power down, you know, most new energy projects on Earth today are being built to power data centers. So either directly or indirectly, that power is going to be going into data centers. And so if we can find instead a cheap way to get the data center to space, we don’t lose all of that power and transmission. We can consume that power close to the source. And that then became the basis of a white paper that we put out in 2024, and from there, that’s how the company got going.

Pat Grady: Well, let me ask you—so there’s a distinction. I heard a lot in there about how it is possible to do data centers in space.

Philip Johnston: [laughs]

Pat Grady: Why do we need to do data centers in space? It may be possible. Why do we need to do it? Why do we need to go to space?

Philip Johnston: Yeah, that is a great question. The main reason is we are very quickly running up against constraints on where and how we can build new energy projects terrestrially to power data centers. So for example, if you want to build a new 100-megawatt energy project, you’re looking at a five-to-ten-year lead time just on the permitting, particularly in North America. And so for example, if you want to cover 10 square kilometers of countryside with solar panels, there’s a lot of people who are going to be very annoyed about that. And so you just have this—you know, we’re very rapidly plowing towards a brick wall where it’s going to be extremely difficult to build new energy projects. We’ve already built in the easy places.

Pat Grady: If we could wave a magic wand and remove the regulatory constraint, what’s the next constraint we run into?

Philip Johnston: Well, it is actually just cheaper to build things in space once the launch cost gets below a certain point. So for example, with terrestrial solar, which is the cheapest form of energy we have, you’ve got three big costs. The first one I mentioned, which is the cost of permitted land. The second is the cost of battery storage and backup power. And then the last is the cost of the solar cells themselves. So in space, number one, we don’t need permitted land; number two, we don’t need batteries and backup power because the satellites are in the sun 24/7; and then lastly, we need eight times less solar, because one square meter of solar panel in space produces eight times the energy of one square meter of solar panel on Earth.

And so there’s a break-even point where the launch cost, which is our main additional cost in space, comes below the cost of those three factors. We see that breakeven to be around $500 a kilo, but as the cost of permitted land goes up, and it is going through the roof right now, that breakeven point actually comes closer to $1,000 a kilo. But as I say, even if your permitted land cost is zero, you still have those other two factors. And so at some point, if you’re going to build data centers anywhere, you know, once the launch cost is below a few hundred bucks a kilo, you’re going to do it in space because it’s just cheaper.
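The structure of that break-even comparison can be sketched numerically. This is a toy model: the dollar figures, capacity factor, and mass-per-watt below are illustrative assumptions, not Starcloud numbers.

```python
# Toy break-even model for space vs. terrestrial solar-powered data centers.
# All dollar figures, the capacity factor, and kg-per-watt are assumptions.

SOLAR_CONSTANT = 1361   # W/m^2 of sunlight above the atmosphere
EARTH_PEAK = 1000       # W/m^2, rough peak at the Earth's surface
CAPACITY_FACTOR = 0.17  # assumed: night, weather, sun angle

avg_earth_w_per_m2 = EARTH_PEAK * CAPACITY_FACTOR       # ~170 W/m^2 averaged
space_advantage = SOLAR_CONSTANT / avg_earth_w_per_m2   # ~8x, the factor quoted above

# Terrestrial cost per delivered watt: permitted land + batteries/backup + panels.
LAND, BATTERIES, PANELS = 0.50, 1.00, 0.30              # $/W, assumed
terrestrial = LAND + BATTERIES + PANELS

# Space: no land, no batteries, ~1/8th the panel area, plus launch mass.
KG_PER_WATT = 0.002                                     # assumed 2 g of panel + structure per watt

def space_cost(launch_usd_per_kg: float) -> float:
    return PANELS / space_advantage + KG_PER_WATT * launch_usd_per_kg

for price in (3000, 1000, 500, 200):
    side = "space" if space_cost(price) < terrestrial else "earth"
    print(f"${price}/kg: space ${space_cost(price):.2f}/W vs earth ${terrestrial:.2f}/W -> {side} wins")
```

With these assumed inputs the crossover lands between $500 and $1,000 per kilogram, in line with the breakeven range quoted; the point of the sketch is the structure of the comparison, not the specific numbers.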

Pat Grady: What have you learned about maintenance in space?

Philip Johnston: Yeah, that’s a great question as well. So for maintenance, we’ll be operating very similar to the way that Starlink satellites work in the initial years. We’re not going to have robotic maintenance or anything for the first few generations at least. And so that means we need to have redundancy on the critical systems, and we overprovision things that degrade over time, like solar panels, which lose a few percent of output per year.

It’s very, very important that the chips do not have a higher failure rate in space than they do on Earth, because the chips are one of the largest costs in this. And so a huge amount of our time, probably 70 percent of our engineering time is going on to the heat problem, and the other 30 percent is going on to making the chips as reliable as possible in space. And that means a whole bunch of testing in different particle accelerators.

We did two rounds of testing at the cyclotron proton beam accelerator in Knoxville, and one round of testing at the heavy ion particle accelerator at Brookhaven National Laboratory. In 24 hours, we can simulate five years’ worth of radiation. All of that data then goes into informing our choices on shielding and on software for bit flip mitigations and things like that.

But in terms of what we’ve learned on the first satellite, with the H100 that we have on orbit right now, we’ve not had a single failure or issue that needed a restart of the chip itself. There are other areas which may need a bit more attention, for example, the power delivery and the solid-state drives, but the actual chip itself is extremely resilient. And GPU workloads in general are very resilient, and the reason is they’re stochastic in nature. So if you type into ChatGPT, “Write me a poem about space,” twice, it will give you two different poems. The quality of the output will be the same, but the specific instance will be different. And so a bit flip on most parts of that workload actually doesn’t make a difference to the quality of the output. So it’s surprisingly resilient.
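The bit flip point can be made concrete. A minimal sketch, assuming float32 values as in typical GPU weights: flipping a low-order mantissa bit barely perturbs a value, while flipping an exponent bit is catastrophic, which is part of why shielding and mitigation still matter even for stochastic workloads. The weight value here is hypothetical.

```python
import struct

def flip_bit(x: float, bit: int) -> float:
    """Reinterpret x as its 32 raw float32 bits, flip one bit, convert back."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return y

w = 0.7231              # a hypothetical model weight
low = flip_bit(w, 0)    # lowest mantissa bit: relative change ~6e-8, harmless
high = flip_bit(w, 30)  # highest exponent bit: the value explodes
print(low, high)
```

In a large matrix multiply, the first kind of flip is lost in the noise; the second kind is what error correction and radiation testing are there to catch.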

Sonya Huang: You said you spend the bulk of your engineering time on the heat problem. I think most people have this notion that space is cold, and so therefore it should be an easier problem.

Pat Grady: When Sonya says “most people,” I thought that was the case.

Sonya Huang: [laughs] Yeah, sorry.

Pat Grady: Until a few months ago.

Sonya Huang: So can you just talk about, you know, what exactly is the heat dissipation problem and what are you doing to solve it?

Philip Johnston: Yeah, for sure. So yeah, as you mentioned, space is cold. And in general, that’s actually—once you get far enough down this rabbit hole, that ends up being great. What’s not great is that space is a vacuum. And so obviously like a thermos flask is designed that way because a vacuum is an insulator. And so the only form of heat dissipation you can have is infrared radiation. And so everything in this world glows in infrared. If you had a camera on your face, your face would be glowing in infrared.

And the same is true in space. And the amount that it glows depends on the temperature of the radiating surface, whether that’s your face or the satellite, relative to its surroundings. And it actually scales with the fourth power of the temperature. So a very small increase in the temperature increases the heat dissipation by a huge amount.

And so one of the critical things is we need to run these radiators as hot as possible. There’s a few different ways you can do that. Either you can try and run the chips as hot as possible. The problem with that is the chips have a shorter lifetime if you run them hot. The other thing you can do is there’s a few ways to artificially boost the temperature of the radiator. So things like heat pumps, which you can take, for example, 60-degree fluid from the chips, and then you can turn that into 100-degree radiator temperature with heat pumps.
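The fourth-power relationship is the Stefan-Boltzmann law, and it shows what the heat pump trick buys. A minimal sketch, where the emissivity is an assumed value and the environmental heat absorbed by the radiator is ignored for simplicity:

```python
# Radiated power per the Stefan-Boltzmann law: P = emissivity * sigma * A * T^4,
# with T in kelvin. Emissivity is assumed; absorbed sunlight/earthshine ignored.
SIGMA = 5.670e-8   # W/(m^2 K^4), Stefan-Boltzmann constant
EMISSIVITY = 0.9   # assumed for the radiator coating

def radiated_watts(area_m2: float, temp_c: float) -> float:
    t_kelvin = temp_c + 273.15
    return EMISSIVITY * SIGMA * area_m2 * t_kelvin**4

p_60 = radiated_watts(1.0, 60.0)    # radiator at chip-loop temperature
p_100 = radiated_watts(1.0, 100.0)  # radiator boosted by a heat pump
print(f"{p_60:.0f} W vs {p_100:.0f} W -> {p_100 / p_60:.2f}x")
```

Boosting the radiator from 60 to 100 degrees Celsius rejects roughly 57 percent more heat per square meter, so the same heat load needs a proportionally smaller, and lighter, radiator.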

Sonya Huang: Got it. And so would you say the heat dissipation problem is like a solved problem for you all now? You obviously have one GPU in space. What is it going to take to solve the problem for, you know, eventually, hopefully gigawatt-scale data centers in space?

Pat Grady: Yeah, how does it change as you scale?

Philip Johnston: Yeah. So the first one actually has a very different thermal management system than the next one coming up. On the first one, we submerged the entire motherboard, power systems, GPU and everything else in this phase change material. It’s a material that goes from solid to liquid as it heats up. We can’t run that continuously, though. That was just to prove how this works. The second one is much closer to the end state, which has got this enormous low-cost and low-mass deployable radiator. So it’s liquid-cooled: we’ve got our custom heatsinks next to the GPUs, the fluid runs past the GPUs and then out to this extremely large deployable radiator. From that one to the next one, just scaling it up, it actually is pretty simple. And so yeah, we’ve tested this in thermal and vacuum chambers and it works. We just need to now put it on orbit and make sure it actually works on orbit. And that’s going to happen later this year.

Sonya Huang: Awesome. And how much—I guess, relatedly, you have one GPU up in space currently.

Philip Johnston: Mm-hmm.

Sonya Huang: Do you see yourselves launching …

Philip Johnston: Five, actually.

Sonya Huang: Oh, you have five now.

Philip Johnston: Well, no. There are five NVIDIA GPUs on that first one. The H100 is the one that gets the press. [laughs]

Sonya Huang: I see. I see. How much—I guess, what is the launch capacity, so to speak, of what you can get up in a single payload, and how big do you think these individual data centers can get?

Philip Johnston: Yeah. So the next one that we’re launching is around eight kilowatts, so pretty small still. The one after that, which is what we’re now designing, is the Starcloud-3. We can fit 50 of them per Starship, and they fit out of the Pez dispenser form factor, that door that Starship has, that little slit.

Pat Grady: Yeah.

Philip Johnston: So each one of those is about 200 kilowatts, about three tons. And so with 200 kilowatts per Starcloud-3 satellite—and it’s all for inference, essentially—50 per Starship gets you about 10 megawatts per launch. And so once Starship is flying at rate, you know, we’re expecting to fly hundreds of these per month. So you’re talking several gigawatts of new capacity per month, tens of gigawatts of new capacity per year.
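The per-launch numbers quoted here multiply out as follows; the launches-per-month figure is an assumed value within the “hundreds per month” range mentioned.

```python
# Back-of-envelope capacity math from the figures quoted above.
KW_PER_SATELLITE = 200       # Starcloud-3 power
SATELLITES_PER_STARSHIP = 50

mw_per_launch = KW_PER_SATELLITE * SATELLITES_PER_STARSHIP / 1000
print(mw_per_launch)         # megawatts per Starship launch

LAUNCHES_PER_MONTH = 300     # assumed, within "hundreds per month"
gw_per_month = mw_per_launch * LAUNCHES_PER_MONTH / 1000
print(gw_per_month, 12 * gw_per_month)  # GW per month, GW per year
```

That works out to 10 MW per launch, and at 300 launches a month, 3 GW of new capacity per month, or 36 GW per year, consistent with the “several gigawatts per month, tens of gigawatts per year” framing.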

Pat Grady: So you mentioned it’s all for inference. I was wondering about that, because for pre-training you want contiguous compute, which might be tough if you’re sending everything up into space. For inference, you want low latency, and there’s a speed-of-light component to getting information to and from space. Is that a bottleneck at all, or is it low latency enough that it doesn’t matter for inference?

Philip Johnston: It’s as low latency as Starlink. So you can do any inference workload through it that you could do through Starlink: if you were using ChatGPT on your phone through Starlink, for example, it would be exactly the same. So sub-50-millisecond latency when you’re on Earth. And for context, a Zoom call could easily happen with 200-millisecond latency and you wouldn’t notice the delay. So basically any inference workload, you know, voice agents for customer service or back office business processing agents or video generation or ChatGPT or anything else, can be done with this constellation.
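As a sanity check on that claim, the pure speed-of-light delay at Starlink-like altitudes is small. A sketch assuming a 550 km orbit and a straight overhead path; slant paths and ground routing add more, but the sub-50 ms budget has plenty of headroom.

```python
C_KM_PER_S = 299_792   # speed of light in vacuum
ALTITUDE_KM = 550      # assumed Starlink-like LEO altitude

one_way_ms = ALTITUDE_KM / C_KM_PER_S * 1000
round_trip_ms = 2 * one_way_ms    # up to the satellite and back down
print(f"{round_trip_ms:.1f} ms")  # ~3.7 ms of pure propagation delay
```

The remaining tens of milliseconds in the latency budget go to routing, queuing, and processing rather than physics.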

Pat Grady: And then maybe slightly different question, but kind of on this vein, if you had a trillion dollars just sitting in a bank account, and you had to use it to build the compute backbone for AGI, how much of that trillion dollars is going into space?

Philip Johnston: One hundred percent.

Pat Grady: Okay. All right.

Philip Johnston: [laughs]

Pat Grady: Paint that picture for us.

Philip Johnston: I mean, we really are talking about by far the largest market opportunity ever. We are talking about trillions of dollars per year of CapEx spend. My best guess is that within five to ten years, at least half of all new compute capacity is being deployed in space because of the energy. So the problem with doing this build-out on Earth is that the marginal cost of every additional data center you add to the grid goes up every time you add one, because we’re using up all the easy places to build energy projects.

Pat Grady: Yeah.

Philip Johnston: In space, the marginal cost goes down for every additional unit, because you’re now manufacturing at rate, and the more Starships you fly, the cheaper it gets and all the rest of it. And so there comes a crossover point where it just makes zero sense to continue building things on Earth. So I think it will be close to $1 trillion per year of CapEx spend within 10 years being deployed in space. So by far the largest market opportunity ever.

Pat Grady: Where and when do you think we will first cross over? When I say “where and when,” I mean, like, what geos will become untenable, and therefore you’ll need to go up into space?

Philip Johnston: As soon as Starship is flying frequently, it will be cheaper to build data centers in space. It looks like the first Starship payloads will be Starlink V3, either end of this year or early next year. And then as I understand it, it’ll be 12 to 18 months after that that the first commercial payloads are going up. So that will be on the order of mid to late 2028. And then once it’s flying frequently, it becomes way cheaper.

Sonya Huang: Do you think that there’s stuff that needs to be solved in terms of data transmission? Do we need optical lasers sending data back and forth up there in order to operate data centers at scale in space? Or are those all solved problems?

Pat Grady: Mesh network in space?

Philip Johnston: That’s solved. Yeah. It wasn’t until two or three years ago, but Starlink has basically solved that. And there’s a bunch of other constellations coming online—Amazon Leo, Kepler. And also, once we have a few of our own satellites, we can do our own optical backhaul. So that would have been a big problem until quite recently, but it’s now solved.

Sonya Huang: And so each Pez dispenser will be its own data center? Do you see them ever, you know, coming together? You had that picture, that concept photo in your first white paper. Do you see them being able to dock onto each other?

Philip Johnston: Eventually, yeah. It doesn’t really make too much sense to do that initially, because the only reason you would do that is if you want to train a large model in space. And to train a frontier model, you need—whatever the largest data center on Earth is, you need at least that in space. So right now that might be, like, 300 megawatts or something. You know, it’s going to be a long time before we’re going to be able to dock together a 300-megawatt structure in space. And by that time, the biggest one on Earth will probably be three gigawatts. So it’s like a moving goalpost.

And the other thing to say about that is training at the end state will be less than one percent of all AI workloads that are being done. And so it’s just not a very good market to go after anyway. We showed it in the initial video because we didn’t want people to come back and say, “No, you can’t do training in space.” And we’re like, “Well, you could if you wanted to, but probably not ideal for the initial.”

Pat Grady: [laughs]

Sonya Huang: Got it. So more of a provocative photo. What about—you mentioned at the very beginning, robots. Do you think we’ll end up having maintenance robots in space to maintain these data centers?

Philip Johnston: I don’t think we’ll necessarily be maintaining our small inference nodes, but certainly we will have fleets of robots building large structures in space. I mean, they’ll definitely be on the moon. Like, if you’re going to build big manufacturing facilities on the moon, essentially something like Optimus will be doing that. Like, an Optimus robot doesn’t require too much modification to work in space. You just essentially put it in a spacesuit and that takes care of the thermal and radiation aspects of it.

Sonya Huang: You don’t see Optimus going to go maintain your …

Philip Johnston: Not really, because they’re too small. Each one’s only 200 kilowatts. If we docked them together, then yeah, you could have Optimus maintaining it. But you wouldn’t fly Optimus between each of ours. Well, maybe you would, I don’t know. That starts to sound a bit sci-fi.

Pat Grady: What’s the useful life on them, and how do you retire them?

Philip Johnston: So we’re designing it to be the same as the useful life of the chips, so five, six years. Possibly longer, actually, in space, because our marginal cost of energy once we’re launched is zero. So there’s an argument to be made that we can run them longer. But end of life for now is the same as Starlink, so deorbit. There is another possibility, which is putting them in what they call a graveyard orbit. But for now, it’s just deorbit.

Sonya Huang: What goes into making a great—you guys have a bunch of mechanical engineers, satellite engineers. What actually goes into the engineering of solving this, and what are the core competencies you look for?

Philip Johnston: Yeah. So as I mentioned, the two biggest challenges are the thermals and the high radiation environment of space. So for the thermals, we’ve got, for example, the guy from NASA’s Jet Propulsion Laboratory who designed the radiator and all of the thermal system for the Europa Clipper mission. That was NASA’s largest, most expensive deep space mission ever. He also designed the thermals on the Firefly lunar lander and for three of the NASA payloads. Then another guy who was lead thermal engineer on Amazon’s Kuiper (now Leo) constellation, and a bunch of people from SpaceX on the thermal side of things. And then for the radiation testing side, my co-founder Adi has previously launched a bunch of GPUs and did all this kind of particle accelerator testing.

Sonya Huang: Has anything surprised you from the testing?

Philip Johnston: A few things, but it’s—this is like our core IP. [laughs] We’re a bit tight-lipped about some of the things.

Sonya Huang: Yeah.

Pat Grady: It’s okay. It’s a friendly audience.

Philip Johnston: [laughs]

Sonya Huang: Okay. So you very much seem like a SpaceX, Elon maxi based on some of the things you’ve said. What do you think of some of the alternative space launch companies?

Philip Johnston: I’m very hopeful and positive about them in general. But I mean, you guys have a massive SpaceX position, so you guys are presumably SpaceX maxis, too. And what a great investment from Shaun, by the way. I think Shaun said SpaceX is the best company ever, and I do think it’s the best company ever. They’re, like, unbelievable at what they’re pulling off. They’re just so far ahead of everybody else. For the other companies, you need a reusable upper stage to be anywhere close to cost competitive.

So you have Stoke Space. Relativity is potentially going to look at it. I think New Glenn, the Blue Origin rocket, is going to as well. They haven’t announced it, but they’ve started hiring heat shield engineers, and you would only do that if you’re building a reusable upper stage. And then Rocket Lab, I don’t think, are even trying. And even if they were to start now, you’ve got a five- to ten-year development cycle on a reusable upper stage. So yeah, I think SpaceX is basically building one of the most entrenched monopolies of all time in terms of launch capacity.

Sonya Huang: And to that end, you guys partner with them. They’re your launch partner. How does it feel to be building something where now Elon has also stated that his intention is to put a lot of data center capacity up in space?

Philip Johnston: Yeah, yeah. So SpaceX are amazing partners. Our company definitely wouldn’t exist without the rideshare program. And in general, they work hard to foster the whole ecosystem. I mean, they launch their own competitors: they launch Amazon Leo, the Kuiper constellation, and they launched OneWeb, both direct competitors to Starlink, and they open source their patents and things like that. So yeah, we love working with SpaceX.

In terms of the way that I think this plays out (because you’re right, they’re now going extremely aggressively into building their own data centers), SpaceX will have a lower cost base than us because they own the launch. I think the way that we fit into this is twofold. Number one, SpaceX are mainly going to be serving their own workloads, so Grok and Tesla and others. They may offer a third-party cloud service, but as I understand it, there’s no intention to offer a box that people can put their own chips in, which is the core offering that we have: we essentially give people a box that has power, cooling and connectivity, and they can put whatever chip architecture they want in there and sell to whichever customers they want. So you can think of us more like Equinix, while SpaceX might be more like AWS or something like that. So they will have a lower cost base than us, but we will have a lower cost base than all of the hyperscalers.

So the way I see this playing out, if it’s true that on a sort of five- to ten-year timeframe, most new data center capacity is being deployed in space, what’s going to happen is in three years, once Starship is flying frequently, all of the hyperscalers are going to realize this and they’re going to be like, “Oh, shit. If we don’t have access to space compute, we are screwed, because we can’t scale anywhere near as fast as those that do.” And so at that point, they have three options, I think. So one is they can pay Elon for his space data center capacity.

And for sure, some of them will do that. That would be a good option. Some of them won’t. It seems unlikely that OpenAI or Meta or Google or Microsoft would do that. Or they can start building their own satellites. Again, some of them might do that, but it seems unlikely. Google, for example, say they’re doing what we’re doing, but what they’re actually doing is paying Planet Labs to do a demo in 2027, which suggests they’re not moving particularly aggressively. Or they’ll look around and say, “Okay, we need to move quickly on this. Who has the most advanced capability in the market?” And at that point, we’ll be by far the most advanced in terms of what’s deployed on orbit, the engineering team, and all the IP that we have. So I think at that point we become an interesting partner for those guys. And I do mean partner, not necessarily just acquisition target. You know, I think there is a customer relationship where we provide the infrastructure and energy and they do the cloud providing part of it.

Pat Grady: Well, yeah, and I have the business model question then. Why choose the Equinix business model versus AWS or even Akamai?

Philip Johnston: It’s a good question. Yeah, we’ve certainly been looking at being a cloud provider ourselves. In the initial early days, we will probably have to do something like that, because nobody’s going to trust us with their chips until we’ve proved it works a few times. But we would much rather be an infrastructure and energy play than a cloud provider. And the reason is the core IP of the company, the core skill that we’re good at, is building satellites that can dissipate heat and protect you from radiation. We don’t necessarily want to rebuild, you know, the incredible application layer that AWS has spent 20 years building, and customers don’t want to lose access to that functionality.

And the other point is that the most expensive part of all of this is the chips, and we would rather have somebody else finance the chips, and they can decide whatever chip architecture they want and all the rest of it. Maybe much further down the line it will make sense for us to have a cloud offering, but initially I think there’s a great business here, and depending on which way you look at it, it can be much higher margin.

Pat Grady: Okay. I have a question about real estate.

Philip Johnston: Yeah.

Pat Grady: How does real estate work in space? Because earlier you were saying one of the issues on Earth is running out of physical real estate to go build data centers on. How does real estate work in space? And as space gets more crowded, how do you think it will work?

Philip Johnston: Yeah. I mean, for now it’s essentially first come, first served. And so we’ve just filed for a constellation of 88,000. It would allow us to deploy about …

Pat Grady: Who do you file with?

Philip Johnston: In the US you file with the FCC. If you’re going to interact with US ground stations, you have to. If not (which actually in the end state we won’t), you can pick any regulator in the world. And then they all sit under the ITU, the sort of global governing body. It is a bit weird that the FCC manages this; I think it’s a legacy from the days when the only thing satellites did was communications, RF spectrum and things like that. Now they’re doing much more. It’s a bit of a legacy hangover that the FCC is …

Pat Grady: So real estate today is first come, first served. How about 10 years from now?

Philip Johnston: Ten years from now, I expect it will be—certainly the most valuable slots will get filled up, and then it will probably be that whoever got them first will have the right to sell them.

Sonya Huang: Pat’s about to figure out how to be our overlord landlord in space right now.

Pat Grady: Big time space commercial real estate guy.

Sonya Huang: [laughs]

Pat Grady: All right. Question about security. How does security work in space? So let’s say a bunch of critical workloads are running on your satellites and somebody decides to attack them. Like, how does that work?

Philip Johnston: Yeah. I mean, we have a very good precedent for this, which is the Starlink satellites. In Ukraine, for example, the military is using them. And it’s not that Russia hasn’t tried or doesn’t want to take out Starlink satellites; they definitely do. It’s just a lot easier, even if you’re Russia, to blow up a data center in Virginia than it is to blow up a data center moving at 27,000 kilometers an hour in low Earth orbit. And if that were to happen, it would be considered an act of war. Where the Starlink satellites are flying right now, they’re flying much lower than they used to, so there’s no real risk of a Kessler-type chain effect destroying low Earth orbit. And in the US, that is the primary function of Space Force: they’re now building a whole bunch of interceptors and things to deter …

Pat Grady: [inaudible]

Philip Johnston: Yeah, exactly.

Sonya Huang: And I just have trouble visualizing how big space is, so this may be a gigantically dumb question. But as we get toward this kind of Dyson sphere—tons of gigawatts and more up there—does it get to the point where there’s just less light coming through the atmosphere because we have so much in LEO?

Philip Johnston: That is a great question. Not the way that we’ve designed it. We’re going to fly in what they call a dawn-dusk sun-synchronous orbit, and it’s actually good for us, good for astronomy, and good because you don’t block stuff out. So let’s say this is the Sun and this is the Earth. It’s not like we’re flying around like this—we fly over the poles, so we never cast a shadow on the Earth. And we’re also never in Earth’s eclipse, so we never go behind the shadow of the Earth either.

Pat Grady: Interesting.

Philip Johnston: Yeah. And it’s great because it means we’re only visible in the night sky at dawn or dusk. And so we don’t then have problems with astronomy and all the rest of it.

Sonya Huang: Okay. There’s been some—as data centers in space has become almost a memetic thing, thanks to you, there’s also been some fierce criticism of it. What do you think is the chief criticism? You know, what actually resonates to you of the criticism, and what would you say is unfounded?

Philip Johnston: Yeah, I think things like the thermal problem are pretty easily solvable. Sometimes people put out a cost equation where they’re still using the Falcon 9 launch cost. And I say to people: if you don’t think launch costs are going to come down, then we’re a terrible business. If you do, then we may be the biggest business ever. The one that not many people talk about, but is actually probably the most significant, is that we need the chips not to have a higher failure rate in space than on Earth, because the chips are such an expensive part of what we’re doing. Even if they have, say, a 10 percent higher failure rate, that would basically wipe out all of the savings from the energy.
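To see why a 10 percent bump in failure rate could swamp the energy savings, here is a rough back-of-the-envelope sketch. The two energy prices are figures quoted in this conversation; the GPU cost, power draw, and failure-rate numbers are illustrative assumptions, not Starcloud figures:

```python
# Rough sanity check of the claim that a modestly higher chip failure
# rate in space would wipe out the energy savings.
# GPU cost, power draw, and failure rate are illustrative assumptions.

gpu_cost_usd = 30_000        # assumed cost of one accelerator
gpu_power_kw = 1.0           # assumed sustained power draw per accelerator
hours_per_year = 8760

earth_energy_usd_per_kwh = 0.08   # terrestrial rate quoted in the conversation
space_energy_usd_per_kwh = 0.005  # "below half a cent" end-state figure quoted

# Energy saved per GPU per year by running in space instead of on Earth.
annual_energy_savings = gpu_power_kw * hours_per_year * (
    earth_energy_usd_per_kwh - space_energy_usd_per_kwh)

# Expected annual loss if 10% more chips fail per year than on Earth.
extra_failure_rate = 0.10
expected_extra_loss = extra_failure_rate * gpu_cost_usd

print(f"energy saved per GPU-year:   ${annual_energy_savings:,.0f}")
print(f"expected extra failure cost: ${expected_extra_loss:,.0f}")
```

Under these assumed numbers the expected failure cost (a few thousand dollars per GPU-year) dwarfs the energy savings (a few hundred), which is why chip reliability in the radiation environment matters so much to the economics.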

Pat Grady: Speaking of which, what are the components of the ideal space data center? We sort of simplify and just say “GPUs in space.” GPUs, CPUs, memory, cooling, like, what all has to go in the box?

Philip Johnston: Yeah, it’s much simpler than most satellites.

Pat Grady: Okay.

Philip Johnston: Because for most satellites—the Starlink satellites, for example—a huge portion of the mass and the cost is these phased-array antennas. We don’t need anything like that. So it’s pretty simple: solar panels, radiators, the box—the bus—and the chips. And the chips obviously come with memory, motherboard, and a power system. We do need small batteries—you can’t send power directly from the solar panels to the chips, so you need some buffer in there—but we don’t need 24 hours of battery storage, which you do for most data centers on Earth. That’s basically it. Plus one reaction wheel, which is extremely unusual, because most satellites have at least three. They’re very heavy because, as they spin up, they turn the satellite. We only need one because the satellite’s very long, and you get natural stabilization from the gravity gradient between the closest point to the Earth and the furthest point from the Earth. So you only need a reaction wheel in one axis—it’s not going to move in either of the other two. And then two lasers for communication.

Sonya Huang: And so you can strip out a lot of the stuff that goes into land-based data centers then.

Philip Johnston: Yeah, lots of the stuff—chillers, cooling towers, battery backup power, AC-to-DC converters. There’s a whole bunch of things we can strip out. And that’s a huge component of the cost saving that most people don’t talk about. Most people talk about the energy: okay, let’s say we can do three cents per kilowatt hour versus eight cents per kilowatt hour. That’s one part of it. The second part is the infrastructure cost. On Earth you’re looking at $15 to $20 million per megawatt for new data center infrastructure—all the things like chillers, cooling towers, battery backup. For us, it’s less than $5 million per megawatt, because it’s literally just solar panels and radiators. There’s nothing else, really.

Sonya Huang: You said three cents versus eight cents. Is that roughly where you think the power costs are coming in?

Philip Johnston: Our all-in energy cost in the end state will be much lower than half a cent per kilowatt-hour, including the launch cost.

Pat Grady: Wow.

Philip Johnston: Yeah, the three cents is what we’ve signed for with one of our LOIs.

Sonya Huang: Okay. I see why this is going to be one of the biggest businesses of all time. How are you thinking about sequencing out? Do you have customers already? Do you have contracts lined up? Like, what do you think will be the first workloads that you’re running commercially in space?

Philip Johnston: Yeah, so we’ve sequenced it around the fact that there’s a bit of uncertainty about the timeline on Starship. The first few satellites are designed to provide edge and cloud services for other spacecraft, particularly military and government satellites and Earth observation satellites. So yeah, we’ll be running workloads for various military customers—we already are, actually, on Starcloud-1.

And you can basically keep the business running on those types of contracts for as long as it takes until Starship is ready. They’re on the order of a thousand times more dollars per hour of GPU time than a terrestrial contract would be.

So that’s the one we’re launching later this year. We’re launching another one next year, very similar—Starcloud-2 and Starcloud-2.1. And we can basically just keep doing that. Say Starship were delayed two or three years—we can just keep launching these edge nodes for other spacecraft. Then as Starship ramps up, we’ll launch the Starcloud-3 satellite, and that’s the first one that is cost competitive with terrestrial data centers.

Sonya Huang: For your space customers, is there a reason they can’t just run their workloads on land-based data centers and beam it up, versus …?

Philip Johnston: Yeah. The main reason is that we’re hugely constrained on the amount of data you can downlink from space to Earth. For example, a SAR satellite—synthetic aperture radar—might be collecting five gigabytes of data a second. But it can only transmit over relatively slow RF at the moment, and only when it’s above a ground station, where it might get a one-gigabit-per-second data rate—gigabit, not gigabyte. That’s much slower than the rate it’s collecting. So right now they just throw away 90 percent of the data they collect, or it just never gets used.
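The bandwidth gap described here can be sanity-checked with a quick back-of-the-envelope calculation. The collection (5 GB/s) and downlink (1 Gbit/s) rates are the figures quoted in the conversation; the orbital period and ground-station contact time are assumptions:

```python
# Back-of-the-envelope check of the SAR downlink bottleneck.
# Collection and downlink rates come from the conversation;
# orbital period and contact time per orbit are assumptions.

collect_rate_gbit_s = 5 * 8   # 5 gigabytes/s expressed in gigabits/s
downlink_rate_gbit_s = 1      # RF rate while over a ground station

orbit_minutes = 95            # typical low-Earth-orbit period (assumed)
contact_minutes = 10          # assumed ground-station contact per orbit

collected_gbit = collect_rate_gbit_s * orbit_minutes * 60
downlinked_gbit = downlink_rate_gbit_s * contact_minutes * 60

print(f"collected per orbit:  {collected_gbit:,} Gbit")
print(f"downlinked per orbit: {downlinked_gbit:,} Gbit")
print(f"fraction delivered:   {downlinked_gbit / collected_gbit:.2%}")
```

Under these assumptions less than 1 percent of continuously collected data could ever reach the ground; even if the radar only images part of each orbit, the vast majority of the data is stranded on board, consistent with the 90-percent figure.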

And so in future, any satellite that can connect in with an optical terminal to the transport layer—like the SDA, Space Development Agency, has this transport layer—will be able to connect to us. They can ship enormous amounts of data to us through optical in space, and then we can run inference workloads on that in space. And that might be, for example, identifying a vessel in a normal—you know, they might send us 10 terabytes of data of just ocean. We can then identify the location of a vessel in that. At the moment, they don’t have the processing power on board to do that.

Pat Grady: Interesting. So the initial workloads are likely to be data that is collected in space and processed in space.

Philip Johnston: Yeah, exactly.

Pat Grady: Yeah, that makes sense. Okay, you spend a lot of time in space. Aliens.

Philip Johnston: [laughs] Great topic. I love this.

Pat Grady: Great! Let’s go! Are there aliens?

Philip Johnston: There have almost certainly been aliens in our galaxy. There are almost certainly aliens alive in the universe. It doesn’t look like there’s any intelligent life in our galaxy right now.

Pat Grady: Why do you say there almost certainly have been?

Philip Johnston: Are you familiar with the Fermi Paradox? Like, this question of why—yeah.

Pat Grady: Go ahead and explain it, though.

Philip Johnston: All right. So the Fermi Paradox is the idea that we should see more life in our galaxy than we do—perhaps we should see life everywhere in our galaxy. There are something like 400 billion stars in our galaxy, each with roughly 10 planets, so you’re talking about four trillion planets in our galaxy alone—and that’s just the Milky Way; there are trillions of other galaxies besides. And each of those planets has been potentially habitable for the last 10 billion years. So we’ve got four trillion planets, potentially habitable, for the last 10 billion years.

It would seem there are two possibilities. Either we are staggeringly, unbelievably rare—and that is a possibility—literally the first to reach this level of complexity in our galaxy’s history. Or intelligent life is somewhat short-lived. My working hypothesis at the moment is that intelligent life is somewhat short-lived.

And so yeah, they call these Great Filters. If we’re extremely rare, the Great Filter is probably behind us—something like moving from single-celled to multicellular life; that’s extremely hard for life to do, let’s say. If the Great Filter is in front of us, which I personally believe it is, then, say, once you hit superintelligence, it wouldn’t take very long for a swarm of a million killer AI drones to make mincemeat of both themselves and the planet. And we are building swarms of a million AI killer drones. So to me it wouldn’t be surprising if in the next few hundred to a thousand years we do not pass the Great Filter. Maybe that’s a little bit of doomerism.

Like, the other alternative is we’re literally the first, and I’m quite happy to continue living life as if we might be the first. You know, I think we should send probes out to other stars, and I think we should, you know, expand and explore the galaxy and all the rest of it.

Yeah, but in terms of why I think there have been others: it just seems pretty unlikely that on four trillion planets over 10 billion years, we’re literally the first to have reached this level of complexity. And all of them would have understood the Fermi Paradox, too. They would have all looked around and been like, “Wait.” Because it only takes one or two million years to colonize the whole galaxy from the point we’re at now. Even with Voyager-probe technology, you can get to Alpha Centauri in about 50,000 years, which is the blink of an eye on galactic and evolutionary timescales. So we could send self-replicating probes to every star in the galaxy within about two million years. And we don’t see that anywhere—no evidence of Dyson spheres or intelligence in our galaxy at all. So to me it’s pretty likely there has been intelligence in our galaxy and it has not survived very long. What’s your opinion?

Pat Grady: Well, I don’t know if I have an opinion on that, but I do have a follow-up question. Although, by the way, on that point, the other theory I find interesting is the “ants by the side of the road” hypothesis: intelligent life is not short-lived, we’re just irrelevant to it.

Philip Johnston: I like that too, but you would see Dyson spheres all across our galaxy. Like, it wouldn’t be difficult. Like, if you’re an ant in the middle of Manhattan, you’re not like, “Where are the humans?” Like, you know, the humans are pretty obvious.

Pat Grady: Yeah. The question I had, though, is you mentioned earlier, talking about sticking Optimus in a spacesuit and sending it to the moon. And so clearly you’ve thought about kind of the steps to becoming an interplanetary species, you know, starting with the moon and Mars and whatever. How do you see that rolling out?

Philip Johnston: I mean, the only thing I really have to go by is the plans Elon has been putting out, and that seems by far the most likely path. The Artemis program honestly seems like a bit of a disaster, but Elon’s roadmap—I think they can actually execute on it. And there’s a reason to do it now: building mass drivers and shooting AI satellites from the moon is an extremely strong economic incentive for getting to the moon. And once we’ve done that, we’ll go to Mars. So yeah, in my lifetime I think we’ll have people on Mars, and I think we’ll have cities on the moon in my lifetime.

Sonya Huang: What do you think are the best business models in space other than data centers?

Philip Johnston: [laughs] Definitely data centers are the best one. There’s a whole bunch. I think asteroid mining will be a huge business at some point. It might take a little while. Tourism, lunar hotels, low-Earth orbit hotels will be a big business.

Sonya Huang: Have you reserved one of the slots yet?

Philip Johnston: From Skyler, GRU. I don’t have $200 grand. I think that’s how much it costs. [laughs] But no, I think it’s probably quite a way off. And I think SpaceX is probably very well positioned to do that. And Elon even said he was going to enable people to get to the moon. And then what else? I think yeah, manufacturing in space will be a big business. There’s many more communications businesses that will be built.

Sonya Huang: Manufacturing what in space?

Philip Johnston: Well, at the moment companies like Varda are doing crystal structures, particularly for medicine and other things, but that’s purely because they want to take advantage of microgravity. I think over time, just because you can get access to more energy in space, you’ll be able to do lots of things. For example, if you wanted to refine material from the lunar surface or from asteroids, you can use the energy in space to do that.

Sonya Huang: Similar to the alien question, do you think AI is gonna help us understand the universe? Like, is the universe conscious? Things like that.

Philip Johnston: I hope so. Yeah, yeah. I mean, AI will understand the universe a lot better than we do. Like, what’s coming with AI is something that’s a trillion, trillion times smarter than all of humanity combined, so it will have a much better grasp on the reality of the universe than we do. And whether it’s able to explain that to our dumb human brains is another question. [laughs]

Sonya Huang: But what are you most excited for it to teach you?

Philip Johnston: I would love to understand more about consciousness. That would be the most interesting thing to me—particularly the hard problem of consciousness, and why seemingly robotic things like humans have qualia, intentionality, and sensations. Just consciousness in general I’d be very interested to understand.

Sonya Huang: I agree.

Philip Johnston: What about you?

Sonya Huang: Same answer.

Philip Johnston: Oh, yeah? Nice. What about you?

Pat Grady: On that one …

Sonya Huang: How do I maximize multiple money returns?

Philip Johnston: [laughs]

Pat Grady: Maximize net multiple money returns for limited partners while helping founders build legendary companies for data engineering and beyond. No, I agree.

Philip Johnston: You guys are using AI quite a lot internally, right?

Pat Grady: We are, yeah.

Philip Johnston: So I did this when we went to fundraise. I was like, “Okay, I’m gonna ask Gemini which space data center startup it would invest in if it was like …”

Pat Grady: What’d it say?

Philip Johnston: Starcloud. I was like, “Yes! It knows what it’s doing.”

Sonya Huang: Good Gemini.

Philip Johnston: Maybe it’s because it knows that I run Starcloud. I don’t know. Maybe it’s a bit sycophantic, but I tried it with different windows and like—but if I was a VC, I would 100 percent do the same thing. Maybe it’s more sophisticated than that.

Sonya Huang: We’re doing everything. So for example, there’s a lot of signal in what kind of infrastructure and tools the models recommend you use, and those companies are [inaudible] …

Philip Johnston: Oh, that’s going to be …

Sonya Huang: So we’re mining that right now as an example.

Philip Johnston: Oh, that’s amazing.

Sonya Huang: There’s just so many ways to be creative, I think. And our younger people are probably the most token hungry, token consumptive, and they’re each kind of figuring out different creative ways to do things.

Philip Johnston: Yeah, I posted on our Slack yesterday—this might sound like a weird way to phrase this, but I posted, like, “Monthly reminder that I’m not going to be happy until every engineer is spending $10,000 a month on tokens.”

Pat Grady: Yeah. Yeah.

Philip Johnston: And I know they’re going to—they’re sitting there going, “That surely is not the right metric to track.” But I just don’t want them to be like—I want to really drum it into them, like, this is literally what I expect. And I will be happy when you’re spending 10 grand a month on tokens. So, like, sometimes they come to me and say, “Can we spend $300 a month on Grok 4 Heavy?” It’s like, “Yes.”

Pat Grady: [laughs]

Sonya Huang: In the end state, how much of GDP do you think will be spent on inference?

Philip Johnston: 99.9 percent.

Pat Grady: Wow!

Philip Johnston: So as in I think we’re building a Dyson sphere, and a Dyson sphere will be almost all of the physical economy. So yeah, you know, in sort of 500 to 1,000 years, 99.9 percent of the economy will be space compute, and almost all of that will be inference.

Sonya Huang: Unfortunately, 1,000 years is outside of our investment timeframe, but I agree with you.

Philip Johnston: I mean, it depends what you mean by “end state.” In the next few decades—have you seen the charts of the percentage of electricity consumption that goes into compute? That graph is not stopping until it gets to 99.9 percent.

Pat Grady: Yeah.

Sonya Huang: Awesome. This was so cool. Philip, thank you for joining us today. You live in the future and you’ve brought that future to us, I think, faster than we could have ever hoped. And so thank you for joining us today. This is an awesome conversation.

Philip Johnston: Thank you so much for having me.

Pat Grady: Thank you.

Philip Johnston: Thank you.
