Physics Gets a Vote: Nominal Cofounders on Hardware Development in an AI World
Nominal’s cofounders (Cameron McCord, Jason Hoch and Bryce Strauss) realized that the new age of reindustrialization requires a new approach to hardware engineering and testing that’s closer to how software is developed. They founded Nominal with the insight that while SpaceX, Tesla, and Anduril built proprietary internal platforms for hardware testing, the thousands of new hardware entrants can’t afford to replicate that work. We discuss their belief that all hardware companies will become physical AI companies, and why they think Nominal’s role as the verification layer will be critical – because unlike a video game, physical products require rigorous validation before they enter the real world.
Listen Now
Summary
Insights from this episode:
Hardware testing is the critical bridge between simulation and reality: Cameron and Jason emphasized that blending simulation outputs with real-world telemetry is essential—“physics gets a vote”—and that much of the value comes from continuously iterating between the two.
Data infrastructure for hardware is still catching up to software: Unlike software, hardware engineering often lacks centralized, cloud-based data practices. Many organizations still rely on local storage, fragmented tools, and manual reporting, creating barriers to scaling AI applications.
Testing is the ideal entry point for AI and data platforms in hardware: Testing has been neglected by digital transformation and lacks a system of record, but it’s where ROI is most immediate and where iterative, experiment-driven engineering naturally aligns with AI-driven insights and automation.
Traceability and cross-functional data are increasingly vital: As hardware development accelerates, the ability to connect data across manufacturing, R&D, and operations—enabling continuous testing and rapid anomaly detection—creates competitive advantage and is foundational for future AI-powered workflows.
The future of hardware is ‘physical AI’—but data and real-world validation remain paramount: While the dream is agentic AI designing, building, and testing hardware, the path there requires massive real-world data collection, robust validation, and new infrastructure. AI can accelerate tedious work, but safety and physical consequences demand more rigorous testing than pure software ever needed.
Transcript
Cameron McCord: We’re entering a period where there is going to be much more hardware testing. So I actually think that we are—like, the pendulum is going to swing back. I think we are coming to grips with how little we actually understand about how physical systems operate in the world, and how, like, lacking we are from a data perspective. It’s going to be a race to try to, like, collect this data and actually develop these models.
Jason Hoch: I always think of it as like, if you have AGI designing, like, a video game for your child, like, you might let them play it without it being, like, rigorously tested. It’s just a video game. But if you had AGI, like, building a toy for your child, you would, like, really want to make sure that it wasn’t physically dangerous. It’s like, the physical world will just always be different because that’s what we live in.
Sonya Huang: Cameron, Jason, thank you so much for joining us today.
Cameron McCord: Thanks for having us.
Sonya Huang: Nominal is the all-in-one data and AI platform for hardware engineering. You are used by amazing companies from Anduril all the way to the Corvette racing team, in industries including aerospace and defense, robotics, autonomy and more. And I think one notable stat you just shared with me, you’re used by four of the top five defense primes in the US. Congratulations on everything so far, including the recent raise.
Cameron McCord: Thanks so much. Yeah.
Jason Hoch: Thank you.
Sonya Huang: Let’s jump right in. You’ve talked about how we’re entering a new age of hardware, and that, you know, America is rapidly reindustrializing its industrial base. Can you just discuss that?
Cameron McCord: Yeah. I think we talk a lot about there being a few macro tailwinds that Nominal is now really benefiting from in terms of reindustrialization and hardware development. Really, when we think about that, we have this huge compression in timelines. People are trying to build and field hardware products faster than they ever have before. We think hardware testing plays a particular part in that, and we can cover that in much more detail.
I think reindustrialization more broadly: more money is going into building hardware products really rapidly, particularly in an area where we’re very prevalent, which is aerospace and defense. And I think really the paradigm of how hardware is being developed is shifting really, really rapidly. I’ll give a little vignette, I think, of how we kind of think about it. You know, if you look back at core software development and think about what happened over the past two decades, I think one really good way to think about it is actually talking about GitHub as an example. GitHub is a version control system, right? A VCS. But if you go back and do a little history, companies that were building pure software would locally manage versions of the software they developed. Eventually they started to centralize that internally, but still within the company, all internally managed. And then eventually it got so good, it became productized and outsourced, and, you know, venture dollars poured in. And that was really, I think, the genesis of something like GitHub. With all of the CI/CD and DevOps tools that we take for granted today, the software testing problem really is a solved problem. But that same luxury does not exist for the hundreds of thousands of hardware engineers that are now at the frontier of software-defined hardware, autonomy and robotics. And that’s really the space that Nominal is playing in.
Sonya Huang: What do you think is driving the, you know, hardware feels hot again? I have so many hardware companies on my calendar every week. It seems like there’s a whole generation of founders that feels regalvanized. They’ve been trained at the likes of Anduril, SpaceX, Tesla. Like, what do you think is driving it? It feels like there’s something in the water in the kind of reindustrialization kind of startup community.
Cameron McCord: Yeah, I think there’s probably a positive-frame answer to that question, and probably, like, a negative-frame answer to the question, too. I think the positive frame is that humanity is sort of reconciling; there’s, like, these big oscillations. A lot of the ambition, a lot of the things we want to exist in the world, are in the physical world. And I think people are just sort of coming around. Again, after two decades of, like, the SaaS-ification of the world, I think people are just excited to build real things again. And I think particularly companies like SpaceX, like Anduril, like Tesla have proven that if you make investments in the infrastructure and the tools to do this type of hardware development, it’s a massive competitive advantage.
That’s a positive framing. I think that the opposite framing, I think, is, you know, we can talk about AI and how it is impacting many, many worlds here. I think hardware is still a world where there is defensibility in itself, because hardware is hard, right? I think it’s capital intensive, it’s difficult to bend metal and steel and electronics. And, you know, all of this world is very difficult. And so I think that, you know, there’s people excited about it from an investment perspective as well.
Sonya Huang: Awesome.
Alfred Lin: So one of the things that we’ve always observed is that there’s a big gap between what works in simulation and what works in real life. What’s that gap today? How do you, like, help founders with that gap?
Cameron McCord: Yeah.
Alfred Lin: And how do you make it concrete for them?
Cameron McCord: Yeah, I’ll start and then Jason, I’ll pass to you as more of the expert. But I lived this problem very viscerally in my time at Anduril. I got there around sort of 2018, 2019 frame. The company was very early, and there was—I think it was very in vogue, particularly then, to try and simulate everything. And I think that the real power comes from blending simulation outputs of models with real-world telemetry sensor data logs coming off of physical systems. And the advantage is being able to do that continuously and very iteratively. I saw the pendulum swing to let’s do everything in simulation, let’s get as early as we can in the design lifecycle. Like, we can solve problems there. But we sort of always joke, like, physics gets a vote, it still gets a vote. And we have started on …
Alfred Lin: Physics gets a vote?
Cameron McCord: Yeah. Physics gets a vote. I mean, we particularly have started with hardware testing as the narrow kind of wedge that we’ve built Nominal around in these sort of early years, because that is where software-defined hardware is, like, touching reality for the first time. And I think it is where most of the—it’s the tip of the spear for how software is going to impact the physical AI and the development of systems. Eventually, I think we will spread more and more into the simulation and design worlds, but I think being able to merge those two is actually where the advantage comes from.
Jason Hoch: Yeah, I was going to say part of the reason our customers have an appetite to partner with someone like Nominal is because, you know, these hardware organizations 25 or 30 years ago, they developed a model of solving these things in kind of a fragmented way. So the people who were building your simulation would be different than the people who were doing the first prototype, would be different than people who are doing the manufacturing. And as it all becomes more connected, the lack of a common data platform or infrastructure starts to really become obvious.
So recently, I talked to someone who for 30 years has become like a specialist—this is at one of the traditional primes—like a specialist in their specific proprietary simulation technology. And while it’s amazing the lengths that they’ve gone to, you know, it’s all getting disrupted very quickly by newer players like Anduril. And so to move at the speed that people are kind of expecting nowadays, you have to make sure that the engineer who is maybe involved at the early stage of the lifecycle can actually take the logic, the validation they’re building on a tool like Nominal, and apply it much, much later when something’s actually out in the field and they’re monitoring something that’s a production use case.
Alfred Lin: Before you guys started Nominal, what did the primes use? What did Anduril use? What did SpaceX use to do all this testing and monitoring and learning to change the product?
Jason Hoch: Well, so SpaceX is really interesting to us because, unlike other players, they decided from the beginning that they had hired some of the most talented, intelligent, hardworking engineers on the planet, and they wanted to empower those engineers. And they said that the existing software that people use for test, and especially test data analysis, wasn’t good enough, and they started to build something proprietary and in-house. And when we were starting the company and kind of studying that, we said, like, hey, this is a huge reason for their eventual success. Like, it actually led to this acceleration. But, you know, for the thousand companies that are being started this year and next year, it doesn’t make sense for all of them to build a platform like that. So that’s part of the motivation behind Nominal.
Cameron McCord: And I’ll give an example of the many companies that are not SpaceX or Anduril or Tesla. I think the sort of status quo in the industry for test data management is pretty shocking. It still is an area where for most hardware development, data is almost by default stored locally. So there’s a lot of network-accessible storage. It is still a world where, like, the cloud is not common. It is engineers downloading data from a central drive to their local machine, their laptop, to run their own individual MATLAB or Python or, you know, insert other parsing or analysis software to come to their individual result. Like, I’m an avionics engineer, Jason’s a GNC engineer, you’re a thermal engineer. Like, we’re all doing our work independently, and then we are trying to find a mechanism to post those insights and results back, often via screenshot. So PDFs, PowerPoint engineering is, like, still the bleeding edge for many, many of these companies. And I think we often talk about, like, the early days of Nominal. We were trying to, like, rip the industry from 2003 to 2019, 2020. And just like good software practices, sound data engineering. Like, Jason often talks about, you know, what we built today at Nominal is having to get 10 or 11 really, really hard software problems right to empower our users. And then now we’re on a very exciting journey, I think, of coming from 2020, 2021 into the world we’re living in today for our users, which is pushing the frontier.
Sonya Huang: Yeah.
Alfred Lin: How much are you educating incumbents? And like you said, you were working with four out of five defense primes. How much are they really adopting AI? How much are you educating them on what they need to do to improve their product? It still looks like it takes many, many years for them to make any change to their hardware products.
Jason Hoch: Well, as Cameron’s saying, like, the state of the art here is, is kind of behind. And so as we kind of catch them up, that’s the necessary first step to using AI. So as someone who uses AI tools every day, you might think it’s natural for a hardware engineer to ask a question like, “Hey, what happened in the last 50 tests that I ran, and is relevant to the test that I’m looking at now?” But that kind of assumes that the data from the last 50 tests even lives in one place. And that’s kind of the problem that needs to be solved. And the primes are interested in solving that. They recognize the value there, and some of them are getting, I would say, tired of trying to build it in-house themselves and they have an appetite to work with a partner like us.
Cameron McCord: A conversation I often have with a chief engineer, CIO, CTO across the table is this concept that they’re well aware there are insights trapped in their hardware systems. So this is the real world of, like, data acquisition systems, test stands, lab testing, power supplies, instrumentation. Like, that is their bread and butter for bringing their hardware products to life. And such a small percentage of that data is ultimately making it into some central repository where it can be structured with metadata, organized, cataloged, just that basic step. It used to be called “digital engineering.” I think that was the term of art that was very in vogue. And now the conversation is rhyming more with physical AI, but I think the building blocks to getting these organizations ready to build AI capability and applications on top of that really start with that semantic layer that Nominal provides in a lot of the ways that we catalog this hardware data for our customers.
Jason Hoch: And I’ll say that the ambition of AI here gets me really excited, because sometimes it’s asking really interesting questions of, like, okay, is there something that my team didn’t catch when they did all the review of—if you have 10,000 sensors that are each producing a million points a second, like, that’s a ton of data that, you know, automation can maybe surface things we wouldn’t otherwise notice.
But we should recognize that some of it’s also going to just accelerate the more tedious parts of data ingestion and data review. So right now it might be the case that, you know, one of our hardware engineering users wants to automate something: hey, this data check should be happening every single time we do a flight test. Even when I’m not involved, because we’re doing that testing at a remote location and there’s a flight operator who’s going to be doing it in my place, I still want that data check to be happening. Like, maybe the friction to them doing that is they don’t want to learn a custom, domain-specific language for encoding that check. If they could use an English-to-code prompt in a tool like Nominal, that might be the thing that unlocks them to actually get that across the line, and then they can focus on the more creative, judgment-heavy, human aspects of designing hardware systems.
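A generated check of the kind Jason describes could be as small as a few lines of Python. The channel names, limits, and data layout below are invented for illustration and are not Nominal’s actual API:

```python
# Hypothetical flight-test data check of the kind an English-to-code
# prompt might generate. Channel names and limits are illustrative,
# not Nominal's actual API.

def bus_voltage_check(samples, min_voltage=22.0, load_threshold=5.0):
    """Flag samples where bus voltage sags below a limit while under load."""
    violations = [
        s for s in samples
        if s["motor_current_a"] > load_threshold
        and s["bus_voltage_v"] < min_voltage
    ]
    return {"passed": not violations, "violation_count": len(violations)}

samples = [
    {"motor_current_a": 2.0, "bus_voltage_v": 25.1},  # idle: in spec
    {"motor_current_a": 8.0, "bus_voltage_v": 23.4},  # loaded: in spec
    {"motor_current_a": 9.5, "bus_voltage_v": 21.8},  # loaded: voltage sag
]
result = bus_voltage_check(samples)  # fails: one violation under load
```

The point is that the engineer specifies the check in English, and the generated artifact is just reviewable code that can run automatically on every test.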
Sonya Huang: You mentioned the GitHub analogy earlier. If you map out the hardware design lifecycle, so to speak, I’d imagine there’s the design of the thing, there’s the testing of the thing, there’s the manufacturing of the thing and there’s the monitoring of the thing in production. That’s what my simple brain kind of maps it onto. Is that fair?
Cameron McCord: Yes. Yeah.
Sonya Huang: Why start with testing? And you mentioned it’s one of the only categories—it’s a category that’s been sort of run by PDFs. Like, design tools, manufacturing, these each have their systems of record. So why has testing been neglected to date?
Cameron McCord: Yeah, I think it’s really—I mean, one answer to the question would be, like, you start with it because it has sort of been neglected and it doesn’t really have its system of record. I mean, one way to frame Nominal is, like, we can be a form of a system of record for testing particularly. And there’s a quick business reason, I think, for starting with it, which is that it is an area where demonstrating ROI quickly is just so clear for a customer.
So there’s sort of this mantra in hardware development where, you know, testing is like this function. There’s sort of incremental improvements you can make, you know, save seconds that compound to minutes and hours. And, like, that’s real value for a customer that’s trying to field a product in a competitive market.
But there’s always this sort of long tail of risk that everyone who’s been on a major hardware program knows. There’s always something hidden in the data that they can’t figure out, and it’s sort of like an all-hands-on-deck effort. It can halt programs. And so Nominal’s been able to help customers sort of, I think, surface insights there.
I think the other answer, though, is just that testing is by definition iterative. Like, that is what testing is. It’s sort of the most classic experimental, independent variable, like science, right? And so I think it is just iterative in nature, which is exactly what Nominal wants to be aligned with, which is like, how can we drive that sort of iteration? And I think when you look at the hardware development lifecycle, testing is a really good place to start. And then we have this vision—and our customers pull us in this direction already—of, if I use a software platform, data platform in testing, I develop all of the validation logic that governs that system’s performance on a specific test, that should be the exact same set of logic that is easily dispersed in an organization to the production manufacturing sort of end-of-line quality test where I am just automatically running in Nominal. We call them checklists but, like, validation logic, essentially. And then I should also be able to deploy that to the edge, to this hardware system. And so almost like Nominal core, our core product becomes like the authoring hub of all logic that governs the performance of physical systems. And I can sort of version control it and deploy it at the edge.
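As a sketch of that “author once, deploy everywhere” idea, a checklist can be plain data, a list of named checks, so the identical logic runs against an R&D test log, an end-of-line quality station, or telemetry at the edge. Everything here (names, limits, structure) is a hypothetical illustration, not Nominal’s product:

```python
# Sketch of portable validation logic: one checklist, authored once,
# runnable in any context. Channel names and limits are hypothetical.

def within(ch, lo, hi):
    """Build a named check that channel `ch` stays within [lo, hi]."""
    def check(samples):
        ok = all(lo <= s[ch] <= hi for s in samples)
        return {"check": f"{ch}_within_{lo}_{hi}", "passed": ok}
    return check

# The "checklist": the same list could run on a bench test, an
# end-of-line quality test, or onboard telemetry at the edge.
CHECKLIST = [
    within("motor_temp_c", -10, 80),
    within("vibration_g", 0, 3.5),
]

def run_checklist(samples):
    return [check(samples) for check in CHECKLIST]

bench_log = [
    {"motor_temp_c": 42, "vibration_g": 1.1},
    {"motor_temp_c": 55, "vibration_g": 2.0},
]
results = run_checklist(bench_log)
```

Because the checklist is just data plus code, it can be version-controlled and redeployed the way Cameron describes.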
Jason Hoch: And for people in the audience who are software engineers, I just want to clarify, because hardware testing is so rich, and it’s one of the things that I’ve come to really appreciate as someone who comes from a more pure software background. You know, when I think of software testing, there’s something as basic as a unit test, which is just so simple and deterministic, and even, like, richer kind of like end-to-end or production-level testing software, it just pales in comparison. Like, if you are building an aircraft and you’re performing a flight test, like, the test still involves—there’s a physical machine, there’s hundreds of people involved. There’s someone in it who’s flying. And so you might do pre-tests to make sure that that’s safe. You know, it actually becomes closer to what you might think of as a quote-unquote “production use case” coming from a world like software.
Sonya Huang: Totally. And then to your earlier point on physics gets a vote, testing does seem like, you know, where the rubber meets the road. Like, does the thing behave as expected? Which is really all that matters. My AI brain immediately hops to what an interesting dataset you’re collecting there, right? Because you now have data across customers on different configurations, different design patterns and, like, how they actually perform in tests. And so can you talk a little bit about—do you have designs about going further into kind of pushing AI research in that space?
Cameron McCord: Yeah. Yeah. And I’d say, like, Nominal is already in use, you know, with companies that are doing physical model development and training these sorts of models. And where Nominal started to be really valuable for these customers, which was an interesting insight for us, is that there’s so much you have to be able to separate out when you are testing the performance of models on hardware systems.
And so Nominal, it turns out, a thing that we were really good at doing is automatically finding anomalies in data. And so for customers that are trying to figure out, am I collecting good data to then inform the development of my model in a robotic system? Let’s just take a robotic arm, for example—simple example. There could be issues with the servo, issues with the motor, issues with the physical performance of that system that are actually going to make all of the data you collect bad.
And so Nominal’s sort of running in the background actually saying, “Hey, of the 120-second test where this robotic arm folded a piece of laundry, actually only this percentage of data did we have, like, high-fidelity confidence that the actual physical telemetry and components were performing at—you know, within calibration, within standard.” Therefore, you can extract those pieces of data to go into sort of like actually training a model.
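The filtering Cameron describes might look roughly like this: scan the telemetry for maximal runs where every monitored channel stayed within calibration limits, and keep only those windows for model training. The channels and limits are made up for the example:

```python
# Sketch of extracting "trusted" training windows from a hardware test:
# keep only contiguous spans where every monitored channel stayed in
# spec. The limits and channel names are invented for illustration.

def trusted_windows(samples, limits):
    """Return (start, end) index pairs of maximal in-spec runs."""
    windows, start = [], None
    for i, s in enumerate(samples):
        in_spec = all(lo <= s[ch] <= hi for ch, (lo, hi) in limits.items())
        if in_spec and start is None:
            start = i
        elif not in_spec and start is not None:
            windows.append((start, i))
            start = None
    if start is not None:
        windows.append((start, len(samples)))
    return windows

limits = {"servo_temp_c": (0, 70), "joint_torque_nm": (-5, 5)}
samples = [
    {"servo_temp_c": 45, "joint_torque_nm": 1.2},
    {"servo_temp_c": 48, "joint_torque_nm": 2.0},
    {"servo_temp_c": 82, "joint_torque_nm": 2.1},  # overheating: excluded
    {"servo_temp_c": 60, "joint_torque_nm": -0.5},
]
windows = trusted_windows(samples, limits)
```

Only the samples inside the returned windows would feed the training set; the overheating span is dropped because any data collected there reflects a misbehaving component, not the behavior being modeled.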
And that’s just the sort of crawl step of this. But yeah, I think we’re getting more and more involved with that with our customers, and think that will be sort of an integral part of that stack in an area where they frankly don’t see it as a differentiated capability that they would want to build themselves. It’s hard to, and their proprietary IP is developing the model itself. But, like, Nominal, I think the ability for us to derive insights across many of those use cases, I think is going to be helpful for customers to bring them [inaudible].
Sonya Huang: You know how in the coding space there’s, like, the verification agents?
Cameron McCord: Mm-hmm.
Sonya Huang: It seems to me that you guys can almost be like the verification agent that assists in each company’s development of its design agent, so to speak.
Jason Hoch: Yeah. Yeah. I mean, this is the analogy that I’m the most excited about, which is like, it would be amazing to have unit testing for hardware, but part of why agents have gotten so good in the world of coding is just because things are verifiable. And so, like, that learning loop can go really fast, and it would be a huge dream to have that for hardware, but I think it’s necessary to build, you know, essentially like test and validation infrastructure to get there.
Sonya Huang: Yeah. Makes sense. You brought up the robotic arm example, so I have to ask: do most companies have separate hardware and autonomy teams that you observe today? And then is it separate hardware and autonomy stacks? Do you serve one side of the house only, both sides of the house?
Cameron McCord: Yeah. Well, let’s pick the robotic arm one and keep unpacking it. We often see—no pun intended—three teams that have three different stacks, depending on whether the company actually manufactures its own robotic systems. There’s a manufacturing stack and there’s a manufacturing team. So the people that are actually assembling the robot—that digital thread could even start at a supplier, and they’re on site, you know, doing the final construction. But there’s a manufacturing team. There’s normally, like, an R&D team that does a lot of prototyping, kind of experimentation, more of what we were talking about, the sort of model development use cases. And then there’s generally, like, a customer-facing team. So fleet operations: they’re trying to observe how the robotic system is performing out in the wild, collecting all of that onboard telemetry information.
So three different teams and three completely different stacks. And so it’s been really interesting to come through and work with customers to actually find the way that Nominal spans all three of those use cases and how powerful that is. We talk a lot about continuous hardware testing. Like, it’s a term that we speak internally about at Nominal—and externally. And so being able to have that sort of invisible thread between an anomaly or an issue that happened with the robotic system deployed in the field, where that comes back to the R&D team, they can quickly triage it. And then if it does derive from a physical component, you know, malfunction or something that’s out of calibration, you can sort of follow it all the way back. Like, I think that’s a big area where Nominal plays.
Jason Hoch: Yeah, I would say that a word that our users care a ton about is just “traceability.” Like, they always want to understand, like, where did this part come from, what test did it undergo, and the cataloging. And that just gets really, really complex at the scale of systems that our customers are building. So if you’re building an aircraft, you know, it’s not the case that you can have every single subsystem go through every single test all the time. It’s just too expensive. You don’t have enough budget, time, resources. And so keeping track of that is fundamental to doing good hardware engineering work.
Alfred Lin: So today, you can have basically Claude Code write substantial software. What do you think is needed before we can have an AI system design, manufacture, test, monitor and sort of come up with new hardware from scratch?
Sonya Huang: Trying to vibe code an airplane?
Alfred Lin: Maybe the airplane shouldn’t be an airplane. It should look something different, especially if we want vertical lift airplanes.
Jason Hoch: It’s one of the things that I talk about when I’m trying to hire a team, like, when I’m trying to say, like, “Hey, if you’re a software engineer, come work on Nominal.” It’s like we’ve all spent so much time building the internet, and the internet works pretty well, but we’re still really far away from being able to vibe code an airplane.
Like, I think about right now I can’t—I have to assemble IKEA furniture myself at home, right? Like, it would be great to have that problem solved. And that’s, like, such a microcosm of saying, like, “Hey, can I design my own IKEA furniture at home?” So it just feels like there’s many, many steps between where we are today and being able to vibe code hardware. But a lot of them come back to—whether it’s like the feedback loop of, like, is this thing working or not, or even just how do we even have training datasets to do hardware AI research? Like, a lot of it comes back to the problem of data collection, data cleaning, data standardization, which is, again, really where we’re focused.
Alfred Lin: But if a company uses Nominal, if they integrate all the data, they have the data from the test, they have data from how different designs perform, they have data from all the context on how something was made. Shouldn’t it be able to sort of learn from all of that?
Jason Hoch: Yeah, I think so. Like, I think about—I was talking to someone this week about, you know, when a test is happening, like even just the audio data of the operators talking to each other during that test, like, that’s a really valuable data set to collect and start to incorporate into a platform like Nominal. I think before AI tools, that would seem like a little bit too much effort, like the bang for buck wouldn’t be there. But now it’s like, oh, of course we should do that. That should all just kind of be brought into one place. And I think over the next couple of years, I’m excited to see what’s unlocked by just even having the data asset collected.
Cameron McCord: I think there’s a lot of really frontier work I think happening in a lot of the modeling and simulation side—CFD, fluid dynamics. Like, people are picking apart. I think the testing world is one where it’s—I think we’re doing it. Like, Nominal is the one that is going to do it.
And maybe I’ll answer the question too, by giving a vignette of some work that we’re doing, some pretty frontier work we’re doing with the US Air Force. So we are working with them, working with DARPA—the Defense Advanced Research Projects Agency—on this really cool effort called CYPHER. It stands for CYber-PHysical systems Executing in Real time. It wouldn’t be the, you know, Defense if it wasn’t a lot of acronyms. But essentially, for those kind of listening in, quick, high level about what test engineering looks like for, you know, a major airplane or weapon system sort of development, it’s this giant matrix of very deterministic test points that need to be satisfied. So my system needs to be between this and this value during this condition. And it’s just literally this giant matrix that kind of is burned down very sequentially, often over the course of years.
What this effort is getting at is actually involving AI agents that in sort of faster than real time are paired with digital twins and that recommend the next best sort of test condition, the sort of knowledge-maximizing next test condition extremely quickly. So rather than run a flight, go fly, collect data, see if I met one discrete deterministic test point, land, look at data, say yes, do it again, actually, now that especially the systems themselves are autonomous, you can have really high endurance. And so in again, real time or faster than real time, change the paradigm of testing from a matrix where I discretely go through to actually just sort of like a gradient curve where I’m sort of like always adjusting my vector extremely quickly, and sort of retraining my model and updating the digital twin physics-informed surrogate model of, like, what the world is. That’s really cool. And I think that is the nirvana that we’re getting towards. And I think, like, it’s—we’re seeing it in sort of the earlier design phases again, but I think it’s just been really hard to do in the test world. But the fact that we are working, I think, hand in hand with the government on this, where they have access to test ranges and infrastructure that make this stuff possible is really exciting for us.
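A toy version of that “knowledge-maximizing” selection: instead of burning down a fixed matrix, query an ensemble of cheap surrogate models and fly the condition where they disagree the most. This is generic active learning, not the actual CYPHER algorithm; the models and candidate conditions are invented:

```python
# Toy "knowledge-maximizing" test selection: pick the next test
# condition where a surrogate-model ensemble is least certain.
# This is a generic active-learning stand-in, not CYPHER itself.
import statistics

def predict_with_uncertainty(ensemble, condition):
    """Mean and spread of an ensemble of surrogate models at a condition."""
    preds = [model(condition) for model in ensemble]
    return statistics.mean(preds), statistics.pstdev(preds)

def next_test_condition(ensemble, candidates):
    """Choose the candidate condition with the highest model disagreement."""
    return max(
        candidates,
        key=lambda c: predict_with_uncertainty(ensemble, c)[1],
    )

# Two disagreeing surrogate models of, say, drag vs. airspeed (made up).
ensemble = [
    lambda v: 0.04 * v * v,
    lambda v: 0.02 * v * v + 5.0,
]
candidates = [10.0, 50.0, 100.0]
chosen = next_test_condition(ensemble, candidates)  # highest disagreement
```

After each flight, the real telemetry would retrain the surrogates, shrinking disagreement where data now exists, which is the continuous loop Cameron describes replacing the sequential matrix burn-down.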
Alfred Lin: How advanced is our Defense Department on the use of AI? Or not advanced?
Cameron McCord: Yeah, it’s interesting. I think this administration particularly has been, like, very forward-leaning on AI. So it’s actually been—you know, AI used to be sort of a disqualifier almost from some contract opportunities, just because—we talk about Nominal as like the epitome of mission-critical applications. You don’t want experimentation. Jason sometimes—we have a Slack channel where we’ll post—you know, we use coding agents and tools as well. And they’re really good for a lot of, like, front-end, you know, React components and different things. But some of the recommendations for some of the, like, back-end things, our team will laugh at and be like, “If we had merged that, it would have been really bad for the customer.” So I think, like, there’s good reason to have some sort of, you know, skepticism, but that’s changing quickly. The Department is, like, really leaning into more and more experimentation here. The collaborative combat aircraft platforms are really, like, pushing the frontier. We have worked closely with Anduril and some other vendors on that project. So I’m inspired by—no pun intended—the gradient of, like, where we’re going.
Alfred Lin: Can I simplify your business to collecting data, visualizing it, analyzing it, iterating on it, and reporting on it? Then isn’t that perfect for agentic AI?
Jason Hoch: Yeah. If you think about the loop that is hardware testing, every single point in that process could be accelerated. Earlier I talked about some tedious aspects of data review, and one of them is reporting. Say you’re the electrical engineer designing a battery subsystem, and you’ve already done the interesting parts, the ones that draw on what only you know: how to take these input channels and synthesize them into an answer to whether the system performed the way it should have, in a way your team and your VP can understand. Now other people might want to ask that question and get the answer into a certain PowerPoint slide format so they can disseminate it. Or in some cases there’s literally a PDF I must ship to our customer, someone who’s purchasing this. Yes, AI can accelerate all of that.
Cameron McCord: Yeah. I get excited by the shift in the paradigm. We sometimes talk internally about how it used to take 50 humans to test and validate one physical hardware product, and that ratio has come down to something like one to one. But how do we get to a world where one human, using agentic tools and using Nominal, can do it in parallel for 50 systems? What does that look like? We’ve already built really interesting and powerful things in our system, like a chat interface, an LLM interface, where you can say things like, “Hey, plot the kinematics of the drone.” That’s a really simple example, but built on the building blocks Nominal has, users’ eyes light up, because that’s otherwise a task that is extremely manual. But there are still areas where human insight has been really key, and one way to look at what we’re trying to build is a massive dataset of human-enriched data. Mechanical engineering masters and PhDs enriching this data, and doing it in Nominal, is a powerful asset.
Sonya Huang: Totally. What inning do you think you are in terms of AI in your product? And if you were to zoom out to AI nirvana for Nominal, what does that look like?
Jason Hoch: I’d say it’s still early innings, just thinking about how much has changed even in the last three months. I hope that 12 months from now we still think we’re in the early innings, because if we don’t, we’re probably not humble enough about what’s coming around the corner. But I think about the features we’ve added to date, and I think we could have twice as many software engineers at Nominal building AI capabilities and still be discovering new things our users might find exciting.
So one thing I was joking about earlier is, do we need a Moneyball for hardware testing? If you’re watching a sports game, there are always these very obscure stats, like, oh, if this person completes this play, they’ll be the third best at something. And I obviously don’t watch a lot of sports.
Sonya Huang: [laughs]
Jason Hoch: But seriously, when we talk to our customers, one of the reasons they like Nominal is that we’re putting more data in front of more eyeballs than they’re used to in their organizations. What that leads to is someone noticing something that, if you catch it in the moment, takes only 30 minutes to address, versus, if it went unnoticed, something exploding and the entire company being shut down for two days in the middle of their most critical test campaign. And since the volumes of data only go up as these systems get more complex, it makes a lot of sense to have agents monitoring, almost as pair programmers in your control room, as you’re running these high-scale tests and saying, “Hey, you’re not looking at this, but relative to the last 50 times you’ve done this, it’s out of family and it’s probably worth someone investigating.”
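[A toy sketch of the “out of family” check described here: compare the latest test’s metric to the recent run history and flag it when it sits far outside that history. The channel name, threshold, and data are illustrative assumptions, not a real Nominal feature.]

```python
# Hypothetical out-of-family monitor: flag a measurement that deviates
# more than n_sigma standard deviations from the recent run history.
import statistics

def out_of_family(history, latest, n_sigma=3.0):
    """True if `latest` is more than n_sigma std devs from the mean
    of `history` (e.g. the same metric over the last 50 runs)."""
    mean = statistics.fmean(history)
    std = statistics.stdev(history)
    if std == 0:
        return latest != mean
    return abs(latest - mean) / std > n_sigma

# Last 50 tank-pressure peaks hovered around 102 psi with ~1 psi spread.
history = [102 + 1.5 * ((i % 7) - 3) / 3 for i in range(50)]
print(out_of_family(history, 101.8))  # in family -> False
print(out_of_family(history, 115.0))  # out of family -> True
```

[Real control rooms would use per-channel baselines and more robust statistics (median absolute deviation, change-point detection), but the shape of the alert is the same: “relative to the last N runs, this is out of family.”]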
Sonya Huang: Yeah, got it. I guess if you zoom out to this AGI future, what does the hardware company of the future look like?
Cameron McCord: I have a thesis, which obviously I believe from a business perspective, that we’re entering a period of much more hardware testing. The pendulum is going to swing back. I think we’re coming to grips with how little we actually understand about how physical systems operate in the world, and how lacking we are from a data perspective. As companies build more and more hardware, it’s going to be a race to collect this data and actually develop these models. I think that’s good for Nominal.
I think eventually that’s going to come full circle, where the best way to build a hardware product is to minimize the amount of real-world testing. It’s a world where AI agents work along the very simple steps you laid out in hardware product development, optimizing each of those steps and the steps between them, and actually linking the design space to the test space with agentic reasoning: how do I optimize testing of the system in the smallest amount of time possible, and preferably only do it once? Pre-train and pre-simulate everything, then run that agentic test agent across my physical system, and hopefully get a hundred percent satisfaction. But I think we’re far away from that, and to get there, there’s going to be a huge explosion in the need for more testing and more fusion of real-world test data and model outputs.
Jason Hoch: I always think of it this way: if you had AGI designing a video game for your child, you might let them play it without it being rigorously tested. It’s just a video game. But if you had AGI building a toy for your child, you would really want to make sure it wasn’t physically dangerous. The physical world will just always be different, because it’s what we live in.
Sonya Huang: Yeah. Do you think all hardware companies will become like physical AI companies?
Jason Hoch: I think yes, in the sense that, or at least I hope that, as design, generation, and manufacturing all become accelerated by more sophisticated AI tooling, people’s creativity is unlocked in the physical world the same way it is in the software world right now.
Alfred Lin: Because most hardware just does one thing and one thing well. It should be a lot more flexible.
Cameron McCord: Yes, and I think it’s a really good point, Alfred: the ability to unlock more versatility. I’ll give a present-day example, another federal one. If you talk to people, they’ll often cite the F-18. The limitations and inefficiencies of that vehicle are a result of the process by which it was tested. There’s all this extra stuff on it, and the way the rear fins are mounted, any aviator would say it’s a very inefficient vehicle. That’s an interesting example of what you get when you have the worst test process. Now close your eyes and squint: when you have the best test process, you can actually build a lot more flexibility and versatility into the end product, which will be really interesting.
Alfred Lin: That’s fascinating. Why not just take all that data, all the reasons that it became inefficient, feed that into an AI model and say, “Let’s strip out all the things you don’t need from that F-18?”
Cameron McCord: Yeah. I don’t actually know this to be true, but I’ve talked to some pretty emboldened people who I think are trying to do that type of work by example, to showcase what’s possible. And that fits in line with some of the efforts we’re working on. As much as we’ve talked about the status quo tools, there are people pushing the frontier there right now, both at the primes and elsewhere.
Alfred Lin: You both graduated from MIT. Why would a person graduating from MIT—why should they join Nominal?
Jason Hoch: Cameron talked about 20 years of SaaS-ification, and one of the things I’m really passionate about right now is that for our customers’ use cases, even the running of software has to respect the laws of physics. Physics gets a vote, not just in whether this hardware system worked. If you have a scale of data that’s too expensive to ship to AWS, and that data is necessary to determine whether your physical systems work, you have to operate with a set of software and computing principles that a lot of people have moved away from. But if we’re really ambitious, and it sounds like this room is, about where physical AI is going in the next 10 or 20 years, a lot of people are going to spend a lot of time thinking about these problems. And at Nominal, I think we’re on the leading edge of where software engineers are going to disproportionately be spending their time in the next decade.
Alfred Lin: Are you guys ever going to build hardware yourself?
Jason Hoch: I think yes. I’ll just give my take. Cameron’s smiling already, and we shouldn’t play all of our cards, but the supply chain of hardware data is really what we spend a lot of time thinking about. The source of the data is a sensor, and from there it goes all the way to crunching it and delivering reports to people who can actually apply their human judgment: is it safe to launch this satellite?
Now how do you get better and better at managing that supply chain? Probably by touching every part of it. I always say we have to earn the right to capture data. We have to make our users’ lives better. We can’t just say, “Hey, you have to use this tool because it gets the data cataloged in the right way.” We say, “Hey, you should use this tool because it will actually shave an hour off your day. Oh, by the way, it also catalogs your data in a way that’s organizationally beneficial.” And when I think about those workflows and pull the thread all the way, asking how you reduce the number of steps in this person’s labor, it eventually gets to hardware.
Cameron McCord: I was smiling just because Jason said yes. I don’t want to play all our cards, but it’s something I think is going to happen sooner rather than later.
Sonya Huang: Our partner Sean would be beaming right now. He constantly reminds us that hardware is the only moat. And not only do you guys sell to hardware companies, it sounds like there might be some interesting things up your sleeve.
Cameron McCord: We have a lot of very unique insights there. And yeah, we’re further along there than we might be letting on.
Sonya Huang: Wonderful. Well, I think it’s an incredibly exciting time for hardware, for the physical world, for physical AI. And it’s inspiring to see you all build a company around it and, you know, build the GitHub equivalent that’s going to just radically transform the professionalism, the reliability, the speed of all the engineers who are now inspired and galvanized to go off to space. And so congratulations to you all on what you’ve done and excited to see what you continue to build.
Cameron McCord: Thanks so much. We say all systems Nominal.
Alfred Lin: All systems Nominal.
Cameron McCord: All systems Nominal. Thank you.
Sonya Huang: Thank you.