Photography: Katie Thompson

In January 2023, Yaron Singer woke up in a suburban Las Vegas Airbnb with the feeling that things were finally on the upswing for his company, Robust Intelligence. The startup was gaining traction as one of the world’s top AI security firms, having developed the industry’s first AI firewall—a model engine that exposed AI safety failures and inaccuracies, protecting AI systems both from malicious external threats and from their own critical mistakes.

Singer believed he was just crossing t’s and dotting i’s before landing one of his biggest clients yet: a leading consulting and systems integration company looking to ink a seven-figure, multi-year deal. To celebrate, he was treating his team to a performance of Cirque du Soleil’s ‘O’ that evening as part of their company offsite. Unfortunately, those death-defying, aquatic acrobatics wouldn’t be the day’s only brush with mortality.

Emerging from his room, Singer got a call from the managing director of AI for the consulting company—to tie up loose ends, he assumed. Instead, the manager shared the news that their CEO had just returned from the World Economic Forum in Davos where one new technology had dominated the conversation: Large Language Models (LLMs). AI was heading in a new direction—from predictive to generative—and the CEO was worried that Robust’s offerings were no longer relevant. 

Singer quickly realized that this wasn’t about losing one client; this was about potentially losing every client. And yet, in spite of the existential threat this posed to his business, underneath it all Singer felt stirrings of excitement. He had been anticipating an AI watershed moment like this ever since his graduate studies at UC Berkeley. If LLMs were about to change the world, Robust Intelligence had the potential to play a meaningful role as a safeguard for the technology.

Now in order to make that happen, Singer and his team needed to adapt their offerings. And they needed to do it quickly. 


Singer is the youngest in an Israeli family of academics where everyone—Mom, Dad and sister—has earned a PhD in computer science. There was something of an unspoken expectation that Singer would follow in his family’s well-trodden path, so much so that Singer remembers asking his mother at age six if he too had to get a PhD when he got older. (She laughed and said no. He remembers not being convinced.)

His ambivalence about following in his family’s scholarly footsteps remained until he had an epiphany about academia’s appeal as an undergraduate. “I learned about all these beautiful constructs and systems all the way from math, to theory, to computer science,” Singer says. “To imagine that I could make an everlasting contribution to one of those fields—there’s nothing like it.” 

Singer enrolled in a PhD program in computer science at UC Berkeley, which is where he first grew curious about startup culture. “I was doing this work on algorithms but what I wanted to do was apply it to the real world,” Singer says. “As an academic, nobody would talk to me, and I needed data. So the only way I could think of was to start a company, implement my algorithm and collect a lot of data.”

His company, BIDWAVE, used machine learning and algorithms to determine how users should be compensated for posting ads on social media. But once the system was built, he kept getting wrong answers, revealing a very inconvenient truth about AI: “Everywhere that we have algorithmic decision-making that is taking AI input is extremely vulnerable,” Singer says. The algorithms, he concluded, were in trouble. 

Singer uses the example of Google Maps: You put in your destination and it finds the fastest route. It does this in two steps: the first relies on machine learning predictions about how long it will take to get to each location, and then once it has that information, it runs an algorithm. “The problem is that the times from one street to another—they’re not the real times, they’re predictions by some AI model, but the algorithm treats that as the truth,” Singer says. If the machine learning models are not exact—“and they’re never exact”—they undermine the quality of the algorithms making decisions. 
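Singer’s Google Maps example can be sketched in code. The toy sketch below (all numbers and the tiny “map” are hypothetical, not Google’s actual system) runs Dijkstra’s algorithm over travel times predicted by a model, then scores the chosen route against the “true” times: a few minutes of prediction error on the highway is enough to make the planner pick a route that is worse in reality.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: returns (total_time, route) using whatever
    edge weights it is handed -- it has no notion of prediction error."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        time, node, route = heapq.heappop(queue)
        if node == goal:
            return time, route
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (time + weight, nxt, route + [nxt]))
    return float("inf"), []

# Hypothetical travel times in minutes. "predicted" is what an ML model
# estimates; "true_times" is what traffic actually does. The highway legs
# are each mispredicted by only a few minutes.
predicted  = {"A": {"highway": 10, "side": 12}, "highway": {"B": 10}, "side": {"B": 9}}
true_times = {"A": {"highway": 14, "side": 12}, "highway": {"B": 13}, "side": {"B": 9}}

# The planner treats the predictions as truth and picks the highway (20 min
# predicted), but driving that route actually takes 27 min -- worse than the
# side-street route's true 21 min.
_, route = shortest_path(predicted, "A", "B")
actual_cost = sum(true_times[a][b] for a, b in zip(route, route[1:]))
```

The point of the sketch is that `shortest_path` is perfectly correct as an algorithm; the vulnerability lives entirely in the gap between the predicted weights it consumes and reality.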

As he dug in further, Singer saw that there wasn’t just a hole in the market for a product that could keep businesses safe from these issues—there was no market at all. 

Meromit Singer, Yaron’s wife, a computational biologist who also completed her PhD in computer science at UC Berkeley, puts her husband’s insight plainly. “He was really the first to realize how dangerous it can be to have all these companies running their machine learning and assuming that the parameters are correct—but they’re not completely correct,” she says. “There’s some error there. And then what guarantees do you have and what vulnerabilities do you have?”

For some, stumbling upon this realization would have been enough to drop out of grad school and immediately pivot into startup mode, but Singer still felt the pull of the ivory tower—this time to the East Coast to teach at Harvard. Ironically, thousands of miles from Silicon Valley, Harvard is where he wound up meeting his future cofounder, Kojin Oshiba, an undergraduate seated in the front row of his graduate seminar.

“As an academic, nobody would talk to me, and I needed data. So the only way I could think of was to start a company, implement my algorithm and collect a lot of data.” 

Yaron Singer

While Singer was brought up in a household dedicated to computer science, Oshiba was raised in Tokyo by two serious bibliophiles: His father was a librarian and his mother worked in a bookstore. (In fact, Oshiba’s first name comes from the novel The Wayfarer—titled Kōjin in Japanese—after his mother completed her thesis on the book.)

For all of the ways their childhoods were different, Oshiba and Singer shared one very specific experience in common: Both are considered Third Culture Kids, a term coined by the American sociologist Ruth Useem which defines the experience of expatriate children who spend their formative years outside of their passport country. Singer had spent part of his childhood living in Colorado where his father was on a teaching sabbatical, while Oshiba spent three years in Canada where his father worked in the East Asian studies department at the University of Montreal. 

The move to Montreal got off to a rocky start. “I went there and I spoke zero English, so I struggled quite a bit for the first year,” Oshiba says. “But after three years I came back, and from there onwards, I felt like I didn’t exactly fit into the Japanese society.” Having had a taste of the wider outside world, Oshiba wanted more—he decided to attend university in America at Harvard. 

As the novelist Graham Greene famously wrote, “There is always one moment in childhood when the door opens and lets the future in.” For Oshiba, this was when he received special permission to take Singer’s graduate seminar and found himself in the front row, staring at the man who would eventually become his cofounder of a company valued at almost a half-billion dollars.

Singer first took note of Oshiba when Oshiba led an impressive workshop on TensorFlow, the open-source software library for machine learning and artificial intelligence. Shortly thereafter, Singer and Oshiba began to co-write and publish papers. For his part, Oshiba always felt that Singer respected him as a true collaborator in spite of their gap in experience. “I felt like I was working on a problem with someone who’s obviously very smart and who’s accomplished a lot in his life, but it wasn’t like ‘Kojin, go and implement this,’” Oshiba says. “It was very much incubating ideas together and ‘what do you think? And why don’t we do this?’”

On his end, Singer was coming around to the notion of a life outside of academia. He had been granted tenure early, and with this goal achieved, realized how eager he was to see the practical applications of his research. The time felt right to build a company. 

At the end of the semester, Singer invited Oshiba into his office to ask him if he wanted to co-found a startup. Oshiba was equally excited by the prospect of collaboration, wherever that might take them. “Before asking any details,” Oshiba remembers, “I said, yes.” Partnership confirmed, Singer and Oshiba were just left with the question of what exactly they wanted to co-found. 


The pair kept coming back to the inconvenient truth about algorithms Singer had stumbled upon years ago at Berkeley—that AIs were fallible, resulting in vulnerabilities and inaccuracies in any algorithms that rely upon them. This led them to a simple, novel concern about AI security: “AI and machine learning, they’re glorified algorithms—like algorithms on steroids,” Oshiba says. “So the activity of identifying limitations in algorithms is identifying limitations in AI.” 

They set about creating a series of axioms they believed gave their company a purpose: first, that AI would continue to grow at an exponential pace; second, that with everything they knew about AI, there was no responsible way to grow the field without developing a way to secure it; and third, that as a corollary of the first two, AI security would eventually evolve into a multi-billion-dollar market.

It was nearing the end of 2019 when Singer created a pitch-cum-pilot program for JP Morgan Chase that revealed the weaknesses in the bank’s use of predictive AI. But Chase turned them down after an engineer on their team concluded there was currently no need for a product like theirs. 

“AI and machine learning, they’re glorified algorithms—like algorithms on steroids. So the activity of identifying limitations in algorithms is identifying limitations in AI.”

Kojin Oshiba

Around the same time, Singer went to San Francisco to meet with investors, including Bill Coughran, a partner at Sequoia Capital who had been a VP of Engineering at Google. If anyone was going to get the importance of their idea, Singer hoped it would be Bill. “Coming from his role at Google, I felt like Bill knew more about AI than I did,” says Singer. “He didn’t need to be convinced about our assertions that AI will be eating the world, and that it’s a vulnerable technology.”

Singer was right. Coughran not only became Robust’s first investor, but he also introduced Singer to Chase’s head of AI and Quantum Security, who recognized that even though there might not be an immediate need for an AI firewall, there would be far sooner than most were expecting. In February 2020, he told Singer that he’d have the budget to kick off a program in six months. One month later, Covid-19 shut down the world.

Now, in addition to getting his company off the ground, Singer was also learning how to run it fully remotely, trying to land his first customers in a market upended by the pandemic. Finally, in September of 2020, Robust Intelligence landed the first sale of their AI firewall product to Expedia after a cold outreach on LinkedIn.

Expedia’s VP of data science was sold on the promise of how Robust’s product could be used for quality assurance, perhaps even more than its value as a security tool. At the time, he had a lot of PhDs in his organization who were terrific researchers but lacked coding expertise, and he believed Robust’s technology could help identify the issues in the models they were building. The only problem—Robust still didn’t actually have a product. They’d used Figma to create a mock-up of the product that looked real. Expedia was willing to partner with Robust despite their product’s nascency because there was simply nothing else like it in the market. “I was like, ‘Shit, now we have to make this work,’” Oshiba remembers. 

Expedia had given Robust sample data that allowed them to test their technology, but not enough to get it working at scale. This left Robust hoping for something that few other startups ever would: a lengthy contract process so that their engineers had runway to perfect the firewall.

At the time, Expedia’s AI models were predictive, or discriminative in nature. They would show a consumer the cost of a hotel or flight, and the prediction of whether or not they would purchase the room or the ticket was AI-driven. Robust’s firewall needed to analyze the accuracy of the predictions made by Expedia’s models, while also testing them for vulnerabilities. Thanks to an intensive push, the fledgling startup looked like it was on track to deliver a functional product by the time the contract was signed, but three months before their planned delivery to Expedia they hit another snag. Seven of Robust’s twelve staff members left the company, discouraged by the fact that they’d only managed to win a single customer a year and a half in.

Other founders might have thrown in the towel after this setback, but Singer knew in his core that the AI revolution was just on the horizon. At a picnic table near Robust’s San Francisco office, Singer sat down with Oshiba to decide whether to recommit to their vision for the company or return the money to investors. Both remained all-in.

So with a ‘streamlined’ team of just five employees, Robust Intelligence pushed forward and managed to ship to Expedia on time. Chase soon returned from their pandemic reticence, and then other customers followed suit, including Deloitte, Intuit, the US Government and a certain consulting and systems integration firm that was interested in inking a seven-figure deal. Singer decided it was time to take the team to Vegas to celebrate.

The fateful Vegas phone call finally confirmed the AI inflection point Singer had been waiting for. AI’s ubiquity—and a corresponding uptick in desire to keep it secure—had finally arrived. Up until that point, Robust had been securing predictive AI, like the work they did for Expedia. ChatGPT (and LLMs more broadly) didn’t predict, they generated—creating images and text based on prompts issued by a user. According to Singer, this ultimately deepened AI’s security risks in profound ways. “The leap to extracting personal medical records or tax information is short,” Singer says. “The privacy breach is unprecedented.”

This threat also created an unprecedented opportunity for Robust, assuming they could figure out how to update their offerings fast enough to keep the company afloat. “All the customers were on hold because they weren’t going to be putting any money on non-generative AI and they didn’t know what their product roadmap was going to look like,” Singer says. 

He proposed a tiger team of two engineers that Oshiba would lead, with the goal of having a working generative AI firewall prototype within six weeks. Then they would implement a constant six-week release cycle, adding more test cases and protection mechanisms (and staff) with every new iteration. Within six months, the entire company was focused on building the guardrails that could keep LLMs safe for companies to implement.

Singer met their first paying generative AI customer, a Fortune 100 company, at a Cisco conference where they had been invited to speak after word got out about their latest AI firewall. The company signed on for a pilot, which revealed major vulnerabilities in their system. It also led to a discovery: Companies that were deploying generative AI applications were often fine-tuning existing foundational models in an effort to improve accuracy and cost effectiveness, but in the process, they were exposing those models to new security risks. After Robust presented their findings, the company immediately extended their contract. 

On December 27th, 2023, The New York Times further corroborated Robust’s value proposition. They sued OpenAI and Microsoft for copyright infringement, alleging that their published work had been used to train chatbots. In 2024, Robust was invited to advise on the lawsuit, and helped to demonstrate the simplicity with which LLMs can be trained on copyrighted materials from the internet, which in turn can make that content accessible to a tech-savvy bad actor. “What our team discovered is that anyone can actually extract complete articles from models like ChatGPT,” Singer says. “Which shows that not only can someone with malicious intent get access to copyrighted material through these models, but that they can also access the data that was used to train these models, which then becomes a huge privacy concern.”

Around the same time, President Biden signed the Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, reflecting a growing awareness of what Singer had known for the better part of the last decade: In order to harness the power of AI, you first need to understand its risks and how to keep it secure.

Robust Intelligence began 2024 very differently than 2023. Instead of losing a major client and having to change their entire product and technology, they were approached with a new opportunity. 


Despite their increasing success, Singer could see that LLMs were scaling faster than his company possibly could. In order to have the impact he aspired to, he would have to partner with a larger company. So in the summer of 2024, after receiving three other offers, Singer accepted Cisco’s $400M bid to acquire Robust Intelligence.  

He views Cisco as precisely the kind of company that can give Robust the necessary support and infrastructure to secure an ever-expanding AI landscape. That infrastructure, powered by a workforce of twenty-five thousand people, creates a step change in terms of Robust’s reach and opportunity. “AI is now being put at the front of Cisco’s focus,” Singer says. “And Robust Intelligence now has the opportunity to lead the charge.”

They are already seeing the returns of this partnership, landing new clients like BMW, who came to Robust for help securing their AI-powered concept car, Dee. This is a security challenge on a completely different scale than, say, securing a travel platform trying to predict if a customer will buy a ticket. As the founding team puts it, “the initial problems we were focused on were these small errors that AI models make that were bad for business but not the end of the world. But a car crashing into a wall—or even companies who are using AI to decide whether to hire someone or not—these really impact people’s lives.” 

For the man who once dreamed of what it would feel like to make a lasting impact in his field (albeit, originally in a more theoretical realm), Singer can’t help but be compelled by Cisco’s centrality in the technology landscape. “Cisco runs the internet, and if every AI application that is running through the internet can be secured through a Robust Intelligence AI firewall, what bigger impact could we have on the world?” he says, adding, “this is the definition of the dream.” 
