Announcement

Created on April 30, 2026

Updated on April 30, 2026

We’re Not Just Hosting a Conference, We’re Closing the Gap Between AI Research and AI Risk

There's a conversation that needs to happen in AI security. Not a panel discussion. Not a keynote. A real conversation: the kind where a researcher who just published a paper on adversarial attacks on autonomous agents sits across from the CISO who has to make a deployment decision about that exact class of system by the end of the quarter.

That conversation almost never happens. And that gap is costing us.

Two Worlds, One Urgent Problem

AI security has a structural problem that no tool or policy can fix on its own: the people who understand the threats most deeply and the people accountable for managing them at scale rarely occupy the same room.

Every day, AI security researchers publish groundbreaking work on attack surfaces, model vulnerabilities, and agentic system failures. Meanwhile, security leaders at enterprise companies make real-time decisions about deploying systems that touch sensitive data, trigger financial transactions, and take autonomous actions, often without a clear line of sight to what the research community has already figured out.

The result is a dangerous lag. Research insights that could harden production systems sit in conference proceedings. Operational learnings that could sharpen research questions stay locked inside company walls.

We at Virtue AI decided to do something about it. The result is Virtue AI Presents: CTRL+AI.

Why Virtue AI Is Building This Room

Most security companies sponsor conferences. Virtue AI is building one. And the distinction matters.

Virtue AI's co-founders aren't observers of the AI security research landscape. They are the landscape. Bo Li, Dawn Song, Sanmi Koyejo, and Carlos Guestrin are award-winning researchers with active faculty appointments at UIUC, UC Berkeley, and Stanford, publishing ongoing work at leading venues like NeurIPS. When they founded Virtue AI, they made a deliberate choice: the company's mandate would be security that works at real-world scale, not just in theory.

That philosophy didn't stay inside the product. It shaped who we are as a company. And now it's shaping who we’re bringing together on June 4th.

Virtue AI isn't hosting because it's good for brand awareness. We're hosting a conference because we have spent years living in both worlds, writing papers and shipping products, and we know what's missing when those worlds don't talk to each other.

What Makes This Room Different

Look at the initial speaker lineup and you'll see the thesis in action.

On one side:

  • Ravi Krishnamurthy — VP of Product for AI Foundations & Responsible AI, ServiceNow
  • Sunil Agrawal — CISO, Glean
  • Seth Spiel — Head of Product for Security AI, Splunk

These are people who are accountable for governing AI systems at scale. They've seen what happens when AI fails in production. They carry the weight of those decisions.

On the other side:

  • Dawn Song — Board Director & Co-Founder, Virtue AI; Professor, UC Berkeley
  • Bo Li — CEO & Co-Founder, Virtue AI; Abbasi Associate Professor, UIUC
  • Sanmi Koyejo — Head of AI & Co-Founder, Virtue AI; Professor, Stanford University

These are co-founders and researchers who are actively pushing the frontier of what we know about AI security, publishing findings that will shape how the field thinks about autonomous systems for years to come.

This is not a room designed for polished takes and pre-approved soundbites. It's designed for the harder, more productive work: aligning on what the real problems are and what the research says, and finding the path to true AI security, governance, and compliance at scale.

A Conference That Practices What It Preaches

There's one more thing worth noting, and it's not subtle.

CTRL+AI has a two-tier ticketing model: an Industry Pass for executives, practitioners, and professionals, and a Research Pass, a request-based, reviewed access tier for students and researchers. Because cost shouldn't determine who gets to participate in the conversations that shape this field.

That's not a standard conference decision. It's a values-based decision. It reflects the same belief that runs through everything Virtue AI does: that the academic-to-industry pipeline isn't just good for business, it's essential infrastructure for getting AI security right.

If This Room Is for You, You Already Know It

If you're building AI systems that take actions, touch sensitive data, or make decisions, or if you're responsible for securing them, there is no more important conversation happening this year.

June 4, 2026. The Presidio, San Francisco. Seats are limited, and they will fill.

Register here
