My Research Journey
Building practical tools for formally verifying neural networks.
Neural networks are now used in many high-impact settings: autonomous driving, medical decision support, finance, and public services. In many of these settings, however, we still rely much more on empirical testing than on formal guarantees about model behavior.
Benchmark accuracy can look strong, but that does not tell us whether a model is robust under the conditions we care about. That gap between "works well in practice" and "comes with a defensible guarantee" is what pulled me into verification research.
"It's like the difference between a bridge that seems sturdy and one that an engineer has actually certified won't collapse."
AI deployment is moving faster than our ability to verify it. In safety-critical settings, a perception error is not just an ordinary software bug; it can directly affect the reliability of the larger system.
My research is about building practical tools that let us formally verify neural networks. Not just "test it a lot and hope for the best," but establish mathematical guarantees about what a network can and cannot do under clearly defined assumptions.
Research Timeline
Entered NNV Research
First encountered neural network verification and was immediately hooked — the question of whether we can mathematically prove a neural network won't fail felt both urgent and deeply unsolved.
Completed Master's Thesis
Focused my master's thesis on neural network verification, diving deep into convex relaxation methods and discovering that existing approaches were far looser than necessary.
Started PhD Journey
Began my PhD with a clear mission: build practical tools that make formal verification of neural networks accessible and scalable for real-world systems.
WraLU Accepted
WraLU — a tool for constructing tight convex hull constraints for ReLU networks — was accepted for publication, showing stronger relaxations with lower constraint overhead than prior baselines.
WraAct Accepted
WraAct extended this line of work beyond ReLU to more general activation functions, broadening the range of networks and layers that can be handled in verification pipelines.
What I'm Working On
Making AI Systems You Can Actually Trust
Think of a neural network as a chain of mathematical functions composed one after another. The tricky part is that most of these transformations are non-linear: they curve and bend in ways that make them hard to reason about exactly.
I'm working on wrapping tight geometric boundaries (convex relaxations) around everything these functions can possibly do, and making that safety fence as tight as possible so we don't lose precision.
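To make the "safety fence" idea concrete, here is a minimal sketch of the classic single-neuron triangle relaxation for ReLU, the simplest instance of the convex-boundary construction described above. The function name and structure are illustrative, not the API of WraLU or WraAct, which build much tighter joint hulls over groups of neurons.

```python
def relu_triangle_bounds(l, u):
    """Triangle relaxation of ReLU over the pre-activation interval [l, u].

    Returns (lower, upper) as (slope, intercept) pairs of lines such that
    lower(x) <= relu(x) <= upper(x) for all x in [l, u]. Illustrative sketch
    of the standard single-neuron relaxation, not a specific tool's API.
    """
    if u <= 0:    # neuron always inactive: relu(x) == 0 on [l, u]
        return (0.0, 0.0), (0.0, 0.0)
    if l >= 0:    # neuron always active: relu(x) == x on [l, u]
        return (1.0, 0.0), (1.0, 0.0)
    # Unstable neuron: the chord from (l, 0) to (u, u) is the tightest
    # linear upper bound over the interval.
    slope = u / (u - l)
    upper = (slope, -slope * l)
    # Any line y = a*x with 0 <= a <= 1 is a sound lower bound; pick the
    # slope that minimizes the area of the triangle (a common heuristic).
    a = 1.0 if u >= -l else 0.0
    return (a, 0.0), upper

# Sanity check: the two lines really sandwich relu(x) across [l, u].
l, u = -2.0, 3.0
(lo_s, lo_b), (up_s, up_b) = relu_triangle_bounds(l, u)
for i in range(101):
    x = l + (u - l) * i / 100
    r = max(0.0, x)
    assert lo_s * x + lo_b <= r + 1e-9
    assert r <= up_s * x + up_b + 1e-9
```

The looseness of this per-neuron triangle is exactly what tighter multi-neuron convex hulls attack: treating neurons jointly rules out corner points that the individual triangles would permit.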
Verification That Scales
Here's the frustrating part: you can either do verification perfectly and wait three weeks, or get an answer fast but maybe miss something important. I'm trying to find ways to get both.
- Adaptive bound propagation: Start with fast approximations, then refine only where you need precision
- GPU-accelerated verification: Using auto-differentiation frameworks to make verification run on the same hardware that trains networks
- Compositional verification: Break big networks into pieces, verify each piece, then combine the results
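The cheap first pass that adaptive bound propagation starts from can be sketched with plain interval arithmetic. This is a hypothetical minimal sketch, not the API of any of the tools mentioned here; real implementations vectorize this on GPU and use far tighter symbolic bounds.

```python
def affine_interval(lower, upper, W, b):
    """Propagate elementwise input bounds through y = W x + b.

    Interval arithmetic: a positive weight maps the lower input bound to
    the lower output bound, a negative weight flips them. Illustrative
    sketch only; names and structure are not an existing library's API.
    """
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        lo = hi = bias
        for w, l, u in zip(row, lower, upper):
            lo += w * (l if w >= 0 else u)
            hi += w * (u if w >= 0 else l)
        out_lo.append(lo)
        out_hi.append(hi)
    return out_lo, out_hi

def relu_interval(lower, upper):
    """ReLU is monotone, so it maps bounds to bounds directly."""
    return [max(0.0, l) for l in lower], [max(0.0, u) for u in upper]

# Toy layer on the input box [-1, 1] x [-1, 1].
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
lo, hi = affine_interval([-1.0, -1.0], [1.0, 1.0], W1, b1)
lo, hi = relu_interval(lo, hi)
print(lo, hi)  # certified output box after one affine + ReLU layer
```

A pass like this is fast enough to run everywhere; the "adaptive" part is spending expensive, tighter relaxations only on the neurons whose intervals this pass leaves too loose to decide the property.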
Building Tools People Actually Use
Verification research is worthless if nobody uses it. That's why I'm focused on the broader ecosystem of trustworthy AI:
- Certified adversarial training: Methods that bake robustness into networks during training
- Formal specifications: Translating domain requirements into mathematical properties
- Practical tooling: Open-source libraries like wraact, propdag, and torchvnnlib
The Hard Parts
The unsolved challenges that keep this field interesting — and keep me up at night.
The Curse of Approximation
Every verification method involves some approximation, and as networks get deeper those approximation errors can compound, often exponentially with depth. It's like playing telephone with a hundred people.
Modern Architecture Chaos
Verification was born in the era of simple feedforward networks. Now we've got Transformers, GNNs, Vision Transformers — each needing architecture-specific solutions.
Scalability vs Completeness
Complete verification can provide definitive guarantees for a given specification, but it remains difficult to scale. Incomplete methods scale better, but they may return conservative or inconclusive results.
Bridging Theory & Practice
Academic benchmarks don't reflect real-world constraints. Verification needs to become a runtime concern, not a post-hoc validation step.
What I've Built So Far
WraLU & WraAct
Convex hull approximation tools aimed at making verification tighter and more efficient. In our evaluations, they improved approximation quality and reduced construction cost relative to prior baselines, helping verification pipelines handle larger benchmark models more practically.
Educational Content
Comprehensive guides that make neural network verification accessible to non-specialists.
Open-Source Tooling
Libraries like wraact, propdag, and torchvnnlib for PyTorch-based verification workflows and supporting infrastructure.
Research Publications
Papers in venues including POPL, OOPSLA, CCS, S&P, WWW, and AsiaCCS across verification, security, and programming languages.
Where This Is All Going
The technical foundations are solid, practical tools are emerging, and real-world demand is only growing.
Learning-Based Verification
Using neural networks to help verify other neural networks — networks that predict verification outcomes to guide search.
Symbolic Reasoning at Scale
Combining symbolic methods (SAT/SMT) with machine learning in smarter ways.
Real-Time Verification
Continuous certification during deployment with dynamic adaptation.
Beyond Robustness
Formal methods for fairness, equity, privacy, and other critical properties.
Verification-Aware Training
Making verification part of the optimization objective from day one.
My view is that formal verification should become a routine part of ML engineering for the systems that need strong assurances most. We are still early, but better methods and better tooling are moving the field in that direction.