My Research Journey
Building practical tools for formally verifying neural networks.
Here's the thing: we're putting neural networks everywhere. Self-driving cars. Medical diagnosis systems. Financial trading. Even systems that decide who gets a loan or bail. And honestly? Most of the time, we have no idea what these networks are actually doing.
Sure, the math works. The accuracy looks great on test sets. But can we prove these systems won't make catastrophic mistakes? Usually not. That's what got me into verification research — this gap between "works really well in practice" and "guaranteed safe."
"It's like the difference between a bridge that seems sturdy and one that an engineer has actually certified won't collapse."
We're in this weird situation where AI deployment is moving way faster than our ability to verify it. When a neural network misclassifies a stop sign as a speed limit sign, that's not a cute bug — it's potentially fatal.
My research is about building practical tools that let us formally verify neural networks. Not just "test it a lot and see what happens," but actually prove mathematical guarantees about what a network can and can't do.
Research Timeline
Entered NNV Research
First encountered neural network verification and was immediately hooked — the question of whether we can mathematically prove a neural network won't fail felt both urgent and deeply unsolved.
Completed Master's Thesis
Focused my master's thesis on neural network verification, diving deep into convex relaxation methods and discovering that existing approaches were far looser than necessary.
Started PhD Journey
Began my PhD with a clear mission: build practical tools that make formal verification of neural networks accessible and scalable for real-world systems.
WraLU Accepted
WraLU — a tool that computes minimal convex hull constraints for ReLU networks — was accepted for publication, demonstrating orders-of-magnitude tighter bounds than prior methods.
WraAct Accepted
WraAct extended our approach to general activation functions beyond ReLU, broadening what formal verification can cover. Accepted and pushing the boundaries further.
What I'm Working On
Making AI Systems You Can Actually Trust
Think of a neural network as a chain of mathematical functions, each layer feeding into the next. The tricky part is that most of these transformations are non-linear: they curve and bend in ways that make them hard to reason about mathematically.
I'm working on wrapping tight geometric boundaries around everything these functions can possibly do, and on making that safety fence as snug as possible so we don't lose precision.
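To make that concrete, here's the textbook single-neuron version of such a fence (a standard relaxation from the literature, not something specific to my tools): if the input x to y = ReLU(x) is known to lie in [l, u] with l < 0 < u, three linear constraints describe exactly the convex hull of the ReLU graph on that range.

```latex
% Triangle relaxation of y = ReLU(x) for x in [l, u], with l < 0 < u.
% These three half-planes form the exact convex hull of the ReLU graph
% on [l, u]; every reachable (x, y) pair satisfies them.
\begin{aligned}
  y &\ge 0, \\
  y &\ge x, \\
  y &\le \frac{u}{u - l}\,(x - l)
\end{aligned}
```

Tools like WraLU attack the harder multi-neuron version of this picture: constraining several ReLUs jointly yields a far tighter fence than intersecting one triangle per neuron.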
Verification That Scales
Here's the frustrating part: you can either do verification perfectly and wait three weeks, or get an answer fast but maybe miss something important. I'm trying to find ways to get both.
- Adaptive bound propagation: Start with fast approximations, then refine only where you need precision
- GPU-accelerated verification: Using auto-differentiation frameworks to make verification run on the same hardware that trains networks (see the sketch after this list)
- Compositional verification: Break big networks into pieces, verify each piece, then combine the results
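As a rough illustration of the GPU bullet, interval bound propagation reduces to a handful of batched tensor operations, so it runs wherever PyTorch runs. The layer sizes, helper names, and the toy two-layer network below are invented for the sketch:

```python
import torch

def ibp_linear(lb, ub, weight, bias):
    """Propagate interval bounds [lb, ub] through y = x @ weight.T + bias."""
    center = (ub + lb) / 2                  # midpoint of each input interval
    radius = (ub - lb) / 2                  # half-width of each input interval
    new_center = center @ weight.T + bias
    new_radius = radius @ weight.abs().T    # |W| maps input radii to output radii
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lb, ub):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return lb.clamp(min=0), ub.clamp(min=0)

# Toy two-layer network; every tensor can live on the GPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
W1, b1 = torch.randn(64, 784, device=device), torch.zeros(64, device=device)
W2, b2 = torch.randn(10, 64, device=device), torch.zeros(10, device=device)

x = torch.rand(128, 784, device=device)     # a batch of inputs
eps = 0.01                                  # perturbation radius to certify
lb, ub = x - eps, x + eps
lb, ub = ibp_linear(lb, ub, W1, b1)
lb, ub = ibp_relu(lb, ub)
lb, ub = ibp_linear(lb, ub, W2, b2)         # lb/ub now bound every output logit
```

Because the whole computation is ordinary tensor algebra, it batches, parallelizes, and differentiates like a normal forward pass, which is exactly what makes GPU-resident verification plausible.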
Building Tools People Actually Use
Verification research is worthless if nobody uses it. That's why I'm focused on the broader ecosystem of trustworthy AI:
- Certified adversarial training: Methods that bake robustness into networks during training (sketched below)
- Formal specifications: Translating domain requirements into mathematical properties
- Practical tooling: Open-source libraries like wraact, propdag, and torchvnnlib
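For the certified-training bullet, one common recipe (in the spirit of interval-bound-propagation training, sketched here rather than my exact method) turns output bounds like the lb/ub from the previous sketch into a loss: assume the true-class logit sinks to its lower bound while every rival logit rises to its upper bound, then penalize that worst case.

```python
import torch
import torch.nn.functional as F

def robust_loss(lb_logits, ub_logits, y):
    """Worst-case cross-entropy: score each class by its upper bound,
    except the true class, which is scored by its lower bound."""
    worst = ub_logits.clone()
    true_lb = lb_logits.gather(1, y.unsqueeze(1))   # lower bound of true class
    worst.scatter_(1, y.unsqueeze(1), true_lb)
    return F.cross_entropy(worst, y)

# Mixed objective: kappa (a hyperparameter) trades clean accuracy
# against certified robustness.
# loss = kappa * F.cross_entropy(logits, y) + (1 - kappa) * robust_loss(lb, ub, y)
```

If the worst-case loss is driven low enough, every input in the certified region is classified correctly, so robustness is baked in during training rather than bolted on afterwards.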
The Hard Parts
The unsolved challenges that keep this field interesting — and keep me up at night.
The Curse of Approximation
Every scalable verification method involves some approximation, and as networks get deeper the errors compound exponentially. Like playing telephone with a hundred people.
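A back-of-the-envelope way to see why, using interval bounds as the simplest case:

```latex
% A linear layer maps an input radius r_i to |W_{i+1}| r_i exactly, so after
% L layers the width of the bounds can grow like a product of layer norms:
r_{i+1} = |W_{i+1}|\, r_i
\quad\Longrightarrow\quad
\lVert r_L \rVert_\infty \;\le\; \Big(\prod_{i=1}^{L} \lVert W_i \rVert_\infty\Big)\, \lVert r_0 \rVert_\infty
% and each relaxed activation along the way adds its own slack on top,
% so looseness can blow up exponentially with depth.
```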
Modern Architecture Chaos
Verification was born in the era of simple feedforward networks. Now we've got Transformers, GNNs, Vision Transformers — each needing architecture-specific solutions.
Scalability vs Completeness
Complete verification gives 100% certainty but doesn't scale past small networks. Incomplete methods scale but might miss vulnerabilities.
Bridging Theory & Practice
Academic benchmarks don't reflect real-world constraints. Verification needs to become a runtime concern, not a post-hoc validation step.
What I've Built So Far
WraLU & WraAct
Fast, precise convex hull approximation tools that outperform existing methods by orders of magnitude. They've been used to verify real production networks — making the gap between theory and practice smaller every day.
Educational Content
Comprehensive NNV Guides that make neural network verification accessible to non-specialists.
Open-Source Tooling
Libraries such as wraact, propdag, and torchvnnlib that integrate with PyTorch.
Research Publications
Papers in top venues (POPL, OOPSLA, CCS, S&P) across verification, security, and programming languages.
Where This Is All Going
The technical foundations are solid, practical tools are emerging, and real-world demand is only growing.
Learning-Based Verification
Using neural networks to help verify other neural networks — networks that predict verification outcomes to guide search.
Symbolic Reasoning at Scale
Combining symbolic methods (SAT/SMT) with machine learning in smarter ways.
Real-Time Verification
Continuous certification during deployment with dynamic adaptation.
Beyond Robustness
Formal methods for fairness, equity, privacy, and other critical properties.
Verification-Aware Training
Making verification part of the optimization objective from day one.
My bet is that in ten years, formal verification will be as standard in ML development as unit testing is in software engineering. We're not there yet — not even close. But every tool we build brings us closer.