Research Statement
Research Vision
Deep learning has become transformative across safety-critical domains—from autonomous vehicles to medical diagnosis systems to financial decision-making. Yet the rapid deployment of neural networks often outpaces our ability to formally understand and verify their behavior. This fundamental mismatch between deployment velocity and verification capability represents one of the most pressing challenges in trustworthy AI.
My research addresses this gap by developing practical formal verification methods for neural networks. The vision is clear: enable practitioners to build and deploy verifiable AI systems with confidence. This requires bridging the gap between theoretical verification guarantees and practical scalability, making formal methods accessible and efficient for real-world networks.
Current Research Focus
My research centers on three interconnected areas:
1. Efficient Convex Hull Approximation for Activation Functions
Neural network verification fundamentally challenges us to reason about non-linear transformations. Rather than treating activation functions as black boxes, I’ve developed techniques to efficiently construct tight convex polytopes that over-approximate their behavior. This work (WraLU and WraAct) demonstrates that mathematical elegance and practical efficiency are not mutually exclusive—by leveraging the inherent geometry of activation functions, we can achieve bounds that are both precise and computationally efficient.
The core insight: leverage structure to reduce approximation error. Instead of generic relaxations, we exploit the specific geometry of functions like ReLU, Sigmoid, and Tanh to construct minimal constraint sets. This approach has proven effective on modern architectures, including ResNets with tens of thousands of neurons.
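To make the geometric idea concrete, here is a minimal numpy sketch of the classic single-neuron "triangle" relaxation, which is the exact convex hull of the ReLU graph over an input interval. This is the textbook baseline that multi-neuron methods like WraLU and WraAct tighten, not the WraLU algorithm itself, and the function names are mine:

```python
import numpy as np

def relu_triangle(l, u):
    # Halfspaces a*x + b*y <= c describing the convex hull of
    # {(x, max(x, 0)) : l <= x <= u}, assuming l < 0 < u.
    assert l < 0 < u
    s = u / (u - l)                # slope of the upper chord
    return np.array([
        [0.0, -1.0, 0.0],          # -y <= 0         (y >= 0)
        [1.0, -1.0, 0.0],          # x - y <= 0      (y >= x)
        [-s,   1.0, -s * l],       # y <= s * (x - l)  upper chord
    ])

def inside(H, x, y):
    # Check whether (x, y) satisfies every halfspace constraint.
    a, b, c = H[:, 0], H[:, 1], H[:, 2]
    return bool(np.all(a * x + b * y <= c + 1e-9))

H = relu_triangle(-1.0, 2.0)
print(inside(H, 1.0, 1.0))   # a point on the ReLU graph: True
print(inside(H, 0.0, 1.5))   # a point above the upper chord: False
```

Three constraints suffice here because the set is genuinely a triangle; the multi-neuron setting is where constructing a small yet tight constraint set becomes the hard part.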
2. Scalable Verification for Real-World Networks
Current verification methods struggle with scale. Complete verification guarantees are computationally expensive; incomplete methods lose precision. Rather than accepting this tradeoff, I’m exploring techniques that maintain both soundness and scalability:
- Bound propagation with adaptive refinement
- GPU-accelerated verification using automatic differentiation
- Compositional verification for modular network analysis
- Hybrid approaches combining deterministic and probabilistic certification
The goal is making formal verification practical not just for academic benchmarks, but for the diverse architectures and scales encountered in industry.
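The simplest member of the bound-propagation family above is interval bound propagation, which pushes elementwise input bounds through each layer. A self-contained numpy sketch (a generic illustration of the technique, not one of my tools):

```python
import numpy as np

def ibp_linear(W, b, l, u):
    # Propagate elementwise bounds l <= x <= u through y = W @ x + b
    # using the center/radius form of interval arithmetic.
    c, r = (l + u) / 2, (u - l) / 2
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def ibp_relu(l, u):
    # ReLU is monotone, so bounds pass through directly.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# A toy two-layer network with a small input box around the origin.
W1, b1 = np.array([[1.0, -1.0], [2.0, 1.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)

l, u = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
l, u = ibp_linear(W1, b1, l, u)
l, u = ibp_relu(l, u)
l, u = ibp_linear(W2, b2, l, u)
print(l, u)   # certified output range: [0.0, 0.5]
```

The appeal is that every step is a dense matrix operation, which is exactly what makes GPU acceleration and integration with auto-differentiation frameworks natural; the cost is that the intervals ignore correlations between neurons, which is the looseness that tighter relaxations and refinement aim to recover.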
3. Formal Methods for Trustworthy AI
Beyond pure verification, my work explores the broader landscape of trustworthy machine learning:
- Certified adversarial robustness through randomized smoothing and formal training procedures
- Formal specifications for neural network properties—translating domain requirements into mathematical guarantees
- Real-world applications in autonomous systems, medical AI, and cyber-physical systems
- Formal training methods that incorporate verification objectives into the learning process itself
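The randomized-smoothing certificate mentioned above has a strikingly compact form: if the smoothed classifier's top class has probability at least p under Gaussian noise of scale sigma, the prediction is provably constant within an L2 radius of sigma times the Gaussian quantile of p. A stdlib-only sketch of that standard certificate (the simplified single-probability form; function name is mine):

```python
from statistics import NormalDist

def certified_radius(p_lower, sigma):
    # Standard smoothing certificate: if the smoothed classifier's top
    # class has probability >= p_lower > 1/2 under N(0, sigma^2 I)
    # noise, the prediction is constant within L2 radius
    # sigma * Phi^{-1}(p_lower).
    if p_lower <= 0.5:
        return 0.0               # no certificate possible
    return sigma * NormalDist().inv_cdf(p_lower)

print(round(certified_radius(0.99, 0.5), 3))   # ~1.163
```

In practice p_lower comes from a high-confidence statistical estimate over noise samples, so the guarantee is probabilistic rather than deterministic, which is exactly the tradeoff that hybrid certification schemes negotiate.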
The conviction underlying this research: verification should guide training, not just validate results. By integrating formal methods into the training loop, we can build networks that are verifiable by design.
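One generic way to fold verification into the training loop is to penalize any input region the verifier cannot certify. The sketch below turns interval output bounds into a worst-case margin and a hinge-style penalty; this illustrates the idea of a verification-aware objective, not a specific published procedure, and the function names are mine:

```python
import numpy as np

def worst_case_margin(l, u, y):
    # Lower bound on (logit_y - max_{j != y} logit_j) given elementwise
    # logit bounds l <= logits <= u; a nonnegative value certifies
    # that class y is predicted everywhere in the input region.
    mask = np.ones_like(u, dtype=bool)
    mask[y] = False
    return l[y] - u[mask].max()

def robust_hinge_loss(l, u, y):
    # Penalize uncertified margins, so that minimizing the loss pushes
    # the network toward regions the verifier can certify.
    return max(0.0, -worst_case_margin(l, u, y))

l = np.array([2.0, -1.0, 0.5])
u = np.array([3.0,  0.0, 1.5])
print(robust_hinge_loss(l, u, 0))   # margin 2.0 - 1.5 = 0.5 -> loss 0.0
print(robust_hinge_loss(l, u, 2))   # margin 0.5 - 3.0 = -2.5 -> loss 2.5
```

Because the bounds are differentiable functions of the weights under bound propagation, this penalty can be mixed with an ordinary task loss and trained end to end, which is what "verifiable by design" means operationally.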
Key Challenges in the Field
The Curse of Approximation
Every verification method involves approximation. As networks grow deeper, approximation errors compound. Finding ways to maintain tightness despite depth remains an open challenge. My work on convex hull approximation directly addresses this, but the known limits of bound propagation suggest that fundamentally new techniques will be needed as networks scale further.
Modern Architecture Diversity
Verification was born in the era of feedforward CNNs. Today’s landscape includes Transformers, Graph Neural Networks, Vision Transformers, and hybrid architectures that verification tools struggle to handle. Attention mechanisms, layer normalization, and complex information flow patterns present novel challenges that demand architecture-specific solutions.
The Scalability-Completeness Tradeoff
Complete verification provides certainty but doesn’t scale. Incomplete methods scale but sacrifice completeness. The field needs hybrid approaches that intelligently allocate computational resources: fast approximate methods for initial screening, complete methods for critical samples, and probabilistic certification for large-scale networks.
Bridging Theory and Practice
Academic benchmarks don’t reflect real-world constraints. Production networks operate under latency budgets, memory constraints, and continuous update cycles. Verification must become a runtime concern, not a post-hoc validation step. This requires fundamentally rethinking how we approach certification—not as a single pre-deployment check, but as an ongoing guarantee maintained throughout the system’s lifecycle.
Practical Adoption
Even mature verification techniques remain underutilized in practice. Building practical tools, creating accessible documentation, and developing standards for verification reporting are crucial for widespread adoption. This is why I maintain educational content alongside research—the best verification method is worthless if practitioners don’t understand how to use it.
Research Outcomes and Contributions
My research has produced:
- WraLU and WraAct: Fast, precise convex hull approximation techniques that are orders of magnitude more efficient than existing methods while also improving precision
- 13-Post Verification Curriculum: Comprehensive technical content making neural network verification accessible to practitioners
- Verified Software Tools: Open-source implementations enabling others to build verification-aware applications (wraact, propdag, torchvnnlib)
- Publications in Top Venues: Papers in POPL, OOPSLA, CCS, and S&P demonstrating impact across verification, security, and programming languages
But beyond metrics, the goal is building a community of verification-aware practitioners who can responsibly deploy AI in safety-critical systems.
Looking Forward
The next frontier of verification research involves:
- Learning-based verification: Neural networks assisting verification, either predicting verification outcomes or learning to refine bounds
- Symbolic reasoning at scale: Combining symbolic methods with machine learning insights
- Real-time verification: Continuous certification during deployment with dynamic adaptation
- Fairness and equity verification: Extending formal methods beyond robustness to other critical properties
- Verification-aware training: Making verification part of the optimization objective from the start
The ultimate vision is a world where formal verification is as standard in ML development as unit testing is in software engineering. We’re not there yet—but the technical foundations are solid, and the practical applications are becoming increasingly clear.