News#
New!!! 🎉🎉🎉 [Oct. 2025] I will present our work Convex Hull Approximation for Activation Functions (OOPSLA’25) in Singapore (Thu 16 Oct 2025, 16:00–16:15, Orchid West, Marina Bay Sands Convention Centre).
[Aug. 2025] Our paper Convex Hull Approximation for Activation Functions has been accepted by the ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA’25), part of Systems, Programming, Languages, and Applications: Software for Humanity (SPLASH’25). Happy!
[Jan. 2025] Our paper AI Model Modulation with Logits Redistribution has been accepted by the ACM Web Conference 2025 (WWW’25). Congrats, Zihan!
[Dec. 2024] Prof. Bai and I gave a tutorial on Robustness Verification of Neural Networks using WraLU at the Australasian Database Conference (ADC’24) on Tue 16 Dec 2024, 16:30 AEST (UTC+10), in Gold Coast, Australia. We received the Distinguished Tutorial Speaker award!
[Aug. 2024] Our paper Uncovering Gradient Inversion Risks in Practical Language Model Training has been accepted by the 31st ACM Conference on Computer and Communications Security (CCS’24). Congrats, Xinguo!
[May 2024] I will present our work ReLU Hull Approximation at the workshop Formal Methods in Australia and New Zealand, held 29–30 May 2024 at the University of Queensland, St Lucia. My talk is scheduled for 15:30–16:00 on 30 May 2024.
[Apr. 2024] I will present our work ReLU Hull Approximation at the FPBench community monthly meeting on May 2nd, 9:00–10:00 AM Pacific time.
[Apr. 2024] Our paper CORELOCKER: Neuron-level Usage Control has been accepted by the 45th IEEE Symposium on Security and Privacy (S&P’24). Congrats, Zihan! [Live Video]
[Jan. 2024] I presented our work ReLU Hull Approximation at POPL’24 in the Kelvin Room of the Institution of Engineering and Technology (IET) in London, UK. [Live Video]
[Nov. 2023] Our paper ReLU Hull Approximation has been accepted by the 51st ACM SIGPLAN Symposium on Principles of Programming Languages (POPL’24).
[Nov. 2023] I participated in the 24th International Conference on Formal Engineering Methods (ICFEM’23) in Brisbane, Australia, where I presented our work Formalizing Robustness Against Character-Level Perturbations for Neural Network Language Models and my doctoral symposium paper Verifying Neural Networks by Approximating Convex Hulls.
Educational Content#
I maintain a comprehensive blog series on Neural Network Verification, covering everything from foundational concepts to cutting-edge research techniques. The series includes 13 in-depth technical posts designed for both practitioners and researchers interested in formal verification methods, robustness testing, and trustworthy AI.
The blog is organized into four complementary phases:
Phase 1: Foundations - Core concepts and theory of neural network verification
Phase 2: Methods & Tools - Verification techniques, bound propagation, and formal specifications
Phase 3: Practice - Robustness testing, adversarial training, and certified defenses
Phase 4: Advanced Topics - Diverse architectures (Transformers, RNNs, GNNs) and real-world applications
→ Explore the Complete Neural Network Verification Blog Series
My Recent GitHub Repositories#
I’m currently working on some exciting tools that I’m thrilled to share with you:
wraact: A tool to approximate activation function hulls with convex polytopes. It supports various activation functions, such as ReLU, Sigmoid, Tanh, and GELU, and can be easily integrated into existing neural network verification frameworks! This is the regularly maintained and updated repo for the core algorithm in WraLU and WraAct. (A hedged sketch of the single-neuron idea appears after this list.)
shapeonnx: A tool to infer the shapes of an ONNX model when the official shape-inference tool fails. It’s a simple yet powerful tool that helps you understand the dimensions of your model’s inputs and outputs! (See the second sketch after this list.)
slimonnx: A tool to optimize and simplify your ONNX models by removing redundant operations and resolving version issues. It makes ONNX files cleaner, more efficient, and ready for action! (See the third sketch after this list.)
torchonnx: A tool for converting ONNX models to PyTorch models (.pth for parameters, .py for structure). It’s simple, lightweight, and designed for seamless model conversion.
torchvnnlib: A tool to convert VNN-LIB files (.vnnlib) to PyTorch tensors (.pth files) for efficient neural network verification. Take full advantage of the PyTorch ecosystem.
propdag: A bound propagation framework for neural network verification. It supports any DAG (Directed Acyclic Graph) structure, covering both forward and backward propagation patterns for verification. This tool allows researchers to focus on their algorithms without worrying about complex computation graphs. (See the final sketch after this list.)
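
To make the hull idea behind wraact concrete, here is a minimal sketch of the single-neuron case: for an unstable ReLU with pre-activation bounds l < 0 < u, the convex hull of {(x, ReLU(x)) : l ≤ x ≤ u} is a triangle described by three half-spaces. This is standard material, not wraact’s actual API; the function name and constraint format below are illustrative.

```python
import numpy as np

def relu_hull_2d(l: float, u: float):
    """Half-space constraints a @ (x, y) <= b for the convex hull of
    {(x, relu(x)) : l <= x <= u} with an unstable neuron (l < 0 < u).
    This single-neuron "triangle" is exact; wraact generalizes the idea
    to multi-neuron hulls and to S-shaped activations."""
    assert l < 0 < u, "the hull is only a triangle for unstable neurons"
    k = u / (u - l)  # slope of the upper face
    return [
        (np.array([0.0, -1.0]), 0.0),   # y >= 0
        (np.array([1.0, -1.0]), 0.0),   # y >= x
        (np.array([-k, 1.0]), -k * l),  # y <= k * (x - l)
    ]

# Example: bounds [-1, 2] give the upper face y <= (2/3)(x + 1).
for a, b in relu_hull_2d(-1.0, 2.0):
    print(a, "<=", b)
```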
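
For context on what shapeonnx works around: the official route is onnx’s built-in shape inference, which can raise or leave shapes unannotated on some models. Below is a minimal sketch using only the standard onnx API; the file name is a placeholder, and shapeonnx’s own interface may differ.

```python
import onnx
from onnx import shape_inference

model = onnx.load("model.onnx")  # placeholder path
try:
    # The official tool: annotate every intermediate tensor with a shape.
    inferred = shape_inference.infer_shapes(model, strict_mode=True)
    for vi in inferred.graph.value_info:
        dims = [d.dim_value or d.dim_param for d in vi.type.tensor_type.shape.dim]
        print(vi.name, dims)
except Exception as err:
    # This is the failure mode a manual pass like shapeonnx steps in for.
    print("official shape inference failed:", err)
```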
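
As one example of the kind of redundancy removal slimonnx performs, the sketch below strips no-op Identity nodes and rewires their consumers using only the standard onnx API. This is my illustration of a single pass, not slimonnx’s implementation, and it assumes no graph output is produced directly by an Identity node.

```python
import onnx

def strip_identity(model: onnx.ModelProto) -> onnx.ModelProto:
    """Remove Identity nodes and reconnect their consumers to the
    original producers (one simplification pass among many)."""
    graph = model.graph
    # Map each Identity node's output name back to its input name.
    alias = {n.output[0]: n.input[0] for n in graph.node if n.op_type == "Identity"}

    def resolve(name: str) -> str:
        # Follow chains of Identity nodes (a -> b -> c) to the real source.
        while name in alias:
            name = alias[name]
        return name

    kept = [n for n in graph.node if n.op_type != "Identity"]
    for node in kept:
        for i, inp in enumerate(node.input):
            node.input[i] = resolve(inp)
    del graph.node[:]
    graph.node.extend(kept)
    return model
```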
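
Finally, to illustrate the forward pattern a framework like propdag generalizes, here is a minimal interval bound propagation (IBP) sketch over a topologically sorted DAG, including an add join of the kind a residual connection creates. The node format and function are hypothetical, not propdag’s API.

```python
import torch

def ibp(nodes, bounds):
    """Propagate interval bounds through a DAG in topological order.
    `nodes` is a list of (name, op, input_names); `bounds` maps names
    already known (the inputs) to (lower, upper) tensor pairs."""
    for name, op, ins in nodes:
        if op[0] == "linear":                 # op = ("linear", W, b)
            _, W, b = op
            lo, hi = bounds[ins[0]]
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            center = mid @ W.T + b            # the midpoint goes through W
            radius = rad @ W.abs().T          # the radius goes through |W|
            bounds[name] = (center - radius, center + radius)
        elif op[0] == "relu":                 # monotone, applied elementwise
            lo, hi = bounds[ins[0]]
            bounds[name] = (lo.clamp(min=0), hi.clamp(min=0))
        elif op[0] == "add":                  # a DAG join (residual-style)
            (l1, h1), (l2, h2) = bounds[ins[0]], bounds[ins[1]]
            bounds[name] = (l1 + l2, h1 + h2)
    return bounds

# Tiny example: x -> linear -> relu, with x in the box [-1, 1]^3.
W, b = torch.randn(4, 3), torch.zeros(4)
out = ibp([("h", ("linear", W, b), ["x"]), ("y", ("relu",), ["h"])],
          {"x": (-torch.ones(3), torch.ones(3))})
print(out["y"])
```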