2026 Blog Posts
January 2026 | Technical Deep-Dive
Building a unified framework for tight convex hull approximation of neural network activation functions. Covers the journey from ReLU (POPL’24) to S-shaped functions (OOPSLA’25), the DLP framework, WithOneY optimization (1.31x-3.00x speedups), and validation across 14 activation types.
Topics: Neural Network Verification, Polytope Approximation, Convex Hull, Activation Functions, Formal Methods, Sound Approximation, Python
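As an illustrative sketch of the kind of sound approximation this post covers: for ReLU on an input interval crossing zero, the exact convex hull of the graph is the classic "triangle" relaxation, bounded below by y ≥ 0 and y ≥ x and above by the chord from (l, 0) to (u, u). A minimal sketch (the function name is mine, not from the post):

```python
def relu_hull_chord(l, u):
    """Upper face of the ReLU convex hull on [l, u] with l < 0 < u.

    Returns (slope, intercept) of the chord y <= slope*x + intercept
    joining (l, 0) and (u, u); the lower faces are y >= 0 and y >= x.
    """
    assert l < 0 < u
    slope = u / (u - l)
    return slope, -slope * l

# The chord passes through both endpoints of the hull's upper face.
slope, intercept = relu_hull_chord(-1.0, 1.0)
assert abs(slope * (-1.0) + intercept) < 1e-12      # touches (l, 0)
assert abs(slope * 1.0 + intercept - 1.0) < 1e-12   # touches (u, u)
```

The post's framework generalizes this single-neuron picture to multi-neuron hulls and to S-shaped activations, where the upper and lower faces are no longer single lines.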
January 2026 | Technical Deep-Dive
How I built a high-performance VNN-LIB to PyTorch/NumPy converter achieving 10-100x faster loading through two-tier processing, binary format conversion, and dual backend design. Tested on all VNN-COMP 2024 benchmarks with 100% conversion success rate.
Topics: VNN-LIB parsing, two-tier architecture, regex optimization, AST fallback, backend abstraction, verification tooling, binary formats
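The two-tier idea can be sketched in a few lines: a regex fast path handles the flat bound constraints that dominate VNN-LIB files, and anything it cannot match falls back to a generic s-expression parser. This is a minimal sketch of the pattern, not the converter's actual code:

```python
import re

# Fast path: flat bound assertions like "(assert (<= X_0 1.0))".
_BOUND = re.compile(r"\(assert \((<=|>=) (\S+) (\S+)\)\)")

def parse_line_fast(line):
    """Return (op, var, bound) for simple constraints, else None."""
    m = _BOUND.fullmatch(line.strip())
    if not m:
        return None
    op, lhs, rhs = m.groups()
    return (op, lhs, float(rhs))

def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse_sexpr(tokens):
    """Slow path: recursive s-expression parse for arbitrary nesting."""
    tok = tokens.pop(0)
    if tok == "(":
        out = []
        while tokens[0] != ")":
            out.append(parse_sexpr(tokens))
        tokens.pop(0)  # drop closing ")"
        return out
    return tok

def parse_line(line):
    fast = parse_line_fast(line)
    return fast if fast is not None else parse_sexpr(tokenize(line))
```

Because input/output box bounds make up the bulk of typical specs, the fast path covers most lines and the recursive parser only pays its cost on nested disjunctions.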
January 2026 | Technical Deep-Dive
How I built a compiler-based ONNX→PyTorch converter that generates clean, vmap-compatible code through a 6-stage pipeline. Covers the challenges of vectorization compatibility and numerical precision across hardware, plus lessons learned from validating VNN-COMP 2024 models.
Topics: Compiler design, ONNX-to-PyTorch conversion, code generation, vmap compatibility, vectorization, numerical precision, verification tooling
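What vmap compatibility means in practice: the generated forward must be a pure, per-example function with no data-dependent Python control flow or in-place mutation, so `torch.func.vmap` can add the batch dimension from outside. A hypothetical generated forward (this toy MLP is mine, not output of the converter) under that discipline:

```python
import torch
from torch.func import vmap

def generated_forward(x, w1, b1, w2, b2):
    """Hypothetical generated code: pure, per-example, functional ops only."""
    h = torch.relu(x @ w1 + b1)  # no in-place ops, no batch-size assumptions
    return h @ w2 + b2

w1, b1 = torch.randn(4, 8), torch.zeros(8)
w2, b2 = torch.randn(8, 2), torch.zeros(2)
batch = torch.randn(16, 4)

# vmap maps over dim 0 of the input only; parameters are broadcast (None).
out = vmap(generated_forward, in_dims=(0, None, None, None, None))(
    batch, w1, b1, w2, b2
)
assert out.shape == (16, 2)
```

Code that reads `x.size(0)` or reshapes around an explicit batch dimension bakes batching into the function itself, which is exactly what a vmap-compatible code generator has to avoid.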
January 2026 | Technical Deep-Dive
How I built a shape inference engine that tracks actual dimension values through ONNX graphs, enabling static resolution of shapes ONNX marks as dynamic. Tested on 136 VNN-COMP 2024 models with 100% success rate.
Topics: Shape inference, dynamic shapes, ONNX operators, verification tooling, shape tensor tracking, static analysis
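The core trick, sketched minimally here (the op subset and function signature are assumptions for illustration): alongside each tensor's shape, track the concrete *values* of shape tensors, so a `Reshape` whose target ONNX marks dynamic can still be resolved statically.

```python
def infer_shapes(nodes, shapes, values):
    """Propagate concrete shapes through a topologically sorted node list.

    shapes: name -> tuple of ints; values: name -> known shape-tensor values.
    """
    for op, inputs, output in nodes:
        if op == "Shape":
            # Output is a 1-D tensor; record its element values, not just "(3,)".
            values[output] = list(shapes[inputs[0]])
            shapes[output] = (len(values[output]),)
        elif op == "Reshape":
            target = list(values[inputs[1]])  # resolvable because we track values
            numel = 1
            for d in shapes[inputs[0]]:
                numel *= d
            if -1 in target:  # ONNX Reshape: -1 means "infer this dimension"
                known = 1
                for d in target:
                    if d != -1:
                        known *= d
                target[target.index(-1)] = numel // known
            shapes[output] = tuple(target)
    return shapes

# x has shape (2, 3, 4); the reshape target [2, -1] resolves to (2, 12).
out = infer_shapes(
    [("Shape", ["x"], "s"), ("Reshape", ["x", "t"], "y")],
    {"x": (2, 3, 4)},
    {"t": [2, -1]},
)
assert out["y"] == (2, 12)
```

A plain shape-only pass would stop at "s is a 1-D tensor of length 3"; tracking its values is what turns the nominally dynamic reshape into a static one.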
January 2026 | Technical Deep-Dive
Why I built a pure Python ONNX optimizer specifically for verification workflows, the technical challenges of operator fusion and numerical correctness, and the design decisions behind a functional pipeline architecture. The result achieves 100% optimization success and full ONNX Runtime compatibility.
Topics: ONNX optimization, graph transformations, operator fusion, verification tooling, numerical correctness, functional pipeline
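A representative fusion of this kind is folding a BatchNorm into the preceding Gemm/Conv weights, so the fused graph computes the same function with one operator. A minimal sketch of the arithmetic (function name mine), which is also where the numerical-correctness concern lives:

```python
import numpy as np

def fuse_gemm_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold y = gamma * ((w @ x + b) - mean) / sqrt(var + eps) + beta
    into a single Gemm with adjusted weights and bias."""
    scale = gamma / np.sqrt(var + eps)
    return w * scale[:, None], (b - mean) * scale + beta

# Check the fused op matches the unfused Gemm -> BatchNorm pair.
rng = np.random.default_rng(0)
w, b = rng.normal(size=(3, 5)), rng.normal(size=3)
gamma, beta = rng.normal(size=3), rng.normal(size=3)
mean, var = rng.normal(size=3), rng.uniform(0.5, 2.0, size=3)
x = rng.normal(size=5)

unfused = (w @ x + b - mean) * (gamma / np.sqrt(var + 1e-5)) + beta
wf, bf = fuse_gemm_batchnorm(w, b, gamma, beta, mean, var)
assert np.allclose(wf @ x + bf, unfused)
```

The equivalence is exact in real arithmetic but only approximate in floating point, which is why a verification-oriented optimizer has to validate fusions numerically rather than assume them.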
January 2026 | Technical Deep-Dive
How I built a reusable infrastructure framework for neural network verification through template-based design, DAG-native graph traversal, and smart cache management. A pure Python framework with zero dependencies that separates infrastructure from algorithm implementation.
Topics: Template method pattern, DAG traversal, topological sorting, bound propagation, verification infrastructure, abstract base classes, reference counting
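The template-method split described above can be sketched with the standard library alone (class and method names here are mine, for illustration): the base class owns the topological DAG traversal, and a subclass supplies only the per-node transfer function.

```python
from abc import ABC, abstractmethod
from graphlib import TopologicalSorter  # stdlib topological sort (3.9+)

class Propagator(ABC):
    """Template method: infrastructure fixes the traversal; subclasses
    implement only the algorithm-specific per-node computation."""

    def run(self, graph, inputs):
        # graph maps each node to the set of its predecessors.
        results = dict(inputs)
        for node in TopologicalSorter(graph).static_order():
            if node not in results:  # input nodes are already seeded
                preds = [results[p] for p in graph[node]]
                results[node] = self.transfer(node, preds)
        return results

    @abstractmethod
    def transfer(self, node, pred_results):
        """Compute this node's result from its predecessors' results."""

class SumPropagator(Propagator):
    """Toy instantiation: each node's value is the sum of its predecessors."""
    def transfer(self, node, pred_results):
        return sum(pred_results)
```

Swapping in a different `transfer` (interval bounds, symbolic bounds, reference counts for cache eviction) reuses the same traversal untouched, which is the point of separating infrastructure from algorithm.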