Phase 2: Methods & Tools Guide 5

SMT and MILP Solvers for Verification

Complete verification using SMT and MILP solvers, including encoding theory, QCQP/SDP formulations, and the exponential complexity of ReLU activation patterns.

When you need absolute certainty---definitive answers, no “unknown” results---you need complete verification. SMT (Satisfiability Modulo Theories) and MILP (Mixed-Integer Linear Programming) solvers provide exactly this: they always return either “verified” or a counterexample. These are the workhorses of complete neural network verification, encoding the verification problem as logical constraints or optimization problems that mature solvers can handle.

This guide explores how SMT and MILP solvers work for neural network verification, when to use them, and why they provide the strongest guarantees despite computational costs.

Why Complete Verification Matters

Incomplete methods (bound propagation, abstract interpretation) are fast but sometimes return “unknown.” For safety-critical applications---aviation, medical devices, autonomous vehicles---“unknown” isn’t acceptable. You need definitive answers: either the property holds, or here’s a counterexample proving it doesn’t.

Complete verification guarantees exactly this: given enough time and memory, SMT and MILP solvers will find the answer. The tradeoff is computational cost---these methods have exponential worst-case complexity. But when absolute certainty matters more than speed, complete verification is the right tool.

Complete Verification Guarantee

A complete verification method always returns one of two answers:

  • VERIFIED: The property provably holds for all inputs in the specified region
  • FALSIFIED: A counterexample exists (and is provided)

Unlike incomplete methods, complete verifiers never return “unknown.” Given sufficient computational resources, they always reach a definitive answer.

SMT-Based Verification

SMT solvers determine satisfiability of logical formulas involving theories beyond pure propositional logic. For neural networks, the relevant theory is linear real arithmetic combined with Boolean logic for modeling ReLU activations.

Encoding Neural Networks as SMT Formulas

A neural network can be encoded as a conjunction of constraints:

Linear layers: For weight matrix $W$, bias $b$, and layer computation $z = Wx + b$:

$$z_i = \sum_j W_{ij} x_j + b_i \quad \forall i$$

This is a linear equality constraint, directly expressible in SMT.

ReLU activations: For $y = \text{ReLU}(z)$, we need to capture the piecewise behavior:

$$y = \begin{cases} z & \text{if } z \geq 0 \\ 0 & \text{if } z < 0 \end{cases}$$

This is encoded as a disjunction:

$$(z \geq 0 \land y = z) \lor (z < 0 \land y = 0)$$

Input constraints: For an $\ell_\infty$ perturbation ball of radius $\epsilon$ around the input $x_0$:

$$x_0 - \epsilon \leq x \leq x_0 + \epsilon$$

Output property: For robustness, require the true class $y_0$ to score highest:

$$f(x)_{y_0} > f(x)_c \quad \forall c \neq y_0$$

Complete encoding: Combine all constraints:

$$\bigwedge_{\text{layers}} \text{(linear constraints)} \land \bigwedge_{\text{ReLUs}} \text{(ReLU disjunctions)} \land \text{(input constraints)} \land \neg\text{(output property)}$$

The SMT solver searches for a satisfying assignment. If one exists, it’s a counterexample. If none exists (unsatisfiable), the property is verified.

SMT Encoding Strategy: Verification reduces to checking: Is there an input in the allowed region where the property fails?

This is a satisfiability problem: find $x$ satisfying the input constraints and violating the property. If no such $x$ exists (unsatisfiable), the property holds.
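To make the encoding concrete, here is a minimal sketch in Python using the Z3 API (the `z3-solver` package) for a toy 2-2-2 ReLU network. The weights, input point, and radius are invented for illustration; a real verifier would generate these constraints from the trained model.

```python
# pip install z3-solver -- toy SMT encoding of a 2-2-2 ReLU network (illustrative weights)
from z3 import Reals, And, Or, Solver, sat

x1, x2 = Reals("x1 x2")   # network inputs
z1, z2 = Reals("z1 z2")   # hidden pre-activations
y1, y2 = Reals("y1 y2")   # hidden post-activations (ReLU outputs)
o1, o2 = Reals("o1 o2")   # logits; class 0 (o1) is assumed to be the true class

s = Solver()

# Linear layer 1 (illustrative weights and biases)
s.add(z1 == 1.0 * x1 - 2.0 * x2 + 0.5)
s.add(z2 == -1.0 * x1 + 1.0 * x2)

# ReLU as a disjunction per neuron: (z >= 0 and y = z) or (z < 0 and y = 0)
s.add(Or(And(z1 >= 0, y1 == z1), And(z1 < 0, y1 == 0)))
s.add(Or(And(z2 >= 0, y2 == z2), And(z2 < 0, y2 == 0)))

# Linear layer 2 (illustrative weights)
s.add(o1 == 2.0 * y1 + 1.0 * y2)
s.add(o2 == 1.0 * y1 + 3.0 * y2)

# Input region: l_inf ball of radius eps around the point (0.3, -0.1)
eps = 0.05
s.add(And(x1 >= 0.3 - eps, x1 <= 0.3 + eps, x2 >= -0.1 - eps, x2 <= -0.1 + eps))

# Negated output property: some input makes the rival class score at least as high
s.add(o2 >= o1)

if s.check() == sat:
    print("FALSIFIED, counterexample:", s.model())
else:
    print("VERIFIED: class 0 wins everywhere in the input region")
```

A `sat` result yields a concrete counterexample via `s.model()`; `unsat` means the negated property is infeasible, so the robustness property holds on the whole ball.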

Encoding Theory: Why Complete Verification is Exponential

Understanding why complete verification has exponential complexity requires analyzing the formal encoding structure. The fundamental issue is that ReLU networks partition input space into exponentially many linear regions.

Formal SAT/LP Encoding Definition:

For a ReLU network with $n$ neurons, the complete encoding requires representing all possible activation patterns---assignments of active/inactive states to each ReLU.

Definition (Activation Pattern): For a network with ReLU neurons $r_1, \ldots, r_n$, an activation pattern $\sigma \in \{0,1\}^n$ assigns:

$$\sigma_i = \begin{cases} 1 & \text{if ReLU } r_i \text{ is active} \\ 0 & \text{if ReLU } r_i \text{ is inactive} \end{cases}$$

Each pattern $\sigma$ defines a region of input space on which the network computes a single linear (affine) function.

Theorem (Exponential Number of Activation Patterns): For a ReLU network with $n$ neurons, there are at most $2^n$ distinct activation patterns.

Proof:

Each of the $n$ ReLU neurons can be in one of 2 states (active or inactive). By the multiplication principle, there are $2 \times 2 \times \cdots \times 2 = 2^n$ possible combinations. $\square$

Why $2^n$ CNF Formulas:

SAT/LP complete encoding must enumerate all activation patterns to find the global optimum (or prove unsatisfiability).

Encoding structure:

For each activation pattern $\sigma \in \{0,1\}^n$:

$$\Phi_\sigma = \bigwedge_{i=1}^n \left[ \sigma_i = 1 \Rightarrow (y_i = x_i \land x_i \geq 0) \right] \land \left[ \sigma_i = 0 \Rightarrow (y_i = 0 \land x_i < 0) \right]$$

The complete formula is the disjunction over all patterns:

$$\Phi_{\text{complete}} = \bigvee_{\sigma \in \{0,1\}^n} \Phi_\sigma$$

Converting to CNF (Conjunctive Normal Form) for SAT solving yields $O(2^n)$ clauses in the worst case.

Consequence: SAT/SMT solvers must explore (in the worst case) all $2^n$ activation patterns to verify a property or find a counterexample. This is the source of exponential complexity.
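A few lines of Python make the counting argument concrete: explicit enumeration of activation patterns is hopeless beyond a handful of neurons, which is exactly what the $2^n$ bound says.

```python
from itertools import product

def activation_patterns(n):
    """All 2**n assignments of active (1) / inactive (0) states to n ReLUs."""
    return list(product([0, 1], repeat=n))

print(len(activation_patterns(4)))        # 16 patterns for 4 neurons
for n in (10, 20, 30, 50):
    print(f"{n} neurons -> {2 ** n:,} patterns")
# 20 neurons already gives ~1e6 patterns, 50 gives ~1e15: solvers cannot
# enumerate patterns explicitly and must prune the search instead.
```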

Connection to Operation Decomposition:

ReLU neurons are unary operations that introduce non-linearity (see operation decomposition). Each ReLU creates a binary choice (active/inactive), leading to $2^n$ regions. The binary operations (affine layers) are exact and don't contribute to exponential complexity---only the unary ReLU operations do.

MILP Big-M Encoding Theory:

MILP avoids explicit enumeration of the $2^n$ patterns by using binary integer variables and big-M constraints.

Encoding: For ReLU neuron $i$ with pre-activation $x_i$ and post-activation $y_i$, introduce a binary variable $\delta_i \in \{0,1\}$:

$$\begin{aligned} y_i &\geq 0 \\ y_i &\geq x_i \\ y_i &\leq x_i - l_i(1 - \delta_i) \\ y_i &\leq u_i \delta_i \end{aligned}$$

where $[l_i, u_i]$ are bounds on $x_i$ (from incomplete methods).

When $\delta_i = 1$ (active):

$$\begin{aligned} y_i &\geq x_i \\ y_i &\leq x_i - l_i \cdot 0 = x_i \\ y_i &\leq u_i \end{aligned}$$

Since $x_i \in [l_i, u_i]$ and $y_i \geq 0$, this forces $y_i = x_i$ (active ReLU).

When $\delta_i = 0$ (inactive):

$$\begin{aligned} y_i &\geq 0 \\ y_i &\geq x_i \\ y_i &\leq x_i - l_i = x_i + |l_i| \\ y_i &\leq 0 \end{aligned}$$

This forces $y_i = 0$ (inactive ReLU).

Theorem (MILP Encoding Correctness): The big-M encoding above exactly captures the ReLU function: $y_i = \max(0, x_i)$ for all possible values of $x_i \in [l_i, u_i]$.

Proof:

Case analysis on xix_i:

  1. If $x_i \geq 0$: ReLU should be active ($y_i = x_i$). Setting $\delta_i = 1$ satisfies all constraints with $y_i = x_i$.
  2. If $x_i < 0$: ReLU should be inactive ($y_i = 0$). Setting $\delta_i = 0$ forces $y_i = 0$.

The encoding is complete (covers all cases) and sound (forces correct ReLU behavior). $\square$
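The case analysis can also be checked mechanically. The short script below (plain Python, with assumed bounds $l = -2$ and $u = 3$) computes the interval of $y$ values that each choice of $\delta$ permits and confirms that every feasible point equals $\max(0, x)$.

```python
def feasible_y(x, delta, l, u):
    """Interval of y allowed by the big-M constraints for a fixed x and delta."""
    lo = max(0.0, x)                              # y >= 0 and y >= x
    hi = min(x - l * (1 - delta), u * delta)      # y <= x - l(1 - delta) and y <= u*delta
    return (lo, hi) if lo <= hi else None         # None means this delta is infeasible

l, u = -2.0, 3.0                                  # assumed pre-activation bounds
for x in (-1.5, -0.2, 0.0, 0.7, 2.5):
    options = {d: feasible_y(x, d, l, u) for d in (0, 1)}
    print(f"x={x:+.2f}  delta options: {options}  relu(x) = {max(0.0, x)}")
# Away from x = 0 exactly one delta is feasible, and its only feasible y is max(0, x).
```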

Why MILP Avoids Explicit Enumeration:

MILP solvers use branch-and-bound to implicitly explore the $2^n$ space of $\delta$ assignments. Instead of enumerating all patterns upfront, the solver:

  1. Solves the LP relaxation (continuous $\delta_i \in [0,1]$) → polynomial time
  2. Branches on a fractional $\delta_i$ → creates two subproblems ($\delta_i = 0$ and $\delta_i = 1$)
  3. Bounds each subproblem using the LP relaxation → prunes impossible branches
  4. Implicit enumeration → only explores promising branches, not all $2^n$

Worst-case complexity: Still $O(2^n)$ (must explore all branches if the LP relaxation is weak).

Average-case: Much better---commercial solvers (Gurobi, CPLEX) use sophisticated heuristics to prune branches early.
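The loop below is a minimal, didactic branch-and-bound over binary variables, using SciPy's `linprog` for the LP relaxation. Real MILP solvers add presolve, cutting planes, and much better branching heuristics; this sketch only illustrates the relax-branch-prune structure described above.

```python
import numpy as np
from scipy.optimize import linprog

def branch_and_bound(c, A_ub, b_ub, bounds, int_idx, best=np.inf):
    """Minimize c @ x subject to A_ub @ x <= b_ub, with variables in int_idx binary."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    if not res.success or res.fun >= best:          # infeasible, or pruned by the LP bound
        return best, None
    frac = [i for i in int_idx if abs(res.x[i] - round(res.x[i])) > 1e-6]
    if not frac:                                    # all binaries integral: feasible leaf
        return res.fun, res.x
    i = frac[0]                                     # branch on the first fractional binary
    best_x = None
    for val in (0.0, 1.0):
        child_bounds = list(bounds)
        child_bounds[i] = (val, val)                # fix delta_i = 0 or delta_i = 1
        cand, x = branch_and_bound(c, A_ub, b_ub, child_bounds, int_idx, best)
        if cand < best:
            best, best_x = cand, x
    return best, best_x

# Toy usage: maximize x0 + x1 (minimize the negation) with x0 + x1 <= 1.5, both binary
val, x = branch_and_bound(np.array([-1.0, -1.0]), np.array([[1.0, 1.0]]),
                          np.array([1.5]), [(0, 1), (0, 1)], int_idx=[0, 1])
print(val, x)   # -1.0 and an integral solution such as [1, 0]
```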

Soundness and Completeness Proofs:

Theorem (SMT/MILP Soundness): If an SMT/MILP verifier returns “VERIFIED”, the property truly holds for all inputs in the specified region.

Proof Sketch:

  1. Encoding is sound: Each constraint in the encoding (linear layers, ReLU, input region, property) is a sound over-approximation of the true network behavior (for MILP) or exact logical equivalence (for SMT).
  2. Solver is sound: SMT/MILP solvers are mathematically proven to return correct results (satisfiable/unsatisfiable for SMT, feasible/infeasible for MILP).
  3. Composition: Sound encoding + sound solver → sound overall verification.

Theorem (SMT/MILP Completeness): If a property truly holds for all inputs, an SMT/MILP verifier will (given sufficient time and memory) return “VERIFIED”.

Proof Sketch:

  1. Encoding is complete: The encoding captures all possible behaviors of the network (all $2^n$ activation patterns are representable).
  2. Solver is complete: SMT/MILP solvers are complete decision procedures---they always terminate with a definitive answer (for decidable theories like linear arithmetic).
  3. Composition: Complete encoding + complete solver → complete overall verification.

Consequence: SMT/MILP methods are sound and complete. They never give false positives (sound) and never give “unknown” (complete). The cost is exponential worst-case complexity.

SMT Solver Mechanics

Modern SMT solvers like Z3 use sophisticated algorithms:

DPLL(T): Combines Boolean satisfiability (SAT) solving with theory-specific reasoning. The solver:

  1. Abstracts theory constraints to Boolean variables
  2. Searches for Boolean satisfying assignment
  3. Checks if assignment is consistent with theory
  4. Learns conflicts when inconsistent, backtracks

Theory solvers: For linear real arithmetic, use simplex-based solvers. For Boolean structure (ReLU disjunctions), use Boolean reasoning.

Conflict-driven learning: When a branch leads to contradiction, learn a clause preventing similar contradictions. This prunes the search space effectively.

Strengths: SMT solvers are general-purpose, mature, and highly optimized. They handle arbitrary logical combinations of constraints.

Limitations: Exponential worst-case complexity. As network size grows, the number of ReLU disjunctions grows exponentially, making the problem intractable.

MILP-Based Verification

Mixed-Integer Linear Programming formulates verification as optimization with both continuous and integer variables.

Encoding Neural Networks as MILP

Linear layers: Same as SMT---linear equalities are native to MILP:

$$z = Wx + b$$

ReLU activations: Use binary indicator variables $a \in \{0, 1\}$ and big-M constraints:

$$\begin{aligned} y &\geq z \\ y &\geq 0 \\ y &\leq z + M(1 - a) \\ y &\leq Ma \end{aligned}$$

When $a = 1$: $y \geq z$ and $y \leq z$ together force $y = z$ (ReLU active)

When $a = 0$: $y \leq 0$ and $y \geq 0$ together force $y = 0$, while $y \leq z + M$ stays slack (ReLU inactive)

The big-M constant $M$ must be large enough to not constrain the active case but small enough for numerical stability.

Objective: Minimize the margin to find adversarial examples:

$$\min \quad f(x)_{y_0} - \max_{c \neq y_0} f(x)_c$$

If the minimum is negative, a counterexample exists. If non-negative, the property holds.

Tighter encoding: Use bounds from incomplete methods to tighten big-M values, reducing the feasible region and speeding up solving.
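To make the full MILP pipeline concrete, here is a toy sketch using the open-source PuLP modeling library (its bundled CBC solver standing in for Gurobi/CPLEX). The network weights, input box, and pre-activation bounds are invented for illustration; a positive optimal margin means the property is verified, a negative one yields a counterexample.

```python
# pip install pulp -- toy big-M MILP encoding of a 2-2-2 ReLU network (illustrative values)
import pulp

W1 = [[1.0, -2.0], [-1.0, 1.0]]; b1 = [0.5, 0.0]   # hidden layer (assumed weights)
W2 = [[2.0, 1.0], [1.0, 3.0]];   b2 = [0.0, 0.0]   # output layer (assumed weights)
x0, eps = [0.3, -0.1], 0.05                         # input point and l_inf radius
L, U = [-5.0, -5.0], [5.0, 5.0]                     # assumed pre-activation bounds (big-M)

prob = pulp.LpProblem("relu_robustness", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", x0[i] - eps, x0[i] + eps) for i in range(2)]
z = [pulp.LpVariable(f"z{i}", L[i], U[i]) for i in range(2)]
y = [pulp.LpVariable(f"y{i}", 0) for i in range(2)]            # y >= 0
a = [pulp.LpVariable(f"a{i}", cat="Binary") for i in range(2)] # ReLU indicator
o = [pulp.LpVariable(f"o{i}") for i in range(2)]               # logits (free)

for i in range(2):
    prob += z[i] == pulp.lpSum(W1[i][j] * x[j] for j in range(2)) + b1[i]
    prob += y[i] >= z[i]                       # big-M encoding of y_i = max(0, z_i)
    prob += y[i] <= z[i] - L[i] * (1 - a[i])
    prob += y[i] <= U[i] * a[i]
for i in range(2):
    prob += o[i] == pulp.lpSum(W2[i][j] * y[j] for j in range(2)) + b2[i]

prob += o[0] - o[1]                            # objective: margin of assumed true class 0
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("minimum margin:", pulp.value(prob.objective))   # > 0 here, so the property holds
```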

MILP Solver Mechanics

MILP solvers like Gurobi use branch-and-bound:

LP relaxation: Relax integer constraints ($a \in [0, 1]$ instead of $a \in \{0, 1\}$), solve the linear program efficiently.

Branching: Select a fractional integer variable, create two subproblems (fixing $a = 0$ and $a = 1$).

Bounding: Use LP relaxation to bound optimal value. Prune branches where bound is worse than best-known solution.

Cutting planes: Add linear inequalities (cuts) that tighten the LP relaxation without removing integer solutions.

Strengths: MILP is well-studied with commercial-grade solvers. Supports optimization objectives (find minimum adversarial perturbation).

Limitations: Integer variables (one per ReLU) create exponential branching. Scales poorly to large networks.

| Aspect | SMT | MILP |
| --- | --- | --- |
| Encoding | Logical formulas (disjunctions) | Linear inequalities + binary variables |
| Solver Type | DPLL(T) with theory solvers | Branch-and-bound with LP relaxation |
| Objective Support | Satisfiability only | Optimization (minimize margin) |
| Maturity | General-purpose SMT solvers (Z3) | Commercial MILP solvers (Gurobi) |
| Scalability | Hundreds of neurons | Hundreds of neurons |
| Main Challenge | ReLU disjunctions | Integer variables per ReLU |

QCQP and SDP Encodings: Non-Linear Relaxations

Beyond linear encodings (SMT, MILP), we can use non-linear relaxations that provide tighter bounds at the cost of increased computational complexity. Quadratically Constrained Quadratic Programming (QCQP) and Semidefinite Programming (SDP) offer such relaxations.

QCQP Encoding for ReLU

Quadratic Constraint Encoding:

Instead of linear big-M constraints, QCQP encodes ReLU using quadratic equality constraints:

For $y = \max(0, x)$ with bounds $x \in [l, u]$ where $l < 0 < u$:

$$\begin{aligned} y &\geq 0 \\ y &\geq x \\ y(y - x) &= 0 \end{aligned}$$

Interpretation:

The quadratic constraint $y(y - x) = 0$ forces either:

  • $y = 0$ (inactive ReLU), OR
  • $y = x$ (active ReLU)

This is exact for unstable ReLUs without requiring integer variables!
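A quick numeric sanity check of that claim (plain Python, random sample points): points on the ReLU graph satisfy all three constraints, points off the graph violate the quadratic one.

```python
import random

def satisfies_qcqp(x, y, tol=1e-9):
    """Exact QCQP encoding of ReLU: y >= 0, y >= x, y(y - x) = 0."""
    return y >= -tol and y >= x - tol and abs(y * (y - x)) <= tol

random.seed(0)
for _ in range(5):
    x = random.uniform(-1.0, 1.0)
    on_curve = satisfies_qcqp(x, max(0.0, x))          # a point on the ReLU graph
    off_curve = satisfies_qcqp(x, max(0.0, x) + 0.3)   # a point strictly above it
    print(f"x={x:+.2f}  on-curve ok: {on_curve}  off-curve ok: {off_curve}")
```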

Problem: The quadratic equality constraint is non-convex. QCQP with non-convex constraints is NP-hard (no easier than MILP).

Solution: Relax the equality to an inequality:

$$y(y - x) \leq 0$$

This creates a convex QCQP. However, it’s no longer exact---it’s an over-approximation.

Theorem (QCQP Convex Relaxation Tightness): The convex QCQP relaxation is tighter than the linear triangle relaxation but looser than the exact ReLU constraint.

Proof Sketch:

  1. Tighter than triangle: The quadratic constraint $y(y-x) \leq 0$ captures curvature information that linear constraints cannot.
  2. Looser than exact: The convex relaxation admits points with $y(y-x) < 0$, which do not lie on the true ReLU curve.

SDP Encoding: Semidefinite Programming

Semidefinite Relaxation:

SDP provides an even tighter relaxation by lifting the problem to a higher-dimensional space and enforcing positive semidefiniteness.

Matrix Lifting:

Introduce a matrix variable $X \succeq 0$ (positive semidefinite) where:

$$X = \begin{bmatrix} 1 & z^T \\ z & Z \end{bmatrix}$$

Here $z$ is the vector of neuron activations, and $Z$ represents second-order moments (outer products).

ReLU Constraints in SDP:

For $y = \max(0, x)$, the SDP encoding adds linear matrix inequalities (LMIs):

$$\begin{aligned} y &\geq 0 \\ y &\geq x \\ y &\leq \frac{u}{u - l}(x - l) \quad \text{(triangle upper bound)} \\ \text{plus } & \text{second-order cone constraints on } X \end{aligned}$$

The key advantage: SDP can encode correlations between multiple neurons through the matrix $Z$, capturing multi-neuron dependencies.
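As a rough sketch of what the lifted formulation looks like in code, the CVXPY snippet below builds a single-neuron moment-matrix relaxation under assumed bounds $[l, u]$; the constraint $X_{yy} = X_{xy}$ is the lifted form of $y(y - x) = 0$. This is illustrative only (it assumes the SCS solver is installed), not a production SDP verifier.

```python
# pip install cvxpy scs -- single-neuron SDP relaxation sketch (illustrative bounds)
import cvxpy as cp

l, u = -1.0, 1.0                          # assumed pre-activation bounds with l < 0 < u
# Moment matrix M = [[1, x, y], [x, Xxx, Xxy], [y, Xxy, Xyy]], required to be PSD
M = cp.Variable((3, 3), PSD=True)
x, y = M[0, 1], M[0, 2]

constraints = [
    M[0, 0] == 1,
    x >= l, x <= u,
    y >= 0, y >= x,
    y <= u / (u - l) * (x - l),           # triangle upper bound
    M[2, 2] == M[1, 2],                   # lifted y*y = x*y, i.e. y(y - x) = 0
]
prob = cp.Problem(cp.Maximize(y), constraints)   # e.g. bound the neuron output from above
prob.solve(solver=cp.SCS)
print("SDP upper bound on the ReLU output over [l, u]:", y.value)
```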

Theorem (SDP Tightness Hierarchy):

For single-neuron ReLU relaxation:

$$\text{Triangle (Linear)} \sqsubseteq \text{QCQP} \sqsubseteq \text{SDP} \sqsubseteq \text{Exact}$$

where $\sqsubseteq$ denotes “looser than or equal to.”

Proof:

SDP relaxation includes all linear constraints (triangle relaxation) plus additional semidefinite constraints. By including more constraints, the feasible region is smaller (tighter). $\square$

Multi-Neuron SDP:

SDP’s real power is multi-neuron analysis. For $k$ neurons, the matrix $X$ is $(k+1) \times (k+1)$, capturing correlations:

$$X_{ij} \approx z_i \cdot z_j \quad \text{(second-order moment)}$$

This allows SDP to reason about joint distributions of neuron activations, something linear and MILP methods cannot do efficiently.

Complexity Analysis:

| Method | Variables | Constraints | Complexity (per iteration) | Complete? |
| --- | --- | --- | --- | --- |
| SMT/SAT | $O(n)$ Boolean + continuous | $O(n)$ disjunctions | Exponential (worst-case) | Yes |
| MILP | $O(n)$ binary + continuous | $O(n)$ linear | $O(2^n)$ (branch-and-bound) | Yes |
| QCQP | $O(n)$ continuous | $O(n)$ quadratic | $O(n^{3.5})$ (interior-point) | No (relaxation) |
| SDP | $O(n^2)$ matrix entries | $O(n^2)$ LMI | $O(n^6)$ (interior-point) | No (relaxation) |

Why SDP is Dramatically Slower than LP:

SDP solvers use interior-point methods operating on the matrix space $\mathbb{R}^{n \times n}$:

  • Each iteration: $O(n^6)$ (matrix operations + eigenvalue decomposition)
  • Number of iterations: $O(\log(1/\epsilon))$ (for an $\epsilon$-optimal solution)

Compare to LP: $O(n^{2.5})$ per iteration.

Practical Implication:

  • LP: Networks with thousands of neurons (tractable)
  • SDP: Networks with tens of neurons (already challenging)

SDP is orders of magnitude slower than LP, limiting it to very small networks or local analysis.

When to Use QCQP/SDP:

Use QCQP/SDP when:

  1. Need tighter bounds than LP: Triangle relaxation too loose for verification
  2. Multi-neuron correlations matter: Network has strong neuron dependencies
  3. Network is tiny: fewer than 100 neurons (SDP tractable)
  4. Research setting: Exploring tightness frontiers

Don’t use when:

  1. Network is large: >100 neurons for SDP, >1000 for QCQP
  2. Need complete verification: QCQP/SDP are incomplete (relaxations)
  3. Speed matters: LP-based methods are 10-1000x faster
  4. Standard benchmarks: CROWN/DeepPoly already sufficient

Tightness vs Completeness Tradeoff:

$$\begin{aligned} &\text{Incomplete Methods} \quad &&\text{Complete Methods} \\ &\text{(Tightness hierarchy)} \quad &&\text{(Exactness)} \\ &\text{Interval} < \text{LP} < \text{QCQP} < \text{SDP} \quad &&\text{SMT/MILP (Exact)} \\ &\uparrow \text{Tighter, slower} \quad &&\uparrow \text{Complete, exponential} \end{aligned}$$

Connection to Theoretical Barriers:

Single-neuron linear relaxation (LP triangle) is optimal within the linear constraint class (see theoretical barriers). QCQP/SDP overcome this barrier by using non-linear constraints, achieving tightness beyond the convex barrier at the cost of increased complexity.

Practical Considerations

When to Use Complete Methods

Use SMT/MILP when:

  • Network is small (hundreds to low thousands of neurons)
  • Absolute certainty is required (safety-critical applications)
  • Finding tight counterexamples is valuable (adversarial training)
  • Computational budget allows hours or days per property
  • Regulatory compliance demands complete verification

Don’t use when:

  • Network has millions of parameters (incomplete methods required)
  • Rapid iteration is needed (complete methods too slow)
  • “Unknown” is acceptable (incomplete methods suffice)
  • Only need robustness estimates, not proofs

Improving Performance

Preprocessing with incomplete methods: Use bound propagation to compute bounds on activations. These tighten big-M constants in MILP and prune branches in SMT.
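One forward pass of interval bound propagation is enough to produce usable $[l_i, u_i]$ values for the big-M constraints. A minimal numpy sketch for a single affine layer, reusing the toy weights and input box from the earlier examples:

```python
import numpy as np

def interval_bounds(W, b, l_in, u_in):
    """Bounds on Wx + b when each input coordinate lies in [l_in, u_in]."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    lower = W_pos @ l_in + W_neg @ u_in + b
    upper = W_pos @ u_in + W_neg @ l_in + b
    return lower, upper

W1 = np.array([[1.0, -2.0], [-1.0, 1.0]]); b1 = np.array([0.5, 0.0])  # toy layer
x0, eps = np.array([0.3, -0.1]), 0.05                                  # input box
l1, u1 = interval_bounds(W1, b1, x0 - eps, x0 + eps)
print("pre-activation bounds:", list(zip(l1, u1)))   # use as l_i, u_i in the big-M encoding
```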

Incremental solving: When verifying multiple similar properties, reuse solver state. Many SMT solvers support incremental mode where learned clauses carry over.
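With Z3, incremental solving is exposed through `push`/`pop` scopes: encode the network once, then check each negated property inside its own scope so the shared constraints (and learned clauses) are reused. A minimal sketch, with stand-in output constraints in place of a real network encoding:

```python
from z3 import Solver, Real, sat

s = Solver()
o1, o2, o3 = Real("o1"), Real("o2"), Real("o3")
# Stand-in for the (expensive) network encoding, added once and reused:
s.add(o1 == 1.0, o2 == 0.5, o3 == 0.2)

for rival in (o2, o3):          # one robustness query per rival class
    s.push()                    # open a scope for this property only
    s.add(rival >= o1)          # negated property: rival scores at least as high
    print("counterexample found" if s.check() == sat else "verified")
    s.pop()                     # drop the property, keep the network constraints
```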

Parallelization: Verify multiple properties in parallel. Each property is independent, allowing trivial parallelization.

Timeout and best-effort: Set timeouts. If complete verification times out, fall back to incomplete methods or partial results.

Hybrid approaches: Use complete methods only on small uncertain regions identified by incomplete methods. Incomplete methods handle bulk verification, complete methods resolve edge cases.

Limitations and Challenges

Scalability barrier: The fundamental NP-completeness means exponential worst-case complexity is unavoidable. No algorithmic breakthrough can make complete verification polynomial-time (unless P=NP).

Network size: Current complete verifiers handle networks with thousands of neurons. Modern production networks have millions to billions of parameters---orders of magnitude beyond current capabilities.

Numerical issues: MILP requires careful tuning of big-M values. Too large causes numerical instability; too small gives incorrect results. Bounds from incomplete methods help but don’t eliminate the issue.

Memory consumption: Branch-and-bound explores exponentially many branches. Each branch requires memory. Large networks exhaust memory before finding answers.

Scalability Reality Check: Complete verification works well for small networks (e.g., ACAS Xu with ~300 neurons). For larger networks, expect:

  • Networks with thousands of neurons: feasible with good hardware and patience
  • Networks with tens of thousands: very challenging, often timeout
  • Networks with millions: currently intractable for complete methods

Comparison with Incomplete Methods

Incomplete methods (bound propagation, abstract interpretation) trade completeness for scalability:

Advantages of incomplete methods:

  • Polynomial time complexity (scale to large networks)
  • Fast verification (seconds to minutes)
  • GPU acceleration available

Advantages of complete methods:

  • Definitive answers (no “unknown”)
  • Find tight counterexamples
  • Required for regulatory compliance in some domains

Hybrid is often best: Use incomplete methods for initial screening, complete methods for critical cases. This balances speed and certainty.

Current State and Tools

Available tools:

  • Marabou: SMT-based verifier with specialized ReLU handling (Reluplex algorithm)
  • Gurobi/CPLEX: Commercial MILP solvers, require encoding network as MILP
  • Z3: General-purpose SMT solver, requires manual encoding

Research directions:

  • Tighter encodings that reduce solver search space
  • Specialized solvers exploiting network structure (convolution, residual connections)
  • Better preprocessing with incomplete methods to guide complete solvers
  • Parallelization and distributed solving for large instances

Final Thoughts

SMT and MILP solvers provide the gold standard for neural network verification: complete, definitive answers. When you need absolute certainty---for safety-critical systems, regulatory compliance, or finding counterexamples---complete verification is essential.

The cost is computational. Complete methods scale to small networks but struggle with large ones. Understanding this tradeoff helps you choose appropriately: use complete methods where certainty matters and network size permits, use incomplete methods where scale matters more than completeness.

As verification research advances, the boundary between tractable and intractable shifts. Better encodings, faster solvers, and hybrid approaches push complete verification to larger networks. But the fundamental NP-completeness barrier remains---complete verification will always involve exponential worst-case complexity.

Further Reading

This guide provides comprehensive coverage of SMT and MILP approaches to complete neural network verification. For readers interested in diving deeper, we recommend the following resources organized by topic:

SMT-Based Verification:

The foundational work on SMT-based verification established both the approach and its limitations. Reluplex extended the simplex algorithm to handle ReLU constraints directly, providing the first practical SMT-based verifier for neural networks. The same work proved NP-completeness of ReLU network verification, establishing fundamental computational barriers. Z3 provides a general-purpose SMT solver capable of encoding neural network verification problems.

MILP-Based Verification:

MILP formulations encode neural networks as mixed-integer programs, leveraging mature commercial solvers. The key challenge is encoding ReLU activations with binary variables and big-M constraints while maintaining numerical stability and tight bounds. Preprocessing with incomplete methods tightens big-M values, significantly improving MILP solver performance.

Complexity and Limitations:

The NP-completeness result proves that complete verification has worst-case exponential time complexity. This isn’t an engineering limitation---it’s a fundamental mathematical barrier. Understanding this complexity helps set realistic expectations about what complete methods can achieve.

Incomplete Methods for Comparison:

Complete methods are often contrasted with incomplete approaches. Bound propagation and abstract interpretation trade completeness for polynomial-time complexity, scaling to large networks by accepting “unknown” results. These methods are often used to preprocess and guide complete solvers.

Hybrid Approaches:

State-of-the-art verification combines complete and incomplete methods. alpha-beta-CROWN uses incomplete bound propagation for initial verification, applying branch-and-bound (a complete method) selectively where needed. This hybrid strategy balances the strengths of both approaches. GPU acceleration enables practical implementation of these hybrid strategies.

Specialized Complete Verifiers:

Marabou extends Reluplex with additional features and optimizations, representing the current state-of-the-art in SMT-based neural network verification. It handles networks with thousands of neurons through careful encoding and pruning strategies.

Related Topics:

For understanding the theoretical foundations that SMT and MILP methods are built upon, see verification problem. For the guarantees that complete methods provide, see soundness and completeness. For specialized handling of ReLU constraints, see Marabou and Reluplex. For hybrid approaches combining complete and incomplete methods, see branch and bound.

Next Guide: Continue to Marabou and Reluplex to learn about the Reluplex algorithm, a specialized SMT solver optimized for ReLU neural networks.