Falsifiers and Verifiers: The Verification Toolkit#

You’ve built a neural network and need to understand its robustness. Your choices are clear on the surface: run attacks on it or verify it formally. But beneath these broad categories lies a spectrum of approaches, each with distinct strengths and purposes. The choice between falsification and verification—and how to combine them—determines the efficiency and reliability of your testing strategy.

This guide explores the practical toolkit for assessing neural network robustness: the falsifiers that hunt for bugs and the verifiers that prove guarantees. Understanding when to use each, and how they complement each other, is essential for building trustworthy systems.

Falsifiers: The Bug Hunters#

Falsifiers search for counterexamples—inputs that violate the property you claim to hold. They don’t prove anything; they find concrete failures.

What Falsifiers Do

A falsifier answers one question: “Can you find an input that breaks this property?” If the answer is yes, you get a concrete counterexample. If the answer is no after extensive search, you’ve tested thoroughly but gained no guarantee about the network’s actual robustness.

Falsifiers are fundamentally optimizers. They search the input space, guided by loss gradients or other heuristics, looking for failures.

How Falsifiers Work

Gradient-based attacks are the most common falsification approach. Given an input, compute gradients of the loss with respect to the input, then take steps to maximize loss (or change the prediction). Popular attacks include:

  • FGSM (Fast Gradient Sign Method): Single gradient step in the direction that increases loss

  • PGD (Projected Gradient Descent): Multiple gradient steps, projecting back into the allowed perturbation region after each step

  • C&W (Carlini & Wagner): Optimization-based attack that searches for the smallest perturbation causing misclassification, typically by minimizing perturbation size plus a margin-based loss

The falsification objective is:

\[\text{Find } x' \text{ such that } f(x') \neq f(x) \text{ and } \|x' - x\|_\infty \leq \epsilon\]

In words: find a perturbed input that causes misclassification, staying within the perturbation budget.
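
As a concrete illustration, here is a minimal PGD sketch in plain PyTorch. It assumes a classifier `model`, an input batch `x` with labels `y`, and pixel values in [0, 1]; all names and hyperparameters are illustrative rather than taken from any particular library.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.007, steps=40):
    """Search for counterexamples inside the L-infinity ball of radius eps around x."""
    x_adv = x.clone().detach()
    # A random start inside the perturbation ball helps escape flat gradient regions
    x_adv = (x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)).clamp(0.0, 1.0)

    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Ascend the loss, then project back into the eps-ball and the valid pixel range
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
        x_adv = x_adv.detach()

    # Any x_adv whose prediction differs from the clean prediction is a concrete counterexample
    return x_adv
```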

Beyond gradient-based attacks, search-based falsifiers use genetic algorithms, local search, or coverage-guided strategies to explore the input space. These methods don’t require gradients and can sometimes find failures gradient-based methods miss.

Strengths of Falsification

  • Speed: Attack-based methods run in seconds, not hours

  • Scalability: Work on networks of arbitrary size

  • Concrete bugs: Return actual adversarial examples, not just “unsafe”

  • Intuitive feedback: Seeing real failures helps guide improvements

  • Popular libraries: Foolbox, the Adversarial Robustness Toolbox (ART), and CleverHans make attacks easy to apply

Limitations of Falsification

  • No guarantees: Failing to find adversarial examples doesn’t mean they don’t exist

  • Attack-dependent: Different attacks find different adversarial examples; stronger attacks might succeed where weaker ones fail

  • No absence proofs: Falsification can only prove unsafe; it cannot prove safe

  • Search heuristics: Quality depends on the attack’s ability to navigate the loss landscape

When Falsification is Enough

Use falsification-only approaches when: you’re in active development (speed matters), you’re debugging and want concrete failures, you’re testing for obvious vulnerabilities, or soundness guarantees aren’t required. Combined with rapid retraining, falsification-driven development is the fastest path to improvements.

Tip

Choosing attack strength and diversity: Run multiple attacks (FGSM, PGD, C&W) with various hyperparameters. Stronger attacks (more steps, multiple random restarts) are more likely to find adversarial examples but take longer. Diversity in attack types often finds failures that any single method might miss.
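
With a library such as Foolbox, running several attacks across a range of budgets takes only a few lines. The sketch below assumes a PyTorch classifier `model` with inputs in [0, 1] and tensors `images`, `labels`; it follows the Foolbox 3 interface as best I can reconstruct it, so check the current documentation before relying on exact names or signatures.

```python
import foolbox as fb

# Wrap the trained PyTorch classifier (inputs assumed to live in [0, 1])
fmodel = fb.PyTorchModel(model.eval(), bounds=(0, 1))

attacks = [fb.attacks.FGSM(), fb.attacks.LinfPGD(steps=40)]
epsilons = [2 / 255, 4 / 255, 8 / 255]

for attack in attacks:
    # success has one boolean row per epsilon and one column per sample
    _, advs, success = attack(fmodel, images, labels, epsilons=epsilons)
    print(type(attack).__name__, success.float().mean(dim=-1))
```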

Verifiers: The Proof Engines#

Verifiers take a different approach: they systematically analyze whether a property can possibly be violated. Rather than searching, they reason about all possible inputs in a region.

What Verifiers Do

A verifier answers: “Can you prove this property holds for all inputs in this region?” If yes, you have a formal guarantee. If no, the verifier either found a counterexample or couldn’t determine the answer.

Verifiers act like mathematicians: they build mathematical models of the network and reason about its behavior exhaustively.

How Verifiers Work

Sound verification relies on over-approximation. Instead of analyzing the exact network behavior, verifiers compute an approximation that contains all possible behaviors, then prove properties of that approximation.

Common verification approaches include:

Bound Propagation (CROWN, DeepPoly, auto_LiRPA): Propagate bounds on neuron outputs through layers. For each activation, maintain upper and lower linear bounds on its output. Propagate these bounds through the network to get output bounds.

SMT/MILP and constraint-based solvers (Marabou, Neurify): Encode the network and property as logical or mixed-integer constraints, then use SMT, LP/MILP, or specialized solvers to decide satisfiability.

Abstract Interpretation (ERAN): Represent sets of possible activations with abstract domains (intervals, zonotopes, polyhedra) and propagate them through each layer’s abstract transformer.

The verification objective is:

\[\text{Prove that } f(x') = f(x) \text{ for all } x' \text{ such that } \|x' - x\|_\infty \leq \epsilon\]

In words: prove that all perturbed inputs produce the same classification as the original.
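
To make over-approximation concrete, here is a minimal interval bound propagation (IBP) sketch for a small fully connected ReLU network in PyTorch. IBP is simpler and looser than CROWN or DeepPoly, but it shows the core idea: propagate sound lower and upper bounds layer by layer. The two-layer `net` and the epsilon value are purely illustrative.

```python
import torch
import torch.nn as nn

def interval_bounds(net, x, eps):
    """Propagate [lb, ub] through Linear and ReLU layers (a sound over-approximation)."""
    lb, ub = x - eps, x + eps
    for layer in net:
        if isinstance(layer, nn.Linear):
            W, b = layer.weight, layer.bias
            center, radius = (lb + ub) / 2, (ub - lb) / 2
            new_center = center @ W.T + b
            new_radius = radius @ W.abs().T   # worst case over all sign patterns of W
            lb, ub = new_center - new_radius, new_center + new_radius
        elif isinstance(layer, nn.ReLU):
            lb, ub = lb.clamp(min=0), ub.clamp(min=0)
    return lb, ub

net = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
lb, ub = interval_bounds(net, torch.rand(1, 784), eps=0.01)
# If the lower bound of the true class exceeds the upper bound of every other
# class, the region is verified; otherwise the result is "unknown", not "unsafe".
```

Tighter methods such as CROWN replace these interval bounds with linear lower and upper bounds per activation, which is why they can certify regions that plain IBP cannot.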

Strengths of Verification

  • Provable guarantees: “Verified” means actually verified; no hidden bugs remain in the certified region

  • Exhaustive coverage: Analyzes all inputs in the region, not just sampled ones

  • Sound methods: Mathematical guarantees about correctness

  • No attack-dependency: Result doesn’t depend on which attack you tried

  • Sophisticated tools: alpha-beta-CROWN, Marabou, ERAN provide state-of-the-art capabilities

Limitations of Verification

  • Computational cost: Verification can take hours for networks that attacks analyze in seconds

  • Scalability challenges: Works well on small to moderate networks; struggles with very large networks

  • “Unknown” results: Incomplete methods may return “unknown,” providing no information

  • Tight bounds needed: Conservative approximations might prove nothing useful

  • Complex property specifications: Verifying non-standard properties requires expert encoding

When Verification is Necessary

Use verification when: safety matters (autonomous systems, medical devices), you need formal guarantees for compliance, you want to certify robustness at deployment, you’re making robustness claims that others will rely on, or you’re in advanced stages of development where quality matters more than speed.

Tip

Strategy: Start with incomplete verification (CROWN, DeepPoly), which is fast and sound. If it returns “verified,” you’re done. If “unknown,” either try tighter methods or accept the limitation for that region. Only resort to complete methods (branch-and-bound, SMT) for critical regions or final certification.
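
One way to encode this escalation is a thin wrapper that tries cheap bounds first and only pays for a complete check when the region matters; `verify_crown` and `verify_complete` below are hypothetical placeholders for whatever verifier backends you actually use.

```python
def certify(sample, eps, timeout_complete=600):
    """Cheap-first strategy: sound incomplete bounds, then complete search only if needed."""
    result = verify_crown(sample, eps)            # hypothetical incomplete verifier wrapper
    if result in ("verified", "counterexample"):
        return result                             # sound answer either way: done
    # result == "unknown": pay for branch-and-bound/SMT only when this region is critical
    return verify_complete(sample, eps, timeout=timeout_complete)  # hypothetical complete verifier
```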

The Verification Spectrum: From Falsification to Complete Verification#

Falsification and verification aren’t binary choices; they’re ends of a spectrum with approaches in between.

Pure Falsification

Gradient-based and search-based attacks. No guarantees, but very fast. Good for finding bugs during development.

Statistical Falsification

Randomized testing that samples many inputs and checks each for failures. Probabilistic coverage, but no formal guarantees.

Incomplete Verification

Sound but may return “unknown.” CROWN and DeepPoly occupy this space. Fast enough for practical use, sound guarantees, but conservative approximations might not prove what’s actually safe.

Complete Verification

Sound and, given enough time, always decides. Branch-and-bound with bound propagation or SMT-based methods. Guaranteed correctness but potentially very slow.

Hybrid Approaches

Combine falsification and verification: use attacks to find obvious bugs, then verify the remaining regions. Or use verification to identify weak regions, then target attacks there.

| Approach | Guarantee | Speed | Scalability | When to Use |
|---|---|---|---|---|
| Pure falsification | None | Very fast | Excellent | Development, debugging |
| Statistical falsification | Probabilistic | Fast | Good | Coverage testing |
| Incomplete verification | Sound (incomplete) | Moderate | Good | Production testing, pre-deployment |
| Complete verification | Sound + complete | Slow | Limited | Critical certification |

The Spectrum is a Continuum

These aren’t discrete boxes but points on a spectrum. Modern tools blur the lines: some attacks use verification bounds to guide search (verification-guided falsification). Some verifiers use heuristics that aren’t perfectly sound (sound-ish verification). Understanding where each tool sits helps you choose appropriately.

Choosing Between Falsification and Verification#

When should you falsify, and when should you verify?

Development Phase: Falsify First

Early in development, falsification is your friend. Adversarial attacks quickly find bugs, giving concrete feedback on network weaknesses. The goal is iteration speed, not guarantees. Run attacks frequently, find vulnerabilities, improve the model, repeat.

Pre-Deployment Phase: Verify Increasingly

As your network approaches deployment, shift toward verification. Use incomplete verifiers (CROWN, DeepPoly) to certify robustness on representative samples. These give sound guarantees while remaining practical.

Critical Systems: Complete Verification

For safety-critical systems, use complete verification on the final model. The computational cost is worth the absolute certainty.

Debugging: Falsify to Find Failures

When your verifier returns “unknown,” use falsification to investigate. Can you find a concrete adversarial example in that region? If yes, the network is genuinely unsafe. If no, the verifier is being overly conservative.

Certification: Verify for Proof

When you need to make claims about robustness to others—regulators, customers, users—verification provides proof. Falsification only shows you tested hard; verification shows it’s mathematically guaranteed.

| Phase | Primary Tool | Secondary Tool | Goal |
|---|---|---|---|
| Research/development | Falsification | Informal testing | Speed, finding bugs |
| Pre-production | Incomplete verification | Falsification | Quality assurance |
| Production/deployment | Incomplete verification | Complete verification for critical regions | Guarantees, certification |
| Debugging verification | Falsification | Tighter verification methods | Understanding “unknown” results |

Combining Falsifiers and Verifiers: The Hybrid Workflow#

The most effective robustness testing combines both approaches in an iterative workflow:

Step 1: Falsify to Find Obvious Bugs

Run adversarial attacks on your network. If attacks find many failures, the network has serious problems—improve it before doing anything else.

Step 2: Fix Bugs and Retrain

Use the adversarial examples the attacks found to improve your training. Retrain with these examples, or use adversarial training techniques to build robustness.

Step 3: Verify to Get Guarantees

Once obvious bugs are fixed, run incomplete verifiers on representative test samples. This gives you assurance that the network is genuinely robust within the verified regions.

Step 4: Investigate “Unknown” Results

When verification returns “unknown,” use falsification to investigate. Try attacks on that input region. If you find counterexamples, you found a real vulnerability. If you don’t, the verifier is being conservative—either accept that limitation or invest in complete verification.

Step 5: Iterate

For inputs you couldn’t verify, retrain focusing on those regions, then reverify. Each iteration shrinks the “unknown” regions and builds confidence in robustness.
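
A hedged sketch of this loop in Python, with hypothetical helpers (`run_attacks`, `adversarial_train`, `verify_incomplete`) standing in for your attack library, training code, and verifier:

```python
def hybrid_robustness_loop(model, train_data, test_samples, eps, max_rounds=5):
    """Falsify, retrain, verify, and investigate 'unknown' results iteratively."""
    for _ in range(max_rounds):
        # Step 1: cheap falsification finds the obvious bugs first
        counterexamples = run_attacks(model, test_samples, eps)              # hypothetical
        if counterexamples:
            # Step 2: fold the failures back into training
            model = adversarial_train(model, train_data, counterexamples)    # hypothetical
            continue
        # Step 3: sound (incomplete) verification on representative samples
        verified, unknown = verify_incomplete(model, test_samples, eps)      # hypothetical
        # Step 4: attack the "unknown" regions to separate real bugs from conservatism
        remaining_bugs = run_attacks(model, unknown, eps)
        if not remaining_bugs:
            break  # no counterexamples in unverified regions: accept or go complete
        # Step 5: iterate, focusing training on the regions that still fail
        model = adversarial_train(model, train_data, remaining_bugs)
    return model
```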

Example: MNIST Classifier Testing

Start with a simple MNIST classifier trained on clean data. Attack it and find it’s easily fooled—plenty of failures. Retrain with adversarial training; now attacks struggle more. Run CROWN verification and most samples verify, but a few return “unknown.” Attack those specific regions—some yield adversarial examples, others don’t. Retrain focusing on the uncertain regions, then reverify—more samples now verify. Repeat until satisfied with coverage.

This hybrid approach is faster than pure verification (which would struggle on a network that has not yet been hardened) and gives better guarantees than pure falsification (which never proves the absence of bugs).

Complementary Strengths, Not Competition

Falsifiers and verifiers aren’t rivals; they’re teammates. Falsification excels at finding bugs quickly. Verification excels at proving safety exhaustively. Together, they provide the fastest path to high confidence in robustness. Use falsification when you need to find things quickly, verification when you need to prove things thoroughly, and both in iteration.

Tip

Automation and CI/CD integration: Integrate both into your testing pipeline. Run falsifiers on every commit (catches obvious regressions quickly). Run verifiers daily or before releases (provides formal assurance). Automated workflows let you catch problems early without manual effort.
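
For example, a fast falsification gate can live in an ordinary pytest suite. The sketch below reuses the `pgd_attack` sketch from earlier and assumes hypothetical project helpers `load_model` and `load_eval_batch`; the accuracy threshold is an arbitrary placeholder.

```python
import torch

def test_pgd_robust_accuracy_regression():
    """Per-commit gate: robust accuracy under a fixed PGD budget must stay above a floor."""
    model, (x, y) = load_model(), load_eval_batch()      # hypothetical project helpers
    x_adv = pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10)
    robust_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    assert robust_acc > 0.60, f"robust accuracy dropped to {robust_acc:.2f}"
```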

Tool Landscape: Falsifiers and Verifiers#

Falsification Tools

Several libraries provide easy access to adversarial attacks:

  • Foolbox: Unified interface to many attacks across frameworks (PyTorch, TensorFlow, JAX). Easy to use for quick attack evaluation.

  • Adversarial Robustness Toolbox (ART): IBM’s comprehensive toolkit for adversarial machine learning, including attacks, defenses, robustness metrics, and explanations.

  • CleverHans: Developed at Google, one of the original and still popular attack libraries. Good educational tool.

  • DeepXplore: Coverage-guided falsification for finding failures that maximize neuron coverage, not just input coverage.

  • DeepTest: Systematic falsification using synthetic scenarios (weather, lighting, etc.) for safety-critical applications.

Verification Tools

The verification landscape has powerful open-source and academic tools:

  • alpha-beta-CROWN: State-of-the-art incomplete and complete verifier using bound propagation plus branch-and-bound. Fast, scalable, tight bounds.

  • auto_LiRPA: Library for automatic linear relaxation based perturbation analysis; it implements bound propagation in the style of automatic differentiation and is the foundation for alpha-beta-CROWN (see the usage sketch after this list).

  • Marabou: SMT-based complete verifier from Stanford. Slower but provides definitive answers. Good for small networks or final certification.

  • ERAN: ETH’s framework providing multiple abstract interpretation domains (zonotopes, DeepPoly, etc.). Good for understanding different verification approaches.

  • DeepPoly: Abstract interpretation approach providing good tightness-speed tradeoff.
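
As a concrete taste of bound propagation in practice, the sketch below computes CROWN bounds with auto_LiRPA. It follows the library’s documented usage pattern as best I can reproduce it, so treat module paths and signatures as approximate and confirm against the current auto_LiRPA docs; `net` and `x` are an assumed small PyTorch classifier and input batch.

```python
import torch
from auto_LiRPA import BoundedModule, BoundedTensor
from auto_LiRPA.perturbations import PerturbationLpNorm

# Wrap an ordinary PyTorch classifier `net`; the dummy input fixes the graph shape
bounded_net = BoundedModule(net, torch.zeros(1, 784))
perturbation = PerturbationLpNorm(norm=float("inf"), eps=0.01)
bounded_x = BoundedTensor(x, perturbation)   # x: a concrete input batch

# Lower and upper bounds on every logit over the whole eps-ball around x
lb, ub = bounded_net.compute_bounds(x=(bounded_x,), method="CROWN")
# The region is verified if the true class's lower bound exceeds every other
# class's upper bound; otherwise the result is "unknown".
```

alpha-beta-CROWN builds on exactly this kind of bound computation, adding branch-and-bound when the plain bounds return “unknown.”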

Practical Performance Characteristics

Falsifiers (attacks) typically complete in seconds to minutes, even on large networks. Verifiers vary widely:

  • Incomplete verifiers (CROWN, DeepPoly): Seconds to minutes for reasonable networks

  • Complete verifiers (branch-and-bound, SMT): Minutes to hours for small networks, and may time out on large ones

Tool selection depends on your network size, required guarantees, and computational budget.

| Tool | Type | Approach | Speed | Guarantee | Best For |
|---|---|---|---|---|---|
| Foolbox/ART | Falsifier | Gradient attacks | Very fast | None | Quick iteration |
| alpha-beta-CROWN | Verifier | Bound propagation | Moderate | Sound | Practical verification |
| Marabou | Verifier | SMT-based | Slow | Sound + complete | Final certification |
| ERAN | Verifier | Abstract interpretation | Moderate | Sound | Research, education |
| auto_LiRPA | Verifier library | Bound propagation | Fast | Sound | Custom verification |

Understanding Tool Outputs#

Different tools return different types of results. Knowing what each means helps you act appropriately.

Falsifier Outputs

Successful falsification returns a concrete adversarial example—a specific perturbed input that causes misclassification. You get the exact perturbation, the original prediction, and the new (incorrect) prediction. This concreteness is valuable for debugging.

Failed falsification means “I couldn’t find an adversarial example,” which tells you nothing about whether one exists. You might need stronger attacks, longer search, or different initialization.

Verifier Outputs

Sound verifiers return one of three results:

  • Verified: The property holds for all inputs in the region. Provably safe.

  • Counterexample found: The verifier found an input violating the property. Like falsification output, but verified to be correct.

  • Unknown/Timeout: Couldn’t determine the answer with the current approximations or time budget. Incomplete verifiers may return “unknown”; complete verifiers may hit their time limit.

Interpreting “Unknown” Results

An “unknown” from an incomplete verifier is not a verdict that the property fails. It means the verifier’s approximations were too conservative, or the computational budget ran out. The property might actually hold, but to make progress you need one of:

  • Tighter verification methods (use CROWN instead of DeepPoly, or branch-and-bound)

  • A smaller input region (reduce epsilon)

  • More computational resources

  • Or acceptance that this region isn’t certifiable with your current approach

What to Do With Each Result

If falsification finds adversarial examples:

  1. Document them as test cases

  2. Retrain including these examples

  3. Reverify after retraining

If verification says “verified”:

  1. Gain confidence in robustness for that input/region

  2. No further action needed

  3. Document the certification details

If verification says “unknown”:

  1. Run falsification on that specific region

  2. If falsification finds examples: real vulnerability, retrain

  3. If falsification finds nothing: the verifier is conservative; accept or try a tighter method

Hint

Iterative refinement based on results: Each “unknown” that doesn’t yield counterexamples points to verifier conservatism, not network unsafety. Track these regions—they tell you where to focus training improvements (to make the network easier to verify) versus where to accept uncertainty (truly hard regions).

Common Pitfalls and Best Practices#

Pitfalls to Avoid

  1. Relying solely on falsification for critical systems: Absence of attacks doesn’t imply safety. Always use verification for systems where safety matters.

  2. Expecting complete verification to finish quickly: Complete verification can take unbounded time. Have realistic timeout expectations and accept “unknown” or the best answer found.

  3. Misinterpreting “no attack found” as “verified”: A strong attack that fails to find adversarial examples is not verification. Use actual verifiers for guarantees.

  4. Not testing falsifiers with known vulnerable inputs: Validate that your attacks work by testing on networks you know are vulnerable. Attack parameterization and initialization matter.

  5. Ignoring computational budgets: Verification has real costs. Plan for it—don’t expect instant results on large networks.

Best Practices

  1. Use falsifiers early and often in development: Run attacks frequently during training to identify and fix obvious vulnerabilities.

  2. Reserve complete verification for final validation: Complete methods are expensive; use incomplete sound methods during development, complete methods for final certification.

  3. Document everything: Record which tools, settings, and parameters were used for each verification. Reproducibility matters for compliance and future improvements.

  4. Maintain a test suite of known adversarial examples: Build a library of examples you know fool the network. Use these to validate that your testing pipeline catches real problems.

  5. Automate and integrate: Set up CI/CD pipelines that run falsification on every commit and verification before releases. Automation catches problems without manual effort.

Final Thoughts#

Falsifiers and verifiers are complementary tools addressing different questions:

  • Falsifiers answer: “Can I break this network?”

  • Verifiers answer: “Can I prove this network is safe?”

In development, falsifiers provide speed and concrete feedback. In production, verifiers provide guarantees and certifiable robustness. Together, they form a complete testing strategy.

The choice between them depends on your phase of development, the importance of guarantees, and your computational budget. During active development, falsifiers guide improvements. Before deployment, verifiers provide assurance. In critical systems, both ensure comprehensive coverage.

The verification tooling landscape continues to improve. Faster verifiers make formal guarantees more practical. Smarter attacks find harder-to-detect vulnerabilities. The gap between them narrows, but the distinction remains: falsification finds bugs, verification proves safety.

Build your testing strategy using both. Start with falsifiers for speed and feedback. Add verifiers for assurance and guarantees. Combine them iteratively for the fastest path to genuinely robust networks.
