Formal Specifications for Neural Network Properties#

You know you want to verify robustness, but how do you write that formally? The gap between intuition (“my network should be robust”) and mathematical precision is surprisingly large. Specifications are how you bridge that gap. A poorly written specification might verify something you didn’t intend, or miss important requirements entirely. This guide teaches you to think formally about properties and write specifications that verifiers can actually check.

Why Specifications Matter#

Natural language is ambiguous. “The network should be robust” could mean different things: robust to small pixel perturbations? Robust to semantic transformations? Robust only on correctly classified inputs? These differences matter enormously for verification.

Formal specifications remove ambiguity. They’re mathematical statements about what the network must do. A verifier can either prove the specification holds or find a counterexample. But first, you must express your requirement as a specification.

This is harder than it sounds. Common mistakes include:

  • Specifications that are vacuous: So weak they’re trivially true, proving nothing meaningful.

  • Specifications that are unsatisfiable: So strict no network could ever satisfy them.

  • Specifications that are too narrow: Verifying only special cases while missing the real requirement.

  • Specifications that don’t match your intent: Formally correct but not what you actually wanted to check.

Specification as Contract

A specification is a contract between you and the verifier. You’re saying: “If you prove this property, I’ll trust the result.” Write specifications carefully, as they determine whether verification confirms what you care about.

The Epsilon-Ball Specification#

The simplest and most common specification is local robustness. For an input \(x\), the network is locally robust if all inputs within some distance \(\varepsilon\) produce the same classification.

\[\forall x' : \|x' - x\| \leq \varepsilon \Rightarrow f(x') = f(x)\]

This says: for input \(x\), if another input \(x'\) is within distance \(\varepsilon\), then \(f(x')\) must produce the same class as \(f(x)\).

The norm matters. Common choices are:

  • L∞ norm: \(\|x' - x\|_\infty = \max_i |x'_i - x_i|\). Used for bounded pixel perturbations.

  • L2 norm: \(\|x' - x\|_2 = \sqrt{\sum_i (x'_i - x_i)^2}\). Used for Euclidean distance constraints.

  • L1 norm: \(\|x' - x\|_1 = \sum_i |x'_i - x_i|\). Used for sparse perturbations.

| Norm | Use Case | Intuition |
| --- | --- | --- |
| L∞ | Pixel perturbations, bounded changes per dimension | Maximum change in any single dimension |
| L2 | Overall perturbation budget | Total Euclidean distance allowed |
| L1 | Sparse perturbations, changing few features | Total absolute change across dimensions |

Choosing the right norm is essential. For image adversarial robustness, L∞ or L2 are standard. For other domains, the choice depends on what perturbations are meaningful. Don’t default to L∞ without thinking about your application.
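To make the norm definitions concrete, here is a minimal pure-Python sketch that computes all three distances for a toy perturbation (the vectors are made up for illustration, not drawn from any real dataset):

```python
import math

# Illustrative input and perturbed input (made-up values).
x       = [0.20, 0.50, 0.90]
x_prime = [0.25, 0.45, 0.90]
delta = [a - b for a, b in zip(x_prime, x)]

l_inf = max(abs(d) for d in delta)            # largest change in any one dimension
l_2   = math.sqrt(sum(d * d for d in delta))  # total Euclidean distance
l_1   = sum(abs(d) for d in delta)            # total absolute change

print(l_inf, l_2, l_1)
```

Note how the same perturbation looks "small" or "large" depending on the norm: here the L∞ distance is half the L1 distance, because the change is spread over two dimensions.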

Output Constraint Specifications#

Beyond requiring a fixed output class, you might need specifications about output values or ranges. For example, a confidence score should stay above a threshold, or a regression output should stay within safe bounds.

\[\forall x' : \|x' - x\| \leq \varepsilon \Rightarrow f(x') \in [L, U]\]

This says: for all perturbed inputs, the output stays within bounds \([L, U]\).

Multi-output constraints can be specified independently:

\[\forall x' : \|x' - x\| \leq \varepsilon \Rightarrow (f_1(x') \geq \text{threshold}_1 \land f_2(x') \leq \text{threshold}_2)\]

Combine constraints using logical operators: AND (both hold), OR (at least one holds), IMPLIES (conditional constraints).

Example: For autonomous vehicles, you might specify that the network’s steering angle should stay within safe bounds and that the confidence in lane detection should exceed a minimum threshold, even under sensor noise.
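Sampling cannot prove a specification, but it can cheaply falsify one before you pay for full verification. The sketch below uses a made-up two-output linear "network" and made-up thresholds; it searches an L∞ ball for a violation of a conjunctive output constraint:

```python
import random

# Toy stand-in for a network: a fixed linear map (illustrative only).
def f(x):
    return [0.6 * x[0] + 0.4 * x[1],  # output f_1, e.g. a confidence score
            1.0 - 0.5 * x[0]]         # output f_2, e.g. a control magnitude

def violates_spec(x, eps, t1, t2, trials=1000, seed=0):
    """Search the L-inf eps-ball around x for a counterexample to
    f_1(x') >= t1 AND f_2(x') <= t2. Sampling can only falsify the
    spec; finding nothing proves nothing."""
    rng = random.Random(seed)
    for _ in range(trials):
        x_p = [xi + rng.uniform(-eps, eps) for xi in x]
        y = f(x_p)
        if not (y[0] >= t1 and y[1] <= t2):
            return x_p  # counterexample found
    return None

print(violates_spec([0.5, 0.5], eps=0.01, t1=0.2, t2=1.0))
```

A `None` result here only means the sampler found no violation; turning that into a guarantee is exactly what the verifier is for.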

Hint

Output constraints are underutilized. Don’t just verify classification robustness—specify and verify that safety-critical outputs stay in safe ranges even under perturbation.

Relational Properties#

Some properties involve relationships between outputs on different inputs, not just robustness of a single input.

Monotonicity: If input A is “more severe” than input B, the network should rank them accordingly. For example, in risk assessment, a higher risk input should score higher.

\[\text{If } x_A \geq x_B \text{ (elementwise), then } f(x_A) \geq f(x_B)\]
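An empirical check of this property is easy to sketch; passing pairs give evidence, not a proof. The risk function below is a made-up score that is monotone by construction:

```python
# Made-up risk score, monotone by construction (non-negative weights).
def risk(x):
    return 0.7 * x[0] + 0.3 * x[1]

def check_monotonicity(f, pairs):
    """For each pair with x_a >= x_b elementwise, require f(x_a) >= f(x_b).
    Passing pairs are evidence, not a proof."""
    for x_a, x_b in pairs:
        if all(a >= b for a, b in zip(x_a, x_b)):
            assert f(x_a) >= f(x_b), f"monotonicity violated: {x_a} vs {x_b}"

check_monotonicity(risk, [([0.9, 0.8], [0.5, 0.5]),
                          ([0.3, 0.3], [0.1, 0.2])])
```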

Fairness properties: The network should treat similar inputs similarly. If two inputs differ only in a protected attribute, they should have similar outputs.

\[\text{If } x_1 = x_2 \text{ except for protected attribute, then } |f(x_1) - f(x_2)| \leq \delta\]
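One way to test this formulation empirically (before attempting formal verification) is to flip the protected attribute and measure the worst output gap. The score function and feature indices below are assumptions for illustration:

```python
def fairness_gap(f, x, protected_idx, values):
    """Flip the protected attribute through the given values and return
    the worst output gap. A gap above your delta falsifies the spec
    for this particular input x."""
    worst = 0.0
    for v in values:
        x_flipped = list(x)
        x_flipped[protected_idx] = v
        worst = max(worst, abs(f(x) - f(x_flipped)))
    return worst

# Made-up score that ignores feature 2 (the protected attribute here).
score = lambda x: 0.5 * x[0] + 0.5 * x[1]
gap = fairness_gap(score, [0.4, 0.6, 1.0], protected_idx=2, values=[0.0, 1.0])
print(gap)
```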

Consistency: The network should make consistent decisions for semantically equivalent inputs. For vision, rotated or slightly scaled images should get the same classification.

These properties are harder to specify precisely than simple robustness, but they’re critical for real-world deployment.

Temporal and Sequential Properties#

For networks that process sequences (RNNs, transformers), properties might involve behavior over time.

Eventually property: The network eventually enters a safe state.

Always property: The network maintains a safety invariant throughout sequence processing.

\[\text{For all timesteps } t: \text{hidden\_state}[t] \text{ satisfies constraint}\]

Example: In medical monitoring, a recurrent network should eventually raise an alarm if measurements exceed safe bounds, and should maintain memory of critical events.

These properties are complex to specify and verify. Many tools don’t yet support them well. When working with sequential models, keep specifications simpler and fall back to testing for complex temporal properties.
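The testing fallback for an "always" property can look like the following sketch: a toy recurrent update and an invariant checked after every timestep of a single trace (both the update rule and the invariant are assumptions for illustration):

```python
def step(state, obs):
    # Made-up recurrent update: exponential moving average of observations.
    return 0.9 * state + 0.1 * obs

def always_holds(obs_seq, invariant, state=0.0):
    """Check an 'always' property on one trace: the invariant must hold
    after every timestep. A failure is a real violation; success is only
    evidence for this particular trace."""
    for obs in obs_seq:
        state = step(state, obs)
        if not invariant(state):
            return False
    return True

in_unit_interval = lambda s: 0.0 <= s <= 1.0  # assumed safety invariant
print(always_holds([0.5, 0.8, 1.0], in_unit_interval))
```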

Specification Languages and Formats#

Different tools use different specification syntax. Understanding common formats helps you write specifications that tools can parse.

VNNLIB format: Used by most modern verifiers (CROWN, Marabou, etc.). Specifies linear constraints on inputs and outputs in a restricted, SMT-LIB-like syntax.

SMT-LIB format: The standard input language for SMT solvers. Expressive and precise but verbose. Used by Marabou, Z3, and other solvers.

ACAS Xu property format: A domain-specific format for collision avoidance verification. Precise property definitions for the ACAS Xu benchmark.

Tool-specific formats: Some verifiers have their own syntax. alpha-beta-CROWN uses its own property definition format, for instance.

| Format | Expressiveness | Ease of Use | Tools |
| --- | --- | --- | --- |
| VNNLIB | Linear constraints, Boolean logic | Moderate (readable) | Most verifiers |
| SMT-LIB | Arbitrary first-order logic | Hard (verbose) | SMT solvers, Marabou |
| ACAS Xu format | Domain-specific collision properties | Easy (specialized) | ACAS Xu benchmarks |
| Tool-specific | Varies | Varies | Individual tools |

When starting with a new verifier, check its documentation for specification examples. Copy and modify existing properties rather than writing from scratch.
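As a concrete starting point, the sketch below generates a VNNLIB-style epsilon-ball robustness property. It follows the common convention of asserting the *negation* of robustness (so an UNSAT verdict means the network is robust), but encoding conventions vary across tools; check your verifier's documentation before relying on this exact form:

```python
def vnnlib_robustness(x, eps, true_label, n_outputs):
    """Emit a VNNLIB-style L-inf robustness property around input x.
    The output assertion says some other logit can match or beat the
    true logit; UNSAT from the verifier then means robustness holds."""
    lines = [f"(declare-const X_{i} Real)" for i in range(len(x))]
    lines += [f"(declare-const Y_{j} Real)" for j in range(n_outputs)]
    for i, xi in enumerate(x):
        lines.append(f"(assert (>= X_{i} {xi - eps}))")
        lines.append(f"(assert (<= X_{i} {xi + eps}))")
    others = " ".join(f"(>= Y_{j} Y_{true_label})"
                      for j in range(n_outputs) if j != true_label)
    lines.append(f"(assert (or {others}))")
    return "\n".join(lines)

print(vnnlib_robustness([0.5, 0.25], eps=0.1, true_label=0, n_outputs=3))
```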

From Domain Requirements to Formal Specifications#

The real skill is translating domain requirements into formal specifications. This requires understanding both your domain and verification concepts.

Start with a requirement: “The network should recognize faces under lighting variations.”

Break it into verifiable components:

1. It should correctly classify images under normal lighting.
2. It should correctly classify darkened images (simulate with bounded pixel reduction).
3. It should correctly classify brightened images (simulate with bounded pixel increase).

For each component, write a formal specification using an epsilon-ball robustness property with appropriate epsilon value. Verify each separately.
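A minimal sketch of the darkening and brightening components: encode each as asymmetric per-pixel interval bounds (clipped to an assumed [0, 1] pixel range) that an interval-based verifier could consume:

```python
def darkening_bounds(x, delta):
    """Per-pixel (lower, upper) bounds for darkened versions of x:
    each pixel may only decrease, by at most delta, clipped to [0, 1]."""
    return [(max(0.0, xi - delta), xi) for xi in x]

def brightening_bounds(x, delta):
    """Mirror image: each pixel may only increase, by at most delta."""
    return [(xi, min(1.0, xi + delta)) for xi in x]

print(darkening_bounds([0.1, 0.8], delta=0.2))
```

Unlike a symmetric epsilon-ball, these one-sided regions match the semantics of "darkened" and "brightened" directly, so the verifier is not asked to prove more than the requirement demands.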

Tip

Iterative refinement works better than trying to write the perfect specification upfront. Write a simple specification, verify it, and refine based on results. If verification succeeds but you don’t trust the result, the specification was too weak. If it times out, the specification was too broad.

Common Pitfalls#

Vacuous specifications: A specification that’s trivially true provides no confidence. Example: “the network’s output is a vector of numbers.” Technically true, but meaningless. All specifications should be constraining.

Unsatisfiable specifications: Specifications that are impossible to satisfy. Example: “the network must classify all MNIST images correctly with adversarial perturbations of epsilon=255 (full input range).” No network can achieve this. Unsatisfiable specs waste verification effort.

Over-specification: Requiring more than necessary. Example: specifying the exact output values instead of just the classification. The verifier then needs to prove tighter properties than required.

Mismatch with intent: A formally correct specification that doesn’t match your actual requirement. Example: verifying robustness for one image when you meant to verify a representative sample. Always double-check that the specification captures what you care about.

Hint

Test your specifications. For simple ones, manually verify that they capture your intent on a few examples. For complex ones, run your network on test inputs and check whether the specification would have been satisfied. This sanity-checks the specification before verification.
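A minimal sketch of that sanity check, with a made-up model and specification standing in for your own:

```python
def sanity_check(f, spec, test_inputs):
    """Run the network on unperturbed test inputs and count spec failures.
    If the spec already fails here, verification is guaranteed to find a
    counterexample -- fix the spec (or the model) before verifying."""
    failures = [x for x in test_inputs if not spec(f(x))]
    return len(failures), len(test_inputs)

# Made-up model and specification (illustrative only).
model = lambda x: [0.8 * x[0], 0.2 * x[0]]
spec = lambda y: y[0] >= y[1]            # intended: class 0 always wins
fails, total = sanity_check(model, spec, [[0.1], [0.5], [0.9]])
print(fails, total)
```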

Common Specification Patterns#

Certain patterns appear frequently. Building a library of patterns helps you write specifications faster.

Local robustness: Epsilon-ball around specific points.

Regional properties: Constraints on outputs for entire input regions. Example: if pixel values are in a certain range, output classification should be X.

Safety thresholds: Output confidence must exceed minimum or be within bounds.

Fairness constraints: Outputs should be similar for inputs differing only in protected attributes.

Stability constraints: Similar inputs should produce similar outputs (Lipschitz-style properties).

Once you recognize a pattern, you can adapt previous specifications rather than starting from scratch.

Final Thoughts#

Formal specifications are the interface between your requirements and the verifier. Good specifications are precise, meaningful, and achievable. Writing them requires both domain knowledge (what matters in your application) and verification knowledge (what verifiers can actually check).

Start with simple specifications. Verify them. Refine based on what you learn. Over time, you’ll develop intuition for which specifications are realistic and which are too ambitious.

Remember: a verified specification is only as valuable as the specification itself. Verify something meaningful, and verification becomes a powerful tool for confidence. Verify something vacuous, and you’ve wasted effort on a meaningless guarantee.

Note

Further reading: