# Onchain Proofs & zkML

## Overview

The exploit detection algorithms used by the Safe Sequencer are deterministic over the inputs posted to L1, just like a rollup. In fact, the Safe Sequencer itself is a rollup that proves the proper inclusion (or exclusion) of transactions rather than the result of their execution.

## Deterministic Exploit Detection

By performing dynamic analysis before a transaction's state change is committed to a block, the Safe Sequencer can programmatically distinguish malicious exploiters from legitimate users. Because this dynamic analysis is deterministic rather than arbitrarily decided by a human, we can prove when a Safe Sequencer begins arbitrarily censoring transactions. This determinism also lets us prove the performance of our algorithms without revealing their exact heuristics to exploiters.
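To make the determinism concrete, here is a minimal sketch of what a deterministic verdict function could look like. The trace shape, field names, and drain-threshold rule are all hypothetical; the point is that the verdict is a pure function of the posted inputs, so any party replaying the same trace reaches the same verdict.

```python
from dataclasses import dataclass

# Hypothetical trace type -- the sequencer's real interfaces are not public.
@dataclass(frozen=True)
class TraceStep:
    caller: str
    callee: str
    value_delta: int  # net balance change (wei) observed at the callee in this step

def detect_exploit(trace: tuple[TraceStep, ...], drain_threshold: int = 10**18) -> bool:
    """Toy deterministic rule: flag a transaction whose trace drains more than
    `drain_threshold` wei from any single contract. No randomness, no human
    judgment -- the same trace always yields the same verdict."""
    drained: dict[str, int] = {}
    for step in trace:
        drained[step.callee] = drained.get(step.callee, 0) + step.value_delta
    return any(delta < -drain_threshold for delta in drained.values())
```

Because the function is pure, a claimed verdict can be checked by simply re-running it over the L1-posted trace data.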

### Detection Algorithms

#### Hand-Written Patterns: Reentrancy

Reentrancy exploit patterns are blocked globally using hand-crafted heuristics, inspired by a body of research that includes the execution property graph paper.
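As a rough illustration of the class of heuristic involved (not the sequencer's actual rules), the canonical reentrancy shape is a call back into a contract whose frame is still live on the call stack:

```python
def has_reentrancy(call_stack_events: list[tuple[str, str]]) -> bool:
    """Hypothetical heuristic sketch: given a flat list of ("enter", contract)
    and ("exit", contract) events from a transaction trace, flag any call back
    into a contract that is still on the call stack. Production heuristics
    additionally weigh state reads/writes around the re-entrant call."""
    stack: list[str] = []
    for event, contract in call_stack_events:
        if event == "enter":
            if contract in stack:  # re-entered a live frame: classic reentrancy shape
                return True
            stack.append(contract)
        elif event == "exit":
            stack.pop()
    return False
```

Sequential calls to the same contract (enter, exit, enter) are benign under this rule; only re-entry into an unfinished frame trips it.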

#### A.I. Models

A.I. models within the sequencer use granular execution data collected during a transaction to determine whether a transaction is malicious or not. These models also use historical information that the sequencer maintains for protected contracts, building up the model's own idea of normal behaviour for a specific contract over time.
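The per-contract baseline idea can be sketched as a rolling statistical profile. Everything here (the window size, the single numeric feature, the z-score cutoff) is an assumption for illustration; the sequencer's actual models and features are not described in this document.

```python
from collections import deque
import statistics

class ContractBaseline:
    """Illustrative sketch: maintain a rolling window of one numeric feature
    (e.g. value moved per transaction) for a single protected contract, and
    flag observations far outside the contract's historical norm."""

    def __init__(self, window: int = 100, z_cutoff: float = 4.0):
        self.history: deque[float] = deque(maxlen=window)
        self.z_cutoff = z_cutoff

    def observe_and_score(self, feature: float) -> bool:
        """Score the new observation against history, then fold it in."""
        anomalous = False
        if len(self.history) >= 10:  # require a minimal history before judging
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9  # avoid divide-by-zero
            anomalous = abs(feature - mean) / stdev > self.z_cutoff
        self.history.append(feature)
        return anomalous
```

The key property mirrored from the text: the model's notion of "normal" is built per contract and evolves as history accumulates.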

## Accuracy Proof (zkML)

The Safe Sequencer uses an accuracy proof to prove the detection metrics of a committed model without revealing its weights or algorithm. Using zkML, we prove the precision and recall of a model over a public dataset, so users can verify the model's performance while exploiters cannot study the model to evade its detection.
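The metrics being attested are standard precision and recall. In the zkML setting the predictions are produced by the committed (private) model inside a proving circuit over the public labelled dataset; this plaintext sketch just shows the arithmetic the proof attests to.

```python
def precision_recall(predictions: list[bool], labels: list[bool]) -> tuple[float, float]:
    """Precision = TP / (TP + FP), recall = TP / (TP + FN), where `predictions`
    are the model's exploit verdicts and `labels` are the dataset's ground truth."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum(not p and l for p, l in zip(predictions, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A zkML accuracy proof binds these two numbers to the committed model hash without disclosing how the model produced each prediction.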

To learn more about zkML, and the ability to prove the accuracy of a model without sharing the model itself, check out these resources:

## Censorship Proof

The Safe Sequencer commits to a set of detection models whose accuracy metrics are shared publicly and proven using an accuracy proof. A hash for each chosen model is posted onchain, representing the set of models that the Safe Sequencer has committed to run.
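A model commitment of this kind could look like the sketch below: hash a canonical serialization of the model so observers learn a binding fingerprint, not the weights. The JSON serialization here is purely illustrative; a production system would commit to an exact byte encoding agreed with the proof system.

```python
import hashlib
import json

def commit_model(weights: list[float], metadata: dict) -> str:
    """Hash a canonical (sorted-key JSON) serialization of the model.
    The resulting digest can be posted onchain; anyone can later check that
    a revealed model or an accuracy proof refers to this exact commitment."""
    payload = json.dumps({"weights": weights, "meta": metadata}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()
```

Because the hash is binding, the sequencer cannot silently swap in a different model after committing without the mismatch being detectable.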

The Safe Sequencer MUST only censor a transaction if one of the committed models detects an exploit for that transaction. If the Safe Sequencer censors a transaction that was not flagged as an exploit by the pre-committed detection algorithms, it must have its power over the customer rollup removed and, if there is economic stake, be slashed.

To prove that a Safe Sequencer is censoring invalidly, we need an inclusion rule: the Safe Sequencer MUST attempt to include any transaction whose data is posted to L1 (by any user). If that transaction is not included in a batch posted by the Safe Sequencer within the allotted window, the Safe Sequencer must prove it was detected as an exploit by one of the pre-committed detection algorithms. If the Safe Sequencer cannot prove the transaction tripped a pre-committed detection algorithm, the successful censorship proof is used to automatically remove the Safe Sequencer from power and slash any associated economic stake to reimburse users.
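The inclusion rule reduces to a simple set check once the window closes. The function below is a sketch under assumed names: every posted transaction must either appear in a batch or carry a valid detection proof, and anything left over is evidence for a censorship proof.

```python
def censorship_violations(
    posted_txs: set[str],        # tx hashes users posted to L1 within the window
    batched_txs: set[str],       # tx hashes the sequencer included in its batches
    detection_proofs: set[str],  # tx hashes with a valid exploit-detection proof
) -> set[str]:
    """Return the posted transactions that were neither included nor provably
    flagged by a committed model. A non-empty result is the evidence a
    censorship proof uses to remove (and, if staked, slash) the sequencer."""
    return posted_txs - batched_txs - detection_proofs
```

An onchain verifier enforcing this rule never needs to see the models themselves; it only checks membership and the validity of the detection proofs.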