Using ZKML to Create ML Model Execution Proofs — Balance AI POC

Balance AI
6 min read · Feb 29, 2024

The complexity of machine learning (ML) deployments is rising in tandem with the expanding scope and accuracy of ML technologies. To manage this complexity, numerous organizations are opting for “ML-as-a-service” (MLaaS) providers, who run intricate, proprietary ML models on their behalf. However, as the adoption of these services grows, they pose challenges in terms of comprehension and auditability.

To address those issues, and in order to decentralize ML-as-a-service, a cryptographic protocol has to be introduced that allows one to prove that a request was processed by the “advertised” model.

At Balance AI we are researching various techniques; currently the most promising is ZK-SNARK technology. You can read more about it in [1].


A trustless system is constructed to authenticate ML model predictions for models intended for production-level use. This system utilizes a cryptographic technique known as ZK-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge), enabling a prover to demonstrate the outcome of a computation without disclosing any details about the inputs or intermediate steps involved. ZK-SNARKs empower an MLaaS provider to validate the accurate execution of the model retroactively, thereby enabling model consumers to scrutinize predictions at their discretion. [1][2]

The system would look like the following diagram:

We have successfully experimented with the ZKML framework. [3]

Verification Example

We used the GPT-2 model as a POC to prove its execution using ZKML; the ZKML framework provides this as an example. [3]

The first step is to generate the proof using:

./target/release/time_circuit examples/nlp/gpt-2/model.msgpack examples/nlp/gpt-2/inp.msgpack kzg

This will produce public_vals, pkey, vkey, and proof.

We can then verify the circuit using their verifier application:

./target/release/verify_circuit examples/nlp/gpt-2/config.msgpack examples/nlp/gpt-2/vkey examples/nlp/gpt-2/proof examples/nlp/gpt-2/public_vals kzg


The ZKML library allows one to create execution proofs for any ML model in the TFLite (TensorFlow Lite) format. [4]

The input request (to the ML model) has to be converted to the NPY (NumPy array) format.
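For example, a request payload can be serialized to NPY with NumPy (the file name and shape below are illustrative, not fixed by ZKML):

```python
import numpy as np

# Illustrative example: serialize a model input to the NPY format.
# The shape (1, 28, 28) matches an MNIST-style input; adjust it for your model.
input_tensor = np.zeros((1, 28, 28), dtype=np.float32)
np.save("example_inp.npy", input_tensor)

# Loading it back yields the same array.
restored = np.load("example_inp.npy")
assert restored.shape == (1, 28, 28)
```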

Then, by running the following script (for example):

./target/release/test_circuit examples/mnist/converted_model.msgpack examples/mnist/example_inp.msgpack kzg

we can generate a proof for any ML model.

How it works

A neural network can be represented by a matrix of weights through its architecture and connections between neurons. In a neural network, each neuron in a given layer is connected to every neuron in the subsequent layer. These connections are associated with weights that determine the strength of the connection between neurons.

In summary, by representing the neural network architecture with matrices of weights, we can efficiently compute the outputs of each layer through matrix multiplication and activation functions, enabling complex computations in neural networks.
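As a minimal sketch of this idea, a two-layer network's forward pass reduces to matrix multiplications and element-wise activations (the weights below are arbitrary illustrative values, not a trained model):

```python
import numpy as np

def relu(x):
    """Element-wise activation: max(0, x)."""
    return np.maximum(0.0, x)

# Arbitrary illustrative weights for a tiny 3 -> 4 -> 2 network.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))   # connections from layer 1 to layer 2
W2 = rng.standard_normal((4, 2))   # connections from layer 2 to layer 3

x = np.array([1.0, -0.5, 2.0])     # input vector

# Each layer is one matrix multiplication followed by an activation.
hidden = relu(x @ W1)              # shape (4,)
output = hidden @ W2               # shape (2,)
```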

The described ZK solution uses KZG polynomial commitments.

KZG is a popular polynomial commitment scheme widely used in zero-knowledge protocols. The core idea is that a prover can commit to a polynomial and later prove to a verifier the value of the polynomial at a given point, without revealing the underlying polynomial. The reason it is so useful is that anything that can be encoded into a polynomial can now be easily selectively disclosed. [6][7]

As you may guess, we can encode the matrix of weights (together with the input) into a KZG polynomial. That allows us to use a KZG-based ZK solution to prove the execution of a specific model.
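The following toy sketch is not real KZG (which additionally needs elliptic-curve pairings to make the commitment succinct and binding), but it illustrates the underlying principle: data encoded as polynomial coefficients can be checked by evaluating at a random point, because two distinct low-degree polynomials agree at very few points:

```python
import random

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def eval_poly(coeffs, x):
    """Evaluate a polynomial with the given coefficients at x (mod P), Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

# Encode a (tiny) weight matrix as polynomial coefficients.
weights = [3, 1, 4, 1, 5, 9, 2, 6]

# Prover and verifier evaluate at a random challenge point.
r = random.randrange(P)
claimed = eval_poly(weights, r)          # prover's claimed evaluation
assert claimed == eval_poly(weights, r)  # verifier's check against committed data

# If the prover had used different weights, the evaluations would
# (with overwhelming probability) disagree.
tampered = list(weights)
tampered[0] += 1
assert eval_poly(tampered, r) != claimed
```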

First, consider the setting where a model provider (MP) has a model they wish to serve to a model consumer (MC). The MC wants to verify the model accuracy to ensure that the MP is not malicious, lazy, or erroneous (i.e., has bugs in the serving code). To verify model accuracy, the model provider (MP) will commit to a model by hashing its weights. The model consumer (MC) will then send a test set to the MP, on which the MP will provide outputs and ZK-SNARK proofs of correct execution. By verifying the ZK-SNARKs on the test set, the MC can be confident that the MP has executed the model correctly. [1]
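As a toy simulation of this flow (a plain SHA-256 hash stands in for the weight commitment, and a recomputation stands in for the ZK-SNARK, which in the real protocol accompanies each answer and lets the MC verify without ever seeing the weights):

```python
import hashlib
import json

def commit(weights):
    """MP commits to the model by hashing its weights."""
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def model(weights, x):
    """A stand-in 'model': a simple dot product."""
    return sum(w * xi for w, xi in zip(weights, x))

# --- Model provider (MP) ---
weights = [0.5, -1.0, 2.0]
commitment = commit(weights)          # published at registration time

# --- Model consumer (MC) sends a test set ---
test_set = [[1, 2, 3], [0, 1, 0]]

# MP answers each query; in the real protocol each answer comes with a
# ZK-SNARK proving it was computed with the committed weights.
answers = [model(weights, x) for x in test_set]

# --- MC checks ---
# Here MC recomputes with revealed weights; a real ZK-SNARK makes this
# check possible while keeping the weights private.
assert commit(weights) == commitment
assert answers == [model(weights, x) for x in test_set]
```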

Protocol Proposal

  1. Generate the ZK proof setup from the model weights during registration of the model. This would be automated by the Balance AI Wrapper SDK. (Proof Setup)
  2. When the model is accessed via the Balance Client SDK, proofs covering the model weights and input parameters would be generated by the wrapper code. (Prove)
  3. The proof would be stored on-chain for later verification by the protocol. (Verify)

The aforementioned protocol would provide the necessary security for accessing any ML (machine learning) models registered on the Balance AI chain.

The zero-knowledge proof process can be broken down into three distinct stages:

  • Proof Setup: In this stage, the prover prepares the necessary inputs and parameters for the proof, including any private information that will be used to construct the proof.
  • Prove: During this stage, the prover constructs the zero-knowledge proof using the prepared inputs and parameters. The proof must be constructed in a way that allows the verifier to verify its correctness without learning any sensitive information about the inputs or parameters.
  • Verify: In this final stage, the verifier checks the validity of the proof without learning anything beyond the fact that the statement being proved is indeed true. The verifier can do this by using the information provided in the proof to verify the correctness of the statement, while keeping any sensitive information private and secure.

The Balance AI Wrapper SDK would have to provide automation for preparing ML models to work with the ZKML system. That would include:

  • Converting the model into a format supported by ZKML (e.g. TFLite, ONNX Runtime)
  • Generating the initial proof setup based on the model weights
  • Automating proof generation (prove) on the fly from clients’ requests
  • Storing proofs (for later verification) on-chain and/or in IPFS
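A minimal sketch of how such a wrapper might chain these stages is shown below. All function names are hypothetical placeholders (not the real SDK API); the only grounded detail is the idea of invoking the zkml prover CLI shown earlier:

```python
import hashlib
import subprocess

def proof_setup(weights: bytes) -> str:
    """Stage 1 (hypothetical): commit to the model weights at registration
    time. A plain hash stands in for the real ZK proof setup."""
    return hashlib.sha256(weights).hexdigest()

def prove(model_msgpack: str, input_msgpack: str) -> None:
    """Stage 2 (hypothetical): generate a proof for one request by shelling
    out to the zkml prover binary shown earlier (paths are illustrative)."""
    subprocess.run(
        ["./target/release/time_circuit", model_msgpack, input_msgpack, "kzg"],
        check=True,
    )

def store_proof(proof: bytes, commitment: str) -> dict:
    """Stage 3 (hypothetical): package the proof for on-chain/IPFS storage."""
    return {"commitment": commitment, "proof": proof.hex()}

# Registration-time commitment for some example weight bytes.
commitment = proof_setup(b"\x00\x01\x02")
record = store_proof(b"proof-bytes", commitment)
```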

We also think that we can integrate proof verification into the Balance validator node as an off-chain worker. That would allow proofs to be validated within an on-chain context.


ZKML cryptographic techniques are very useful for securing a decentralized ML-as-a-service system. They can be used to provide final proof that a response was produced by the machine-learning model that was originally registered/provided.

However, the technique has limitations: the current state of research in the field covers proving the execution of machine-learning models (based on their weights). Alternative methods have to be developed to prove the execution of other kinds of AI models, such as expert systems.

Another aspect is the resources needed to generate a proof, such as memory, CPU power, and time. For large models the resource constraints and/or proving time can be substantial; in such cases an optimistic approach may be required.

Some interesting libraries that help with ZKML integration:

Based on the results of this POC, we have decided to integrate ZKML into our V1 release. It will be part of the Balance AI Wrapper SDK.

We are currently conducting testing on a few SDK options to determine which will best complement our Balance AI Wrapper SDK.






We created BALANCE DAO to build and develop a safe crypto space.