Foresight Ventures: How ZK Coprocessors Break Smart Contract Data Barriers

Foresight Ventures
23 min read · Nov 30, 2023

Author: Mike@Foresight Ventures


A simple and easy-to-understand example of the coprocessor concept is the relationship between a computer’s CPU and its graphics card (GPU). While the CPU can handle most tasks, it requires the assistance of the GPU for specific tasks where its computing power is insufficient, such as machine learning, graphic rendering, or running large-scale video games. To avoid lag or frame drops during gaming, a high-performance GPU is essential. In this scenario, the CPU acts as the processor, and the GPU serves as the coprocessor. When applied to blockchain technology, smart contracts function like the CPU, and ZK coprocessors are akin to GPUs.

The key point is to delegate specific tasks to specialized coprocessors. It’s like in a factory, where the boss knows every step of the process and could do it all alone or teach employees the entire production process. But doing so would be highly inefficient, producing one item at a time and only starting the next after finishing the previous one. So, he hires various specialized workers, each responsible for their expertise in the production chain. They work in their workshops, communicating and collaborating without interfering in each other’s tasks, focusing only on what they do best. Those with quick hands and strength handle tasks like screwing, machine operators work on machinery, and those knowledgeable in finance manage production counts and costs. This asynchronous collaboration maximizes work efficiency.

During the Industrial Revolution, capitalists discovered that this model maximized factory output. However, due to technological or other reasons, if a step in the production chain hits a barrier, it might need to be outsourced to a specialized manufacturer. For instance, in a company that produces mobile phones, the chips might be manufactured by a separate, specialized chip company. Here, the phone company acts as the central processor, and the chip company is the coprocessor. Coprocessors can effortlessly handle specific, challenging, and tedious tasks asynchronously, tasks that the central processor itself might struggle with.

In a broad sense, ZK coprocessors have a wide range of interpretations in the Web3 domain. Some projects refer to themselves as coprocessors, while others use the term ZKVM, but the underlying idea is the same: they allow smart contract developers to prove off-chain computations in a stateless manner based on existing data. Put simply, this means offloading some of the on-chain computational work to off-chain processes to reduce costs and improve efficiency.

At the same time, ZK technology is used to ensure the reliability of these computations and to protect the privacy of specific data. In the data-driven world of blockchain, this becomes particularly crucial. This approach mirrors the concept of specialized tasks in a factory setting, where certain complex or data-intensive operations are delegated to specialized units (the ZK coprocessors), akin to outsourcing to a specialized manufacturer in the industrial analogy. This delegation not only optimizes efficiency but also maintains data integrity and privacy, much like how different departments in a factory handle different aspects of production without compromising the overall workflow.

Why do we need ZK coprocessors?

One of the biggest challenges facing smart contract developers is the high cost associated with on-chain computation. Since every operation on the blockchain consumes gas, the costs for complex application logic can quickly become prohibitively expensive. Although archival nodes in the blockchain’s Data Availability (DA) layer can store historical data — which is why off-chain analytics applications like Dune Analytics, Nansen, 0xscope, and Etherscan have access to extensive data from the blockchain that can be traced back a long time — it’s not straightforward for smart contracts to access all this data. They can easily access data stored in the virtual machine’s state, recent block data, and other public smart contract data. However, accessing more extensive data can be a laborious task for smart contracts:

  • Smart contracts in the Ethereum Virtual Machine (EVM) can access the block header hashes of the most recent 256 blocks. These block headers contain all activity information in the blockchain up to the current block, compressed into 32-byte hash values using Merkle trees and the Keccak hash algorithm.
  • Although this data is hashed and packaged, it is theoretically decompressible — but not easily so. For instance, if you want to trustlessly access specific data from a previous block using a recent block header, it involves a series of complex steps. You would need to obtain off-chain data from an archival node, then construct a Merkle tree and a proof of validity for the block to verify the authenticity of that data on the blockchain. The EVM then processes these validity proofs for verification and interpretation, which is not only cumbersome and time-consuming but also expensive in terms of gas.
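The verification flow described above can be sketched with a toy binary Merkle tree. This is a conceptual illustration only: SHA-256 stands in for Ethereum's Keccak-256, and real block headers commit to Merkle-Patricia tries rather than a simple binary tree.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Ethereum uses Keccak-256; SHA-256 stands in here for illustration.
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build a binary Merkle tree bottom-up and return the 32-byte root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Recompute the root from a leaf and its siblings; compare to the commitment."""
    acc = h(leaf)
    for sibling, leaf_is_left in proof:
        acc = h(acc + sibling) if leaf_is_left else h(sibling + acc)
    return acc == root

txs = [b"tx0", b"tx1", b"tx2", b"tx3"]
root = merkle_root(txs)                 # 32-byte commitment, like a header field
proof = merkle_proof(txs, 2)
assert verify(b"tx2", proof, root)      # proof checks out
assert not verify(b"txX", proof, root)  # wrong data fails
```

The expensive part on-chain is not this check itself but obtaining the proof data and running many such checks in the EVM, which is exactly what a ZK coprocessor moves off-chain.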

The fundamental challenge here is that blockchain virtual machines like the EVM are not inherently designed for handling large amounts of data and computation-intensive tasks, such as the aforementioned decompression work. The EVM is engineered to execute smart contract code while ensuring security and decentralization, not to process large-scale data or perform complex computational tasks. Therefore, when it comes to tasks requiring significant computational resources, alternative solutions are often needed, such as off-chain computation or other scalability techniques. This is where ZK coprocessors come into play, offering a viable solution for handling these computation-intensive tasks efficiently while maintaining the integrity and security of the blockchain.

ZK rollups were in fact the earliest ZK coprocessors, supporting the same type of computation used on L1 at a much larger scale. That coprocessor operated at the protocol level; here we are concerned with ZK coprocessors at the dApp level. A ZK coprocessor enhances the scalability of smart contracts by allowing them to trustlessly delegate historical on-chain data access and computation using ZK proofs. Instead of performing all operations in the EVM, developers can move expensive operations to the ZK coprocessor and simply consume the results on-chain. By separating data access and computation from blockchain consensus, this offers a novel approach to scaling smart contracts.

The ZK coprocessor introduces a new design paradigm for on-chain applications that removes the limitation that computation must be done in a blockchain virtual machine. This allows applications to access more data and run at a larger scale than before while controlling the cost of gas, increasing the scalability and efficiency of smart contracts without compromising decentralization and security.

Technical Implementation

This section will explain how the ZK coprocessor, exemplified by Axiom’s architecture, solves specific problems, focusing on two core areas: data fetching and computation. In both processes, ZK ensures both efficiency and privacy.

Data Fetching

A crucial aspect of computation on ZK coprocessors is ensuring correct access to all input data from the blockchain history. As mentioned earlier, this is challenging because smart contracts can only access current blockchain state data within their code, and even this access is one of the most expensive parts of the on-chain computation. This means historical on-chain data like transaction records or previous balances, which are interesting inputs for computations, can’t be used locally by smart contracts to verify the coprocessor’s results.

There are three ways ZK coprocessors address this issue, balancing cost, security, and development complexity:

  1. Storing additional data in the blockchain state and using EVM storage reads to validate all data used by the coprocessor on-chain. This method is quite expensive, especially for massive data.
  2. Relying on trusted Oracles or a network of signers to verify the coprocessor’s input data. This requires users to trust the Oracle or multi-signature providers, reducing security.
  3. Using ZK proofs to check whether any on-chain data used in the coprocessor is committed in the blockchain history. Every block in the blockchain commits to all past blocks, thus committing any historical data and providing cryptographic assurance for data validity without needing additional trust assumptions from users.
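The third approach rests on the fact that each block header commits to its parent via the parent hash. Here is a toy sketch (hypothetical header format, SHA-256 instead of Keccak over RLP) of the ancestry check that a ZK circuit would prove:

```python
import hashlib

def header_hash(header: dict) -> bytes:
    # Toy serialization; real Ethereum headers are RLP-encoded and Keccak-hashed.
    raw = header["parent_hash"] + header["number"].to_bytes(8, "big") + header["data"]
    return hashlib.sha256(raw).digest()

def make_chain(n: int):
    """Build n toy headers, each committing to its parent's hash."""
    chain, parent = [], b"\x00" * 32
    for i in range(n):
        hdr = {"parent_hash": parent, "number": i, "data": f"block-{i}".encode()}
        chain.append(hdr)
        parent = header_hash(hdr)
    return chain

def prove_ancestry(chain, old: int, recent: int) -> bool:
    """Check that the recent header commits to the old one via the parent-hash
    chain. A ZK coprocessor proves exactly this link inside a circuit, so the
    contract only needs the recent block hash it already trusts."""
    expected = header_hash(chain[old])
    for i in range(old + 1, recent + 1):
        if chain[i]["parent_hash"] != expected:
            return False
        expected = header_hash(chain[i])
    return True

chain = make_chain(10)
assert prove_ancestry(chain, old=2, recent=9)
chain[5]["data"] = b"tampered"            # rewriting history breaks the links
assert not prove_ancestry(chain, old=2, recent=9)
```

Because any tampering with a historical block changes every subsequent hash, a proof anchored to a trusted recent block hash covers all earlier data without extra trust assumptions.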


Computation

Executing off-chain computation in ZK coprocessors involves transforming traditional computer programs into ZK circuits. Currently, all methods for achieving this significantly impact performance, with ZK-proof overheads ranging from 10,000 to 1,000,000 times that of local program execution. Additionally, the computational model of ZK circuits differs from standard computer architectures (e.g., all variables must be encoded modulo a large cryptographic prime, and execution might be non-deterministic), making it difficult for developers to write them directly.

Thus, three main methods exist for specifying computations in ZK coprocessors, each balancing performance, flexibility, and development difficulty:

  1. Custom Circuits: Developers write their own circuits for each application. This method has the highest performance potential but requires significant effort from developers.
  2. eDSL/DSL for Circuits: Developers write circuits for each application within a framework that abstracts ZK-specific issues (similar to using PyTorch for neural networks). However, this slightly reduces performance.
  3. zkVM: Developers write circuits within an existing virtual machine and validate their execution in ZK. Using existing virtual machines provides the simplest experience for developers, but performance and flexibility are lower due to differences in computational models between virtual machines and ZK.
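A taste of why circuit development is unusual: every value is an element of a large prime field, and a "program" is a system of arithmetic constraints. A minimal Python sketch, using the BN254 scalar-field modulus common in practice:

```python
# All circuit values live in a prime field. This is the BN254 scalar-field
# modulus used by many proving systems (e.g. Ethereum's pairing precompiles).
P = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def constraint_square(x: int, y: int) -> bool:
    """One rank-1 style constraint: x * x - y == 0 (mod P).
    A circuit is just a large system of such constraints over a witness."""
    return (x * x - y) % P == 0

x = 1234567891011
assert constraint_square(x, (x * x) % P)        # witness satisfies the constraint
assert not constraint_square(x, (x * x + 1) % P)

# Field arithmetic wraps around, unlike machine integers:
a = P - 1
assert (a + 2) % P == 1                    # "overflow" is modular, not two's-complement
assert pow(a, P - 2, P) == pow(a, -1, P)   # Fermat inverse equals the modular inverse
```

Ordinary program constructs (comparisons, byte strings, floating point) must all be re-expressed in this algebra, which is what the eDSL and zkVM approaches above abstract away at some performance cost.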


Use Cases

ZK coprocessors have a wide range of applications, theoretically covering all scenarios that dApps address. As long as tasks are related to data and computation, ZK coprocessors can reduce costs, increase efficiency, and protect privacy. Below, we explore what ZK coprocessors can specifically do in different application areas.



Taking Uniswap V4’s hook as an example:

Hooks allow developers to execute specified operations at any critical point in the lifecycle of a liquidity pool, such as before or after token trades, or before or after changes in LP positions. This customization includes:

  • Time-Weighted Average Market Makers (TWAMM).
  • Dynamic fees based on volatility or other inputs.
  • On-chain limit orders.
  • Depositing excess liquidity into lending protocols.
  • Customized on-chain oracles, like geometric mean oracles.
  • Automatically compounding LP fees into LP positions.
  • Distributing Uniswap’s MEV profits to LPs.
  • Loyalty discount programs for LPs or traders.

In essence, developers can use any on-chain historical data to customize mechanisms within Uniswap’s pools. The emergence of hooks brings more composability and higher capital efficiency to on-chain transactions. However, complex code logic can bring substantial gas burdens to users and developers, which is where ZK coprocessors become invaluable, reducing gas costs while increasing efficiency.
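As a rough illustration of the hook pattern, here is a hypothetical dynamic-fee hook sketched in Python: the pool calls the hook at a lifecycle point, and the hook prices the fee from recent volatility, the kind of historical series a ZK coprocessor could supply with a proof instead of expensive contract storage. All names here are illustrative, not Uniswap V4's actual interface.

```python
class DynamicFeeHook:
    """Toy hook: raise the swap fee when recent volatility is high.
    On-chain, the price history here would come from a ZK coprocessor
    proof rather than from contract storage."""
    def __init__(self, base_fee=0.003):
        self.base_fee = base_fee
        self.prices = []

    def before_swap(self, price: float) -> float:
        self.prices = (self.prices + [price])[-10:]   # last 10 observations
        if len(self.prices) < 2:
            return self.base_fee
        mean = sum(self.prices) / len(self.prices)
        var = sum((p - mean) ** 2 for p in self.prices) / len(self.prices)
        vol = var ** 0.5 / mean                       # relative volatility
        return self.base_fee * (1 + 10 * vol)         # fee scales with volatility

class Pool:
    def __init__(self, hook):
        self.hook = hook

    def swap(self, price: float, amount: float) -> float:
        fee = self.hook.before_swap(price)            # hook runs at the lifecycle point
        return amount * fee                           # fee charged on this swap

pool = Pool(DynamicFeeHook())
calm = [pool.swap(100 + i * 0.01, 1000) for i in range(10)]
wild = [pool.swap(100 + ((-1) ** i) * 20, 1000) for i in range(10)]
assert wild[-1] > calm[-1]    # volatile prices pay a higher fee
```

The point of the sketch is the control flow: the pool delegates policy to the hook, and the hook's data-hungry logic is exactly the part worth offloading.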

From a long-term perspective, ZK coprocessors are speeding up the convergence of DEX and CEX. Since 2022, we’ve seen DEX and CEX becoming functionally similar. Major CEXs are embracing this reality, adopting Web3 wallets, building EVM L2, and using infrastructures like Lightning Network to tap into on-chain liquidity. ZK coprocessors play a crucial role here, enabling DEXs to offer functionalities like grid trading, copy trading, fast lending, and user data utilization, previously only possible on CEXs. DeFi’s composability and freedom, along with trading of smaller on-chain tokens, are areas where traditional CEXs struggle. Plus, ZK technology ensures privacy during execution.


For airdrops, smart contracts need to query address history without exposing user addresses. Projects in DeFi lending, like Aave, Compound, Fraxlend, and Spark, can use interaction volume for airdrop criteria. ZK coprocessors, with their data fetching and privacy features, can effortlessly meet these needs.


Another exciting area for ZK coprocessors is machine learning. Granting smart contracts the power of off-chain computation opens doors for efficient on-chain machine learning. ZK coprocessors are essential for ZKML data input and computation, extracting necessary inputs from on-chain/off-chain historical data for machine learning, and then converting these computations into ZK circuits for on-chain execution.


KYC is a big business. As the Web3 world moves towards compliance, ZK coprocessors can fetch any off-chain data provided by users and create verifiable proofs for smart contracts without revealing excess user information. Projects like Uniswap’s KYC hook are already using ZK coprocessors to fetch off-chain data trustlessly. Asset, education, travel, driving, law enforcement, gaming, and transaction proofs — all kinds of on-chain and off-chain historical behaviors can be packaged into strong ZK proofs for the blockchain, maintaining user privacy.


In, the speculative aspect is stronger than the social one. Imagine adding a hook to its bonding curve, allowing users to customize it: smoothing out the curve after the trading frenzy ends to lower the entry barrier for true fans and boost private-traffic growth; letting smart contracts access users' on-chain/off-chain social graphs for easy follows across social dApps; or establishing private clubs like a Degen club that only addresses meeting a certain historical gas-consumption threshold may enter.


In traditional Web2 games, user data like purchase behavior and gaming style is crucial for operations and improving user experience, like MOBA’s ELO matching or skin purchase frequency. However, capturing this data on the blockchain has been challenging for smart contracts, often resorting to centralized solutions or being ignored. The emergence of ZK coprocessors makes decentralized solutions feasible.

Key Players

In this field, several key players have emerged, all sharing a similar approach: they generate zero-knowledge proofs from storage proofs or consensus proofs and then deliver them on-chain. However, each has its own unique technical features and functionality.


Axiom

As a leader in ZK coprocessors, Axiom focuses on enabling smart contracts to trustlessly access the entire history of Ethereum and perform arbitrary ZK-verified computations over it. Developers can submit on-chain queries to Axiom, which processes them with ZK verification and returns the results to the developers' smart contracts in a trustless manner. This lets developers build more complex on-chain applications without relying on additional trust assumptions.

To execute these queries, Axiom follows three steps:

  1. Reading: Axiom uses ZK proofs to trustlessly read data from Ethereum’s historical block headers, states, transactions, and receipts. Since all Ethereum on-chain data is encoded in these formats, Axiom can access everything that archival nodes can. Axiom validates all input data to the ZK coprocessor using ZK proofs of Merkle-Patricia trees and block header hash chains. Although challenging to develop, this method provides the best security and cost for end-users, as it ensures that all results returned by Axiom are cryptographically equivalent to on-chain computations in the EVM.
  2. Computing: After ingesting the data, Axiom applies verified computations. Developers can specify their computational logic in a JavaScript frontend, and each computation’s validity is confirmed in the ZK proof.
  3. Verifying: Axiom provides a ZK proof of validity for each query result, ensuring that (1) input data is correctly extracted from the chain, and (2) the computations are correctly applied. These ZK proofs are verified on-chain in the Axiom smart contract, ensuring that the final results can be reliably used in users’ smart contracts.
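The three steps can be mirrored in a toy pipeline. To be clear, this is not Axiom's API, and the hash-based "proof" below only illustrates how a result is bound to a data commitment and a named computation; a real ZK proof achieves this binding succinctly, verifiably, and without revealing the data.

```python
import hashlib, json

def commit(obj) -> str:
    # Stand-in for the chain's cryptographic commitments (Merkle roots etc.)
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# -- Read: the coprocessor fetches historical data off-chain ----------------
history = [{"block": i, "price": 100 + i} for i in range(5)]
chain_commitment = commit(history)          # what the contract already trusts

# -- Compute: arbitrary logic over that data --------------------------------
def average_price(data):
    return sum(d["price"] for d in data) / len(data)

result = average_price(history)

# -- Verify: a toy "proof" binds (data commitment, computation, result) -----
proof = commit({"input": chain_commitment, "fn": "average_price", "out": result})

def onchain_verify(claimed_result, proof, chain_commitment):
    """The contract checks the proof against the claim, never the raw data."""
    expected = commit({"input": chain_commitment,
                       "fn": "average_price", "out": claimed_result})
    return proof == expected

assert onchain_verify(result, proof, chain_commitment)
assert not onchain_verify(result + 1, proof, chain_commitment)
```

The structural point is that the contract verifies a claim about (data, computation, result) without re-reading the data or re-running the computation.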

As Axiom’s results are verified through ZK proofs, they are cryptographically as secure as Ethereum’s results. This approach does not rely on any assumptions about cryptoeconomics, incentive mechanisms, or game theory. Axiom believes this will provide the highest possible level of assurance for smart contract applications. The Axiom team, in close cooperation with the Uniswap Foundation, has received Uniswap Grants to build a trustless oracle on Uniswap.

RISC Zero

Bonsai: In 2023, RISC Zero launched Bonsai, a proving service that allows on-chain and off-chain applications to request and receive zkVM proofs. Bonsai is a general-purpose zero-knowledge proving service that enables any chain, protocol, or application to leverage ZK proofs. It is highly parallel, programmable, and performant.

Bonsai allows direct integration of zero-knowledge proofs into any smart contract without custom circuits. This enables ZK to be directly integrated into dApps on any EVM chain and potentially supports any other ecosystem.

The zkVM is the foundation of Bonsai, supporting a wide range of language compatibility, including provable Rust code and potentially zero-knowledge provable code in any language that compiles to RISC-V, such as C++, Rust, Go, and more. With a combination of recursive proofs, a bespoke circuit compiler, state continuations, and continuous improvements to the proving algorithm, Bonsai enables anyone to generate high-performance ZK proofs for a variety of applications.

RISC Zero zkVM: First released in April 2022, the RISC Zero zkVM can prove the correct execution of arbitrary code, enabling developers to build ZK applications in mature languages like Rust and C++. This release was a significant breakthrough in ZK software development: the zkVM made it possible to build a ZK application without having to build a circuit and without using a custom language.

In addition to being easier to build with, RISC Zero delivers on performance: the zkVM features GPU acceleration for CUDA and Metal, and parallel proving of large programs through continuations.

Previously, RISC Zero received $40 million in Series A funding from Galaxy Digital, IOSG, RockawayX, Maven 11, Fenbushi Capital, Delphi Digital, Algaé Ventures, IOBC and others.


Celer Network’s Brevis, on the other hand, focuses on multi-chain historical data fetching, which empowers smart contracts to read their complete historical data from any chain and perform fully trustless customized computations, with major support for ethereum POS, Comos Tendermint, and BSC at the moment.

Application Interface: Brevis's current system supports efficient and concise ZK proofs, providing the following ZK-proven source-chain information to decentralized application (dApp) contracts on connected blockchains:

  1. The block hash and the associated state, transaction, and receipt roots of any block on the source chain.
  2. The slot value and associated metadata for any particular block, contract, or slot on the source chain.
  3. The transaction receipt and associated metadata for any transaction on the source chain.
  4. The transaction inputs and associated metadata for any transaction on the source chain.
  5. Arbitrary messages sent by any entity on the source chain to any entity on the target chain.

Architecture Overview: The Brevis architecture consists of three main components:

  1. Relayer Network: it synchronizes block headers and on-chain messages from different blockchains and forwards them to the prover network to generate validity proofs. Afterwards, it submits the validated information and its associated proofs to the connected blockchains.
  2. Prover Network: it implements circuits for each blockchain’s light-client protocol and block updates, and generates proofs of requested slot values, transactions, receipts, and integrated application logic. To minimize proving time and cost as well as on-chain verification costs, the prover network can aggregate distributed proofs generated in parallel. It can also utilize hardware accelerators such as GPUs, FPGAs, and ASICs to increase efficiency.
  3. Verifier Contracts on Connected Blockchains: they receive the zk-verified data and associated proofs generated by the prover network, then feed the verified information to the dApp contracts.

This integrated architecture allows Brevis to deliver cross-chain data and computation with high efficiency and security, enabling dApp developers to fully utilize the potential of the blockchain. With this modular architecture, Brevis can provide completely trust-free, flexible, and efficient data access and computation capabilities for on-chain smart contracts across all supported chains. Brevis has a wide range of use cases, such as data-driven DeFi, zkBridges, on-chain user acquisition, zkDID, and social account abstraction, increasing data interoperability.


Lagrange

Lagrange and Brevis share a similar vision: enhancing interoperability between multiple chains through the ZK Big Data Stack, which enables the creation of universal state proofs across all major blockchains. By integrating with the Lagrange protocol, applications can submit aggregated proofs of multi-chain state that can subsequently be verified by contracts on other chains in a non-interactive manner.

Unlike traditional bridging and messaging protocols, the Lagrange protocol does not rely on a specific group of nodes to deliver information. Instead, it uses cryptography to coordinate proofs of cross-chain state in real time, including those submitted by untrusted users. Under this mechanism, cryptography ensures the validity and security of proofs even if the source of the information is untrustworthy.

The Lagrange protocol will initially be compatible with all EVM-compatible L1s and L2 rollups. In addition, Lagrange plans to support non-EVM-compatible chains in the near future, including but not limited to Solana, Sui, Aptos, and popular public chains based on the Cosmos SDK.

Differences between the Lagrange protocol and traditional bridging and messaging protocols:

Traditional bridging and messaging protocols are primarily used to transfer assets or messages between a specific pair of chains. These protocols typically rely on a set of intermediate nodes to confirm the source chain’s latest block header on the target chain. This model is primarily optimized for single-to-single chain relationships, based on the current state of the two chains. In contrast, the Lagrange protocol provides a more general and flexible approach to cross-chain interactions, enabling applications to interact across a broader blockchain ecosystem rather than just a single chain-to-chain relationship.

The Lagrange protocol is specifically optimized for proving the state of cross-chain contracts, rather than being limited to transferring information or assets. This allows the Lagrange protocol to efficiently handle complex analyses involving current and historical contract states that may span multiple chains. This capability enables Lagrange to support a range of complex cross-chain application scenarios, such as calculating moving averages of asset prices on multi-chain decentralized exchanges (DEXs) or analyzing the volatility of money-market interest rates across multiple chains.

Thus, Lagrange state proofs can be viewed as an optimization of many-to-one (n-to-1) chain relationships, in which a decentralized application (dApp) on one chain relies on aggregated real-time and historical state data from multiple (n) other chains. This greatly expands the functionality and efficiency of dApps, enabling them to aggregate and analyze data from many different blockchains and thereby gain deeper, more comprehensive insights. This approach differs significantly from traditional single-chain or one-to-one chain relationships and offers a much broader range of applications.
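The moving-average use case above can be sketched as a toy n-to-1 aggregation (chain names and window size are illustrative; in Lagrange's model a single aggregated state proof would attest to all of these readings at once):

```python
from collections import deque

class MultiChainFeed:
    """Toy n-to-1 aggregation: one consumer dApp combines price history
    from several source chains into a single moving average."""
    def __init__(self, window: int):
        self.window = window
        self.prices = {}                       # chain -> recent prices

    def submit(self, chain: str, price: float):
        # deque(maxlen=...) keeps only the most recent `window` observations
        self.prices.setdefault(chain, deque(maxlen=self.window)).append(price)

    def moving_average(self) -> float:
        all_prices = [p for q in self.prices.values() for p in q]
        return sum(all_prices) / len(all_prices)

feed = MultiChainFeed(window=3)
for i in range(5):                             # only the last 3 per chain count
    feed.submit("ethereum", 100 + i)
    feed.submit("arbitrum", 101 + i)
    feed.submit("polygon", 99 + i)
assert feed.moving_average() == 103.0
```

The dApp consumes one aggregate over n chains rather than verifying n separate bridge messages, which is the n-to-1 optimization described above.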

Lagrange has previously received investments from 1kx, Maven11, Lattice, CMT Digital, and Gumi Crypto.


Herodotus

Herodotus aims to provide smart contracts with synchronized on-chain data access across Ethereum layers. They believe that storage proofs can unify the state of multiple rollups and even allow synchronized reads between Ethereum layers. Simply put, it is data access across the EVM mainnet and its rollups. Currently supported are the Ethereum mainnet, Starknet, zkSync, Optimism, Arbitrum, and Polygon.

A storage proof, as defined by Herodotus, is a composite proof that can be used to verify the validity of one or more elements in a large dataset, such as the entire Ethereum blockchain.

The process of generating a Storage Proof is roughly divided into three steps:

Step 1: Obtaining a verifiable commitment from the block header accumulator

  • This step obtains a commitment that we can verify the proof against. If the accumulator does not already contain the latest block header we need to prove, we first need to prove chain continuity to ensure that we cover the range of blocks containing our target data. For example, if the data we need to prove is in block 1,000,001 and the header store in the smart contract only covers up to block 1,000,000, then we need to update the header store.
  • If the target block is already in the accumulator, then you can proceed directly to the next step.

Step 2: Proving the existence of a specific account

  • This step involves generating a proof of inclusion against the state trie made up of all accounts in the Ethereum network. The state root is an important part of deriving the block commitment hash and is part of the header store. Note that the block header hash in the accumulator may differ from the actual hash of the block, as a different hashing method may have been used for efficiency.

Step 3: Proving specific data in the account tree

  • In this step, inclusion proofs can be generated for data such as the nonce, balance, storage root, or codeHash. Each Ethereum account has a storage trie (a Merkle-Patricia tree) that holds the account’s storage data. If the data we want to prove lives in the account’s storage, additional inclusion proofs need to be generated for the specific data points in that storage.

After generating all the necessary inclusion and computation proofs, a complete storage proof is formed. This proof is then sent on-chain, where it is verified against a single initial commitment (e.g., a blockhash) or the MMR root of the header store. This process ensures the authenticity and integrity of the data while maintaining the efficiency of the system.
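Steps 2 and 3 chain two inclusion proofs: the account under the state root, then the slot under that account's storage root. A toy sketch follows, with a binary Merkle commitment and SHA-256 standing in for Ethereum's Merkle-Patricia tries and Keccak-256:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()       # stand-in for Keccak-256

def root_and_proofs(items: dict):
    """Toy Merkle commitment over sorted key/value pairs, plus an
    inclusion proof per key. Real Ethereum uses Merkle-Patricia tries."""
    pairs = sorted(items.items())
    level = [h(k + v) for k, v in pairs]
    positions = {k: i for i, (k, _) in enumerate(pairs)}
    proofs = {k: [] for k in items}
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])         # duplicate last node on odd levels
        for k, i in positions.items():
            sib = i + 1 if i % 2 == 0 else i - 1
            proofs[k].append((level[sib], i % 2 == 0))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        positions = {k: i // 2 for k, i in positions.items()}
    return level[0], proofs

def verify(key: bytes, value: bytes, proof, root: bytes) -> bool:
    acc = h(key + value)
    for sib, acc_is_left in proof:
        acc = h(acc + sib) if acc_is_left else h(sib + acc)
    return acc == root

# Storage trie of one account: slot -> value
storage = {b"slot0": b"42", b"slot1": b"7"}
storage_root, storage_proofs = root_and_proofs(storage)

# State trie: address -> account data (which embeds the storage root)
state = {b"0xabc": b"nonce:1|" + storage_root,
         b"0xdef": b"nonce:9|" + b"\x00" * 32}
state_root, state_proofs = root_and_proofs(state)

# Step 2: the account exists under the state root the header commits to.
assert verify(b"0xabc", b"nonce:1|" + storage_root,
              state_proofs[b"0xabc"], state_root)
# Step 3: the slot value exists under that account's storage root.
assert verify(b"slot0", b"42", storage_proofs[b"slot0"], storage_root)
```

Because the account leaf embeds the storage root, the two proofs compose: a single state root vouches for any slot of any account.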

Herodotus has been backed by Geometry, Fabric Ventures, Lambda Class, and Starkware.


Hyper Oracle

Hyper Oracle is designed specifically as a programmable zero-knowledge oracle that keeps the blockchain secure and decentralized. Through its zkGraph standard, Hyper Oracle makes on-chain data access and on-chain-equivalent computation both practical and verifiable, with fast finality. It offers developers a new way to interact with the blockchain.

Hyper Oracle’s zkOracle node consists of two main components: zkPoS and zkWASM.

  1. zkPoS: this component obtains the block headers and data roots of the Ethereum blockchain via zero-knowledge proofs to ensure the correctness of Ethereum consensus. zkPoS also serves as the outer circuit for zkWASM.
  2. zkWASM: it uses the data obtained from zkPoS as key input for running zkGraphs. zkWASM runs the customized data mappings defined by zkGraphs and generates zero-knowledge proofs of these operations. zkOracle node operators can choose how many zkGraphs to run, anywhere from one to all deployed zkGraphs. The generation of zk proofs can be delegated to a distributed network of provers.

The output of the zkOracle is off-chain data that developers can consume through Hyper Oracle’s zkGraph standard. The data is accompanied by zk proofs that attest to its validity and to the computation performed on it.
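Here is a hypothetical zkGraph-style mapping sketched in Python: it filters a block's events and returns the output together with a commitment binding it to the input. All names are illustrative; a real zkOracle node would execute the mapping inside zkWASM and emit a zk proof rather than a hash.

```python
import hashlib, json

def commitment(obj) -> str:
    # Toy stand-in for the zk proof that accompanies zkGraph output.
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def zkgraph_large_transfers(block: dict, threshold: int):
    """Hypothetical zkGraph-style mapping: keep a block's large Transfer
    events and emit the result with a commitment tying it to the input."""
    output = [e for e in block["events"]
              if e["type"] == "Transfer" and e["amount"] >= threshold]
    proof = commitment({"block": block["number"],
                        "threshold": threshold, "output": output})
    return output, proof

block = {"number": 18_000_000, "events": [
    {"type": "Transfer", "amount": 5_000},
    {"type": "Approval", "amount": 9_999},
    {"type": "Transfer", "amount": 120},
]}
out, proof = zkgraph_large_transfers(block, threshold=1_000)
assert out == [{"type": "Transfer", "amount": 5_000}]
# Consumers recompute the commitment to check the output matches the claim.
assert proof == commitment({"block": 18_000_000, "threshold": 1_000, "output": out})
```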

To maintain network security, only one zkOracle node is required for a Hyper Oracle network. However, multiple zkOracle nodes can exist in the network, operating against the zkPoS and each zkGraph. This allows zk proofs to be generated in parallel, which can significantly improve performance. Overall, Hyper Oracle provides developers with an efficient and secure platform for blockchain interaction by combining advanced zk technology with a flexible node architecture.

In January 2023, Hyper Oracle announced a $3 million pre-seed round of funding from Dao5, Sequoia China, Foresight Ventures, and FutureMoney Group.


Pado

Pado is a special case among ZK coprocessors. While other coprocessors focus on fetching on-chain data, Pado provides a path to fetch off-chain data, aiming to bring all Internet data into smart contracts. To a certain extent this replaces the function of an oracle, while guaranteeing privacy and removing the need to trust the external data source.

Comparison of ZK coprocessor and Oracles

  • Latency: Oracles are asynchronous and therefore have longer latency when accessing data than ZK coprocessors.
  • Cost: Many oracle designs do not require proofs of computation and are therefore cheaper, but less secure; storage proofs are more expensive but more secure.
  • Security: The security of a data transfer is capped at the security level of the oracle itself, whereas a ZK coprocessor matches the security of the chain. Moreover, because they rely on off-chain attestations, oracles are vulnerable to manipulation attacks.

The following figure illustrates the workflow of Pado:

Pado uses cryptographic nodes as back-end provers. To reduce trust assumptions, the Pado team will adopt an evolutionary strategy, gradually improving the decentralization of the prover service. The provers actively participate in the user data retrieval and sharing process, while proving the authenticity of user data obtained from web data sources. Uniquely, Pado utilizes MPC-TLS (multi-party computation TLS) and IZK (interactive zero-knowledge proofs) to enable the provers to attest to data “blindly”: the prover never sees any of the original data, including public and private user information, yet can still cryptographically establish the origin of any transmitted TLS data.

1. MPC-TLS: TLS is a security protocol used to protect the privacy and data integrity of Internet communications. When you visit a website and see a “lock” icon and “https” in the URL, your visit is protected by TLS. MPC-TLS mimics the functionality of a TLS client, enabling Pado’s prover to collaborate with the TLS client to perform the following tasks:

  • Establishing a TLS connection, including computing the master secret, session keys, and authentication information.
  • Performing queries in the TLS channel, including generating encrypted requests and decrypting server responses.

Note that these TLS-related operations are performed between the client and the prover via a two-party computation (2PC) protocol. The design of MPC-TLS relies on a number of cryptographic techniques, such as garbled circuits (GC), oblivious transfer (OT), and IZK.

2. IZK: An interactive zero-knowledge proof is a zero-knowledge proof in which the prover and verifier interact; the verifier ends by accepting or rejecting the prover’s statement. IZK protocols have several advantages over non-interactive NIZKs (e.g., zk-STARKs or zk-SNARKs), such as scalability to large statements, low computational cost, no need for a trusted setup, and minimal memory usage.
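For intuition, the classic Schnorr identification protocol is a textbook interactive zero-knowledge proof (illustrative only, not Pado's actual construction): the prover commits, the verifier challenges, and the response convinces the verifier without revealing the secret. Small parameters keep the sketch readable.

```python
import secrets

# Schnorr identification over a small safe-prime group (p = 2q + 1).
# Real deployments use large groups or elliptic curves.
q = 1019               # prime order of the subgroup
p = 2 * q + 1          # 2039, a safe prime
g = 4                  # generator of the order-q subgroup of squares mod p

x = secrets.randbelow(q - 1) + 1   # prover's secret
y = pow(g, x, p)                   # public key: y = g^x mod p

def schnorr_round(secret: int) -> bool:
    """One interactive round: commit -> challenge -> respond -> check."""
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)                       # prover commits to randomness
    c = secrets.randbelow(q - 1) + 1       # verifier's nonzero challenge
    s = (r + c * secret) % q               # prover responds using its secret
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier accepts or rejects

assert all(schnorr_round(x) for _ in range(20))               # honest prover passes
assert not any(schnorr_round((x + 1) % q) for _ in range(20))  # wrong secret fails
```

The verifier learns only that the prover knows x, never x itself, which is the "blind" property Pado relies on when attesting to TLS data.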

Pado is actively developing a KYC hook for Uniswap, seeking more scenarios for bringing data on-chain, and has been selected for the first Consensys Fellowship program.

Future Vision

ZK coprocessors make it possible for smart contracts to fetch more data and access off-chain computing resources at a lower cost without hurting decentralization, while decoupling smart contract workflows and increasing scalability and efficiency.

From the demand side alone, ZK coprocessors are a necessity. Taking just the DEX vertical, the potential of hooks is huge: if Sushiswap does not adopt hooks, it will not be able to compete with Uniswap; but implementing hooks without a ZK coprocessor becomes very expensive for developers and users, because hooks introduce new logic and make smart contracts more complex, which is counterproductive. For now, using a ZK coprocessor is the best solution. Whether for data fetching or computation, the various methods have different trade-offs, and the right coprocessor is the one suited to a specific function. The market for on-chain verifiable computation is broad, and it will surface new value in many more fields.

In the future development of blockchains, ZK coprocessors have the potential to break traditional Web2 data barriers, so that data is no longer an island. In pursuit of greater interoperability, ZK coprocessors will become powerful middleware, reducing the cost of data fetching, computation, and verification for smart contracts while preserving security, privacy, and trustlessness. They free up the data network, open up more possibilities, and can underpin truly intent-centric applications as well as the infrastructure for on-chain AI agents. Only imagination sets the limit.

Imagine a future scenario where the high trustworthiness and privacy of ZK for data verification are utilized. Rideshare drivers could establish an aggregated network beyond their own platforms. This data network could encompass Uber, Lyft, Didi, Bolt, and others. Each driver could contribute their platform data — a piece from you, a piece from me — and assemble it on the blockchain. Gradually, a network independent of their own platforms is established, aggregating all the driver data to become a major aggregator for rideshare information. At the same time, this system could allow drivers to remain anonymous, protecting their privacy.


Thanks to Yiping Lu for the guidance and advice.

About Foresight Ventures

Foresight Ventures is dedicated to backing the disruptive innovation of blockchain for the next few decades. We manage multiple funds: a VC fund, an actively-managed secondary fund, a multi-strategy FOF, and a private market secondary fund, with AUM exceeding $400 million. Foresight Ventures adheres to the belief of “Unique, Independent, Aggressive, Long-Term mindset” and provides extensive support for portfolio companies within a growing ecosystem. Our team is composed of veterans from top financial and technology companies like Sequoia Capital, CICC, Google, Bitmain and many others.








