Dfinity explained: serverless + blockchain

msfew@Foresight Ventures

Overview of Dfinity

Dfinity Foundation

Internet Computer (ICP)

Overview of ICP

ICP expands the capabilities of the Internet, enabling it to host back-end software and turning the entire ICP network into a global computing platform.

With ICP, developers can build applications and services by deploying code directly onto the ICP network, without provisioning servers or purchasing commercial cloud services.

In short, ICP takes care of the deployment, architecture, and scaling concerns of software development; all application developers have to do is write code.

1. Product positioning

  • Against cloud platforms
    ICP’s statement that “the cloud platform is too centralized” may not be entirely correct. A single cloud platform is inevitably centralized, but open-source projects such as Terraform (https://github.com/hashicorp/terraform) and libraries and plug-ins such as the Serverless Framework can connect to several cloud platforms at once, providing unified operations, maintenance, and deployment. Using multiple cloud platforms simultaneously partially solves the problem of excessive centralization. It is true that committing to a specific cloud platform’s services makes switching platforms difficult, but ICP has the same problem, possibly a more serious one because its ecosystem is more closed. The decentralization ICP emphasizes is really the consensus properties of the blockchain and the decentralization of its nodes.
  • Against Ethereum
    From a development-process standpoint, there is essentially no significant difference between building on ICP and building on Ethereum; it can even be harder at times, since documentation and community support are comparatively thin. New developers need good reasons to convince themselves to choose ICP. Ethereum’s audience is larger and developers can find more help from its community, so are the advantages of developing and publishing on ICP, or on other public chains, really greater? That is a question every “Ethereum killer” chain should think about. To its credit, ICP wisely avoids competing with Ethereum head-on, choosing instead to compete with Serverless offerings on cloud platforms.

2. Programming languages

  1. Motoko:
    Dfinity’s own programming language, roughly analogous to Ethereum’s Solidity. Motoko includes many application-specific optimizations (analyzed in more depth later in this article).
    https://github.com/dfinity/motoko
  2. Rust:
    ICP provides an SDK for Rust, which runs more efficiently in the WASM container.
  3. Other languages have no SDK or official documentation yet, so Motoko or Rust is still needed as the glue for any direct interaction with ICP; in practice, development is limited to Motoko or Rust. A minimal example of what such back-end code looks like follows this list.
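
To give a feel for what “just write code” means here, below is a minimal sketch of a back-end method written as a Rust canister. It assumes the ic-cdk crate’s query attribute macro (the exact import path differs between SDK versions), so treat it as illustrative rather than copy-paste ready.

```rust
// Minimal Rust canister sketch. Assumes the ic-cdk crate's `query`
// attribute macro; the exact import path varies between SDK versions.
use ic_cdk::query;

// A read-only method exposed by the canister. The front end calls it
// directly over the ICP network, with no server of our own behind it.
#[query]
fn greet(name: String) -> String {
    format!("Hello, {}!", name)
}
```

Once deployed, there is nothing else to provision: no VM, no container image, no load balancer.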

3. Ecosystem

Consensus protocol

Key features

Consensus process

Node preparation before starting

  1. The node creates a private key and a public key, establishing an anonymous, permanent identity.
  2. A node joining the network must stake a fixed amount of tokens.
  3. The node is randomly assigned to threshold groups with other nodes (the assignment is completely random, and a node can belong to multiple threshold groups).
  4. Within a threshold group, the nodes run distributed key generation (DKG), and each node obtains a share of the group’s “verification signature” key (distinct from its personal key: the group private key is mathematically split into shares).
  5. Based on the DKG, the system also derives the threshold group’s public key and registers the group.
  6. The node then waits to participate in consensus. (The sketch below illustrates roughly what a node holds at this point.)
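
As a purely illustrative summary of the setup above, the plain Rust sketch below shows the material a node ends up holding once registration and DKG are complete. The type and field names are ours, not Dfinity’s.

```rust
// Illustrative only: the shape of what a node holds after registration
// and DKG. These are not Dfinity's actual data structures or APIs.

/// The node's own permanent, anonymous identity plus its stake.
struct NodeIdentity {
    public_key: Vec<u8>,
    private_key: Vec<u8>,
    staked_tokens: u64,
}

/// One threshold group the node was randomly assigned to.
struct ThresholdGroupMembership {
    /// Registered with the system after DKG; anyone can verify the
    /// group's aggregated signatures against this key.
    group_public_key: Vec<u8>,
    /// The node's share of the group signing key. No single node
    /// ever holds the complete group private key.
    my_key_share: Vec<u8>,
    /// How many shares must co-sign before a group signature exists.
    threshold: usize,
    group_size: usize,
}

/// A node can sit in several threshold groups at once.
struct Node {
    identity: NodeIdentity,
    groups: Vec<ThresholdGroupMembership>,
}
```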

Consensus then proceeds as follows:

  1. Select this round’s committee groups *1 *4
  2. The proposal committee produces blocks
  3. The notary committee receives and verifies blocks
  4. The random number beacon collects signatures, waits for the threshold, and produces the notarization and the random number *2
  5. Round R+1, step 0: synchronize the correct block, start round R+1, and go back to step 1 *3
  • *1
    The key point is that the process is non-interactive.
    First, a block-maker group of 400 clients is publicly selected by the random number to package transactions and generate blocks. Each of these clients produces a block, and a set of verifiers is chosen by the random number at the same time. The verifiers receive the blocks and run a protocol that ranks each block’s weight based on the random number, and a verifier signs only the block with the highest weight. There is no interaction and no Byzantine consensus round of exchanging signature data; the work is simply to keep looking for the highest-weight block within a fixed block time. Once a block has collected signatures from more than 50% of the verifiers (each signing separately, not jointly), the system automatically aggregates the signatures on that block and confirms that it is unique. As soon as a client observes the aggregated signature, it enters the next round of consensus.
    So no Byzantine agreement is carried out in the whole process; instead, three principles are followed: verifiers sign blocks according to the highest-weight rule, and the higher the weight, the more likely that chain is to be confirmed; the system generates the random number beacon once more than 50% of the signatures have been collected; and everyone enters the next round of consensus as soon as they see the new random number beacon. Together these principles eliminate redundant, invalid blocks and yield a unique block, thus reaching consensus (with the caveat that two blocks may occasionally be notarized at the same time). Communication overhead is close to zero: in a gossip-broadcast network of 400 nodes, generating a threshold signature only requires forwarding about 20KB of data. A group’s distributed signing key is distributed when the group is created, not during the consensus phase; it is generated once and used many times. (A toy simulation of one such round appears after this list.)
    A useful comparison is Algorand, which is very similar but requires two rounds of interactive Byzantine agreement. Algorand’s random lottery is secret: a node knows only whether it itself was selected, not how many nodes across the whole network were. So before Algorand’s consensus can run, the entire network must be traversed and a Byzantine round performed just to learn the full set of selected verifiers, which costs a great deal of latency and bandwidth. Combined with the Byzantine communication rounds and signature data of its very large verification groups (2,000 to 4,000 nodes), bandwidth usage under Algorand’s consensus becomes critical, and ordinary participants cannot keep up.
  • *2
    The key point is a random number algorithm that is both high-performance and secure.
    The random number algorithm Dfinity uses is a VRF (verifiable random function). A VRF involves a lot of mathematics, but we can treat it as a black box with an input side and an output side. The input is a set of client signatures, and the output is a precise random number. Only after collecting enough client signatures can the black box output the random number, and before that point no client can know or predict the output. The threshold for “enough” signatures is 50%, which is why this VRF process is also called a “threshold signature”. (The toy round simulation after this list also shows the 50% threshold.)
    This VRF has three properties. Verifiable: once the random number is output, anyone can verify it against the clients’ signatures; that is the “V” in VRF. Unique and deterministic: once more than 50% of the clients have submitted signatures, the black box produces a single, definite random number. This relies on the signature scheme itself being unique: signing the same data with the same key yields only one signature that can be legally verified. Non-interactive: although the black box collects everyone’s signatures, the clients never need to communicate with each other, so no one has a way to interfere with the random number’s generation. Among known cryptographic algorithms only BLS satisfies all three properties, and one of its authors, Ben Lynn (the “L” in BLS), is a senior engineer at DFINITY. Other randomness schemes are either extremely hard to verify (repeated hashing), cannot guarantee uniqueness, or lack a threshold design and require interaction, so a “last participant” can indirectly bias the random number (as with Ethereum’s RANDAO and VDF).
    Of course, this VRF still has a weakness. If an attacker controls more than 50% of the selected group of consensus participants, they can indirectly interfere with random number generation; even then, predicting the random number is essentially impossible and there is no way to control it directly. An attacker could also withhold signatures to stall random number generation and bring the whole system to a halt (in fairness, no consensus protocol can withstand that).
  • *3
    The key point is ultra-fast final confirmation.
    DFINITY’s consensus proceeds in rounds. Each round begins and ends when a new random number is observed from the random number beacon, and that random number is updated at the same moment the system aggregates signatures to produce the notarization. Block height in DFINITY therefore always matches the round number. A block produced in a round must reference the previous round’s notarized signature, otherwise it is considered illegal; and the notary group only signs blocks produced in the current round, never blocks from a previous round.
    This can be summarized as two rules: only blocks released in the current round can be notarized, and a block is legal only if it references a block notarized in the previous round. Together they ensure that neither block production nor notarization can be maliciously withheld, so an attacker cannot secretly prepare a shadow chain longer than the main chain to perform a double-spending attack, because the very first block of the shadow chain would already be illegal. (A sketch of this legality check follows the list.)
    Because of the notarization process described above (“verifiers sign individually, and the system aggregates the signatures into a notarization”), a unique confirmation is usually possible at the end of each round. However, two or more blocks may occasionally pass notarization at the same time, in which case final confirmation is not reached at the end of the round and the decision carries over to the next round. At that point the system waits for block production to complete, because block makers may choose to build on either of the simultaneously notarized blocks, so several forks exist at the same time.
    The verifiers then compute weights to determine the unique block: the chain with the higher weight becomes the single confirmed chain, and the verifiers sign it. So when a new random number appears in the current round, it means the fork has been pruned and the blocks of the previous round, including the transactions in them, are finally confirmed.
    Fast confirmation not only improves performance; it also cuts off forks, reduces redundancy in the system, and means a client does not need to store all the historical block data: a newly joined client only needs to start from the most recently confirmed block.
  • *4
    The key point is flexible scalability.
    Excellent randomness gives the DFINITY network almost unlimited room to expand, because the whole pipeline of random number output, block generation, and notarization is executed by committee groups of fixed size, so the addition of new client nodes does not slow it down. DFINITY randomly generates many threshold groups, so running multiple groups in parallel to achieve sharding is quite easy; Ethereum 2.0’s sharding approach is very similar. However, Dfinity’s storage and network also need to scale: node-to-node transmission and storage have real costs, and bandwidth may not be sufficient. If that side cannot be expanded, the consensus headroom may turn out to be an enhancement no one can use.
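
The toy simulation promised in *1 and *2 is sketched below in plain Rust. Real BLS threshold signatures are replaced by ordinary numbers and a standard-library hash, purely to show the control flow (highest-weight selection, the >50% threshold, and the aggregate doubling as the next random beacon); it is in no way the actual protocol code.

```rust
// Toy simulation of one DFINITY-style round, illustrating *1 and *2.
// Real BLS threshold signatures are replaced by u64s and a std hash;
// this shows the control flow only, not the actual protocol.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Proposal {
    block_id: u64,
    // In the real protocol the weight is derived from the random beacon
    // and the proposer's rank; here it is simply given.
    weight: u64,
}

fn run_round(proposals: &[Proposal], group_size: usize, online: usize) -> Option<u64> {
    // *1: every notary independently picks the highest-weight proposal.
    let best = proposals.iter().max_by_key(|p| p.weight)?;

    // Each online notary signs the best block separately (no interaction).
    // A "signature share" is modeled as hash(block_id, notary_index).
    let mut shares = Vec::new();
    for notary in 0..online.min(group_size) {
        let mut h = DefaultHasher::new();
        (best.block_id, notary as u64).hash(&mut h);
        shares.push(h.finish());
    }

    // *2: only once more than 50% of the group has signed can the shares
    // be aggregated. With real BLS the aggregate is unique and verifiable;
    // here the collected shares are just hashed together.
    let threshold = group_size / 2 + 1;
    if shares.len() < threshold {
        return None; // not enough shares: no notarization, no beacon
    }
    let mut h = DefaultHasher::new();
    shares[..threshold].hash(&mut h);

    // The aggregated signature doubles as the next round's random beacon;
    // observing it is the signal to move on to round R+1.
    Some(h.finish())
}

fn main() {
    let proposals = vec![
        Proposal { block_id: 1, weight: 7 },
        Proposal { block_id: 2, weight: 42 },
    ];
    // 399 of 400 notaries online: threshold reached, beacon produced.
    if let Some(beacon) = run_round(&proposals, 400, 399) {
        println!("round notarized; next beacon = {beacon}");
    }
    // Only 150 of 400 online: below the 50% threshold, the round stalls.
    assert!(run_round(&proposals, 400, 150).is_none());
}
```

The point of the sketch is only the shape of the logic: separate signing, a 50% threshold, and a beacon that falls out of the aggregation for free.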
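
The legality check referenced in *3 is equally small when written out. The sketch below (our own naming, not protocol code) shows the two rules: notaries only consider blocks of the current round, and a round-r block must build on a round r-1 notarization, which is why a withheld shadow chain can never become legal.

```rust
// Illustrative check of the chaining rule described in *3.
// Names are invented; this is not the protocol implementation.
struct Notarization {
    round: u64, // the aggregated threshold signature itself is omitted
}

struct Block {
    round: u64,
    parent_notarization: Notarization,
}

/// Would an honest notary in `current_round` even consider signing this block?
fn is_legal(block: &Block, current_round: u64) -> bool {
    // Only blocks produced in the current round get signed...
    block.round == current_round
        // ...and they must reference something notarized in the previous
        // round, so a secretly withheld chain falls behind immediately.
        && block.parent_notarization.round + 1 == current_round
}

fn main() {
    let honest = Block { round: 10, parent_notarization: Notarization { round: 9 } };
    let withheld = Block { round: 10, parent_notarization: Notarization { round: 7 } };
    assert!(is_legal(&honest, 10));
    assert!(!is_legal(&withheld, 10));
    println!("only the block chained to the previous notarization is legal");
}
```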

Computation and architecture

Application architecture

Starting from the bottom: the P2P layer (collects and distributes data) → the consensus layer (orders messages and writes them into blocks after verification) → the message routing layer (delivers messages to their destination) → the application execution layer (runs computation inside the WASM security sandbox)

During development, Dfinity’s developer tools abstract away all of these layers and replicate them in a local version to make development easier.

How ICP application works

https://zhuanlan.zhihu.com/p/372441370

Canister

Storage

A phrase often mentioned on Dfinity’s blog is orthogonal persistence. It is essentially a Serverless property: developers do not have to worry about data being lost or about where the data lives. In this respect ICP resembles a centralized cloud platform, with operations such as disaster recovery and backup handled behind the scenes. (The sketch below shows what this looks like from the developer’s side.)
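
In Rust canister code the same idea looks roughly like the following: state is an ordinary in-memory value that simply survives between calls, with no database layer in sight. This assumes the ic-cdk crate’s query/update attribute macros (import paths vary by SDK version), and it glosses over stable memory, which handles persistence across code upgrades.

```rust
// Sketch of orthogonal persistence from the developer's point of view.
// Assumes the ic-cdk crate's attribute macros; paths vary by SDK version.
use std::cell::RefCell;

use ic_cdk::{query, update};

thread_local! {
    // Plain in-memory state: no database, no file, no connection string.
    // The platform keeps the canister's memory alive between calls.
    static COUNTER: RefCell<u64> = RefCell::new(0);
}

// A write: goes through consensus, then the new value simply "stays".
#[update]
fn increment() -> u64 {
    COUNTER.with(|c| {
        *c.borrow_mut() += 1;
        *c.borrow()
    })
}

// A read: returns the persisted value without touching any storage API.
#[query]
fn get() -> u64 {
    COUNTER.with(|c| *c.borrow())
}
```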

We can get a sense of this from the hardware requirements Dfinity publishes for node servers.

The node server requires 16 × 32 GB (512 GB) of memory and a 3.2 TB SSD. Compared with the 4 GB of RAM and 290 GB SSD (https://nimbus.guide/hardware.html) required of an Ethereum validator node, that is quite overwhelming. On the storage side, the more striking case is of course Filecoin, which calls for 1 TB of memory and a 16 TB SSD configuration (https://zhuanlan.zhihu.com/p/337597732).

ICP’s computation and state storage basically run in memory (similar, for example, to SAP’s in-memory HANA in the centralized cloud world); the hard disk probably serves only as a mirror store, which is why the memory requirements are high. The situation resembles the difference between a game server and a web server. A game server (like ICP and traditional centralized applications) has to handle countless kinds of state (chat, equipment, damage, health, and so on); a web server (like an Ethereum application) is comparatively stateless, often just reading different data on each visit to the website. Compared with Filecoin, ICP’s focus is not storage but Serverless: the stored data is mostly ordinary application data, application state, and the application code itself, so there is no need for such extreme storage requirements.

On-chain application implementation

  • Front end: frameworks such as React or Vue on the Web, and React Native or Flutter on mobile
  • Back end: Motoko (the programming language developed by Dfinity) or any other language that can be compiled into WASM (such as Rust)
  • Data structure: Canister (for which Dfinity developed Candid, a JSON-like interface description language; a brief sketch follows this list)
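
As a rough illustration of the last point, a Rust back end can derive the Candid encoding for its types with the candid crate, so the interface description stays in step with the code. The type and fields below are invented for illustration, not taken from any real app.

```rust
// Sketch only: deriving Candid serialization for an application type.
// Assumes the `candid` crate's CandidType derive; the type is invented.
use candid::CandidType;
use serde::Deserialize;

#[derive(CandidType, Deserialize)]
struct VideoInfo {
    id: String,
    owner: String,
    caption: String,
    likes: u64,
}
```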

1. Cancan (basically a Tik Tok on ICP)

Cancan is a Tik Tok analogue on the ICP platform. Its front end uses the React web framework, and its back end is written in Dfinity’s Motoko language, making use of advanced tooling such as Vessel, the Motoko package manager. Some OS APIs are also used, testing and continuous integration are included, and the comments are very detailed. Cancan implements a well-structured, full-stack ICP application in a small amount of code, which makes it worth studying for ICP developers.

The entire application’s state lives in Canister containers on ICP, which replace servers, CDNs, databases, and so on.

  • Front end: the React front-end resources all live in a separate Canister (https://github.com/dfinity/cancan/tree/main/src/utils/canister).
  • Back end and database: video data and everything else are defined as Canister types (https://github.com/dfinity/cancan/blob/main/backend/State.mo). To handle access from millions of users, Cancan uses an advanced Motoko data type: a distributed hash table. Because the architecture is Serverless-like, Cancan does not implement traditional front-end/back-end interactions; instead, calls go more or less directly against the data, like get and post operations on a database (similar to Google’s Firebase; a simplified sketch of this pattern follows).
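
To give a flavor of that “talk to the data directly” style, here is a much-simplified sketch in Rust. Cancan itself is written in Motoko and uses a distributed hash table; this is not its code, only the shape of the pattern, and it again assumes the ic-cdk attribute macros.

```rust
// Simplified sketch of the "call the data store directly" pattern.
// Not Cancan's code (Cancan is Motoko); assumes ic-cdk macros as before.
use std::cell::RefCell;
use std::collections::HashMap;

use ic_cdk::{query, update};

thread_local! {
    // Cancan uses a distributed hash table in Motoko; a plain HashMap
    // of video id -> caption stands in for it here.
    static VIDEOS: RefCell<HashMap<String, String>> = RefCell::new(HashMap::new());
}

// The front end calls these methods directly, instead of going through
// a traditional REST back end sitting in front of a database.
#[update]
fn put_video(id: String, caption: String) {
    VIDEOS.with(|v| {
        v.borrow_mut().insert(id, caption);
    });
}

#[query]
fn get_video(id: String) -> Option<String> {
    VIDEOS.with(|v| v.borrow().get(&id).cloned())
}
```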

In short, Cancan’s example shows that once a developer has learned Motoko and become proficient in it, development on ICP is extremely efficient, with no need to worry about deployment and the other usual headaches.

2. Portal (video streaming platform)

Portal is a relatively unique live-streaming platform on ICP that lets users earn while watching and broadcasting. It is currently in alpha testing. Portal’s source code is not yet available, but the front end visibly uses the React framework. After talking with the developers, we learned that Portal’s user and token data live entirely on ICP, while the storage and distribution of data such as the video streams use the Livepeer protocol.

  • Front end: React framework; the client is still fairly rudimentary.
  • Database: the less complex data sits on ICP, while the most complex data, the streaming video, does not. For that hard part Portal uses Livepeer, which bills itself as an Ethereum-based live video streaming platform; it is essentially a video streaming provider with distributed nodes whose economic layer runs on Ethereum. Portal’s use of Livepeer is like using the Filecoin platform for cold storage, and it does not reflect a particularly large technical innovation.

All in all, Portal is a live-streaming platform whose most critical and complex parts, video distribution and storage, are handled by Livepeer and have nothing to do with ICP. Portal’s relationship with ICP amounts to simple data storage and database interaction. In effect, Portal wants to be carried by ICP: it helps promote the ICP ecosystem while getting to label itself as the first live-streaming site on the ICP platform.

Is ICP really that good?

From the user’s POV

From the developer’s POV

  • Writing data: because writes must go through consensus, they take longer than reads, currently around 2–5 seconds. That is far faster than BTC or ETH; compared with centralized cloud platforms it may seem slow, but in practice the speed is acceptable.
  • Canisters are currently single-threaded. If Canisters are upgraded to multi-threading, read and write speeds could also improve greatly. From an application-development perspective this speed is not fast, but it is definitely sufficient for an ordinary WebApp.

As a blockchain

Shortage of ICP

Canister optimization

Another point is that Canister execution is currently single-threaded. A Canister can “batch” some instructions for execution, but supporting multi-threading would greatly improve performance. These updates are closely tied to other parts of the ecosystem, however; for example, ICP’s Rust SDK also depends on the development of the Rust ecosystem itself, so improving this may technically take effort on several fronts.

Custom domain name

No killer apps

From Dfinity’s repos it is clear that the ecosystem is still not very prosperous, and there is no familiar killer application. Although ICP’s technology is very strong, no hugely popular product has appeared on the platform. The incomplete ecosystem is also related to standards that have not yet been settled, such as the token standard discussed in the next point.

Token Standard

Conclusion


Foresight Ventures is a blockchain technology-focused investment firm, focusing on identifying disruptive innovation opportunities that will change the industry
