EigenLayer: Re-staking introduces a trust revolution in middleware

Uncategorized · published 2023 by wyatt
EigenLayer is expected to complete its mainnet launch early next year and release its flagship product EigenDA.


Last November, we wrote the article "EigenLayer: Bringing Ethereum-level trust to middleware". Over the past year, EigenLayer has released its white paper, completed a $50 million Series A financing, and launched the first phase of its mainnet. The community has discussed EigenLayer and its use cases extensively. This article tracks and organizes those discussions.

Background

In the Ethereum ecosystem, some middleware services (such as oracles) do not rely entirely on on-chain logic, so they cannot directly use Ethereum's consensus and security; their trust networks must be bootstrapped separately. Usually the project team operates the service first, then introduces token incentives to attract participants and gradually decentralizes it.

There are at least two difficulties here. First, introducing an incentive mechanism carries extra costs: participants bear the opportunity cost of buying tokens to stake, and the project team bears the operating cost of maintaining the token's value. Second, even after paying these costs and building a decentralized network, the security and sustainability of that network remain uncertain. Both problems are especially acute for start-up projects.

EigenLayer's idea is to have existing Ethereum stakers stake again (restaking), thereby providing economic security for these middleware services (Actively Validated Services, AVSs). Restakers earn rewards for working honestly; if they misbehave, their original Ethereum staked position is slashed.

The benefits are twofold: first, the project team does not need to bootstrap a new trust network itself but outsources it to Ethereum validators, minimizing capital costs; second, the economic security of the Ethereum validator set is very solid, so security is guaranteed to a meaningful degree. From the perspective of Ethereum stakers, restaking provides additional yield, and as long as they do not act maliciously, the overall risk is controllable.

Sreeram, the founder of EigenLayer, mentioned three use cases and trust models of EigenLayer on Twitter and in podcasts:

  • Economic Trust. The reuse of Ethereum staking exposure: staking higher-value tokens means more robust economic security, as discussed above.

  • Decentralized Trust. Malicious behavior in some services (such as secret sharing) may not be attributable and therefore cannot rely on slashing. Such services need a sufficiently decentralized, independent group of operators to prevent collusion.

  • Ethereum Validator Commitments. Block producers use their staked exposure as collateral to make credible commitments. We give examples below to illustrate this further.


System Participants


EigenLayer acts as an open market connecting three major participants:

  • Restakers. Holders of Ethereum staking exposure can participate in restaking by pointing their withdrawal credentials to EigenLayer, or simply by depositing an LST such as stETH. Restakers who cannot run AVS nodes themselves can delegate their exposure to operators.

  • Operators. Operators accept delegations from restakers and run AVS nodes. They are free to choose which AVSs to serve; once they serve an AVS, they must accept the slashing rules it defines.

  • AVSs. As the demand side/consumers, AVSs pay restakers in exchange for the economic security they provide.

With these basic concepts, let's look at the specific use cases of EigenLayer.

EigenDA

EigenDA is the flagship product of EigenLayer. Its design derives from Danksharding, the Ethereum scaling solution; Data Availability Sampling (DAS) is also widely used in DA projects such as Celestia and Avail. In this section, we briefly introduce DAS, then look at EigenDA's implementation and its innovations.

  • DAS


As a precursor to Danksharding, EIP-4844 introduces the "blob-carrying transaction", in which each transaction carries an extra blob of roughly 125 KB of data. In the context of data-sharding scaling, this extra data undoubtedly increases the burden on nodes. Is there a way for a node to download only a small portion of the data, yet verify that all of it is available?

DAS lets a node randomly sample small portions of the data many times. Each successful sample increases the node's confidence that the data is available; once confidence reaches a preset level, the data is considered available. However, an attacker could still hide a small portion of the data, so we also need some fault-tolerance mechanism.
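To make the confidence claim concrete, here is a toy calculation in Python (the parameters are illustrative, not EigenDA's). Assuming the fault tolerance discussed next forces an adversary to withhold at least half the chunks to make data unrecoverable, a handful of samples already makes fooling a node astronomically unlikely:

```python
# Toy model of sampling confidence. If an adversary withholds a fraction f
# of the chunks, each independent uniform sample hits an available chunk
# with probability (1 - f), so the chance of fooling a node after k
# successful samples is (1 - f) ** k.

def fooling_probability(f: float, k: int) -> float:
    """Probability that k independent uniform samples all succeed even
    though a fraction f of the data is withheld."""
    return (1 - f) ** k

def samples_needed(f: float, target: float) -> int:
    """Smallest k that pushes the fooling probability below `target`."""
    k, p = 0, 1.0
    while p > target:
        k += 1
        p *= (1 - f)
    return k

# With 2x erasure coding, making data unrecoverable requires withholding
# at least half the chunks (f >= 0.5):
print(samples_needed(0.5, 1e-9))  # 30 samples push the risk below 1e-9
```

Note how quickly confidence compounds: each extra sample halves the adversary's chance, which is why light nodes can get strong guarantees from tiny downloads.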

DAS uses erasure coding. The main idea of erasure coding is to divide data into multiple blocks and encode them to generate additional redundant blocks. The redundant blocks contain partial information about the original blocks, so when some blocks are lost or damaged they can be recovered from the redundant blocks. In this way, erasure coding provides redundancy and reliability for DAS.
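As a sketch of the idea (illustration only; production DA systems use optimized codes over different fields), here is a toy Reed-Solomon-style erasure code: the data blocks become polynomial coefficients, and any m of the n evaluations recover them via Lagrange interpolation.

```python
# Toy Reed-Solomon-style erasure code over a prime field.
P = 2**31 - 1  # field modulus (a Mersenne prime)

def eval_poly(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def mul_linear(poly, a):
    """Multiply a coefficient list by (x - a) mod P."""
    out = [0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k] = (out[k] - a * c) % P
        out[k + 1] = (out[k + 1] + c) % P
    return out

def encode(data, n):
    """Treat m data blocks as coefficients; emit evaluations at x = 1..n.
    Any m of the n encoded blocks recover the original data."""
    return [eval_poly(data, x) for x in range(1, n + 1)]

def decode(points, m):
    """Lagrange interpolation: rebuild the m coefficients from m (x, y) pairs."""
    coeffs = [0] * m
    for i, (xi, yi) in enumerate(points[:m]):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points[:m]):
            if i != j:
                basis = mul_linear(basis, xj)
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P  # modular inverse via Fermat
        for k in range(m):
            coeffs[k] = (coeffs[k] + scale * basis[k]) % P
    return coeffs

data = [11, 22, 33, 44]            # m = 4 original blocks
codeword = encode(data, 8)         # n = 8 blocks: 2x redundancy
survivors = list(zip(range(1, 9), codeword))[::2]  # lose every other block
assert decode(survivors, 4) == data
```

The final assertion is the fault-tolerance property the article describes: half the blocks are lost, yet the original data is fully recovered from the survivors.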

In addition, we also need to verify whether the redundant blocks are correctly encoded, because the original data cannot be reconstructed using the wrong redundant blocks. Danksharding uses KZG (Kate-Zaverucha-Goldberg) commitment. KZG commitment is a method for verifying polynomials, which can prove that the value of the polynomial at a specific position is consistent with the specified value.

The prover chooses a polynomial p(x) and uses p(x) to compute commitments to each data block, called C1, C2, …, Cm. The prover publishes the commitments along with the data block. To verify the encoding, the verifier can randomly sample t points x1, x2, …, xt and ask the prover to open the commitments at these points: p(x1), p(x2), …, p(xt). Using Lagrange interpolation, the verifier can reconstruct the polynomial p(x) from these t points. The verifier can now recompute the commitments C1', C2', …, Cm' using the reconstructed polynomial p(x) and the data block and verify that they match the published commitments C1, C2, …, Cm.

In short, using KZG commitments, the verifier needs only a small number of points to verify the correctness of the entire encoding. With this, we have a complete DAS scheme.

  • How EigenDA Works


EigenLayer borrows the idea of DAS and applies it to EigenDA.

1. First, EigenDA nodes restake and register in the EigenLayer contracts.

2. After receiving the data, the Sequencer divides it into multiple blocks, uses erasure coding to generate redundant blocks, and computes the KZG commitment for each block. The Sequencer publishes the KZG commitments to the EigenDA contract as witnesses.

3. The Sequencer then distributes each data block together with its KZG commitment to the EigenDA nodes. When a node receives its block, it compares the commitment against the one on the EigenDA contract, stores the block once it checks out, and signs it.

4. Finally, the Sequencer collects these signatures, generates an aggregate signature, and publishes it to the EigenDA contract, which verifies it. Once the signature is verified, the process is complete.
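The four steps above can be sketched as follows. This is a heavily simplified illustration: plain hashes stand in for KZG commitments, a list of signer names stands in for BLS signature aggregation, and the class and method names are invented, not EigenDA's actual interfaces.

```python
import hashlib

def commit(chunk):
    """Stand-in for a KZG commitment: a plain hash of the chunk."""
    return hashlib.sha256(chunk).hexdigest()

class Contract:
    """Stand-in for the on-chain EigenDA contract."""
    def __init__(self):
        self.commitments = []
        self.aggregate = None
    def publish_commitments(self, cs):
        # Step 2: the Sequencer posts the per-block witnesses on-chain.
        self.commitments = list(cs)
    def verify_and_store(self, signatures, quorum):
        # Step 4: check the collected signatures and finish the process.
        assert len(signatures) >= quorum, "not enough signatures"
        self.aggregate = signatures

class Node:
    """An EigenDA node that stores chunks and signs for them."""
    def __init__(self, name):
        self.name = name
        self.stored = {}
    def receive(self, chunk, commitment, contract):
        # Step 3: verify the chunk against the on-chain commitment,
        # store it, then sign.
        assert commit(chunk) == commitment
        assert commitment in contract.commitments
        self.stored[commitment] = chunk
        return (self.name, commitment)  # stand-in for a signature

contract = Contract()
nodes = [Node(f"node{i}") for i in range(4)]
chunks = [b"chunk-a", b"chunk-b", b"chunk-c", b"chunk-d"]  # post-erasure-coding
commitments = [commit(c) for c in chunks]
contract.publish_commitments(commitments)                  # step 2
sigs = [n.receive(c, k, contract)                          # step 3
        for n, c, k in zip(nodes, chunks, commitments)]
contract.verify_and_store(sigs, quorum=3)                  # step 4
```

The key structural point survives the simplification: nodes only sign after checking their chunk against the on-chain witness, and the contract only finalizes once enough signatures arrive.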

In the above process, since the EigenDA node only claims that it has stored the data block through signing, we also need a way to ensure that the EigenDA node is not lying. EigenDA uses Proof of Custody.

The idea of proof of custody is to plant a "bomb" in the data: if a node signs it, the node gets slashed. Implementing proof of custody requires two ingredients: a secret value that distinguishes the DA nodes and prevents cheating, and a node-specific function that takes the DA data and the secret value as input and outputs whether the bomb is present. A node that has not stored all the data it is responsible for cannot compute this function. Dankrad has shared more details of proof of custody on his blog.
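A minimal sketch of the custody-bomb idea (the construction here is invented for illustration; the real scheme described on Dankrad's blog differs):

```python
import hashlib

def custody_bit(secret, chunks):
    """Return the 'bomb' bit: the low bit of H(secret || H(c1) || H(c2) ...).
    Because every chunk feeds the digest, a node missing any chunk cannot
    compute the bit and can only guess (50% chance per period)."""
    h = hashlib.sha256(secret)
    for c in chunks:
        h.update(hashlib.sha256(c).digest())
    return h.digest()[-1] & 1

secret = b"node-7-secret"          # hypothetical per-node secret value
chunks = [b"c0", b"c1", b"c2"]
bit = custody_bit(secret, chunks)  # only computable with all chunks stored
assert bit in (0, 1)
```

The per-node secret is what prevents cheating: a lazy node cannot copy another node's custody bit, because each node's function output differs.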


If a lazy node appears, anyone can submit a proof to the EigenDA contract, which verifies it; if the proof passes, the lazy node is slashed.

In terms of hardware, computing KZG commitments over 32 MB of data within 1 second requires roughly a 32-64 core CPU, but this requirement applies only to the Sequencer and places no burden on EigenDA nodes. In the EigenDA testnet, 100 EigenDA nodes achieved a throughput of 15 MB/s, while the per-node download bandwidth requirement was only 0.3 MB/s (far below the requirement for running an Ethereum validator).

In summary, EigenDA decouples data availability from consensus: the propagation of data blocks is no longer limited by the bottlenecks of consensus protocols and low P2P network throughput. This is because EigenDA effectively piggybacks on Ethereum consensus: publishing KZG commitments and aggregate signatures, verifying signatures in the smart contract, and slashing malicious nodes all happen on Ethereum, which provides the consensus guarantees, so no new trust network needs to be bootstrapped.

  • Problems of DAS

Currently, DAS as a technology itself has some limitations. We need to assume that malicious adversaries will take every possible means to fool light nodes into accepting false data. Sreeram once explained the following in his tweet.

For a single node to gain high enough confidence that the data is available, the following are required:

  • Random Sampling: each node must select its samples independently and at random, and the adversary must not know who requested which samples. Otherwise, the adversary could adapt its strategy to deceive particular nodes.

  • Concurrent Sampling: many nodes must sample simultaneously, so that an attacker cannot distinguish one node's samples from another's.

  • Private IP Sampling: each queried data block should be requested from an anonymous IP. Otherwise, the adversary can tell which node is sampling and selectively serve each node only the parts it queried while withholding the rest.

We can have multiple light nodes perform random sampling to satisfy concurrency and randomness, but there is currently no good way to satisfy private IP sampling. Therefore, there are still attack vectors against DAS, which currently only provides weak guarantees. These issues are still being actively addressed.

EigenLayer & MEV


Sreeram discussed EigenLayer's place in the MEV stack at the MEVconomics Summit. Proposers can implement the following four features around the cryptoeconomic primitives of staking and slashing; this is the third point mentioned above, the validator-commitment use case.

Event-driven Activation

Protocols such as Gelato can react to specific on-chain events: they continuously monitor the chain and, once an event occurs, trigger predefined actions, usually carried out by third-party listeners/executors.

They are called "third parties" because there is no link between the listener/executor and the proposer who actually controls the blockspace. If a listener/executor triggers a transaction but (for whatever reason) the proposer does not include it in a block, the failure cannot be attributed, so no deterministic economic guarantee is possible.

If this service is provided by proposers who participate in the re-staking, they can make a credible commitment to trigger the operation, and if these transactions are not included in the block, the proposer is slashed. This provides stronger guarantees than third-party listeners/executors.

In practical applications (such as lending protocols), one purpose of the overcollateralization ratio is to cover price fluctuations within a certain time frame, which relates to the time window before liquidation: a higher overcollateralization ratio means a longer buffer period. If a large fraction of transactions adopt event-driven response strategies backed by strong proposer guarantees, then (for highly liquid assets) the volatility window the overcollateralization ratio must cover could shrink to a few block intervals, reducing the ratio and improving capital efficiency.
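As a back-of-the-envelope illustration of this capital-efficiency argument (a toy random-walk volatility model with made-up numbers, not any protocol's actual formula):

```python
import math

def min_collateral_ratio(sigma_per_block, blocks_to_liquidation,
                         safety_sigmas=3.0):
    """Collateral ratio needed so that a `safety_sigmas` adverse price move
    over the liquidation window stays covered, assuming price follows a
    random walk (volatility grows with the square root of time)."""
    buffer = safety_sigmas * sigma_per_block * math.sqrt(blocks_to_liquidation)
    return 1.0 + buffer

# Shrinking the liquidation window from 50 blocks to 3 (event-driven
# proposer commitments) shrinks the required buffer by ~sqrt(50/3) ≈ 4x:
slow = min_collateral_ratio(0.002, 50)  # ~1.042
fast = min_collateral_ratio(0.002, 3)   # ~1.010
assert fast < slow
```

The exact numbers are invented; the point is the scaling: a guaranteed few-block response window cuts the buffer roughly with the square root of the window length.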

Partial Block Auction

In the current design of MEV-Boost, proposers completely outsource blockspace to builders and can only passively receive and propose entire blocks submitted by builders. Compared with the widely distributed proposer set, there are only a few builders, and they may collude to censor or extort specific transactions, because under MEV-Boost proposers cannot include the transactions they want.


EigenLayer proposes MEV-Boost++ as an upgrade to MEV-Boost, introducing a Proposer-part of the block in which the proposer can include any transaction. The proposer can also build an alternative block B-alt in parallel and propose B-alt if the relay fails to release the Builder-part. This flexibility ensures censorship resistance and addresses relay liveness at the same time.


This is consistent with the purpose of the crList proposed in ePBS, a protocol-layer design: a broad set of proposers should participate in deciding the composition of blocks in order to achieve censorship resistance.

Threshold Encryption

In the MEV solution based on threshold encryption, a group of distributed nodes manage encryption and decryption keys. Users encrypt transactions, which will be decrypted and executed only after they are included in the block.

However, threshold encryption relies on an honest-majority assumption. If a majority of nodes act maliciously, a decrypted transaction may still not be included in a block. A proposer who restakes can make a credible commitment to include encrypted transactions, and is slashed if it fails to include the decrypted transaction. Of course, if a malicious majority withholds the decryption key, the proposer can propose an empty block.
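The threshold-key setup can be illustrated with minimal Shamir secret sharing (a sketch only: real deployments use distributed key generation and threshold decryption rather than ever reconstructing the key in one place):

```python
import random

P = 2**31 - 1  # prime field modulus

def split(secret, n, t):
    """Produce n shares of `secret`; any t of them reconstruct it.
    The secret is the constant term of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789                 # hypothetical decryption key
shares = split(key, n=5, t=3)   # 5 nodes, threshold 3
assert combine(shares[:3]) == key   # any 3 honest nodes suffice
assert combine(shares[2:]) == key
```

This also makes the honest-majority assumption concrete: with threshold t of n, any t colluding holders can decrypt early, and any n - t + 1 withholding nodes can block decryption entirely.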

Long-term Blockspace Auction

Long-term blockspace auctions allow buyers to reserve a validator's future blockspace in advance. Validators who restake can make credible commitments and are slashed if the buyer's transaction is not included before expiry. This blockspace guarantee has practical use cases: for example, oracles need to feed prices at certain intervals, and rollups publish L2 data to Ethereum L1 regularly (Arbitrum every 1-3 minutes, Optimism every 30 seconds to 1 minute).

PEPC


Finally, let's discuss PEPC (Protocol-Enforced Proposer Commitments), which has been widely debated in the Ethereum community recently. PEPC is essentially a generalization of ePBS.

Let's break down this logic chain one by one.

  • First, take the out-of-protocol PBS MEV-Boost as an example. Currently, MEV-Boost relies on the slashing mechanism at the Ethereum protocol level, that is, if the proposer signs two different block headers at the same block height, they will be fined. Because the proposer needs to sign the block header submitted by the relay, it is equivalent to binding the block header and the proposer, so the relay has reason to believe that the builder's block will be proposed. Otherwise, the proposer can only be forced to give up this slot, or propose a different block (which will result in slashing). At this time, the proposer's commitment is guaranteed by the economic security of staking/slashing.

  • Similarly, an important principle in designing ePBS is "honest builder publication safety", which ensures that blocks published by honest builders will be proposed. As an in-protocol PBS, ePBS will be incorporated into the consensus layer of Ethereum and guaranteed by the protocol.

  • PEPC is a further extension of ePBS. ePBS guarantees that "the builder's block will be proposed". Extending this to partial block auctions, parallel block auctions, future block auctions, and so on, we can let proposers commit to much more, with the protocol layer ensuring those commitments are executed correctly.

There is a subtle relationship between PEPC and EigenLayer. It is easy to see the similarity between the PEPC use cases above and EigenLayer's block-producer use cases. The important difference is that proposers who restake can still, in theory, break their promises, although they are financially punished for doing so; the focus of PEPC is "protocol-enforced": enforcement happens at the protocol layer, and if a commitment cannot be executed, the block is invalid.

(PS: At a glance, EigenDA parallels Danksharding, and MEV-Boost++ parallels ePBS. These services are like opt-in versions of protocol-layer designs: solutions brought to market faster, keeping pace with what Ethereum will do in the future, and maintaining Ethereum alignment through restaking.)

Don't Overload Ethereum Consensus?

A few months ago, Vitalik's article Don't Overload Ethereum Consensus was read by many as a criticism of restaking. Its emphasis, however, is a reminder or warning about maintaining social consensus, not a rejection of restaking.

In Ethereum's infancy, The DAO attack caused great controversy, and the community debated heatedly over whether to hard fork. Today the Ethereum ecosystem, including rollups, already carries a huge number of applications. It is therefore very important to avoid deep disagreements within the community and to maintain a consistent social consensus.

"Hermione creates a successful layer 2 and argues that because her layer 2 is the largest, it is inherently the most secure, because if there is a bug that causes funds to be stolen, the losses will be so large that the community will have no choice but to fork to recover the users' funds." (Labeled high-risk.)

The quote above from the original article is a good example. Today, the total TVL of L2s exceeds $10 billion; if something went wrong, the consequences would be enormous, and a community proposal to hard fork and roll back state would inevitably cause huge controversy. Suppose you and I held large funds there: what would we choose, getting the money back or upholding the blockchain's immutability? Vitalik's point is that projects relying on Ethereum should manage their risks properly and should not try to capture Ethereum's social consensus by strongly binding their survival to Ethereum.

Returning to EigenLayer, the key to managing risk is that AVSs must define objective, on-chain verifiable, and attributable slashing rules to avoid disagreement. Examples include double-signing blocks on Ethereum, signing invalid blocks of another chain in a light-client-based cross-chain bridge, and the EigenDA proof of custody discussed above. These are all clear-cut slashing rules.
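A slashing rule like "double-signing at the same height" can be checked objectively from two signed messages alone. A minimal sketch (names invented for illustration, with signature verification elided):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignedHeader:
    validator: str
    height: int
    block_hash: str  # in practice, accompanied by a verifiable signature

def is_slashable(a: SignedHeader, b: SignedHeader) -> bool:
    """Objective equivocation check anyone can submit on-chain:
    same signer, same height, conflicting blocks."""
    return (a.validator == b.validator
            and a.height == b.height
            and a.block_hash != b.block_hash)

h1 = SignedHeader("val-1", 100, "0xaaa")
h2 = SignedHeader("val-1", 100, "0xbbb")
assert is_slashable(h1, h2)       # two conflicting signed headers: provable
assert not is_slashable(h1, h1)   # the same header twice is not a fault
```

Rules of this shape are attributable by construction: the two signed messages themselves are the evidence, so no social consensus is needed to decide who misbehaved.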

Conclusion


EigenLayer is expected to complete its mainnet launch early next year and release its flagship product EigenDA. Many infrastructure projects have announced partnerships with EigenLayer. We discussed EigenDA, MEV, and PEPC above, and many interesting discussions around other use cases are ongoing. Restaking is becoming one of the market's mainstream narratives. We will continue to follow EigenLayer's progress and share our views!
