
How to understand Vitalik's new article's thoughts on Ethereum expansion?

WBOY
Release: 2024-03-31 21:16:16

How should we understand Vitalik Buterin's new article on Ethereum scaling? Some say Vitalik's shout-out to Blob inscriptions is outrageous. So how do Blob data packets actually work? Why is blob space not yet being used efficiently after the Cancun upgrade? And how does DAS (data availability sampling) prepare the ground for sharding?

In my opinion, post-Cancun performance is perfectly usable; what Vitalik is really worried about is the pace of Rollup development. Why? Let me explain my understanding:

As I have explained before, a Blob is a temporary data packet that the consensus layer can retrieve directly. The direct benefit is that the EVM does not access blob contents, so blobs incur much lower execution-layer computing overhead.

Under the current parameters, one Blob is 128 KB, and a batch transaction to mainnet can carry at most two blobs. Under ideal conditions, the ultimate goal is for a single mainnet block to carry about 16 MB of blob data, roughly 128 blob packets.
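The block-capacity figures above can be checked with a quick back-of-the-envelope calculation (using only the numbers stated in the text):

```python
# Blob capacity math: one blob is 128 KB; the long-term target is
# ~16 MB of blob data per mainnet block.
BLOB_SIZE_KB = 128
TARGET_BLOCK_BLOB_MB = 16

blobs_per_block = TARGET_BLOCK_BLOB_MB * 1024 // BLOB_SIZE_KB
print(blobs_per_block)  # 128 blobs fill the 16 MB target
```

So the "about 128 blobs" figure follows directly from 16 MB / 128 KB.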

For a Rollup project to achieve the best cost-effectiveness, blob storage cost, TPS transaction capacity, and the blob storage load on mainnet nodes must all be weighed together to maximize the benefit.

Take Optimism as an example: it currently sees about 500,000 transactions a day, and on average a batch transaction is sent to mainnet every 2 minutes, carrying one blob each time. Why carry more blobs than you can fill? It could of course carry two, but then neither blob would be full while storage costs would rise, which is unnecessary.

What should a Rollup do when its off-chain transaction volume grows, say to 50 million transactions a day? 1. Compress each batch so that as many transactions as possible fit into the blob space; 2. Increase the number of blobs carried; 3. Shorten the interval between batch transactions.
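A rough sketch of how blob demand scales with volume, using the figures in the text (the per-transaction byte size is an assumption for illustration, not a number from the article):

```python
# Estimate daily blob demand for a rollup (illustrative only; the
# 100 compressed bytes per transaction is an assumed figure).
def blobs_needed_per_day(tx_per_day, bytes_per_tx, blob_bytes=128 * 1024):
    """How many 128 KB blobs a day's worth of batched transactions fills."""
    total_bytes = tx_per_day * bytes_per_tx
    return -(-total_bytes // blob_bytes)  # ceiling division

# Today: ~500k tx/day, well under one blob per 2-minute batch.
print(blobs_needed_per_day(500_000, 100))
# 100x growth: 50M tx/day at the same compression ratio.
print(blobs_needed_per_day(50_000_000, 100))
```

Even a 100x jump in volume stays far below the 128-blobs-per-block ideal, which is the slack the article is pointing at.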

Because the amount of data a mainnet block can carry is constrained by the gas limit and storage costs, 128 blobs per slot is the ideal state, and we are nowhere near using that many today: Optimism generates only one blob every 2 minutes. This leaves layer 2 projects plenty of room to improve TPS, expand their user base, and grow a prosperous ecosystem.

Therefore, for quite some time after the Cancun upgrade, Rollups will face little competitive pressure over the number and frequency of blobs used, or over bidding for blob space.

The reason Vitalik mentions Blobscription inscriptions is that this type of inscription can temporarily inflate transaction volume, which drives up demand for blob usage. Inscriptions simply make a vivid example for explaining how blobs work; what Vitalik really wants to express has nothing to do with inscriptions.

In theory, if a layer 2 project sent high-frequency, high-volume batch transactions to mainnet and filled the blob space every time, it could interfere with other layer 2s' normal use of blobs, as long as it were willing to bear the high cost of fabricating those transaction batches. Under current conditions, though, this is like buying hashpower to mount a 51% attack on BTC: theoretically feasible, but with no profit motive in practice.

The purpose of introducing blobs is to lighten the EVM's burden and make nodes easier to operate and maintain, which is unquestionably a solution tailor-made for Rollups. Clearly blobs are not yet being used at full capacity, so second-layer gas fees will stay in a "lower" range for a long time. This gives the layer 2 market a long golden window to "build up troops and stockpile grain."

3) So what happens if one day the layer 2 market prospers to the point where the daily transaction volume batched to mainnet is enormous and the current blob data packets are no longer enough? Ethereum already has a solution prepared: data availability sampling (DAS).

A simple way to understand it: data that originally had to be stored by a single node can instead be spread across multiple nodes at once. For example, each node stores 1/8 of all blob data, and 8 nodes form a group that together provides the DA capability, which is equivalent to expanding current blob storage capacity 8 times. This is exactly what the future Sharding stage will do.
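The 1/8-per-node idea can be illustrated with a toy split. This is only a sketch of the storage-sharing intuition; real DAS uses erasure coding and random sampling rather than plain slicing:

```python
# Toy illustration of spreading one blob across a node group.
# (Real DAS erasure-codes the data and samples it randomly; this
# sketch only shows the per-node storage saving.)
def split_into_shares(blob: bytes, n_nodes: int = 8):
    """Give each of n_nodes an equal slice; together they hold the full blob."""
    share = -(-len(blob) // n_nodes)  # ceiling division
    return [blob[i * share:(i + 1) * share] for i in range(n_nodes)]

blob = bytes(128 * 1024)            # one 128 KB blob
shares = split_into_shares(blob)
assert b"".join(shares) == blob     # the group jointly stores everything
print(len(shares), len(shares[0]))  # 8 nodes, 16 KB each
```

Each node stores 16 KB instead of 128 KB, which is where the "8x capacity" equivalence comes from.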

But Vitalik has now reiterated this many times, quite pointedly, as if warning the mass of layer 2 project teams: stop complaining that Ethereum's DA capacity is expensive. At your current TPS, you have not pushed blob capacity anywhere near its limit, so hurry up and fire on all cylinders to grow the ecosystem and expand users and transaction volume, instead of always scheming to run away from DA and launch chains with one click.

Vitalik later added that among the current core rollups, only Arbitrum has reached Stage 1, while most rollups remain at Stage 0. Although DeGate, Fuel and others have reached Stage 2, the ultimate goal of rollup security, they are not yet well known to the wider community. It is clear that the state of the rollup industry genuinely worries Vitalik.

4) In fact, when it comes to the scaling bottleneck, Rollup layer 2 solutions themselves still have plenty of room to improve performance:

1. Use blob space more efficiently through data compression. OP-Rollup currently has a dedicated Compressor component for this work; for ZK-Rollup, generating SNARK/STARK proofs off-chain and submitting them to mainnet is itself a form of "compression";
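A minimal sketch of why batch compression saves blob space: batched calldata is highly repetitive, so even a general-purpose compressor shrinks it dramatically (the payload below is made up for illustration and is not a real rollup batch format):

```python
import zlib

# Hypothetical repetitive batch payload, standing in for rollup calldata.
batch = b"transfer:0xabc->0xdef:100;" * 1000
compressed = zlib.compress(batch, level=9)

assert zlib.decompress(compressed) == batch  # lossless round-trip
print(len(batch), len(compressed))           # compressed is far smaller
```

Production rollup compressors are purpose-built rather than zlib, but the principle is the same: fewer bytes per transaction means more transactions per 128 KB blob.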

2. Reduce layer 2's dependence on mainnet as much as possible, using optimistic (fault-proof) technology to guarantee L2 security only in special circumstances. Plasma, for example, keeps most data off-chain, while all deposit and withdrawal scenarios occur on mainnet, so mainnet can guarantee their security.

This means layer 2 should treat only critical operations such as deposits and withdrawals as strongly coupled to mainnet, which both lightens mainnet's burden and enhances L2's own performance. The Sequencer "parallel processing" capability mentioned earlier when discussing parallel EVMs, which filters, classifies, and pre-processes large numbers of transactions off-chain, and the hybrid rollup promoted by Metis, where normal transactions go through OP-Rollup and special withdrawal requests go through a ZK route, both reflect similar considerations.


Source: panewslab.com