From Storing the Past to Computing the Future: AO Super-Parallel Computer

What exactly is AO and where does the logic that supports its performance come from?

Author: YBB Capital Researcher Zeke

Preface

The two mainstream blockchain architectural designs that Web3 has differentiated into have inevitably begun to feel a little aesthetically fatiguing. Whether it is the proliferation of modular public chains, or the new L1s that constantly emphasize performance yet never demonstrate a real performance advantage, their ecosystems are essentially replicas or slight improvements of the Ethereum ecosystem. The highly homogeneous experience has long since cost users any sense of freshness. Against that backdrop, the AO protocol newly proposed by Arweave is eye-catching: it achieves ultra-high-performance computing on a storage public chain, even approaching a Web2-like experience. This seems a world apart from the scaling methods and architectural designs we are familiar with today. So what exactly is AO? Where does the logic that supports its performance come from?

How to understand AO

AO stands for Actor Oriented, after the programming paradigm used in the Actor Model of concurrent computing. Its overall design grows out of SmartWeave and likewise takes the Actor Model's message passing as its core. Simply put, AO can be understood as a "super-parallel computer" running on the Arweave network through a modular architecture. In terms of implementation, AO is not the modular execution layer we commonly see today, but a communication protocol that standardizes message passing and data processing. The core goal of the protocol is to let the different "roles" in the network collaborate through message passing, producing a computing layer whose performance can be stacked without limit, and ultimately giving Arweave, the "giant hard drive", centralized-cloud-level speed and scalable computing power in a decentralized, trust-minimized environment.

AO Architecture

The concept of AO may sound similar to the "Core Time" splitting and recombination that Gavin Wood proposed at last year's Polkadot Decoded conference: both seek to build a so-called "high-performance world computer" by scheduling and coordinating computing resources. But the two differ in essence. Exotic scheduling deconstructs and reorganizes the relay chain's blockspace resources without changing Polkadot's architecture much; although computing performance breaks through the single-parachain limit of the slot model, its ceiling is still capped by Polkadot's maximum number of idle cores. AO, in theory, can provide nearly unlimited computing power (in practice this depends on the level of network incentives) and greater freedom through the horizontal scaling of nodes. Architecturally, AO standardizes how data is processed and how messages are expressed, and completes the ordering, scheduling, and computation of information through three types of network units (subnets). Based on official documentation, its standardization approach and the functions of the different units can be summarized as follows:

Process: A process can be thought of as a collection of instructions executed in AO. When a process is initialized, it can define the computing environment it needs, including the virtual machine, scheduler, memory requirements, and necessary extensions. Each process maintains a "holographic" state (every piece of process data can be stored independently in the Arweave message log; the holographic state is explained in detail in the "AO's Verifiability Problem" section below), which means a process can work independently and its execution is dynamic, carried out by whichever compute unit is appropriate. Besides receiving messages from user wallets, a process can also receive messages forwarded from other processes via the messenger unit.
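To make the idea concrete, here is a minimal TypeScript sketch of what a process declaration might contain; the field names are illustrative assumptions for clarity, not AO's actual wire format.

```typescript
// Illustrative sketch of what a process declares when it is initialized.
// Field names are assumptions for clarity, not AO's actual wire format.
interface ProcessEnvironment {
  module: string;        // ID of the code module (e.g. a WASM binary) the process runs
  scheduler: string;     // address of the Scheduler Unit (SU) that orders its messages
  memoryLimit: string;   // memory the compute environment must provide, e.g. "500-mb"
  extensions: string[];  // optional extensions the compute unit must support
}

interface Process {
  id: string;                       // the process ID
  environment: ProcessEnvironment;
  // "Holographic" state: the process is fully reconstructable from its ordered
  // message log on Arweave, so no single node has to hold its state exclusively.
  messageLog: string[];             // IDs of messages assigned to this process, in order
}
```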

Message: Every interaction between a user (or another process) and a process is represented as a message. Messages must conform to Arweave's native ANS-104 data items, keeping the structure consistent with what Arweave natively stores so that the information can be preserved. Intuitively, a message is a bit like a transaction ID (TX ID) on a traditional blockchain, although the two are not exactly the same.
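A rough sketch of the logical fields of an ANS-104-style data item is shown below; the exact binary layout is defined by the ANS-104 standard, and this TypeScript shape is only an illustrative simplification.

```typescript
// Simplified sketch of the logical fields of an ANS-104-style data item, the envelope
// an AO message travels in. The real binary layout is defined by the ANS-104 standard.
interface DataItemTag {
  name: string;    // e.g. "Action"
  value: string;   // e.g. "Transfer"
}

interface AOMessageItem {
  owner: string;        // public key of the sender (a user wallet or another process)
  target: string;       // the process ID the message is addressed to
  anchor: string;       // optional anti-replay anchor
  tags: DataItemTag[];  // metadata describing the requested action
  data: string;         // the message payload
  signature: string;    // signature over the fields above
}
```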

Messenger Unit (MU): The MU relays messages through a process called "cranking" and is responsible for delivering communications within the system to ensure seamless interaction. Once a message is sent, the MU routes it to the appropriate destination (SU) in the network, coordinates the interaction, and recursively processes any outbox messages that are generated, continuing until every message has been handled. Beyond message relaying, the MU also provides functions such as managing process subscriptions and handling timed cron interactions.
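The recursive part of cranking can be pictured as a simple work queue. The sketch below assumes hypothetical sendToSU and computeResult helpers standing in for the MU's real calls to the SU and CU.

```typescript
// Sketch of the MU's "cranking" loop as a simple work queue.
// AOMessage is a pared-down message shape; sendToSU and computeResult are
// hypothetical helpers standing in for the MU's real calls to the SU and CU.
type AOMessage = { target: string; data: string };

interface EvalResult {
  outbox: AOMessage[];   // messages the process produced while handling the input
}

declare function sendToSU(msg: AOMessage): Promise<string>;   // returns the assigned message ID
declare function computeResult(messageId: string, processId: string): Promise<EvalResult>;

async function crank(initial: AOMessage): Promise<void> {
  const queue: AOMessage[] = [initial];
  while (queue.length > 0) {
    const msg = queue.shift()!;
    const id = await sendToSU(msg);                      // 1. hand the message to the SU for ordering
    const result = await computeResult(id, msg.target);  // 2. ask a CU to evaluate it against the process
    queue.push(...result.outbox);                        // 3. relay any generated outbox messages, recursively
  }
}
```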

Scheduler Unit (SU): When a message is received, the SU initiates a series of critical operations to maintain the continuity and integrity of the process. It assigns the message a unique, incrementing nonce to fix its order relative to other messages in the same process, and formalizes this assignment with a cryptographic signature to guarantee authenticity and sequence integrity. To further improve reliability, the SU uploads both the signed assignment and the message to the Arweave data layer, ensuring availability and immutability and preventing data tampering or loss.
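A minimal sketch of that ordering step, assuming hypothetical sign and uploadToArweave helpers in place of the real cryptographic and bundling machinery:

```typescript
// Sketch of the SU's ordering step: assign an incrementing nonce per process,
// sign the assignment, and persist assignment plus message to Arweave.
// sign and uploadToArweave are hypothetical stand-ins for the real machinery.
interface Assignment {
  processId: string;
  messageId: string;
  nonce: number;       // strictly increasing per process, fixing the message order
  timestamp: number;
  signature: string;   // SU's signature over the assignment, proving authenticity and sequence
}

declare function sign(payload: object): string;
declare function uploadToArweave(items: object[]): Promise<void>;

const lastNonce = new Map<string, number>();   // last nonce issued per process

async function schedule(processId: string, messageId: string): Promise<Assignment> {
  const nonce = (lastNonce.get(processId) ?? -1) + 1;
  lastNonce.set(processId, nonce);

  const body = { processId, messageId, nonce, timestamp: Date.now() };
  const assignment: Assignment = { ...body, signature: sign(body) };

  await uploadToArweave([assignment]);   // availability + immutability on the data layer
  return assignment;
}
```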

Compute Unit (CU): CUs compete with one another in a peer-to-peer compute marketplace for the work of resolving process state for users and SUs. Once the state computation is finished, the CU returns to the caller a signed attestation containing the result for that specific message. A CU can also generate and publish signed state attestations that other nodes can load, which of course requires a certain percentage of fees.
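Conceptually, a CU's job reduces to "evaluate, then sign what you computed". The sketch below uses hypothetical evaluate and sign stand-ins, not AO's actual interfaces:

```typescript
// Sketch of a CU's job: evaluate a message against a process and sign what it computed.
// evaluate and sign are hypothetical stand-ins, not AO's actual interfaces.
interface ComputeAttestation {
  processId: string;
  messageId: string;
  output: string;      // result of applying the message to the process state
  signature: string;   // CU's signature, letting the caller (or other nodes) check the claim
}

declare function evaluate(processId: string, messageId: string): Promise<string>;
declare function sign(payload: object): string;

async function compute(processId: string, messageId: string): Promise<ComputeAttestation> {
  const output = await evaluate(processId, messageId);
  return { processId, messageId, output, signature: sign({ processId, messageId, output }) };
}
```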

Operating System AOS

AOS can be regarded as an operating system, or terminal tool, within the AO protocol; it can be used to download, run, and manage processes. It provides an environment in which developers can develop, deploy, and run applications: on AOS, developers can use the AO protocol to build and deploy applications and interact with the AO network.

Operation Logic

The Actor Model advocates a philosophical view that "everything is an actor". Every component and entity in the model can be regarded as an actor, each with its own state, behavior, and mailbox; they communicate through asynchronous message passing, allowing the whole system to organize itself and run in a distributed, concurrent manner. The AO network operates on the same logic: components, and even users, are abstracted as actors that communicate through the message-passing layer, linking processes to one another. Out of this interweaving emerges a distributed system that can compute in parallel without shared state.
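The pattern itself can be shown in a few lines. The following is a generic actor sketch (private state plus a mailbox), illustrating the paradigm rather than AO's actual runtime:

```typescript
// Generic Actor Model sketch: private state, a mailbox, asynchronous message passing.
// This illustrates the paradigm, not AO's actual runtime.
type ActorMessage = { from: string; body: string };

class Actor {
  private state: Record<string, string> = {};
  private mailbox: ActorMessage[] = [];

  // Other actors never touch `state` directly; they can only enqueue messages.
  send(msg: ActorMessage): void {
    this.mailbox.push(msg);
  }

  // Each actor drains its own mailbox independently, so many actors can run
  // concurrently without any shared state.
  process(): void {
    while (this.mailbox.length > 0) {
      const msg = this.mailbox.shift()!;
      this.state[msg.from] = msg.body;   // example behavior: remember the last message per sender
    }
  }
}
```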

The following is a brief description of the steps in the message-transfer flowchart (a client-side HTTP sketch follows the list):

1. Message origination:

○ Users or processes create messages to send requests to other processes.

○ The MU (Messenger Unit) receives the message and sends it on to other services with a POST request.

2. Message processing and forwarding:

○ The MU processes the POST request and forwards the message to the SU (Scheduler Unit).

○ SU interacts with the Arweave storage or data layer to store the messages.

3. Retrieve results by message ID:

○ The CU (Compute Unit) receives a GET request, evaluates the message against the process, and returns the result for that single message identifier.

4. Retrieve Information:

○ SU receives the GET request and retrieves the message information according to the given time range and process ID.

5. Push outbox message:

○ The last step is to push all outbox messages.

○ This step involves examining the messages and spawned processes in the result objects.

○ Depending on the results of this check, steps 2, 3, and 4 can be repeated for each relevant message or spawned process.
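Viewed from the client side, the whole loop is two HTTP calls plus an optional schedule query. The sketch below uses placeholder unit URLs and route shapes, not the canonical AO endpoints:

```typescript
// Client-side view of the flow above, sketched as plain HTTP calls.
// The unit URLs and route shapes are placeholders, not the canonical AO endpoints.
const MU_URL = "https://mu.example";   // hypothetical Messenger Unit
const SU_URL = "https://su.example";   // hypothetical Scheduler Unit
const CU_URL = "https://cu.example";   // hypothetical Compute Unit

// Steps 1-3: POST the signed message to the MU (which forwards it to the SU),
// then GET the evaluated result from a CU by message ID.
async function sendAndRead(processId: string, signedDataItem: Uint8Array): Promise<unknown> {
  const post = await fetch(MU_URL, { method: "POST", body: signedDataItem });
  const { id: messageId } = await post.json();

  const res = await fetch(`${CU_URL}/result/${messageId}?process-id=${processId}`);
  return res.json();
}

// Step 4: GET the ordered messages for a process and time range from the SU.
async function readSchedule(processId: string, from: number, to: number): Promise<unknown> {
  const res = await fetch(`${SU_URL}/${processId}?from=${from}&to=${to}`);
  return res.json();
}
```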

What has AO changed? [1]

Differences from common networks:

1. Parallel processing capabilities: Unlike networks such as Ethereum, where the base layer and each Rollup actually run as a single process, AO supports any number of processes running in parallel while ensuring that the verifiability of the computation remains intact. In addition, these networks operate in a globally synchronized state, while AO processes maintain their own independent state. This independence enables AO processes to handle a higher number of interactions and computational scalability, making them particularly suitable for applications that require high performance and reliability;

2. Verifiable reproducibility: While some decentralized networks, such as Akash and the peer-to-peer system Urbit, do provide large-scale computing power, unlike AO they either do not offer verifiable reproducibility of interactions or rely on non-permanent storage solutions to preserve their interaction logs.

AO's node network is different from the traditional computing environment:

● Compatibility: AO supports various forms of processes. Whether based on WASM or the EVM, they can be connected to AO through appropriate technical means.

● Content co-creation projects: AO also supports content co-creation projects. You can publish atomic NFTs on AO, uploading data and combining it with the UDL (Universal Data License) to build NFTs on AO.

● Data composability: NFTs on AR and AO can achieve data composability, allowing an article or content to be shared and displayed on multiple platforms while maintaining the consistency and original properties of the data source. When content is updated, the AO network can broadcast these updated states to all relevant platforms to ensure content synchronization and the dissemination of the latest state.

● Value feedback and ownership: Content creators can sell their works as NFTs and transfer ownership information through the AO network to achieve value feedback for the content.

What supports the project:

1. Built on Arweave: AO leverages the features of Arweave to eliminate the vulnerabilities associated with centralized providers, such as single points of failure, data leakage, and censorship. Computations on AO are transparent and can be verified through decentralized trust minimization features and reproducible message logs stored on Arweave;

2. Decentralized foundation: The decentralized foundation of AO helps overcome the scalability limitations imposed by physical infrastructure. Anyone can easily create an AO process from their terminal without specialized knowledge, tools or infrastructure, ensuring that even individuals and small-scale entities can have global influence and participation.

AO’s Verifiability Problem

Once we understand AO's framework and logic, a common question usually follows: AO does not seem to have the global state characteristic of traditional decentralized protocols or chains, so can verifiability and decentralization really be achieved just by uploading some data to Arweave? In fact, this is precisely the ingenuity of AO's design. AO itself is an off-chain implementation; it does not solve the verifiability problem itself, nor does it change consensus. The AR team's idea is to separate the roles of AO and Arweave and connect them modularly: AO only communicates and computes, while Arweave only stores and verifies. The relationship between the two is more like a mapping. AO only needs to ensure that its interaction logs are stored on Arweave, so that its state can be projected onto Arweave to create a hologram. This holographic state projection guarantees the consistency, reliability, and determinism of the output when computing state. In addition, the message log on Arweave can also trigger the AO process in reverse to perform specific operations (a process can wake itself up according to preset conditions and schedules and carry out the corresponding dynamic operations).
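The practical consequence is that anyone can replay a process from its log and arrive at the same state. A minimal sketch, assuming a hypothetical fetchOrderedMessages helper and a deterministic placeholder transition function:

```typescript
// Sketch of the "holographic" idea: replay a process's ordered message log from Arweave
// and deterministically arrive at the same state. fetchOrderedMessages is a hypothetical
// helper, and applyMessage is a placeholder transition function.
interface LoggedMessage {
  nonce: number;   // order fixed by the SU
  data: string;
}

declare function fetchOrderedMessages(processId: string): Promise<LoggedMessage[]>;

// The transition must be deterministic: same log in, same state out.
function applyMessage(state: Record<string, unknown>, msg: LoggedMessage): Record<string, unknown> {
  return { ...state, lastNonce: msg.nonce };   // illustrative only
}

async function replayProcess(processId: string): Promise<Record<string, unknown>> {
  const log = await fetchOrderedMessages(processId);
  return log
    .sort((a, b) => a.nonce - b.nonce)                      // respect the SU's ordering
    .reduce(applyMessage, {} as Record<string, unknown>);   // fold the log into the current state
}
```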

According to Hill and Outprog, if we simplify the verification logic, we can imagine AO as an inscription-style computing framework built on a hyper-parallel indexer. We know that to verify Bitcoin inscriptions, an inscription indexer extracts JSON from the inscriptions, records balance information in an off-chain database, and completes verification through a set of indexing rules. Although the indexer verifies off chain, users can check an inscription by switching between multiple indexers or running an index themselves, so there is no need to worry about indexers misbehaving. As mentioned above, the ordering of messages, the holographic state of processes, and other data are all uploaded to Arweave. Then, based on the SCP paradigm (the storage-based consensus paradigm, which here can be loosely understood as an indexer whose indexing rules live on chain; it is also worth noting that SCP appeared much earlier than inscription indexers), anyone can restore AO, or any process on AO, from the holographic data on Arweave. Users do not need to run a full node to verify trusted state: just as with swapping indexers, a user only needs to issue a query to one or more CU nodes via the SU. And because Arweave offers high storage capacity at low cost, under this logic AO developers can realize a supercomputing layer that goes far beyond the functionality of Bitcoin inscriptions.
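In the same spirit as swapping indexers, a verifier can query several independent CUs for the same message result and compare the answers. A sketch with a hypothetical fetchResult helper and placeholder CU URLs:

```typescript
// Sketch of "swap the indexer" style verification: ask several independent CUs
// for the same message result and check that they agree. fetchResult is a
// hypothetical helper; the CU URLs are placeholders.
declare function fetchResult(cuUrl: string, processId: string, messageId: string): Promise<string>;

async function crossCheck(cuUrls: string[], processId: string, messageId: string): Promise<boolean> {
  const results = await Promise.all(cuUrls.map((url) => fetchResult(url, processId, messageId)));

  // If every CU derived the same result from the same Arweave log, accept it; a disagreement
  // means at least one CU is faulty or dishonest, and the log can be replayed independently.
  return results.every((r) => r === results[0]);
}
```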

AO and ICP

Let's summarize AO's features in a few keywords: a giant native hard drive, unlimited parallelism, unlimited computation, a modular overall architecture, and holographic-state processes. All of this sounds great, but readers familiar with the various blockchain public-chain projects may notice that AO looks strikingly similar to a once-celebrated project that has since faded: the "Internet Computer", ICP.

ICP was once hailed as the blockchain world's last superstar project, highly sought after by top institutions, and it reached an FDV of 200 billion US dollars in the frenzied bull market of 2021. But as the tide receded, the value of ICP's token plummeted; by the 2023 bear market it had fallen nearly 260x from its all-time high. Yet if we set the token price aside and re-examine ICP today, its technology still has many unique strengths. Many of the amazing advantages AO boasts today were once possessed by ICP, so will AO fail the way ICP did? Let's first understand why the two are so similar. Both ICP and AO are designed around the Actor Model and focus on locally running blockchains, so they share many characteristics. An ICP subnet blockchain is formed by a number of independently owned and controlled high-performance hardware devices (node machines) running the Internet Computer Protocol. The protocol is implemented by many software components, which are bundled together as a replica, because they replicate state and computation across all nodes of the subnet blockchain.

The ICP replication architecture can be divided into four layers, from bottom to top:

Peer-to-Peer (P2P) Network Layer: Collects and advertises messages from users, from other nodes in its subnet blockchain, and from other subnet blockchains. Messages received by the peer-to-peer layer are replicated to all nodes in the subnet to ensure security, reliability, and resilience;

Consensus layer: Selects and orders the messages received from users and from different subnets to create blocks that can be notarized and finalized via a Byzantine fault-tolerant consensus, forming an evolving blockchain. These finalized blocks are passed to the message routing layer;

Message Routing Layer: Used to route user and system-generated messages between subnets, manage Dapp input and output queues, and schedule message execution;

Execution environment layer: Performs the deterministic computation involved in executing smart contracts by processing the messages received from the message routing layer.

Subnet blockchain

A so-called subnet is a collection of interacting replicas that run separate instances of the consensus mechanism in order to create its own blockchain on which a set of "containers" can run. Each subnet can communicate with other subnets and is controlled by a root subnet, which delegates its authority to individual subnets using chain key cryptography. ICP uses subnets to allow it to scale infinitely. The problem with traditional blockchains (and individual subnets) is that they are limited by the computing power of a single node machine, because each node must run everything that happens on the blockchain in order to participate in the consensus algorithm. Running multiple independent subnets in parallel allows ICP to break through this single-machine barrier.

Why it failed

As mentioned above, the purpose of the ICP architecture was, simply put, a decentralized cloud server. A few years ago this concept was just as stunning as AO is now, so why did it fail? In short, it fell between two stools, never striking a good balance between Web3 and its own vision: the project ended up neither as decentralized as Web3 nor as easy to use as a centralized cloud. In summary, there were three problems. First, ICP's program system, the Canister (the "container" mentioned above), is somewhat similar to AOS and processes in AO, but the two are not the same. ICP programs are encapsulated in Canisters, invisible to the outside world and accessible only through specific interfaces. Under asynchronous communication, this is unfriendly to contract calls between DeFi protocols, so during DeFi Summer ICP failed to capture the corresponding financial value.

The second point is that the hardware requirements were extremely high, which kept the project from being decentralized. The minimum node hardware configuration ICP published at the time looks exaggerated even now, far exceeding Solana's requirements, with storage requirements even higher than those of storage public chains.

The third point is the lack of an ecosystem. ICP is still a high-performance public chain even now, but without DeFi applications, what about other applications? Sadly, ICP has not produced a killer application since its birth; its ecosystem has captured neither Web2 users nor Web3 users. After all, with such a low degree of decentralization, why not simply use rich, mature centralized applications instead? Still, it is undeniable that ICP's technology remains top-notch. Its advantages of reverse gas, high compatibility, and unlimited scaling are still what attracting the next billion users will require, and in the current AI wave, if ICP can make good use of its architectural strengths, it may yet have a chance to turn things around.

So back to the question above: will AO fail like ICP? I personally think AO will not repeat the same mistakes. To begin with, the last two points that led to ICP's failure are not a problem for AO: Arweave already has a solid ecological foundation, holographic state projection resolves the centralization problem, and AO is also more flexible in terms of compatibility. The bigger challenges are likely to lie in the design of the economic model, support for DeFi, and an age-old question: outside of finance and storage, what form should Web3 take?

Web3 shouldn’t just be about narrative

The word that appears most often in the Web3 world is surely "narrative"; we have even grown used to measuring the value of most tokens through a narrative lens. This naturally stems from the awkward predicament of most Web3 projects: grand visions, clumsy products. By contrast, Arweave already has many fully deployed applications, all benchmarked against a Web2-level experience. Take Mirror or ArDrive: if you have used them, it is hard to feel any difference from traditional applications. Yet Arweave's value capture as a storage public chain is still very limited, and computation may be the necessary next step. In today's outside world, AI is an inevitable trend, and at this stage there are still many natural barriers to integrating it with Web3, as we have discussed in previous articles. Now, with AO's non-Ethereum modular architecture, Arweave gives Web3 x AI an excellent new piece of infrastructure. From the Library of Alexandria to the super-parallel computer, Arweave is following a paradigm of its own.

References

1. AO Quick Start: Introduction to Super Parallel Computers

2. X Space Event Recording | Is AO the Ethereum Killer? How will it promote the new narrative of blockchain?

3. ICP White Paper

4. AO Cookbook

5. AO — The most parallel computer you can imagine

6. Analysis of the reasons for the decline of ICP from multiple perspectives: unique technology and a thin ecosystem
