AI on AO Conference Transcript | AO Protocol’s Three Major AI Technology Breakthroughs: Building a Decentralized Large Language Model

Updated 8 months ago by wyatt
This is the beginning of introducing market intelligence into the decentralized execution environment.

Written by Kyle

Review: Lemon

Source: Content Guild – News

Thanks to everyone for coming today. We have a bunch of super exciting developments to share with you about AO technology. We're going to start with a demo, and then Nick and I are going to try to build an AI agent. The agent will live in a smart contract and use a large language model to buy and sell based on the sentiment of the chat in the system you're about to hear about. We'll build it from scratch live today, so hopefully it goes well.

Yes, you will see how to do it all yourself.

The technological advancement here really takes AO far beyond other smart contract systems. That was already true before, and now it looks more and more like a decentralized supercomputer rather than a traditional smart contract network, while still having all the properties of a smart contract network. So we're very excited to share all of this with you. Without further ado, let's start the demo, then we'll discuss it, and we'll build something together live.

Hello everyone, and thanks for joining us today. We are very excited to announce three major technical updates to the AO Protocol. Together, they achieve a big goal of supporting large language models running in a decentralized environment as part of smart contracts. These are not just toy models, small models, or models that are compiled into their own binary.

This is a complete system that allows you to run almost all the major models that are currently open source and available. For example, Llama 3 runs in smart contracts on the chain, the same is true for GPT, and Apple's model, etc. This is the result of the joint efforts of the entire ecosystem, and there are three major technological advances that are also part of this system. So I am very excited to introduce all of this to you.


The general situation is that LLMs (large language models) can now run inside smart contracts. You may have heard many times about decentralized AI and AI cryptocurrencies. In fact, with the exception of the system we're going to discuss today, almost all of those systems use AI as an oracle: running the AI off-chain and then putting the execution results on-chain for some downstream use.

We're not talking about that. We're talking about doing large language model inference as part of smart contract state execution. This is all made possible by the hard drives AO has access to and the hyper-parallel processing AO does, which means you can run heavy computations without affecting the other processes you're using. We think this will allow us to create a very rich decentralized autonomous agent financial system.


So far in DeFi, we have basically been able to make the execution of raw transactions trustless. Interactions in different economic games, such as lending and exchanging, are trustless. But this is only one side of the problem. Think about the global financial market.

Yes, there are all sorts of different economic primitives that play out in different ways. There are bonds, stocks, commodities, derivatives, and so on. But when we really talk about markets, it's not just that, it's actually the intelligence layer. It's the people who decide to buy, sell, borrow, or play various financial games.

So far, in the decentralized finance ecosystem, we have successfully moved all of these primitives into a trustless state. You can make an exchange on Uniswap without trusting the operator of Uniswap; in fact, fundamentally, there is no operator. But the intelligence layer of the market is left off-chain. So if you want to invest in cryptocurrency without doing all the research and participation yourself, you have to find a fund.

You can trust them with your money, and then they go and execute intelligent decisions and pass it downstream into the underlying primitive execution of the network itself. We think that in AO, we actually have the ability to move the intelligent part of the market, the intelligence that leads to the decision, to the network itself. So an easy way to understand this might be to imagine this.

A hedge fund or portfolio management application that you can trust can execute a set of intelligent instructions within the network, thus transferring the trustlessness of the network to the decision-making process. This means that an anonymous account, such as Yolo 420 Trader Number One (a bold, random trader), can create a new interesting strategy and deploy it to the network, and you can invest capital in it without actually trusting it.

You can now build autonomous agents that interact with large statistical models. And the most common large statistical models are large language models that can process and generate text. This means you can put these models into smart contracts as part of a strategy developed by someone with a novel idea and execute them intelligently in the network.


You can imagine doing some basic sentiment analysis. Like you read the news and decide this is a good time to buy or sell this derivative. This is a good time to do this or that. You can have human-like decision making done in a trustless way. This is not just theory. We created a fun meme coin called Llama Fed. Basically the idea is that it's a fiat currency simulator where a herd of llamas is represented by a Llama 3 model.
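That sentiment-to-trade loop can be sketched in a few lines of Python. The `llm_infer` helper below is a hypothetical stand-in for the on-chain Llama 3 call; a keyword check substitutes for real inference so the example stays self-contained, and all names are illustrative.

```python
# Hypothetical sketch of a sentiment-driven agent. `llm_infer` stands in
# for the on-chain model call; a keyword check replaces real inference.

def llm_infer(prompt: str) -> str:
    """Pretend LLM: classify the sentiment of the text in the prompt."""
    text = prompt.lower()
    if "rally" in text or "surge" in text:
        return "positive"
    if "crash" in text or "selloff" in text:
        return "negative"
    return "neutral"

def decide_trade(headline: str) -> str:
    """Map a headline's sentiment to a trading action."""
    sentiment = llm_infer(
        "Classify the market sentiment of this headline as "
        f"positive, negative, or neutral: {headline}"
    )
    return {"positive": "buy", "negative": "sell"}.get(sentiment, "hold")

print(decide_trade("Markets rally on strong earnings"))  # buy
```

In a real AO process the keyword check would be replaced by an actual model inference over the headline, with the same buy/sell/hold decision downstream.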

They're like a combination of a llama and the chairman of the Federal Reserve. You can go to them and ask them to give you some tokens, and they evaluate your request. The large language model itself operates the monetary policy, completely autonomously and trustlessly. We built it, but we don't control it. It runs the monetary policy and decides who should get tokens and who shouldn't. It's a really fun little application of this technology, and we hope it inspires all the other possible applications in the ecosystem.
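A toy model of that Llama Fed loop: a user burns some wAR alongside a written request, the model judges it, and tokens are minted or not. The `evaluate_request` function below is a placeholder for the on-chain Llama 3 judgment, and the 100x mint ratio is invented purely for illustration.

```python
# Toy sketch of the Llama Fed petition flow. `evaluate_request` is a
# placeholder for the LLM's judgment; the mint ratio is illustrative.

def evaluate_request(request: str) -> bool:
    """Stand-in for LLM judgment: approve requests that say please."""
    return "please" in request.lower()

def handle_petition(request: str, war_burned: float) -> float:
    """Return the number of Llama tokens minted for this petition."""
    if war_burned <= 0:
        return 0.0  # no burn, no mint
    return war_burned * 100 if evaluate_request(request) else 0.0

print(handle_petition("Please fund my trading bot", 0.5))  # 50.0
```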


To achieve this, we had to create three new fundamental capabilities for AO, some at the base protocol level and some at the application level. This is not only useful for executing large language models, but is also much more broadly and exciting for AO developers. So I’m very excited to introduce these to you today.

The first of these new technologies is Web Assembly 64-bit support. It sounds a bit like technical jargon, but I have a way of making everyone understand what it means. Basically, Web Assembly 64 support allows developers to create applications that use more than 4GB of memory. We'll get to the new limits later, but they are pretty amazing.


If you’re not a developer, think of it this way: someone asks you to write a book, and you’re excited about the idea, but they tell you to only write 100 pages. No more, no less. You can express the ideas in the book, but you can’t do it in a natural and normal way because there’s an external constraint, and you have to cater to it and change the way you write to fit it.

In the smart contract ecosystem, this is more than just a 100 page limit. I would say it's a bit like building in an early version of AO. Ethereum has a 48KB memory limit, which is like someone asking you to write a book that is only one sentence long, and you can only use the top 200 most popular English words. It's extremely difficult to build really exciting applications in this system.


And then with Solana, you have access to 10MB of working memory. That's obviously an improvement, but we're still basically talking about a page of paper. ICP, the Internet Computer Protocol, allowed for 3GB of memory. In theory it could have been more, but they had to settle at 3GB. Now, with 3GB of memory you can run a lot of different applications, but you certainly can't run large AI applications, which need to load a lot of data into main memory for fast access. That can't be done efficiently in 3GB of memory.

When we released AO in February of this year, we also had a 4GB memory limit, and this limit was actually caused by the 32-bit version of Web Assembly. Now, this memory limit has completely disappeared at the protocol level. Instead, the memory limit at the protocol level is 18EB (exabytes). This is a huge amount of storage.
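The 18EB figure follows directly from the address width: a 64-bit pointer can address 2^64 bytes. A quick check of the arithmetic:

```python
# 64-bit addressing covers 2**64 bytes; expressed in decimal exabytes
# (10**18 bytes each), that is the ~18EB protocol-level limit quoted.
address_space = 2 ** 64  # bytes addressable with a 64-bit pointer
print(f"{address_space / 10**18:.1f} EB")  # 18.4 EB
```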

It will be quite some time before that much memory is used for computation rather than long-term storage. At the implementation level, the compute units in the AO network can now access 16GB of memory, and it will be relatively easy to replace that with a larger capacity in the future without changing the protocol. 16GB is enough to run large language model computations, which means you can download and execute a 16GB model on AO today: for example, an unquantized version of Llama 3, the Falcon series, and many other models.

This is a core component that is necessary to build intelligent language-based computing systems. Now it is fully supported on-chain as part of smart contracts, which we think is very, very exciting.

This removes a major computational limitation of AO and subsequent smart contract systems. When we released AO in February of this year, you may have noticed in the video that we mentioned several times that you have unlimited computing power, but there is a limit, which is that you cannot exceed 4GB of memory. This is the removal of that limit. We think this is a very exciting progress. 16GB is enough to run almost all the models you want to run in the current AI field.

That 16GB limit can be lifted in the future without changing the protocol, which will be relatively easy, and it's already a big improvement over where we were when we first got WebAssembly 64 running. So that in itself is a huge improvement in the capabilities of the system. The second major technology that enables large language models to run on AO is WeaveDrive.


WeaveDrive allows you to access Arweave data in AO like a local hard drive. This means you can open any transaction that has been authenticated by the scheduler unit and uploaded to the network, access its data, and read it into your program just like a file on a local hard drive.


As we all know, there are currently about 6 billion transactions' worth of data stored on Arweave, so that's a huge starting dataset. It also means that when building applications in the future, the incentive to upload data to Arweave increases, because that data can also be used in AO programs. For example, when we got a large language model running, we uploaded about $1,000 worth of models to the network. But this is just the beginning.

With a network of smart contracts with a local file system, the number of applications you can build is enormous. So that's very exciting. Even better, the system we built allows you to stream data into the execution environment. It's a technical nuance, but you can imagine going back to the book analogy.

Someone says to you: I want a number from your book; I want a graph from this book. In a naive system, and even in current smart contract networks (where this alone would be a huge improvement), you would have to hand over the whole book. That is obviously inefficient, especially if the book is a large statistical model thousands of pages long.

That's extremely inefficient. Instead, what we do in AO is we allow you to read bytes directly. You go directly to the location of the graph in the book, just copy the graph into your application and execute it. This makes the system extremely efficient. This is not only a minimum viable product (MVP), it is a fully functional, well-built data access mechanism. So you have an infinite computing system and an infinite hard drive, combine them together, and you have a supercomputer.
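The byte-range idea is just seek-and-read. In this sketch a local file stands in for an Arweave transaction; WeaveDrive's actual interface may differ, but the access pattern is the same: jump to an offset and copy only the bytes you need.

```python
# Byte-range access sketch: read a slice of a file without loading the
# whole thing. A local temp file stands in for an Arweave transaction.
import os
import tempfile

def read_range(path: str, offset: int, length: int) -> bytes:
    with open(path, "rb") as f:
        f.seek(offset)         # jump straight to the bytes we need
        return f.read(length)  # copy only `length` bytes, not the file

# Demo with a stand-in "model file"
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"HEADER" + b"weights..." * 3)
    path = tmp.name

chunk = read_range(path, 6, 10)  # skip the 6-byte header
os.remove(path)
print(chunk)  # b'weights...'
```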

This has never been built before, and now it's available to everyone at minimal cost. That's what's happening with AO, and we're very excited about it. The implementation of this system is also at the operating system level. So we made WeaveDrive a subprotocol of AO, which is a compute unit extension that anyone can load. It's interesting because it's the first of its kind.

AO has always had the ability for you to add extensions to the execution environment. Just like if you have a computer and you want to plug in more memory, or plug in a graphics card, you physically put a unit into the system. You can do this with AO's compute units, and that's what we're doing here. So at the OS level, you now have a hard drive, which is just a file system that represents data storage.

What that means is that not only can you access this data in AO, building applications the way you normally would, but you can actually access it from any application you bring to the network. So it's a broadly applicable capability, accessible to everyone building on the system, no matter what language they're writing in: Rust, C, Lua, Solidity, whatever, it can all access the data as if it were native to the system. Building this system also pushed us to create sub-protocols and ways for other compute units to be extended, so that others can build exciting things in the future as well.

Now that we have the ability to run computations in arbitrarily sized memory sets and can load data from the network into processes within the AO, the next question is how to perform inference itself.

Since we chose to build AO on Web Assembly as its primary VM, it was relatively easy to compile and run existing code in that environment. Since we built WeaveDrive to expose it as an OS-level file system, it was actually relatively easy to run Llama.cpp (an open source large-scale language model inference engine) on the system.


This is very exciting because it means that you can run not only this inference engine, but many other engines easily. So the last component to make big language models run inside AO is the big language model inference engine itself. We ported a system called Llama.cpp, which sounds a bit mysterious, but it is actually the current leading open source model execution environment.

Running this directly inside an AO smart contract is actually relatively easy once you can have an arbitrary amount of memory in the system and load arbitrary amounts of data from Arweave.
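That upload-then-infer flow can be sketched in a few lines of Python: model weights stored on Arweave are fetched by transaction ID, loaded into a process, and queried. `fetch_from_arweave` and `TinyModel` are hypothetical stand-ins, not real AO or Arweave APIs.

```python
# Illustrative sketch of the flow described above. All names are
# invented stand-ins; a real engine here would be Llama.cpp.

ARWEAVE = {"tx123": b"model-weights"}  # pretend permanent storage

def fetch_from_arweave(tx_id: str) -> bytes:
    """Stand-in for reading a transaction's data via WeaveDrive."""
    return ARWEAVE[tx_id]

class TinyModel:
    """Placeholder for an inference engine such as Llama.cpp."""
    def __init__(self, weights: bytes):
        self.weights = weights

    def generate(self, prompt: str) -> str:
        # A real engine would run transformer inference here.
        return f"response to: {prompt}"

model = TinyModel(fetch_from_arweave("tx123"))
print(model.generate("Should the Llama King mint tokens?"))
```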

To enable this, we're also working with something called SIMD (Single Instruction Multiple Data) compute extensions, which allow you to run these models faster. So we've enabled that as well. What that means is that currently these models run on the CPU, but they're pretty fast. If you have asynchronous compute, it should work for your use case. Things like reading news signals and then deciding which trades to execute, that works pretty well with the current system. But we also have some exciting upgrades that we'll talk about soon, around other acceleration mechanisms, like using GPUs to accelerate inference on large language models.


Llama.cpp allows you to load not only Meta's leading model, Llama 3, but many other models as well; in fact, roughly 90% of the models on the open source model site Hugging Face can run inside the system, from GPT-2, if you want, to 253 and Monet, to Apple's own large language model system and many others. So we now have a framework to upload any model from Arweave, using the hard drive to upload the model you want to run in the system. You upload the models as ordinary data, then load them into an AO process, execute them, get the results, and work with them however you like. We think this is a package that enables applications that were not possible in the smart contract ecosystem before; and even where they might be possible now, the amount of architectural change required in existing systems like Solana is simply unpredictable and not on their roadmaps. So to show you this and make it real and easy to understand, we created a simulator, Llama Fed. The basic idea is a committee of Fed members who are llamas, both in the sense of being a Meta Llama 3 model and in the sense of chairing the Fed.

We also tell them that they are llamas, as well as being someone like Alan Greenspan or a chairman of the Federal Reserve. You can go into this little environment.

Some of you will be familiar with this environment; it's actually like the Gather space we worked in today. You can talk to the llamas and ask them to give you some tokens. It's a very interesting project: they decide whether to give you tokens based on your request. You burn some wrapped Arweave tokens, wAR tokens (provided by the AOX team), and they give you tokens based on whether they think your proposal is good or not. So it's a meme coin whose monetary policy is completely autonomous and intelligent. It's a simple form of intelligence, but it's still interesting. It evaluates your proposals and other people's proposals and runs the monetary policy. Analyzing news headlines to make intelligent decisions, or interacting with customer support and returning value: all of this can now be implemented inside a smart contract. Elliot will show you now.


Hi everyone, my name is Elliot, and today I’m going to show you Llama Land, an on-chain autonomous world running inside AO, powered by Meta’s open source Llama 3 model.


The conversations we see here aren't just between players; they're also with fully autonomous digital llamas.


For example, this llama is a human being.


But this llama is an on-chain AI.

This building contains the Llama Fed. It's like the Federal Reserve, but for llamas.

Llama Fed runs the world's first AI-driven monetary policy and mints Llama tokens.


This guy is the Llama King. You can offer him wrapped Arweave tokens (wAR) and write a request to get some Llama tokens.


The Llama King AI evaluates each request and decides whether to award Llama tokens. Llama Fed's monetary policy is completely autonomous, with no human oversight. Every agent and every room in the world is itself an on-chain process on AO.

It looks like the Llama King has granted us some tokens, and if I look in my ArConnect wallet, I can see they're already there. Not bad. Llama Land is just the first AI-driven world implemented on AO. It's a framework for a new protocol that allows anyone to build their own autonomous world; the only limit is your imagination. All of this runs 100% on-chain, which is only possible on AO.


Thank you, Elliot. What you just saw was a large language model participating in financial decision making, running an autonomous monetary policy system. There were no backdoors; we couldn't control it; all of it was run by the AI itself. You also saw a little universe, a place with physical space you can walk around in, where you can go and interact with financial infrastructure. We think this is more than just an interesting little demo.

There’s actually some really interesting stuff here, these places that bring together different people who use financial products. We’ve seen in the DeFi ecosystem that if someone wants to get involved in a project, they first check it out on Twitter, then go to the website and get involved with the basic primitives in the game.


Then they join a Telegram group or a Discord channel, or talk to other users on Twitter. The experience is very fragmented, with everyone jumping between different applications. One interesting idea we're trying: if you give these DeFi apps a UI, their communities can come together and co-manage an autonomous space that they collectively access, and because it's a permanent web app, they can simply join the experience.

Imagine you can go to something that looks like an auction house and chat with other users who like the protocol, talking with them whenever there's activity in the financial mechanism's process on AO. The community and social aspects become integrated with the financial part of the product.

We think this is very interesting and has even broader implications. You can build an autonomous AI agent here that wanders around in this Arweave world, interacting with different applications and users that it finds. So if you're building a metaverse, when you create an online game, the first thing you do is create NPCs (non-player characters). Here, NPCs can be generic.

You have an intelligent system wandering around and interacting with the environment, so you don't have the user cold-start problem. You can have autonomous agents that try to make money, try to make friends, and interact with the environment like a normal DeFi user. We think this is very interesting, albeit a little weird. We will wait and see.

Looking ahead, we also see opportunities to accelerate the execution of large language models in AO. Earlier I talked about the concept of compute unit scaling. This is what we used to build WeaveDrive.


Not just WeaveDrive, you can build any type of extension for AO's computing environment. There is a very exciting ecosystem project that is solving this problem for GPU accelerated large language model execution, which is the Apus Network. I'll let them explain.


Hi, I’m Mateo. Today I’m excited to introduce Apus Network, a decentralized, trustless GPU network.

We provide an open source AO extension module that leverages Arweave's permanent on-chain storage, offers a deterministic execution environment for GPUs, and supplies an economic incentive model for decentralized AI using AO and APUS tokens. Apus Network will use GPU mining nodes to competitively execute the best trustless model training running on Arweave and AO. This ensures that users can use the best AI models at the most cost-effective price. You can follow our progress on X (Twitter) at @apus_network. Thank you.


This is the state of AI on AO today. You can try Llama Fed and try to build your own smart contract application based on large language models. We think this is the beginning of bringing market intelligence to decentralized execution environments. We are very excited about this and look forward to seeing what happens next. Thank you all for participating today and look forward to speaking with you again.
