
Three Instances When Vitalik Buterin Shared His Take on AI in Web3

Explore Vitalik Buterin's insights on AI's role in Web3, weighing advantages like code audits against risks of existential threats.

Satoshi Nakamoto, the pseudonymous creator of Bitcoin, set a high bar in the decentralized space. Ethereum co-founder Vitalik Buterin has followed suit, building a remarkable track record of his own.

While he is known as one of the sharpest minds in the Web3 world, he's equally known for his kindness and humor. In this article, however, we explore a different side of the Ethereum OG. As a pioneer of Web3, he has had plenty to say, both positive and critical, about artificial intelligence (AI).

Here are three instances when Buterin spoke in depth about AI in Web3.

1. AI-Assisted Code Audits

In 2023, crypto users lost nearly $2 billion to scams, rug pulls, and hacks, with Ethereum among the hardest-hit ecosystems. Against this backdrop, Buterin suggested using AI to improve code audits and reduce bugs in blockchain projects.

“One application of AI that I am excited about is AI-assisted formal verification of code and bug finding,” Buterin said. “Right now ethereum’s biggest technical risk is probably bugs in code, and anything that could significantly change the game on that would be amazing.”

When AI is used in code audits, it can learn from fresh data and improve over time, giving it an edge over static automated tools. Combining human scrutiny with AI tooling can establish a robust framework for identifying vulnerabilities.

2. Categorization of AI in Crypto Use Cases

In January this year, Buterin, in his blog post “The promise and challenges of crypto + AI applications,” sorted the integration of AI and crypto into categories based on their viability and potential risks.

He is optimistic about AI acting autonomously within blockchain systems for tasks such as identifying fake accounts and participating in prediction markets. However, he also raises concerns about embedding opaque AI components directly into smart contracts, given the difficulty of ensuring their integrity and avoiding bias.

“In the last three years, with the rise of much more powerful AI in the form of modern LLMs, and the rise of much more powerful crypto in the form of not just blockchain scaling solutions but also ZKPs, FHE, (two-party and N-party) MPC, I am starting to see this change,” wrote Buterin. “There are indeed some promising applications of AI inside of blockchain ecosystems, or AI together with cryptography, though it is important to be careful about how the AI is applied.”

3. Existential Risk of AI

Buterin writes deeply insightful “monster posts” (his own term) about technology. And while the Ethereum co-founder is pro-AI most of the time, he never shies away from discussing the risks.

In November 2023, in his monster post “My techno-optimism,” Buterin addressed the existential risk of AI, including the possibility that it could cause the extinction of the human race.

“One way in which AI gone wrong could make the world worse is (almost) the worst possible way: it could literally cause human extinction. This is an extreme claim: as much harm as the worst-case scenario of climate change, or an artificial pandemic or a nuclear war, might cause, there are many islands of civilization that would remain intact to pick up the pieces,” wrote Buterin. “But a superintelligent AI, if it decides to turn against us, may well leave no survivors, and end humanity for good. Even Mars may not be safe.”

He also referenced a 2022 survey by AI Impacts in which respondents put the probability of human extinction from AI, or from our inability to control it, at 5% to 10%. Buterin further argued that AI development should be spearheaded by a security-oriented, open-source approach rather than by closed, proprietary entities backed by venture capital.

Edited by Harshajit Sarmah