The combination of AI and cryptocurrency has led to the emergence of various innovative projects. However, the computing power these ambitious projects require is equally impressive. As AI develops, demand for high-performance GPUs remains high. In this article, we discuss AI projects' investment in computing power.
What Is an AI Chip?
An AI chip, or Artificial Intelligence chip, is a specialized integrated circuit designed to efficiently handle the complex computational requirements of AI algorithms. These chips meet those demands by executing many mathematical operations in parallel. They are used mainly in AI applications because they significantly increase speed while remaining energy efficient. Initially, general-purpose CPUs and GPUs, along with FPGAs (Field Programmable Gate Arrays) and ASICs (Application Specific Integrated Circuits), were pressed into service for these workloads, but as the industry advanced, general-purpose processors proved increasingly inadequate, driving the shift toward purpose-built AI chips.
Key Features of AI Chips
- Parallel processing: AI chips use many processing cores at once, leveraging parallelism to run computations side by side. This lets them handle the large volumes of data and complex algorithms inherent in AI tasks.
- Energy Efficiency: Given the computational intensity of AI tasks, energy efficiency is a top priority for AI chips. They are designed to deliver high performance while reducing energy consumption, making them suitable for a variety of devices, from smartphones and IoT devices to data centers.
- Specialized Architecture: AI chips are optimized specifically for AI tasks, especially the types of calculations involved in machine learning and deep learning algorithms.
- Matrix Optimization: Deep learning algorithms rely heavily on matrix operations such as matrix multiplication, so AI chips are often optimized to perform these operations efficiently (a short sketch after this list illustrates the workload).
- Memory Optimization: AI chips are designed to manage data movement and memory access efficiently, ensuring the processor can quickly retrieve and process the large datasets typically used in AI applications.
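To make the parallel-processing and matrix-optimization points above concrete, here is a minimal NumPy sketch of the kind of workload these chips are built to accelerate; it is purely illustrative, and the shapes and figures are arbitrary assumptions rather than hardware specifications.

```python
# Illustrative sketch: the core workload AI chips accelerate is large
# matrix multiplication (e.g., a single dense neural-network layer).
# Shapes below are arbitrary examples, not hardware specs.
import time
import numpy as np

batch, in_features, out_features = 512, 4096, 4096
x = np.random.rand(batch, in_features).astype(np.float32)        # activations
w = np.random.rand(in_features, out_features).astype(np.float32) # weights

start = time.perf_counter()
y = x @ w   # one dense layer = one large matrix multiplication
elapsed = time.perf_counter() - start

# Every output element is an independent dot product, which is why chips
# with thousands of parallel cores (and dedicated matrix/tensor units)
# finish this far faster than a general-purpose CPU.
flops = 2 * batch * in_features * out_features
print(f"{flops / 1e9:.1f} GFLOPs in {elapsed * 1e3:.1f} ms "
      f"({flops / elapsed / 1e12:.2f} TFLOP/s on this machine)")
```

Running the same operation on an AI accelerator changes nothing about the math; the speedup comes entirely from how many of those independent dot products the hardware can execute at once.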
AI Projects’ Heavy Investment in Computing Power
After OpenAI demonstrated its text-to-video technology, Sora, the value of many AI-related cryptocurrencies increased. The renewed interest in AI tokens stemmed from the possibilities the new technology opened up. But bringing this technology mainstream isn't easy because of the enormous computing power it needs: crypto AI projects would need to spend sums approaching their market capitalizations on server-grade H100 GPUs.
In the weeks following the demonstration, many new crypto projects emerged promising text-to-image and text-to-video generation. Behind these projects sit large fleets of GPUs processing large volumes of data, yet delivering AI-generated video at scale would require more GPUs than manufacturers like Nvidia produce in a year.
A recent report by Factorial Funds estimates a peak demand of 720K Nvidia H100 GPUs for inference. The estimate assumes that AI generates a share of the video uploaded daily to platforms like YouTube (15% of its 43 million minutes) and TikTok (50% of its 17 million minutes).
Based on various estimates, Sora required at least 10,500 Nvidia H100 GPUs running for one month to train, and a single H100 can generate only about five minutes of video per hour at inference. This is only the starting point: once Sora sees widespread adoption, the computing power needed to generate new videos will rise dramatically and exceed the power needed to train the model in the first place.
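The figures above combine into a rough back-of-the-envelope calculation. The sketch below uses only the numbers quoted in this article; Factorial Funds' 720K peak-demand figure rests on additional assumptions (such as peak-to-average load) that are not reproduced here, so treat this as an illustration rather than the report's methodology.

```python
# Back-of-the-envelope arithmetic from the figures quoted above.
# This is a rough illustration, not Factorial Funds' actual model.

H100_MINUTES_PER_HOUR = 5      # ~5 minutes of generated video per H100 per hour
SORA_TRAINING_GPUS = 10_500    # estimated H100s for one month of training

# Daily AI-generated video assumed in the scenario above
youtube_minutes = 0.15 * 43_000_000   # 15% of YouTube's daily upload minutes
tiktok_minutes = 0.50 * 17_000_000    # 50% of TikTok's daily upload minutes
target_minutes_per_day = youtube_minutes + tiktok_minutes

minutes_per_gpu_per_day = H100_MINUTES_PER_HOUR * 24
gpus_for_inference = target_minutes_per_day / minutes_per_gpu_per_day

print(f"Target AI video: {target_minutes_per_day / 1e6:.1f}M minutes/day")
print(f"H100s needed just to keep pace: ~{gpus_for_inference:,.0f}")
print(f"Sora's one-month training fleet, for comparison: {SORA_TRAINING_GPUS:,}")
```

Even this baseline, which ignores peak-load and utilization factors, works out to a six-figure fleet of H100s, roughly an order of magnitude more than the hardware estimated for training the model itself.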
For context, Nvidia shipped 550,000 H100 GPUs in 2023. According to data from Statista, the twelve largest customers combined hold 650,000 Nvidia H100 GPUs, with Meta and Microsoft together owning 300,000 of them.
What is Sora?
Sora is a text-to-video generative AI developed by OpenAI. From a text prompt, it can create minute-long videos that are difficult to distinguish from real footage. Earlier AI-generated videos suffered from distortion, choppiness, and other artifacts that made them easy to identify as synthetic; Sora largely avoids these issues and can create intricate scenes with vivid characters and dynamic motion.
Top 3 AI Chip Manufacturers
1. Nvidia: Nvidia, a well-known GPU producer since the 1990s, has captured a major share of the AI chip industry. It is known for AI chips such as Volta, Xavier, the Nvidia DGX™ A100, and the H100 GPU. These chips cater to various needs and speed up AI model training and development. Among them, the H100 GPU is especially popular and widely used by tech giants like Meta and Google. Nvidia also continues to introduce products such as the GeForce RTX SUPER GPUs, aimed at improving performance and efficiency.
2. Advanced Micro Devices (AMD): AMD is one of the major players in the AI market. Its Alveo U50 data center accelerator boasts 50 billion transistors and excels in rapid data processing. In June 2023, AMD launched the MI300 for AI training and collaborated with machine learning firms like Hugging Face, emphasizing software optimization for hardware performance. AMD’s AI focus includes EPYC CPUs with Instinct accelerators and AI-enabled Radeon Instinct MI300. The Ryzen PRO 8000 series, with a dedicated AI engine, aims to enhance business computing efficiency and performance.
3. Intel: Intel, the largest player in the CPU market, has also pushed into AI chips. In 2017, it became the first company to surpass $1 billion in AI chip sales. Its Xeon CPUs are versatile, supporting a wide range of data center tasks and contributing significantly to Intel's commercial success. In 2024, Intel released the Gaudi3 AI accelerator processor, reinforcing its commitment to high-performance AI solutions.
Intel's Core Ultra and Xeon chips, designed with generative AI in mind, enhance laptop and PC performance. Despite Nvidia's dominance in AI chips, Intel's Xeon Platinum series, with its superior memory capacity and bandwidth, has distinguished Intel in the CPU market. With CPUs running 70% of global data center inferencing, Intel aims to expand its footprint in the AI hardware market. Its newer AI offerings, including Gaudi2 for generative AI and large language model training, signal a robust push into the AI space.
Conclusion
AI crypto projects face significant challenges in affording the computing power their ambitious goals require. The volume of GPUs needed for advanced AI tasks like text-to-video generation simply exceeds current production capacity. With high demand and limited supply, securing the best GPUs is both costly and competitive.