AMD has launched its Instinct™ MI300 series of generative AI accelerators, and samples are now available to customers.

At its “Data Center and Artificial Intelligence Technology Premiere” event in San Francisco on June 13, AMD unveiled the AMD Instinct MI300X accelerator, which it calls the world’s most advanced generative AI accelerator. Based on the next-generation AMD CDNA™ 3 accelerator architecture, the MI300X supports up to 192 GB of HBM3 memory, delivering the compute and memory capacity required for large language model training and inference in generative AI workloads.

Thanks to the MI300X’s large memory capacity, users can run large language models such as Falcon-40B, which has 40 billion parameters, on a single MI300X accelerator. AMD also introduced the AMD Instinct™ Platform, which combines eight MI300X accelerators in an industry-standard design to provide a complete solution for AI inference and training.

During the event, AMD Chair and CEO Lisa Su said the MI300X is AMD’s product purpose-built for generative artificial intelligence. Compared with Nvidia’s H100 chip, the MI300X offers more than 2.4 times the memory and more than 1.6 times the memory bandwidth. Generative AI and large language models demand significantly more compute and memory. Reuters previously reported that Amazon’s cloud computing business is considering using AMD’s new artificial intelligence (AI) chips.
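As a rough sanity check on the memory claim above: taking the H100’s 80 GB of HBM3 as a baseline (a figure from NVIDIA’s public specifications, not from this article), the MI300X’s 192 GB works out to 2.4 times that capacity. A minimal sketch:

```python
# Sanity check of the memory-capacity comparison.
# The 80 GB H100 figure is an assumption from NVIDIA's public specs;
# the 192 GB MI300X figure is stated in the article.
mi300x_hbm3_gb = 192
h100_hbm3_gb = 80

ratio = mi300x_hbm3_gb / h100_hbm3_gb
print(f"MI300X offers {ratio:.1f}x the H100's memory capacity")  # 2.4x
```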

In addition, AMD announced that the AMD Instinct MI300A, the world’s first APU accelerator for HPC and AI workloads, is now sampling to customers. The MI300X will begin sampling to key customers in the third quarter.

Earlier, the 61st edition of the TOP500 list of the world’s supercomputers, released in early June, showed NVIDIA as the dominant supplier of accelerators/co-processors across the listed systems: of the 28 accelerator/co-processor products in use, 20 were from NVIDIA, while AMD had only one.

Among them, the NVIDIA Tesla V100, NVIDIA A100, NVIDIA A100 SXM4 40GB, NVIDIA Tesla A100 80G, NVIDIA Tesla V100 SXM2, and AMD Instinct MI250X rank as the top six, with installations in 61, 27, 18, 10, 10, and 10 systems, respectively, corresponding to 12.2%, 5.4%, 3.6%, 2%, 2%, and 2% of the 500 listed systems.
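As a quick arithmetic check, the percentages above follow from dividing each install count by the 500 systems that make up the TOP500 list. A minimal sketch:

```python
# Install counts for the top six accelerator products on the 61st TOP500 list,
# as reported in the article.
counts = {
    "NVIDIA Tesla V100": 61,
    "NVIDIA A100": 27,
    "NVIDIA A100 SXM4 40GB": 18,
    "NVIDIA Tesla A100 80G": 10,
    "NVIDIA Tesla V100 SXM2": 10,
    "AMD Instinct MI250X": 10,
}
TOTAL_SYSTEMS = 500  # the TOP500 list ranks 500 supercomputers

for name, n in counts.items():
    print(f"{name}: {n} systems, {n / TOTAL_SYSTEMS:.1%} of the list")
```

Running this reproduces the article’s shares (12.2%, 5.4%, 3.6%, 2.0%, 2.0%, 2.0%).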

With the rapid development of large-scale artificial intelligence models, Chinese cloud computing providers and major internet companies are increasing their investment in accelerator cards.

According to a report by Global Network Technology, ByteDance has placed orders with NVIDIA for GPU products worth over $1 billion (approximately 7 billion Chinese yuan) this year.

According to people familiar with the matter, ByteDance has ordered 100,000 A100 and H800 accelerator cards, with H800 production having begun only in March of this year. This single company’s purchases already approach NVIDIA’s total commercial GPU sales in China last year.
