Introduction to NPU Computing Chip

We will use Samsung's 5-nanometer process* and HBM3E memory to design artificial intelligence chips for generative AI models such as GPT-3 and LLaMA.
These are multi-billion-parameter, small-batch, memory-intensive workloads.
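
As a rough illustration of why such workloads are memory-bound: at small batch sizes, generating each token requires streaming essentially all model weights from memory, so decode throughput is capped by memory bandwidth divided by model size. The sketch below uses assumed figures (FP16 weights, roughly 1.2 TB/s per HBM3E stack, four stacks), not specifications of the proposed chip.

```python
# Rough roofline sketch: small-batch LLM decode is memory-bandwidth-bound.
# All numbers below are illustrative assumptions, not specifications of the proposed chip.

def decode_tokens_per_second(params_billion: float,
                             bytes_per_param: float,
                             mem_bandwidth_tb_s: float) -> float:
    """Upper bound on single-stream decode throughput: generating each token
    requires streaming (approximately) all model weights from memory once."""
    weight_bytes = params_billion * 1e9 * bytes_per_param
    bandwidth_bytes_per_s = mem_bandwidth_tb_s * 1e12
    return bandwidth_bytes_per_s / weight_bytes

# GPT-3-class model: 175B parameters in FP16 (2 bytes/param) on an assumed
# ~4.8 TB/s of aggregate HBM3E bandwidth (e.g., 4 stacks at ~1.2 TB/s each).
print(decode_tokens_per_second(175, 2.0, 4.8))  # ~13.7 tokens/s per stream
print(decode_tokens_per_second(70, 2.0, 4.8))   # LLaMA-70B-class: ~34 tokens/s
```

Because the bound scales with memory bandwidth rather than raw compute, HBM3E capacity and bandwidth are the primary levers for this class of workload.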

Suggested Architecture

*The process node may be adjusted (5 nm or 4 nm) depending on wafer fab capacity and availability.

Comparison of NPU and GPU Chip Structures (Advantages)

To design AI semiconductors for servers more effectively, an NPU-specialized structure should be used instead of a GPU; we expect NPUs to become mainstream.
*Many companies, including Google, Microsoft, and Tesla, are already building dedicated AI chips of this kind.

Although GPUs are currently the most versatile tools for AI development, they are relatively expensive to purchase and operate.
An NPU-based architecture offers higher processing speed and better power efficiency than a GPU of the same class.
(Compared with the H100, the proposed product can reduce costs by more than 70% at the same level of performance; an illustrative calculation follows the note below.)
*The UXF team leading this development has extensive experience in developing AI semiconductor products based on NPU architectures.
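
A minimal sketch of how the cost comparison above would be computed, assuming hypothetical unit prices and matched throughput (none of these figures come from this proposal): the saving is one minus the ratio of cost per unit of delivered throughput.

```python
# Illustrative cost-per-throughput comparison; all prices and throughput figures
# are hypothetical placeholders, not quoted numbers from this proposal.

def cost_per_throughput_unit(unit_price_usd: float, relative_throughput: float) -> float:
    """Hardware cost divided by delivered throughput (normalized to the GPU baseline)."""
    return unit_price_usd / relative_throughput

gpu = cost_per_throughput_unit(30_000, 1.0)  # hypothetical H100 price, baseline throughput
npu = cost_per_throughput_unit(8_000, 1.0)   # hypothetical NPU price at matched performance
print(1 - npu / gpu)                         # ~0.73 -> more than 70% savings
```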

NPU Chip: Fast, Efficient, and Cost-Effective (Advantages)

A highly optimized and flexible processor architecture, coupled with world-leading HBM technology, delivers strong results on generative AI workloads: more than 30% higher performance than the existing H100, with more than 2x the cost-effectiveness and more than 3x the power efficiency.
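
The headline ratios can be sanity-checked with simple arithmetic; the board-power and throughput values below are illustrative assumptions only, using the H100 SXM's ~700 W board power as the baseline.

```python
# Illustrative perf/watt check; absolute power and throughput values are assumptions.

def perf_per_watt(relative_throughput: float, board_power_w: float) -> float:
    """Throughput (normalized to the H100 baseline) divided by board power."""
    return relative_throughput / board_power_w

h100 = perf_per_watt(1.0, 700.0)  # H100 SXM baseline at ~700 W board power
npu  = perf_per_watt(1.3, 300.0)  # assumed NPU: 1.3x throughput (the ">30%") at 300 W
print(npu / h100)                 # ~3.0x -> consistent with the ">3x" power-efficiency claim
```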

Application Fields of NPU Chips

1. Autonomous Driving

In autonomous vehicles, high-performance chips can quickly analyze the data collected by sensors and make real-time decisions, ensuring driving safety.

2. Artificial Intelligence and Machine Learning

Artificial intelligence and machine learning algorithms involve massive amounts of mathematical operations and data processing, which demands chips with powerful computing capability. As technologies such as deep learning become widespread, computing power has become a key factor driving the development of AI.

3. Data Centers and Supercomputing

High-compute chips underpin data centers and supercomputing, providing powerful capabilities for high-performance computing tasks. For example, Frontier, the world's fastest supercomputer, uses high-compute chips and delivers double-precision floating-point performance of about 1.1 EFLOPS.

4. Mobile Applications

High-compute chips are increasingly used in devices such as smartphones, tablets, and laptops. They can handle multiple tasks simultaneously, ensuring that applications run smoothly.

5. Intelligent Internet of Things

Devices connected to the Internet need to process large amounts of data, and high-compute chips support the functions of intelligent IoT devices.
