
Explain the difference and connection between server CPU and GPU technology in detail

The CPU (Central Processing Unit) is the "brain" of the machine, the "commander-in-chief" that plans strategy, issues orders, and controls actions. The CPU's structure mainly comprises arithmetic logic units (ALUs), a control unit (CU), registers, caches, and the buses that carry data, control, and status signals between them.

The GPU (Graphics Processing Unit), as its name suggests, was originally a microprocessor used in personal computers, workstations, game consoles, and some mobile devices (such as tablets and smartphones) to run graphics computations.

The CPU and GPU differ so much because their design goals differ: they target two different application scenarios.

The CPU needs strong versatility to handle various data types, and it must also perform logical judgments, which introduces large numbers of branches, jumps, and interrupts. All of this makes the CPU's internal structure extremely complicated.

The GPU, by contrast, faces highly uniform, mutually independent, large-scale data and a pure computing environment that does not need to handle interrupts. As a result, the CPU and GPU present very different architectures (schematic diagram):

From the architecture diagram we can clearly see that the GPU's structure is relatively simple: a large number of computing units and a very long pipeline, making it especially suitable for large volumes of uniform data (such as image data).

The GPU's main work is 3D image processing and special-effects processing; in layman's terms, it is the work of presenting images. The CPU can easily handle 2D graphics, but complex 3D images would consume a lot of CPU resources and visibly reduce efficiency elsewhere, so this kind of work is handed over to the GPU.

High-frame-rate game scenes and high-quality special effects are likewise handed over to the GPU, offloading work from the CPU. In addition, thanks to their parallel processing capabilities, GPUs are widely used in fields such as cryptography, big-data processing, and financial analysis.

Why is the GPU so good at processing image data? Because every pixel in an image needs to be processed, and the process and method for each pixel are nearly identical. Such workloads are a natural fit for GPUs.
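As a minimal sketch of this "same operation on every pixel" pattern, here is a grayscale conversion using NumPy on the CPU as a stand-in for what a GPU does in hardware (the image values and weights are illustrative assumptions):

```python
import numpy as np

# A hypothetical 4x4 RGB image; on a GPU, each pixel below would be
# handled by its own thread, all running the same instructions.
image = np.arange(48, dtype=np.float32).reshape(4, 4, 3)

# Grayscale conversion: the identical weighted sum is applied to every
# pixel independently -- exactly the uniform, data-parallel workload
# described above.
weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
gray = image @ weights  # shape (4, 4): one value per pixel

print(gray.shape)  # (4, 4)
```

Because no pixel's result depends on any other pixel's, all sixteen outputs could be computed simultaneously.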

But the GPU cannot work alone; it must be controlled by the CPU. The CPU can act independently to handle complex logical operations and varied data types, but when a large amount of uniformly typed data needs processing, it can call on the GPU for parallel computing.
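A hedged sketch of this division of labor, with vectorized NumPy standing in for a real GPU device (the task format and function names are invented for illustration):

```python
import numpy as np

# "GPU" side: one uniform operation applied to all elements at once.
def gpu_parallel_square(data):
    return data * data

# "CPU" side: handles the branching logic, and offloads only the
# uniform bulk work to the parallel function.
def cpu_process(task):
    if task["kind"] == "bulk":  # uniform data: offload to the "GPU"
        return gpu_parallel_square(np.asarray(task["data"]))
    # irregular, branch-heavy logic stays on the "CPU"
    return [x + 1 if x % 2 else x - 1 for x in task["data"]]

print(cpu_process({"kind": "bulk", "data": [1, 2, 3]}))  # [1 4 9]
```

Real GPU programming frameworks (CUDA, OpenCL, and libraries built on them) follow this same host-controls-device pattern.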

The GPU uses a large number of computing units and a long pipeline, but has only very simple control logic and dispenses with most of the cache. In the CPU, by contrast, the cache occupies a large share of the die, and the control logic and optimization circuitry are complex. In comparison, an individual GPU core's computing power is only a small fraction of a CPU core's.

The CPU is designed for low latency: it has powerful ALUs (arithmetic logic units) that can complete arithmetic calculations in a few clock cycles.

The GPU, in contrast, is designed for high throughput: its cache is relatively small and its control units are simple, but it has a large number of cores, which suits parallel, high-throughput operations.
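The latency-versus-throughput contrast can be sketched in a few lines. Here a Python loop plays the role of "one strong core, one element at a time," and a vectorized NumPy call plays the role of "many simple units, the whole array at once" (this is an analogy on the CPU, not an actual GPU measurement):

```python
import numpy as np

n = 100_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)

# "Low-latency" style: each element finishes quickly, but the elements
# are processed strictly one after another.
def add_serial(x, y):
    out = np.empty_like(x)
    for i in range(len(x)):
        out[i] = x[i] + y[i]
    return out

# "High-throughput" style: one bulk operation over the entire array,
# standing in for thousands of GPU threads running together.
def add_bulk(x, y):
    return x + y

assert np.allclose(add_serial(a[:1000], b[:1000]), add_bulk(a[:1000], b[:1000]))
```

Timing the two on a large array shows the bulk version winning by orders of magnitude, which is the throughput advantage the text describes.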

The GPU has many ALUs and little cache. Its cache does not exist primarily to hold data for later reuse (unlike the CPU's); it exists to serve threads. If many threads need to access the same data, the cache merges these accesses before going to DRAM.
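A toy sketch of this access-merging idea, with invented addresses (real GPU memory coalescing works on aligned transaction segments in hardware; this only illustrates the principle of many thread requests collapsing into fewer DRAM accesses):

```python
# Per-thread memory requests: several threads ask for the same data.
requests = [0x100, 0x100, 0x104, 0x100, 0x104, 0x108]

# The cache merges duplicate requests, so DRAM sees far fewer accesses.
dram_accesses = sorted(set(requests))

print(len(requests), "thread requests ->", len(dram_accesses), "DRAM accesses")
# 6 thread requests -> 3 DRAM accesses
```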

All in all, the CPU and GPU were designed for different initial tasks, so their designs differ greatly. And some tasks resemble the problems the GPU was originally built to solve, which is why the GPU is now used for general computation.

By analogy: the GPU's computing speed depends on how many elementary-school students you hire, while the CPU's depends on how powerful a professor you hire. The professor crushes the students at complex tasks, but on simple tasks he still cannot outpace the crowd. Of course, today's GPUs can also handle somewhat more complicated tasks, which is like upgrading the students to middle-school and high-school level.

The GPU uses a large number of simple computing units to complete a large number of computing tasks, a pure "strength in numbers" strategy. This strategy works only on the premise that the students' tasks are independent of each other.

This answers the question of what the GPU can do: in graphics operations and large-scale matrix operations, such as those in machine-learning algorithms, GPUs can show their strengths. In short, the CPU is good at commanding the overall situation and performing complex operations, while the GPU is good at simple, repetitive operations on large volumes of data. The CPU is the brain worker handling complex mental tasks; the GPU is the manual laborer (the elementary-school student) performing massive parallel calculations.
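Matrix multiplication is the archetypal example of such a workload. As a sketch, the explicit loop below shows why it parallelizes so well: every output element is an independent dot product, so on a GPU each (i, j) pair could be one thread:

```python
import numpy as np

# Matrix multiplication, the core operation of deep learning.
def matmul_elementwise(A, B):
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.zeros((m, n), dtype=A.dtype)
    for i in range(m):        # on a GPU, each (i, j) pair would be
        for j in range(n):    # computed by its own thread
            C[i, j] = np.dot(A[i, :], B[:, j])
    return C

A = np.arange(6, dtype=np.float64).reshape(2, 3)
B = np.arange(12, dtype=np.float64).reshape(3, 4)
print(np.allclose(matmul_elementwise(A, B), A @ B))  # True
```

No C[i, j] depends on any other, which is exactly the independence the student analogy requires.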

GPU work is characterized by heavy computation with little logical complexity, repeated many, many times. The GPU also needs the CPU to feed it data before it can start work; in the end, it is managed by the CPU.

Why is the GPU so popular in the field of artificial intelligence? Deep learning builds mathematical network models by simulating the human brain's nervous system, and the biggest feature of these models is that they require big data for training.

Therefore, what the field of artificial intelligence demands of computing power is large numbers of parallel, repetitive calculations. GPUs have exactly this expertise, and the times make heroes, so GPUs have stepped up to shoulder the task. In the field of artificial intelligence (deep learning), the GPU has the following main features:

1. It provides a many-core parallel computing infrastructure with a very large number of cores, supporting parallel computation over large volumes of data. Parallel computing executes multiple operations at once; its purpose is to increase computation speed and to solve large, complex problems by scaling up the size of problems that can be tackled.
2. It has higher memory-access bandwidth and speed.
3. It has higher floating-point computing capability. Floating-point capability is an important indicator of a processor's multimedia and 3D-graphics performance. With today's widespread use of multimedia technology, floating-point computation has increased enormously (for example, in rendering 3D graphics), so floating-point performance has become an important measure of a processor's computing power.
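Floating-point capability is commonly quoted in FLOP/s. As a back-of-envelope sketch (the matrix size is arbitrary, and the 2·n³ operation count is the standard estimate for a dense matmul: one multiply and one add per term of each dot product):

```python
import time
import numpy as np

n = 512
A = np.random.rand(n, n).astype(np.float32)
B = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
C = A @ B                      # the measured floating-point workload
elapsed = time.perf_counter() - start

flops = 2 * n**3               # multiply-adds in an n x n matmul
gflops = flops / elapsed / 1e9
print(f"~{gflops:.1f} GFLOP/s on this CPU")
```

Running the same measurement on a GPU (e.g. via CuPy or PyTorch) typically yields orders of magnitude higher numbers, which is the gap this section is describing.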

It should be emphasized that although the GPU was born for image processing, as the preceding introduction shows, its structure contains no components that serve images specifically; it is rather an optimized and re-tuned variant of the CPU's structure. So today the GPU is used not only in image processing but also in scientific computing, password cracking, numerical analysis, massive data processing (sorting, MapReduce, etc.), financial analysis, and other fields that require large-scale parallel computing. The GPU can therefore also be considered a fairly general-purpose chip.

Brief summary: the CPU and GPU are two different processors. The CPU is a general-purpose processor for program control and sequential execution; the GPU is a specialized processor for image processing, analysis, and other specific fields, and it works under the CPU's control. In many terminal devices, the CPU and GPU are integrated into a single chip that provides both CPU and GPU processing capabilities.

