The CPU (Central Processing Unit) is the "brain" of the machine, the "commander-in-chief" that lays out strategy, issues orders, and controls the action. Its main components are the arithmetic logic unit (ALU), the control unit (CU), registers, cache, and the buses that carry data, control, and status signals between them.
The GPU (Graphics Processing Unit), true to its name, is a microprocessor originally used in personal computers, workstations, game consoles, and mobile devices (such as tablets and smartphones) to run graphics computations.
The CPU and GPU differ so much because they were designed with different goals in mind: each targets a different class of application.
The CPU must be highly general-purpose in order to handle many different data types; at the same time it must make logical decisions, which brings in large numbers of branches, jumps, and interrupts. All of this makes the CPU's internal structure extremely complex.
The GPU, by contrast, faces large volumes of uniform, mutually independent data and a pure computing environment that rarely needs to be interrupted. The CPU and GPU therefore end up with very different architectures (schematic diagram):
From the architecture diagram we can clearly see that the GPU's structure is relatively simple: a large number of compute units and a very long pipeline, which makes it especially well suited to large volumes of uniform data (such as image data).
The GPU's main job is 3D graphics and special-effects processing; in layman's terms, it does the work of putting images on the screen. A CPU can handle 2D graphics easily, but complex 3D scenes would consume so many CPU resources that everything else would slow down noticeably, so this work is handed off to the GPU.
High-frame-rate game rendering and high-quality visual effects are likewise handed to the GPU, taking load off the CPU. Thanks to their parallel processing power, GPUs are also widely used in fields such as cryptography, big-data processing, and financial analysis.
Why is the GPU so good at processing image data? Because every pixel of an image needs to be processed, and each pixel is processed in almost exactly the same way. Workloads like this are a natural breeding ground for GPUs.
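To make the "one thread per pixel" idea concrete, here is a minimal CUDA sketch; the kernel name, image layout, and luminance weights are illustrative assumptions rather than anything from the original text:

```cuda
#include <cuda_runtime.h>

// Hypothetical example: convert an RGB image to grayscale.
// Each GPU thread handles exactly one pixel, and every thread
// runs the same few instructions on its own data element.
__global__ void rgb_to_gray(const unsigned char *rgb,
                            unsigned char *gray,
                            int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;   // ignore threads outside the image

    int idx = y * width + x;                 // linear pixel index
    unsigned char r = rgb[3 * idx + 0];
    unsigned char g = rgb[3 * idx + 1];
    unsigned char b = rgb[3 * idx + 2];

    // Standard Rec. 601 luminance weights (chosen here for illustration).
    gray[idx] = (unsigned char)(0.299f * r + 0.587f * g + 0.114f * b);
}
```

A single 1920x1080 frame launches roughly two million of these lightweight threads, which is exactly the "many simple workers" pattern described above.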
But the GPU cannot work alone; it must be driven by the CPU. The CPU can act on its own to handle complex logic and varied data types, but when a large amount of data needs the same kind of processing, it can call on the GPU to do that work in parallel.
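A rough host-side sketch of that division of labor, assuming the CUDA runtime API and the rgb_to_gray kernel from the previous snippet (the buffer names and image dimensions are made up for the example):

```cuda
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical host code: the CPU allocates GPU memory, copies the input
// across, launches the kernel, and copies the result back.
int main()
{
    int width = 1920, height = 1080;
    size_t rgb_bytes  = (size_t)width * height * 3;
    size_t gray_bytes = (size_t)width * height;

    // Host buffers; loading the actual image is outside the scope of the sketch.
    unsigned char *host_rgb  = (unsigned char *)malloc(rgb_bytes);
    unsigned char *host_gray = (unsigned char *)malloc(gray_bytes);

    unsigned char *d_rgb, *d_gray;
    cudaMalloc((void **)&d_rgb,  rgb_bytes);
    cudaMalloc((void **)&d_gray, gray_bytes);

    // The CPU "feeds" the data to the GPU ...
    cudaMemcpy(d_rgb, host_rgb, rgb_bytes, cudaMemcpyHostToDevice);

    // ... tells it what to do, one thread per pixel ...
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    rgb_to_gray<<<grid, block>>>(d_rgb, d_gray, width, height);

    // ... and collects the result once the GPU is finished.
    cudaMemcpy(host_gray, d_gray, gray_bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_rgb);
    cudaFree(d_gray);
    free(host_rgb);
    free(host_gray);
    return 0;
}
```

Note how every step is initiated by the CPU; the GPU only ever does what it is told.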
The GPU spends its area on a large number of compute units and a long pipeline, keeping only very simple control logic and very little cache. The CPU, in contrast, devotes a great deal of chip area to cache and has complex control logic and many optimization circuits, so its compute units occupy only a small fraction of the chip.
The CPU is built around a low-latency design: it has powerful ALUs (arithmetic logic units) that can complete an arithmetic operation in just a few clock cycles.

The GPU, in contrast, is built around a high-throughput design: its caches are relatively small and its control units simple, but it has a very large number of cores, which makes it well suited to parallel, high-throughput workloads.

The GPU contains many ALUs and little cache, and its cache serves a different purpose than a CPU's: rather than holding data for later reuse, it exists to serve the threads. When many threads need to access the same data, the cache merges those accesses before going out to DRAM.
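A small CUDA sketch of why that access pattern matters (the array names and stride parameter are assumptions for the example, and strictly speaking the merging is done by the caches and memory controller at warp granularity): when consecutive threads touch consecutive addresses, the hardware can combine them into a few wide DRAM transactions; a scattered pattern cannot be combined and wastes bandwidth.

```cuda
// Coalesced: thread i reads in[i], so 32 neighboring threads (a warp)
// read 32 consecutive floats that the memory system can fetch together.
__global__ void copy_coalesced(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Uncoalesced: thread i reads in[i * stride], so the reads of a warp are
// scattered across memory and each may need its own transaction.
__global__ void copy_strided(const float *in, float *out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    long j = (long)i * stride;
    if (j < n) out[i] = in[j];
}
```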
All in all, the CPU and GPU were designed for different tasks, so their designs differ greatly. And because many non-graphics problems happen to look like the problems the GPU was originally built to solve, the GPU is now used for general computation as well.
By analogy, the GPU's computing speed depends on how many elementary-school students you hire, while the CPU's depends on how brilliant a professor you hire. The professor crushes the students on complex tasks, but on simple, repetitive tasks the professor still cannot beat the crowd. Of course, today's GPUs can also handle somewhat more complicated work, as if the students had been promoted to junior-high or high-school level.
The GPU, in other words, uses a huge number of simple compute units to get through a huge number of computing tasks: pure human-wave tactics. This strategy only works because the students' tasks are independent of one another.
That answers the question of what the GPU can do. Graphics work and large-scale matrix operations, such as those at the heart of machine-learning algorithms, are where GPUs shine. In short, the CPU is good at commanding the overall situation and other complex operations, while the GPU is good at simple, repetitive operations over large amounts of data: the CPU is the professor doing the complex mental work, and the GPU is the crowd of elementary-school students grinding through massive parallel calculations.
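As an illustration of "large-scale matrix operations", here is a deliberately naive CUDA matrix-multiply sketch (one thread per output element; the sizes and names are assumptions for the example, and production libraries such as cuBLAS use far more sophisticated tiled kernels):

```cuda
// Hypothetical example: C = A * B for square N x N matrices in row-major
// order. Each thread computes one element of C; all threads run the same
// loop on different data -- independent "students" working side by side.
__global__ void matmul_naive(const float *A, const float *B, float *C, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row >= N || col >= N) return;

    float sum = 0.0f;
    for (int k = 0; k < N; ++k)
        sum += A[row * N + k] * B[k * N + col];
    C[row * N + col] = sum;
}

// Launch sketch: a 1024 x 1024 multiply creates roughly a million threads.
// dim3 block(16, 16);
// dim3 grid((N + 15) / 16, (N + 15) / 16);
// matmul_naive<<<grid, block>>>(d_A, d_B, d_C, N);
```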
GPU work is characterized by a huge volume of calculation with little intellectual difficulty, repeated over and over again. The GPU also needs the CPU to feed it data before it can start, and it remains under the CPU's management throughout.
Why are GPUs so popular in artificial intelligence? Deep learning builds mathematical network models loosely modeled on the human nervous system, and the defining feature of these models is that they need very large amounts of data for training.
The demand for computing power in AI is therefore exactly this: enormous numbers of parallel, repetitive calculations. That happens to be the GPU's specialty, and as the times make the hero, GPUs have stepped up to shoulder the task. For artificial intelligence (deep learning), the GPU's main strengths are the following:
1. It provides a many-core parallel computing infrastructure with a very large number of cores, supporting parallel computation over large amounts of data. Parallel computing means carrying out many operations at the same time; it speeds up calculation and, by scaling up the size of problem that can be tackled, makes large and complex computations feasible (see the device-query sketch after this list).
2. It offers higher memory-access bandwidth and speed.
3. It offers higher floating-point throughput. Floating-point capability matters for a processor's multimedia and 3D graphics performance, and as multimedia workloads (3D rendering in particular) have spread, the volume of floating-point calculation has grown enormously, making floating-point performance a key measure of a processor's computing power.
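These three characteristics can be read off a real device through the CUDA runtime; a minimal sketch (the bandwidth formula assumes DDR-style memory, hence the factor of 2, and the property fields shown are the classic ones rather than an exhaustive list):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0

    // 1. Many-core parallel infrastructure.
    printf("Streaming multiprocessors: %d\n", prop.multiProcessorCount);

    // 2. Memory bandwidth: bus width (bits) x memory clock, doubled for DDR.
    double bw_gb_s = 2.0 * prop.memoryClockRate * 1e3    // kHz -> Hz
                   * (prop.memoryBusWidth / 8.0)         // bits -> bytes
                   / 1e9;
    printf("Theoretical memory bandwidth: %.1f GB/s\n", bw_gb_s);

    // 3. Floating-point throughput scales with core count and clock rate.
    printf("Core clock: %.2f GHz\n", prop.clockRate / 1e6);  // kHz -> GHz
    return 0;
}
```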
It should be emphasized that although the GPU was born for image processing, nothing in its structure (as the discussion above shows) is tied specifically to images; it is essentially a re-balanced variant of the CPU's structure. As a result, GPUs are now used not only for image processing but also for scientific computing, password cracking, numerical analysis, massive data processing (sorting, MapReduce, and so on), financial analysis, and other fields that require large-scale parallel computation. In that sense the GPU can be regarded as a fairly general-purpose chip.
Brief summary: the CPU and GPU are two different kinds of processor. The CPU is a general-purpose processor oriented toward program control and sequential execution, while the GPU is a specialized processor for image processing and for analysis in specific domains, and it works under the CPU's control. In many end-user devices the CPU and GPU are integrated into a single chip that provides both kinds of processing capability.