
AMD's CPUs are so hot in 2020: what measures will Intel take?

When do general users need CPU performance?

We talk about CPU performance all the time, but when do we actually need it most? With today's wealth of software and peripheral hardware, computers can do a great many things, and different CPUs have different strengths depending on the application. At its core, though, the CPU is a calculator: a demanding workload either involves complex computation or a large volume of data, and many fields involve both. For consumer and general business users, who make up the largest share of the market, the most performance-hungry workload is usually video, including video games.

Among common applications, video processes or generates by far the most data. Take the popular 1080p, 24 FPS, 8-bit color format as an example: each second must show the user 24 frames, each frame has 1920×1080 pixels, each pixel has three color channels, and each channel is 8 bits (1 byte) of data. That works out to 3×1920×1080×24 = 149,299,200 bytes ≈ 142.4 MiB. For 10-bit 4K video at 60 FPS it rises to about 1.74 GiB, and that is just one second of video.
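The arithmetic can be checked with a short script (a minimal sketch; the function name and the choice of three color channels per pixel are mine, matching the formats named in the text):

```python
def raw_video_bytes_per_second(width, height, fps, bits_per_channel, channels=3):
    """Bytes of raw (uncompressed) pixel data a video stream represents per second."""
    bits_per_frame = width * height * channels * bits_per_channel
    return bits_per_frame * fps // 8

MIB = 1024 ** 2
GIB = 1024 ** 3

# 1080p, 24 FPS, 8-bit color: 149,299,200 bytes/s, about 142.4 MiB/s
r1080 = raw_video_bytes_per_second(1920, 1080, 24, 8)
print(r1080, r1080 / MIB)

# 4K (3840x2160), 60 FPS, 10-bit color: about 1.74 GiB/s
r4k = raw_video_bytes_per_second(3840, 2160, 60, 10)
print(r4k / GIB)
```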

If you are curious, try the same arithmetic for a 2-hour movie, a 30-episode series of 45-minute episodes, or 20 hours of game time: the computer has to process an enormous amount of data.

The video we watch falls roughly into two categories. One is pre-recorded content, such as films and TV, played back directly on the client. The other is 3D games, which define a set of models, scenes, and camera paths, and compute the motion of objects in real time. Setting aside screenwriting, performance, editing, and so on, generating video by recording is trivially easy today: just point a camera or phone and shoot. Real-time generation is a little more involved, but once a scene is built, you can produce longer video by defining more motion and camera trajectories, or by writing programs that generate those trajectories from user input.

For most other applications, producing data at this scale is far harder. A sensor sampling 64-bit values at 1 kHz yields only 8,000 bytes per second; even a system polling tens of thousands of such channels collects only around 80 MB per second. Manually entered or generated data is smaller still. Of course, applications with tens of millions or even billions of users accumulate huge volumes of user-generated data, but how many such applications exist in the world? It is not that other systems can never exceed video in data volume; rather, acquiring that much data costs far more than recording video, and those systems will never be as ubiquitous as video applications. Today anyone with a smartphone can open the camera app and record high-definition video.

So, in 2020, is an AMD CPU or an Intel CPU the better choice? For pure video playback, the answer barely involves the CPU at all: that work is now essentially handled by the graphics card.
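For contrast, the sensor example works out as follows (a sketch; the 10,000-channel count is just a concrete stand-in for the "tens of thousands" in the text):

```python
# One channel sampling 64-bit values at 1 kHz, scaled to many channels.
SAMPLE_BITS = 64
RATE_HZ = 1_000

bytes_per_channel = SAMPLE_BITS // 8 * RATE_HZ   # 8,000 bytes/s per channel
system_total = bytes_per_channel * 10_000        # 80,000,000 bytes/s for the whole system
print(system_total / (1024 ** 2))                # ~76.3 MiB/s, the "about 80 MB" figure
```

Even that whole-system figure is barely half of one second of raw 1080p24 video.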
Whether it is decoding pre-recorded video or generating 3D video (3D rendering), dedicated hardware circuits are far more efficient than a general-purpose processor like the CPU. Still, the GPU's architecture makes it a poor fit for some stages of the pipeline, which remain CPU work; high-quality software encoding with the mainstream H.264/H.265 codecs is one example. 3D rendering, especially ray-traced rendering, also used to be the CPU's stronghold, but GPU renderers can now handle many more kinds of scenes, and the newest generation of GPUs has built-in ray-tracing hardware, further weakening the importance of CPU performance. For workloads the GPU can accelerate, performance depends mainly on the graphics card; with equal graphics performance, a CPU with higher single-core performance usually has the edge, and Intel still holds a modest lead there, so I won't dwell on it.

For 3D rendering done on the CPU, AMD trails Intel somewhat at equal core counts, but at the same price it offers more cores and higher overall performance. I'll say nothing of Cinebench, AMD's favorite showcase. In other CPU-based renderers, AMD's price/performance advantage is just as clear: in V-Ray, the 8C/16T 3800X matches the identically specified 9900K, to say nothing of the 12C 3900X and 16C 3950X. On the HEDT platform, even the entry-level 24C 3960X beats the 18C 10980XE [1]. SolidWorks rendering benchmarks show similar results [2]. Note, however, that beyond rendering, the Intel platform leads in quite a few other common operations.
That said, apart from some tests where the 64C 3990X falls significantly behind due to optimization problems and low clocks, Intel's lead is not large. Take Premiere Pro, the most common tool for editing recorded video, as an example [3]. In the editing preview stage, where responsiveness matters most, more cores still help slightly, but the margin is small, and at equal specifications the Intel platform wins. In the far more time-consuming export stage, the multi-core advantage is obvious for everything except the 64-core 3990X; at equal specifications Intel still leads, with the 8C/8T 9700K beating the 8C/16T 3800X, but the 24C/32C 3960X/3970X pull far ahead. Data for Intel's Xeon W-3200 platform is missing there, but comparative data from another review [4] shows its overall score trailing the 3960X while leading by a wide margin in individual tests, especially ProRes export. Other post-production software, such as Adobe's After Effects [5] and Blackmagic's DaVinci Resolve [6], behaves much like Premiere's preview: multi-core helps, but not by much, and the 64C 3990X, hampered by optimization problems and low clocks, falls short of the 24C/32C models. I won't paste the test data here; if you are interested, follow the reference links at the end of the article.

Is Intel's optimization really invincible?

Finally, on the question of optimization: the Intel platform does hold a strong advantage for teams with serious development capability, the time, and an ample technical budget to do high-performance computing and extract the most from the CPU through optimization. But the CPU is, after all, a general-purpose processor. For a single dedicated project, today's industry can turn to FPGA programming or even a custom CPU/accelerator with a dedicated instruction set; even a team without hardware development capability can, for most projects, fall back on GPU computing and get better performance and efficiency than a CPU, without being tied to the x86 platform. Deep learning, currently in vogue, is the obvious example: Google developed its own TPU; Apple, Huawei, and Qualcomm ship their own AI acceleration in their phone SoCs; the open RISC-V instruction set has drawn enormous attention over the past two years; and, more commonly, teams without hardware capability simply do their training on GPUs. As for the broader population of software developers, the main contributors are often programmers only a few years out of school, and quite a few modules are outsourced; expecting them to optimize for a particular CPU architecture is unrealistic, and end users cannot be expected to at all. That is the reality of the industry. Divorced from that reality, the claim that Intel's optimization is far better than AMD's does not hold up.

Intel's countermeasures?

Intel's CEO has said the company no longer pursues CPU market share, and that Intel is more than a CPU company. How should we read that?

In my view, not pursuing market share does not mean giving up the CPU. As the pioneer of x86 and the author of multiple industry standards, Intel has invested heavily in both hardware and software to push the industry forward, an investment AMD cannot match in the short term. In fact, beyond the video workloads discussed above, Intel's CPUs really do dominate in many areas: in ordinary programs without multi-threaded optimization, AMD's extra cores cannot show their strength; and in high-performance applications that use SIMD instructions such as AVX-512, which reach few end users but are appearing in more and more software, the Intel platform is more efficient. In 3DPM, for example, a 14C 7940X with AVX-512 support outperforms the 64C 3990X [7].

Conclusion

For end users: unless you have serious development capability and can optimize your application for a specific CPU, you will be running applications provided by third parties, so just look at the reviews, half-a-bucket-of-water as they are, and pick whichever is kinder to your wallet. After all, if a review outlet like AnandTech is half a bucket of water, the average DIYer is a quarter bucket, and a novice user may not even have one percent of a bucket.

