Recently, Intel held a "Platform Advantage" briefing for tech journalists from the APAC region, aiming to once again prove why Intel processors are a better bet overall than AMD's recent offerings. I had the opportunity to attend the briefing, and while Intel does have a point, the pitch wasn't really convincing. For context, many of the slides shown in the briefing were leaked back in June by AdoredTV and have already received their fair share of criticism from the tech press.
I've always smelled a rat when it comes to first-party performance numbers, not because they usually favor the company's own product, but because the comparisons often just don't add up. Take the Intel Comet Lake-H launch earlier this year, for example. During the presentation, Intel showed the gaming benefits of upgrading from a Core i7-7820HK to a Core i9-10980HK. Everything was fine until the fine print revealed that the GPUs used belonged to different generations! Sure, the whole point was to show the overall benefit of upgrading from a three-year-old PC, and given that notebook GPUs cannot usually be upgraded across generations, Intel didn't have much leeway here. Still, the advantage of having a newer GPU alongside the Comet Lake-H CPU can definitely skew the comparison, making it highly imprecise even at 1080p.
Intel's recent presentation once again echoed this oddity. This piece is not written by an AMD fanboy, nor does it critique Intel just for the sake of it. But when a company advertises a 5 GHz boost and gaming performance as the primary incentives for everyone, while conveniently ignoring that such a boost requires specific operating conditions and is heavily OEM-dependent, a sterner evaluation is warranted.
Benchmarking relevance: Who gets to call the shots?
AMD's 7 nm Ryzen 3000 series has had a good run in most conventional test routines adopted by the majority of tech publications, including ours. Suddenly, Intel found itself batting on the back foot and went on the defensive, saying, "come beat us in real-world gaming". A few months ago, Intel CEO Bob Swan said at Computex 2020 that there needs to be a shift in focus from "benchmarks to benefits and impacts of the technology".
As an end user, you ultimately want a PC that works well for you. So while Intel's supposed focus shift is good in a utopian sense, it does not encompass all use cases. Intel says that "performance should be measured with real and relevant applications and usages."
My question is, who gets to decide what is relevant for you or me?
Following its 2010 settlement with the FTC, Intel is bound to declare in its briefings that "Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors", specifically referring to SYSMark and MobileMark. Intel said that it prefers SYSMark over PCMark 10 to better illustrate the popular software workflows that a majority of users supposedly rely on. While welcome, this really isn't a one-size-fits-all solution. Many users do run the workloads tested by SYSMark, but there is also a sizeable number who don't.
SYSMark incorporates popular Adobe apps, several of which are highly single-threaded but are also increasingly GPU-aware. The benchmark does not evaluate the GPU portion, yet the GPU does kick in during "real-world" use. So when Intel claims that SYSMark is more reliable because it encompasses real-world applications, actual usage can differ significantly if, for example, you use GPU-accelerated features such as filters in Photoshop.
While Intel believes BAPCo's SYSMark signifies "real and relevant" PC usage, AMD does not echo the same sentiment. In fact, AMD has a contrarian view that SYSMark does not represent typical usage.
Without getting into the politics of who is right, let's just accept that both SYSMark and PCMark 10 have their own relevance; whether one factors them into a purchase decision is a different debate altogether. It must be emphasized that each benchmark is just one piece of a larger puzzle; no single suite can paint the entire picture and decide a winner. For many users, a PC or laptop is a fairly long-term investment, often involving planning for both present and future workloads. Let the user make an informed choice by taking both sets of results into account. It really isn't that hard, is it?
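To make that informed choice concrete, one simple and common way to weigh multiple suites at once is a geometric mean of normalized scores, so that no single benchmark dominates the verdict. Here is a minimal sketch in Python; the two ratios are made-up numbers purely for illustration, not results from any real suite:

```python
from math import prod

def geomean(scores):
    """Geometric mean of a list of positive scores."""
    return prod(scores) ** (1 / len(scores))

# Hypothetical normalized results (competitor = 1.0 baseline) from two suites;
# the numbers below are invented for illustration only.
sysmark_ratio = 1.08   # CPU "A" leads in one suite by 8%...
pcmark_ratio = 0.94    # ...but trails in the other by 6%

combined = geomean([sysmark_ratio, pcmark_ratio])
print(f"Combined relative score: {combined:.3f}")
```

In this toy example the two suites nearly cancel out, which is exactly the point: looked at together, neither vendor's pet benchmark settles the question on its own.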
Let's talk about a benchmark that is fast becoming an Achilles' heel for Intel: Cinebench. Take a look at the following synthetic tests between comparable AMD and Intel CPUs across laptops and desktops. Both Intel and AMD lead in certain tests while trailing in others.
Of late, Intel has been trying to drive home the point that Cinebench isn't really a benchmark to go by, as it does not represent a real-world use case. While not entirely false, I personally don't subscribe to this thinking, for several reasons:
- Save for Ice Lake and the upcoming Tiger Lake, all laptop and desktop CPUs released in the past five years or so have essentially been rehashes of the 14 nm Skylake architecture. To my knowledge, Intel has never decried or interfered in the use of any benchmark comparison all these years. Why the sudden consternation against Cinebench or synthetic benchmarks in general?
- Will Intel stop using Cinebench or other synthetic tests altogether, even the ones in which it manages a good lead? As far as I can tell, that will never happen: it would complicate the whole process of comparison with previous generations, or even with the competition. When Intel eventually starts throwing in more cores and they perform well, I am pretty sure it will brag about Cinebench and other such benchmarks too.
- Synthetic benchmarks enable a fairly standardized comparison between hardware and are a quick way to gauge performance improvements across generations. To claim that customers focus only on specific use cases such as gaming, and do not consider the overall performance of the CPU across a battery of tests (synthetic and real-world), is misleading, to put it mildly.
Not a true apples-to-apples comparison
During the course of the presentation, Intel showed a few comparisons with similarly spec'd AMD systems. I will briefly touch upon the H-series and desktop portions of the talk for now.
AMD Ryzen 7 4800H vs Intel Core i7-10750H
Intel picked three similarly configured Lenovo Y7000 laptops, the only differentiating factor among them being the CPU: the 8C/16T Ryzen 7 4800H, the 4C/8T Core i5-10300H, and the 6C/12T Core i7-10750H. Going by Intel's own numbers, it is not hard to see why the company wants to move away from synthetic tests, where it is clearly at a disadvantage.
AMD is shown with clear leads in CPU-intensive tests, whereas Intel is shown with considerable advantages in 1080p gaming, with even the Core i5 supposedly maintaining good leads. Sure, Intel has a definite advantage in gaming, especially in titles such as League of Legends that favor fewer cores and higher clocks.
However, AMD has managed to significantly close that gap. This is not the Bulldozer or the original Zen era any more. In fact, we have seen the Ryzen 9 4900HS perform better than most Core i9 Coffee Lake-H Refresh laptops in our own tests. Hardware Unboxed's recent video comparing the gaming performance between Ryzen 7 4800H and the Core i7-10750H in what is essentially the same XMG Core 15 chassis with similar specs throws additional light on this as well.
A vast majority of popular AAA titles find it difficult to scale beyond six cores. Combine that with Intel's frequency advantage and you have a great recipe for the highest possible frame rates. The sales pitch would have been perfect if the company were solely targeting gamers. However, Intel seems to be pushing gaming as the definitive use case for every prospective laptop buyer, neglecting the overall benefit of getting more cores at a similar TDP.
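The core-count-versus-clock trade-off can be sketched with a toy Amdahl's-law model: if only part of the per-frame work parallelizes, a higher clock on fewer cores can beat more cores at a lower clock. The clocks below roughly mirror the advertised boost clocks of the Core i7-10750H (5.0 GHz) and Ryzen 7 4800H (4.2 GHz), and the 70% parallel fraction is purely an illustrative assumption, not a measurement:

```python
def relative_fps(freq_ghz, cores, parallel_fraction=0.7):
    """Toy Amdahl's-law model of CPU-bound frame throughput.

    Assumes a fixed fraction of per-frame work spreads across all
    cores; the rest is serial and scales only with clock speed.
    The default 0.7 parallel fraction is an illustrative guess.
    """
    amdahl_speedup = 1 / ((1 - parallel_fraction) + parallel_fraction / cores)
    return freq_ghz * amdahl_speedup

# Hypothetical boost-clock scenarios for illustration
six_core_high_clock = relative_fps(5.0, 6)   # fewer cores, higher clock
eight_core_low_clock = relative_fps(4.2, 8)  # more cores, lower clock
print(f"6C @ 5.0 GHz : {six_core_high_clock:.2f}")
print(f"8C @ 4.2 GHz : {eight_core_low_clock:.2f}")
```

In this sketch the six-core, higher-clocked part comes out ahead (about 12.0 vs 10.8 relative units), which is exactly why Intel leans on gaming; push the parallel fraction toward 1.0, as multi-threaded creator workloads do, and the extra cores win instead.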
AMD Ryzen 9 3900XT vs Intel Core i7-10700K
Even on the desktop side, Intel's singular proposition is that the US$387 8C/16T Core i7-10700K offers far more benefits than the US$499 12C/24T Ryzen 9 3900XT on comparable test benches with an NVIDIA GeForce RTX 2080 Ti and 16 GB of DDR4 RAM. According to Intel, games such as Total War: Warhammer II, Rocket League, and League of Legends see double-digit performance gains. Intel claims that the Ryzen 9 3900XT leads the Core i7-10700K in just six of the 30 games tested, whereas the Core i7-10700K is on par or better in the other 24.
This comparison is flawed in many respects. First of all, the Core i7-10700K actually retails for a much higher price (up to US$410 or more), whereas the Ryzen 9 3900XT can be had for about US$20 less on Amazon. You can save even more by opting for the Ryzen 9 3900X at a lower price with nearly negligible performance loss. And for a real core-to-core comparison, how about the fact that the 8C/16T Ryzen 7 3700X retails for just about US$283 and includes a decent Wraith Prism cooler in the box?
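A quick back-of-the-envelope pass over those prices makes the value argument plain. The figures below are the street prices as quoted above, with the Ryzen 9 3900XT assumed at roughly US$390 (about US$20 below the i7-10700K's US$410); core and thread counts are from the public specs:

```python
# name: (approx. street price in USD, cores, threads) -- prices as quoted
# in the text; the US$390 for the 3900XT is an assumed "~US$20 less" figure.
cpus = {
    "Core i7-10700K": (410, 8, 16),
    "Ryzen 9 3900XT": (390, 12, 24),
    "Ryzen 7 3700X":  (283, 8, 16),
}

for name, (price, cores, threads) in cpus.items():
    print(f"{name}: ${price / cores:.2f}/core, ${price / threads:.2f}/thread")
```

On these assumed prices the 3900XT delivers its cores and threads noticeably cheaper than the i7-10700K, and the 3700X undercuts both, cooler included.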
I am still flummoxed as to why Intel chose this comparison in the first place. Anyone investing in a 12-core chip will surely subject it to a variety of intense workloads beyond just gaming. And for the times when you do need to unwind, the Ryzen 9 3900X/XT provides ample gaming performance. Heck, you can even disable SMT and/or CCXs in Ryzen Master if you really want that lower-core-count benefit. Oh, and did I mention the socket AM4 backward-compatibility advantage that Ryzen CPUs currently enjoy?
Time for more than just an architectural change
Recently, AMD's market cap briefly crossed the US$100 billion mark, with some market analysts predicting a 50% market share for AMD by 2H 2021 (this is just a prediction, and AMD will have to do a lot of groundwork to achieve it). Mindfactory sales figures for April 2020 show AMD CPUs outselling Intel's by nearly 10 to 1. Though not necessarily holistic, these figures do point toward the general trend and perception in the market.
And for all the bragging Intel has been doing about gaming, the latest Steam processor usage survey suggests that AMD has been quickly regaining lost ground, although Intel continues to rule the roost here.
Taking all this into account, there is every reason for Intel to be worried about AMD eating into its prized market share. So worried, in fact, that there were rumors Intel was internally mulling a cool US$3 billion to thwart AMD's onslaught in the CPU business.
Personally, I feel that the time has now come for Intel to move beyond just the engineering and architectural side of things. The company is now at a very critical juncture. While 10 nm Tiger Lake and Alder Lake are expected to arrive on time, the 7 nm process is being pushed to 2022. By that time, AMD will have had 5 nm chips in the market. Intel's process may offer more density at 7 nm than what TSMC provides at 5 nm, but we also have to factor in optimization, yields and all such complexities associated with a new process — Ice Lake is the best example of that. Further complicating matters is Apple's shift to ARM-based Macs and the possibility of an increasing number of Windows on ARM laptops in the months ahead.
What the company needs now, in my opinion, is a fresh perspective, acknowledgement of current deficiencies, and a clear vision for the future.
And for Gordon Moore's sake, these egregious comparisons need to stop. Now.
Source(s)
Intel Press Brief