Snapdragon 820 vs Exynos 8890: leaked multi-core Geekbench result chart shows Samsung advantage

In the span of just over a week, three upcoming high-end mobile SoCs have been officially announced: the Samsung Exynos 8890, the Qualcomm Snapdragon 820, and the Huawei Kirin 950. While the technical specifications of these chips have now been published, what really matters to us is their number-crunching ability.

With this in mind, we'd like to show you a leaked chart that claims to reveal the multi-core performance of the aforementioned chips.

The allegedly leaked chart claims to reveal the multi-core performance - as recorded by the popular CPU testing tool Geekbench - of the Snapdragon 820, the Exynos 8890, the Kirin 950, the Apple A9, and the MediaTek MT6797 "Helio X20".

As per the leaked chart, the Samsung Exynos 8890 will come with insane amounts of multi-core processing power. The octa-core processor with four stock ARM Cortex-A53 cores and four custom M1 cores appears at the top of the chart with a score of 7400 points. For comparison, the Exynos 7420 that powers the Samsung Galaxy S6 series usually reaches a little over 5000 points in the same tests.

The chip with the largest number of CPU cores, the deca-core MediaTek Helio X20, came in second place with a score of 6500 points. The octa-core Huawei Kirin 950 is listed with just 100 points fewer than the X20. The quad-core CPU inside the Qualcomm Snapdragon 820 appears to lag a bit behind its rivals and is listed with a score of 5300 points.

The dual-core nature of the Apple A9 shows its limits during multi-core tests. With just 4436 points, the chip trails behind in this test.

Last week, the same leakster posted a chart that claims to reveal the single-core performance of the chips. In those tests, the Apple A9 led the way with the best single-core Geekbench performance.

Since these chips will likely be the major players in the global smartphone market, we're naturally craving more details about them. Until we're able to test these chips for ourselves, we won't be able to comment on their real-world performance. In the meantime, however, these alleged leaked benchmark results help us manage our expectations.

source: Weibo via GforGames



1. zeeBomb

Posts: 2318; Member since: Aug 14, 2014

Late, Pa? Seems like I've read this somewhere already...

25. TechieXP1969

Posts: 14967; Member since: Sep 25, 2013

They were too busy trying to sell Apple's single-core score, which means NOTHING except to fanbois who want to hear their losing device win at something, no matter how useless it is.

52. strudelz100

Posts: 646; Member since: Aug 20, 2014

You are just too ignorant and unwilling to educate yourself and understand how mobile SoCs work.

60. marorun

Posts: 5029; Member since: Mar 30, 2015

Care to explain in detail instead of being an ***?

56. QWERTYphone

Posts: 654; Member since: Sep 22, 2014

This is great. I want to buy a Galaxy S7. BUT, Samsung will NOT get my money unless they offer these features from my S5: removable battery, microSD, durable construction.

57. Skim.

Posts: 4; Member since: Sep 25, 2015

Keep your money then lol.


Posts: 35; Member since: Nov 15, 2015

And premium materials. And 6gb Ram.

59. Brewski

Posts: 691; Member since: Jun 05, 2012

It says right in the article this is from a week ago...

2. Felix_Gatto

Posts: 942; Member since: Jul 03, 2013

I don't care about those numbers; no one will notice the difference in real life.

18. Zylam

Posts: 1816; Member since: Oct 20, 2010

Yup, benchmarks are just for bragging. Android doesn't get its share of apps optimized for design and graphic art to warrant any interest in the raw power of the chip, even if they are beasts. Google needs to create a slew of design and editing apps for Android. Also, what the heck happened to all the Snapdragon tablets? My Iconia is 4 years old now and I really want to replace it with a Snapdragon 805 or better 10-inch tablet. No Android OEM seems interested in tablets anymore. And I don't want TouchWiz.

37. Techist

Posts: 311; Member since: Jan 27, 2015

The Sony Xperia Tablet Z4 with the Snapdragon 810 has been out for quite a while now, and there have been others before it.

38. torr310

Posts: 1659; Member since: Oct 27, 2011

Xperia Tablet Z4 is so expensive compared to other Android tablets.

62. Zylam

Posts: 1816; Member since: Oct 20, 2010

Yea, the Z4 is ridiculously expensive; killer tablet, but not worth the price. If it had a 400 dollar variant I'd get one in a heartbeat.

47. j2001m

Posts: 3061; Member since: Apr 28, 2014

Then you need the new Google tablet with the keyboard.

64. Zylam

Posts: 1816; Member since: Oct 20, 2010

I had a look at the Pixel C in a hands-on; they say it's pretty heavy and thick, and it's also gonna be crazy expensive. All that metal really adds to the weight, but something thin and light would be easier to handle.

61. marorun

Posts: 5029; Member since: Mar 30, 2015

Zylam, here you go: the Sony Z4 Tab uses an SD810 (and we can both agree a tablet has more than enough room for heat).

63. Zylam

Posts: 1816; Member since: Oct 20, 2010

Yea, but the Z4 is ridiculously expensive; killer tablet, but not worth the price. If it had a 400 dollar variant I'd get one in a heartbeat.

26. TechieXP1969

Posts: 14967; Member since: Sep 25, 2013

Depends on what you do with your phone in real life. How about you speak for yourself.

3. TyrionLannister unregistered

Seems overkill. But we can use power saving mode and still get SD 820-like power with much higher efficiency.


Posts: 35; Member since: Nov 15, 2015

This is still powerless. I'd take an A72 clocked at 2.7GHz, with an extra battery carried around. How is this overkill? These CPUs are not even close to an i3. To you it may be overkill; for others it's still scrap. If James Bond wanted a custom-made phone, imagine what specs it would have.

10. Ordinary

Posts: 2454; Member since: Apr 23, 2015

My i7 960 gets around 9220 in multi-core, so I think for a phone that has no cooling fans, 7400 is more than enough.

11. lpratas

Posts: 398; Member since: Nov 09, 2011

That is where you are mistaken. The older i3s, like the first-generation Lynnfield and the Sandy Bridge i3s, must be weaker. Even the Ivy Bridge ones must be weaker.

14. JakeH

Posts: 89; Member since: May 01, 2014

what the hell do you need an i3 in a mobile phone for?

16. TyrionLannister unregistered

I have an i7-4700HQ and it gets around 11400. A ULV i7 gets around 5000-6000. A ULV i3 is not even 5000. This score is even higher than what the Surface Pro 4 produces. So I simply don't get what you are referring to. Unless you're comparing to a desktop Skylake i3, your point makes absolutely no sense to me.


Posts: 35; Member since: Nov 15, 2015

Lmao, the power of a mobile chip vs Intel's i3 is a night-and-day difference. An i3 requires a fan to cool down vs this... Try using this chip on a desktop and expect lags and a hell of a lot of stutters. Try running CAD and opening heavy photos on a mobile. It takes ages.

23. TyrionLannister unregistered

Heavy photos and CAD have no relation to the CPU. They task the RAM, disk and GPU, in which a desktop is 10-20x more powerful. Try running CAD on a desktop with an i7 and a 5400 RPM HDD in it; it will take ages too. The i3 is way less efficient. And no, the ULV designs don't require fans.

31. TechieXP1969

Posts: 14967; Member since: Sep 25, 2013

But a desktop vs a mobile CPU that may have similar benchmarks doesn't mean they can pull the same weight. x86 CPUs/GPUs are pushing far more lines of code than some $5.00 app on a mobile device. It's just not the same, even in theory. Take an ARM chip and try to push a game like GTA or Crysis, even on a 1080p display, and I am betting the chip won't come even close to 60FPS. I just don't see it. Windows itself has over 40M lines of code; those mobile apps don't even have 1/10th of that. You are talking about a mobile CPU/GPU pushing probably less than 20k lines of code vs a workhorse like x86. An ARM chip would probably blow up. You saw how terrible it was just trying to push Windows/Office for ARM? Sure, it worked, but it wasn't a great experience.

41. TyrionLannister unregistered

You don't even know what you're talking about, do you? Do you even have an idea how an operating system works? These processors can easily push Windows 10. Are they as powerful as desktop-grade processors? No. A desktop i7 can easily push 25-30k on Geekbench, and Intel Xeon processors can push even 200k. But that's not what we're comparing with. This chip is more powerful than a ULV i3, and a cross-platform benchmark like Geekbench precisely measures that. You never run the entire code of Windows. It is divided into parts; thousands and thousands of DLL files comprise that code, and you only have to run the few that are in memory now. Compiling the full OS can easily take 30-40 minutes; running it all at once would be impossible. Besides, Android is not much behind Windows in code size; it may be one order of magnitude less at best. x86 is a power-hungry monster, but these chips are getting closer and closer each year. And now they are better than the low-end i3 chips. They can never be as good as desktop CPUs, but that was never the point.


Posts: 35; Member since: Nov 15, 2015

BUT..... Sorry, I'm no computer specialist, but I think how the CISC CPU is described is wrong. I think it's more like this: CISC can execute one complex operation, while RISC executes several simple operations. E.g. CISC: load the number at address 1, load the number at address 2, operate on them together and store the result at address 3, all in one operation. While RISC: Operation 1: load the number at address 1. Operation 2: load the number at address 2. Operation X: operate on them. Last operation: store the result at address 3. So it appears there is not much of a difference in speed, since CISC needs more time to execute one complex instruction and RISC executes lots of simple instructions. The result should be the same, so we don't get very much out of this example.

But what about more complex instructions, like: calculate e^((-4)^(-1))*(-ln(432))? If I got it right, the CISC CPU should more or less understand this instruction and need only one operation to execute it, while the RISC CPU needs to do loads and loads of operations just because of the "^", "*" and "ln()". And that's where CISC CPUs win over the RISC CPUs, I think.

I think the main reason for ARM-based CPUs to be at the top right now is that the code is very simple, the tasks are very simple, and the machines are highly optimised for specific tasks, while x86-based CPUs shine especially with complex code; they are less optimised for a specific task, but they can do just about everything. Both architectures come with pros and cons:

CISC - Pro: "complex coding", which saves lots of memory storage for a program; some very powerful instructions are built into the architecture, and when they are needed, nothing else can beat them. - Con: very complex architecture, which makes manufacturing very troublesome; high power usage.

RISC - Pro: simple architecture, easier to produce, very power efficient. - Con: "simple coding"; complex tasks need lots of operations, since the architecture is too simple for higher instructions, which costs lots and lots of memory storage.

So while RISC CPUs are great for mobile and all the simple stuff, they are no good for complex stuff. Just imagine playing Assassin's Creed Unity on an ARM-based computer. The program suddenly doesn't need 50GB of storage but 200 or more GB of storage. Or we could highly optimise the machine for that game, but then we would have a machine specifically for ACU and nothing else. But what if I wanna do some rendering stuff? Or use my arcGIS? Or just for fun make the machine calculate climate change for the next 50 years or so? Nope, not gonna happen. Dayum!

And that's why I think ARM-based CPUs will not win over the PC in the PC market, but in the mobile segment they are without a doubt at the top. But please guys, don't give comments that sound like ARM is the future and Intel is pure sh*t. Both have their pros and cons.
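The "one complex operation vs several simple operations" contrast described in the comment above can be sketched in a few lines of Python. The two mini-ISAs below are invented purely for illustration; they are not real x86 or ARM instruction encodings, and the point is only the instruction count, not the answer:

```python
# Illustrative sketch only: a made-up load/store (RISC-style) machine
# vs a made-up memory-to-memory (CISC-style) instruction.

def run_risc(mem, a1, a2, a3):
    """Load/store style: each instruction does one simple thing."""
    regs = {}
    program = [
        ("LOAD", "r1", a1),           # r1 <- mem[a1]
        ("LOAD", "r2", a2),           # r2 <- mem[a2]
        ("ADD", "r3", "r1", "r2"),    # r3 <- r1 + r2
        ("STORE", "r3", a3),          # mem[a3] <- r3
    ]
    for ins in program:
        op = ins[0]
        if op == "LOAD":
            regs[ins[1]] = mem[ins[2]]
        elif op == "ADD":
            regs[ins[1]] = regs[ins[2]] + regs[ins[3]]
        elif op == "STORE":
            mem[ins[2]] = regs[ins[1]]
    return len(program)  # number of instructions issued

def run_cisc(mem, a1, a2, a3):
    """CISC style: one complex instruction does all four steps."""
    mem[a3] = mem[a1] + mem[a2]       # hypothetical "ADDM a3, a1, a2"
    return 1

mem_r = {0: 5, 1: 7, 2: 0}
mem_c = {0: 5, 1: 7, 2: 0}
risc_count = run_risc(mem_r, 0, 1, 2)   # 4 simple instructions
cisc_count = run_cisc(mem_c, 0, 1, 2)   # 1 complex instruction
# Both machines leave 12 at address 2; they differ only in how many
# instructions were issued to get there.
```

As the comment argues, the result is identical either way; the trade-off is that the RISC machine issues more (but simpler, easier-to-pipeline) instructions, while the CISC machine issues fewer (but more complex) ones.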

50. TyrionLannister unregistered

You are entirely right. This is a huge advantage for x86 for now. The instruction set of something like MIPS or ARM has like 20 instructions while x86 has like 100 or more. It even has complex instructions for string copy and all. It requires complex coding and very complex pipeline control; if a complex instruction encounters an error, we roll back a few cycles. ARM CPUs are good at simple tasks. The pipeline is only 5-6 stages long and optimising is easy. It also requires much less computation.

I never said ARM is the future. In fact, the high-end market is far from it. What I'm talking about is the low-end processors like the ULV range from Intel or the Core M range. In a specific load containing heavy instructions, x86 can win, and in a different case, ARM can win. ARM specialises in making the common case fast, x86 in the complex. Now regarding Geekbench, it has a load of inputs in different scenarios. Overall, an Exynos 8890 will be faster than an x86 in mixed usage by this benchmark. Can you make a workload in which x86 is faster? Obviously yes; something which does a lot of string copying will be enough to smoke ARM chips. On the other end, something comprising simple tasks like plain integer arithmetic will lead to the ARM one taking the cake. There is no such thing as absolutely faster; it depends on the workload. But you can't say ARM is inherently slow because it can't do a particular instruction as fast as x86; I can point out another instruction proving the contrary.

The debate was never about ARM winning the PC market. I never even hinted at that. ARM will have to make a new architecture with a much wider pipeline and a much different instruction set for that. Regarding Assassin's Creed: the 50 GB is never in memory. The part of the game you're playing resides in the VRAM (which ranges from 1-12 GB depending upon your GPU). The CUDA cores and related architecture take care of that. ARM vs x86 only comes in to compute physics in the game, and those are vector computations. The shading, tessellation, lighting and texturing, which take the cake in memory, are handled by OpenGL (or DirectX), and the GPU architecture takes care of that and rasterization. I hope you get my point.

