Nvidia Tegra K1 smokes the iPad Air and Snapdragon 800 in graphics benchmarks

Nvidia dropped a bombshell with its new Tegra K1 chipset last week, pairing quad-core Cortex-A15 processors with Kepler-class graphics for mobile. The 192-core GPU is certainly its main selling point, and now we have the benchmarks to gauge exactly how much faster it is than anything else currently on the market.

As you can see in the GFXBench score table below, not only does it score about 2.5x higher in graphics performance than the current cream of the Qualcomm crop, the Snapdragon 800, but it is also more than twice as fast as the blazing A7 chip in the new iPad Air. In addition, it scores a tad higher than Intel's HD 4400 integrated graphics that ship with the Haswell processor line, an amazing feat in itself, given that the K1 will soon find itself in slim and light mobile devices.
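To make those multiples concrete: GFXBench-style speedup claims are just ratios of offscreen frame rates. Here is a minimal C++ sketch; the fps values are illustrative placeholders, not the actual scores from the table:

#include <cstdio>

int main() {
    // Illustrative offscreen fps values - placeholders, not the real scores.
    const double k1_fps   = 60.0;  // Tegra K1 (assumed)
    const double s800_fps = 24.0;  // Snapdragon 800 (assumed)
    const double a7_fps   = 27.0;  // Apple A7 / iPad Air (assumed)

    // The quoted "2.5x" style multiples are simply ratios of frame rates.
    printf("K1 vs Snapdragon 800: %.1fx\n", k1_fps / s800_fps);  // ~2.5x
    printf("K1 vs Apple A7:       %.1fx\n", k1_fps / a7_fps);    // ~2.2x
    return 0;
}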

[GFXBench score table - source: Wccftech]


50 Comments

1. tech2

Posts: 3487; Member since: Oct 26, 2012

Any word on when it's hitting the conveyor belt, or when the earliest device is launching? Don't let us down in the battery department this time, Nvidia! There is a reason why Exynos uses the big.LITTLE architecture with A15s. They're too power hungry.

19. brrunopt

Posts: 742; Member since: Aug 15, 2013

That's why Nvidia uses the 4+1 configuration...
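For context, the 4+1 scheme pairs four performance A15 cores with a single low-power companion core, and the chip migrates work to the companion core under light load. A purely conceptual C++ sketch of that decision follows; the threshold and switching logic are illustrative assumptions, not Nvidia's actual implementation:

#include <cstdio>

// Conceptual model of the 4+1 cluster switch: below some load threshold,
// work runs on the single low-power companion core; above it, the four
// A15 cores take over. Threshold and states are illustrative assumptions.
enum Cluster { COMPANION_CORE, PERFORMANCE_QUAD };

Cluster pickCluster(double cpuLoad) {
    const double kThreshold = 0.25;  // arbitrary illustrative cutoff
    return cpuLoad < kThreshold ? COMPANION_CORE : PERFORMANCE_QUAD;
}

int main() {
    const double loads[] = {0.05, 0.20, 0.60, 0.95};
    for (double load : loads)
        printf("load %.2f -> %s\n", load,
               pickCluster(load) == COMPANION_CORE ? "companion core"
                                                   : "4x A15 cluster");
    return 0;
}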

22. Shubham412302

Posts: 549; Member since: Nov 09, 2011

The Adreno 420 is no match for the K1, as it's just 40% faster than the Adreno 330, while the K1 is said to be many times more powerful than the A7 in Unreal Engine 4 tech demos.

35. PhenomFaz

Posts: 1236; Member since: Sep 26, 2012

Yup, battery life is gonna be crucial... but in the meantime I've got one word for Nvidia... DAMN! :)

2. darkskoliro

Posts: 1092; Member since: May 07, 2012

What the djsjaoka, I just bought an ultrabook with the HD 4400 and I'm getting outperformed by a mobile device? -.-

3. hurrycanger

Posts: 1754; Member since: Dec 01, 2013

I know right xD I have a desktop with HD 2000. Guess how awesome I feel...

13. hipnotika

Posts: 353; Member since: Mar 06, 2013

Because Intel HD graphics is the worst GPU on the planet.

20. brrunopt

Posts: 742; Member since: Aug 15, 2013

So you haven't seen the HD 5200, which performs better than the GT 740M. @darkskoliro The HD 4400 is not a powerful GPU; it wasn't designed to be a powerful GPU, just powerful enough for normal use...

23. yowanvista

Posts: 341; Member since: Sep 20, 2011

Any dedicated Kepler card beats Intel's IGP. The Iris Pro 5200 relies on system RAM for video memory, and that is way slower than the GDDR3/5 found on any modern card. In theory the GT 740M outperforms the 5200, but both will run games at less than 10 fps maxed out.

29. brrunopt

Posts: 742; Member since: Aug 15, 2013

1st - having dedicated memory doesn't mean it will have better performance than any other card with shared memory. 2nd - the HD 5200 has 128MB of eDRAM, which is faster than GDDR5 AFAIK, and only high-end cards (GTX...) have GDDR5; the GT 740M has DDR3. The HD 5200 performs between the GT 740M and GT 750M, that's a fact...
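For what it's worth, whether eDRAM is "faster than GDDR5" depends entirely on configuration, since peak bandwidth is just transfer rate times bus width. A quick C++ sketch with commonly cited figures; the clocks and bus widths below are assumptions for illustration (Intel quoted roughly 50 GB/s in each direction for Crystalwell's eDRAM):

#include <cstdio>

// Peak memory bandwidth in GB/s: (MT/s * bus width in bits) / 8 / 1000.
double bandwidthGBps(double megaTransfersPerSec, int busBits) {
    return megaTransfersPerSec * busBits / 8.0 / 1000.0;
}

int main() {
    // Assumed configurations, for illustration only.
    printf("Dual-channel DDR3-1600 (shared): %.1f GB/s\n",
           bandwidthGBps(1600, 128));          // ~25.6 GB/s
    printf("GT 740M, 64-bit DDR3-1800:       %.1f GB/s\n",
           bandwidthGBps(1800, 64));           // ~14.4 GB/s
    printf("128-bit GDDR5 @ 5000 MT/s:       %.1f GB/s\n",
           bandwidthGBps(5000, 128));          // ~80 GB/s
    // Crystalwell eDRAM: Intel cited ~50 GB/s in each direction.
    return 0;
}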

43. joaolx

Posts: 364; Member since: Aug 16, 2011

http://www.game-debate.com/gpu/index.php?gid=1783&gid2=1247&compare=iris-pro-graphics-5200-mobile-vs-geforce-gt-740m If it has dedicated memory, it can actually mean it performs better. Since Intel HD uses system RAM, the CPU won't get the full memory capacity and bandwidth, because the GPU is also using it. Now the interesting thing is that the HD 5200 has 128MB of eDRAM, which is basically nothing really, but it allows it to perform equally to or better than the GT 640M. The GT 740M is equal to the DDR3 model of the GT 650M. Both are two years old. But let's not forget that the HD 5200 is Iris Pro and can only be found on high-end CPUs. I have a high-end quad-core i7 and I actually don't have it, because it's reserved for the very highest end due to the eDRAM. Without the eDRAM it's basically an HD 5100.

50. brrunopt

Posts: 742; Member since: Aug 15, 2013

You can compare specifications all you want; what matters is the results. And the results don't lie: on average the HD 5200 not only performs better than the GT 640M but also above the GT 740M (in some games above the GT 750M). And the HD 5100 (no dedicated memory) performs on a level with a GT 710. http://www.notebookcheck.net/Computer-Games-on-Laptop-Graphic-Cards.13849.0.html There are laptops with an i7 QM + HD 5200 for around the same price as those with an i7 QM + GT 740M / GT 750M.

44. joaolx

Posts: 364; Member since: Aug 16, 2011

The main effect of an integrated GPU is that it won't allow the CPU to use the whole RAM and its bandwidth, decreasing its maximum performance in games. The Intel HD 5200 uses 128MB of eDRAM, but it's still not enough to beat it.

49. darkskoliro

Posts: 1092; Member since: May 07, 2012

I'm actually satisfied with the HD 4400 :) It runs my Dota 2 smoothly on medium graphics, which was better than I expected hahaha.

4. JMartin22

Posts: 2325; Member since: Apr 30, 2013

The fact is that it's the ARMv7 architecture, which is outdated. Cortex-A15 is yesteryear's technology. The ARMv8-based Cortex-A53/A57 will be much more power efficient and powerful, respectively. The 192 graphics cores are marketing bells and whistles. The fact is, for all of its power, it's still outdated.

8. Retro-touch unregistered

The SoC will come in two versions: one with a quad-core (4+1) Cortex-A15, and one that leverages two of NVIDIA's own 64-bit ARMv8 Denver CPU cores.

9. yowanvista

Posts: 341; Member since: Sep 20, 2011

Cortex-A53 and higher are primarily aimed at high performance systems like servers. In fact, the Cortex-A53 has the same pipeline length as the Cortex-A7, and the real-time version of ARMv8 (ARMv8-R) is still 32-bit on any Cortex-A5x core. ARMv7 is far from outdated, nor is the revised Cortex-A15 (r3p3) found in the K1. The architectural changes brought by AArch64 are only incremental additions and will in no way produce a significantly more power efficient core design. The Cortex-A5x series was designed with performance in mind. Please enlighten me, how are those 192 shaders specifically outdated? You do realize that the Tegra K1 has 1 SMX comprising 192 shaders? Think of it as a downscaled version of a Kepler-based GPU like the GTX 780, which has 12 SMX. How is that 'outdated'?
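The SMX arithmetic checks out: Kepler packs 192 CUDA cores per SMX, so the GTX 780's 12 SMX units yield 2,304 cores while the K1's single SMX yields 192. On a machine with the CUDA toolkit installed, the multiprocessor count can be read straight from the runtime, as in this minimal C++ sketch (the cores-per-SM lookup is my own partial shorthand, not an official API):

#include <cstdio>
#include <cuda_runtime.h>

// Rough cores-per-multiprocessor lookup for the architectures discussed
// here (partial table, my own shorthand - not part of the CUDA API).
int coresPerSM(int major) {
    if (major == 2) return 32;   // Fermi GF100-class (GF104-class has 48)
    if (major == 3) return 192;  // Kepler, incl. the Tegra K1's single SMX
    return -1;                   // unknown to this sketch
}

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        printf("No CUDA device found.\n");
        return 1;
    }
    int perSM = coresPerSM(prop.major);
    printf("%s: %d SM(s)", prop.name, prop.multiProcessorCount);
    if (perSM > 0)
        printf(" x %d cores = %d CUDA cores", perSM,
               perSM * prop.multiProcessorCount);
    printf("\n");  // GTX 780: 12 x 192 = 2304; Tegra K1: 1 x 192 = 192
    return 0;
}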

11. Reality_Check

Posts: 277; Member since: Aug 15, 2013

Some people like to think they know everything. Ignorance.. SMH

28. AppleHateBoy unregistered

IMHO, it is outdated. The GTX 680 was launched way back in March of 2012. The Kepler architecture is almost 2 years old now, and 2 years is a long time given the pace at which the mobile industry is growing. Fortunately, it's at least newer than the PowerVR Series 6 GPU powering the Apple A7, which was announced in Feb of 2011...

33. brrunopt

Posts: 742; Member since: Aug 15, 2013

Exactly, 2 years old and still more powerful than any other... Tegra 6 (Parker) is supposed to come with Maxwell; it will be another big jump in performance.

41. yowanvista

Posts: 341; Member since: Sep 20, 2011

Kepler isn't geared towards mobile implementations; it's a desktop-class architecture. Nvidia just shrank it down in the K1, but it remains a top-notch performer. The latest desktop cards still use the refreshed Kepler, so no, it's far from outdated.

37. Extradite

Posts: 316; Member since: Dec 30, 2013

Pathetic that OEMs are still not providing us consumers with bigger 3500mAh batteries, so devices could run A15 CPUs with the equivalent of a 3000mAh battery to spare. Those A15s are powerful CPUs.

45. joaolx

Posts: 364; Member since: Aug 16, 2011

Nvidia has a good history of marketing, but by the time an actual tablet/smartphone comes out with the new Tegra, it is already outdated.

5. livyatan

Posts: 867; Member since: Jun 19, 2013

Actually the Adreno 330 is faster than the PowerVR G6430 in the iPad Air. It proves that in the other offscreen benchmarks, as well as in the GFLOPS numbers - 130 vs 115. And I'm getting 26fps in T-Rex on my Note 3. As for the K1, it is leagues above, with 340+ GFLOPS. I said long ago that it would embarrass Intel, and I got laughed at. And now, there you have it! It outperforms the HD 4400 at one third the TDP! What a revolutionary achievement... and other GPU solutions from ARM vendors are going to do the same - the newly announced high-end PowerVR Series6XT, and the Mali-T760. Both are over 320 GFLOPS at a mere 2W TDP.
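The GFLOPS figures being traded here come from a simple formula: shaders x clock x 2, since each shader can retire a fused multiply-add (two floating-point ops) per cycle. A short C++ sketch with the K1's numbers; the ~950 MHz clock is the commonly reported figure and treated as an assumption here:

#include <cstdio>

// Peak single-precision GFLOPS = shaders * clock (GHz) * 2 (FMA = 2 ops).
double peakGflops(int shaders, double clockGHz) {
    return shaders * clockGHz * 2.0;
}

int main() {
    // Assumed clock, for illustration.
    printf("Tegra K1: %.0f GFLOPS\n", peakGflops(192, 0.95));  // ~365
    // Dividing by TDP gives the efficiency being argued about, e.g.
    // ~320 GFLOPS at a 2 W GPU TDP:
    printf("~320 GFLOPS / 2 W = %.0f GFLOPS per watt\n", 320.0 / 2.0);
    return 0;
}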

21. brrunopt

Posts: 742; Member since: Aug 15, 2013

Most of the TDP is for the CPU, which is a lot more powerful.

30. AppleHateBoy unregistered

The 2 W TDP is for the GPU. The TDP of the SoC is in the 4-5 W range.

32. brrunopt

Posts: 742; Member since: Aug 15, 2013

I'm talking about the TDP of the Intel SoC; most of it is for the CPU (i5-4200U), not the GPU (HD 4400).

46. joaolx

Posts: 364; Member since: Aug 16, 2011

Knowing the disadvantages of integrated GPUs, and knowing that the HD 4400 is an incredibly low-end GPU, it doesn't surprise me at all. Not to mention that Intel HD graphics weren't designed for gaming.

6. yowanvista

Posts: 341; Member since: Sep 20, 2011

"192-core GPU", actually its 192 unified shaders. Shader ≠ cores

26. renz4

Posts: 319; Member since: Aug 10, 2013

And Nvidia has been calling their stream processors (SPs) CUDA cores for years (since Fermi, I think).

