Nvidia Tegra K1 smokes the iPad Air and Snapdragon 800 in graphics benchmarks
1. tech2 (Posts: 3347; Member since: 26 Oct 2012)
Any word on when it's hitting the conveyor belt, or when the earliest device is launching?
Don't let us down in the battery department this time, Nvidia! There's a reason Exynos uses the big.LITTLE architecture with its A15s. They're too power hungry.
19. brrunopt (Posts: 742; Member since: 15 Aug 2013)
That's why Nvidia uses the 4+1 configuration...
22. Shubham412302 (Posts: 430; Member since: 09 Nov 2011)
The Adreno 420 doesn't have a match for the K1, as it's only said to be 40% faster than the Adreno 330, while the K1 is claimed to be many times more powerful than the A7 on Unreal Engine 4 technology.
35. PhenomFaz (Posts: 1235; Member since: 26 Sep 2012)
Yup, battery life is gonna be crucial... but in the meantime I've got one word for Nvidia...
2. darkskoliro (Posts: 1076; Member since: 07 May 2012)
What the djsjaoka, I just bought an ultrabook with an HD 4400 and I'm getting outperformed by a mobile device? -.-
3. hurrycanger (Posts: 1556; Member since: 01 Dec 2013)
I know, right? xD
I have a desktop with an HD 2000. Guess how awesome I feel...
13. hipnotika (Posts: 353; Member since: 06 Mar 2013)
Because Intel HD Graphics is the worst GPU on the planet.
20. brrunopt (Posts: 742; Member since: 15 Aug 2013)
So you haven't seen the HD 5200, which performs better than the GT 740M.
The HD 4400 is not a powerful GPU; it wasn't designed to be a powerful GPU, just powerful enough for normal use...
23. yowanvista (Posts: 340; Member since: 20 Sep 2011)
Any dedicated Kepler card beats Intel's IGP. The Iris Pro 5200 relies on system RAM for video memory, which is way slower than the GDDR3/5 found on any modern card. In theory the GT 740M outperforms the 5200, but both will run games at less than 10 fps maxed out.
29. brrunopt (Posts: 742; Member since: 15 Aug 2013)
1st: having dedicated memory doesn't mean a card will perform better than any other with shared memory.
2nd: the HD 5200 has 128 MB of eDRAM, which is faster than GDDR5 AFAIK, and only high-end cards (GTX...) have GDDR5; the GT 740M has DDR3.
The HD 5200 performs between the GT 740M and GT 750M, that's a fact...
43. joaolx (Posts: 364; Member since: 16 Aug 2011)
If it has dedicated memory, it actually can perform better. Since Intel HD uses system RAM, the CPU can't use the full memory and bandwidth, because the GPU is also using it. Now, the interesting thing is that the HD 5200's 128 MB is basically nothing, yet it allows the chip to perform equal to or better than the GT 640M. The GT 740M is about equal to the DDR3 model of the GT 650M; both are two years old.
But let's not forget that the HD 5200 is Iris Pro and can only be found on high-end CPUs. I have a high-end quad-core i7 and I actually don't have it, because the eDRAM reserves it for only the very highest end. Without the eDRAM, it's basically an HD 5100.
50. brrunopt (Posts: 742; Member since: 15 Aug 2013)
You can compare specifications all you want; what matters is the results. And the results don't lie: on average the HD 5200 not only performs better than the GT 640M but also above the GT 740M (in some games, above the GT 750M).
And the HD 5100 (no dedicated memory) performs on the level of a GT 710.
There are laptops with an i7 QM + HD 5200 for around the same price as ones with an i7 QM + GT 740M / GT 750M.
44. joaolx (Posts: 364; Member since: 16 Aug 2011)
The main effect the integrated GPU has is that it won't let the CPU use the whole RAM and its bandwidth, decreasing its maximum performance in games. The Intel HD 5200 uses 128 MB of eDRAM, but that's not enough to fully overcome the handicap.
49. darkskoliro (Posts: 1076; Member since: 07 May 2012)
I'm actually satisfied with the HD 4400 :) It runs my Dota 2 smoothly on medium graphics, which was better than I expected, hahaha.
4. JMartin22 (Posts: 1892; Member since: 30 Apr 2013)
The fact is that it's built on the ARMv7 architecture, which is outdated. Cortex-A15 is yesteryear's technology; the ARMv8-based Cortex-A53 and A57 will be much more power efficient and powerful, respectively. The 192 graphics cores are marketing bells and whistles. For all of its power, it's still outdated.
8. Retro-touch (Posts: 272; Member since: 24 Oct 2011)
The SoC will come in two versions: one with a quad-core (4+1) Cortex-A15, and one that leverages two of NVIDIA's own 64-bit ARMv8 Denver CPU cores.
9. yowanvista (Posts: 340; Member since: 20 Sep 2011)
Cortex-A57 and higher are primarily aimed at high-performance systems like servers. In fact, the Cortex-A53 has the same pipeline length as the Cortex-A7, and the real-time version of ARMv8 (ARMv8-R) is still 32-bit. ARMv7 is far from outdated, and neither is the revised Cortex-A15 (r3p3) found in the K1. The architectural changes brought by AArch64 are incremental additions and will in no way produce a significantly more power-efficient core design. The Cortex-A5x series was designed with performance in mind.
Please enlighten me: how are those 192 shaders specifically outdated? You do realize that the Tegra K1 has 1 SMX comprising 192 shaders? Think of it as a scaled-down version of a Kepler-based GPU like the GTX 780, which has 12 SMX. How is that "outdated"?
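A back-of-the-envelope way to see the scaling argument: Kepler's peak FP32 throughput is shaders × 2 FLOPs (one fused multiply-add) per clock × clock speed. A minimal sketch; the clock speeds and the helper name `kepler_gflops` are illustrative assumptions, not official figures:

```python
# Peak FP32 throughput for a Kepler part:
# SMX count x 192 shaders x 2 FLOPs (FMA) per clock x clock in GHz = GFLOPS.
# Clock speeds below are assumptions for illustration only.
def kepler_gflops(smx_count, clock_ghz, shaders_per_smx=192):
    return smx_count * shaders_per_smx * 2 * clock_ghz

print(round(kepler_gflops(1, 0.95)))   # Tegra K1: 1 SMX at an assumed ~950 MHz
print(round(kepler_gflops(12, 0.9)))   # GTX 780: 12 SMX at an assumed ~900 MHz
```

Same building block, just one SMX instead of twelve, which is why "downscaled Kepler" rather than "outdated" is the fairer description.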
11. Reality_Check (Posts: 277; Member since: 15 Aug 2013)
Some people like to think they know everything. Ignorance.. SMH
28. AppleHateBoy (unregistered)
IMHO, it is outdated. The GTX 680 launched way back in March 2012. The Kepler architecture is almost two years old now, and two years is a long time at the pace the mobile industry moves.
Fortunately, it's at least newer than the PowerVR Series 6 GPU powering the Apple A7, which was announced in Feb 2011...
33. brrunopt (Posts: 742; Member since: 15 Aug 2013)
Exactly: two years old and still more powerful than anything else...
Tegra 6 (Parker) is supposed to come with Maxwell; it will be another big jump in performance.
41. yowanvista (Posts: 340; Member since: 20 Sep 2011)
Kepler isn't geared towards mobile implementations; it's a desktop-class architecture. Nvidia just shrank it down for the K1, but it remains a top-notch performer. The latest desktop cards still use the refreshed Kepler, so no, it's far from outdated.
37. Extradite (banned) (Posts: 316; Member since: 30 Dec 2013)
Pathetic OEMs still aren't giving us consumers bigger 3,500 mAh batteries, so a device running power-hungry A15 CPUs would still last like one with a 3,000 mAh battery. Those A15s are powerful CPUs.
45. joaolx (Posts: 364; Member since: 16 Aug 2011)
Nvidia has a good history of marketing, but by the time an actual tablet/smartphone comes out with the touted Tegra, it's already outdated.
5. livyatan (Posts: 867; Member since: 19 Jun 2013)
Actually the Adreno 330 is faster than the PowerVR G6430 in the iPad Air.
The other off-screen benchmarks prove it, as does the GFLOPS figure: 130 vs 115.
And I'm getting 26 fps in T-Rex on my Note 3.
As for the K1, it is leagues above, with 340+ GFLOPS.
I said long ago that it would embarrass Intel, and I got laughed at.
And now, there you have it!
It outperforms the HD 4400 at one third the TDP!
What a revolutionary achievement... and other GPU solutions from ARM vendors are going to do the same: the newly announced high-end PowerVR Series 6XT, and the Mali-T760.
Both are over 320 GFLOPS at a mere 2 W TDP.
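Taking those claims at face value, the efficiency is easy to put a number on. A quick sketch; these are the vendors' claimed specs as quoted above, not measurements:

```python
# GFLOPS-per-watt from the claimed figures above (vendor claims, not measurements).
claimed = [("new high-end PowerVR", 320, 2.0), ("Mali-T760", 320, 2.0)]
for name, gflops, tdp_watts in claimed:
    print(f"{name}: {gflops / tdp_watts:.0f} GFLOPS/W")
```

160 GFLOPS/W, if the 2 W TDP figure holds, which is exactly the kind of number that should be re-checked on shipping hardware.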
21. brrunopt (Posts: 742; Member since: 15 Aug 2013)
Most of that TDP is for the CPU, which is a lot more powerful.
30. AppleHateBoy (unregistered)
The 2 W TDP is for the GPU. The TDP of the SoC is in the 4-5 W range.
32. brrunopt (Posts: 742; Member since: 15 Aug 2013)
I'm talking about the TDP of the Intel SoC; most of it is for the CPU (i5-4200U), not the GPU (HD 4400).
46. joaolx (Posts: 364; Member since: 16 Aug 2011)
Knowing the disadvantages of integrated GPUs, and that the HD 4400 is an incredibly low-end part, that doesn't surprise me at all. Not to mention the Intel HD series wasn't designed for gaming.
6. yowanvista (Posts: 340; Member since: 20 Sep 2011)
"192-core GPU", actually its 192 unified shaders. Shader ≠ cores
26. renz4 (Posts: 317; Member since: 10 Aug 2013)
And Nvidia has been calling their stream processors (SPs) "CUDA cores" for years (since Fermi, I think).
7. aayupanday (Posts: 574; Member since: 28 Jun 2012)
The K1's gonna let you play games with next-gen console quality graphics while making an omelette on the back of your device.
38. Extradite (banned) (Posts: 316; Member since: 30 Dec 2013)
He surely is a weak, fragile guy if he's sh*tting his pants because it will get hot. Be a man, not a pussycat...
40. timepilot84 (Posts: 113; Member since: 16 Aug 2012)
At 5 W, I wouldn't be eating any eggs you cook on it. Not unless you like E. coli.
10. galanoth (Posts: 425; Member since: 26 Nov 2011)
That's a reference tablet, though.
The clock speed could be cranked ultra high with a heatsink, for all we know.
15. MrPhilo (Posts: 48; Member since: 25 Feb 2013)
The reference device is a 7" tablet, pretty much a Tegra Note 7 but with 4 GB of RAM, a 1080p screen, and the Tegra K1 (A15 version).
12. Lyngdoh (Posts: 236; Member since: 06 Sep 2012)
Were these benchmarks run at a standard resolution? Otherwise this is not a fair comparison.
14. Berzerk000 (Posts: 4272; Member since: 26 Jun 2011)
Yes, 1080p Offscreen. The device resolution is not a factor.
18. Diazene (Posts: 129; Member since: 01 May 2013)
What about the relative die size or power consumption?
24. joaolx (Posts: 364; Member since: 16 Aug 2011)
I thought Nvidia said it would be 3x faster than the A7. Guess not, but it's still impressive.
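For what it's worth, the GFLOPS figures quoted elsewhere in the thread (340+ for the K1, 115 for the A7's GPU) do land near 3x on paper. A quick sanity check using those claimed numbers, not benchmark results:

```python
# Ratio of claimed peak GFLOPS (figures quoted in this thread, not measurements).
k1_gflops, a7_gflops = 340, 115
ratio = k1_gflops / a7_gflops
print(f"{ratio:.2f}x")  # ~2.96x, so roughly "3x" on paper
```

Peak GFLOPS and benchmark fps are different things, which is probably where the gap between the claim and the chart comes from.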
27. renz4 (Posts: 317; Member since: 10 Aug 2013)
I think you can do the math from the numbers above. Qualcomm claims the Adreno 420 will be 40% faster than the current Adreno 330.
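Doing that math with the numbers quoted in this thread (130 GFLOPS for the Adreno 330, a claimed 40% uplift, 340+ for the K1), all of which are claims rather than benchmarks:

```python
# Projecting the Adreno 420 from the thread's claimed figures.
adreno_330 = 130                      # GFLOPS, as quoted above
adreno_420 = adreno_330 * 1.4         # Qualcomm's claimed 40% uplift
k1 = 340                              # Nvidia's claimed figure
print(adreno_420)                     # 182.0
print(round(k1 / adreno_420, 2))      # 1.87 -> the K1 would still lead on paper
```

So even granting Qualcomm the full 40%, the K1's claimed number is nearly double the projected 420.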
31. TylerGrunter (Posts: 1514; Member since: 16 Feb 2012)
That graph comes from notebookcheck.net. I'm not blaming Daniel P.; the source page he refers to doesn't give the original one. But it would be nice if the real source were added.
And by the way: that's not a real benchmark, but what Nvidia claims it can do. I would wait for real implementations before starting too much hype.
47. joaolx (Posts: 364; Member since: 16 Aug 2011)
That's Nvidia right there. When it comes out, just like all the other Tegras, it will be outperformed by Qualcomm and everyone else...
34. theoak (Posts: 324; Member since: 16 Nov 2011)
Clock speed needs to be considered too. If Nvidia is running 1 GHz faster, it's kind of hard to compare.
36. npaladin2000 (Posts: 165; Member since: 06 Nov 2011)
Did they include on-board WiFi and LTE radios? If not, Qualcomm is still going to own the US phone market.
39. downphoenix (Posts: 3155; Member since: 19 Jun 2010)
Fantastic specs, but they mean squat if they end up like the Tegra 4's: not in anything except some cheap Chinese crap.
42. fteoOpty64 (Posts: 8; Member since: 19 Mar 2013)
Agreed that NV has to DELIVER this time! They got burned on the T4 and T4i and cannot make that mistake again, or else no one is ever going to trust them in mobile. At least they had the trust of MS, HP, and Toshiba for T4 design wins.
For the K1, NV has to quickly capture the super-tablet and super-monitor markets by mid-year. It appears they *can* do it this time. We shall see.
PS: An NV Shield console with the K1 will be a hoot! Let's hope they put a 6.5-inch FHD screen on it, and a camera on the Shield to see the player (i.e. face tracking).
48. joaolx (Posts: 364; Member since: 16 Aug 2011)
They got burned with the Tegra 2/3/4/4i, not just the last two. They claim they have the best SoC, only to be outperformed by the time any smartphone or tablet actually ships with it.