NVIDIA claims its Denver 64-bit ARM SoC for Android rivals PC performance

Earlier this year, we saw NVIDIA's next-gen "Denver" SoC post a spectacular 44,000 on AnTuTu. But that was before the chip was officially announced. On Monday, NVIDIA finally unveiled the 64-bit ARMv8 SoC for Android, a follow-up to its Tegra K1 chip. The new "Denver" SoC supposedly offers performance on par with some PCs.

"Denver" will be available in two pin-compatible versions, and both feature a new technology called Dynamic Code Optimization. This technology "optimizes frequently used software routines at runtime into dense, highly tuned microcode-equivalent routines. These are stored in a dedicated, 128MB main-memory-based optimization cache." This means that when you run certain apps, the optimized code is already ready and waiting. And the Dynamic Code Optimization will work on apps already written for the ARM platform, meaning no additional work for the developer.


NVIDIA has high hopes for the SoC, saying it will "rival some mainstream PC-class CPUs at significantly reduced power consumption." With that combination of speed and low power draw, "Denver" appears well suited to running games, creating content, and powering enterprise-centric apps on Android devices.


via: TheRegister


22 Comments

1. sriuslywtf

Posts: 297; Member since: Jul 09, 2013

Yeah.. Go Nvidia.. But this time make it available to many OEMs.

2. 0xFFFF

Posts: 3806; Member since: Apr 16, 2014

Nvidia's Denver is an interesting project. But Nvidia has taught the world well that the hype typically exceeds the reality of their chips. Hopefully they can get Denver into a reasonably priced developer tablet or phone and then into a variety of devices. It's great to see the Tegra K1 showing up in tablets and even in a Chromebook.

3. renz4

Posts: 319; Member since: Aug 10, 2013

Instead of a pure ARM CPU, isn't Denver more like a Transmeta CPU?

16. Augustine

Posts: 1043; Member since: Sep 28, 2013

From the sounds of it, Denver uses a trace cache much like the Intel P4 did. However, the article states a 128MB cache while the slide shows only 128KB, which is still unusually large for a typical instruction cache on most processors; that would make sense if it were a trace cache, since uops are longer and more numerous than instructions.
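A rough sanity check of that reasoning in Python; the instruction-cache size, micro-op width and expansion factor below are assumed round numbers, not published Denver figures.

    ICACHE_BYTES = 32 * 1024        # assumed typical L1 instruction cache
    ARM_INSN_BYTES = 4              # fixed-width ARMv8 instruction
    TRACE_CACHE_BYTES = 128 * 1024  # the 128KB figure from the slide
    UOP_BYTES = 8                   # assumed wider micro-op encoding
    UOPS_PER_INSN = 2               # assumed expansion during decode

    insns_in_icache = ICACHE_BYTES // ARM_INSN_BYTES                   # 8192
    insns_in_trace = TRACE_CACHE_BYTES // (UOP_BYTES * UOPS_PER_INSN)  # 8192
    print(insns_in_icache, insns_in_trace)

Under these assumptions, 128KB of micro-ops covers roughly the same window of code as a 32KB instruction cache, which is why a size that looks oversized for an i-cache is plausible for a trace cache.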

4. livyatan

Posts: 867; Member since: Jun 19, 2013

This looks like it could beat the chips out of the low-power Kaveri and Core i3 (because of far lower power consumption at the same performance level).

10. vincelongman

Posts: 5750; Member since: Feb 10, 2013

Intel is going to release Core M CPUs with a TDP of only 3-5W and claims devices will be able to be under 9 mm and fanless. For comparison, Nvidia's, Apple's and Qualcomm's ARM CPUs and Intel's Atom CPUs are about 1-2W in phones and 2-3W in tablets. So it's gonna be interesting to see how the A8, 64-bit K1 and next Exynos compare to Intel's Core M and Atom CPUs.

12. brrunopt

Posts: 742; Member since: Aug 15, 2013

Many ARM chips reach up to 5W in tablets too.

5. GreekGeek

Posts: 1276; Member since: Mar 22, 2014

But it will never make it into phones. I'm still skeptical about its throttling control though. Sure, it's great, but can it rival a Snapdragon when you use it for a couple of hours straight without slowing down?

9. vincelongman

Posts: 5750; Member since: Feb 10, 2013

The 32-bit quad-core K1 doesn't have any heating or battery issues: http://www.anandtech.com/show/8329/revisiting-shield-tablet-gaming-ux-and-battery-life
The 64-bit dual-core K1 will be even more efficient. OEMs probably aren't using the K1 right now because it just came out and is probably priced higher than the 805. Also, there's no built-in LTE (compared to the 801).

11. GreekGeek

Posts: 1276; Member since: Mar 22, 2014

No matter how powerful it is, if it can only do "short sprints" then it's useless.

13. vincelongman

Posts: 5750; Member since: Feb 10, 2013

Read the article. They did the T-Rex benchmark and capped the K1 to 30 fps to make the test fair; otherwise the K1 would be pushing nearly 60 fps while the S800/Exynos/Atom were only pushing ~25 fps. Even when pushing 30 fps, 5 fps higher than the others, it still lasted longer than the Note 3 (S800), Tab S 8.4 (Exynos) and MeMO Pad 7 (64-bit Atom). Disappointing they didn't have any 801 or A7 to compare with, but it's still impressive and shows the "short sprints" claim is totally false. Also, most users I've read say they get 3 hours of SOT in Trine 2 and over 5 hours of SOT in Android games.

19. Iodine

Posts: 1503; Member since: Jun 19, 2014

It doesn't have any heating or battery issues? That chip runs at 85 degrees centigrade! That is a temperature potentially harmful to the silicon itself. The Shield Tablet gets ~2.5 hours of battery life when running at full load... even an iPhone with its tiny battery is approaching 2 hours when running its GPU at full load. OEMs aren't using it because you can't put a 7W chip into a phone, even though 1440p phones would need that. The Snapdragon 805 doesn't have built-in LTE either.

21. brrunopt

Posts: 742; Member since: Aug 15, 2013

Pretty much every phone lasts longer than a tablet with the same hardware; don't compare phones with tablets.

22. vincelongman

Posts: 5750; Member since: Feb 10, 2013

Well, the K1 is over twice as fast as the A7/801, so of course if it's maxed out it won't last quite as long and might get slightly hotter. That's like complaining that a Toyota Corolla doing 50 km/h is more efficient than a Lamborghini doing 150 km/h, or complaining that iPhones/Androids don't have week-long battery life like old brick phones. The K1 is 5W as well, which is on par with the A7/801 in tablets. Also, read the article; again, it disputes what you're saying (temps stayed under 50C and it lasted longer than many other devices).

6. aayupanday

Posts: 582; Member since: Jun 28, 2012

This one's gonna be in the HTC-made Nexus 8/9.

17. nathan.carter

Posts: 416; Member since: Aug 11, 2014

also in the HTC made nexus ONE

7. nlbates66

Posts: 328; Member since: Aug 15, 2012

Haha, live in hope. Rival a few lower-end Atoms maybe, but decent chips?

8. Iodine

Posts: 1503; Member since: Jun 19, 2014

If this custom CPU turns out to be anything like the GPU story... no thanks. At the beginning of the year, the all-new Kepler GPU was pushed as a revolutionary design that crushes everything the competition has done. Eight months later, we got two tablets and one all-in-one computer. Why? The performance of Tegra is indeed amazing, but from what I know it runs at 7W at full load... so ~2.5x the performance with 2 to 3x higher power consumption compared to the competition? I don't think there is anything "revolutionary" or "industry leading" about that, as Nvidia claims of itself. And the competition is also moving ahead... Claiming the all-new Tegra is 3x faster and 1.5x more efficient than last year's Apple A7, while shipping K1 products so late that they will compete with the A8, which could be far more efficient than Tegra. But Nvidia doesn't want you to know that, and they will happily ignore the fact that they are behind until they can compare the competition to something they release a year or two later.

14. brrunopt

Posts: 742; Member since: Aug 15, 2013

" 2 to 3 x higher power compustion compared to competition ? " Not even close, other high end SOC's reach up to 5W (a few even more) The A7 is far behind in performance compared to the K1, and not even the A87 will reach is level..

18. Iodine

Posts: 1503; Member since: Jun 19, 2014

The iPhone 5s lasts almost 2 hours when running its GPU at full load, about 13% less than the Tegra K1 tablet, which has an over three times bigger battery. Extrapolating from the iPhone's battery size, the power consumption is about 3.12 W, and some of that is the display, which the test sets to 50% brightness. Safe to say the A7 draws significantly less than 3W when running at full GPU load. Even if we favor Tegra and take 3W for the A7, Tegra is still 2.3 times more power hungry and about 2.4 times faster in off-screen tests. So I am asking again: where is the revolution?
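For reference, that extrapolation can be reproduced in Python from two inputs; the battery capacity is an assumed figure for the iPhone 5s (~1560 mAh at 3.8 V), and the runtime reads the quoted "almost 2 hours" as 1.91 h.

    battery_wh = 5.96  # assumed iPhone 5s battery: ~1560 mAh at 3.8 V
    runtime_h = 1.91   # "almost 2 hours" at full GPU load

    avg_power_w = battery_wh / runtime_h
    print(round(avg_power_w, 2))  # ~3.12 W average whole-device draw

Since part of that draw is the display at 50% brightness, the A7 itself would sit well under 3 W in this test, which is the commenter's point.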

20. brrunopt

Posts: 742; Member since: Aug 15, 2013

Don't compare tablets to phones. The A7 in the 5s drops to less than 1 GHz after a few seconds, and the GPU probably has some throttling too. In the iPad Air, the A7 can draw up to 7W, dropping to 6W after 1 minute due to thermal throttling: http://images.anandtech.com/reviews/tablets/apple/ipadair/maxpower2sm.png

15. yowanvista

Posts: 341; Member since: Sep 20, 2011

The usual PR BS. RISC processors will never outclass CISC on an architectural basis. Current x86-64 processors may not be power efficient compared to ARM, but that's not the whole equation; there's a lot more to it than TDP. The cache argument is moot: Nvidia can argue that caches speed up RISC CPUs, but the same can be said of CISC CPUs. You get a bigger speed improvement from adding a cache to CISC than to RISC, because a cache of the same size covers more of the high-density code that CISC provides.

