
Qualcomm insider claims that Apple's 64-bit A7 CPU caused panic among chipmakers

Posted by Chris P.

Chipmaking industry top executives have been putting on a brave face ever since Apple outed its 64-bit A7 chip along with the iPhone 5s, but if a story shared by an alleged Qualcomm insider is to be believed, the in-house system-on-chip actually “set off panic” among silicon vendors.

Quoted by HubSpot, the insider claims that the A7 hit Qualcomm “in the gut”:

“Not just us, but everyone, really. We were slack-jawed, and stunned, and unprepared. It’s not that big a performance difference right now, since most current software won’t benefit. But in Spinal Tap terms it’s like, 32 more, and now everyone wants it,” says the insider. “Apple kicked everybody in the balls with this. It’s being downplayed, but it set off panic in the industry.”

That's despite the knee-jerk reaction of one Qualcomm executive, who labeled the chip a marketing gimmick and was promptly reassigned after Qualcomm issued a statement refuting his remark. More importantly, not long after the iPhone 5s debut, both Qualcomm and Samsung announced their own transition plans. Be that as it may, chipmakers apparently didn't initially view the transition to a 64-bit architecture with much interest, and reportedly thought it wasn't worth the R&D spending at the time.

“The roadmap for 64-bit was nowhere close to Apple’s, since no one thought it was that essential,” says the insider. “The evolution was going to be steady. Sure, it’s neat, it’s the future, but it’s not really essential for conditions now.”

Whether you decide to trust the alleged insider's story or not, there's no denying that the competition has jumped on the bandwagon almost instantaneously, and that, more than anything, should signal just how close to home the above claims really are.

source: HubSpot via Apple Insider


posted on 17 Dec 2013, 09:49 16

1. Ninetysix (Posts: 2690; Member since: 08 Oct 2012)

But that fandroid guy said that Apple doesn't innovate anymore. I'm confused :(

posted on 17 Dec 2013, 09:53 21

2. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

A7 chip is the best chip on the market hands down.

posted on 17 Dec 2013, 12:34 8

40. brrunopt (Posts: 742; Member since: 15 Aug 2013)

best is a very relative term...

as far as raw power goes, it's not the best...

posted on 17 Dec 2013, 12:41 3

41. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

You sure about that? Shall I bring Geekbench into this?

Ninetysix, bust out the links, please.

posted on 17 Dec 2013, 12:44 9

42. brrunopt (Posts: 742; Member since: 15 Aug 2013)


Nexus 5 - ~3300
iPhone 5s - ~2600



posted on 17 Dec 2013, 12:55 8

45. saurik (Posts: 86; Member since: 13 May 2013)

If you view the Nexus 5 entry that scored ~3300 on Geekbench, you'd see that it's clocked at 2.8GHz, whereas the official version ships at 2.3GHz, implying that the device used for the benchmark was overclocked.
Keeping in mind that the average consumer doesn't overclock his/her phone, the best way to find the "real" benchmark score is to look at the official Geekbench Android comparison chart (http://browser.primatelabs.com/android-benchmarks) where the Nexus 5 scores 2560.
Do your homework before posting, kid.

posted on 17 Dec 2013, 14:37 5

57. brrunopt (Posts: 742; Member since: 15 Aug 2013)

missed that, my bad..

still, the Nexus 5 at 2.3 GHz (more exactly 2265 MHz) scores up to 3000, above the 5s

posted on 17 Dec 2013, 15:29 3

63. brrunopt (Posts: 742; Member since: 15 Aug 2013)

and even in those "official" results the 5s scores 2523 vs 2805 for the Note 3, 2701 for the Sony Xperia Z1, ...

"Do your homework before posting, kid "

posted on 17 Dec 2013, 14:04 2

53. jacko1977 (Posts: 428; Member since: 11 Feb 2012)

Apple cheats, just got to find it

posted on 17 Dec 2013, 13:51 4

49. Dr.Phil (Posts: 1642; Member since: 14 Feb 2011)

While I agree with your assessment and agree that the A7 chip is "the best chip on the market", I must also point out that the way you word your comments makes it seem like you are an arrogant jerk. It's very counterproductive to having a good conversation that allows insight and opinions on the matter. There are several times that I will voice a majority or dissenting opinion, but I do it in a manner that promotes dialogue from another person who may or may not agree with me.

Also, when talking about benchmarks you have to take things with a grain of salt, especially benchmarks that are cross-platform like Geekbench. There is no way of knowing how much the benchmark relies on software utilization. There seems to be a pretty significant difference in scores when the software of the device is updated. Now, I am not denying that even when you throw out software utilization that Apple's A7 chip wouldn't still come out on top. What I am saying is that you can't just quote benchmarks like Geekbench to support your claims.

posted on 17 Dec 2013, 14:29 1

56. JakeLee (banned) (Posts: 1021; Member since: 02 Nov 2013)

The bare-metal benchmark routines don't rely much on the OS's system libraries and consist mostly of pure machine-level instructions. Therefore, it would be safe to say the benchmark results are rather accurate as long as they stay within the ARM family.

But I absolutely agree with you on what he sounds like.

posted on 17 Dec 2013, 14:58

60. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

You're still at it? Seriously, it's getting really sad to see someone as petty as you.

posted on 17 Dec 2013, 17:49

67. Dr.Phil (Posts: 1642; Member since: 14 Feb 2011)

Yes but the Geekbench benchmark has actually seen some pretty significant differences in device scores with software updates. You can look up the scores of the 5S before and after iOS7 and there was a big difference before they re-optimized it.

posted on 17 Dec 2013, 20:56 1

72. JakeLee (banned) (Posts: 1021; Member since: 02 Nov 2013)


Are you THE Dr.Phil btw?


We also have an "Obama" here which is of course a pun, but in your case, I'm not so sure considering the polite way you've been talking.

posted on 17 Dec 2013, 14:57 1

59. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

... What? Arrogant jerk, for giving Apple credit? Usually I always agree with you, but here I won't. Counterproductive? All anyone has to say is, "No, I believe this _______ from ______ is the best," and you can go from there.

Geekbench is the most accurate way to measure the power of a SoC, whether it's on Android or iOS to date. Until something new comes up, Geekbench seems to be the most accurate. I have my opinion, and you have yours.

Agree to disagree.

posted on 17 Dec 2013, 17:53 1

68. Dr.Phil (Posts: 1642; Member since: 14 Feb 2011)

I wasn't saying you were an arrogant jerk, I was saying that the way you are wording your comments makes it sound like you are one. I also wasn't stating that to make you feel bad about yourself, but rather to point out how you are coming across. I was trying to be helpful and hopefully save you from people who will see your comments as an attack and want to attack you directly back. I also said I agreed with your statement that the A7 chipset is one of the best chipsets you can find on the market right now.

Also, it's very debatable how accurate Geekbench is. As I noted in my previous comment, there was a significant difference between pre- and post-iOS 7 scores on the iPhone 5S. This suggests that the benchmark relies more heavily on software optimization than we think.

posted on 17 Dec 2013, 16:57

66. applesucker (Posts: 81; Member since: 29 Oct 2013)

Only when it runs on Android, because there the difference will be noticeable; unlike iOS, which already has consistently smooth performance.

posted on 17 Dec 2013, 21:24 1

73. JakeLee (banned) (Posts: 1021; Member since: 02 Nov 2013)

How about this?


Every new/updated app from Feb 1st on will automatically be a 64-bit one.
It's not that hard deducing that the new iPhone in 2014 will feature a 64-bit-only SoC omitting AArch32.

What does this mean? Increased design/space/cost/power efficiency, plus a design also feasible for servers without much of a change.

Can Qualcomm or Sammy mimic this? Not in five years.

posted on 18 Dec 2013, 22:34

79. grahaman27 (Posts: 361; Member since: 05 Apr 2013)

You're a moron. It does not mean that at all; apps must be updated simply to be compatible with iOS 7, it doesn't mean 64-bit support for everything. It's crazy that Apple needs developers to make two versions of one app just to support their new chip fully... Welcome to 2006, Apple.

posted on 19 Dec 2013, 01:34

80. JakeLee (banned) (Posts: 1021; Member since: 02 Nov 2013)

Ever heard of fat binary submission?

Use Xcode 5, target iOS 6 and above, and you automatically generate both 32- and 64-bit versions by default.

That's how Apple handled the transition on the Mac.

You don't have to prove your ignorance, since you are a moron by default.

posted on 19 Dec 2013, 03:22

81. Victor.H (Posts: 791; Member since: 27 May 2011)

And I will be putting the last sentence in a frame, definitely.

posted on 17 Dec 2013, 09:55 5

3. _Bone_ (Posts: 2155; Member since: 29 Oct 2012)

Not real innovation, just like icons and rounded corners; 64-bit has been around, first on mobile via Huawei. But it's a logical evolution iOS is taking early, and it might pave the way for real innovation.

posted on 17 Dec 2013, 09:58 12

4. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

A7 is one hell of a beast. Give credit to where it's due.

posted on 17 Dec 2013, 10:21 7

10. _Bone_ (Posts: 2155; Member since: 29 Oct 2012)

Huawei's K3v2 is the first 64bit mobile SoC. Give credit where it's due.

posted on 17 Dec 2013, 10:35 3

13. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

Please explain to me where I said the A7 was the first 64-bit SoC. I'll wait.

Bone, I would have expected you to be smarter than to try and question me.

posted on 17 Dec 2013, 10:43 5

17. androiphone20 (Posts: 1654; Member since: 10 Jul 2013)

New instruction set ARMv8, does that mean anything to you?

posted on 17 Dec 2013, 10:48 7

19. PapaSmurf (Posts: 10457; Member since: 14 May 2012)

No because he doesn't like Apple.

posted on 17 Dec 2013, 11:17 2

30. Finalflash (Posts: 3536; Member since: 23 Jul 2013)

But even the article points out the real reason for the panic: that "it is 32 more". Not the benefits, but the marketing gold that comes with consumers who don't know what 64-bit really is. The processor is good, that isn't the contention; it's that 64-bit is a marketing gimmick at best, and that is what the industry saw it as as well. Also, the instruction set doesn't do much either (as in, it didn't double the speed, more like 1-2%); it's the core components of the CPU die that made the difference in the gains shown in benchmarking.

posted on 17 Dec 2013, 12:53 2

44. JakeLee (banned) (Posts: 1021; Member since: 02 Nov 2013)

From your previous comments, I assume you are an app programmer proficient in Java.
There are so many things HLL programmers aren't aware of; Java programmers especially tend to know little about how CPUs work.
Download some AOSP code from codeaurora.org and look at bionic's math routines in particular. Even when both input and output are mostly 32-bit data, very often the intermediate results are 64-bit ones spanning two 32-bit registers. Just look how inefficient it is dealing with a single value spread over two registers.

Even more importantly, the 31 64-bit architectural registers on ARM64 are capable of filling the 64-byte cache line completely on every iteration, maximizing performance/power efficiency, while this kind of luxury is simply not possible with the 14 32-bit registers on ARM32. (You are lucky if you can fill 8 bytes.)
That boost in register capacity also lets even the world's most lackluster compilers, like GCC, generate halfway decent machine code: a hidden benefit.

The result? The A7 operates roughly 30% faster in 64-bit mode than in 32-bit mode without any kind of special 64-bit treatment, WHATEVER you are doing.
And it's just the beginning.

For comparison :
You may put a $1000+ graphics card into your Windows machine that's 20 times as fast as the integrated Intel one; it won't improve your Office-like applications' UX one bit.

30% isn't much? That's roughly the performance difference between the most expensive i7 and the cheapest i5. Just look at their prices.

Do not even think of refuting my arguments with Windows x64, though; that would just highlight your ignorance. The x86, being CISC, can do arithmetic directly on RAM without first loading the data into its architectural registers. On CISC machines, therefore, the ISA serves just as the frontend while the backend does the vast majority of the work. (Arithmetic on memory is taken over by the hidden physical registers.)
Therefore, it hardly matters whether the frontend operates in 32- or 64-bit mode on x86 machines.

There is also this kind of "backend" on ARM, but it rather plays a supportive role due to the RISC nature.

That's the difference.

posted on 17 Dec 2013, 11:08 7

28. saurik (Posts: 86; Member since: 13 May 2013)

No it's not. Here are two sources confirming that the K3V2 is not a 64-bit SoC and that the A7 is the first 64-bit mobile SoC.


Also, a quick look at your beloved Android benchmarks reveals that the K3V2 is 32-bit (1.5 GHz quad-core) and ranks well below the iPhone 5's dual-core A6, let alone the A7.

Give credit where it's due.
