Wow, that CISCO fella has little knowledge of how architectures work... How the friggin' hell he got to that position blows my mind in a lot of ways...
Comparing different ISAs is literally comparing two foods of a completely different nature, milk and fruit, for example. You can't say that milk will keep you alive while neglecting fruit. In this case, x86 serves its purpose for a certain type of load, whereas the ARM ISAs cover another part (low power envelopes are currently their strongest point). Both aim to "calculate" stuff and produce results on a screen for you, but comparing them directly without context is, at least for me, a dumb thing to do.
To answer how they compare... you have to go into the metal: current ARM designs (Samsung's Exynos series, Apple's A series and Qualcomm's Snapdragon series, for example) can do much more than an Athlon XP or a Pentium III. Not because of one simple catchy phrase like "because they are old", but because of the number of transistors you can pack into today's designs (the "nanometer" race), plus dedicated hardware for the stuff you know your design is bad at emulating (video decoding, for example). Software evolution helps a lot too: packing more into the hardware lets new software depend less on "emulating" things through the raw processing power of the CPU or GPU. This is a little more complex to explain, but x86, being a CISC approach, packs more potential for certain number-crunching schemes than ARM, a RISC approach. Think of what you usually hear called "floating point operations" or "streamed instructions". ARM, being RISC, can't compete with a "fixed pipe" approach built to solve one specific need the way a CISC design can (in very simple terms), but both approaches can crunch numbers and produce the exact same results. I won't go into "what if ARM goes to the same power envelope as x86", because I'm not even 100% sure which is better. I believe you have to pick the right tool for the job at hand, and that implies a case-by-case analysis.
Now, compare what's inside the Samsung Galaxy S4 to your run-of-the-mill i5 desktop, for example. There's no competition there. The i5 floors the SGS4 in raw power (transistor count is higher, design power is higher, all the hardware/software accompanying the CPU is stronger), but you can't put the i5 PC in your pocket. It's a trade-off for going portable, no surprises there. The OS matters too. He mentioned that Microsoft failed to "sell" Windows 8... Well, I bet neither Android nor iOS has even HALF the features of Win XP or Win 2000. He's comparing a light OS to a full-blown desktop-designed part. I do agree that 90% of regular people are happy with what Android or iOS offers, but from an operative point of view, there's not even a comparison to be made. In the particular case of Android (built on a Linux kernel), it's like comparing it to Fedora or Gentoo.
Anyway, what the CISCO dude didn't tell you is that when you want to go low power and high performance, you will always find intersection points with old tech; but comparing current to current, the higher power-consuming, higher transistor-count parts will prevail and be better, as a rule of thumb. The same goes for OSes: to get hold of the new goodies you have to update the parts of the OS that don't play well with all the old stuff in the code. Think of the jump from Win 98 to Win XP, or from the Linux 2.x kernel to 3.x, and so on.
Hope this helps the discussion, sounds like a fun thread to read later on, haha.
Cheers!