RISC Vs. CISC In Mobile Computing
eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"
Completely pointless (Score:5, Interesting)
Intel have a poor track record... (Score:3, Interesting)
Unlike the old RISC workstation manufacturers, which relied on a small market of high-margin machines, the current embedded CPU manufacturers operate in a huge, cut-throat world where they need to push the price/performance ratio as high as possible to maintain a lead. I think this market will be somewhat tougher to crack than the workstation market, since Intel does not have what it had before: an advantage in volume shipped.
CISC is alive and well and so is RISC (Score:5, Interesting)
Both refer to the instruction sets, not the internal workings. x86 was CISC in 1978 and it's still CISC in 2008. ARM was RISC in 1988 and is still RISC in 2008. AMD64 is a borderline case.
People get confused by the way current x86s break instructions apart into micro-ops. That doesn't make them RISC; that just makes them microcoded, which is how most CISC processors work. RISC processors rarely use anything like microcode, and when they do, it is looked upon as very unRISCy.
Today, the internals of RISC and CISC processors are so complex that instruction-set processing is barely a shim at the front end. There are still some advantages to RISC, but they are dwarfed by out-of-order execution, vector extensions, branch prediction, and the other enormously complex features of modern processors.
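To make the micro-op point concrete, here is a minimal sketch in C of the kind of cracking a modern x86 front end performs: a single register-memory ALU instruction is expanded into load/ALU/store micro-ops. The struct layout, register numbers, and "temporary register" here are invented purely for illustration; they do not correspond to any real decoder or microcode format.

```c
#include <stdio.h>

/* A hypothetical CISC-style macro-op: "add [addr], reg" in one instruction. */
typedef struct {
    const char   *mnemonic;
    int           reg;   /* architectural register operand */
    unsigned long addr;  /* memory operand                  */
} macro_op;

/* The micro-ops a front end might crack it into (purely illustrative). */
typedef enum { UOP_LOAD, UOP_ALU_ADD, UOP_STORE } uop_kind;

typedef struct {
    uop_kind      kind;
    int           dst, src;  /* internal register numbers (-1 = unused) */
    unsigned long addr;      /* only meaningful for loads/stores        */
} micro_op;

/* Expand one register-memory add into three RISC-like micro-ops:
 * a load into a temporary, an ALU op on registers, and a store back. */
static int crack(const macro_op *m, micro_op out[3]) {
    int tmp = 64;  /* pretend internal temporary register */
    out[0] = (micro_op){ UOP_LOAD,    tmp, -1,     m->addr };
    out[1] = (micro_op){ UOP_ALU_ADD, tmp, m->reg, 0       };
    out[2] = (micro_op){ UOP_STORE,   -1,  tmp,    m->addr };
    return 3;
}

int main(void) {
    macro_op add_mem = { "add [0x1000], r3", 3, 0x1000 };
    micro_op uops[3];
    int n = crack(&add_mem, uops);

    printf("%s cracks into %d micro-ops:\n", add_mem.mnemonic, n);
    for (int i = 0; i < n; i++)
        printf("  uop %d: kind=%d dst=%d src=%d addr=0x%lx\n",
               i, (int)uops[i].kind, uops[i].dst, uops[i].src, uops[i].addr);
    return 0;
}
```

The ISA stays CISC either way; the cracking is an implementation detail hidden behind the decoder.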
Re:ARM is RISC in name only (Score:3, Interesting)
Ultimately, though, I think "RISC" is still a pretty valid description. Sure, the complexity of some instructions strains the ideals behind the RISC philosophy, but ARM certainly has what I consider the most important aspects of a RISC ISA:
1) Fixed instruction width. Makes superscalar instruction fetch and decode a breeze (see the decode sketch after this list).
2) Pure load/store design. Instructions are -either- a load, a store, or an operation on registers. This makes dispatch and scheduling simpler.
These I consider critical to being "RISC", and they're also solid and easily definable characteristics. "Complexity of instructions" is subjective. Personally, if I had to draw a hard and fast line, I'd say any ISA that can be completely implemented without microcode, and still follows the above two rules, qualifies as not being "too complex". I mean, it's relative, right? And since some x86 instructions get decoded into hundreds of micro-ops, I don't think a mere conjoining of two ALU operations is all that bad.
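To illustrate point 1, here is a minimal decode sketch in C, assuming an invented 32-bit fixed-width encoding (the field layout is made up, roughly MIPS/ARM-flavored, and is not any real ISA). Because every instruction is the same size and the fields sit at fixed bit positions, each slot in a fetch group can be decoded independently, with no serial length-finding pass of the kind a variable-length x86 decoder needs.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical fixed-width, 32-bit encoding (not a real ISA):
 *   bits 31..26  opcode
 *   bits 25..21  rd
 *   bits 20..16  rs1
 *   bits 15..11  rs2
 */
typedef struct { unsigned op, rd, rs1, rs2; } insn;

static insn decode(uint32_t word) {
    insn d;
    d.op  = (word >> 26) & 0x3F;
    d.rd  = (word >> 21) & 0x1F;
    d.rs1 = (word >> 16) & 0x1F;
    d.rs2 = (word >> 11) & 0x1F;
    return d;
}

int main(void) {
    /* Four consecutive words from a fetch group: each instruction starts at
     * a fixed offset, so all four slots could be decoded in parallel. */
    uint32_t fetch_group[4] = { 0x08651800, 0x0C231000, 0x10862000, 0x04421800 };
    for (int i = 0; i < 4; i++) {
        insn d = decode(fetch_group[i]);
        printf("slot %d: op=%u rd=%u rs1=%u rs2=%u\n", i, d.op, d.rd, d.rs1, d.rs2);
    }
    return 0;
}
```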
I think people are missing the point (Score:3, Interesting)
People (and not just us) should be asking whether end customers would find it useful to be able to run their PC apps on their mobile devices. Current mobile devices typically have PowerPoint and Word readers (with maybe some editing capabilities), but would users find it worthwhile to be able to load apps onto their mobile devices from the same CDs/DVDs that were used to load those apps onto their PCs?
If end customers do find this attractive, would they be willing to pay the extra money for the chips (the Atom looks to require considerably more gates than a comparable ARM) as well as for the extra memory (Flash, RAM & Disk) that would be required to support PC OSes and apps? Even if end customers found this approach attractive, I think OEMs are going to have a long, hard think about whether or not they want to port their radio code to the x86 with Windows/Linux when they already have infrastructures built up with the processors and tools they are currently using.
The whole thing doesn't really make sense to me, because if Intel wanted to be in the MCU business, then why did they spin that business off to Marvell (a sale which also included the mobile WiFi technology)?
The whole thing seems like a significant gamble: that customers will want products built from this chip, and that Intel and OEMs will be willing to recreate for the x86 Atom the infrastructure they already have for existing chips (i.e. ARM).
myke
Re:RISC on a PC doesn't make sense anymore (Score:3, Interesting)
Until recently, there were still speed advantages to using a four-core, multi-processor G5 for some operations over the 3.0 GHz eight-core Xeon Mac Pros, because of VMX.
It is somewhat ironic that the Core architecture chips now used by Apple in all but the Mac Pros clock in below the 3 GHz "wall" that the G5 never overcame, but the Intel name seems to have gone a long way in assuaging consumer doubts about buying a Mac.
Re:CISC is dead (Score:3, Interesting)
The new x86 Intel CPUs don't really have that philosophy. They use many techniques pioneered on RISC CPUs, but they haven't disposed of the instruction set. Compilers are still stuck trying to optimize at the CISC level. The microcode engine is still there in some sense, converting high-level x86 code into internal micro-operations. Intel keeps CISC working by pouring huge amounts of resources into the design.
Of course Intel is in a bind here. They can't dump x86; it's their bread, butter, and dessert. They have to make CISC fast because their enormous customer base demands it. They're forever stuck trying to keep an instruction set from 1978 going strong, since they can't just toss it all out and make something simpler.
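As a small illustration of "compilers are still stuck optimizing at the CISC level", consider the trivial function below. The assembly in the comments is the typical shape of the output, not the literal output of any particular compiler version: on x86-64 the compiler can use a single memory-operand add, while on a load/store RISC it must emit separate load, add, and store instructions. In both cases the hardware's internal micro-ops remain invisible to the compiler.

```c
#include <stdio.h>

/* The compiler only ever sees the architectural (CISC) instruction set.
 * For a read-modify-write like this, an x86-64 compiler will typically emit
 * one memory-operand instruction, roughly:
 *
 *     addl $1, (%rdi)
 *
 * while a load/store RISC compiler must emit something like:
 *
 *     ldr w1, [x0]
 *     add w1, w1, #1
 *     str w1, [x0]
 *
 * A modern x86 then cracks its single architectural instruction back into
 * load/add/store micro-ops internally, but the compiler cannot target those
 * micro-ops directly. */
void increment(int *counter) {
    (*counter)++;
}

int main(void) {
    int hits = 41;
    increment(&hits);
    printf("hits = %d\n", hits);
    return 0;
}
```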
Re:What the Heck? (Score:3, Interesting)
Re:There is no RISC vs CISC any more (Score:3, Interesting)
Re:Completely pointless (Score:4, Interesting)
Instruction compression, not complexity... (Score:1, Interesting)
The concept of RISC never made much sense to me. (Score:3, Interesting)
In other words, you are designing your instruction set to your hardware.
Now, assuming that you are going to have close to infinite investment in speeding up the CPU, it seems that if you are going to fix an instruction set across that development time, you want the smallest and most powerful instruction set you can get.
That way, in the same cycle, instead of executing one simple instruction you are executing one powerful one (that does, say, 5x more than the simple one).
Now, at first the more powerful instruction will take more time than the simple one, but as the silicon improves, the hardware designers will come up with a way to make it take only 2x as long as the simple one. Then less.
I guess I mean that you will get more relative benefit from tweaking the performance of a hard instruction than of an easy one.
Also, at some point the memory-to-CPU channel will be the limit.
I'd kinda like to see Intel take on an instruction set designed for the compiler rather than the CPU (like Java Bytecode). Bytecode tends to be MUCH smaller--and a quad-core system that directly executes bytecode, once fully optimized, should blow away anything we have now in terms of overall speed.
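As a rough sketch of why bytecode is so much denser, here is a toy stack-machine interpreter in C. The opcode set is invented (it is not real Java bytecode), but it shows the key property: with operands on an implicit stack, most instructions need no register fields and fit in a single byte.

```c
#include <stdio.h>

/* A toy stack-machine bytecode, loosely in the spirit of JVM bytecode
 * (the opcodes are invented, not the real JVM set). */
enum { OP_HALT  = 0x00,
       OP_PUSH  = 0x01,   /* followed by a 1-byte immediate */
       OP_ADD   = 0x02,   /* pop two, push the sum          */
       OP_MUL   = 0x03,   /* pop two, push the product      */
       OP_PRINT = 0x04 }; /* pop and print                  */

static void run(const unsigned char *code) {
    int stack[64], sp = 0;
    for (size_t pc = 0; ; pc++) {
        switch (code[pc]) {
        case OP_PUSH:  stack[sp++] = code[++pc];          break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];  break;
        case OP_PRINT: printf("%d\n", stack[--sp]);       break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* (2 + 3) * 10 encodes in ten bytes here, print and halt included.
     * On a 32-bit fixed-width RISC, just the two immediates, the add, the
     * third immediate, and the multiply would already be 5 instructions,
     * i.e. 20 bytes of code. */
    const unsigned char program[] = {
        OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 10, OP_MUL, OP_PRINT, OP_HALT
    };
    run(program);
    return 0;
}
```

Density is the easy part, of course; making such a machine fast in hardware is the hard part, which is why real designs tend to translate the bytecode rather than execute it directly.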
Looks like microcode, smells like microcode,... (Score:3, Interesting)
The distinction you seem to be trying to draw here is not very sound. Modern CPUs "translating instructions into hardware instructions" with a gate maze is essentially the same thing as pulling a wide microcode word from ROM whose bits directly control the logic units. In both cases you put some bits in to start the process off, and you get a larger number of bits out as a wide bus of signals, which are used to direct traffic inside the CPU. The picture only looks different because the names have changed.
Specifically, the different parts of each microcode instruction executed in parallel then, just as they do now, though out-of-order execution was much rarer (some DSPs had it, IIRC). This was not because microcode as it was then conceived couldn't handle it, but because the in-CPU hardware to support it wasn't there. There's no point going through gymnastics to feed your ALU if you've only got one and it's an order of magnitude slower than the circuitry that feeds it.
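For readers who have never seen one, here is a toy illustration in C of the "wide word whose bits directly control the logic units" picture. The field widths and the three-word microprogram are invented, not any real machine's microcode, but the structure is the point: independent fields drive independent units in parallel, and a next-address field sequences the ROM.

```c
#include <stdio.h>

/* A hypothetical horizontal microcode word: each field directly drives one
 * piece of the datapath, and independent fields act in parallel within a
 * single micro-cycle. Real microwords are far wider; this layout is made up. */
typedef struct {
    unsigned alu_op    : 4;  /* which ALU operation to perform       */
    unsigned reg_src_a : 5;  /* register file read port A            */
    unsigned reg_src_b : 5;  /* register file read port B            */
    unsigned reg_dst   : 5;  /* register file write port             */
    unsigned mem_read  : 1;  /* assert the memory read strobe        */
    unsigned mem_write : 1;  /* assert the memory write strobe       */
    unsigned next_addr : 8;  /* address of the next microword in ROM */
} microword;

/* "Decode" is just fanning the bits out to the units they control. */
static void drive_datapath(microword u) {
    printf("ALU op %u on r%u,r%u -> r%u | mem_read=%u mem_write=%u | next=%u\n",
           (unsigned)u.alu_op, (unsigned)u.reg_src_a, (unsigned)u.reg_src_b,
           (unsigned)u.reg_dst, (unsigned)u.mem_read, (unsigned)u.mem_write,
           (unsigned)u.next_addr);
}

int main(void) {
    /* A tiny microprogram ROM (load, add, store). A modern decoder's gate
     * maze produces the same kind of control word on the fly instead of
     * fetching it from ROM. */
    microword rom[] = {
        { .alu_op = 0, .reg_src_a = 1,  .mem_read = 1,   .reg_dst = 31, .next_addr = 1 },
        { .alu_op = 2, .reg_src_a = 31, .reg_src_b = 3,  .reg_dst = 31, .next_addr = 2 },
        { .alu_op = 0, .reg_src_a = 31, .mem_write = 1,  .next_addr = 0 },
    };
    for (int i = 0; i < 3; i++)
        drive_datapath(rom[i]);
    return 0;
}
```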
One of the biggest annoyances of staying in any one field for too long is having to watch some technology, following the logical path from conception to fruition, go through an endless series of renamings (AKA jargon upgrades) that add nothing but confusion and pomposity to the field.
--MarkusQ
Re:What the Heck? (Score:2, Interesting)
The interesting part of the article is about the process. Intel's domination has been built on its process, always a few steps ahead of the competition (maybe just a half step ahead of TSMC). Newer processes have always yielded faster, smaller, and cooler chips. Not anymore: 65nm didn't make chips use less power, and 45nm doesn't help either.
In a sense, one dimension of the playing field has become level for Intel and the foundries. And that's exactly the dimension in which embedded plays.