RISC Vs. CISC In Mobile Computing 126
eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"
CISC is dead (Score:5, Insightful)
What the Heck? (Score:5, Insightful)
The author's real point appears to be x86 vs. other embedded architectures. Even without reading the article (which I did), that one isn't hard to answer: there is no need for x86 code on a mobile platform. The hardware is going to be different from a PC's, the interface is going to be different, and the usage of the device is going to be different. Providing x86 compatibility thus offers few, if any, real advantages over an ARM or other mobile chip.
If Intel's Atom takes off, it will be on the merits of the processor and not on its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in desktop PCs across the world.
ARM is RISC in name only (Score:2, Insightful)
I've had to work with the ARM ISA in the past (I was studying its implementation as a soft core on an FPGA), and I can tell you it doesn't follow the RISC philosophy well, if at all.
One very non-RISC thing ARM did was fold a shift into every arithmetic instruction. That's right: there are no dedicated shift instructions. When you need a shift, you encode it as part of a move or an add. In effect, every add, and, or, sub, etc. is actually an add+shift, and+shift, or+shift, etc. This is the opposite of the RISC philosophy, and it significantly complicates the hardware, since a variable shifter has to sit on the ALU's critical path.
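To make the point concrete, here is a toy Python model of ARM's "flexible second operand": every data-processing operation can run its second source through the barrel shifter first, so a plain shift is just a MOV with a shifted operand. This is an illustrative sketch only; the function names are invented, and real ARM encodings also handle rotates, register-specified shift amounts, carry-out flags, and so on.

```python
# Toy model of ARM's shifted second operand (illustrative only).
# Every ALU op takes an optional barrel-shift on its second input,
# which is why ARM needs no standalone shift instructions.

MASK32 = 0xFFFFFFFF

def shifted(value, shift_type="LSL", amount=0):
    """Apply a 32-bit barrel shift to an operand before the ALU sees it."""
    v = value & MASK32
    if shift_type == "LSL":                  # logical shift left
        return (v << amount) & MASK32
    if shift_type == "LSR":                  # logical shift right
        return v >> amount
    if shift_type == "ASR":                  # arithmetic shift right
        sign = v >> 31
        for _ in range(amount):
            v = (v >> 1) | (sign << 31)
        return v
    raise ValueError(f"unknown shift type: {shift_type}")

def add(rn, rm, shift_type="LSL", amount=0):
    """ADD rd, rn, rm, <shift> #amount -- the shift rides along for free."""
    return (rn + shifted(rm, shift_type, amount)) & MASK32

def mov(rm, shift_type="LSL", amount=0):
    """MOV rd, rm, <shift> #amount -- how ARM spells a plain shift."""
    return shifted(rm, shift_type, amount)

print(add(100, 5, "LSL", 2))   # ADD r0, r1, r2, LSL #2 -> 100 + (5 << 2) = 120
print(mov(1, "LSL", 4))        # MOV r0, r1, LSL #4     -> 16
```

Note that because `shifted` sits in front of every ALU operation, the hardware equivalent (the barrel shifter) is always on the path from register file to ALU — exactly the critical-path cost described above.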
Other non-RISC things ARM did include the Java instruction set extensions, the Thumb instruction set extensions (which further reduce code size), the vector and media instructions, etc.
I think calling ARM "RISC" is a marketing decision, made for historical reasons; it doesn't have much to do with the technical reality, IMO. Jon Stokes would have done better to frame it as ARM vs. x86 instead of RISC vs. CISC, which is an outdated framing from the '80s and '90s.
Re:CISC is dead (Score:1, Insightful)
"Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture [Burroughs B5000] was a good idea.
Just as an aside, to give you an interesting benchmark -- on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore's law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there's approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.
The myth that it doesn't matter what your processor architecture is -- that Moore's law will take care of you -- is totally false." --Alan Kay
I'm not saying (nor is Alan, I suspect) that RISC is better, or even that RISC versus CISC matters, but architecture certainly does. If we didn't have x86 compatibility as a goal, do you think your CPU would look anything like the Core2Duo? Have you ever built a large system where initial architectural assumptions did *not* significantly affect the final performance?
Re:Completely pointless (Score:4, Insightful)
Apple got rid of PowerPC because Motorola and IBM had no incentive to innovate and develop competitive processors in the mid-range; RISC was most worthwhile in the high-end big iron IBM machines using POWER and the low end embedded market served by Motorola/Freescale.
Re:I think people are missing the point (Score:3, Insightful)
Almost all application code written today is written in a portable manner. Custom, processor-specific assembly shows up only in certain performance-critical code (ffmpeg, anyone?). This is one of the reasons Apple was able to move from PPC to x86 so easily.
Spotting those who RTFA'd (Score:5, Insightful)
Read the article. Jon Stokes makes that point, but he also makes the point that in embedded processors the ISA does matter, because the transistor budget is much, much smaller than for a modern desktop CPU. It may come to pass in a few generations of die shrinks that we arrive back at the current desktop situation of ISAs being irrelevant, but for the moment, in the embedded space, it matters that you have to give up a few million transistors to buffering, chopping up, and reissuing instructions, compared to just reading and running them.
Remember, this is Jon Stokes we're talking about: he's the guy that taught most Slashdotters what they know about CISC and RISC as it is.
Re:What the Heck? (Score:2, Insightful)
A guy can dream can't he?
Re:CISC is dead (Score:4, Insightful)
The defining characteristic of CISC is that it assumes that the fetch part of the fetch/execute cycle is expensive. Therefore, instructions are designed to do as much as possible so you need to use as few as possible.
The defining characteristic of RISC is pipelining. RISC assumes that fetches are cheap (because of caches) and thus higher instruction throughput is the goal.
The KEY difference between RISC and CISC isn't the number of instructions or how "complex" they are.
RISC instructions are fixed-size (usually 32 bits). CISC instructions tend to vary in size, with extra words for immediate data and other trimmings.
CISC has lots of addressing modes, RISC tends to have very few.
CISC allows memory access with most instructions. Most RISC instructions operate only on registers.
CISC has few registers. RISC has many registers.
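The memory-access contrast above is easy to see in miniature. The sketch below models a hypothetical CISC-style instruction that read-modify-writes memory in one step, against the three register-only, load/store steps a RISC-style machine would need for the same work. The mini-ISA, mnemonics, and register names are invented for illustration.

```python
# Toy contrast: CISC memory-operand instruction vs. RISC load/store sequence.
# Hypothetical mini-machine; not any real ISA's semantics.

MASK32 = 0xFFFFFFFF
mem = {0x1000: 7}            # one word of memory
regs = {"r1": 5, "r2": 0}    # r1 holds the addend, r2 is a scratch register

def cisc_add_mem(addr, src):
    """CISC style: ADD [addr], src -- one instruction touches memory directly."""
    mem[addr] = (mem[addr] + regs[src]) & MASK32

def risc_add_mem(addr, src, scratch):
    """RISC style: the same work as three register-only instructions."""
    regs[scratch] = mem[addr]                                # LW  scratch, (addr)
    regs[scratch] = (regs[scratch] + regs[src]) & MASK32     # ADD scratch, scratch, src
    mem[addr] = regs[scratch]                                # SW  scratch, (addr)

cisc_add_mem(0x1000, "r1")        # mem[0x1000]: 7 -> 12 in one instruction
risc_add_mem(0x1000, "r1", "r2")  # mem[0x1000]: 12 -> 17 in three instructions
```

The CISC version needs fewer instruction fetches (the original design goal), while each RISC step is trivially simple to decode and pipeline — which is the whole trade-off in one example.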
Arguing about whether CISC or RISC is faster is moot. Core 2 isn't fast because it's "CISC" or "RISC", it's fast because it's a very well designed architecture. The fact is, designing ANY competitive CPU today is extraordinarily difficult. RISC made a difference in the early 90s when CISC designs were microcoded and RISC could be pipelined. But most performance CPUs today are vastly different internally.
Re:What the Heck? (Score:3, Insightful)
That's effectively what we've been seeing with microprocessors since they were invented. The moment that improvements in lithography shrink the die size, chip designers immediately start thinking about what they can do with all that extra space. So they start cramming in rather spacey features like FPUs, microcode engines, out of order engines, superscalar execution, SIMD cores, ever-larger L2 caches, 64bit support, so on and so forth. You'd be amazed how much chip designers cram into these processors. In some cases, the number of pins on the chip is actually becoming more of a limitation than the silicon area! (Each pin that's wired into the package significantly increases the cost to manufacture. It's bloody HARD to match a silicon wire of 45nm to a trace on the chip packaging.)
You might find these images to be of interest:
A simple "map" of the Core Duo [intel.com]
X-Ray of the Core 2 Duo chip [hardwarelogic.com]
Can you spot all four cores? [arstechnica.com]
Nehalem, Intel's next architecture to replace the Core Duo line [arstechnica.com] (This chip is designed for Intel's 45nm process.)
An abstract look at the Nehalem design [wikipedia.org]
Detailed map of VIA's Isaiah processor [arstechnica.com]
Photos that really show off the incredibly small size of these chips. [intel.com]