RISC Vs. CISC In Mobile Computing

eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"
  • CISC is dead (Score:5, Insightful)

    by jmv ( 93421 ) on Monday May 19, 2008 @07:56PM (#23469032) Homepage
    There are no CISC CPUs anymore. There are RISC CPUs with RISC instruction sets (e.g. ARM) and there are RISC CPUs with CISC instruction sets (e.g. x86). The cores are mostly the same, except that the chips with CISC instruction sets need to do a little more work in the decoder. That takes a few extra transistors and a bit more power, but it's not a huge deal for PCs and servers. For embedded applications, of course, it does make a difference, and there it makes sense to have more "specialised" architectures (from microcontrollers to DSPs, ARM and all kinds of hybrids).
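    For the curious, a toy sketch of that decoder split, with a made-up micro-op format (no real microarchitecture looks exactly like this): a single CISC-style "add register to memory" instruction gets cracked into three RISC-like micro-ops, which is essentially the extra decoder work described above.

        /* Toy model: cracking a CISC-style memory-operand add into
         * RISC-like micro-ops. Types and encodings are invented for
         * illustration; they don't match any real core. */
        #include <stdio.h>

        typedef enum { UOP_LOAD, UOP_ADD, UOP_STORE } uop_kind;

        typedef struct {
            uop_kind kind;
            int dst, src, addr;  /* register numbers; -1 = unused */
        } uop;

        /* Decode "add [r_addr], r_src" (load, add, store back) into
         * three simple micro-ops; returns how many were emitted. */
        static int decode_add_mem(int r_addr, int r_src, uop out[3]) {
            out[0] = (uop){ UOP_LOAD,  0, -1, r_addr };  /* tmp = mem[r_addr] */
            out[1] = (uop){ UOP_ADD,   0, r_src, -1 };   /* tmp += r_src */
            out[2] = (uop){ UOP_STORE, -1, 0, r_addr };  /* mem[r_addr] = tmp */
            return 3;
        }

        int main(void) {
            uop uops[3];
            printf("1 CISC instruction -> %d micro-ops\n",
                   decode_add_mem(5, 2, uops));
            return 0;
        }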
  • What the Heck? (Score:5, Insightful)

    by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Monday May 19, 2008 @08:02PM (#23469086) Homepage Journal
    RISC vs. CISC? What is this, the early '90s? There are no RISC chips anymore, except as product lines that were originally developed with the RISC methodology in mind. Similarly, true CISC doesn't exist either. Microcode has done wonders in turning complex instructions into a series of simpler instructions like one would find on a RISC processor.

    The author's real point appears to be: x86 vs. other embedded architectures. Even without looking at the article (which, for the record, I did), it's not hard to answer that one: there is no need for x86 code on a mobile platform. The hardware is going to be different from a PC's, the interface is going to be different, and the usage of the device is going to be different. Providing x86 compatibility thus offers few, if any, real advantages over an ARM or another mobile chip.

    If Intel's Atom takes off, it will be on the merits of the processor and not its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in desktop PCs across the world.
  • by RecessionCone ( 1062552 ) on Monday May 19, 2008 @08:19PM (#23469218)
    The RISC philosophy was to have every instruction be as simple as possible, so that the execution of each instruction could be as efficient as possible. The idea was that even though you might have to execute more instructions to get the job done, the speed you gained from the simple instruction set would compensate.

    I've had to work with the ARM ISA in the past (I was studying its implementation as a soft core on an FPGA), and I can tell you it doesn't follow the RISC philosophy well, if at all.

    One very non-RISC thing ARM did was fold a shift into every arithmetic instruction. That's right: there are no dedicated shift instructions. When you need a shift, you encode it as part of a move or an add. In effect, every add, and, or, sub, etc. is actually an add+shift, and+shift, or+shift, etc. This is the opposite of the RISC philosophy, and it significantly complicates the hardware, since a variable shifter has to sit on the ALU's critical path (there's a short sketch of this at the end of this comment).

    Other non-RISC things ARM did include the Java instruction set extensions, the Thumb instruction set extensions (which further reduce code size), the vector and media instruction set extensions, etc.

    I think calling ARM "RISC" is a marketing decision only, done for historical reasons. It doesn't have much to do with the technical reality, IMO. Jon Stokes would have done better to say ARM vs. x86 instead of RISC vs. CISC, which is an outdated idea from the '80s and '90s.
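    To make the folded-shift point concrete, a minimal sketch (the assembly in the comments is the kind of thing a compiler for classic ARM typically emits; exact registers will vary):

        /* ARM folds the shift into the data-processing instruction,
         * so this whole expression is typically ONE instruction:
         *     add r0, r0, r1, lsl #2
         * A "purer" RISC like MIPS needs a separate shift first:
         *     sll  $t0, $t1, 2
         *     addu $v0, $a0, $t0
         */
        unsigned scaled_add(unsigned acc, unsigned index) {
            return acc + (index << 2);  /* e.g. indexing a word array */
        }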

  • Re:CISC is dead (Score:1, Insightful)

    by Anonymous Coward on Monday May 19, 2008 @08:55PM (#23469494)
    CPU architecture not a huge deal?

    "Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture [Burroughs B5000] was a good idea.

    Just as an aside, to give you an interesting benchmark -- on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore's law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there's approximately a factor of 1,000 in efficiency that has been lost by bad CPU architectures.

    The myth that it doesn't matter what your processor architecture is -- that Moore's law will take care of you -- is totally false." --Alan Kay

    I'm not saying (nor is Alan, I suspect) that RISC is better, or even that RISC versus CISC matters, but architecture certainly does. If we didn't have x86 compatibility as a goal, do you think your CPU would look anything like the Core 2 Duo? Have you ever built a large system where initial architectural assumptions did *not* significantly affect the final performance?
  • by vought ( 160908 ) on Monday May 19, 2008 @09:29PM (#23469762)
    Not in disagreement, but Apple didn't ditch PowerPC because RISC offered no performance advantage; indeed, the G5 at lower clock speeds marginally outperformed the first Intel-based Macs at the same price points.

    Apple got rid of PowerPC because Motorola and IBM had no incentive to innovate and develop competitive processors in the mid-range; RISC was most worthwhile in the high-end big-iron IBM machines using POWER and in the low-end embedded market served by Motorola/Freescale.

  • Huh? It takes more than just the same processor to be able to run the same apps. You gotta have the same operating system. And running Vista on a cell phone doesn't sound like a good idea to me (the mouse is a poor interface for a cell phone).

    Almost all application code written today is written in some portable manner. Custom assembly specific to one processor shows up only in certain performance-critical spots (ffmpeg, anyone?), as sketched below. This is one of the reasons Apple was able to move from PPC to x86 so easily.
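    A sketch of that pattern, for what it's worth: the SSE2/NEON intrinsics below are the real ones, but the function itself is just an illustration of how the processor-specific part stays small and optional while the fallback stays portable.

        #include <stddef.h>
        #if defined(__SSE2__)
        #include <emmintrin.h>      /* x86 SSE2 intrinsics */
        #elif defined(__ARM_NEON)
        #include <arm_neon.h>       /* ARM NEON intrinsics */
        #endif

        void add_arrays(float *dst, const float *a, const float *b, size_t n) {
            size_t i = 0;
        #if defined(__SSE2__)
            for (; i + 4 <= n; i += 4)  /* x86-only fast path */
                _mm_storeu_ps(dst + i, _mm_add_ps(_mm_loadu_ps(a + i),
                                                  _mm_loadu_ps(b + i)));
        #elif defined(__ARM_NEON)
            for (; i + 4 <= n; i += 4)  /* ARM-only fast path */
                vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i),
                                             vld1q_f32(b + i)));
        #endif
            for (; i < n; i++)          /* portable C handles the rest */
                dst[i] = a[i] + b[i];
        }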
  • by Jacques Chester ( 151652 ) on Monday May 19, 2008 @11:46PM (#23470748)
    Every comment moderated to 4+ makes the same "RISC|CISC is dead" point about how x86 chips break that massive, warty ISA down into a series of RISC-like micro-ops for internal consumption, and how this has been the case since at least the Pentium Pro.

    Read the article. Jon Stokes makes that point, but he also makes the point that in embedded processors it does matter, because the transistor budget is much, much smaller than for a modern desktop CPU. A few more generations of die shrinks may leave the embedded space where the desktop is today, with the ISA largely irrelevant; but for the moment it matters that you have to give up a few million transistors to buffering, chopping up, and reissuing instructions instead of just reading and running them.

    Remember, this is Jon Stokes we're talking about: he's the guy that taught most Slashdotters what they know about CISC and RISC as it is.
  • Re:What the Heck? (Score:2, Insightful)

    by benhattman ( 1258918 ) on Monday May 19, 2008 @11:47PM (#23470762)

    If Intel's Atom takes off, it will be on the merits of the processor and not its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in desktop PCs across the world.
    I for one think this might be an excellent migration path for the industry. Let the mobile industry settle on a non-x86 processor. Then develop all the necessary software for that processor (lightweight OS, web browser, etc.). Then produce an amped-up version of that chip for laptops and desktops. Voila: we bootstrap the software needed to sell a chip, and we end up with a significantly more efficient platform than anything we'll ever see with x86.

    A guy can dream, can't he?
  • Re:CISC is dead (Score:4, Insightful)

    by RzUpAnmsCwrds ( 262647 ) on Tuesday May 20, 2008 @12:24AM (#23470982)
    People don't get RISC, and they don't get CISC.

    The defining characteristic of CISC is that it assumes that the fetch part of the fetch/execute cycle is expensive. Therefore, instructions are designed to do as much as possible so you need to use as few as possible.

    The defining characteristic of RISC is pipelining. RISC assumes that fetches are cheap (because of caches) and thus higher instruction throughput is the goal.

    The KEY difference between RISC and CISC isn't the number of instructions or how "complex" they are. The real differences are these:

    RISC instructions are fixed-size (usually 32 bits; see the decode sketch at the end of this comment). CISC instructions tend to vary in size, with added words for immediate data and other trimmings.

    CISC has lots of addressing modes, RISC tends to have very few.

    CISC allows memory access with most instructions. Most RISC instructions operate only on registers.

    CISC has few registers. RISC has many registers.

    Arguing about whether CISC or RISC is faster is moot. The Core 2 isn't fast because it's "CISC" or "RISC"; it's fast because it's a very well designed architecture. The fact is, designing ANY competitive CPU today is extraordinarily difficult. RISC made a difference in the early '90s, when CISC designs were microcoded and RISC designs could be pipelined. But most high-performance CPUs today are vastly different internally.
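    The decode sketch promised above, with an invented field layout (not any real ISA): when every instruction is exactly 32 bits, the fields sit at fixed positions and the next PC is always pc + 4, so fetch and decode pipeline trivially.

        #include <stdint.h>

        typedef struct { unsigned op, rd, rn, rm; } decoded;

        /* Fixed 32-bit format: decode is a few constant shifts and
         * masks, and never depends on the previous instruction. */
        static decoded decode(uint32_t insn) {
            decoded d;
            d.op = (insn >> 26) & 0x3F;  /* opcode always in bits 31-26 */
            d.rd = (insn >> 21) & 0x1F;
            d.rn = (insn >> 16) & 0x1F;
            d.rm = (insn >> 11) & 0x1F;
            return d;
        }

        /* Contrast x86: an instruction is 1 to 15 bytes, so you don't
         * know where the next one starts until decode is under way. */
        static uint32_t next_pc(uint32_t pc) { return pc + 4; }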
  • Re:What the Heck? (Score:3, Insightful)

    by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Tuesday May 20, 2008 @10:58AM (#23475748) Homepage Journal

    Newer processes have always yielded faster, smaller, and cooler chips. Not anymore. 65nm didn't make chips use less power and 45nm doesn't help either.
    65nm and 45nm DID yield smaller and cooler chips. (On the smaller side, take a look at the Core Duo silicon sometime. It's amazing how much smaller it is than the Pentium 4 die!) There's just one catch: when you shrink the process and make the chip smaller and cooler, you also have the option of spending those gains on new features. E.g., if my power usage and silicon footprint are cut in half, I have the opportunity to add another core for the same power usage AND still get twice the yield from a silicon wafer as I got before. (A chip shrunk to half its linear size takes only a quarter of the area, so doubling the cores still leaves it at half the original footprint; a back-of-envelope version of this arithmetic closes out this comment.)

    That's effectively what we've been seeing with microprocessors since they were invented. The moment improvements in lithography shrink the die, chip designers start thinking about what they can do with all that extra space. So they cram in space-hungry features: FPUs, microcode engines, out-of-order engines, superscalar execution, SIMD cores, ever-larger L2 caches, 64-bit support, and so on. You'd be amazed how much chip designers pack into these processors. In some cases, the number of pins on the chip is actually becoming more of a limitation than the silicon area! (Each pin that's wired into the package significantly increases the cost to manufacture. It's bloody HARD to match a 45nm silicon wire to a trace on the chip packaging.)

    You might find these images to be of interest:

    A simple "map" of the Core Duo [intel.com]
    X-Ray of the Core 2 Duo chip [hardwarelogic.com]
    Can you spot all four cores? [arstechnica.com]
    Nehalem, Intel's next architecture to replace the Core Duo line [arstechnica.com] (This chip is designed with 32nm processes in mind.)
    An abstract look at the Nehalem design [wikipedia.org]
    Detailed map of VIA's Isaiah processor [arstechnica.com]
    Photos that really show off the incredibly small size of these chips. [intel.com]
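    And the back-of-envelope arithmetic promised above, under an idealized full-node shrink (real process nodes stopped tracking this perfectly): halving the linear feature size quarters the area per transistor, so the same die area holds roughly four times as many.

        #include <stdio.h>

        int main(void) {
            double nodes[] = { 90, 65, 45, 32 };  /* nm; illustrative nodes */
            for (int i = 1; i < 4; i++) {
                double s = nodes[i] / nodes[i - 1];   /* linear scale factor */
                printf("%gnm -> %gnm: area x%.2f, transistors per area x%.2f\n",
                       nodes[i - 1], nodes[i], s * s, 1.0 / (s * s));
            }
            return 0;
        }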
