Hardware

RISC Vs. CISC In Mobile Computing

eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"
  • Re:CISC is dead (Score:5, Informative)

    by RecessionCone ( 1062552 ) on Monday May 19, 2008 @08:22PM (#23469238)
    Actually, have you heard of micro-op and macro-op fusion? Intel is touting them as a big plus for their Core microarchitecture: basically, they take RISC internal instructions and fuse them into CISC internal instructions (micro-op fusion) and also take sets of CISC external instructions and fuse them into CISC internal instructions (macro-op fusion).

    So basically, things are so much more complicated these days that you can't even call x86 chips RISC CPUs with CISC instruction sets.

    We're in a post-RISC era.
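
A rough C sketch of the kind of compare-and-branch pattern that macro-op fusion targets; the cmp/jcc pair described in the comments is typical compiler output for this loop on x86, not a guaranteed encoding:

    #include <stddef.h>

    /* Sum an array with an explicit compare-and-branch loop.  On x86 a
     * compiler usually ends each iteration with a cmp followed by a
     * conditional jump; Core-family decoders can fuse that adjacent
     * cmp + jcc pair into a single internal macro-op, so the two
     * external instructions cost one decode slot instead of two. */
    long sum(const long *a, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i != n; ++i)   /* i != n -> cmp + jne */
            total += a[i];
        return total;
    }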

  • Re:CISC is dead (Score:3, Informative)

    by Anonymous Coward on Monday May 19, 2008 @08:25PM (#23469254)
    The CPUs in today's IBM mainframes are based on the POWER architecture. That makes them technically RISC processors. You're a bit behind the times.
  • by EmbeddedJanitor ( 597831 ) on Monday May 19, 2008 @08:32PM (#23469308)
    Mostly little 8-bitters (PIC and AVR), but there are many processors that tend towards the RISC end of the spectrum (ARM, MIPS, etc.) and clearly have RISC roots. ARM, MIPS, and the like dominate the mobile space because they switch fewer transistors to achieve the same function (one of the goals of RISC design) and thus use less power.

    The only real point of x86 is Windows compatibility. Linux runs fine on ARM and many other architectures. There are probably more ARM Linux systems than x86-based Linux systems (all those Linux cellphones run ARM).

    Apart from some very low-level stuff, modern code tends to be very CPU-agnostic.
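
As a small, hedged illustration of the "CPU-agnostic apart from low-level stuff" point: portable C with one toolchain-specific builtin guarded off; the portable fallback runs unchanged on ARM, MIPS, or x86.

    #include <stdint.h>

    /* Count set bits.  The fallback is plain ISA-agnostic C; only the
     * "low-level stuff" (a GCC/Clang builtin that the compiler lowers to
     * a popcount instruction where the target has one) needs a guard. */
    int popcount32(uint32_t x)
    {
    #if defined(__GNUC__) || defined(__clang__)
        return __builtin_popcount(x);      /* compiler/CPU-specific path */
    #else
        int n = 0;
        while (x) {                        /* portable path */
            x &= x - 1;                    /* clear lowest set bit */
            ++n;
        }
        return n;
    #endif
    }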

  • by m.dillon ( 147925 ) on Monday May 19, 2008 @08:42PM (#23469402) Homepage
    There's no distinction between the two any more, and there hasn't been for a long time. The whole point of RISC was to simplify the instruction format and pipeline.

    The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything; it doesn't even slow the chip down, because the chip is decoding from a wide cache line (or multiple wide cache lines) anyway.

    So what does that leave us with? A load-store instruction architecture versus a read-modify-write instruction architecture? Completely irrelevant now that all modern processors have write buffer pipelines. And, it turns out, you need to have an RMW-style instruction anyway, even if you are RISC, if you want to have any hope of operating in an SMP environment. And regardless of the distinction, CPU architectures already have to optimize across multiple instructions, so again the concept devolves into trivialities.

    Power savings are certainly a function of the design principles used in creating the architecture, but they have nothing whatsoever to do with the high-level concept of 'RISC' vs 'CISC'. Not any more.

    So what does that leave us with? Nothing.

    -Matt
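
A minimal C11 sketch of the RMW point above, assuming GCC/Clang-style lowering: the same atomic increment is a single locked read-modify-write instruction on x86, while a load-store RISC ISA typically expands it into a load-exclusive/store-exclusive retry loop.

    #include <stdatomic.h>

    atomic_long counter;

    /* One SMP-safe increment.
     * x86: typically a single RMW instruction (lock add / lock xadd).
     * Classic ARM/MIPS (load-store ISAs): typically an LL/SC loop that
     * retries until the store succeeds -- the RMW semantics are still
     * there, just spread across several instructions. */
    void bump(void)
    {
        atomic_fetch_add(&counter, 1);
    }
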
  • by LWATCDR ( 28044 ) on Monday May 19, 2008 @08:51PM (#23469466) Homepage Journal
    SPARC? POWER?
    Both of those are actually popular when it comes to big iron. Yes, Intel is it on the desktop, but for big honking servers it is just so-so. For small, low-power devices it is pretty lame. There is no reason why a small, light mobile device has to be an x86.
  • by Waffle Iron ( 339739 ) on Monday May 19, 2008 @09:08PM (#23469580)

    People get confused by the way current x86s break apart instructions into micro-ops. That doesn't make it RISC. That just makes it microcoded.

    That most certainly does not make it microcoded. Microcode is a set of words encoded in ROM memory that are read out one per clock, whose bits directly control the logic units of a processor. Microcode usually runs sequentially, in a fixed order, may contain subroutines, and is usually not very efficient.

    Modern CISC CPUs translate the incoming instructions into a different set of hardware instructions. These instructions are not coded in a ROM, and they can run independently, out of order and concurrently. They are much closer to RISC instructions than to any microcode.

    The x86 still contains real microcode to handle the stupid complex instructions from the 80286 era that nobody uses anymore. They usually take many clocks per instruction, and using them is not recommended.

  • RTFA much? (Score:4, Informative)

    by tepples ( 727027 ) <tepplesNO@SPAMgmail.com> on Monday May 19, 2008 @09:23PM (#23469714) Homepage Journal

    The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything
    Did you understand the article? Page 2 [arstechnica.com] is entirely about how the decoder on Atom isn't "such a tiny, isolated piece of the chip that it doesn't count for anything".

    And, it turns out, you need to have a RMW style instruction anyway, even if you are RISC, if you want to have any hope of operating in a SMP environment.
    But if only one instruction is an atomic swap, that means it doesn't need to be on the critical path, right?
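
To make the "one atomic swap is enough" idea concrete, here is a hedged C11 sketch of a test-and-set spinlock built from nothing but an atomic exchange; every other access is an ordinary load or store. Function names are illustrative.

    #include <stdatomic.h>
    #include <stdbool.h>

    /* Spinlock built solely from an atomic swap (exchange). */
    typedef struct { atomic_bool taken; } spinlock_t;

    void spin_lock(spinlock_t *l)
    {
        /* Swap in "true"; if the old value was already true, someone
         * else holds the lock, so keep spinning and retrying. */
        while (atomic_exchange(&l->taken, true))
            ;
    }

    void spin_unlock(spinlock_t *l)
    {
        atomic_store(&l->taken, false);   /* release the lock */
    }
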
  • Re:CISC is dead (Score:4, Informative)

    by hey! ( 33014 ) on Monday May 19, 2008 @09:33PM (#23469796) Homepage Journal

    ok but there's not tons of old main frames running still?


    No, there aren't lots of old mainframes still running. But there are probably more new mainframes running than back when computers were exclusively located in data centers. Back in the day, your chances of working directly with a mainframe, given that you worked with computers, were 1.0; now they're probably more like 0.001. But there are a lot more people working with computers.
  • Re:What the Heck? (Score:4, Informative)

    Weird. Half the responders disagreed with him and you didn't notice?

    RISC design was really, really attractive from an architectural standpoint. It simplified the hardware to such a great degree that it was completely worth the pain and suffering it put compiler writers through. With microcode, even stupid CISC architectures like x86 were able to run on a RISC CPU.

    But here's the rub: It is always slower to use multiple instructions to complete a task that could be completed in a single instruction with dedicated silicon.

    With that simple fact in mind, it didn't take long for CISC-style instructions to start reappearing in the silicon designs. Especially once the fab technologies improved enough to negate the speed advantages in early RISC chips. (e.g. Alpha seriously kicked ass back in the day.) Chip designers like Intel took note of what instructions were slowing things down and began adding them back into the silicon.

    Thus the bar moved. Rather than trying to keep the silicon clean, the next arms race began over who could put fancier vector instructions into their CPUs. Thus began the war over SIMD instructions. (Which, honestly, not that many people cared about. They are cool instructions, though, and can be blazingly fast when used appropriately.)

    An interesting trend you'll notice is that instructions take more or fewer clocks to execute between revisions of processors. (Especially with x86.) Part of this is definitely changes in the microcode and CPU design. But part of it is a re-evaluation of silicon usage. Some instructions which used to be fast thus become slow when they move to microcode, and some instructions that were slow become fast when they move to silicon.

    Rather interesting to watch in action. :-)
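
As a concrete taste of the SIMD instructions mentioned above, a hedged C sketch using SSE intrinsics (assuming an x86 target with SSE available); the scalar loop handles any leftover elements.

    #include <stddef.h>
    #include <xmmintrin.h>   /* SSE intrinsics */

    /* c[i] = a[i] + b[i], four floats per vector instruction. */
    void add_arrays(const float *a, const float *b, float *c, size_t n)
    {
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats  */
            __m128 vb = _mm_loadu_ps(b + i);
            _mm_storeu_ps(c + i, _mm_add_ps(va, vb));   /* 4 adds at once */
        }
        for (; i < n; ++i)                              /* scalar tail    */
            c[i] = a[i] + b[i];
    }
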
  • Re:CISC is dead (Score:3, Informative)

    by phantomfive ( 622387 ) on Monday May 19, 2008 @09:53PM (#23469952) Journal
    Don't know if you read the article, but the author goes into great detail about the advantages of RISC over CISC. While you are right that Intel has managed to play some tricks to get CISC running really fast, it has been at the cost of other things. Imagine if all that space on the die used for transistors to do microcode translation had been used for cache instead. Also, as you mention, it takes more power. This is extremely important in the embedded area, and is becoming more important in the server room as well.

    Some more advantages of RISC over CISC: it is easier to work with, giving designers more time to optimize other areas of the chip. AMD and Intel have spent a bundle of cash to get the old x86 to run decently.
    RISC is also easier for compiler writers. On the x86 there are so many instructions that the chip designers don't optimize all of them equally. If you want maximum efficiency, you need to use the correct instruction, and it may vary from chip to chip. Whereas with a RISC architecture, it's a lot easier to guess which instruction to use (there may be only one).

    There really is no advantage to CISC, other than the backwards compatibility of the x86 architecture.
  • by Anonymous Coward on Monday May 19, 2008 @10:46PM (#23470320)
    ""
    The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything, it doesn't even slow the chip down because the chip is decoding from a wide cache line (or multiple wide cache lines) anyway.
    ""

    The problem with your assumption is that it's _wrong_.

    It does cost something. The WHOLE ARTICLE explains in very good detail the type of overhead that goes into supporting x86 processors.

    The whole point of Atom is Intel's attempt to make the ancient CISC _instruction_set_ work on an embedded-style processor with the performance to handle multimedia and limited gaming.

    The overhead of CISC is the complex arrangement that takes the x86 ISA and translates it to the RISC-like core that Intel uses to do the actual processing.

    When you're dealing with a huge chip like a Xeon or Core 2 Duo, with a huge battery or connected directly to the wall, it doesn't matter. You're taking a chip that would use 80 watts TDP and going to 96.

    But with the ARM platform you not only have to make it so small that it can fit in your pocket, you also have to make the battery last at least 8-10 _hours_.

    This is a hell of a lot easier when you can deal with an instruction set that is designed specifically to be stuck in a tiny space.

    If you don't understand this, you know NOTHING about hardware or processors.
  • Re:CISC is dead (Score:3, Informative)

    by level_headed_midwest ( 888889 ) on Tuesday May 20, 2008 @12:18AM (#23470936)
    They tried to throw x86 out with the Itanium. That initially went over about as well as selling ice to Eskimos in December, but IA64 has started to get a little more traction in the huge-iron arena as of late. While it would be nice to be done with x86, IA64 isn't where it's at, as Intel owns the ISA licenses lock, stock, and barrel. This means it's back to the Bad Old Days, when chips cost a fortune and performance increases were small and infrequent. Also, IA64's EPIC model sucks on most code, as it's strictly in-order.
  • by Game_Ender ( 815505 ) on Tuesday May 20, 2008 @01:50AM (#23471520)
    Except for VBA in Microsoft Office. It's implemented in tens of thousands of lines of assembly (on both Windows and Mac) using specific knowledge of how the compiler lays out the virtual function tables of C++ classes. Even the x86 assembly makes calls into the Windows API, so it's not even portable to other x86 platforms (like Intel Macs). In Excel, the floating-point number formatting routines are hand-coded in assembly. I assume the Office team does this to keep these apps nice and snappy under a large workload, but it certainly doesn't help ISA switches.

    Most video/audio apps have significant features that depend on well-tuned, hand-written assembly. That is why it took Adobe so long to port Photoshop: they had to recode all their PPC-optimized processing routines. The above reason is why VBA was dropped from Mac Office 2008.
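
A hedged C sketch of the kind of layout knowledge described above: a hand-rolled version of roughly what a C++ compiler generates for a class with two virtual functions. The struct layout and slot numbering are assumptions about one particular ABI, which is exactly the sort of detail hand-written assembly bakes in and which breaks when the compiler or ISA changes.

    #include <stdio.h>

    /* Hand-rolled "vtable": roughly what a C++ compiler produces for a
     * class with two virtual methods under a common ABI (assumption). */
    struct Widget;
    typedef void (*method_t)(struct Widget *);

    struct Widget {
        const method_t *vtbl;   /* hidden vptr at offset 0 */
    };

    static void widget_draw(struct Widget *w)   { (void)w; puts("draw");   }
    static void widget_resize(struct Widget *w) { (void)w; puts("resize"); }

    /* Slot 0 = draw, slot 1 = resize -- assembly that hardcodes these
     * indices stops working if the layout or ordering ever changes. */
    static const method_t widget_vtbl[] = { widget_draw, widget_resize };

    int main(void)
    {
        struct Widget w = { widget_vtbl };
        w.vtbl[1](&w);          /* a "virtual call" via slot index 1 */
        return 0;
    }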

  • by Waffle Iron ( 339739 ) on Tuesday May 20, 2008 @03:29AM (#23472124)

    Modern CPUs "translating instructions into hardware instructions" with a gate maze is essentially the same thing as pulling a wide microcode word from ROM whose bits directly control the logic units.

    Only if you ignore the mechanism of how it's done. However, the term "microcode" was created to describe the mechanism, not the result.

    Under your definition, it would appear that any division of an instruction into multiple suboperations would qualify as microcode. That would presumably include the old-time CPUs that used state-machine sequencers made from random flip-flops and gates to run multi-step operations.

    The end result of those state machines was the same as microcode, and the microcode ROM (which included the next ROM address as part of the word) was logically a form of state machine. However, the word microcode was used to differentiate a specific type of state machine, where the logic functions were encoded in a regular grid-shaped ROM array, from other types of state machines. Modern CISC code translation does not involve ROM encoding, and is not this type of state machine.
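
A toy C sketch of the "one ROM word per clock, next address in the word" state machine described above; the field widths and control-bit values are invented purely for illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Each ROM word holds the control bits for one clock plus the address
     * of the next word, so the ROM array itself is the state machine. */
    typedef struct {
        uint16_t control_bits;   /* would drive ALU / register / bus enables */
        uint8_t  next_addr;      /* next microinstruction (0xFF = done)      */
    } uword_t;

    static const uword_t urom[] = {
        { 0x0013, 1    },        /* e.g. fetch operand    */
        { 0x0820, 2    },        /* e.g. ALU operation    */
        { 0x0104, 0xFF },        /* e.g. write back, done */
    };

    void run_microprogram(uint8_t entry)
    {
        for (uint8_t pc = entry; pc != 0xFF; pc = urom[pc].next_addr)
            printf("clock: control word 0x%04x\n", urom[pc].control_bits);
    }

Calling run_microprogram(0) steps through the three control words, one per simulated clock, the way the hardware sequencer would.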

  • Re:CISC is dead (Score:4, Informative)

    by Guy Harris ( 3803 ) <guy@alum.mit.edu> on Tuesday May 20, 2008 @03:36AM (#23472176)

    The third slide in the presentation clearly states that the Z6 is a sibling of the Power6

    As that slide says, "Siblings, not identical twins", "Different personalities", and "Very different ISAs=> very different cores".

    Further along in the presentation, slide 14 talks about the use of multiple-passes and millicode to handle CISC ops.

    To be precise, it says "Multi-pass handling of special cases" and "Leverage millicode for complex operations"; that means "complex instructions trap to millicode", where "millicode" is similar to, for example, PALcode in Alpha processors - it's z/Architecture machine code plus some special millicode-mode-only instructions to, for example, manipulate internal status registers. See, for example, "Millicode in an IBM zSeries processor" [ibm.com].

    Clearly the Z6 is exquisitely optimized to execute the z/Architecture instruction set efficiently. It is also clear that it is part of the Power6 family.

    It's clear that, as the third slide says, the Z6 "share[s] lots of DNA" with the Power6, i.e. it shares the fab technology, some low-level "design building blocks", large portions of some functional units, the pipeline design style, and many of the designers.

    It's not at all clear, however, that it would belong to a family with "Power" in its name, given that it does not implement the Power ISA. If it's a sibling of the Power6, that makes it, in a sense, a member of a family that includes the Power6. But given that its native instruction set is different from the Power instruction set, it makes no sense to give that family a name such as "Power6" or anything else involving "Power" - and it certainly makes no sense to assert that it's "based on the POWER architecture", as the person to whom I was responding claimed.

  • Not quite (Score:3, Informative)

    by MarkusQ ( 450076 ) on Tuesday May 20, 2008 @09:02AM (#23474056) Journal

    word microcode was used to differentiate a specific type of state machine, where the logic functions were encoded in a regular grid-shaped ROM array

    While this was by far the most common sort of implementation, it wasn't what drove the definition. Many factors can affect how things ultimately get laid out on the silicon, and nobody ever said "well, we thought it was going to be microcoded, but the ROM area wound up L-shaped instead of rectangular, so I guess it isn't."

    What drove the definition was what differentiated microcoded architectures from their peers and predecessors--the explicit use of a systematic way to organize and sequence the control lines (and there was some overlap and blur around the edges--ad hoc systems with "meta-control lines," gate arrays, RAM, and even demultiplexers instead of ROMs, etc.) to permit the design of more complex instructions. Because they were systematic, such systems could be written down like code instead of being laid out like circuits (which they ultimately were), and thus the name.

    Microcode is a way of designing (and thinking about) a CPU, not, at the end of the day, a way of implementing one. You could take a fully specified microcoded architecture and opportunistically replace some or all of the microstore with a gate maze without affecting its formal behaviour. Since the result would often be smaller, faster, and use less power, this was commonly done.

    --MarkusQ
