Hardware

RISC Vs. CISC In Mobile Computing 126

eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"
This discussion has been archived. No new comments can be posted.

  • Completely pointless (Score:5, Interesting)

    by El Cabri ( 13930 ) * on Monday May 19, 2008 @08:06PM (#23469122) Journal
    RISC vs CISC was the architecture flamewar of the late 1980s. Welcome to the 21st century; you'll like it here. It's a world where, since the late '90s, the ISA (instruction set architecture) has been so abstracted away from the actual micro-architecture of the microprocessor that it is completely pointless to draw distinctions between the two. Modern processors are RISC, they are CISC, they are vector machines, they're everything you want them to be. Move on; the modern problems are now in multi-core architecture and its issues of memory coherence, cache sharing, memory bandwidth, interlocking mechanisms, uniform vs non-uniform memory access, etc. The "pure RISC" standard bearers of yore have disappeared or have been expelled from the personal computing sphere (remember Apple ditching PowerPC? Alpha anyone? Where have those shiny MIPS-based SGIs gone?). Even Intel couldn't impose a new ISA on its own (witness the poor adoption of IA-64). The only RISC ISA with any presence in the personal computing arena, including mobile, is ARM, and ARM does only mobile. There's really no reason at all to build any device on which you plan to run generic OSes and a rich computing experience on anything other than x86 or x86-64 machines.
  • by serviscope_minor ( 664417 ) on Monday May 19, 2008 @08:07PM (#23469128) Journal
    Intel successfully killed the high-end CPU manufacturers. However, recently they have had poor performance in the very low power arena. Their main offering (XScale, until they sold it) was poor compared to the competition. Compare the Intel PXA27x to the Philips LPC3180. The Philips chip has about the same instruction rate for integer instructions (at half the clock rate), hardware floating point (so it's about 5x as fast at this), and draws about 1/5 of the power. I know which one I prefer...

    Unlike the old RISC workstation manufacturers, which relied on a small market of high-margin machines, the current embedded CPU manufacturers operate in a huge, cut-throat world where they need to push the price/performance ratio as high as possible to maintain a lead. I think this market will be somewhat tougher to crack than the workstation market, since Intel does not have what it had before: an advantage in volume shipped.

  • by erice ( 13380 ) on Monday May 19, 2008 @08:28PM (#23469268) Homepage
    They just aren't very important distinctions anymore.
    Both refer to the instruction sets, not the internal workings. x86 was CISC in 1978 and it's still CISC in 2008. ARM was RISC in 1988 and is still RISC in 2008. AMD64 is a borderline case.

    People get confused by the way current x86s break instructions apart into micro-ops. That doesn't make them RISC; it just makes them microcoded, which is how most CISC processors work. RISC processors rarely use anything like microcode, and when they do, it is looked upon as very unRISCy.

    Today, the internals of RISC and CISC processors are so complex that instruction set processing is barely a shim. There are still some advantages to RISC, but they are dwarfed by out-of-order execution, vector extensions, branch prediction, and the other enormously complex features of modern processors.
  • by Chris Burke ( 6130 ) on Monday May 19, 2008 @08:46PM (#23469432) Homepage
    That's pretty standard in a lot of "RISCy" architectures, though. The POWER instruction set has a lot of ALU instructions that look like multiple operations jammed together. It has one particularly complicated shifting-and-masking instruction that makes me think they decided to add programmatic access to the load data aligner in the data cache. I've always wondered if they regretted that as they changed the micro-architecture, since most likely the DC ended up being farther away from the integer scheduler. Maybe a similar motivation is behind the shifting on every ALU op in ARM; I don't really know.

    Ultimately, though, I think "RISC" is still a pretty valid description. Sure the complexity of some instructions strains the ideals behind RISC philosophy, but it certainly has what I consider the most important aspects of a RISC ISA:
    1) Fixed instruction width. Makes superscalar instruction fetch and decode a breeze.
    2) Pure load/store design. Instructions are -either- a load, a store, or an operation on registers. This makes dispatch and scheduling simpler.

    These I consider critical to being "RISC", and they're also solid and easily definable characteristics. "Complexity of instructions" is subjective. Personally, if I had to draw a hard-and-fast line, I'd say any ISA that can be completely implemented without microcode, and still follows the above two rules, qualifies as not being "too complex". I mean, it's relative, right? And since some x86 instructions get decoded into hundreds of micro-ops, I don't think a mere conjoining of two ALU operations is all that bad. (Both points are sketched in code below.)
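
    A minimal C sketch of the two points above (the load/store split and ARM's folded shift); the assembly in the comments reflects what a typical compiler emits for these patterns and is for illustration only, not taken from any particular build:

        /* Load/store (RISC) vs. memory-operand (CISC) handling of the same
         * C statement, plus ARM's "conjoined" shift-and-add.
         */
        int counter;

        void bump(void)
        {
            counter++;
            /* x86 (CISC): one read-modify-write instruction, e.g.
             *     addl $1, counter(%rip)
             * ARM (RISC, load/store): three register-only steps, e.g.
             *     ldr  r1, [r0]      @ load
             *     add  r1, r1, #1    @ operate on registers
             *     str  r1, [r0]      @ store
             */
        }

        int scaled_add(int a, int b)
        {
            /* ARM folds the shift into the add, e.g.
             *     add r0, r0, r1, lsl #2
             * so this is still a single register-to-register operation.
             */
            return a + (b << 2);
        }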
  • by mykepredko ( 40154 ) on Monday May 19, 2008 @08:53PM (#23469486) Homepage
    As I read the previous posts, it seems like the focus is on RISC vs CISC, but I think the real question is whether there is value-add for designers in having an x86-compatible embedded microcontroller.

    People (and not just us) should be asking whether end customers would find it useful to be able to run their PC apps on their mobile devices. Current mobile devices typically have PowerPoint and Word readers (with maybe some editing capabilities), but would users find it worthwhile to be able to load apps onto their mobile devices from the same CDs/DVDs that were used to load those apps onto their PCs?

    If end customers do find this attractive, would they be willing to pay the extra money for the chips (the Atom looks to require considerably more gates than a comparable ARM) as well as for the extra memory (Flash, RAM & Disk) that would be required to support PC OSes and apps? Even if end customers found this approach attractive, I think OEMs are going to have a long, hard think about whether or not they want to port their radio code to the x86 with Windows/Linux when they already have infrastructures built up with the processors and tools they are currently using.

    The whole thing doesn't really make sense to me: if Intel wanted to be in the MCU business, why did they sell that business to Marvell (a deal which also included the mobile WiFi technology)?

    The whole thing seems like a significant gamble: betting that customers will want products built from this chip, against the need for Intel and OEMs to recreate for the x86 Atom the infrastructure they already have for existing chips (i.e. ARM).

    myke
  • by vought ( 160908 ) on Monday May 19, 2008 @09:36PM (#23469812)

    Although they require multiple instructions to do things, these are almost always 1 or 2 cycles each. That means that although it may have to execute 3 instructions to do the same as 1 CISC instruction, it's often done in half the clock cycles.
    Unfortunately, marketing rules the day in the minds of consumers, so AltiVec/VMX and Apple's PowerPC ISA advantages were lost on consumers looking for the "fastest" machines in the consumer space.

    Until recently, there were still speed advantages to using a four-core multi-processor G5 for some operations over the 3.0GHz eight-core Xeon Mac Pros because of VMX.

    It is somewhat ironic that the Core architecture chips now used by Apple in all but the Mac Pros are all below the 3GHz clock "wall" that was never overcome by the G5, but the Intel name seems to have gone a long way in assuaging consumer doubts about buying a Mac.
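
    As a rough illustration of what VMX/AltiVec bought the G5 in those cases, here is a minimal sketch using the standard AltiVec intrinsics (the function and variable names are invented, and it assumes a PowerPC toolchain built with -maltivec):

        #include <altivec.h>

        /* Four single-precision multiply-adds per instruction:
         * every lane computes x*scale + offset at once.
         */
        vector float scale_and_offset(vector float x,
                                      vector float scale,
                                      vector float offset)
        {
            return vec_madd(x, scale, offset);
        }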
  • Re:CISC is dead (Score:3, Interesting)

    by Darinbob ( 1142669 ) on Monday May 19, 2008 @09:47PM (#23469910)

    There are no CISC CPUs anymore.
    One big difference between CISC and RISC was philosophy. CISC didn't really have a philosophy; it was just the default. The RISC philosophy was to trim out the fat: speed up the processing and make efficient use of chip resources, even if it makes the assembler code uglier. That is, toss out the middle-man that is the microcode engine, moving some of its work down to hardware and some up to the programmer's level, then use the savings for more registers, concurrency, etc.

    The new x86 Intel CPUs don't really have that philosophy. They use many techniques pioneered on RISC CPUs, but they haven't disposed of the instruction set. Compilers are still stuck trying to optimize at the CISC level. The microcode engine is still there in some sense, converting high-level x86 code to internal micro-operations. Intel keeps CISC working by pouring huge amounts of resources into the design.

    Of course Intel is in a bind here. They can't dump the x86, it's their bread, butter, and dessert. They have to make CISC fast because their enormous customer base demands it. They're forever stuck trying to keep an instruction set from 1978 going strong since they can't just toss it all out and make something simpler.
  • Re:What the Heck? (Score:3, Interesting)

    by Darinbob ( 1142669 ) on Monday May 19, 2008 @09:52PM (#23469948)

    Microcode has done wonders in turning complex instructions into a series of simpler instructions like one would find on a RISC processor.
    But that's exactly what most CISC-style computers were doing when RISC was first thought about. This is the classic CISC design model, as with the VAX: high-level instructions with complex addressing modes, all handled by a microcode engine that had its own programming with a simpler, finer-grained instruction set (some microcode was VLIW-like, some more RISC-like).
  • by Darinbob ( 1142669 ) on Monday May 19, 2008 @10:03PM (#23470014)

    And, it turns out, you need to have an RMW-style instruction anyway, even if you are RISC, if you want to have any hope of operating in an SMP environment.
    PowerPC manages without one. It does have to use one special load and one special store instruction, but it has no read-modify-write or test-and-set instructions (see the sketch below).
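
    A minimal sketch of that difference, using the portable GCC/Clang __atomic builtins (the generated code depends on the compiler and options; the instruction names in the comments are the usual ones, not compiler output):

        #include <stdint.h>

        /* Atomic increment, expressed once for both styles of ISA.
         * On x86 the compiler can emit a single locked read-modify-write
         * (e.g. "lock addl"); on PowerPC it builds a retry loop from the
         * special load/store pair mentioned above (lwarx / stwcx.).
         */
        void atomic_bump(int32_t *p)
        {
            __atomic_fetch_add(p, 1, __ATOMIC_SEQ_CST);
        }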

  • by prockcore ( 543967 ) on Monday May 19, 2008 @10:50PM (#23470374)
    The PowerPC is nothing without the AltiVec vector unit, which is a decidedly CISC concept.
  • by Anonymous Coward on Monday May 19, 2008 @11:16PM (#23470580)
    The real question is whether ARM thumb instructions have higher code density than x86 instructions for infrequently executed code. Instruction bandwidth is far more precious than the execution complexity on-die (chip IOs toggling far outweigh any decoder logic), so for mobile it's really about how efficiently you can compress the instructions, not what kind of architecture they are based on. I'm guessing ARM still comes out ahead, but it would be an interesting experiment to run...
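
    One low-effort way to run that experiment is to compile the same routine for Thumb and for x86 at -Os and compare the .text sizes; the commands below are only a sketch (toolchain names vary, and a single function gives a first-order number, not a benchmark):

        /* density.c -- some arbitrary "infrequently executed" style code.
         * Build and measure, e.g.:
         *   arm-none-eabi-gcc -mthumb -Os -c density.c && size density.o
         *   gcc -m32 -Os -c density.c && size density.o
         */
        unsigned checksum(const unsigned char *buf, unsigned len)
        {
            unsigned sum = 0;
            while (len--)
                sum = (sum << 1) ^ *buf++;
            return sum;
        }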
  • by bill_kress ( 99356 ) on Tuesday May 20, 2008 @12:19AM (#23470944)
    I understand the theory: you simplify the instructions so the processor can run faster, then optimize the processor to run as fast as you can.

    In other words, you are designing your instruction set to your hardware.

    Now, assuming that you are going to have close to infinite investment in speeding up the CPU, it seems that if you are going to fix an instruction set across that development time, you want the instruction set to be the smallest and most powerful you can get.

    That way, in the same cycle, instead of executing one simple instruction you are executing one powerful one (that does, say, 5x more than the simple one).

    Now at first the more powerful one will take more time than the simple one, but as the silicon becomes more powerful, the hardware designers are going to come up with a way to make it take only 2x as long as the simple one. Then less.

    I guess I mean that you will get more relative benefit tweaking the performance of a hard instruction than an easy one.

    Also, at some point the Memory to CPU channel will be the limit.

    I'd kinda like to see Intel take on an instruction set designed for the compiler rather than the CPU (like Java Bytecode). Bytecode tends to be MUCH smaller--and a quad-core system that directly executes bytecode, once fully optimized, should blow away anything we have now in terms of overall speed.
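
    To make the density point concrete, here is a toy stack-bytecode interpreter in C (the opcodes are invented for illustration and are not Java bytecode); because operands live implicitly on the stack, most operations need only one byte:

        #include <stdio.h>

        enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

        static void run(const unsigned char *code)
        {
            int stack[64], sp = 0;
            for (;;) {
                switch (*code++) {
                case OP_PUSH:  stack[sp++] = *code++;            break;
                case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
                case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
                case OP_PRINT: printf("%d\n", stack[--sp]);      break;
                case OP_HALT:  return;
                }
            }
        }

        int main(void)
        {
            /* (2 + 3) * 4 in ten bytes of "program" */
            const unsigned char prog[] = {
                OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL,
                OP_PRINT, OP_HALT
            };
            run(prog);
            return 0;
        }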
  • by MarkusQ ( 450076 ) on Tuesday May 20, 2008 @12:26AM (#23470990) Journal

    That most certainly does not make it microcoded. Microcode is a set of words encoded in ROM memory that are read out one per clock, whose bits directly control the logic units of a processor. Microcode usually runs sequentially, in a fixed order, may contain subroutines, and is usually not very efficient.

    Modern CISC CPUs translate the incoming instructions into a different set of hardware instructions. These instructions are not coded in a ROM, and they can run independently, out of order and concurrently. They are much closer to RISC instructions than to any microcode.

    The distinction you seem to be trying to draw here is not very sound. Modern CPUs "translating instructions into hardware instructions" with a gate maze is essentially the same thing as pulling a wide microcode word from ROM whose bits directly control the logic units. In both cases you put some bits in to start the process off, and you get a larger number of bits out as a wide bus of signals, which are used to direct traffic inside the CPU. The picture only looks different on the surface.

    Specifically, the different parts of each microcode instruction executed in parallel then, just as now, though out-of-order execution was much rarer (some DSPs had it, IIRC). This was not because microcode as it was then conceived couldn't handle it, but because the in-CPU hardware to support it wasn't there. There's no point going through gymnastics to feed your ALU if you've only got one and it's an order of magnitude slower than the circuit that feeds it.

    One of the biggest annoyances of staying in any one field for too long is having to watch some technology following the logical path from conception to fruition go through an endless series of renaming (AKA jargon upgrades) that add nothing but confusion and pomposity to the field.

    --MarkusQ
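
    A toy model of the control-word view described above: a horizontal microcode word is just a wide bundle of bits steering the datapath, and a modern decoder's output can be read the same way. The field names and widths are invented for illustration; real control words are far wider:

        struct ucode_word {
            unsigned alu_op       : 4;   /* which ALU function to select */
            unsigned src_a_reg    : 5;   /* register file read port A    */
            unsigned src_b_reg    : 5;   /* register file read port B    */
            unsigned dest_reg     : 5;   /* register file write port     */
            unsigned mem_read     : 1;   /* drive a load this cycle      */
            unsigned mem_write    : 1;   /* drive a store this cycle     */
            unsigned branch_taken : 1;   /* steer the sequencer          */
            unsigned next_address : 10;  /* next microcode ROM entry     */
        };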

  • Re:What the Heck? (Score:2, Interesting)

    by eyal0 ( 912653 ) on Tuesday May 20, 2008 @05:47AM (#23472818)
    I didn't think about it that way, but you're right, it's true. If you don't care how hot or big your chip gets, give your user as many instructions as you can. Having a bunch of little instructions means that they all take as long as the slowest one, even if most of them don't need a full clock cycle.

    The interesting part of the article is about the process. Intel's domination has been in its process, always a few steps ahead of the competition (maybe just a half step ahead of TSMC). Newer processes have always yielded faster, smaller, cooler chips. Not anymore: 65nm didn't make chips use less power, and 45nm doesn't help either.

    In a sense, one dimension of the playing field has become level between Intel and the foundries, and that's the dimension in which embedded plays.
