RISC Vs. CISC In Mobile Computing 126
eldavojohn writes "For the processor geeks here, Jon Stokes has a thoughtful article up at Ars Technica analyzing RISC vs. CISC in mobile phones (Wikipedia on Reduced Instruction Set Computers and Complex Instruction Set Computers). He wraps it up with two questions: 'How much is the legacy x86 code base really worth for mobile and ultramobile devices? The consensus seems to be "not much," and I vacillate on this question quite a bit. This question merits an entire article of its own, though,' and 'Will Intel retain its process leadership vs. foundries like TSMC, which are rapidly catching up to it in their timetables for process transitions? ARM, MIPS, and other players in the mobile device space that I haven't mentioned, like NVIDIA, AMD/ATI, VIA, and PowerVR, all depend on these foundries to get their chips to market, so being one process node behind hurts them. But if these RISC and mobile graphics products can compete with Intel's offerings on feature size, then that will neutralize Intel's considerable process advantage.'"
CISC is dead (Score:5, Insightful)
Re: (Score:1)
Re: (Score:3, Informative)
Re: (Score:1)
Re:CISC is dead (Score:4, Informative)
No, there aren't lots of old mainframes still running. But there are probably more new mainframes running than when computers were exclusively located in data centers. Back in the day, your chance of working directly with a mainframe, given that you worked with computers, was 1.0; now it's probably more like 0.001. But there are a lot more people working with computers.
Re: (Score:2)
Not according to an IBM slide presentation on the Z6 microprocessor [ibm.com], which is the latest processor for the z/Architecture machines (the machines that are descendants of the System/360 mainframes).
The AS/400^WiSeries^WSystem i midrange machines do use PowerPC processors, but they're different from the z/Architecture machines.
Re:CISC is dead (Score:4, Informative)
As that slide says, "Siblings, not identical twins", "Different personalities", and "Very different ISAs=> very different cores".
To be precise, it says "Multi-pass handling of special cases" and "Leverage millicode for complex operations"; that means "complex instructions trap to millicode", where "millicode" is similar to, for example, PALcode in Alpha processors - it's z/Architecture machine code plus some special millicode-mode-only instructions to, for example, manipulate internal status registers. See, for example, "Millicode in an IBM zSeries processor" [ibm.com].
It's clear that, as the third slide says, the Z6 "share[s] lots of DNA" with the Power6, i.e. it shares the fab technology, some low-level "design building blocks", large portions of some functional units, the pipeline design style, and many of the designers.
It's not at all clear, however, that it would belong to a family with "Power" in its name, given that it does not implement the Power ISA. If it's a sibling of the Power6, that makes it, in a sense, a member of a family that includes the Power6. But given that its native instruction set is different from the Power instruction set, it makes no sense to give that family a name such as "Power6" or anything else involving "Power" - and it certainly makes no sense to assert that it's "based on the POWER architecture", as the person to whom I was responding asserted.
Re:CISC is dead (Score:5, Informative)
So basically, things are so much more complicated these days that you can't even call x86 chips RISC CPUs with CISC instruction sets.
We're in a post-RISC era.
Re: (Score:1)
CISC is alive and well and so is RISC (Score:5, Interesting)
Both refer to the instruction sets, not the internal workings. x86 was CISC in 1978 and it's still CISC in 2008. ARM was RISC in 1988 and still RISC in 2008. AMD64 is a borderline case.
People get confused by the way current x86s break apart instructions into micro-ops. That doesn't make it RISC. That just makes it microcoded. That's how most CISC processors work. RISC processors rarely use anything like microcode, and when they do, it is looked upon as very unRISCy.
Today, the internals of RISC and CISC processors are so complex that the almighty instruction set processing is barely a shim. There are still some advantages to RISC but they are dwarfed by out-of-order execution, vector extensions, branch prediction and other enormously complex features of modern processors.
Re:CISC is alive and well and so is RISC (Score:5, Informative)
That most certainly does not make it microcoded. Microcode is a set of words encoded in ROM memory that are read out one per clock, whose bits directly control the logic units of a processor. Microcode usually runs sequentially, in a fixed order, may contain subroutines, and is usually not very efficient.
Modern CISC CPUs translate the incoming instructions into a different set of hardware instructions. These instructions are not coded in a ROM, and they can run independently, out of order and concurrently. They are much closer to RISC instructions than to any microcode.
The X86 still contains real microcode to handle the stupid complex instructions from the 80286 era that nobody uses anymore. They usually take many clocks per instruction, and using them is not recommended.
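To picture what such "real microcode" looks like, here's a rough sketch of one horizontal microcode word as a C bitfield. The field names and widths are invented for illustration; the point is that each field directly drives control logic, and (as in classic designs) the address of the next microword is part of the word itself.

/* A hypothetical horizontal microcode word, sketched as a C bitfield.
   Field names and widths are invented, purely illustrative. Each field
   directly drives control lines; one word is read from the ROM per
   clock, and the address of the next microword is part of the word. */
struct micro_word {
    unsigned alu_op    : 4;  /* select ALU function (add, and, shift, ...) */
    unsigned reg_src   : 3;  /* register file read port select */
    unsigned reg_dst   : 3;  /* register file write port select */
    unsigned mem_read  : 1;  /* assert the memory read strobe */
    unsigned mem_write : 1;  /* assert the memory write strobe */
    unsigned next_addr : 12; /* ROM address of the next microword */
};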
Looks like microcode, smells like microcode,... (Score:3, Interesting)
Re:Looks like microcode, smells like microcode,... (Score:4, Informative)
Only if you ignore the mechanism of how it's done. However, the term "microcode" was created to describe the mechanism, not the result.
Under your definition, it would appear any division of an instruction into multiple suboperations would qualify as microcode. That would presumably include the old-time CPUs that used state machine sequencers made from random flip flops and gates to run multi-step operations.
The end result of those state machines was the same as microcode, and the microcode ROM (which included the next ROM address as part of the word) was logically a form of state machine. However, the word microcode was used to differentiate a specific type of state machine, where the logic functions were encoded in a regular grid-shaped ROM array, from other types of state machines. Modern CISC code translation does not involve ROM encoding, and is not this type of state machine.
Not quite (Score:3, Informative)
While this was by far the most common sort of implementation, it wasn't what drove the definition. Many factors can affect how things ultimately get laid out on the silicon, and nobody ever said "well, we thought it was going to be microcoded but the ROM area wound up L-shaped instead of rectangular, so I guess it isn't."
What drove the definition was what
Re: (Score:2)
The distinctions can get really blurry around the edges, but in a sense it really IS fair to claim the x86 is a RISC core with CISC wrapped around it. The x86-64 more so.
Under the surface, the execution units look a lot like a set of specialized RISC processors with a large set of registers. They do no branch prediction or out-of-order execution themselves; they just execute the instruction stream from the dispatcher.
Above that layer, the familiar registers of the CISC instruction set are actually virtual regis
Re: (Score:1, Insightful)
"Neither Intel nor Motorola nor any other chip company understands the first thing about why that architecture [Burroughs B5000] was a good idea.
Just as an aside, to give you an interesting benchmark -- on roughly the same system, roughly optimized the same way, a benchmark from 1979 at Xerox PARC runs only 50 times faster today. Moore's law has given us somewhere between 40,000 and 60,000 times improvement in that time. So there's approximately a factor of 1,000 in efficien
Re: (Score:2)
It's just a question of to whom and when. CPU architecture does matter overall. It does NOT matter to the programmer of an embedded system today. All that matters there is: is it fast enough to do what needs doing? To the hardware guy the question is: is it low enough power (and heat) to make a practical device today? Somewhere in there, the unit cost will come into play as well.
Being a particular architecture has little to do with any of that. Thus, as long as you have the source and the toolchain needed it's
Re: (Score:2)
There are no CISC CPUs anymore.
For cutting-edge processors that is definitely the case. CISC really doesn't lend itself well to techniques like pipelining, so Intel takes its "complicated" legacy CISC instructions and breaks them down into several smaller RISC operations (much like parent was saying).
I believe there are still a lot of older processors out there being used that would be considered CISC CPUs though. This would be more true in spaces like embedded systems where they don't need the latest and greatest to accomplish
Re: (Score:2)
While embedded apps often don't need the latest and greatest, low power and little enough heat to go with passive cooling are a BIG plus. RISC lends itself better to low power and heat.
Re: (Score:3, Interesting)
There are no CISC CPUs anymore.
One big difference between CISC and RISC was with philosophy. CISC didn't really have a philosophy though, it was just the default. The RISC philosophy was to trim out the fat: speed up the processing and make efficient use of chip resources, even if it makes the assembly code more verbose. I.e., toss out the middleman that is the microcode engine, moving some of it down to hardware and some up to the programmer's level. Then use that extra savings for more registers, concurrency, etc.
The new x86 Intel CPUs don't really ha
Re: (Score:2)
One big difference between CISC and RISC was with philosophy. CISC didn't really have a philosophy though, it was just the default.
Insofar as there was a philosophy, it was to make best use of the scarce resources of the time: memory (so pack as much functionality as possible into an instruction byte) and programmer labor (so make the assembly language as versatile to hand-code as possible).
Of course Intel is in a bind here. They can't dump the x86, it's their bread, butter, and dessert. They have to make CISC fast because their enormous customer base demands it. They're forever stuck trying to keep an instruction set from 1978 going strong since they can't just toss it all out and make something simpler.
I wouldn't be quite so sympathetic toward them. They do it because it's a good barrier to entry and they already own most of the ways around it. Besides, people just won't buy a desktop processor that doesn't natively run x86 code, so if you wan
Re: (Score:2)
No. Compilers are fully aware of what's happening and take into account what happens after CISC instructions are broken down. Of course there is some gymnastics and overhead involved in translating CISC instructions into RISC instructions, but it's not as bad as you make it sound. What's really complex in modern general-purpose CPUs is all the stuff related to superscalar execution: dependencies, out-of-order execution, branch prediction. Those
Re: (Score:3, Informative)
Re:CISC is dead (Score:4, Insightful)
The defining characteristic of CISC is that it assumes that the fetch part of the fetch/execute cycle is expensive. Therefore, instructions are designed to do as much as possible so you need to use as few as possible.
The defining characteristic of RISC is pipelining. RISC assumes that fetches are cheap (because of caches) and thus higher instruction throughput is the goal.
The KEY difference between RISC and CISC isn't the number of instructions or how "complex" they are.
RISC instructions are fixed-size (usually 32 or 64 bits). CISC instructions tend to vary in size, with added words for immediate data and other trimmings.
CISC has lots of addressing modes, RISC tends to have very few.
CISC allows memory access with most instructions. Most RISC instructions operate only on registers.
CISC has few registers. RISC has many registers.
Arguing about whether CISC or RISC is faster is moot. Core 2 isn't fast because it's "CISC" or "RISC", it's fast because it's a very well designed architecture. The fact is, designing ANY competitive CPU today is extraordinarily difficult. RISC made a difference in the early 90s when CISC designs were microcoded and RISC could be pipelined. But most performance CPUs today are vastly different internally.
Re: (Score:2)
This is no longer true, now the bottleneck is memory.
Re: (Score:2)
This is no longer true, now the bottleneck is memory.
I'd also make the claim that single-threaded performance is a bottleneck.
Designers wowed us in the 1980s and early 90s with fully-pipelined designs that approached an IPC of 1. This gave you double or more performance over non-pipelined architectures. Even Intel surprised the world with the
Re: (Score:3, Informative)
Re: (Score:2)
In the x86, there are so many instructions, the chip designers don't optimize all of them equally. If you want maximum efficiency, you will need to use the correct instruction, and it may vary from chip to chip.
RISC as a thought suppressant (Score:2)
I hate the way RISC is embraced by many as a pretext to operate their brain in neutral.
Consider the x86 read-modify-write address mode, which few RISC chips incorporate.
Compared to the RISC load, operate, store sequence, this saves you: two instruction fetches, the unnecessary use of a named register, unnecessary read/write transfers to the permanent register file, and transferring the *same* load/store address to the memory order buffer *twice*.
Fixed width RISC instruction sets with large register files an
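To make that concrete, here is a minimal C sketch of the comparison; the instruction sequences in the comments are simplified and assume AT&T-syntax x86-64 and a classic MIPS-style RISC.

#include <stdint.h>

/* The same memory increment, seen both ways. */
void bump(uint32_t *counter) {
    /* x86 (CISC): one read-modify-write instruction, roughly
           addl $1, (%rdi)
       classic RISC (MIPS-style): load, operate, store, roughly
           lw    $t0, 0($a0)
           addiu $t0, $t0, 1
           sw    $t0, 0($a0)   */
    *counter += 1;
}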
What the Heck? (Score:5, Insightful)
The author's real point appears to be: x86 vs. Other Embedded Architectures. Without even looking at the article (which I did do), it's not hard to answer that one: There is no need for x86 code in a mobile platform. The hardware is going to be different than a PC, the interface is going to be different than a PC, and the usage of the device is going to be different than a PC. Providing x86 compatibility thus offers few, if any, real advantages over an ARM or other mobile chip.
If Intel's ATOM takes off, it will be on the merits of the processor and not on its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in Desktop PCs across the world.
Re: (Score:1)
It's a P6 chip. Triple the speed of the Pentium
Yeah. It's not just the chip, it has a PCI bus
RISC architecture is gonna change everything
Yes, I know. Not technically correct. But had to do it.
Re: (Score:1)
*sigh* Despite its flaws, it was a fun shiny movie. Nobody would pay $10 to see what geeks REALLY do. Not ev
Re: (Score:1)
It really was 'bleeding edge' tech (in theory) when it came out. We've come so far so quickly. A friend and I still joke about the '28.8bps' modem (I had the Zoom modem featured in the film).
Now on the same note, the virus attack scene at the end was repeated today at my place of work. I haven't seen bugz crawling across the screen literally eating it
There are very few RISC, but there are some (Score:5, Informative)
The only real point in x86 is Windows compatibility. Linux runs fine on ARM and many other architectures. There are probably more ARM Linux systems than x86-based Linux systems (all those Linux cellphones run ARM).
Apart from some very low level stuff, modern code tends to be very CPU agnostic.
Re: (Score:1)
Re: (Score:2)
Currently deployed Windows Mobile is a working subset of the Win32 api used on PCs, uses the same 'standards compliant' protocols to communicate with other computers, and the only difference is it was compiled for the ARM.
With such a blistering example that the x86 instruction set is unnecessary to achieve platform portability, what advantage can the Atom bring to the table? Out-of-order execution isn't really a big advantage, especially when ARM is already sampling such
Re: (Score:2)
Mostly little 8-bitters (PIC and AVR), ...
If you consider PIC to be RISC, then you, sir, are quite the masochist. :-)
PIC may have a "reduced" instruction set, but it violates nearly every other principle of RISC design: its instruction set is hopelessly non-orthogonal, it is severely lacking in general-purpose registers (ONE), and its addressing modes are extremely crude, requiring bank switching and making position-independent code essentially impossible.
AVR on the other hand is great. It has a fully orthogonal instruction set, 32 general purpos
Re: (Score:3, Interesting)
Microcode has done wonders in turning complex instructions into a series of simpler instructions like one would find on a RISC processor.
But that's exactly what most CISC-style computers were doing when RISC was first thought about. This is the classic CISC computer design model, such as with the VAX: high-level instructions with complex addressing modes, all handled by a microcode engine that had its own programming with a simpler and finer-grained instruction set (some had a VLIW-like microcode, some were more RISC-like).
Microsoft dependency or lack of... (Score:2)
The author's real point appears to be: x86 vs. Other Embedded Architectures. Without even looking at the article (which I did do), it's not hard to answer that one: There is no need for x86 code in a mobile platform. The hardware is going to be different than a PC, the interface is going to be different than a PC, and the usage of the device is going to be different than a PC.[...] Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in Desktop PCs across the world.
Back in the 90s, Intel's x86 ISA managed to take over the desktop not so much because of its inherent benefits, but because of software.
The market for workstations was increasingly flooded with users who were looking for the only thing they recognized: a machine running Windows with associated software.
Given that Microsoft's presence and quality on non-Intel architectures was a joke at best, all of these newcomers gravitated to Windows-capable architectures. Then, with market economics doing their work, price of
Re: (Score:2, Insightful)
If Intel's ATOM takes off, it will be on the merits of the processor and not on its x86 compatibility. Besides, x86 was a terrible architecture from the get-go. There's something mildly hilarious about the fact that it became the dominant instruction set in Desktop PCs across the world.
I for one think this might be an excellent migration path for the industry. Let the mobile industry settle on a non-x86 processor. Then develop all the necessary software for that processor (lightweight OS, web browser, etc). Then produce an amped up version of that chip for laptops/desktops. Voila, we bootstrap the software that is needed to sell a chip, and we end up with a significantly more efficient platform than anything we'll ever see with x86.
A guy can dream can't he?
Re:What the Heck? (Score:4, Informative)
RISC design was really, really attractive from an architectural standpoint. It simplified the hardware to such a great degree that it was completely worth the pain and suffering it put compiler writers through. With microcode, even stupid CISC architectures like x86 were able to run on a RISC CPU.
But here's the rub: It is always slower to use multiple instructions to complete a task that could be completed in a single instruction with dedicated silicon.
With that simple fact in mind, it didn't take long for CISC-style instructions to start reappearing in the silicon designs. Especially once the fab technologies improved enough to negate the speed advantages in early RISC chips. (e.g. Alpha seriously kicked ass back in the day.) Chip designers like Intel took note of what instructions were slowing things down and began adding them back into the silicon.
Thus the bar moved. Rather than trying to keep the silicon clean, the next arms race began over who could put fancier vector instructions into their CPUs. Thus began the war over SIMD instructions. (Which, honestly, not that many people cared about. They are cool instructions, though, and can be blazingly fast when used appropriately.)
An interesting trend you'll notice is that instructions take more or fewer cycles to execute between revisions of processors. (Especially with x86.) Part of this is definitely changes in the microcode and CPU design. But part of it is a re-evaluation of silicon usage. Some instructions which used to be fast thus become slow when they move to microcode, and some instructions that were slow become fast when they move to silicon.
Rather interesting to watch in action.
Re: (Score:2, Interesting)
The interesting part of the article is about the process. Intel's domination has been in their process, always a few steps ahead of the competition (maybe just a half step ahead of TSMC). Newer processes h
Re: (Score:3, Insightful)
65nm and 45nm DID yield smaller and cooler chips. (On the smaller side, take a look at the Core Duo silicon sometime. It's amazing how much smaller it is than the PIV chip!) There's just one catch with that: When you shrink the processes and make the chip smaller and cooler, you also have the option of using those gains for new features. e.g. If my power usag
Re: (Score:2)
Ever hear of Transmeta [wikipedia.org]? They already tried that and failed. And in any case, Intel already provides 5.5 watt Core Solo [wikipedia.org] chips to the market. How much more power efficient do you want? Taking the performance hit of ATOM just to lose a watt and a half doesn't
How much is the legacy x86 code base really worth? (Score:2)
Re:How much is the legacy x86 code base really wor (Score:2)
mod parent up! Re:How much is the legacy x86 (Score:2)
Completely pointless (Score:5, Interesting)
Re: (Score:1)
Expelled simply because backwards-compatibility is important and performance comparisons between different architectures were difficult. Everywhere else, good design/low power is what selects which are used most.
Re:Completely pointless (Score:4, Insightful)
Apple got rid of PowerPC because Motorola and IBM had no incentive to innovate and develop competitive processors in the mid-range; RISC was most worthwhile in the high-end big iron IBM machines using POWER and the low end embedded market served by Motorola/Freescale.
Re:Completely pointless (Score:4, Interesting)
Re: (Score:2)
Ahem, the first PowerPC didn't have the AltiVec vector unit..
As for being a CISC concept, it is in some ways true that they are complex instructions (division is also a complex instruction provided by many RISCs), but they also have a fixed length, operate register-to-register only with separate load-store instructions, etc., so they're also RISC in many ways.
Re: (Score:2)
The PowerPC is nothing without the AltiVec vector unit, which is a decidedly CISC concept.
I beg to differ. The G3 with 1MB backside cache absolutely spanked the PIII at the same clock speeds - the G3 had no vector unit. If you have evidence that a vector unit was all that made PPC competitive, please provide it - but Apple regularly kicked Intel's butt from 1994 through 1999 without vector units - of course Motorola and IBM were still interested in making general purpose CPUs back then, too.
Re: (Score:2)
Personally, I prefer MIPS for my embedded devices. It's cleaner than ARM and dev-boards are easier to use.
Re: (Score:2)
Re: (Score:1, Troll)
Yeah, I like vectors. Takes all of the hard work out of having a dynamic-sized array of thingies.
Re: (Score:3, Informative)
Both of those are actually popular when it comes to big iron. Yes, Intel is it on the desktop, but for big honking servers it is just so-so. For small, lower-power devices it is pretty lame. There is no reason why a small, light mobile device has to be an x86.
Re: (Score:2)
Intel have a poor track record... (Score:3, Interesting)
Unlike the old RISC workstation manufacturers, which relied on a small market of high-margin machines, the current embedded CPU manufacturers operate in a huge, cut-throat world where they need to squeeze the price/performance ratio as high as possible to maintain a lead. I think this market will be somewhat tougher to crack than the workstation market, since Intel does not have what they had before: an advantage in volume shipped.
Insane mods. (Score:2)
The article mentions that Intel killed the old workstation RISC vendors. The parent post suggests why this is not so easy for Intel.
And further, how is Intel having a poor track record for embedded processors compared to other manufacturers offtopic for an article about Intel producing embedded processors?
Re: (Score:2)
agreed. Xscale had lots of early promise, but alternatives such as TI's OMAP offer better performance for lower power consumption - simply compare the performance of the Sharp Zaurus with its 400MHz PXA270 against the Nokia N800 with its 400MHz OMAP; the latter beats it hands down.
However, Xscale was a key Intel product for handhelds, and so I was not the only one puzzled when Intel sold Xscale to Marvell, and it was not until the Atom appeared that all became clear.
What value? (Score:1)
Using ARM on mobile platforms at least offers some hope of making a clean break from all the backwards compatibility cruft that x86 has dragged along with it for decades now.
ARM is RISC in name only (Score:2, Insightful)
I've had to work with the ARM ISA in the past (I was studying its implementation as a soft core on an FPGA), and I can tell you it doesn't follow the RISC philosophy well, if at all.
One ver
Re: (Score:1)
Re: (Score:2)
Re: (Score:3, Interesting)
Re: (Score:2)
It has one particularly complicated shifting and masking instruction that makes me think that they decided to add programmatic access to the load data aligner in the data cache.
Ugh, PPC is full of shit like that. I'm implementing a PPC core as part of my emulation platform ( http://sourceforge.net/projects/ironbabel/ [sourceforge.net] ) right now, and instructions like rlwinmx, srawx, srwx, slwx, cntlzwx, crxor are just painful. PPC/Power is really well designed, but god it's painful to deal with.
Re: (Score:2)
They probably just duped the alignment logic in the integer unit as it moved away from the data cache (because chip densities were growing at the same time), so it didn't bother them much at all.
Re: (Score:2)
Re: (Score:2)
RISC is good (Score:2)
There is no RISC vs CISC any more (Score:3, Informative)
The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything, it doesn't even slow the chip down because the chip is decoding from a wide cache line (or multiple wide cache lines) anyway.
So what does that leave us with? A load-store instruction architecture versus a read-modify-write instruction architecture? Completely irrelevant now that all modern processors have write buffer pipelines. And, it turns out, you need to have an RMW-style instruction anyway, even if you are RISC, if you want to have any hope of operating in an SMP environment. And regardless of the distinction, CPU architectures already have to optimize across multiple instructions, so again the concept devolves into trivialities.
Power savings are certainly a function of the design principles used in creating the architecture, but it has nothing whatsoever to do with the high level concept of 'RISC' vs 'CISC'. Not any more.
So what does that leave us with? Nothing.
-Matt
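That SMP point is visible even from portable C these days. Here is a minimal C11 sketch of an atomic read-modify-write; an x86 can implement the fetch-add as a single locked instruction, while a load-linked/store-conditional RISC implements it as a retry loop, but the source is identical either way.

#include <stdatomic.h>
#include <stdio.h>

int main(void) {
    /* One atomic read-modify-write, source-identical on any ISA. */
    atomic_int counter = 0;
    atomic_fetch_add(&counter, 1);   /* x86: lock add; LL/SC RISC: loop */
    printf("counter = %d\n", atomic_load(&counter));
    return 0;
}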
RTFA much? (Score:4, Informative)
Re: (Score:3, Interesting)
You all miss the point. (Score:4, Informative)
The problem these days is that it doesn't actually cost anything to have a complex instruction format. It's such a tiny, isolated piece of the chip that it doesn't count for anything, it doesn't even slow the chip down because the chip is decoding from a wide cache line (or multiple wide cache lines) anyway.
""
The problem with your assumption is that it's _wrong_.
It does cost something. The WHOLE ARTICLE explains in very good detail the type of overhead that goes into supporting x86 processors.
The whole point of ATOM is Intel's attempt to make the ancient CISC _instruction_set_ work on an embedded-style processor with the performance to handle multimedia and limited gaming.
The overhead of CISC is the complex arrangement that takes the x86 ISA and translates it to the RISC-like chip that Intel uses to do the actual processing.
When you're dealing with a huge chip like a Xeon or Core2Duo with a huge battery or connected directly to the wall, then it doesn't matter. You're taking a chip that would use 80 watts TDP and going to 96.
But with the ARM platform you not only have to make it so small that it can fit in your pocket, you also have to make the battery last at least 8-10 _hours_.
This is a hell of a lot easier when you can deal with an instruction set that is designed specifically to be stuck in a tiny space.
If you don't understand this, you know NOTHING about hardware or processors.
I think people are missing the point (Score:3, Interesting)
People (and not just us) should be asking: would end customers find it useful to be able to run their PC apps on their mobile devices? Current mobile devices typically have PowerPoint and Word readers (with maybe some editing capabilities), but would users find it worthwhile being able to load apps onto their mobile devices from the same CDs/DVDs that were used to load the apps onto their PCs?
If end customers do find this attractive, would they be willing to pay the extra money for the chips (the Atom looks to require considerably more gates than a comparable ARM) as well as for the extra memory (Flash, RAM & Disk) that would be required to support PC OSes and apps? Even if end customers found this approach attractive, I think OEMs are going to have a long, hard think about whether or not they want to port their radio code to the x86 with Windows/Linux when they already have infrastructures built up with the processors and tools they are currently using.
The whole thing doesn't really make sense to me, because if Intel wanted to be in the MCU business, then why did they spin it off as Marvell (which included the mobile WiFi technology as well)?
The whole thing seems like a significant gamble that customers will want products built from this chip, combined with the need for Intel and OEMs to recreate for the x86 Atom the infrastructure they already have for existing chips (i.e. ARM).
myke
Marvell Commics? (Score:1)
Re: (Score:3, Insightful)
Almost all application code written today is done in some portable manner. Writing custom assembly specific to a processor in an application is only done in certain performance critical things (ffmpeg anyone?). This is one of the reasons that Apple was so easily a
Re: (Score:2, Informative)
Re: (Score:2)
Also, it seems that the Atom chips provide a lot more performance than even the fastest ARM systems. Maybe we'll see more mobile devices whi
RISC on a PC doesn't make sense anymore (Score:1)
So my point is: if RISC nee
Re: (Score:2)
RISC processors usually (always, in practice) have higher performance for the same clock speed when compared to CISC processors. Although they require multiple instructions to do things, these are almost always 1 or 2 cycles each. That means that although it may have to execute 3 instructions to do the same as 1 CISC instruction, it's often done it in half the clock cycles.
Re: (Score:3, Interesting)
Although they require multiple instructions to do things, these are almost always 1 or 2 cycles each. That means that although it may have to execute 3 instructions to do the same as 1 CISC instruction, it's often done it in half the clock cycles.
Unfortunately, marketing rules the day in the mind of consumers, so AltiVec/VMX and Apple's PowerPC ISA advantages were lost on consumers looking for the "fastest" machines in the consumer space.
Until recently, there were still speed advantages to using a four core multi-processor G5 for some operations over the 3.0GHz eight-core Xeon Mac Pros because of VMX.
It is somewhat ironic that the Core architecture chips now used by Apple in all but the Mac Pros are all below the 3GHz clock "wall" that was never ov
Re: (Score:2)
Mac Pros and top-of-the-line iMacs, as per the iMac technical specifications [apple.com].
Re: (Score:2)
Oh, and being able to run Windows at the same time as OS X is a pretty nice touch too.
Re: (Score:1)
>So my point is: if RISC needs more instructions to do the same work, does it require a higher clock frequency to
>achieve similar performance to a CISC chip ? Since clock speeds do not scale to infinity, this implies that a
>RISC chip will hit the frequency wall sooner, thus limiting its maximum speed.
Learn about things like pipelining and forwarding, out-of-order execution, etc.
A non-trivial amount of work is involved in decoding machine instructions; this is far simpler on a RISC machine. Register
Re: (Score:2)
Since clock speeds do not scale to infinity, this implies that a RISC chip will hit the frequency wall sooner, thus limiting its maximum speed.
This is why you have concurrency. I.e., long pipelines, multiple arithmetic units, etc. You should never use clock speed to figure out how fast a CPU is (no matter what the marketing says).
Even in the CISC world the same thing holds. They're limited by an internal clock speed. Just because they may have an instruction that does "indexed load, indexed load, add, indexed store" does not mean they can do it in a single clock cycle. Internally they need parallelism to get the most speed out of the silicon
Re: (Score:2)
RISC-like vs. CISC-like (Score:2)
I suspect that in the near future, the more interesting fi
DSPs keep gaining ground in the mobile world... (Score:1)
BUT the DSPs / CISC chips won't ever REPLACE the General Purpose Processors / RISC chips. Most cellphones today are actually a combo platform with a General-Purpose-Processor that does
Re: (Score:1)
"Lost cost General Purpose Process vs low cost DSP" , http://www.bdti.com/articles/evolution/sld024.htm [bdti.com] , GPP gets score 7 vs DSP gets score of 10.
-----
FYI, BDTI is a company that specializes in designing benchmarks for GPP and DSP chips. Their benchmarks are w
Re: (Score:2)
Fixed- vs. variable-length instruction encoding (Score:2)
x86, which is the classic CISC, is also a variable-length ISA. That means certain instructions take just a single byte to encode, compared with a fixed 4 bytes on the most common RISCs. This can be a factor in instruction cache size/effectiveness. Fewer bytes for instructions == more instructions fit in the ICache == ICache is more effective. I don't have any numbers, but I would expect the average instruction length on CISC to be in the 10s of % smaller than RISC. That means either greater performance
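A toy, back-of-the-envelope version of that argument (the x86 byte counts below are invented for the sake of the example, not measured from real code):

#include <stdio.h>

int main(void) {
    /* Hypothetical lengths, in bytes, of five x86 instructions;
       a fixed-width RISC spends exactly 4 bytes on each. */
    int x86_len[] = {1, 2, 3, 5, 2};
    int n = sizeof x86_len / sizeof x86_len[0];
    int total = 0;
    for (int i = 0; i < n; i++)
        total += x86_len[i];
    printf("x86:  %2d bytes for %d instructions\n", total, n);
    printf("RISC: %2d bytes for %d instructions\n", 4 * n, n);
    return 0;
}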
Spotting those who RTFA'd (Score:5, Insightful)
Read the article. Jon Stokes makes that point: but he also makes the point that in embedded processors, it does matter, because the transistor budget is much, much smaller than for a modern desktop CPU. It may come to pass in a few generations of die feature shrinking that we arrive back at the current situation of ISAs becoming irrelevant, but for the moment in the embedded space it does matter that you need to give up a few million transistors to buffering, chopping up and reissuing instructions compared to just reading and running them.
Remember, this is Jon Stokes we're talking about: he's the guy that taught most Slashdotters what they know about CISC and RISC as it is.
The concept of risc never made much sense to me. (Score:3, Interesting)
In other words, you are designing your instruction set to your hardware.
Now, assuming that you are going to have close to infinite investment into speeding up the CPU, it seems that if you are going to fix an instruction set across that development time, you want the instruction set to be the smallest and most powerful you can get.
That way, for the same cycle, instead of executing one simple instruction you are executing one powerful one (that does, say, 5x more than the simple one).
Now at first the more powerful one will take more time than the simple one, but as the silicon becomes more powerful, the hardware designers are going to come up with a way to make it take only 2x as long as the simple one. Then less.
I guess I mean that you will get more relative benefit tweaking the performance of a hard instruction than an easy one.
Also, at some point the Memory to CPU channel will be the limit.
I'd kinda like to see Intel take on an instruction set designed for the compiler rather than the CPU (like Java Bytecode). Bytecode tends to be MUCH smaller--and a quad-core system that directly executes bytecode, once fully optimized, should blow away anything we have now in terms of overall speed.
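For readers who haven't looked at how compact a stack-oriented bytecode can be, here's a minimal interpreter sketch in C (the opcodes are invented for illustration; real Java bytecode is far richer). Each instruction is one byte plus optional immediates, which is where the density advantage comes from.

#include <stdio.h>
#include <stdint.h>

/* Invented one-byte opcodes for a toy stack machine. */
enum { OP_HALT = 0x00, OP_PUSH = 0x01, OP_ADD = 0x02,
       OP_MUL = 0x03, OP_PRINT = 0x04 };

int main(void) {
    /* Program: push 6, push 7, multiply, print. Seven bytes total. */
    uint8_t code[] = { OP_PUSH, 6, OP_PUSH, 7, OP_MUL, OP_PRINT, OP_HALT };
    int32_t stack[64];
    int sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return 0;
        }
    }
}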
Re:The concept of risc never made much sense to me (Score:2)
But - turns out bytecode directly executed is in the same ballpark as "regular" instructions. Doesn't really gain much. (sorry, can't cite)
The reason(s)?
- programming languages following instruction conventions (example: C). C is simple, and follows a "PDP-11" model
- programming languages not expressive enough, unless they are profiled (examples: C, Java). No way to mark code "rarely used", No way to indicate parallelism.
The ol
Re: (Score:2)
Not sure what you mean. If you are saying that a CPU running bytecode tends to run your code as fast as a CPU running assembly, that's what I'm assuming. But the CPUs running bytecode haven't been optimized as much as the intel CPUs..
Also, we could put more powerful instructions into the bytecode if they tend to execute too quickly. Stuff like complete control ov
Re: (Score:2)
As to the idea of higher order functions -- it's been tried. As an example, the Intel architecture has "task switch" segments. Not used, because it turns out that the incomplete case is faster. Or, examine
gratuitous B&B reference (Score:2)
Re: (Score:2)
Having an iPhone but being stuck on ATT's network is like having a supermodel girlfriend that refuses to put out.
I actually paid the $170 to cancel my att contract and get back onto verizon, ATT was that terrible for me.