Intel Medfield SoC Specs Leak 164
MrSeb writes "Specifications and benchmarks of Intel's 32nm Medfield platform — Chipzilla's latest iteration of Atom and its first real system-on-a-chip aimed at smartphones and tablets — have leaked. The tablet reference platform is reported to be a 1.6GHz x86 CPU coupled with 1GB of DDR2 RAM, Wi-Fi, Bluetooth, and FM radios, and an as-yet-unknown GPU. The smartphone version will probably be clocked a bit slower, but otherwise the same. Benchmark-wise, Medfield seems to beat the ARM competition from Samsung, Qualcomm, and Nvidia — and, perhaps most importantly, it's also in line with ARM power consumption, with idle power draw of around 2 watts and around 3W under load."
One benchmark (Score:2, Insightful)
It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".
Nothing fishy about that at all.
Re:One benchmark (Score:5, Informative)
It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".
Nothing fishy about that at all.
Quote Vrzone:
Intel Medfield 1.6GHz currently scores around 10,500 in Caffeinemark 3. For comparison, NVIDIA Tegra 2 scores around 7500, while Qualcomm Snapdragon MSM8260 scores 8000. Samsung Exynos is the current king of the crop, scoring 8500. True - we're waiting for the first Tegra 3 results to come through.
But the same paragraph says
Benchmark data is useless in the absence of real-world, hands-on testing,
If the performance figures are realistic, this is one fast processor, and it appears to be a single-core chip (or at least I saw nothing to the contrary). That's impressive.
Single cores can get busy handling games or complex screen movements, leading to a laggy UI. If they put a good, strong GPU on this thing you might never see any lag.
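For anyone who wants the margins spelled out, here is a trivial sketch (C, purely illustrative) that just restates the CaffeineMark figures quoted above; keep in mind these are single leaked data points:

```c
#include <stdio.h>

/* Relative margins implied by the leaked CaffeineMark 3 scores quoted above.
 * One leaked benchmark, so treat the percentages as rough at best. */
int main(void)
{
    const double medfield = 10500;
    const struct { const char *name; double score; } arm[] = {
        { "Exynos",             8500 },
        { "Snapdragon MSM8260", 8000 },
        { "Tegra 2",            7500 },
    };
    for (int i = 0; i < 3; i++)
        printf("Medfield vs %s: %.0f%% faster\n",
               arm[i].name, 100.0 * (medfield / arm[i].score - 1.0));
    return 0;   /* prints roughly 24%, 31% and 40% */
}
```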
Re: (Score:2, Redundant)
Sure, it's great, and compares well to this generation of processors, which are all built on 45/40nm processes (the Exynos 4210 is 45nm, not 32nm as VR-Zone incorrectly states).
But what about the next-gen chips that will be built on 32/28nm processes?
And how will it compare to quad-core A9 processors (Tegra 3), higher-clocked dual-core A9 processors (Exynos 4212), or A15-based designs (Exynos 5250 and Krait/Snapdragon S4)?
Re: (Score:3)
At the stated benchmark scores, Medfield IS THE NEXT-GEN chip.
And it's single-core. When they take it dual-core, the others will still be trying hard to catch up.
Re: (Score:2)
Great. How about comparing it to a NEXT-GEN chip then? Not to mention power consumption, which is its Achilles heel: it still can't compare to THE LAST-GEN ARM CHIPS.
Re: (Score:2)
Actually go read the story. It specifically states that the power consumption is pretty close to the ARM chips.
Re:One benchmark (Score:5, Informative)
Yeah... no.
vr-zone [vr-zone.com]
As it stands right now, the prototype version is consuming 2.6W in idle with the target being 2W, while the worst case scenarios are video playback: watching the video at 720p in Adobe Flash format will consume 3.6W, while the target for shipping parts should be 1W less (2.6W)
extremeTech [extremetech.com]
The final chips, which ship early next year, aim to cut this down to 2W and 2.6W respectively. This is in-line with the latest ARM chips, though again, we’ll need to get our hands on some production silicon to see how Medfield really performs.
Re:One benchmark (Score:5, Insightful)
Yeah... no.
vr-zone [vr-zone.com]
As it stands right now, the prototype version is consuming 2.6W in idle with the target being 2W, while the worst case scenarios are video playback: watching the video at 720p in Adobe Flash format will consume 3.6W, while the target for shipping parts should be 1W less (2.6W)
extremeTech [extremetech.com]
The final chips, which ship early next year, aim to cut this down to 2W and 2.6W respectively. This is in-line with the latest ARM chips, though again, we’ll need to get our hands on some production silicon to see how Medfield really performs.
And which ARM SoCs idle at 2W? That's at least an order of magnitude greater than any ARM SoC - those typically idle at a few tens or hundreds of milliwatts. ARM's big.LITTLE architectures will bring that down even further.
So, Medfield may be competitive on speed and TDP at full load, but if you are a mobile device maker, would you care? You would probably be more interested in eking out more uptime from your tiny battery.
Re: (Score:2)
It would be usable for tablets and media-consumption devices, and devices which can hibernate quickly.
Of course, there's this one nagging thing about using them for phones, which is... well, fuck, they'd still need an ARM chip or two in there for the radios.
Re: (Score:2)
I was about to say the same thing. A typical high-end smartphone has a 1700mAh battery; at 3.7V that's about 6.3Wh. A phone that idles at 2W will last barely three hours on a full charge, and that is assuming that the screen is off. I don't have tablet battery specs to hand but they won't be that much better.
Android has a handy built-in feature where you can check power consumption, and there are apps on the Market that can give you the real-time current consumption. At idle
Re:One benchmark (Score:5, Interesting)
According to what I could dig up (memory, and corroboration here [blogspot.com]), Snapdragons use about 500mW at idle. That's one quarter to one sixth the power consumption of Intel's offering.
Doing some research, it looks like Tegra 3s use about 0.5W per core as well. Again, Intel is pretty far back if they're throwing out a single core and hitting 2-3 watts.
Re: (Score:2)
Re:One benchmark (Score:4, Informative)
My mistake -- those numbers are at full load, not idle. That certainly doesn't help Intel at all.
Re:One benchmark (Score:5, Insightful)
I did read the story - but did you? Its idle power draw stands at 2.6W. A 1700mAh battery (typical in a cell phone) @ 3.6V = 6.12Wh. So you'll get around 2.5 hrs of uptime under idle conditions, assuming the battery is new. Good luck trying to charge that monster every 2 hrs!
Who cares about performance when your phone will be dead before making a single call? Not much better in tablets either!
So, what is this chip competing against? Other laptop chips from Intel?
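For reference, the battery arithmetic in this subthread is easy to redo; a minimal sketch in C, using only the 1700mAh / 3.6V figures and the 2.6W measured / 2W target idle numbers quoted here (real idle draw depends on the whole platform, not just the SoC):

```c
#include <stdio.h>

/* Hours of runtime for a battery of the given capacity at a constant draw. */
static double runtime_hours(double capacity_mah, double volts, double draw_w)
{
    double energy_wh = capacity_mah / 1000.0 * volts;
    return energy_wh / draw_w;
}

int main(void)
{
    printf("at 2.6W: %.2f h\n", runtime_hours(1700, 3.6, 2.6)); /* ~2.35 h */
    printf("at 2.0W: %.2f h\n", runtime_hours(1700, 3.6, 2.0)); /* ~3.06 h */
    return 0;
}
```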
Re: (Score:2)
Re: (Score:2)
whoosh (Score:5, Insightful)
teh37737one's point, if I may, was that this 'leak' was actually a 'plant': a PR move by Intel to get people posting ridiculous speculative nonsense, like, exactly the stuff you posted in your comment.
"if this is realistic, intel has an awesome CPU" etc etc etc.
Does anyone care if it's realistic? Intel sure doesn't; it just wants people to speculate that it might be realistic, and then talk about Intel, and how awesome Intel is.
But of course, it might be a load of crap... when the actual numbers come out, who knows what they will say? And when real programs hit the thing, who knows what it will do?
That's why Intel is 'leaking' it. On purpose. So they can have 'plausible deniability'. They can churn the rumor mill, get their product mentioned in the 24-hour ADHD cycle of tech news, get people posting on Slashdot, etc., but Intel itself never has to sully its good name by engaging in outright pushing of vaporware.
If only the guys at Duke Nukem had been smart enough to 'leak' stuff 'anonymously' to the press, instead of giving out press releases...
Of course, another way to look at it is this: It's yet another example of the corporate philosophical suite that is drowning our civilization in garbage and awful values. Never say anything directly, never take responsibility for your words or actions, never be straight with people, and hide everything you are doing in layers and layers of techno jargon, babble, and nonsense.
Re:whoosh (Score:4, Insightful)
Does anyone care if it's realistic? Intel sure doesn't
Intel will care if the leaks create unrealistic expectations that their product can't meet. The result could be consumer rejection of an otherwise respectable product, because the public had been (mis)led to expect more than the product could actually deliver. (see: Itanium as replacement for x86)
So the "secret Intel propaganda strategy" only works if Intel actually has a reasonable chance of living up to their own unofficial hype. And based on their recent track record, they probably do.
Re:whoosh (Score:5, Informative)
Recent track record... Yeah, sure...
http://www.pcper.com/reviews/Graphics-Cards/Larrabee-canceled-Intel-concedes-discrete-graphics-NVIDIA-AMDfor-now [pcper.com]
There are a few others like this one. This includes the GMA stuff, where they claimed the Xy000 series of GMAs were capable of playing games, etc. They're better than their last passes at IGPs, but compared to AMD's lineup in that same space, they're well below par. Chipzilla rolls out stuff like this all the time. Been doing it for years now.
Larrabee.
Sandy Bridge (at its beginnings...).
GMA X-series.
Pentium 4's NetBurst.
iAPX 432.
There's a past track record that implies your faith in this is a bit misplaced at this time.
Re:whoosh...you missed... (Score:2)
Re:whoosh (Score:4, Informative)
To Intel, perception is everything, reality is nothing -- as proven by their continuing predominance on the desktop despite AMD's frequent performance-per-dollar and performance-per-watt lead, and occasional absolute performance lead.
Ah, yes. No-one ever buys Intel chips because they're the best option, poor old AMD keep building the best x86 chips on the planet but stoopid consumers keep buying Intel anyway.
Back in the real world, at the time when AMD were the best choice you could hardly find anyone at all knowledgeable who was recommending Intel Pentium-4 space-heaters, and now that Intel is the best choice for desktop systems the only people recommending AMD CPUs are the dedicated fanboys. And in the low-power space, no-one uses Intel x86 CPUs because that would be absurd; even a 2W CPU can't compete against ARM.
Re: (Score:2)
Intel kept AMD down by signing huge discounted contracts with OEM manufacturers. That's the reality of how Intel won. The consumer had basically a plethora of names and configurations when the PC came around as a major player, but 90% of on-the-shelf boxes were Intel Inside. AMD was never able to sign a huge contract and thus was kept out of the office and off shelf space. It wasn't like Intel had vastly better chips until the i-series, and even now under the $220 range they aren't really that much better, special
Re: (Score:2)
You appear to be confused. If you take the time to check the stores and compare AMD's offerings to Intel's in terms of processing power and cost, you will learn that AMD's offerings are consistently the best ones in the market. More precisely, you will learn that:
Then, once we factor add
Re: (Score:2)
AMD's processors, for the same price, pack more power per core than Intel's
Yeah, 10+ watts per core more than Sandy Bridge at full load!
Re: (Score:2)
What does the number of cores have to do with a "good, strong" GPU, or a laggy UI? Lag is usually interrupt-based (think network or disk access), or the result of software, when operations are poorly ordered and/or uncached lookups are performed often... therefore your benchmark has a rabbit with a pancake on its head -
Re: (Score:2)
If you have a good GPU, you can offload a lot of processing that might otherwise have to be done by the CPU. Shift enough work, and you can do with a single core what others might do with a dual. That's all I was trying to point out.
Re:One benchmark (Score:4, Informative)
Re: (Score:2)
Re: (Score:2)
Single cores can get busy handling games or complex screen movements, leading to a laggy UI.
I know, right? The entire computer industry was plagued with these "laggy UI"s until multi-core processors were invented and saved us. If only someone had thought out a way to run multiple paths, or maybe call them "threads", of execution on a single CPU.
Oh well, too late for that.
Re: (Score:2)
It beats the current crop of dual core ARM processors (Exynos, snapdragon s3 and Tegra 2) in one benchmark that "leaked".
Nothing fishy about that at all.
So it beats (maybe) the ARMs of the moment. But what about the ARMs when Medfield actually ships? And the quad-core ARMs now?
Re: (Score:3)
Still need to wait for more figures... (Score:2)
The Atoms have always had reasonable core power consumption... but it's the external chipsets that ruin the system power consumption figure. I'll be curious to see what the total system consumption is going to be with these new ones; maybe it's time to look towards a nice replacement for my Asus EeeBox B202s for the desktop.
Re: (Score:2)
Re: (Score:3)
For sure, yes, it's a SoC, but I'm still going to wait for a complete "on the shelf" system to make an appearance before getting my hopes too high. Leaked releases are about as useful as "New solar cell technology yields 50% more efficiency" announcements.
What is interesting is that they only mention the elevated power consumption in relation to video playback (720p), which is something that'd likely be handed off to a dedicated section of silicon, not something done in the general-purpose CPU core. Hopefu
Re: (Score:2)
It's hard to imagine how Intel could not win this, now or soon. They have the best chip designers and process.
Re: (Score:2)
Well, the limiting factor is quite certainly backwards compatibility.
The architecture itself very possibly cannot compete with ARM on low power... no matter what the "best chip designers and process" can bring to the table.
I think it's getting to be time to finally retire x86. It'll be hell to bring a new architecture to market... but what's the alternative? Microsoft is dying. Apple is starting to make their own chips.
They probably do have the best people and starting fresh they could very likely do amazin
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:3)
The first x86 processor, the 8086, only had 29,000 transistors total, whereas this new chip uses over 34,000 times that many (a billion) just for DRAM, so how much complexity can x86 really be adding?
The 8086 was a 16-bit processor that could only address 1 MB of RAM (split up into 64k segments) with no support for virtual memory, didn't have any floating point hardware let alone stuff like SSE, and took an awfully large number of clock cycles to execute each instruction by modern standards. If you want something capable of actually running modern applications, you're looking at a lot more complexity.
Re: (Score:2)
Re: (Score:2)
The Atoms have always had reasonable core power consumption... but it's the external chipsets that ruin the system power consumption figure. I'll be curious to see what the total system consumption is going to be with these new ones; maybe it's time to look towards a nice replacement for my Asus EeeBox B202s for the desktop.
I thought that problem was only with the early Atoms paired with the 945GSE, and that the later Pine Trail included the lower-power NM10 chipset.
2 watts idle? (Score:5, Funny)
Re: (Score:2)
I think batteries are rated in mAh, not mWh. For example, my netbook has a 6600mAh battery that outputs 11.1V. I think that's basically 73.26Wh, or 36 hours of idle time at 2W. I don't have any smartphone to compare, nor do I know the actual idle usage of the Atom CPU in my netbook, or of the other components in it.
Re: (Score:2)
I guess I figured smartphones might have a bigger battery than my RAZR's 3.7V 900mAh.
1200-2400mAh for something that does all kinds of applications and web access seems pretty small if 900mAh is what they give phones to people who don't even care about texting.
2W idle power consumption! (Score:2, Interesting)
That just doesn't cut it. Based on that, I'd assume the mobile version of the chip to consume at least 1W at idle loads. That _still_ doesn't cut it.
Re: (Score:2)
Re: (Score:3)
Re:2W idle power consumption! (Score:5, Insightful)
Bingo. My ageing Nokia, while lacking in horsepower, has excellent battery life... It has a 600MHz ARM, and a 3.2Wh battery. It manages to idle for a week at least; I'm sure it's hit 10 days before, but let's say 7, to be safe.
3.2Wh / (7 * 24 h) ≈ 20mW idle. Two fucking orders of magnitude better than their *target* (not to mention this includes the entire phone, not just the core, in real life).
I presume the more powerful Android rigs still keep it within 100mW for the whole phone, idling; that would give you roughly two days idle on a decent-sized phone battery (5Wh). That's still more than an order of magnitude difference.
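A quick sanity check of these idle figures, using only the numbers in this comment (3.2Wh battery and a week of standby for the Nokia, a 5Wh battery at an assumed 100mW or at the 2W target); helper names are just for illustration:

```c
#include <stdio.h>

/* Average idle draw implied by a battery size and observed standby time. */
static double idle_watts(double battery_wh, double standby_days_observed)
{
    return battery_wh / (standby_days_observed * 24.0);
}

/* Standby time implied by a battery size and an assumed idle draw. */
static double standby_days(double battery_wh, double idle_draw_w)
{
    return battery_wh / idle_draw_w / 24.0;
}

int main(void)
{
    printf("%.3f W\n",    idle_watts(3.2, 7));      /* ~0.019 W: the old Nokia     */
    printf("%.1f days\n", standby_days(5.0, 0.1));  /* ~2.1 days at 100mW idle     */
    printf("%.2f days\n", standby_days(5.0, 2.0));  /* ~0.10 days (~2.5 h) at 2W   */
    return 0;
}
```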
Looks cool (Score:2)
beat ARM on what, 45nm? (Score:4, Interesting)
So here we have Intel putting their low-cost product on their high-cost process and claiming a victory? I don't buy it, but since Intel is going to be selling these things at deep discounts, I might buy a product or two. I don't think in the long run they can continue this game but it's fun to see them attempting it.
LoB
It's not Intel's high cost process (Score:5, Informative)
These days 32nm is their main process. They still use 45nm, but not for a ton of stuff; almost all their chips have moved to 32nm. Heck, they have 22nm online now, and chips for it will be coming out rather soon (full retail availability in April).
One of Intel's advantages is that they invest massive R&D in fabrication and thus are usually a node ahead of everyone else. They don't outsource fabbing their chips, and they pour billions into keeping on top of new fabrication tech.
So while 32nm is new to many places (or in some cases 28nm; places like TSMC skipped the 32nm node and instead did the 28nm half-node), Intel has been doing 32nm for almost two years now (the first commercial chips were out in January 2010).
Re: (Score:3)
So here we have Intel putting their low-cost product on their high-cost process and claiming a victory?
Developing the process is ungodly expensive; pushing out chips is not. Why wouldn't Intel use their state-of-the-art process? It's not like it would be cheaper to produce on 40/45nm; far from it.
I don't think in the long run they can continue this game but it's fun to see them attempting it.
Well, it's the game Intel's been running since the 1980s and has kept AMD a distant second even when their chip designers have been smoking crack. Smaller process = more dies/wafer and so higher margins and more money to funnel back into R&D.
Do I believe everything Intel says? Hell no. But their tick-tocks have
Re: (Score:2)
High-cost process? The more you shrink the die, the cheaper it gets to produce. Once the R&D has been done and the fabs have been built, you want to move everything to the smaller process if the yields are okay. Smaller dies mean that you get more chips per wafer. That means lower costs.
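For what it's worth, the "more chips per wafer" point can be ballparked with the standard dies-per-wafer approximation; the die areas below are made-up examples, not real Medfield or ARM figures, and the point is only how a shrink multiplies the die count:

```c
#include <math.h>
#include <stdio.h>

/* Common approximation: dies ~= pi*(d/2)^2/A - pi*d/sqrt(2*A)
 * for wafer diameter d (mm) and die area A (mm^2). */
static int dies_per_wafer(double wafer_diameter_mm, double die_area_mm2)
{
    const double pi = 3.14159265358979;
    double d = wafer_diameter_mm, a = die_area_mm2;
    return (int)(pi * (d / 2) * (d / 2) / a - pi * d / sqrt(2 * a));
}

int main(void)
{
    printf("80 mm^2 die: %d dies\n", dies_per_wafer(300, 80)); /* ~809  */
    printf("40 mm^2 die: %d dies\n", dies_per_wafer(300, 40)); /* ~1661 */
    return 0;
}
```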
Re: (Score:2)
Re: (Score:2)
Competition is always good.
LoB
no AM? (Score:3)
Think of all those amplitudes not being modulated.
This is a terrible, terrible loss for America.
apples and oranges? (Score:4, Interesting)
How can you compare a 1.6GHz, presumably single-core, chip against dual-core CPUs on a single-threaded benchmark?
I just compared my laptop, which is a 2.2GHz dual-core, with my desktop, a 3GHz single-core. The laptop gets 16,000, the desktop gets 24,000. The laptop was at 50% CPU, the desktop was at 100%.
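Normalizing the poster's own numbers per GHz makes the point; a small sketch in C, assuming (as the 50%/100% CPU-usage figures suggest) that the benchmark is effectively single-threaded:

```c
#include <stdio.h>

int main(void)
{
    double laptop_score = 16000, laptop_ghz = 2.2;   /* dual-core, one core busy  */
    double desktop_score = 24000, desktop_ghz = 3.0; /* single-core, fully busy   */
    printf("laptop:  %.0f points/GHz\n", laptop_score / laptop_ghz);   /* ~7273 */
    printf("desktop: %.0f points/GHz\n", desktop_score / desktop_ghz); /*  8000 */
    /* Similar per-GHz numbers: one busy core at its clock speed explains both. */
    return 0;
}
```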
Just let x86 die, please. (Score:4, Interesting)
It's bloated. It had its time. I LOVED writing in assembly on my 80286; the rich instruction set made quick work of even the most complex of paddle ball games...
However, that was when I was still a child. Now I'm grown, it's time to put away childish things. It's time to actually be platform independent and cross platform, like all of my C software is. It's time to get even better performance and power consumption with a leaner or newer instruction set while shrinking the die.
Please, just let all those legacy instructions' microcode go. You can't write truly cross-platform code in assembly. It's time to INNOVATE AGAIN. Perhaps create an instruction set that lets you get more out of your MFG process; Maybe one that's cross platform (like ARM is). Let software emulation provide legacy support. Let's get software vendors used to releasing source code, or compiling for multiple architectures and platforms. Let's look at THAT problem and solve it with perhaps a new type of linker that turns object code into the proper machine code for the system during installation (sort of like how Android does). DO ANYTHING other than the same old: Same Inefficient Design made more efficient via shrinking.
Intel, it's time to let x86 go.
The past stays with us (Score:2)
We still use Imperial units in the US. And do you remember the shortage of competent Cobol programmers back when Y2K was the big worry?
The world does indeed move on, but the past stays with us for a long while. A low power x86 SOC is still a useful and a wonderful thing.
My first thought when I read the article was "Cool! You'll be able to get a notepad to run WINE and native Windows XP now!" I can see TONS of uses for this, even with the lousy power specs. Industrial/business types don't like to le
Re: (Score:2)
Hahahaha... oh wait, you're serious. OK, so what high-performance, low-cost, and power-efficient processor should we all migrate to then? And will you pay to have all my software ported over to this new ISA?
Re:Just let x86 die, please. (Score:5, Interesting)
I scoured your post for one actual reason why you think x86 is an inferior ISA, but I couldn't find any. I'll give you a couple of reasons why it is superior to, or at least on par with, any given RISC ISA, on its own merits, not taking into account any backwards-compatibility issues:
x86 should be able to compete quite well with any RISC ISA on its own merits today.
Re: (Score:2, Informative)
1. Having variable-length instructions complicates instruction decoding, which costs die space and cycles (once for actual decoding and once for instructions spanning fetch boundaries). Also, several processor architectures have 16-bit instructions (ARM, SH, MIPS, TI C6x off the top of my head) while still having access to 16 general-purpose registers, just like x86-64 with its up-to-16-byte instructions.
2. Load-op instructions and many others are split up internally into smaller micro-ops. They are about as useful as assembler macros. L
Re: (Score:2)
1. Having variable-length instructions complicates instruction decoding, which costs die space and cycles (once for actual decoding and once for instructions spanning fetch boundaries). Also, several processor architectures have 16-bit instructions (ARM, SH, MIPS, TI C6x off the top of my head) while still having access to 16 general-purpose registers, just like x86-64 with its up-to-16-byte instructions.
Yes, those are the standard arguments against variable-length instructions. But both Intel and AMD use extra marker bits to mark the instruction boundaries, which removes the extra-cycles penalty. I'd argue that the extra area for the marker bits is *still* smaller than wasting area on non-compressed instructions. With the cycle penalty gone, now you're just left with a die-size argument for the initial decode hardware and the cross-fetch-boundary hardware. I'll argue that that area cost is absolutely
Re:Just let x86 die, please. (Score:4, Interesting)
Indeed. And the ARM ISA isn't even RISC anyway. In fact, which ARM ISA are we even talking about here? Thumb, Thumb2, ThumbEE, Jazelle or the 32-bit ISA? And which extensions, I wonder? NEON, maybe? Or one of the two different sorts of FPU? That's already a significantly complex instruction decoder. The x86 microcode-for-uncommon-instructions approach is probably better.
Whenever this topic comes up, the discussion is immediately flooded with ARM fanboys insisting that x86 can never compete for magical reasons that don't stand up to sensible analysis. And as Intel approaches ARM's level of power consumption, as they inevitably must (for there is no magic in ARM and there is nothing physically preventing parity), what we hear is denial: the insistence that Intel is playing dirty tricks.
At least, post-OnLive, nobody is claiming that there is no demand for x86 applications on mobile devices. I suppose the "ARM = magic" power claims will have a similar lifetime, and one day will look as silly as claims that Windows XP would be a failure because everyone would be using Linux by 2005. Hope is a good thing, but this is just foolishness.
Re: (Score:3)
I scoured your post for one actual reason why you think x86 is an inferior ISA, but I couldn't find any. I'll give you a couple of reasons why it is superior to, or at least on par with, any given RISC ISA, on its own merits, not taking into account any backwards-compatibility issues:
The latter pretty much amounts to "code compression"; maybe assembler-language programmers would also find it convenient, but compilers probably don't really care much, except for a little extra register pressure.
x86 has some instructions that use one of the GPRs - ESP/RSP - specially. RISC processors have calling conventions that use one of the GPRs specially. What exactly are the differences here?
Re: (Score:2)
Modern x86 processors decode instructions before they are cached, i.e. the ICache holds fixed-width instructions like a RISC processor. I'll give you the storage benefit on disk and in RAM but, as the meme goes, memory is now very cheap so the compression is unnecessary.
You are talking about a micro-op cache, and not all x86 processors have one. Sandy Bridge does, but it *also* has a regular old ICache which holds raw instruction bytes. And I'm not talking about disk or RAM; I'm talking about the on-chip L1 SRAM. Bigger effective ICache == fewer cache misses == better performance.
This isn't a justification, just stating a feature. The all powerful "add eax, [mem_addr]" is a 2-clock instruction that executes as a "mov secret_reg, [mem_addr]; add eax, secret_reg" anyway. There is no benefit to this beyond the previously mentioned "compression". BTW, those directly-from-memory instructions are only important because the x86 has so few registers and, worst of all, those registers are not truly general purpose (look at mul, div, movsb).
Sandy Bridge and AMD processors since K7 have done it with one micro-op. With renamed registers it's not that difficult to do. And load-op is more useful than just for compensating for lack
Re: (Score:2)
P.S.: I know that I can have similar applications on Android or iPad... But I really, really do not like the idea of a "happy walled garden" with pseudo-applications ("apps" in HTML/JavaScript? WTF), sorry.
Re: (Score:2)
Tablets are the future apparently. People keep telling me that on here and calling me all sorts of names when I point out all the things that I want from a computer that tablets can't do, like run my software or upgrade the storage or RAM without having to buy a new tablet.
Re: (Score:2)
However, that was when I was still a child. Now I'm grown, it's time to put away childish things.
There is IBM Mainframe System 360 code from the '60s still running on current zEnterprise systems today. That code was probably written while you were still swimming around in your dad's balls (no offense intended; it's just an amusing expression).
This will also be the case with x86. It will stay around forever, because it has been around forever. Tautology intended.
However, supporting legacy stuff does not necessarily preclude innovation.
Re: (Score:2)
There is IBM Mainframe System 360 code from the '60s still running on current zEnterprise systems today.
...and the latest implementations of it use the same "translate some multi-step instructions into internal micro-ops" technique that a lot of x86 processors, dating back to the Pentium Pro, do [hotchips.org]. (They also use Alpha's "trap some instructions to processor-dependent code running in a special mode with access to some special internal registers" technique, only they call it "millicode" rather than "PALcode" - and they have some more instructions to trap.)
Re: (Score:2)
Intel has been translating legacy x86 instructions into RISC-like micro-operations for ages. It happens in almost every major CPU released in the last decade. So, technically, x86 has already been gone for a long time.
Re: (Score:2)
Perhaps create an instruction set that lets you get more out of your MFG process; Maybe one that's cross platform (like ARM is).
What do you mean when you say the ARM instruction set is "cross platform"?
Let's look at THAT problem and solve it with perhaps a new type of linker that turns object code into the proper machine code for the system during installation (sort of like how Android does).
Or how the IBM System/38 and successors do it (compilers generate a very CISCy high-level virtual instruction set; core OS code translates it to machine code when it's first run). ("How Android does" is, I think, pretty much "how Java does".)
Intel's process tech is the best (Score:2)
Performance per watt (Score:2)
So...how's the performance per watt if we compare these recent ARM and Intel offerings?
I was also thinking that the Atoms probably have more acceleration units of all sorts. I'm not sure how important those would be on a tablet or a phone though. Anyway, there was an interesting discussion [foldingforum.org] pondering if it would be possible to run Folding@Home on a phone. It ended by realizing that the ARMs wouldn't have the same kind of FPU power.
ARM is now using 28nm fab processing (Score:2)
Perhaps everyone is too stoned from Christmas but 28nm stamping is already approved with GlobalFoundries, TSMC and Samsung.
http://arm.com/about/newsroom/globalfoundries-and-arm-deliver-optimized-soc-solution-based-on-arm-cortex-a-series-processors.php
http://www.tsmc.com/tsmcdotcom/PRListingNewsArchivesAction.do?action=detail&newsid=6181&language=E
http://www.samsung.com/global/business/semiconductor/newsView.do?news_id=1254
Re: (Score:2)
1GB is fine for phones but just isn't much for tablets and netbooks.
From "640kb is enough for everybody" (okay, maybe he didn't say it) to "1GB is fine for phones" in 30 years. I so badly want to go back with a time machine and show them one, and after they're amazed and all that and ask what kind of important things we use it for I'll just say "well, mostly we use it to play Angry Birds".
Re: (Score:2)
http://en.wikipedia.org/wiki/List_of_Intel_codenames [wikipedia.org]
Most are named after bodies of water; they used to all be rivers in the Pacific Northwest of the US, but they ran out (and Intel moved some development out of Portland, OR).
Dubious (Score:2)
Intel will need to bend the laws of physics before their power-hungry chips can match the energy efficiency of ARM SoCs, but if anyone can do it, it'll be Intel. Intel took x86 to workstations and supercomputers, killing many RISC processors in the process. It'll be fun to see them pull it off again against ARM.
Re:Dubious (Score:5, Insightful)
Intel took x86 to workstations and supercomputers, killing many RISC processors in the process. It'll be fun to see them pull it off again against ARM.
No, it wouldn't. RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows, which every non-Mac user did. At the time, the desktop was king and made Intel lots and lots of money, which they used to beef up their server offerings. Now we are stuck with x86, with RISC being used only in "closed" architectures like smartphones, consoles and big-iron servers.
I like competition. I'd rather see ARM make gobs of money from designing chips that everyone can improve on than Intel make gobs and more gobs of money selling desktop, server and mobile chips that only they may design, produce and sell.
The final processor line that Intel makes will be the one they are producing when they become the only game in town.
Re: (Score:3, Informative)
Re:Dubious (Score:4, Insightful)
Re: (Score:2)
Uh, the GP most certainly IS talking about the NT line:
Every Windows release from the NT line
XP, Vista and Windows 7
I agree, reading comprehension fail. Windows NT DID run on MIPS and DEC Alpha, and I've heard rumours that every release since has had a supported RISC HAL, even though it was never available to the public. I don't have any actual information to indicate that's true, however.
Re: (Score:2)
The problem is that for all commercial developers and businesses, it's not simply a matter of whether that OS is available on that platform; it is whether it is profitable to pay staff to support that platform.
There were plenty of RISC chips in the 1990s: DEC Alpha, Sun SPARC, SGI/MIPS, each with their own UNIX-variant OS. Once Windows NT came onto the market and started cutting away at "UNIX prices", it wasn't possible for application developers to justify supporting niche markets with only a handful cu
Re:Dubious (Score:5, Interesting)
RISC isn't an instruction set - it's a design strategy.
RISC = reduced instruction set computing
CISC = complex instruction set computing
The idea of RISC (have a small, highly regular/orthogonal instruction set) goes back to the early days of computing, when chip design and compiler design weren't what they are today. The idea was that a small, simple instruction set would correspond to a simpler chip design that could be clocked faster than a CISC design, while at the same time being easier to generate optimized code for.
Nowadays advances in chip design and compiler code generation/optimization have essentially undone these benefits of RISC, but the remaining benefits are that RISC chips have small die sizes and hence low power requirements, high production yields and low cost, and these are the real reasons ARM is so successful, not the fact that the instruction set is "better".
Re: (Score:3, Interesting)
> RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows
Modern ARM processors aren't pure RISC processors. Most ARM code is written in Thumb-2, which is a variable-length instruction encoding just like x86. Back in the 90s, when transistor budgets were tiny, RISC was a win. When you only have a hundred thousand gates to play with, you're best off spending them on a simple pipelined execution unit. The downsides of RISC have always been the
Re: (Score:2)
Perhaps you could explain this, since you seem to be one of the most level-headed posters. I work on the software side of the IT house, with a fair bit of knowledge of the electron-pushing side but only insofar as it is necessary for software development, and I've never understood the RISC-is-better argument. From a purely software implementation standpoint, I've almost universally ended up with significantly faster execution on x86 (and particularly x64) than with anything ARM-based, even at similar
Re: (Score:2)
Before RISC, typical workstation-class CPUs had evolved complex instruction sets. There had not been enough focus on measuring the frequency of use of the various features of the instruction set. When people started analyzing this (i.e. Patterson and Hennessy), they showed that the vast majority of software spent its time executing code from a tiny fraction of the instruction set. Obviously if you make those common instructions execute much faster, you can afford to remove the rarely used instructions and make th
Re: (Score:2)
Thanks for that great write-up. That made a lot of sense and kind of confirmed what my initial guess was (that a lot of the issues early on were probably related to poor compiler behavior, or perhaps poor instruction choice, since the instructions were not getting well used). It also gave a lot of accessible insights from a software developer's perspective. I'd agree that Intel would have to get their fab costs down to compete with ARM, plus I get the feeling a lot of the manufacturers like being able t
Why RISC was advantageous (Score:2)
Once C became popular, however, this changed. A few of the reasons were
Re: (Score:2)
The downsides of RISC have always been the increased size of the program code and reduced freedom to access data efficiently (i.e. with unaligned accesses,
Which, as far as I know, at least one RISC supports in recent implementations (Power Architecture). SPARC doesn't have it, and I think ARM didn't have it at least at one point.
byte addressing
If you mean "as opposed to word addressing", as far as I know, all the major RISCs used in general-purpose computing are byte-addressed. Alpha was a little weird, at least until the BWX (Byte/Word eXtension) instructions were added, but the others had/have byte and 2-byte loads and stores.
and powerful address offset instructions
Do you mean "more complex addressing modes"
Re: (Score:2)
RISC, CISC & VLIW (Score:3)
Intel took x86 to workstations and supercomputers, killing many RISC processors in the process. It'll be fun to see them pull it off again against ARM.
No, it wouldn't. RISC is a superior instruction set. x86 only beat RISC because it was really the only game in town if you want to run Windows, which every non-Mac user did. At the time, the desktop was king and made Intel lots and lots of money, which they used to beef up their server offerings. Now we are stuck with x86, with RISC being used only in "closed" architectures like smartphones, consoles and big-iron servers.
I like competition. I'd rather see ARM make gobs of money from designing chips that everyone can improve on than Intel make gobs and more gobs of money selling desktop, server and mobile chips that only they may design, produce and sell.
The final processor line that Intel makes will be the one they are producing when they become the only game in town.
I fully agree. Not only is RISC superior to CISC, it's even turned out to be better than VLIW. After all, remember all the hype about VLIW when Intel & HP first started working on the Itanium? It turned out that the dynamic-analysis part of RISC CPUs that Itanium was going to move into the compiler, such as branch prediction, register renaming, etc., was just a small portion of the silicon, so not much was really saved in terms of real estate there. In the meantime, the much ballyhooed VLIW com
Re: (Score:2)
Even X-Scale was not such a success, was it?
You know X-Scale was ARMv5, right? Intel was not blazing new trails there. They got the StrongARM technology from DEC, and followed it up with X-Scale. It's now owned by Marvell, and they're used all over (see Blackberry Torch and the Kindle, for examples...).
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
Everyone says this, but really, CISC is more efficient. CISC code is more compact than RISC code, which helps cache hit rates. Additionally, the most used opcodes tend to be the shortest.
Actually, this is the everyone-says-it-but-it's-wrong thing. At least when the RISC in question is ARM. Most 32-bit ARM instructions are 16 bits in Thumb-2 (ARMv8 doesn't have a Thumb mode yet, so all 64-bit instructions are 32 bits). Even without using Thumb-2, I find I get about the same instruction density for both hand-written and compiler-generated assembly for ARM and x86, often with ARM code being 5-10% smaller.
For example, ARM code supports predicated instructions so a simple if statement can
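To illustrate the (truncated) point about predication, here is a generic C sketch of the kind of branchy code that ARM's conditional execution, or x86's cmov, lets a compiler turn into straight-line code; it is not code from either poster:

```c
#include <stdio.h>

/* A simple clamp: with predication (ARM) or cmov (x86), a compiler can emit
 * this as a compare followed by a conditionally executed move, with no branch
 * at all, instead of a compare-and-jump sequence. */
static int clamp_to_limit(int value, int limit)
{
    if (value > limit)   /* a classic candidate for if-conversion */
        value = limit;
    return value;
}

int main(void)
{
    printf("%d %d\n", clamp_to_limit(7, 5), clamp_to_limit(3, 5)); /* 5 3 */
    return 0;
}
```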
Re:Dubious (Score:5, Interesting)
Bloodthirsty bastard, aren't you? Killing off the competition is fun?
I haven't liked Intel very much since I read the first story of unethical business practices. Intel doesn't rank as highly on my shitlist as Microsoft, but they are on it.
Re: (Score:2)
Exactly what is happening on Android. Android's future is Windows Mobile 5's past already.
As a current user of Ice Cream Sandwich on a Motorola Xoom, you, sir, are quite wrong. What Android tablets needed was good software. That software is here.