Cellphones Intel Linux

Intel and LG Team Up For x86 Smartphone

gbjbaanb writes "I love stories about new smartphones; it shows the IT market is doing something different from the usual same-old desktop apps. Maybe one day we'll all be using super smartphones as our primary computing platforms. And so, here's Intel's offering: the LG GW990. Running a Moorestown CPU, which is said to give 'considerably' better energy efficiency than earlier Atom platforms, it runs Moblin, Intel's Linux distro. Quoting: 'In some respects, the GW990 — which has an impressive high-resolution 4.8-inch touchscreen display — seems more like a MID than a smartphone. It's possible that we won't see x86 phones with truly competitive all-day battery life until the emergence of Medfield, the Moorestown successor that is said to be coming in 2011. It is clear, however, that Intel aims to eventually compete squarely with ARM in the high-end smartphone market.'"
This discussion has been archived. No new comments can be posted.

  • Competition (Score:2, Interesting)

    by Anonymous Coward on Sunday January 10, 2010 @01:32PM (#30715356)

    Has anyone made a scatter plot of benchmark score vs watt, for a given benchmark and various x86 and ARM processors?
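
    Not that I know of, but it's easy to roll your own once you have the numbers. Below is a minimal matplotlib sketch of such a plot; the figures in it are made-up placeholders rather than real measurements, so substitute scores and power draws you've collected yourself for one fixed benchmark.

```python
# Sketch: benchmark score vs. power for a handful of CPUs.
# The numbers here are made-up placeholders -- replace them with your own
# measurements (same benchmark, same compiler settings) before reading
# anything into the plot.
import matplotlib.pyplot as plt

cpus = {
    # name: (watts, score, family) -- all values hypothetical
    "Atom N270":      (2.5, 100, "x86"),
    "Atom Z515":      (1.4,  60, "x86"),
    "Cortex-A8 600":  (0.3,  40, "ARM"),
    "Cortex-A9 1GHz": (0.5,  75, "ARM"),
}

for name, (watts, score, family) in cpus.items():
    marker = "o" if family == "x86" else "^"
    plt.scatter(watts, score, marker=marker)
    plt.annotate(name, (watts, score), textcoords="offset points", xytext=(5, 5))

plt.xlabel("Power (W)")
plt.ylabel("Benchmark score (arbitrary units)")
plt.title("Score vs. power, x86 and ARM (placeholder data)")
plt.show()
```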

  • Do Not Want (Score:4, Interesting)

    by marcansoft ( 727665 ) <hector@TOKYOmarcansoft.com minus city> on Sunday January 10, 2010 @01:39PM (#30715412) Homepage

    Here we have a platform where there is no reason whatsoever to have an ass-backwards-compatible architecture in order to run legacy Windows apps. There is zero reason to use x86 here other than marketing and Intel. Please go away, we're perfectly happy with a modern RISC architecture (ARM), thank you very much.

    Here's hoping that ARM will work its way up into the netbook market and beyond, instead of the other way around. We've been tortured by x86 long enough already.

  • by Hurricane78 ( 562437 ) <deleted&slashdot,org> on Sunday January 10, 2010 @01:54PM (#30715538)

    Well, that’s only your lack of imagination.

    Imagine a very powerful cell phone. With super-fast bluetooth. (Or wired bus if you prefer that.)
    Now imagine a normal screen, keyboard, mouse, and speakers/amplifier. All with bluetooth.
    There. If the speed and storage size are good, that’s all you usually need.

    Now imagine a dock where you put the phone in, to give it monstrous 3d hardware acceleration capabilities, or something else that needs a faster bus than bt can provide.
    Then you got games and professional use covered too.

    Finally one or multiple contact-lens displays, glasses, and a gesture glove reduced to some tiny ring or something. (There is something better, but I can’t talk about that right now.)

    I don’t see what’s missing there...

  • by Locutus ( 9039 ) on Sunday January 10, 2010 @01:55PM (#30715546)
    To top it off, Intel has to use their leading-edge fabs just to get these chips into the same ballpark as ARM. They just announced the 32nm Atoms along with the new i3, i5 and i7, all on the same process. But as you mentioned, they have to sell the Atom far, far cheaper than the iX CPUs to be competitive. IMO, the FTC should look into this to make sure they're not dumping. At least with mainstream PC CPUs, they charged high prices at first and then ramped the price down as the newer processes came online. With these Atoms, they can't charge what the chips cost to make and still be competitive.

    And these new phones will probably have a fan and require 2GB of memory so they can run Windows. lol. If they only talk about GNU/Linux then we'll know they're serious, but if they pull Microsoft in, you know it's a PR game, and like the netbook segment it'll run the prices up so high that few will want them.

    LoL
  • Re:Do Not Want (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Sunday January 10, 2010 @01:59PM (#30715564) Journal

    Simpler decoder. The instruction decoder is the one part of a CPU that you can't turn off while executing anything. An x86 decoder is much more complicated than, for example, an ARM decoder, so the minimum operating (i.e. not suspended) power consumption for the CPU is higher.

    An x86 chip has weird instructions for things like string manipulation that no compiler will ever emit, but which have to be supported by the decoder just in case. The usual advantage that x86 has over RISC chips is instruction density. Common instructions are shorter (actually, older instructions are shorter, for the most part, but old has quite a high correlation with common) and there are single instructions for things that are several RISC instructions, meaning that they can get away with smaller instruction caches than RISC chips.

    This doesn't apply to ARM. ARM instructions are incredibly dense. Most of them can be predicated on the condition flags, which means that you often don't need conditional branches for if statements in high-level languages. More importantly, there are Thumb and Thumb-2, compact encodings (16-bit, plus mixed 16/32-bit in Thumb-2) that suit a lot of ARM code and get very good cache density. Unlike x86, these are separate instruction sets. This means that the core can turn off the decoder hardware for the full ARM instruction set while in Thumb mode, and turn off the Thumb logic while in ARM mode. This gives you x86-like icache density and RISC-like decoder complexity, so you have the best of both worlds.
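
    One rough way to check the density argument yourself is to compile the same C file once for x86-64 and once for Thumb-2 and compare the .text sizes. The sketch below assumes gcc plus an arm-linux-gnueabi cross-toolchain on the path; adjust the triplet and flags to whatever toolchain you actually have, and treat the result as a ballpark figure only.

```python
# Sketch: compare .text size of the same C source built for x86-64 and Thumb-2.
# Assumes gcc, arm-linux-gnueabi-gcc and their 'size' tools are installed;
# swap in the cross-toolchain triplet you actually use.
import os
import subprocess
import tempfile

SRC = r"""
int fib(int n) { return n < 2 ? n : fib(n - 1) + fib(n - 2); }
int main(void) { return fib(10); }
"""

def text_size(cc, size_tool, extra_flags, src_path, obj_path):
    subprocess.run([cc, "-Os", *extra_flags, "-c", src_path, "-o", obj_path], check=True)
    out = subprocess.run([size_tool, obj_path], capture_output=True, text=True, check=True)
    # Berkeley 'size' output: header line, then "text data bss dec hex filename"
    return int(out.stdout.splitlines()[1].split()[0])

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "fib.c")
    with open(src, "w") as f:
        f.write(SRC)
    x86 = text_size("gcc", "size", [], src, os.path.join(tmp, "fib-x86.o"))
    thumb2 = text_size("arm-linux-gnueabi-gcc", "arm-linux-gnueabi-size",
                       ["-mthumb", "-march=armv7-a"], src, os.path.join(tmp, "fib-thumb2.o"))
    print(f".text size: x86-64 {x86} bytes, Thumb-2 {thumb2} bytes")
```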

  • by ColdWetDog ( 752185 ) on Sunday January 10, 2010 @02:01PM (#30715584) Homepage

    Oh, I sure hope not. Sounds like hell to me, and I'm an atheist!

    Perhaps just purgatory. But it could work. Carry your uberdevice in your pocket (lead-foil lined), use it with its native human interface devices when wandering around. Pop it into some sort of dock at work with a decent keyboard, mouse and screen. Remember to pick it up before you go home.

    Obviously this sort of thing raises a number of issues and problems, and for now the hardware in a smartphone just can't compete with a real computer for anything other than email / browsing / light apps. I'd love it at the hospital where I work - walk around the bedside entering data and looking things up, pop the thing into the dock at the nurses' station, look at an X-ray on a decent monitor, type in some notes, get up and walk around some more.

    Right now I have to scribble stuff on paper, walk over to a generic computer, log in to several different applications, gripe because Firefox isn't on this particular machine or doesn't have a utility that I like, actually do something useful, then log out of everything, rinse and repeat.

    So it might not be as bad as you envision it. Of course, this sort of thing requires significant multi-vendor coordination and standards, so I don't hold out much hope for it. A guy can dream ...

  • Re:Do Not Want (Score:5, Interesting)

    by TheRaven64 ( 641858 ) on Sunday January 10, 2010 @02:08PM (#30715628) Journal

    The x86 CISC instruction set is so convoluted and ancient that x86 CPUs spend a lot of die area (and power) dealing with it and the weird ways that extensions have been tacked on over time

    It's worth noting that how true this is depends a lot on the market that the chip is aimed at. An Atom and a Xeon both have approximately the same number of transistors dedicated to decoding instructions. In the Atom, it's a noticeable chunk of the total, both in terms of die area and power consumption. In the Xeon it's an insignificant amount.

    The x86 decoder was a big problem when comparing something like a 386 to a SPARC32: with the same transistor budget, the SPARC32 could devote a far higher percentage to execution units. Comparing a Core 2 to an UltraSPARC IV, it's not nearly as relevant. The percentage of the die dedicated to the decoder is pretty small on both, and the difference between using 1% of your transistor budget for the decoder and 2% is not significant, particularly when the more complex decoder lets you get away with a smaller instruction cache.

    When you scale things down to the size of an Atom or a Cortex A8, the difference becomes significant again. In 5-10 years, chips for mobile devices may well be in the same situation that desktop chips were a decade ago, and then x86 will be a minor handicap, rather than a crippling one, but even with a 32nm process the decoder is still a big (relative) drain on a mobile x86 chip.

    From what I've read, Intel doesn't have anything that comes close to the Cortex A9 (as seen in Tegra 2) or the Snapdragon in terms of performance per Watt.

  • Re:Do Not Want (Score:4, Interesting)

    by marcansoft ( 727665 ) <hector@TOKYOmarcansoft.com minus city> on Sunday January 10, 2010 @02:08PM (#30715636) Homepage

    Thumb is implemented as a layer on top of ARM (at least last I checked), so usually the ARM decoder will still be active while in Thumb mode. However, Thumb and Thumb-2 are essentially compressed versions of ARM, so the vast majority of the decoder can be shared. The combined decoder for a modern ARM CPU is still much simpler and better performing than the decoder for an x86 CPU.

    Another large advantage is that ARM programs by definition do not use things like self-modifying code without informing the CPU (i.e. issuing a dcache store and an icache invalidate). This means that ARM CPUs can be essentially Harvard architecture machines and they practically don't need any snooping logic for the caches. x86 CPUs always have to watch for insane things like an instruction modifying the instruction immediately after it in the pipeline, while that doesn't even work at all on ARM (you need the aforementioned flush/invalidate plus a write barrier and whatnot). Having CPUs impose these kinds of (reasonable) demands on the software is a very good thing for performance.

  • by r00t ( 33219 ) on Sunday January 10, 2010 @02:21PM (#30715736) Journal

    The BCD instructions are insignificant. They are nothing compared to stuff like vector floating point and crypto. Despite the waste, x86 instructions are still really compact compared to normal RISC instructions.

    A dirty little secret about RISC compilers is that they seldom use more than a few registers. No kidding. Disassemble a wide variety of things and you'll see (a quick way to do exactly that is sketched after this comment).

    Modern x86 gives you 16 integer registers, the same as ARM. Old x86 gives you 8, the same as ARM Thumb. If there is a difference worth mentioning, it's that x86 chips are often designed to dynamically map the architectural registers onto over 100 hidden implementation-specific registers. This can even be done for memory in some cases.

    In the end, it's about the implementation. Intel has the best foundries (best silicon). While optimizing x86 isn't easy, Intel has the money to throw lots of excellent engineers at the problem. In other words, a pig will fly if you provide enough thrust.
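
    The "disassemble a wide variety of things and you'll see" suggestion above is easy to automate. Here is a quick-and-dirty sketch that counts how often each general-purpose register name shows up in an objdump disassembly; it only counts operand text (no liveness analysis) and assumes GNU binutils for the target binary are installed.

```python
# Sketch: histogram of register names appearing in a disassembly.
# Crude by design -- it counts operand text, not actual register pressure --
# but it is enough to see how lopsided register usage tends to be.
import re
import subprocess
import sys
from collections import Counter

OBJDUMP = "objdump"   # e.g. "arm-linux-gnueabi-objdump" for an ARM binary

# Matches ARM r0-r15 and x86 ax/eax/rax-style names; deliberately approximate.
REG = re.compile(r"\br(?:[0-9]|1[0-5])\b|\b[re]?(?:ax|bx|cx|dx|si|di|sp|bp)\b")

def register_histogram(path):
    dis = subprocess.run([OBJDUMP, "-d", path],
                         capture_output=True, text=True, check=True).stdout
    counts = Counter()
    for line in dis.splitlines():
        parts = line.split("\t")
        if len(parts) >= 3:              # addr, raw bytes, mnemonic + operands
            counts.update(REG.findall(parts[2]))
    return counts

if __name__ == "__main__":
    for reg, n in register_histogram(sys.argv[1]).most_common():
        print(f"{reg:>5} {n}")
```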

  • by Anonymous Coward on Sunday January 10, 2010 @02:23PM (#30715746)

    There goes **everything** if I'm using the phone for everything. When I drop it down a pit toilet, I'm not getting it back.

    That's actually a good thing. The increased likelihood of losing a phone would push reliable backup solutions into the mainstream. Right now, backups are kind of like exercising and eating healthy -- everyone has a vague sense of it being important, but it's easy enough to put it off and worry about it later.

    If phones became the primary computing devices, then you'll see a lot more automated, network-aware backup solutions. Pick any combination of "backup over cellular data service", "backup over WiFi", "backup to a cloud-based service like Mozy or Carbonite", and "backup to network-attached storage at your home or office". Those options are already somewhat viable now, and it'd only get easier in the future.

    So given a future where phones are a primary platform, the average bad case for dropping it in a pit toilet would be the loss of everything you did since leaving home that morning. And if you're in a circumstance where losing the phone is a lot more likely or where even half a day's work is too valuable to lose, you'd probably have it backing up over the cell network on a regular basis.
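
    For what it's worth, the "back up to network-attached storage whenever you're on the home WiFi" option needs nothing exotic even today. The sketch below is a toy illustration only: the SSID, host and paths are made up, and a real phone platform would schedule and throttle this itself.

```python
# Toy sketch: push a data directory to a NAS over rsync when on the home WiFi.
# All names below (SSID, host, paths) are hypothetical placeholders.
import subprocess

HOME_SSID = "MyHomeNetwork"
NAS_TARGET = "backup@nas.local:/volume1/phone-backup/"
DATA_DIR = "/home/user/data/"

def current_ssid():
    # 'iwgetid -r' prints the SSID of the associated wireless network on Linux.
    out = subprocess.run(["iwgetid", "-r"], capture_output=True, text=True)
    return out.stdout.strip()

def backup_if_at_home():
    if current_ssid() != HOME_SSID:
        print("Not on the home WiFi, skipping backup.")
        return
    # Incremental copy: only changed files cross the network.
    subprocess.run(["rsync", "-az", "--delete", DATA_DIR, NAS_TARGET], check=True)
    print("Backup complete.")

if __name__ == "__main__":
    backup_if_at_home()
```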

  • Re:Do Not Want (Score:3, Interesting)

    by TheRaven64 ( 641858 ) on Sunday January 10, 2010 @02:54PM (#30715966) Journal
    Yes, you're quite correct. I remember a story from a former Intel Chief Architect about a bug in one instruction on the 486 that accidentally set a condition flag in some situations. A few game designers found that they could take advantage of this, and just slapped a minimum requirement of 486 on their game. Then the early Pentium designs crashed the game (when they ran it in the simulator), so every subsequent chip had to implement that buggy behaviour, only now it was documented behaviour...
  • by r00t ( 33219 ) on Sunday January 10, 2010 @03:45PM (#30716296) Journal

    Despite the waste, x86 instructions are still really compact compared to normal RISC instructions.

    Is this really an advantage though? Pipelined processors are much simpler when you can just grab 32 bits (or whatever) for an instruction. I'm not a hardware guy (have worked on a CPU design but was involved more in the compiler side) but it seems that it makes more sense to be consistent. Size of binary hasn't been a factor for years, even on handhelds.

    You'd have been 100% correct about 25 years ago. It used to be that CISC instruction decoding was a major cost. Today, CISC instruction decoding is fairly cheap.

    Binary size still matters because the instruction cache and the TLB have limits. As code size increases, you tend to "fall out" of the cache (it no longer fits), and that is a severe performance hit.

    Modern x86 gives you 16 integer registers, the same as ARM.

    x86-64 made some great improvements. What did Intel actually do? Is this an entire parallel instruction set? I will admit to being a bit of a dunce when it comes to this. Can we multiply using a different result register than DX:AX? (This one has always annoyed me.)

    Many of the more useless instructions go away when you set the "long mode" bit. That frees up opcode space for a few new instructions and for 16 new prefix bytes, collectively known as the REX prefix. When you use a REX prefix on an instruction, you get access to the extra registers and can perform 64-bit operations. (The REX bit layout is sketched after this comment.)

    Still, it makes more sense to leave optimisations to the compiler rather than do them on the fly in silicon.

    Intel tried that with the Itanium. It didn't work out so well. There are two huge problems for the compiler. Parallelism is hard for the compiler to identify when dealing with typical code. Memory latency is hard for the compiler to predict. These two problems mean that the compiler will generate code that stalls quite a bit. The "smart" CPU isn't so limited because it can exploit information that is only available at run-time.

    And the conditional execution is a useful feature that I really can't do without.

    It is a fun feature. You do at least get conditional move on x86, which covers most of the need.
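
    A footnote on the REX prefix mentioned above: the 16 prefix bytes are simply 0x40-0x4F, with the low four bits acting as flags. The tiny decoder below sketches that layout and nothing else about x86 encoding.

```python
# Sketch: decode the x86-64 REX prefix byte (0x40-0x4F).
# Bit layout is 0100WRXB:
#   W -> 64-bit operand size
#   R -> extends the ModRM 'reg' field (reaches r8-r15)
#   X -> extends the SIB index field
#   B -> extends the ModRM 'rm' / SIB base / opcode register field
def decode_rex(byte):
    if byte & 0xF0 != 0x40:
        raise ValueError(f"0x{byte:02x} is not a REX prefix")
    return {"W": bool(byte & 8), "R": bool(byte & 4),
            "X": bool(byte & 2), "B": bool(byte & 1)}

# 48 01 d8 encodes 'add rax, rbx'; 4c 01 c0 encodes 'add rax, r8'.
# The only difference in the prefix is the R bit, which selects r8 over rbx.
print(decode_rex(0x48))   # {'W': True, 'R': False, 'X': False, 'B': False}
print(decode_rex(0x4C))   # {'W': True, 'R': True, 'X': False, 'B': False}
```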

  • Re:Advantages... (Score:2, Interesting)

    by hattig ( 47930 ) on Monday January 11, 2010 @10:27AM (#30722796) Journal

    The world has moved on since 1999. Seriously, are you really comparing x86 to ARM based on an eleven-year-old device's features and compatibility?

    x86 JIT emulation on a 1GHz Cortex A8/A9 in the vein of the Alpha FX!32 emulation might equal a low-end Atom in performance. It does require someone to write it, and somehow integrate it into an ARM version of Wine though.

    And a 2006 mobile phone running one of the more limited smartphone OSes. Brilliant. You do know that there is plenty of Office-compatible software out there, like Documents2Go, that does all you need? Well, maybe you need VBA in Excel or something ...

    Hell, a 1GHz A9 could run OpenOffice. Not that it would be pretty on a 4" 800x480 display, but ...
