
Intel and LG Team Up For x86 Smartphone 157

Posted by Soulskill
from the my-dumbphone-looks-worse-every-day dept.
gbjbaanb writes "I love stories about new smartphones; it shows the IT market is doing something different from the usual same-old desktop apps. Maybe one day we'll all be using super smartphones as our primary computing platforms. And so, here's Intel's offering: the LG GW990. Running a Moorestown CPU, which gives 'considerably' better energy efficiency than the Atom, it runs Intel's Linux distro — Moblin. Quoting: 'In some respects, the GW990 — which has an impressive high-resolution 4.8-inch touchscreen display — seems more like a MID than a smartphone. It's possible that we won't see x86 phones with truly competitive all-day battery life until the emergence of Medfield, the Moorestown successor that is said to be coming in 2011. It is clear, however, that Intel aims to eventually compete squarely with ARM in the high-end smartphone market.'"
  • by omar.sahal (687649) on Sunday January 10, 2010 @01:31PM (#30715348) Homepage Journal

    compete squarely with ARM in the high-end smartphone market

    How can they do that when producing an ARM processor costs only ARM's royalty + the costs added on by many producers (Texas Instruments, Qualcomm, et al.)?

    • by Locutus (9039) on Sunday January 10, 2010 @01:55PM (#30715546)
      To top it off, Intel has to use their high-end process factories just to get the chips in the ballpark with ARM. They just announced the 32nm Atoms along with their new i3, i5 and i7, all on the same process. But as you mentioned, they have to sell the Atom far, far cheaper than the iX CPUs to be competitive. IMO, the FTC should look into this to make sure they're not dumping. At least with mainstream PC CPUs, they charged high prices at first and then ramped the price down as the newer processes came online. With these Atoms, they can't charge what they cost and still be competitive.

      And these new phones will probably have a fan and require 2GB of memory so they can run Windows. lol. If they only talk about GNU/Linux then we'll know they're serious, but if they pull Microsoft in, you know it's a PR game and, like the netbook segment, it'll run the prices up so high few will want them.

      LoL
    • Re: (Score:3, Insightful)

      by JackDW (904211)

      Part of the plan would surely involve getting into the IP core business, like ARM. AMD are doing it [eetimes.com], and some Intel researchers already have a prototype [acm.org].

    • Re: (Score:2, Informative)

      by KazW (1136177)

      compete squarely with ARM in the high-end smartphone market

      How can they do that when producing an ARM processor costs only ARM's royalty + the costs added on by many producers (Texas Instruments, Qualcomm, et al.)?

      I hate quote mining [wikipedia.org]... You should have used the entire sentence, because then you might have had to re-read it and picked up on a key word in it. I think you did notice, though, because your quote conveniently starts just after that word, which makes your post a troll in my eyes, and you're lucky I don't have mod points this week.

      It is clear, however, that Intel aims to eventually compete squarely with ARM in the high-end smartphone market.

    • by sznupi (719324) on Sunday January 10, 2010 @02:30PM (#30715804) Homepage

      ...and an order of magnitude less power usage for the same performance. Meaning fewer problems with heat, a smaller battery, and a much smaller phone with comparable performance.

      There is no benefit to x86 on smartphones that could drag Intel into this market; quite the contrary. ARM is established and working just fine.

    • How can they do that when producing an ARM processor costs only ARM's royalty + the costs added on by many producers

      I was going to ask, how can they compete when x86 processors cost an ARM and a leg!

  • by syousef (465911) on Sunday January 10, 2010 @01:31PM (#30715350) Journal

    Maybe one day we'll all be using super smartphones as our primary computing platforms.

    Oh, I sure hope not. Sounds like hell to me, and I'm an atheist!

    • by Hurricane78 (562437) <deleted.slashdot@org> on Sunday January 10, 2010 @01:54PM (#30715538)

      Well, that’s only your lack of imagination.

      Imagine a very powerful cell phone. With super-fast bluetooth. (Or wired bus if you prefer that.)
      Now imagine a normal screen, keyboard, mouse, and speakers/amplifier. All with bluetooth.
      There. If the speed and storage size are good, that’s all you usually need.

      Now imagine a dock where you put the phone in, to give it monstrous 3d hardware acceleration capabilities, or something else that needs a faster bus than bt can provide.
      Then you got games and professional use covered too.

      Finally one or multiple contact-lens displays, glasses, and a gesture glove reduced to some tiny ring or something. (There is something better, but I can’t talk about that right now.)

      I don’t see what’s missing there...

      • There goes **everything** if I'm using the phone for everything. When I drop it down a pit toilet, I'm not getting it back.

        Say, these are dirt cheap, right? I'm putting mine in the clothes washer you see...

        Should it also serve as my photo album, identity, and debit card? BTW, I just might drop it next to a 600v 3rd rail. Want to fetch it for me?

        • by copponex (13876) on Sunday January 10, 2010 @02:18PM (#30715714) Homepage

          The solutions to this are simple. This took me about a minute, not counting proofreading.

          1) The charging device also has a small hard drive built into it that always syncs the data - just like iTunes already does if you have an iPhone.

          2) The unique data - contacts, calendars, documents - are constantly backed up to a server over the internet connection. Program data can easily be preloaded or reloaded onto a new phone.

          3) As far as monetary risks are concerned, there is something called insurance. You may want to look into it.

          The line between what a cell phone, a laptop, and a desktop computer mean intrinsically will continue to blur. Soon it will simply be the size of the interface. You'll have a mobile. Maybe the mobile will dock into a laptop or tablet-style chassis to provide extra power, a full keyboard, and a larger screen - just like Lenovo demonstrated at CES. The mobile can also be docked to your desktop system if you really need some extra horsepower or a fiber connection to the net. Meanwhile, your data is always with you. Doesn't sound so bad.

          • For pity's sake, a smartphone is not going to do everything a laptop or desktop will do as well, no matter how you design it. I'm all for using a smartphone, but not as a panacea. Just as I don't use a hammer when I want to saw something.

            There are plenty of problems and solutions (some of which you have outlined) when it comes to phones, but that's not going to make them some piece of magic that does everything well. I don't want to do everything on a tiny screen with a tiny keyboard. Also note that some manu

        • Have you ever heard of the concept of backups? Hell, my mobile phone has a backup cron job configured to back up everything at night right now. And that function is even built in! I think all recent Symbian phones have it.
          I then have the phone software on the PC, which also has a built-in function to sync and back up the data in the background whenever I connect the phone.

          I could throw the phone away right now, and lose nothing.

          I only consider having something like a data safe, so nobody does shit wit

        • I'm putting mine in the clothes washer you see...
          I did this once. :( The phone was clean but never worked again.
          Luckily the SIM card still worked when I purchased a new, unlocked, phone.
      • The power you could have in a normal sized PC, where you can upgrade parts, swap broken parts, expand what you want.

        Sorry, but what you are describing is the laptop, and I hate it when I am made to use one as a regular PC.

      • If all that can be fit into a cellphone, imagine what can be fit into a laptop or desktop...
    • by ColdWetDog (752185) on Sunday January 10, 2010 @02:01PM (#30715584) Homepage

      Oh, I sure hope not. Sounds like hell to me, and I'm an atheist!

      Perhaps just purgatory. But it could work. Carry your uberdevice in your pocket (lead-foil lined), use it with its native human interface devices when wandering around. Pop it into some sort of dock at work with a decent keyboard, mouse and screen. Remember to pick it up before you go home.

      Obviously this sort of thing raises a number of issues and problems, and the hardware in a smartphone just can't compete with a real computer for now for anything other than email / browsing / light apps. I'd love it at the hospital where I work - walk around the bedside inputting data, looking up things, pop the thing into the dock at the nurses' station, look up an x-ray on a decent monitor, type in some notes, get up and walk around some more.

      Right now I have to scribble stuff on paper, walk over to a generic computer, log in to several different applications, gripe because Firefox isn't on this particular machine or doesn't have a utility that I like, actually do something useful, then log out of everything, rinse and repeat.

      So it might not be as bad as you envision it. Of course, this sort of thing requires significant multi-vendor coordination and standards, so I don't hold out much hope for it. A guy can dream ...

    • by SirWinston (54399)

      I don't see smartphones as our future primary computing platforms...or even as our future mobile computing platforms...in fact, I don't see them in the future at all. My guess is that--considering how texting and mobile browsing seem to be a larger part of the future than voice--we'll see touch tablets too large to be called phones but small enough to be reasonably portable take over eventually, with a tiny bluetooth or successor earclip or earbud headset for voice calls over the tablet's wireless internet

  • Competition (Score:2, Interesting)

    by Anonymous Coward

    Has anyone made a scatter plot of benchmark score vs watt, for a given benchmark and various x86 and ARM processors?

  • Do Not Want (Score:4, Interesting)

    by marcansoft (727665) <hector @ m a r c a n s o f t.com> on Sunday January 10, 2010 @01:39PM (#30715412) Homepage

    Here we have a platform where there is no reason whatsoever to have an ass-backwards-compatible architecture in order to run legacy Windows apps. There is zero reason to use x86 here other than marketing and Intel. Please go away, we're perfectly happy with a modern RISC architecture (ARM), thank you very much.

    Here's to hoping that ARM will permeate its way up into the netbook market and beyond, instead of the other way around. We've been tortured by x86 long enough already.

    • I am honestly curious: what advantage is a "modern RISC architecture" if you are not writing in assembly anyway?

      • Re:Do Not Want (Score:5, Informative)

        by marcansoft (727665) <hector @ m a r c a n s o f t.com> on Sunday January 10, 2010 @01:57PM (#30715556) Homepage

        The x86 CISC instruction set is so convoluted and ancient that x86 CPUs spend a lot of die area (and power) dealing with it and the weird ways that extensions have been tacked on over time. The fact that it's old also means the CPU requires tons of logic, because the instruction set was designed for simpler, lower-performing CPUs. Newer techniques to speed things up usually work best with software support, while x86 CPUs have to implement these older techniques and then add a compatibility layer to make them work seamlessly with the old instruction set and old OSes that know nothing about them.

        One large difference between ARM and x86 that people rarely realize is that ARM only (usually) guarantees compatibility at the application (usermode) level, while x86 has to maintain compatibility down to the OS (kernelmode) level. ARM is free to update their architecture, add features required for modern performance, and require that the OS deal with them. This is hardly an issue because OSes adapt fast these days and the ARM market has no dependency on ancient OSes. x86 still has to deal with the fact that some nutjob might want to run Windows 3.11. Even when x86 does implement newer stuff, like the SYSCALL and SYSRET instructions that aim to replace the ancient and slow software interrupt system call mechanism, OSes are slow to adapt and the CPU still has to carry around the logic for the old crap. Forever.

        • Re:Do Not Want (Score:5, Interesting)

          by TheRaven64 (641858) on Sunday January 10, 2010 @02:08PM (#30715628) Journal

          The x86 CISC instruction set is so convoluted and ancient that x86 CPUs spend a lot of die area (and power) dealing with it and the weird ways that extensions have been tacked over time

          It's worth noting that how true this is depends a lot on the market that the chip is aimed at. An Atom and a Xeon both have approximately the same number of transistors dedicated to decoding instructions. In the Atom, it's a noticeable chunk of the total, both in terms of die area and power consumption. In the Xeon it's an insignificant amount.

          The x86 decoder was a big problem comparing something like a 386 to a SPARC32. The SPARC32 could use the same number of transistors but have a far higher percentage devoted to execution units. Comparing a Core 2 to an UltraSPARC IV, it's not nearly as relevant. The percentage of the die dedicated to the decoder is pretty small on both and the difference between using 1% of your transistor budget for the decoder and 2% is not significant. Particularly when the more complex decoder lets you get away with a smaller instruction cache.

          When you scale things down to the size of an Atom or a Cortex A8, the difference becomes significant again. In 5-10 years, chips for mobile devices may well be in the same situation that desktop chips were a decade ago, and then x86 will be a minor handicap, rather than a crippling one, but even with a 32nm process the decoder is still a big (relative) drain on a mobile x86 chip.

          From what I've read, Intel doesn't have anything that comes close to the Cortex A9 (as seen in Tegra 2) or the Snapdragon in terms of performance per Watt.

          • Re: (Score:3, Informative)

            by marcansoft (727665)

            What you say is true of the decoder, but the issues with insane software demands do affect even large CPUs significantly. The percentage of the CPU dedicated to instruction decoding can go down as you increase the amount of execution units etc., but x86 CPUs still have to dedicate a lot of chip routing and logic to the (many) "special cases" that software might demand yet are incompatible with modern CPU optimizations. Things like snooping writes in case the CPU needs to invalidate a TLB, etc. As you add co

            • Re: (Score:3, Interesting)

              by TheRaven64 (641858)
              Yes, you're quite correct. I remember a story from a former Intel Chief Architect about a bug in one instruction on the 486 that accidentally set a condition flag in some situations. A few game designers found that they could take advantage of this, and just slapped a minimum requirement of 486 on their game. Then the early Pentium designs crashed the game (when they ran it in the simulator), so every subsequent chip had to implement that buggy behaviour, only now it was documented behaviour...
          • I personally think that in the mobile space there is another factor: there is no need for x86 compatibility whatsoever, while there is a need for ARM compatibility, since lots of CE programs are binary-only.
            So we have a clean table, and unless Intel pulls dirty tricks out of their hat (by forcing third parties to use their junk) I cannot see how they can even gain any ground there.

      • Re:Do Not Want (Score:5, Interesting)

        by TheRaven64 (641858) on Sunday January 10, 2010 @01:59PM (#30715564) Journal

        Simpler decoder. The instruction decoder is the one part of a CPU that you can't turn off while executing anything. An x86 decoder is much more complicated than, for example, an ARM decoder, so the minimum operating (i.e. not suspended) power consumption for the CPU is higher.

        An x86 chip has weird instructions for things like string manipulation that no compiler will ever emit, but which have to be supported by the decoder just in case. The usual advantage that x86 has over RISC chips is instruction density. Common instructions are shorter (actually, older instructions are shorter, for the most part, but old has quite a high correlation with common) and there are single instructions for things that are several RISC instructions, meaning that they can get away with smaller instruction caches than RISC chips.

        This doesn't apply to ARM. ARM instructions are incredibly dense. Most of them can be predicated on one or more condition registers, which means that you often don't need conditional branches for if statements in high-level languages. More importantly, there are things like Thumb and Thumb-2, which are 16-bit instruction sets suitable for a lot of ARM code, but which get very good cache density. Unlike x86, these are separate instruction sets. This means that the core can turn off the decoder hardware for the full ARM chip while in Thumb mode, and turn off the Thumb logic while in ARM mode. This gives you x86-like icache density and RISC-like decoder complexity, so you have the best of both worlds.

        • Re:Do Not Want (Score:4, Interesting)

          by marcansoft (727665) <hector @ m a r c a n s o f t.com> on Sunday January 10, 2010 @02:08PM (#30715636) Homepage

          Thumb is implemented as a layer on top of ARM (at least last I checked), so usually the ARM decoder will still be active while in Thumb mode. However, Thumb and Thumb-2 are essentially compressed versions of ARM, so the vast majority of the decoder can be shared. The combined decoder for a modern ARM CPU is still much simpler and better performing than the decoder for an x86 CPU.

          Another large advantage is that ARM programs by definition do not use things like self-modifying code without informing the CPU (i.e. issuing a dcache store and an icache invalidate). This means that ARM CPUs can be essentially Harvard architecture machines and they practically don't need any snooping logic for the caches. x86 CPUs always have to watch for insane things like an instruction modifying the instruction immediately after it in the pipeline, while that doesn't even work at all on ARM (you need the aforementioned flush/invalidate plus a write barrier and whatnot). Having CPUs impose these kinds of (reasonable) demands on the software is a very good thing for performance.

          • Another large advantage is that ARM programs by definition do not use things like self-modifying code without informing the CPU (i.e. issuing a dcache store and an icache invalidate). This means that ARM CPUs can be essentially Harvard architecture machines and they practically don't need any snooping logic for the caches.

            This was true until JITed code became common. (ActionScript, JavaScript, Java, .NET CLR, etc.) Now x86 has an advantage.

            x86: translate to native code and then just run it

            ARM: translate to native code, call into the OS, have THE CACHE FLUSHED (**ouch**), and then run it

            When do we most worry about performance? Oh, when running bloated stuff written in awful scripting languages that must be JITed. Uh oh...

            • On ARM, you call into the OS and store DCache/invalidate ICache ranges. That means only blocks you touched are stored on the D side and invalidated on the I side. This invalidation on the I side is likely to cost near nothing because chances are you weren't running code out of there before.

              x86 has to do the same thing, except the CPU has to devote a huge gob of logic to guessing when. And sometimes it guesses wrong and flushes stuff needlessly.

              Just because something is easier on x86 doesn't mean it's cheape

              • by r00t (33219)

                On ARM, you call into the OS and store DCache/invalidate ICache ranges.

                Yeah, I know. That's unusually stupid; on PowerPC at least they made those instructions unprivileged. System calls are not free.

                That means only blocks you touched are stored on the D side and invalidated on the I side.

                It's even cheaper on x86, because you can delay doing that work. Cache lines get flushed naturally as time passes. By the time the CPU needs things flushed, it might have already happened. Some JITed code will never run; it's best to never pay the cost of handling it.

                Of course these details are only explanations of why the system as a whole might perform well or poorly. The simple

                • Yeah, I know. That's unusually stupid; on PowerPC at least they made those instructions unprivileged. System calls are not free.

                  Yes, PowerPC is a nice architecture too. I wouldn't mind a world with PowerPC desktops and laptops and ARM netbooks and cellphones. FWIW, there's nothing preventing ARM from adding unprivileged operations on the cache.

                  It's even cheaper on x86, because you can delay doing that work. Cache lines get flushed naturally as time passes. By the time the CPU needs things flushed, it might

                  • by r00t (33219)

                    If you're not going to run the JIT code why on earth are you compiling it anyway? There's a reason why it's called JIT. Not to mention that you can just stick the cache ops right before the JIT code is run, instead of as it is compiled.

                    Code contains branches. For a branch that isn't taken, you have a few choices.

                    A. You don't JIT until you hit the branch. On x86, this is mildly bad because you add instruction cache pressure every time you go back into the JIT engine. On ARM, this is severely bad because you're doing a system call for every handful of JITed instructions.

                    B. You batch things up. You'll JIT some stuff that never runs.

                    I forgot to mention another difference. ARM caches tend to be virtually indexed and/or virtually tagged. This

                      • You don't JIT until you hit the branch. On x86, this is mildly bad because you add instruction cache pressure every time you go back into the JIT engine. On ARM, this is severely bad because you're doing a system call for every handful of JITed instructions.

                      System calls have tiny overhead on just about every system. Except x86.

                      Nevermind that I doubt most JIT stuff out there works at a branch level; they tend to compile at a procedure level.

                      You batch things up. You'll JIT some stuff that never runs.

                      ICache f

            • Hence ARM invented ThumbEE? [wikipedia.org]
              If and when the LLVM JIT targets ThumbEE, hey presto, the performance problem disappears, e.g. using Red Hat's Shark implementation of HotSpot.
        • by Bengie (1121981)

          "The instruction decoder is the one part of a CPU that you can't turn off while executing anything"

          The i7 can disable its decoder. Since the decoder turns x86 into micro-ops, if the i7 detects a loop it will turn the decoder off until the loop ends. But yeah, too many deprecated instructions that should just be dropped.

        • An x86 chip has weird instructions for things like string manipulation that no compiler will ever emit, but which have to be supported by the decoder just in case.

          Sorry, that's just wrong. Lots of compilers will emit those instructions, especially when optimizing for size or when the string is known to be small or unaligned. Both gcc and Visual Studio will do it. The string instructions perform very well for small strings, and decently for large strings.

          Even if compilers wouldn't emit those instructions, they are sometimes used in C library assembly.

          ARM instructions are incredibly dense. Most of them can be predicated on one or more condition registers, which means that you often don't need conditional branches for if statements in high-level languages.

          In the real world, compilers almost never do this. (way too difficult) When they do, it's almost never anything more co

    • Re: (Score:3, Insightful)

      by itsme1234 (199680)

      Here we have a platform where there is no reason whatsoever to have an ass-backwards-compatible architecture in order to run legacy Windows apps.

      There are tons and tons and tons of x86 apps that would run on some (potentially) oversized x86 phone with 800x600 resolution, 512MB RAM, a 1GHz CPU, and 8-16-32... GB of flash. Yes, you can do MANY things with iPhone, Android, Windows Mobile or Maemo. However, with a small x86 box, no matter how underpowered, you can do MOSTLY ANYTHING. And there's a big difference. Examples: Flash is big news on iPhone and Android. Java (as in browser applets): no chance on Android (don't know about the other platforms). That means some ban

      • There are tons and tons and tons of x86 apps that run on some (potential) over sized x86 phone with 800x600 resolution, 512MB RAM, 1GHz CPU, 8-16-32... GB flash

        And how many of those apps run on Linux? Of those, how many do not run on ARM?

        • by itsme1234 (199680)

          And how many of those apps run on Linux? Of those, how many do not run on ARM?

          I know this is slashdot and all but let's not pretend commercial software doesn't exist. Plus as I said I don't have patience to wait. Oh, they just brought Flash to my platform. How nice. Maybe we'll have Java in mobile browsers by 2011.

      • Re:Do Not Want (Score:4, Insightful)

        by EvilNTUser (573674) on Sunday January 10, 2010 @03:26PM (#30716152)

        If my phone had a USB host port, I could do all of the things you mentioned, and it runs Maemo + ARM Debian. Nasty corporate software excluded - and we'll all be better off if those guys are forced to modify their crap.

        Might I also suggest that you don't switch to a bank with a website that wants to run binaries on your computer. For your own good.

      • by bcmm (768152)

        Tried to print directly from your phone to some dumb printer?

        Tried to print directly from your laptop to someone else's dumb printer? Windows printer drivers are pretty horrible. At well over a gigabyte (!) of disk space required for some HP drivers (including all the non-optional utilities), it'd need a lot of storage (for a phone) to make you want to do that without thinking carefully...

      • Actually there are not too many banks that use applets nowadays, and it comes down to porting the applet API. Android already has most of Java's APIs underneath (mostly only Swing is left out, and it can be cross-ported), since it runs most of its infrastructure on Java.
        The rest comes down to delivering the drivers and having the USB port switched to host mode; the printer drivers are in Linux, so you have a higher chance of getting them there than on WinCE, which does not have any printing infrastructure.

    • But but how are we going to run the vast library of malware written for x86? That's just a killer app waiting to happen on smartphones, and what's holding them back from becoming truly mainstream. I'm excited about this latest development. Oh, that and the ability to synergistically leverage all the x86 compiler expertise built up over the years.
  • I don't see Intel competing with ARM; ARM has an advantage over x86 in performance per watt. Then again, DEC, MIPS and many other RISC vendors didn't see Intel competing with them in the high-end workstation and server market. Hindsight is 20/20.

    • by Anonymous Coward on Sunday January 10, 2010 @02:18PM (#30715716)

      When humans get to the point where the technology of Star Trek is reality, the spaceships' computers will be running x86. That makes me sad.

      • Actually I don't find that funny, because it is true. For the first time in 20 years we finally have the possibility of burying the overloaded, awful, power-hungry x86 garbage in a future segment of computing without too much hurt, and yet again it seems that we cannot do it, entirely for the sake of filling the pockets of a handful of people.
        ARM has one of the best instruction sets ever created for a computer, and it is probably the best power-saving processor architecture in existence, yet Intel wants again to bur

  • maybe the image used in this topic's OP? Seriously, Intel has to make their Atom chips on top-of-the-line 32nm-or-better process equipment just to be in the ballgame with the ARM or PPC chips with regard to performance per watt. They now want to put x86 chips in smartphones? I guess they can try to spin up the press about this failure and hope they can drag it out for another few years. Maybe at 24nm it'll work, but by then ARM will be on 32nm, probably running quad cores, and still beating them.

    My
    • Actually ARM's Cortex will already be produced on GlobalFoundries' 28nm process; this was one of the big news items of CES this year. That is the advantage of having a low transistor count and many manufacturers: you can always move to the best one, and you won't have to fight so hard to get to smaller structures. By the time Intel is on 32nm we will probably see a 2-4 GHz Cortex A10 at 28nm with less power consumption than the best A9 today, running circles again around Intel's low-end offerings.

      Intel cann

  • "than the Atom" (Score:2, Insightful)

    by Anonymous Coward

    uh, Moorestown is the latest iteration of the Atom.

  • Advantages... (Score:2, Insightful)

    by 91degrees (207121)
    ARM: Low power.

    x86: Runs most desktop PC applications.

    For a desktop PC the ability to run most PC applications is extremely important. For a smartphone, who cares? I don't want to run Paintshop Pro, Word, or Call of Duty on my smartphone. The apps that I do want to run already work on ARM. I do want low power. The improvements Intel has made are barely significant next to ARM's huge advantage here.
  • by Joe The Dragon (967727) on Sunday January 10, 2010 @02:00PM (#30715570)

    Needs to be open - no app store lock-in / SIM locks as well.

    • No it doesn't.

      You want it to be.

      However, as the market has shown, it doesn't have to be and it can be very successful without being 'open' as you define it.

      Most of the rest of the world doesn't have some ideological battle against the man to fight, they just want their phone to work.

      If it needed to be open with no lockin, then it would be or they'd lose money.

      You guys really need to wake up and smell reality. Learn the difference between 'It needs' and 'I want' at the very least.

      • Re: (Score:3, Insightful)

        I own a phone that will never be popular. It will never be the iPhone killer that it could be given 6 more months of hardcore development and polish. The Nokia N900 runs similar hardware, but improves on it in many places (slide-out keyboard, comes with a TV-out cable right out of the box, ctrl+shift+x brings up an xterm, integrates Skype, which means that Skype calls are just as easy as phone calls); however, it will never be as popular as the iPhone because it is so damn open, is without a major carrier's bless
      • If you're referring to the iPhone, the market has shown that it's nowhere near top market share. In fact, with Symbian, Android and Maemo, over half of 2010 smartphone production will be open or opening source.

        WTF, are we winning?

        Remember to vote with your wallet and demand root access.

      • Apple sells a lot, but could it sell more?

        SOE sold a lot of copies of EverQuest; it didn't need to improve the game, it was successful.

        And then Blizzard changed the game and redefined what success meant for an MMO.

        Roughly the same thing happened to Apple earlier, by the way. Apple did fine with its first Macs. And then Wintel (Compaq) showed what success really meant as they broke all the sales records anyone had made before.

        Success is relative.

      • by Yfrwlf (998822)
        Yes, curse things that allow me to run what I want! Those all suck! What consumers would want to run what they want??? Oh yes, that's right, the ones who don't know anything but what they see before them, until their friends show them what they could have been running instead if it weren't for them using a locked down device with no freedom.
      • by Yfrwlf (998822)
        Most of the rest of the world doesn't have some ideological battle against the man to fight, they just want their phone to work.

        Let me reply again with less sarcasm and more clarity. =P

        No, the world always wants more freedom. Freedom should not be belittled, hence my previous comment. That's what the "fight against the man" represents: a desire not to be controlled, and in this way consumers have many reasons to rebel. Yes, you're right in that some may be happy being lulled into the "my device
  • It is clear, however, that Intel aims to eventually compete squarely with ARM in the high-end smartphone market

    woe to the company that Intel decides to "compete" with.

  • This article is full of shit. By the time smartphones become our primary computing platform, we'll be using at least Super Duper Smartphones, if not Super Mega Hyper Fragilistic Smartphones.
