An Interesting Look At the Performance of JavaScript On Mobile Devices

First time accepted submitter faffod writes "Coming from a background of console development, where memory management is a daily concern, I found it interesting that there was any doubt that memory management on a constrained system, like a mobile device, would be a concern. Drew Crawford took the time to document his thoughts, and though there is room for some bikesheding, overall it is spot on. Plus it taught me what bikeshedding means."
  • by EvanED ( 569694 ) <(moc.liamg) (ta) (denave)> on Sunday July 14, 2013 @07:08PM (#44280687)

    Don't blame Timothy for the dupe [slashdot.org]; no doubt he posted it from his mobile.

  • by Rob_Bryerton ( 606093 ) on Sunday July 14, 2013 @07:17PM (#44280719) Homepage
    From TFS:

    ...and though there is room for some bikesheding, overall it is spot on. Plus it taught me what bikeshedding means."

    But it didn't teach you how to spell it...

  • by goruka ( 1721094 ) on Sunday July 14, 2013 @08:10PM (#44280915)
    I'm probably going to get downvoted as a troll, but my experiences with most console developers have often been strange (I'm a developer myself).
    Talks usually end with most of them dismissing scripting languages, higher-level APIs (such as OpenGL), or certain algorithms as useless because they are slow, use too many instructions unnecessarily, waste cache, etc.

    Any attempt to raise the point that you don't need to optimize everything, only the few critical zones of your code (what matters), or that a cache-wasting algorithm can end up being faster anyway just because it's more efficient overall, immediately results in my being dismissed or treated as ignorant, because something inefficient is obviously inefficient and I must be stupid for not realizing that.

    This article reminds me of that. The author sets out (it's his very first claim) determined to prove that something is less useful because it's slower, and nowhere in that huge, long piece of text is anything useful offered as proof; instead he keeps posting data about how slow JavaScript really is.
    • Any attempt to raise the point that you don't need to optimize everything, only the few critical zones of your code (what matters) ... immediately results in my being dismissed or treated as ignorant

      To be fair, if you were debating with someone who writes applications that really do need the very top levels of performance, and you claimed that optimising trouble-spots would be sufficient to achieve that, then you were ignorant. For most software, being within a factor of 2 or 3 of hand-optimised low-level code is easily sufficient, and a bit of work with the profiler to identify the most serious flaws will go a long way. The rules change when you shift from that kind of performance standard to needing the very top levels, because then the emphasis on speed permeates everything.

    • by Arker ( 91948 ) on Sunday July 14, 2013 @08:50PM (#44281107) Homepage

      They aren't useless, but they are certainly not always the best (or even an appropriate) tool to use. Scripting is great for the trivial-but-useful, particularly a one-off or a mock-up. But any sort of serious computer program is going to be better if you actually program it, and most anything trivial-but-useful enough to exist as a JavaScript app should be quickly reproducible natively on any platform you want. Do you need a native program for a simple calculator? No, JavaScript will do. And that's great.

      At the same time, if that simple calculator is something you use very often, make an app. You don't have to be worried about the simple calculator overwhelming your phone to think that makes sense - we all multitask these days and/or worry about electricity/battery life etc. on many devices.

      It's funny seeing how people are rediscovering the importance of programming as a result of these phones. The reason it makes me chuckle is that I remember working with PCs that had orders of magnitude less system resources than these phones do, so I understand that it is actually the abstraction layers in use that make the hardware seem so limited.

      You kids nowadays think a power-throttled ARM processor and a couple of gigabytes of RAM is 'limited resources.' You should try a Z80 with 8KB of RAM. Believe it or not, if you actually program the thing instead of expecting your libraries and abstraction layers to deal with it for you, you could make that work too.

      • by TheDarkMaster ( 1292526 ) on Monday July 15, 2013 @11:27AM (#44285741)
        It's funny seeing how people are rediscovering the importance of programming as a result of these phones

        Very, very well said, sir.
    • by arth1 ( 260657 ) on Sunday July 14, 2013 @08:53PM (#44281125) Homepage Journal

      The problem is that your baby is not the only thing running on the system. When you waste resources, you do it on behalf of everything else that runs too. Even if your baby isn't doing anything critical when you waste it.

      It only takes one selfish programmer to screw up an embedded system. You are he.

      • by goruka ( 1721094 ) on Monday July 15, 2013 @07:54AM (#44283163)

        It only takes one selfish programmer to screw up an embedded system. You are he.

        Even though it's unrelated to my original post, are you saying that not going native is worse because it uses more CPU cycles/battery?
        Explain to me why, for decades, the industry used J2ME, Java (Android) and now ObjC (Apple). I guess the entire mobile industry is selfish and greedy?
        You probably didn't understand the GP, though; the message is that you don't need to optimize something that doesn't consume enough cycles to be a performance problem.

        • Even though it's unrelated to my original post, are you saying that not going native is worse because it uses more CPU cycles/battery?
          Explain to me why, for decades, the industry used J2ME, Java (Android) and now ObjC (Apple). I guess the entire mobile industry is selfish and greedy?

          Of course they are. But that's beside the point. Development is a trade-off - you have to work with the market you have, within deadlines that mean you'll actually sell something, and with developers you can find and afford. So yes, you make do with whatever makes the task feasible.
          But you don't have to make it any worse than necessary by allowing bloat and doing things inefficiently. Adopting the mindset that you work in a shared embedded environment, and doing things frugally, doesn't incur a great cost.

          You probably didn't understand the GP, though; the message is that you don't need to optimize something that doesn't consume enough cycles to be a performance problem.

          It's not just about cycles. It's also about resource use in a shared environment. The key word being shared. That something doesn't impact your own application isn't the point - unless you have thought about how it could impact other applications and the overall system, you haven't done your job.

          • To follow up on my own post, what we see in environments like the Android world is a tragedy of the commons. If everybody played nice, everybody would benefit. But there's no penalty to yourself for being greedy, so you are. And so are all others.

            Android really needs something like strictly enforced cgroups.

            • by goruka ( 1721094 ) on Monday July 15, 2013 @11:55AM (#44286101)
              I understand your point, but I believe it's a little too extreme.
              In the real world, it is always possible to write more efficient code, but the more you optimize, the more difficult the code becomes to develop, maintain, or port - exponentially so.
              So in the end, it's always a trade-off between performance and cost of development, added to the fact that not all code needs to be optimized, only the little portions that perform the most critical tasks.
              • added to the fact that not all code needs to be optimized, only the little portions that perform the most critical tasks.

                That this is false is my point - it's only true if your app is the only app on a system. On a shared embedded system, the portions that don't do critical tasks are just as important to optimize for the rest of the system.
                Because there's no penalty to your own app, it becomes a tragedy of the commons [wikipedia.org].

                • by goruka ( 1721094 ) on Monday July 15, 2013 @01:03PM (#44287003)
                  So, what's the difference then, that your phone battery will last 18 hours instead of 20 because you didn't optimize more than the critical tasks?
                  It seems much cheaper to solve this by adding a little more battery capacity, yet keep your phone OS and applications easier and cheaper to develop.
                  No matter how you look at it, I can't see the scenario you describe as being a tragedy.
                  • You didn't follow the link, did you? It's a situation that's called "a tragedy of the commons", which doesn't mean it's a tragedy.

                    And anyhow, it's not about battery life. Applications that use more than their fair share of memory, IO or other resources contribute to starvation for the other apps running at the same time - possibly causing those apps to crash when they can't allocate memory (even though they're well behaved and allocate when needed and free when done), fail to update alarms in time, fail to take a phone call(!), fail to AV-scan an incoming e-mail, or any of a million other things that can go wrong when too many apps on a system are hogs.

        • Explain to me why, for decades, the industry used J2ME

          Because Sun pushed it on everyone. It sucked big time, though. Did you ever write a J2ME app? It was the kind of platform where everything was an object except for primitives, but memory management was so messed up - because of the combination of GC and an extremely small heap - that pretty much no serious app used objects at all. Instead, you preallocated arrays of primitives and used those for everything.

          Java (Android)

          You mean, the only mobile platform that still has horrible UI latency?

          and now ObjC (Apple).

          Obj-C compiles to native code and does not have a GC.

          In fact, it's kinda one of the major points in TFA. Are you sure you've actually read it?

          the message is that you don't need to optimize something that doesn't consume enough cycles to be a performance problem.

          The problem in question is not a performance problem, it's a responsiveness problem. If you have a GC kick in during a touch-driven animation (e.g. user swiping a list to scroll), and it takes 200ms to walk your object graph and collect unreferenced objects, you've already lost that battle - the user will see lag and jittery animation that desyncs from his finger movement. Yet this exact thing is unavoidable in languages with tracing background GC, like Java or JS, since you don't have any control over when the GC kicks in. The only thing you can really do is avoid object allocations in the first place - which turns Java into an ugly and unwieldy subset of C (J2ME-style), and is utterly impossible in JS since what gets allocated where is an implementation detail there.
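
          FWIW, the usual JS workaround is the same trick the parent describes for Java: allocate everything up front and reuse it, so the hot path creates no new garbage for the collector to trace. A rough sketch of the idea - every name here (pool, acquire, frame) is made up for illustration, not taken from TFA:

            // Pre-allocate a fixed pool of objects before any animation starts,
            // then reuse them inside the frame loop instead of creating new ones.
            var POOL_SIZE = 256;
            var pool = [];
            for (var i = 0; i < POOL_SIZE; i++) {
              pool.push({ x: 0, y: 0, dx: 0, dy: 0, active: false });
            }

            function acquire() {            // hand out an idle object, never allocate
              for (var i = 0; i < POOL_SIZE; i++) {
                if (!pool[i].active) { pool[i].active = true; return pool[i]; }
              }
              return null;                  // pool exhausted: caller must cope, not allocate
            }

            function release(obj) {         // return an object to the pool
              obj.active = false;
            }

            function frame() {
              // No object, array or closure literals in here - everything we touch
              // already exists, so a scroll/fling animation adds no garbage.
              for (var i = 0; i < POOL_SIZE; i++) {
                var p = pool[i];
                if (p.active) {
                  p.x += p.dx;
                  p.y += p.dy;
                }
              }
              requestAnimationFrame(frame);
            }
            requestAnimationFrame(frame);

          It helps, but as the parent says it only shrinks the garbage; you still get no say in when the collector decides to go after whatever you allocated earlier.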

    • by dutchwhizzman ( 817898 ) on Monday July 15, 2013 @12:23AM (#44281865)

      I happen to have an average of about 200 tabs open on most of my daily use machines. This tends to eat most of the resources that my machine has, regardless of how modern the machine is. If it's an old clunker, it chokes on less and I generally don't make it that far before I have to kill the browser and restart it. It seems impossible for any OS vendor and any "full featured" web browser to just deal with the limitations of the system and keep the application snappy and usable.

      Sure, it is way faster if you load just a few pages, but I just don't close tabs I think I might need later. Bookmarks accumulate so fast and searching them is bothersome, let alone cleaning them out. Regardless of my reasons why I do it, I see a lot of people hit the limitations of the model in which speed is achieved by unlimited resource claiming and not by efficient coding, if it comes to web browsers. The reason that mobile is in the picture is because a mobile device is way less powerful and battery life is much more of a concern than for a desktop or AC powered laptop.

      I wish they would test "speed" by simultaneously loading 200 tabs of content-rich web pages on a single-core system with 512MB of RAM available to the browser and a slow magnetic drive (5400rpm laptop drive) for storage. In my opinion, that would be a far more realistic test than loading a single page, keeping everything in RAM, and only looking at methods that will bottleneck at CPU/GPU or memory access. I bet that "speed improvements" in the JS engine would suddenly come from very different optimizations than the current philosophy produces, and that they would greatly benefit "mobile" too.
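
      Something along these lines would actually run that test with a headless-browser driver (Puppeteer here, which admittedly post-dates this thread; the urls.txt file and the 200-tab count are placeholders):

        // Open up to 200 pages in one browser without closing any of them,
        // recording per-page load time and JS heap usage as the pressure builds.
        const fs = require('fs');
        const puppeteer = require('puppeteer');

        (async () => {
          const urls = fs.readFileSync('urls.txt', 'utf8').trim().split('\n');
          const browser = await puppeteer.launch();
          const results = [];

          for (const url of urls.slice(0, 200)) {
            const page = await browser.newPage();
            const start = Date.now();
            try {
              await page.goto(url, { waitUntil: 'load', timeout: 60000 });
              const metrics = await page.metrics();   // includes JSHeapUsedSize
              results.push({ url, ms: Date.now() - start, heapMB: Math.round(metrics.JSHeapUsedSize / 1048576) });
            } catch (e) {
              results.push({ url, error: e.message });
            }
            // The tab is left open on purpose: cumulative load is the whole point.
          }

          console.table(results);
          await browser.close();
        })();

      Run it on the weak hardware described above, and the interesting number is how quickly the later pages degrade compared to the first few.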

    • by White Flame ( 1074973 ) on Monday July 15, 2013 @06:06AM (#44282685)

      The funny thing is that he's recommending Automatic Reference Counting instead, which destroys cache much more than GC during regular processing.

    • by UnknownSoldier ( 67820 ) on Monday July 15, 2013 @10:59AM (#44285405)

      > Talks usually end up in most of them dismissing scripting languages, higher level APIs (such as OpenGL),

      A few years back I implemented OpenGL on the Wii and did maintenance work on our OpenGL version running on the PS2. Hell, we even shipped a couple of games with it. OpenGL 1.x _can_ be implemented efficiently on a console if you apply some discipline. People who dismiss a rendering pipeline probably have never implemented one. HOWEVER, their point is that memory management CAN be an issue if one isn't careful. (NOTE: please don't confuse a misapplication with the theory: Broadcom has a garbage implementation of OpenGL ES 1.x & 2.x across their devices, at least the ones I've worked with, but there is no reason it needs to be that bad. Apple has a very nice OpenGL ES implementation.)

      > I'm probably going to get downvoted as a troll, but my experiences with most console developers have often been strange (I'm a developer myself).
      > Any attempt to raise the point that you don't need to optimize everything, only the few critical zones of your code (what matters),

      No, the reason you should get down-modded is BECAUSE your mindset is part of the problem, not the solution:

      1. You don't understand the first rule of computing:

      TINSTAAFL: There Is No Such Thing As A Free Lunch.

      I do not know to what extent you are one, but lazy developers are spoiled by excessive virtual memory, slow high-level languages, and pulling in every bloated 3rd-party library under the sun plus the kitchen sink, all of which OVERALL add up to a SLOW machine. Great developers _constantly_ keep optimization in the back of their minds THROUGHOUT the writing process so they don't have to go and clean up all the crap later. The mentality of "we will fix it later" is the sign of an immature and inexperienced programmer.

      2. Your boss / peers are trying to teach you an important lesson:

      Do It Right The First Time!

      and

      Keep It Simple, Silly!

      THAT is the point -- not your uneducated rant of "Who cares about memory usage, memory fragmentation, garbage collection, memory leaks, CPU cycles, etc." -- well, your USER does, EVEN if they don't understand the technical terms. ONCE you say the user experience is no longer important, you have FAILED as a programmer. People are the SOLE reason software even exists in the first place. Please stop this shitty attitude of "performance doesn't matter -- we'll just throw more hardware at it." No! NO! NO! How about doing the best* with what you have instead?? ALL your code should be (relatively) clean, simple, fast, and efficient. That has the side effect of making it EASY to maintain to boot! Who wouldn't want that? And you're going to make excuses that you can't be bothered???

      * Apply the 80/20 rule for time-management in case it wasn't obvious.

      To bring this back on topic, JavaScript is a badly designed & implemented language because of:

      a) lack of memory control, which leads to memory fragmentation and bloat, and
      b) lack of types, which forces the run-time to do extra unnecessary work that a properly designed language would have avoided in the first place (see the sketch below).

      There is a time and place for everything. Just never forget to keep asking the question: Can we do this better?
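
      To put a concrete face on point (b): a modern JS engine tries to recover the missing type information at run time by specializing on whatever types a function actually sees, and mixing types at one call site defeats that. A purely illustrative example:

        // The engine has to guess what '+' means here: number add, string
        // concatenation, or a valueOf() call on some object.
        function sum(values) {
          var total = 0;
          for (var i = 0; i < values.length; i++) {
            total += values[i];
          }
          return total;
        }

        // Type-stable call: the JIT can specialize the loop to plain number adds.
        var fast = sum([1, 2, 3, 4, 5]);

        // Mixed types at the same call site: the specialized code gets thrown
        // away and a slower generic path handles numbers, strings and objects.
        var slow = sum([1, "2", { valueOf: function () { return 3; } }]);

      A statically typed language settles all of that at compile time, which is the "extra unnecessary work" being complained about.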

      --
      "Necessity is the Mother of Invention, but Curiosity is the Father." -- Michaelangelo

  • by slashmydots ( 2189826 ) on Sunday July 14, 2013 @08:24PM (#44280981)
    I learned C++, then VB.NET, then C#, then JavaScript. I was shocked at how "anything goes" it was, and I assumed it would be a memory nightmare. Considering my i5-2400 sometimes maxes out on pages with complicated JavaScript, I'm not surprised. I've heard that JS can realistically take 10x more memory than it actually needs at any given time.
    The one thing I can't wrap my head around is: if it were made a "real" language, would it be a gigantic security disaster? Or could it be locked down enough not to turn into Flash, Java, etc.?
    • by Rockoon ( 1252108 ) on Sunday July 14, 2013 @10:06PM (#44281403)
      I started programming at a time when GOTO was still considered "kosher" (in C, no less) because a lot of algorithms were designed as state machines. To this day I still sometimes consider a goto, but even I stand in awe of the complete retardedness that is JavaScript.

      What happened to languages that pick a few things and do them well? JavaScript seems to have evolved to try to do everything, and yet it doesn't do anything even close to well.
    • node.js (Score:5, Funny)

      by dutchwhizzman ( 817898 ) on Monday July 15, 2013 @12:25AM (#44281873)
      Try looking into node.js and see all your fears come true.
    • by TheDarkMaster ( 1292526 ) on Monday July 15, 2013 @11:59AM (#44286157)
      Good question. My educated guess is that the first problem would be security; after all, you would be running a complete application simply by accessing a page. The second problem would be that, since everyone has a favorite language, deciding on the "lingua franca" of the Web would set off a Digital World War.
  • Crisis Averted! (Score:5, Insightful)

    by GreyLurk ( 35139 ) on Sunday July 14, 2013 @08:31PM (#44281021) Homepage Journal
    Whew, I'm glad we managed to get everyone to switch off of memory-leaking and CPU intensive Flash stuff over to standards compliant HTML5 and JavaScript then!
  • Animated GIFs.

    I think they're of the devil, but for some reason a lot of baseball stat heads still use them instead of a video format when they want to post a few seconds of a game for illustrative purposes. It's weird because these are generally young guys, not the old farts who you'd expect not to have changed their workflow since 1995...

    JavaScript on the iPad? That doesn't seem slow - certainly not enough to where it registers anyway.

    • for some reason a lot of baseball stat heads still use [GIF animations] instead of a video format

      Some browsers can view only H.264 and animated GIF. Other browsers can view only Theora, WebM, and animated GIF. Some, such as the latest version of Internet Explorer that runs on Windows XP, can't view anything but animated GIF without plug-ins that may or may not be installed and that the current user may or may not have privileges to install. If the only video format supported by all browsers is animated GIF, what should a site use to reach the most viewers?

      • by dutchwhizzman ( 817898 ) on Monday July 15, 2013 @12:30AM (#44281879)
        True, animated GIF is the most widely supported "movie" format if you look at all target platforms. However, there is currently no technology implemented in browsers that will take an animated GIF, re-render it into something that can be accelerated by the video card, and use that for output. This results in the browser pumping all the frames to the video card unaccelerated. Devices with limited resources (read: all devices that have more than a few tabs open, or mobile devices) will hit the limits of the hardware pretty fast with this sort of animation.
  • More Full Response (Score:5, Insightful)

    by El Royo ( 907295 ) on Sunday July 14, 2013 @09:41PM (#44281309) Homepage
    I made a comment [slashdot.org] to a poster over on the original posting of this. I think it's worth expanding upon in case people are persuaded by the arguments in the paper.

    First off, just as TFA predicts, I'm not going to try to conquer his mountain of facts and experts by presenting a mountain of citations. Instead, I'm going to point out where his conclusions are not supported by his facts and point out his straw man arguments and his attempt to convince us through overwhelming expert opinion.

    The straw man: In the article, he presents two scenarios (photo editing and video streaming) and claims that you can't reasonably do those because of memory limitations (on the iPhone/iPad). He then concludes you can't produce useful apps because you can't do those two. I couldn't find any citations of people attempting to do this on mobile using JavaScript. Choose the right tool for the job here. I'll give him these two use cases (and several others: 3D games, audio processing, etc.); however, to extrapolate from here that no useful apps can be produced (ever!) using JavaScript is a leap too far.

    Next, he spends a lot of time diving into the particulars of garbage collection (GC). I'm going to grant him practically every point he made about GCs. They're true. And, it's true that mobile is a constrained environment and you must pay attention to this. But this is largely known by developers who are trying to write high-performance JavaScript applications on mobile. Hell, -anyone- writing high-performance apps in any language needs to be aware of this. If you allocate memory during your animation routines in a game you're asking for trouble, regardless of the language. So, to me, this part is just a call to pay attention to your memory usage in your apps. This is really useful advice and I will be paying even more attention to the new memory tools available in the latest Google Chrome dev tools.
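
    On that last point, even a crude heap watch during development tells you whether your animation code is churning memory. Chrome exposes a non-standard performance.memory object you can poll - the 5 MB threshold and 1-second interval below are arbitrary illustrative values:

      // Warn whenever the used JS heap jumps sharply between samples.
      var lastUsed = 0;
      setInterval(function () {
        if (!window.performance || !performance.memory) return;   // Chrome-only API
        var used = performance.memory.usedJSHeapSize;
        if (lastUsed > 0 && used - lastUsed > 5 * 1024 * 1024) {
          console.warn('Heap grew ' + ((used - lastUsed) / 1048576).toFixed(1) + ' MB in the last second');
        }
        lastUsed = used;
      }, 1000);

    It's no substitute for a real heap snapshot, but it flags the "allocating every frame" mistake almost immediately.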

    One of the biggest problems in the rant is the comparison of pure computing performance and his claim that ARM will never be as fast as desktop. I'm going to again grant that this is true. However, this means crap-all for most apps. Tell me: How many apps do you have on your phone that are processor-bound? None? One? Two? The vast majority of apps spend their time either waiting on the user or, possibly, waiting on the network. You can write a lot of really useful apps even given constrained processor and memory. Anyone remember the Palm Pre? The TouchPad? Most of those apps were JavaScript and they worked just fine.

    This brings me to the point of all this, TFA's author focuses on performance. However, users focus on responsiveness. JavaScript is perfectly capable of producing responsive applications. Sometimes, it takes attention to detail. Nothing is ever 100% free and easy. JavaScript is not a magic solution and those of us who think that JavaScript has a future in mobile app development know this. This is why programmers get the big bucks. Writing mobile apps, you need to be aware of the effects of CSS, memory, processor, responsiveness and more.
    • by CadentOrange ( 2429626 ) on Monday July 15, 2013 @03:04AM (#44282269)

      One of the biggest problems in the rant is the comparison of pure computing performance and his claim that ARM will never be as fast as desktop. I'm going to again grant that this is true. However, this means crap-all for most apps. Tell me: How many apps do you have on your phone that are processor-bound? None? One? Two?

      The other issue you're not addressing is runtime memory requirements. From the GC performance chart, the best-performing GC that provides near-native performance does so by requiring 5x more memory. This is going to impact the number of concurrently running apps on your phone/tablet before things start slowing down. Users prize snappy interfaces, and if their mobile device slows down then the knowledgeable users will bring up a task manager to figure out what's going on.

      Do you want to stick around to see what they'd write on your app's page in the App Store?

    • by Luyseyal ( 3154 ) <swaters@luy.i n f o> on Monday July 15, 2013 @11:09AM (#44285503) Homepage

      I think you're overstating the case of performance versus responsiveness. He does specifically point out that GC negatively impacts UI response times and that that is not going to work.

      I agree with you that people who work in the problem space know about memory management by experience. However, that JavaScript makes it so difficult to manually manage memory seems to be his real point.

      Lastly, if we could solve the network speed problem, you would just outsource real CPU/memory apps to the server and simply RDP* the relevant bits back to the mobile device.

      -l

      * RDP or whatever protocol makes the most sense. I just used RDP to get the example across.

    • by shutdown -p now ( 807394 ) on Monday July 15, 2013 @03:08PM (#44288471) Journal

      If you allocate memory during your animation routines in a game you're asking for trouble, regardless of the language.

      That's not true at all. If you allocate memory during animation in a language with deterministic memory management, you have a pretty good understanding of what it'll cost and whether you can afford it (and in many cases, the answer is yes).

      Note that animations are not specific to games. One common case where you allocate memory during an animation is when the user is scrolling a list that is backed by a dynamic data store (i.e. items are generated "on the fly").

      More importantly, the problem with GC is not allocation, it's deallocation (or rather the object graph walk that precedes it). You might have not allocated anything inside your animation loop, but you have surely allocated something at some point in the past. You have no guarantee that the GC does not decide to kick in, pause your loop, and do a graph walk to clean that up, if e.g. the OS reports that it's low on memory.
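
      You can even watch this happen from inside the page: run a loop that does constant work and allocates nothing, and frames will still occasionally blow way past the ~16 ms budget when a collection (or other runtime housekeeping) kicks in. A small probe, with an arbitrary 3x-budget threshold:

        // Log frames that take far longer than a 60 fps frame budget. JS can't
        // observe the GC directly, but on a non-allocating loop the big outliers
        // are usually collector pauses or other work the runtime scheduled.
        var lastFrame = null;
        function probe(now) {
          if (lastFrame !== null) {
            var dt = now - lastFrame;
            if (dt > 3 * (1000 / 60)) {
              console.warn('Long frame: ' + dt.toFixed(1) + ' ms');
            }
          }
          lastFrame = now;
          requestAnimationFrame(probe);
        }
        requestAnimationFrame(probe);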

      This brings me to the point of all this, TFA's author focuses on performance. However, users focus on responsiveness. JavaScript is perfectly capable of producing responsive applications.

      I would argue it's exactly the other way around: the author focuses on responsiveness, and brings in performance largely as part of the discussion about responsiveness. That's precisely why he dedicates so much time specifically to GC, and comparatively little to other things that affect raw perf (like dynamic typing) - because GC and responsiveness don't mix well.

  • TLDR version (Score:5, Informative)

    by Rob_Bryerton ( 606093 ) on Sunday July 14, 2013 @09:51PM (#44281345) Homepage
    Actually a pretty well-written piece, if a bit wordy. I see a lot of people commenting here are perhaps missing the point, thinking that the author's angle was JS=BAD. Not at all. My take was that his issue was not so much with JavaScript as with garbage-collected languages in general.

    An important point he made was about GC routines and how they tend to be unpredictable in terms of when and how long they run. He also spent a good deal of time on his observation that, if you have several times more memory available than your app needs, the GC routines are very non-intrusive; however, when you get into a low-memory situation, the performance hit from GC is huge and causes obvious stutters in the application and/or its UI.

    There was also some discussion of the irony of working around (or trying to "spoof") the GC by using various manual techniques, and how that almost amounts to manual memory management. All in all, a really interesting read.
  • by vikingpower ( 768921 ) on Monday July 15, 2013 @01:29AM (#44282037) Homepage Journal
    The very same blog post was already the subject of an article on Slashdot last week. What is going on here ??
  • The Opera browser (versions up to 12) was shown on one graph (iPhone 4S) in the article, where it kicked some serious a$$, but it was never mentioned again.

    One reply to the article was about Opera being the only browser to run Google Wave, kind of: "The only browser that ran it with anything resembling "speed" for it's first year or so was Opera, and Opera never really worked very well with it anyway."

    I enjoy security through obscurity but Opera is just too good a browser to ignore.

  • Bottom line: just because you're using a GC'd language and you CAN ignore memory management, doesn't mean you SHOULD. That goes for JS, Java or any other GC'd language in existence.

    I hate to go off on a tangent... but that won't stop me from doing so, because I think it's actually the core of the issue and is entirely non-technical:

    This all goes back to the abysmal state of many (most?) "modern" developers.

    If you grew up with computers at the time I did, the late 70's/early 80's, and you learned to program those early 8-bit home computers, you kinda take this stuff for granted (memory management, I mean). You just inherently think differently than "modern" developers do. You see things at a much lower level... even when you're working at a high level of abstraction, your mind automatically goes lower... instantiating an object in Java? Your mind at some level is thinking about how memory is being allocated, how the object reference is being stored, etc. Hell, you even start to think about the messages the OS is passing around, how those messages must map to C functions, and how those functions ultimately resolve down to assembly.

    I'm NOT saying you KNOW all those details... not really... you just know the concepts... and I'm certainly not saying such details are relevant most of the time because they're not... I'm just saying that's the way our brains work... we can "see" all the levels below the one we're actually working on in our minds' eye, if only in a conceptual sense, and it happens without trying.

    It's because we generally started learning at those low levels, and everything over the years has built up logically from there. Most of us started with BASIC but quickly jumped to assembly because that was the only way to achieve what we really wanted (games, mostly). Once you're at that level, it's an entirely different mindset. Those of us who also had an electronics background went a step further, because we sometimes even went below the assembly level (and that wasn't all that uncommon back then... of course, the electronics were considerably simpler and easier to understand than they are now).

    That's a VERY different evolution than the kid that STARTS with Java or JavaScript or whatever now, then goes to school and learns more high-level stuff. And it shows in daily work life all the time! I see people constantly in my career who aren't really bad developers, but they are, somehow, lacking... it usually shows when things aren't working as expected. They have a difficult time breaking things down and figuring out what's going on. Oh, they can Google an answer as well as anyone, and hey, probably 9 times out of 10 that's sufficient. But they're just stumped beyond belief that one time... they just can't get into the details and work the problem at a fundamental level. They don't REALLY understand how these machines, these operating systems, work. And that's a really bad state of affairs.

    (to be fair, some of us that learned in the "ground-up" way sometimes have difficulty STAYING at a high level... we sometimes trip over discussions that are too abstract because our brains are searching for the details that aren't there, and really aren't even relevant... that's a whole other discussion, but it's a true phenomenon).

    All of this... to try and pull it back to topic relevance... means that relatively simple things like designing your code to minimize object allocation and deallocation seem mysterious to a lot of modern developers... they don't always get why it's important, and it seems like some black art even when they do... to us old-schoolers, I guess that's what we are now, it's actually quite natural to think that way. Even in JavaScript, where I've done considerable and highly complex work, GC has never presented a big issue for me, primarily because I've ALWAYS thought about it and know how to avoid it at the right times. The language itself isn't flawed; modern developers' ability to use it effectively is.

    We're proba
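
    For the younger crowd reading along, "designing your code to minimize object allocation and deallocation" in JavaScript mostly means not creating new objects and arrays in code that runs constantly. A contrived before/after (the buffer size is arbitrary):

      // Allocation-heavy: every call builds a new array plus a new object per
      // point, all of which become garbage as soon as the caller is done.
      function translateAllocating(points, dx, dy) {
        return points.map(function (p) {
          return { x: p.x + dx, y: p.y + dy };
        });
      }

      // Reuse: one typed array is allocated once and overwritten in place, so
      // steady-state allocation in the hot path is zero.
      var scratch = new Float32Array(2 * 1024);
      function translateInPlace(coords, count, dx, dy) {
        for (var i = 0; i < count; i++) {
          scratch[2 * i] = coords[2 * i] + dx;
          scratch[2 * i + 1] = coords[2 * i + 1] + dy;
        }
        return scratch;   // callers read from, but never keep, the shared buffer
      }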

  • by yusing ( 216625 ) on Monday July 15, 2013 @04:55PM (#44289601) Journal

    "How much memory is available on iOS? It’s hard to say exactly."

    Exactly. Because it's an appliance. And you're either willing to take it apart and learn how it works so you can make it do what's needed, or you accept that you're an appliance operator and stop bitching about it.
