Programming

Why JavaScript On Mobile Is Slow 407

Posted by Soulskill
from the i-blame-the-schools dept.
An anonymous reader writes "Drew Crawford has a good write up of the current state of JavaScript in mobile development, and why the lack of explicit memory handling (and a design philosophy that ignores memory issues) leads to massive garbage collection overhead, which prevents HTML5/JS from being deployed for anything besides light duty mobile web development. Quoting: 'Here’s the point: memory management is hard on mobile. iOS has formed a culture around doing most things manually and trying to make the compiler do some of the easy parts. Android has formed a culture around improving a garbage collector that they try very hard not to use in practice. But either way, everybody spends a lot of time thinking about memory management when they write mobile applications. There’s just no substitute for thinking about memory. Like, a lot. When JavaScript people or Ruby people or Python people hear "garbage collector," they understand it to mean "silver bullet garbage collector." They mean "garbage collector that frees me from thinking about managing memory." But there’s no silver bullet on mobile devices. Everybody thinks about memory on mobile, whether they have a garbage collector or not. The only way to get "silver bullet" memory management is the same way we do it on the desktop–by having 10x more memory than your program really needs.'"
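The "thinking about memory" the article describes applies to JavaScript too, even with a garbage collector. A minimal sketch of the idea (all names here are illustrative, not from the article):

```javascript
// Allocating a fresh array (and a closure) every call creates garbage
// the GC must later collect -- the pattern the article warns about:
function sumPairsNaive(points) {
  return points.map(p => p.x + p.y);          // new array per call
}

// Reusing a preallocated buffer keeps the hot path allocation-free:
const scratch = new Float64Array(1024);       // sized for the expected workload
function sumPairsPooled(points) {
  for (let i = 0; i < points.length; i++) {
    scratch[i] = points[i].x + points[i].y;   // no new objects per call
  }
  return scratch.subarray(0, points.length);  // a view, not a copy
}
```

Both return the same values; the second just avoids feeding the collector on every call.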
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • always (Score:5, Insightful)

    by Spaham (634471) on Wednesday July 10, 2013 @05:50PM (#44244249)

    You always need to think about memory. Like you need to think about what you're doing.
    Too bad for the "write app get rich" idiots.

  • Re:Easy (Score:5, Insightful)

    by Anonymous Coward on Wednesday July 10, 2013 @06:08PM (#44244453)

    We already know how to "really code". We just got sick of reinventing the wheel every time we start a new project. Now we let the libraries do the tedious crap, and we focus our attention on where it's actually needed.

    You're going to use our library-heavy code, and you're going to like it. You already do, in fact. You're lying when you pretend otherwise.

  • Re:Easy (Score:5, Insightful)

    by girlintraining (1395911) on Wednesday July 10, 2013 @06:09PM (#44244471)

    Stop loading dozens of fucking libraries and frameworks and learn to really code.

    If memory management was so easy, we wouldn't have devoted so much of our programming guides, style manuals, etc., to it. It's not a simple matter of "I wave my hand and the problem goes away." It has existed since before there were "dozens of fucking libraries and frameworks" and at a time when people did know how to "really code"... it has existed since the very. first. computer. And it hasn't been solved to this day.

    The main reason, I suppose, is the same reason why we haven't yet found The One True Concrete that all things can be built out of, or the One True Operating System upon which everything can run, or the One True... you get the damn idea. Men much smarter than you have devoted their entire careers to trying to solve the problem, and it's incredibly pretentious of you to toss off a one liner like it's (puts on sunglasses) just a simple matter of programming.

  • Re:Easy (Score:1, Insightful)

    by Anonymous Coward on Wednesday July 10, 2013 @06:12PM (#44244507)

    Stop loading dozens of fucking libraries and frameworks and learn to really code.

    YEAH, anything beyond C is OVERHEAD!

  • by tokizr (1984172) on Wednesday July 10, 2013 @06:15PM (#44244539)
    You could just 'force' people to use a language with explicit memory management, like by offering [better] support for that particular language (C/C++ is best but I understand people do not enjoy these lower level languages as much). I always thought that the best form of garbage collection is not having garbage collection at all, but managing your memory efficiently and having good allocators. Yet even in languages such as Java/JavaScript you can be smart about your objects so as to minimize the underlying allocations. I would suppose JavaScript may be a little harder since it's dynamically typed, but it should still be possible.
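    One common way to "be smart about your objects" in a GC'd language is an object pool: recycle instances instead of letting them become garbage. A minimal sketch (all names illustrative):

```javascript
// A tiny object pool: acquire() hands out a recycled object when one is
// available, release() returns an object to the free list instead of
// letting the GC reclaim it.
class Pool {
  constructor(factory, reset) {
    this.factory = factory;  // creates a fresh object
    this.reset = reset;      // returns an object to a blank state
    this.free = [];
  }
  acquire() {
    return this.free.length ? this.free.pop() : this.factory();
  }
  release(obj) {
    this.reset(obj);
    this.free.push(obj);     // recycled instead of becoming garbage
  }
}

const vecPool = new Pool(
  () => ({ x: 0, y: 0 }),
  v => { v.x = 0; v.y = 0; }
);
```

    The trade-off is the one the thread keeps circling: you are back to thinking about object lifetimes yourself, because a released object must never be used again by its old owner.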
  • by slacka (713188) on Wednesday July 10, 2013 @06:18PM (#44244581)

    This is one of the major flaws behind these Web based Mobile OS's; you'd think that after WebOS, beautiful as it was, Mozilla would have learned their lesson. Instead, they're trying to drive underpowered hardware with HTML/JS. All the web technologies are being shoehorned into areas they were never designed for: from the DOM being used for applications, to a lightweight scripting language, JavaScript, being used for apps, to a bloated HTML renderer as the platform's UI toolkit.

    JavaScript is a nice little scripting language that's got some nice functional programming features. When you need to write heavy applications that require performance, low memory usage, and multithreading, it's the wrong choice.

  • by ShanghaiBill (739463) on Wednesday July 10, 2013 @06:39PM (#44244813)

    Even if you don't use jQuery, for example, you're going to need to find and then use some other library that does the same thing

    Furthermore, popular libraries like jQuery, jQuery Mobile, etc. are much more likely to have clean, efficient, memory-lite implementations than some "roll-your-own" code. If you choose your libraries carefully, learn to use them, and avoid "rolling your own" unless really necessary, your code will certainly be smaller and cleaner, and will usually be faster and use less memory.

  • by ducomputergeek (595742) on Wednesday July 10, 2013 @06:55PM (#44244959)

    Most of the "apps" I'm being hired to write are basically CRUD form apps designed to read info from tables in a database. Usually the job is to take forms already in use on desktops, written in Java or .Net or in some cases god only knows what, and adapt them for use on mobile devices.

    I've frankly found jQuery Mobile + HTML5 + Phonegap/Cordova makes this task fairly easy to undertake client side. Actually, in most cases the cost is still in developing and deploying the API side in your choice of server side scripting language. And often that's based upon a perl script that I wrote circa 2000 to take form input, validate it, and then go fetch data from a database and return it in XML, YAML, or JSON these days. On other projects, the server side is in PHP or C# or Java. It just depends on what the client already has.

    Now, I can see how trying to build other types of apps using HTML5/JS is asking for disaster.

    Sorry, I'm an old perl guy who thinks use the right tool for the job and there is still more than one way to do it.

  • by Nerdfest (867930) on Wednesday July 10, 2013 @06:59PM (#44244997)

    Learning a flaky, inconsistent language is only prolonging the problem. The web needs to move to something sane. As I said to someone the other day, it's extremely sad that the two most popular languages used for web development are two of the worst languages around (JavaScript & PHP). It goes a ways towards explaining the quality of web software in general.

  • Re:always (Score:2, Insightful)

    by Anonymous Coward on Wednesday July 10, 2013 @07:03PM (#44245037)

    Garbage collected languages have live-leaks that can have exactly the same memory bloat consequences that other memory leaks do. It's where you keep around a reference to an object sub-graph that you aren't actually using. This gets extra bad if you end up building up a linked list of such leaked objects and the linked list grows in size as your application runs. So you do need to think about live leaks every time you store a reference, it's just that the consequences are likely to be less dire if you get it wrong. If you are only thinking about this once a year, that probably means your code is riddled with this issue. So the GP is exactly right in stating that "memory leaks will still bring your system to a crawl" even for garbage collected languages.
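    A minimal sketch of the live leak this comment describes, plus one way to bound it (names are illustrative):

```javascript
// A "live leak": objects the program no longer needs, but which stay
// reachable because a reference survives -- so the GC can never free them.
const seenRequests = [];                 // grows forever as the app runs
function handleRequestLeaky(req) {
  seenRequests.push(req);                // every request stays reachable
  return req.id;
}

// One fix: bound what you retain (here, only the last LIMIT entries).
const recent = [];
const LIMIT = 100;
function handleRequestBounded(req) {
  recent.push(req);
  if (recent.length > LIMIT) recent.shift();  // old entries become collectible
  return req.id;
}
```

    Nothing here is a "leak" in the malloc/free sense; every byte is still reachable, which is exactly why the collector cannot help.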

  • by loufoque (1400831) on Wednesday July 10, 2013 @07:05PM (#44245061)

    Yet C developers have no problem using C, which is a much more minimal language, to do much more than what you do with JavaScript, and they rarely depend on shitloads of libraries.

  • by amicusNYCL (1538833) on Wednesday July 10, 2013 @07:05PM (#44245067)

    You'll find that just about every feature your "essential" library provides has a native equivalent that works across browsers -- even as far back as IE 8.

    That's a pretty naive view that over-simplifies the situation. One major use for a framework, for example, is to normalize the behavior of different browsers. Another major use is to provide implementations to create interface elements. Now, obviously, everything is natively supported because the Javascript framework is right there doing it, natively. But why should I write the necessary logic to create a draggable window, or a tree view, or sortable grid, when I can just pull that in from a framework? ExtJS [sencha.com] is the kind of framework I'm thinking of. Why should I implement ajax-style uploads inside an iframe when they already did that for me, and I can just set up a form panel, indicate it is a file upload form, and write the important stuff?

    Even though I can use a massive ExtJS application on a phone, we're not talking about massive applications per se, we're talking about mobile Javascript. So there are things like Sencha Touch [sencha.com] for that. Sure, I could write native applications for every device listed under the supported devices section, but why is it smart to do that when I can write a single codebase that I can package for multiple devices?

    Or maybe I'm just not "familiar" with Javascript, or development concepts in general. Hopefully you can enlighten me on the merits of reinventing the wheel every time you create something.

  • by WaffleMonster (969671) on Wednesday July 10, 2013 @07:29PM (#44245259)

    I've found the best way to get developers to stop being lazy is to give them shit hardware.

    The mistake is buying the latest quad core 2GB android goodness... Give them a 5 yr old piece of shit and the resulting mobile apps will rock.

  • by Nerdfest (867930) on Wednesday July 10, 2013 @07:35PM (#44245305)

    Compared to those other two, Java is a dream language.

  • Re:always (Score:4, Insightful)

    by Anonymous Coward on Wednesday July 10, 2013 @07:37PM (#44245323)

    I write C code all day everyday and never once worry about whether something will leak or double-free. You quickly learn patterns which make object lifetime management safe and simple. In C++ they generically call it RAII. But the basic patterns are simple and are trivial to follow in C, there's just less syntactic sugar (which often is a good thing because you then tend to economize).

    The number of times I've leaked memory in C is probably less than I've leaked memory in a GCd language (yes, you can leak memory in GC). If memory management is anything more than a simple chore, then you're doing it wrong. It does take practice, though.

    Don't get me wrong, I also use mark+sweep (e.g. Lua) and reference-counted (e.g. Perl) garbage collected languages. Good times. You just can't use a GC'd language for anything that will use a lot of memory in a single process. This is why people complain about Java all the time. The time spent walking the object tree grows faster than linearly. The article fails to grasp that point, although the Microsoft C# engineer nailed it when he mentioned that object references can grow exponentially. The reason why more memory makes it seem like GC is faster is simply that with more memory you can just let junk sit around longer, and then destroy the entire heap all at once. At least for web pages, which tend to be ephemeral. That doesn't work well for long-lived server processes, though, in Java or C#, which can't avoid constantly cycling through all memory.

    If you must use a GCd language for memory intensive stuff, you need to use multiple processes and IPC, not just threads, so you can partition your heap memory. Partitioning that way will improve GC algorithmic performance better than linearly. Unfortunately Java and C# don't make multiprocess architectures as easy to implement as in Unix+C, where you can create nameless socket pairs, pre-fork, and pass file descriptors, etc, over the channels, all without polluting any external namespaces--i.e. no loopback ports or temporary files.
     

  • by narcc (412956) on Wednesday July 10, 2013 @07:48PM (#44245403) Journal

    I clicked on the first link and scrolled down a bit.

    The answer appears to be "most of them" and "most of the remainder if you know how to write a for loop".

    Like I said earlier: do the web a favor and just learn JavaScript.

    You'll find that it's not only easier, but your code will be significantly faster. If you're dropping jQuery, you'll find that your code is also significantly easier to read and maintain. No need to make giant chains just to get your performance from "horrible" to "terrible" -- you get "acceptable" automatically, and "good" or "fantastic" once you have a better understanding of the language.

    Yeah, we know about document.querySelector()

    Apparently not. Take a look around the web. You'll find that the bulk of jQuery use can be replaced by querySelector and querySelectorAll -- often just by getElementById and getElementsByClassName.

    Really, I've seen lots of sites and code samples that use jQuery just to select a single element by id! All because the author either didn't know about getElementById or was too lazy to type it out. It's horrifying.

    There are other stupid uses as well. Dropdown menus built with jQuery, like the popular superFish menu. What makes this particularly crazy is that it's trivial to build a dropdown menu without jQuery, or any JavaScript code at all! All you need is a little CSS. (Just a few lines, as it turns out.) If you don't have the 10 minutes it takes to figure it out yourself the first time, there are several websites that will generate a cross-browser pure CSS dropdown menu for you with just a few simple clicks!

  • by Anonymous Coward on Wednesday July 10, 2013 @07:53PM (#44245425)

    How is this informative? My guess is that it was modded up by people who, like the poster, have only basic experience writing JavaScript.

    When you're writing code in any language as a hobby or just getting your feet wet, you love the simplicity of minimal overheads and writing everything yourself, because of course you can do a better job than all the other shmucks. When you're writing code for a living or you advance beyond variations of hello world, you come to realise that you're not actually more enlightened than the pros, and that frameworks actually do make for more maintainable code which is cleaner and more efficient than you could have written yourself, as well as more rigorously tested and secure. When you're writing significant code, you see, you are taking advantage of large parts of the core of all frameworks, which you would otherwise have had to reinvent yourself in a far poorer fashion.

    In short, if your needs extend as far as simple form validation, maybe the framework is overkill, but once you're starting to write substantial web (and non-web) applications, you'll really learn to appreciate frameworks. Frameworks are also used widely for simpler sites because no-one is prepared to pay you to reinvent the wheel. In these cases, one can usually take advantage of subsets of Frameworks, to avoid loading lots of unnecessary code.

    For what it's worth, this argument applies to all languages, and you won't find any sensible programmers writing directly to win32 or X api. Instead, they're using .NET, GTK, QT, Wx, etc. because they know that they would spend their life writing essentially equivalent implementations themselves to cover the features that they need. Sure, one might use win32 directly for a simple utility (been there recently, in-fact), but not for any serious application.

  • by exomondo (1725132) on Wednesday July 10, 2013 @07:57PM (#44245447)

    Learning a flaky, inconsistent language

    JavaScript is "flaky" and "inconsistent"?

    What on earth are you talking about?

    He means it's gained popularity.

  • Re:always (Score:3, Insightful)

    by EvanED (569694) <evaned&gmail,com> on Wednesday July 10, 2013 @08:04PM (#44245505)

    The reason why more memory makes it seem like GC is faster is simply because with more memory you can just let junk sit around longer, and then destroy the entire heap all at once. At least, for web pages, which tend to be ephemeral. That doesn't work well for long-lived server processes though, in Java or C#, which can't avoid constantly cycling through all memory.

    It's more than that though, and it's not just that it seems faster -- it is faster. And it's not just because your program may end before needing to collect or something like that.

    The reason that GCs are memory intensive is the design of good GCs, which are usually generational: they have multiple memory spaces and actually move objects between them. The presence of those spaces is what increases the memory requirement, because you need to make them big enough to contain a bunch of dead objects too, or you'll hit the performance wall I'll talk about in a second.

    The reason that having too little memory hurts is not just because your GC runs more frequently, but you start losing out on the benefits of having those multiple spaces. The division between long-lived and short-lived objects starts to diminish, because an object doesn't have to live as long to get promoted to the next generation. You might promote objects to the oldest generation sooner, and the older generation loses out on some of the mitigating reasons why GC doesn't have to be so bad.

    In short, GC actually works well speedwise even for long-lived processes, in spite of them churning through memory. They just need more memory (3-5x) to get the same performance.
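    The promotion mechanics described above can be sketched as a toy model (this is only an illustration of the bookkeeping, not how V8 or the JVM actually implement it):

```javascript
// Toy generational collector: a small nursery scanned often, an old
// generation scanned rarely. Objects that survive enough minor
// collections are promoted.
class ToyGenGC {
  constructor(promoteAfter = 2) {
    this.nursery = new Map();   // object -> number of collections survived
    this.oldGen = new Set();
    this.promoteAfter = promoteAfter;
  }
  alloc(obj) { this.nursery.set(obj, 0); return obj; }
  // Minor collection: scan only the (small) nursery.
  minorCollect(liveSet) {
    for (const [obj, age] of [...this.nursery]) {
      if (!liveSet.has(obj)) {
        this.nursery.delete(obj);            // dead young object: freed cheaply
      } else if (age + 1 >= this.promoteAfter) {
        this.nursery.delete(obj);
        this.oldGen.add(obj);                // promoted: rarely scanned again
      } else {
        this.nursery.set(obj, age + 1);
      }
    }
  }
}
```

    The parent comment's point shows up directly in this model: shrink the nursery and minor collections fire more often, so objects hit the promotion threshold while still short-lived, and the old generation fills with junk.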

  • by phantomfive (622387) on Wednesday July 10, 2013 @08:07PM (#44245517) Journal
    Try picking a random package and getting it to cross-compile for a platform like, for example, Windows. You'll suddenly realize how many dependencies there actually are.
  • by wisnoskij (1206448) on Wednesday July 10, 2013 @08:18PM (#44245587) Homepage

    Better than spending months trying to get rid of all the memory leaks.

  • by Rockoon (1252108) on Wednesday July 10, 2013 @08:26PM (#44245647)

    JavaScript is "flaky" and "inconsistent"?

    What on earth are you talking about?

    I think that he's talking about JavaScript being flaky and inconsistent.

  • by narcc (412956) on Wednesday July 10, 2013 @09:40PM (#44246045) Journal

    It's pretty well-known that jQuery is an absolute mess. A lot of effort has gone in to improving it, sure, but that sort of makes the point, doesn't it? Take a look through the code yourself. It still isn't pretty.

    To call it "memory-lite" is just absurd. (Get a profiler and run some tests if you have trouble believing that.)

    It's also hard to argue that a complex generalized solution to some problem will be faster than one written with a specific case or set of cases in mind. The abysmal performance of jQuery for even simple operations is evidence enough of that. (See any one of a zillion tests, or run a few of your own if you can't find one to your liking and you'll see what I mean.) [jsperf.com]

    jQuery isn't stellar, obviously, but jQuery UI and jQuery Mobile make it look positively light-weight in comparison! jQuery has often been blamed for PhoneGap's performance problems. Again, this is something you can see for yourself.

    Even the most ardent jQuery fan will acknowledge that jQuery (especially jQuery UI and Mobile) is a performance killer; they'll just say "it doesn't matter because computers are getting faster every year" or something equally silly in defense of their favorite library. To claim that jQuery is actually *faster* than a native solution is just crazy.

    Just for fun [vanilla-js.com]
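    The "generalized vs. specific" point above has a simple shape, independent of the DOM: a generic query engine must interpret a query and scan candidates, while a purpose-built lookup can use an index. A DOM-free sketch (names and sizes are illustrative):

```javascript
// Stand-in for a generalized selector engine: evaluate a predicate
// against every candidate. O(n) per query, plus interpretation overhead.
const items = Array.from({ length: 10000 }, (_, i) => ({ id: 'item' + i }));
function genericSelect(list, predicate) {
  return list.filter(predicate);
}

// Stand-in for a specific API like getElementById: an index built once,
// then O(1) per lookup.
const byId = new Map(items.map(it => [it.id, it]));
function getById(id) {
  return byId.get(id);
}
```

    Browsers maintain exactly this kind of id index internally, which is why getElementById beats routing every trivial lookup through a selector engine.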

  • by TopSpin (753) on Thursday July 11, 2013 @12:50AM (#44247149) Journal

    The market decided long ago that fewer programmer hours was better than users waiting a few seconds everyday for their device to GC.

    No, actually, that's not what happened. As the summary and the story itself (both of which apparently went unread) point out, one of the most successful systems to emerge in the market recently, iOS, is not a GC environment.

    Over here [apple.com] you may learn about iOS memory management. Without getting too far into that wall of text one discovers the following:

    If you plan on writing code for iOS, you must use explicit memory management (the subject of this guide).

    Ok, so your claim that GC is the only viable solution for contemporary application development is demonstrably false. Let's look at some other assertions:

    programmers are inherently bad at memory management. Memory will leak [if programmers must manage it].

    First, the vast number of iOS applications not leaking shows that a non-GC system doesn't necessarily have to leak. At least not badly enough to compromise the viability of the platform, which is the only meaningful criterion I can think of when it comes to the market.

    Second, why assume programmers are inherently bad at a thing when that thing has traditionally been exposed via terrible, error prone, demonstrably awful mechanisms? It seems to me that among widely used tools we leaped from 'systems' languages with truly heinous MM primitives (C/C++) directly into pervasive GC systems. Aside from Objective C+ARC there just aren't enough good non-GC systems to make broad generalizations. Thus, you may be right about programmers, but you can't prove it, and I doubt it.

    Finally, what proof is there that pervasive GC is better at not leaking than a good explicit MM system? Anyone with an Android system and a bunch of apps will quickly discover that pervasive GC does not eliminate leaks.

    [some phone] comes with a whopping 2GB of RAM

    Google Glass has 682MB of RAM. There is always a new platform into which we must fit our software, and the new platform is usually resource constrained, so there will never be a day when questioning the cost of GCs is wrong. Maybe the wearable you eventually put on will have 8 GB of RAM. The computers you swallow or implant or sprinkle around the lawn probably won't. The fact that the next generation of phones can piss away RAM on greedy GCs just isn't particularly informative.
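    The ARC-style alternative this comment points at can be sketched as a toy in JavaScript: reference counting reclaims deterministically, with no collector pauses, at the cost of leaking cycles and making the programmer balance retains and releases (everything here is illustrative):

```javascript
// Toy reference counting in the spirit of retain/release under ARC.
class RcBox {
  constructor(value, onFree) {
    this.value = value;
    this.count = 1;            // the creator holds the first reference
    this.onFree = onFree;      // stands in for returning memory to the OS
  }
  retain()  { this.count++; return this; }
  release() {
    if (--this.count === 0) this.onFree(this.value);  // freed immediately,
  }                                                   // no GC pause needed
}
```

    The deterministic "freed the instant the last reference goes away" behavior is what makes this attractive on memory-constrained devices; the classic weakness is that two objects referencing each other never reach a count of zero.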
