
Former Sun Mobile JIT Engineers Take On Mobile JavaScript/HTML Performance

First time accepted submitter digiti writes "In response to Drew Crawford's article about JavaScript performance, Shai Almog wrote a piece offering a different interpretation of the performance costs (summary: it's the DOM, not the script). He then gives several examples of where mobile Java performs really well on memory-constrained devices. Where do you stand in the debate?"
This discussion has been archived. No new comments can be posted.

  • I blame the DOM too (Score:5, Interesting)

    by Anonymous Coward on Monday July 15, 2013 @01:11PM (#44287095)

    An arbitrary tree of arbitrary objects with arbitrary methods of manipulating them thanks to years of incremental changes that are never properly documented (quick! point to the document showing that a select tag has a value attribute!) and are never deprecated.

    • by pspahn ( 1175617 ) on Monday July 15, 2013 @01:40PM (#44287399)

      quick! point to the document showing that a select tag has a value attribute!

      I'm not sure what you're looking for here. Are you saying that having a value attribute on a select element is something that is simply undocumented but valid?

      Or, are you saying that this is a deprecated attribute still found in the wild and there is no doc to explain this?

      Certainly invalid attributes are going to add some overhead to the DOM, but I don't think that's necessarily the reason a sluggish UI can be blamed on the DOM. I would imagine that a simple DOM with ten elements, each element with ten attributes would still be faster than a DOM with 100 elements, each with one attribute. Of course, this largely depends on what you're going to be doing with the elements and the attributes, but in the case of simple UI updates to the DOM, the elements are going to change more often than all of the attributes. You might update a couple attributes here and there, but the rest of the attributes are probably left as they were since they are likely unrelated to whatever UI update you are performing.
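
      If anyone wants to test that intuition rather than guess, a rough browser-console sketch along these lines would do. This is a minimal benchmark with arbitrary element/attribute counts and iteration figures, and a real test should also measure layout after changes:

      ```javascript
      // Build a tree with the given shape and attach it, so updates hit a live DOM.
      function buildTree(elementCount, attrCount) {
        const root = document.createElement('div');
        for (let i = 0; i < elementCount; i++) {
          const el = document.createElement('span');
          for (let j = 0; j < attrCount; j++) el.setAttribute('data-a' + j, 'v' + j);
          root.appendChild(el);
        }
        document.body.appendChild(root);
        return root;
      }

      // Time the "typical UI update": touching one attribute on one element.
      function timeUpdates(root, label) {
        const t0 = performance.now();
        for (let i = 0; i < 10000; i++) {
          root.firstChild.setAttribute('data-a0', 'v' + i);
        }
        console.log(label, (performance.now() - t0).toFixed(1), 'ms');
      }

      timeUpdates(buildTree(10, 10), '10 elements x 10 attrs');
      timeUpdates(buildTree(100, 1), '100 elements x 1 attr');
      ```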

      • by Livius ( 318358 )

        Or maybe just most webpages are ridiculously bloated.

    • by phantomfive ( 622387 ) on Monday July 15, 2013 @01:49PM (#44287487) Journal
What does your comment have to do with performance? The DOM has some awkward parts, certainly; and more annoyingly, it has cross-platform incompatibilities.

      But what does that have to do with performance? How would documenting the value attribute of the select tag make things run faster?
      • by Xest ( 935314 )

        I think what he's saying is that the DOM is poorly designed and that's partly because of and partly contributes to poor documentation.

        The net result is a clusterfuck with many other clusterfucks built on top of it and we all know that clusterfucks stacked on clusterfucks are especially slow. Or something.

        No seriously, I think there's some merit in the point: if it's not well designed, and everyone and anyone has just thrown their own bits and pieces in, and much of it isn't well documented, then it becomes…

        • I'm not sure; I'm having trouble thinking of what could be removed to make things more optimized… a parallel example is x86, which has many legacy instructions, but you don't need to use them.
    • by maxwell demon ( 590494 ) on Monday July 15, 2013 @01:49PM (#44287489) Journal

      (quick! point to the document showing that a select tag has a value attribute!)

      Here it is. [slashdot.org]

    • Agreed. The old trade-off between:

      <-- fast & rigid --- slow & features/flexibility -->

    • quick! point to the document showing that a select tag has a value attribute!

      It's in HTML [whatwg.org].

      This is very much one of the major achievements of HTML5: specifying behavior that is interoperable and required to avoid breaking the web, but was historically undefined. Before that, one couldn't practically build a new web browser without reverse-engineering the existing ones.

  • by Anonymous Coward on Monday July 15, 2013 @01:21PM (#44287203)

    I think the point of Drew's article wasn't that performance had to suffer; it was that garbage collection isn't free. It has to take place, so it's not an O(GC)=0 component. If garbage collection takes place over a lot of memory, but not -enough- memory, it takes a very long time, in real time. Depending heavily on the application, that may be very visible at the UI level.

    Programmers intent on using all of the resources available, and performing intensive tasks, should think about means other than garbage collection.

    • To complete the point: he also tries to make as clear as possible the importance of managing your memory usage (with GC or not), and how JavaScript goes completely against this idea.
    • by Anubis IV ( 1279820 ) on Monday July 15, 2013 @03:19PM (#44288629)

      That's not the only issue with his statements about garbage collection.

      Drew made an argument that garbage collection on mobile performs poorly due to memory constraints on the platform. Almog countered by pointing out that Drew used a desktop garbage collector rather than a mobile garbage collector, which is important, since mobile garbage collectors are more aggressive at cleaning up stuff like unused paths from JIT, meaning that they make better use of their available memory. Almog was quick to note that being more aggressive also means that the garbage collector performs worse than its desktop counterpart.

      Stop and re-read that last sentence, because if I'm understanding this correctly, Almog is basically making the argument that we can avoid the memory concerns Drew brought up -- concerns which were only brought up to explain why there are performance issues -- by using a garbage collector that performs worse.

      It's possible, I suppose, that the performance hit from being more aggressive is less severe than the performance hit from running up against the limitations of memory more often, but if that's the case, Almog really should have said so, since right now it sounds like he's completely undermining his own point.

      • by swilly ( 24960 )

        The original claim is that performance is worse by orders of magnitude in memory-constrained environments. It doesn't sound like the mobile-optimized GC is orders of magnitude worse than a desktop-optimized GC, so in a memory-constrained environment the mobile GC would still perform better than the desktop GC.

        Of course, the author doesn't provide any numbers, so all we have to go by are his expertise and the reasonableness of his arguments. Further research will be necessary.

      • by Xest ( 935314 )

        I don't really see what's surprising about all that. The mobile GC does more to prevent itself from hitting memory constraints, and doing more is always going to cost more than doing less, so of course the mobile GC will perform worse than the desktop GC.

        But I don't think he's saying it performs worse than the desktop one would when it hits memory constraints of the device and that's really the key.

        Optimisation is often about the trade-off between memory/storage and processing. If memory/storage/bandwidth is…

      • The real problem is not with raw amortized performance, in any case. It's about responsiveness, which is basically performance at any particular point in time. If said point in time happens to be during a GC cycle, then your latency will be abysmal. Unfortunately, it's very hard if not impossible to ensure that a GC cycle does not occur at the point where latency is critical, especially in a language like JS which gives absolutely no guarantees about what may trigger a GC (or even a memory allocation).
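
        To make the latency point concrete, here is a minimal sketch of the usual mitigation in JavaScript: hoist allocations out of the frame loop so the collector has as little as possible to reclaim mid-animation. draw() is a hypothetical stand-in for real rendering:

        ```javascript
        // Illustrative only: draw() stands in for whatever actually renders.
        function draw(p) { /* render p.x, p.y */ }

        // Allocating version: a fresh object every frame is garbage the collector
        // must eventually stop and reclaim, at a moment JS gives no control over.
        function frameNaive(now) {
          const pos = { x: Math.sin(now / 500), y: Math.cos(now / 500) };
          draw(pos);
          requestAnimationFrame(frameNaive);
        }

        // Allocation-free version: one long-lived object, mutated in place, so a
        // collection is far less likely to land in the middle of the animation.
        const pos = { x: 0, y: 0 };
        function frameSteady(now) {
          pos.x = Math.sin(now / 500);
          pos.y = Math.cos(now / 500);
          draw(pos);
          requestAnimationFrame(frameSteady);
        }
        requestAnimationFrame(frameSteady);
        ```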

    • Programmers intent on using all of the resources available, and performing intensive tasks, should think about means other than garbage collection.

      This debate is as old as the hills. I'll just point out that it's not so much that GC is terrible, so much as it's indelibly associated with managed languages that either are Java or use very Java-inspired designs (like C#), in which objects and heap allocation are treated as nearly free.

      To prove my point, I cite Unreal Engine, a serious piece of code with ve…

      • The core issue is that GC vs no GC is a false dichotomy. You can't get away without GC; the question is whether you use a general-purpose algorithm, like tracing or reference counting with cycle detection, or a special-purpose design. This is especially true on mobile: if a desktop application leaks memory then it will take a while to fill up swap, but it might not be noticed. Apple provides a very coarse-grained automatic GC for iOS: applications notify the OS that they have no unsaved data and then can…

        • I think that a more convincing argument is that memory is just one of many scarce resources, and GC only helps you with memory and not everything else. For the rest of it, we still haven't come up with anything better than deterministic finalization (manual or RAII). And once you're forced to use it for everything else, why wouldn't you also use it for memory, especially if it gives you a better memory profile?
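
          For illustration, deterministic release in JavaScript today mostly means an explicit dispose pattern; a minimal sketch, with openFile/close/readAll as hypothetical APIs:

          ```javascript
          // Deterministic cleanup for a non-memory resource: the release point is
          // explicit and ordered, unlike GC finalization which runs whenever (if ever).
          function withResource(acquire, release, use) {
            const res = acquire();
            try {
              return use(res);
            } finally {
              release(res); // runs even if use() throws
            }
          }

          // Hypothetical usage:
          // const text = withResource(() => openFile('log.txt'), f => f.close(), f => f.readAll());
          ```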

          • The counter-argument to this is simple: memory allocation accounts for 99% of all scarce resource allocation in a typical program (and all of the resources a program is actually likely to exhaust: when was the last time you saw a program with so many file descriptors open at once that it was hard to keep track of them and they came anywhere close to the system limit? It happens, but in very unusual code). Saying 'well, I have to do it for 1%, I may as well do it for the other 99%' is really not a very…
            • I think the 1/99 is not a realistic proportion. The problem with resource management in the context of OO design is that you tend to aggregate data (objects owned by other objects). When that happens, as soon as you've got one owned object that needs to release a non-memory resource, you need to use deterministic disposal for the entire ownership tree all the way to the top.

              This also makes it very unpredictable to figure out what needs deterministic releases from the perspective of your API clients, because…

      • To prove my point, I cite Unreal Engine, a serious piece of code with very tight performance constraints. It's capable of hitting high, smooth frame rates, and it uses a garbage collected heap for the core game state (lots of objects with lots of pointers between them).

        Thing is, no-one is trying to do animations in UnrealScript - it's there to script relatively high-level object interactions. All the rendering code and such is implemented in C++, and at that point it is in full control of memory allocation (i.e. it won't get suddenly interrupted by a GC deciding to walk the object graph in a middle of a drawing routine).

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Monday July 15, 2013 @01:26PM (#44287253)
    Comment removed based on user account deletion
    • Re:No rebuttal (Score:5, Interesting)

      by gl4ss ( 559668 ) on Monday July 15, 2013 @01:52PM (#44287519) Homepage Journal

      I'm just blabberfastebasted about the summary.

      first it goes on about javascript and then about mobile java performing really well on some devices.
      j2me - commonly known as mobile java, being something entirely different from javascript.. and j2me did pretty well. you could do wolfenstein clones in 128kbytes(was a common .jar size limit, earlier than that 64kbytes was pretty common) on machines with ~300-400kbytes of ram available for you.. and gc worked just fine, so long as you didn't abuse String or do shit like that - but that's a bit easier to code for in java than in javascript imho(reusing objects etc).

      dom though.. of course, it's ridiculous to use for highly responsive stuff, while javascript itself can run pretty nicely. lacking real threads certainly doesn't help, of course.. but it's pretty obvious how you can do things with canvas that would bog down the dom renderer (see the sketch at the end of this comment).

      the article is about how javascript on mobile should be doable ok because j2me worked remarkably well? since j2me did work remarkably well on the lower level(really, it did). what did not work out was their api jsr process for j2me which zombied the whole platform and made almost all extensions useless or half useless - had they not fucked it up like that we would not have needed android.

      funny thing about j2me is that most of the mobile stuff I have coded over last 10 years is such that you cannot go to a shop now and buy a device that would run them - except for the j2me code, I could still buy a cheapo nokia and run them.
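
      On the canvas-vs-dom point above, a minimal sketch of the difference (sprite counts and coordinates are arbitrary):

      ```javascript
      // DOM version: one style write per sprite; every write dirties per-node
      // style state that the engine must reconcile before the next frame.
      function moveDomSprites(nodes, t) {
        nodes.forEach((n, i) => {
          n.style.transform = `translate(${(t + i * 10) % 300}px, ${i * 3}px)`;
        });
      }

      // Canvas version: one clear plus N fills, batched into a single paint.
      function moveCanvasSprites(ctx, count, t) {
        ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
        for (let i = 0; i < count; i++) {
          ctx.fillRect((t + i * 10) % 300, i * 3, 8, 8);
        }
      }
      ```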

      • It says that JavaScript is inherently slow because of the DOM, and that this should not be applied as a sweeping generalization to all managed languages, e.g. Java. Then it gives examples, including mobile Java performance, where small-heap devices work just fine.

      • you could do wolfenstein clones in 128kbytes(was a common .jar size limit, earlier than that 64kbytes was pretty common) on machines with ~300-400kbytes of ram available for you.. and gc worked just fine, so long as you didn't abuse String or do shit like that

        A typical J2ME game from that era preallocated a bunch of arrays, and used those arrays to store all game data, since actual objects were so expensive (both raw-size-wise and GC-cost-wise). So GC basically "worked just fine" if it had nothing much to work with, otherwise it didn't and your game crashed on half the phones out there with OOM. The resulting language subset, while technically still Java, was more akin to C without pointers. Which is to say, it was messed up.
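
        The pattern described above, translated from J2ME-era Java into modern JavaScript typed arrays (sizes and fields are illustrative):

        ```javascript
        // Preallocate parallel arrays for all game entities up front;
        // no per-entity objects means almost nothing for the GC to trace.
        const MAX_SPRITES = 64;
        const spriteX  = new Int16Array(MAX_SPRITES);
        const spriteY  = new Int16Array(MAX_SPRITES);
        const spriteHp = new Int8Array(MAX_SPRITES);
        let spriteCount = 0;

        function spawn(x, y, hp) {
          if (spriteCount === MAX_SPRITES) return -1; // hard budget, like the 128 KB days
          const i = spriteCount++;
          spriteX[i] = x; spriteY[i] = y; spriteHp[i] = hp;
          return i;
        }

        function update(dx) {
          for (let i = 0; i < spriteCount; i++) spriteX[i] += dx; // no allocation in the hot loop
        }
        ```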

    • This whole conversation should have been retitled "Why web apps are slow on mobile", and not about JavaScript at all.

      The comparison between Objective-C and Java is totally ridiculous and beside the point; it's the only thing which ties this article to actual mobile apps instead of web apps, and it fails to address the original article's comments on JavaScript.

      In principle, Objective C the language can be used for dynamic binding; in practice, the Objective C runtime, as represented in crt1.o, and in the dyld an…

      • In principle, Objective C the language can be used for dynamic binding; in practice, the Objective C runtime, as represented in crt1.o, and in the dyld and later dyle dynamic linkers, it can't be. This was an intentional decision by Apple to prevent a dynamic binding override from changing aspects of the UI, and to prevent malicious code being injected into your program - this is a position Apple strengthened by adding code signing.

        I have to wonder what you're talking about here. First of all, the Objective-C runtime is not in crt1.o. That file contains some symbols that allow dyld to find things in the executable. The Objective-C runtime is in libobjc.dylib. Every message send (method invocation) in Objective-C goes via one of the objc_msgSend() family of functions, and these all do late binding. You can define a category on a system class and override methods, or you can use method swizzling via explicit APIs such as class_replaceMethod()…

    • Re:No rebuttal (Score:5, Interesting)

      by slacka ( 713188 ) on Monday July 15, 2013 @03:59PM (#44289097)

      While this post is a valuable addition to Drew's analysis, I feel it's not really a rebuttal at all.

      Yes, JavaScript is slow for the reasons Drew mentioned and yes, the DOM is a nightmare to optimize for responsive UIs. They're both right.

      The key issue here is that these web technologies are being shoehorned into areas they were never designed for. From the Document Object Model being used for applications, to JavaScript, a lightweight scripting language, being used for heavyweight programming: of course the end result is going to be a poor mobile experience.

      If the W3C and WHATWG seriously want to compete with Android and iOS apps, they should consider throwing out the DOM and CSS standards and starting over. JavaScript can be fixed, but DOM/CSS are so technically flawed to the core that the sooner they're deprecated, the sooner the web has a real chance of being a generic platform for applications.

    • For some reason people read this as a rebuttal of Drew Crawford's article; it is not. It is merely a response: I accept almost everything he said but have a slightly different interpretation of some of the points.
      http://www.codenameone.com/3/post/2013/07/why-mobile-web-is-slow.html [codenameone.com]
  • by Hamfist ( 311248 ) on Monday July 15, 2013 @01:30PM (#44287283)

    I dislike the separation of 'perceived' vs 'actual' performance. If I perceive it to be slow, it's slow. This reminds me of the Firefox devs who spent years saying that if an add-on makes their browser a memory hog and a slowpoke, it's not their problem because their own performance is fine.

    Devs.. If it's slow, it's slow. Call it perceived, call it actual, call it the Pope for all I care. It's a Slow Pope.

    • Perhaps it's a difference between throughput and latency. Nine moms can make nine babies in nine months, offering nine times the throughput of one mom, but each baby still takes nine months from conception to completion. Users tend to notice latency more than throughput unless an operation already takes long enough to need a progress bar. Some algorithms have a tradeoff between throughput and latency, which need fine tuning to make things feel fast enough.

      There are also a few ways to psychologically hide latency even if you can't eliminate it. The "door opening" transition screen in Resident Evil is one example, hiding a load from disc, as are some of the transitions used by online photo albums to slowly open a viewer window while the photo downloads.

    • The separation is useful for understanding where optimization is necessary. JavaScript could be made less painful if it didn't have DOM manipulation to contend with; obviously, that's not very practical.

  • by 2megs ( 8751 ) on Monday July 15, 2013 @01:31PM (#44287299)

    Crawford brought in lots of data on real-world performance. (e.g. http://sealedabstract.com/wp-content/uploads/2013/05/Screen-Shot-2013-05-14-at-10.15.29-PM.png [sealedabstract.com])

    Almog's rebuttal has a lot of claims with no actual evidence. Nothing is measured; everything he says is based on how he thinks things should work in theory. But the "sufficiently smart GC" is as big a joke as the "sufficiently smart compiler", and he even says "while some of these allocation patterns were discussed by teams when I was at Sun I don't know if these were actually implemented".

    Also:

    (in fact game programmers NEVER allocate during game level execution)....This isn't really hard, you just make sure that while you are performing an animation or within a game level you don't make any allocations.

    I'm a professional game programmer, and I'm laughing at this. If you're making Space Invaders, and there's a fixed board and a fixed number of invaders, that statement is true. If you're making a game for this decade, with meaningful AI, an open world that's continuously streamed in and out of memory, and dynamic, emergent, or player-driven events, that's just silly. For Mr. Almog to even say that shows how much he doesn't know about the subject.

    • by digiti ( 200497 )

      It's not a rebuttal; in fact he didn't dispute any claim other than the GC one. Read the comments, where game programming is also discussed.

    • by Shai Almog ( 2984835 ) on Monday July 15, 2013 @01:56PM (#44287569)
      I actually agree with almost everything Drew wrote, with the exception of his GC statements. I worked for an EA contractor in the 90's doing large-scale terrain streaming on what today would be a computer less powerful than an iPhone, so while my game programming experience might be outdated, it's still valid. Saying that I don't know if it's actually implemented only referred to the last section. Like I said, I actually worked on the VM code as well as the elements on top of it.

      As I said in the comments to the article, "never" might have been harsh, but I pretty much stand by it. If you use a GC you need to program with that in mind and design the times where GC occurs (level loading). Most of your dynamic memory would be textures anyway, which are outside the domain of the GC altogether and shouldn't trigger GC activity. To avoid overhead in a very large or infinite world you swap objects in place by using a pool; no, it's not ideal, and you do need to program "around" the GC in that regard. OTOH when we programmed games in C++ we did exactly the same thing: no one would EVER allocate dynamic memory during gameplay, to avoid framerate cost.
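
      A minimal sketch of the pool-and-swap-in-place technique described here, in JavaScript for consistency with the rest of the thread; class and field names are illustrative:

      ```javascript
      // Fixed-size object pool: objects are recycled in place, so steady-state
      // gameplay performs no allocations and gives the GC nothing new to collect.
      class BulletPool {
        constructor(size) {
          this.items = Array.from({ length: size }, () => ({ x: 0, y: 0, live: false }));
        }
        acquire(x, y) {
          for (const b of this.items) {
            if (!b.live) { b.x = x; b.y = y; b.live = true; return b; }
          }
          return null; // pool exhausted: drop or recycle the oldest, per your budget
        }
        release(b) { b.live = false; }
      }

      const bullets = new BulletPool(256); // allocated once, at level load
      ```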
      • Re: (Score:3, Interesting)

        FYI stack allocation (the optimisation you refer to) has been implemented in the JVM for some time already. It is capable of eliminating large numbers of allocations entirely on hot paths [stefankrause.net]. Of course, there is a lot of memory overhead to all of this: the JVM has to do an escape analysis, and it has to keep around bookkeeping data to let it deoptimize things.

        For some reason they call this optimisation scalar replacement. I'm not sure why. In theory this can help close the gap a lot, because a big part of the reason…
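
        The name makes more sense with an example: once escape analysis proves an object never outlives its function, the JIT can replace the object's fields with plain local variables, i.e. scalars. A JavaScript illustration (V8 performs a similar optimisation; whether any given allocation is actually eliminated is up to the JIT):

        ```javascript
        // The temporary {dx, dy} object never escapes distance(), so an
        // escape-analysis pass may "scalar-replace" it: no heap allocation,
        // just two local number variables.
        function distance(ax, ay, bx, by) {
          const d = { dx: bx - ax, dy: by - ay }; // candidate for scalar replacement
          return Math.sqrt(d.dx * d.dx + d.dy * d.dy);
        }
        ```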

        • by Anonymous Coward

          Memory pools and stack allocation are not the same thing. Memory pools contain many same-sized buffers of memory. Stack-allocation puts data on the stack. Usually those are mutually exclusive.

        • Thanks. The thing I was unsure about here is whether this is practical in a mobile VM with the added constraint of very limited CPU cache, which is the really interesting bit here.
          • I don't believe Dalvik does any kind of escape analysis. It might be something they could put into dexopt and do ahead of time (at install time not runtime).

      • by ras ( 84108 )

        I actually agree with almost everything Drew wrote with the exception of his GC statements

        I'm curious. Drew said two things about GC:

        • - It's slower than manual memory allocation in memory-constrained environments.
        • - It's faster than manual memory allocation when there's 5x or more memory available than is actually used.

        He didn't say GC always introduces huge latencies, probably because given an incremental GC and enough memory it doesn't. So which of the two assertions are you disagreeing with?

        Or to put it another way,…

      • If you use a GC you need to program with that in mind and design the times where GC occurs (level loading).

        Are you saying that GC should only run during level loading (IIRC, this was actually done in UT 2003)?

        If so, then this would require the language/runtime in question to give programmer explicit control over GC (at least to the extent of temporarily disabling it). Which is certainly doable in theory, but I'm not aware of any JS runtime that exposes such a thing to the JS code that runs on top of it.

    • by gl4ss ( 559668 )

      with some j2me phones there was a trick to run the gc often enough by calling system.gc(), if you did it every 20-50 usual sprite objects(which had like 20 bytes of data attached per object) it never became a problem and a problem would have been noticing a jerk in animation, scrolling or whatever. though even then you would mostly try to reuse objects. memory was tight on those early devices, but remarkably the garbage collector didn't often become a problem.

      I wouldn't trust sun guys to know shit about gam…

    • And as another professional game programmer, I disagree: good luck tracking down memory fragmentation.

      Real game programmers never call malloc/new from inside the main game loop. They should be allocating from memory pools/heaps. What do you mean you don't have a memory budget for ALL objects, effects, and events??

    • I too am a game developer... no, not professional as you are, but I've written almost a dozen games on a number of platforms over nearly 20 years, sold most of them and even had two nominated for some awards years ago. I won't put myself on the same level as you, but I do have some relevant experience.

      I would agree if you said the "never" statement is hyperbole... but you wouldn't argue the underlying gist of it, would you? Certainly it's true that a game programmer will seek to MINIMIZE object allocation…

    • by caywen ( 942955 )

      I'm with you, and I think a lot of what is said boils down to trying to paint scenarios in black and white. The word "allocation" really doesn't mean anything until one specifies how and what is allocated. An allocation can come from a recycling allocator, a slab allocator, or just plain old unoptimized malloc(). To say game programmers never alloc during "game level execution" (whatever that means) is just a gross facepalm statement. But it's certainly common for game developers to develop allocators well op…

  • by sl4shd0rk ( 755837 ) on Monday July 15, 2013 @01:44PM (#44287443)

    I guarantee JavaScript will perform much better once we get to 16 cores and 3.6GHz on the standard mobile device.

  • I find it very amusing that we are having this conversation on a website that just deployed the slowest suckiest mobile website I have ever seen.

    • by chill ( 34294 )

      Do you get an infinite comment loop?

      On Android, if I click on a link in an e-mail to a response to a comment of mine it takes me to the Slashdot mobile site with the parent of my comment.

      Scrolling down I get to my comment, then the reply, then my comment again, then the reply again, ad infinitum (or ad crash really).

      • by mu22le ( 766735 )

        Do you get an infinite comment loop?

        On Android, if I click on a link in an e-mail to a response to a comment of mine it takes me to the Slashdot mobile site with the parent of my comment.

        Scrolling down I get to my comment, then the reply, then my comment again, then the reply again, ad infinitum (or ad crash really).

        I do not even get to the comments. The front page takes forever to load and cannot be scrolled.

  • Do we want them to?

    Java clients are not exactly strong in the performance department, period. I mean, this is why my Blu-ray player sucks as a Netflix client: it's Java-based. Also, no popular "smartphone" created in the last 6 years uses Java as a front end, so the reason mobile devices improved in performance is that those companies avoided using Java in the first place.

    Sure, maybe some throwback clamshell feature phone might run Java and perform well, but you are hardly playing Angry Birds…

  • Apps seem to launch instantly since they have hardcoded splash images

    What? iOS apps are made to lie about their startup performance? That's golden. 10/10.

  • is that you can still write a PhoneGap app that works on damn near anything you want in the time it takes the average Java team to tie their shoes, and 95% of the time it's not just going to perform adequately, it's going to perform better than what they tried to do in the native language. We can talk benchmarks until we're blue in the face, but in six years of web dev I've seen a lot of Java code bases and not one was competent. Modern JS JITs kick ass. Modern Java (and I'd argue C# as well) devs at the me…
