Former Sun Mobile JIT Engineers Take On Mobile JavaScript/HTML Performance

First time accepted submitter digiti writes "In response to Drew Crawford's article about JavaScript performance, Shai Almog wrote a piece offering a different interpretation of the performance costs (summary: it's the DOM, not the script). He then gives several examples of mobile Java performing really well on memory-constrained devices. Where do you stand in the debate?"


  • I blame the DOM too (Score:5, Interesting)

    by Anonymous Coward on Monday July 15, 2013 @02:11PM (#44287095)

    An arbitrary tree of arbitrary objects with arbitrary methods of manipulating them thanks to years of incremental changes that are never properly documented (quick! point to the document showing that a select tag has a value attribute!) and are never deprecated.

  • No rebuttal (Score:5, Interesting)

    by arendjr ( 673589 ) on Monday July 15, 2013 @02:26PM (#44287253) Homepage

    While this post is a valuable addition to Drew's analysis, I feel it's not really a rebuttal at all.

    Yes, JavaScript is slow for the reasons Drew mentioned and yes, the DOM is a nightmare to optimize for responsive UIs. They're both right.

    While this blog also provides some nice insight into how you can have acceptable performance with a GC on mobile, it's not offering any workable alternative that would work for JavaScript. So Drew's article still comes out pretty strong, IMO.

  • by tepples ( 727027 ) on Monday July 15, 2013 @02:38PM (#44287373) Homepage Journal

    Perhaps it's a difference between throughput and latency. Nine moms can make nine babies in nine months, offering nine times the throughput of one mom, but each baby still takes nine months from conception to completion. Users tend to notice latency more than throughput unless an operation already takes long enough to need a progress bar. Some algorithms have a tradeoff between throughput and latency, which needs fine-tuning to make things feel fast enough.

    There are also a few ways to psychologically hide latency even if you can't eliminate it. The "door opening" transition screen in Resident Evil is one example, hiding a load from disc, as are some of the transitions used by online photo albums to slowly open a viewer window while the photo downloads.

  • Re:No rebuttal (Score:5, Interesting)

    by gl4ss ( 559668 ) on Monday July 15, 2013 @02:52PM (#44287519) Homepage Journal

    I'm just flabbergasted by the summary.

    First it goes on about JavaScript, and then about mobile Java performing really well on some devices. J2ME - commonly known as mobile Java - is something entirely different from JavaScript.. and J2ME did pretty well. You could do Wolfenstein clones in 128 kbytes (a common .jar size limit; earlier, 64 kbytes was pretty common) on machines with ~300-400 kbytes of RAM available to you.. and GC worked just fine, so long as you didn't abuse String or do shit like that - but that's a bit easier to code for in Java than in JavaScript IMHO (reusing objects etc).
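    The "don't abuse String" point can be sketched in plain Java (not actual J2ME APIs, and `HudLabel` is a hypothetical name): building a fresh String every frame churns garbage, while reusing one buffer leaves the GC nothing to collect.

```java
// Illustrative sketch of the J2ME-era pattern: reuse one buffer instead of
// allocating a throwaway String on every frame.
public class HudLabel {
    private final StringBuilder buf = new StringBuilder(32); // reused every frame

    // Naive version: each call allocates a new String (and its backing char[]).
    String renderNaive(int score) {
        return "Score: " + score;
    }

    // Reuse version: overwrite the same buffer; no per-call allocation after warm-up.
    CharSequence renderReused(int score) {
        buf.setLength(0);                      // reset without freeing the backing array
        buf.append("Score: ").append(score);
        return buf;
    }

    public static void main(String[] args) {
        HudLabel label = new HudLabel();
        System.out.println(label.renderReused(1234)); // prints "Score: 1234"
    }
}
```

    The same idea carries over to JavaScript only awkwardly, which is part of the commenter's point: Java makes object reuse straightforward, JavaScript's idioms encourage fresh allocations.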

    The DOM though.. of course it's ridiculous to use for highly responsive stuff, while JavaScript itself can run pretty nicely - lacking real threads certainly doesn't help, of course - but it's pretty obvious how you can do things with canvas that would bog down the DOM renderer.

    So the article is saying that JavaScript on mobile should be doable because J2ME worked remarkably well? J2ME did work remarkably well at the lower level (really, it did). What did not work out was the JSR API process for J2ME, which zombified the whole platform and made almost all extensions useless or half-useless - had they not fucked it up like that, we would not have needed Android.

    The funny thing about J2ME is that most of the mobile stuff I have coded over the last 10 years is such that you cannot go to a shop now and buy a device that would run it - except for the J2ME code: I could still buy a cheapo Nokia and run that.

  • by Shai Almog ( 2984835 ) on Monday July 15, 2013 @02:56PM (#44287569)
    I actually agree with almost everything Drew wrote, with the exception of his GC statements. I worked for an EA contractor in the '90s doing large-scale terrain streaming on what today would be a computer less powerful than an iPhone, so while my game programming experience might be outdated, it's still valid. Saying that I don't know if it's actually implemented only referred to the last section. Like I said, I actually worked on the VM code as well as the elements on top of it. As I said in the comments to the article, "never" might have been harsh, but I pretty much stand by it.

    If you use a GC, you need to program with that in mind and design the times where GC occurs (level loading). Most of your dynamic memory would be textures anyway, which are outside the domain of the GC altogether and shouldn't trigger GC activity. To avoid overhead in a very large or infinite world, you swap objects in place by using a pool. No, it's not ideal, and you do need to program "around" the GC in that regard. OTOH, when we programmed games in C++ we did exactly the same thing: no one would EVER allocate dynamic memory during gameplay, to avoid the framerate cost.
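    The pool-and-swap pattern described here can be sketched as follows (a minimal illustration, not code from the article; `BulletPool` and `Bullet` are hypothetical names): pay all allocation cost at level load, then acquire and release from the pool during gameplay so no garbage is created mid-frame.

```java
import java.util.ArrayDeque;

// Sketch of "programming around the GC": pre-allocate at level load,
// then reuse objects in place during gameplay so no GC pause can hit a frame.
public class BulletPool {
    static class Bullet {
        float x, y, vx, vy;
        Bullet reset(float x, float y, float vx, float vy) {
            this.x = x; this.y = y; this.vx = vx; this.vy = vy;
            return this;
        }
    }

    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    BulletPool(int capacity) {
        // All allocation happens here, at level-load time.
        for (int i = 0; i < capacity; i++) free.push(new Bullet());
    }

    Bullet acquire(float x, float y, float vx, float vy) {
        // Fallback allocation only if the pool was sized too small.
        Bullet b = free.isEmpty() ? new Bullet() : free.pop();
        return b.reset(x, y, vx, vy);
    }

    void release(Bullet b) {
        free.push(b); // return for reuse; the object never becomes garbage
    }

    public static void main(String[] args) {
        BulletPool pool = new BulletPool(64);
        Bullet b = pool.acquire(0, 0, 1, 0);
        pool.release(b);
        System.out.println(pool.acquire(9, 9, 0, 0) == b); // same instance, reused in place
    }
}
```

    As the comment notes, C++ game code did the same thing by hand; the pool just makes the "no allocation during gameplay" rule explicit.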
  • by Anubis IV ( 1279820 ) on Monday July 15, 2013 @04:19PM (#44288629)

    That's not the only issue with his statements about garbage collection.

    Drew made an argument that garbage collection on mobile performs poorly due to memory constraints on the platform. Almog countered by pointing out that Drew used a desktop garbage collector rather than a mobile garbage collector, which is important, since mobile garbage collectors are more aggressive at cleaning up stuff like unused paths from JIT, meaning that they make better use of their available memory. Almog was quick to note that being more aggressive also means that the garbage collector performs worse than its desktop counterpart.

    Stop and re-read that last sentence, because if I'm understanding this correctly, Almog is basically making the argument that we can avoid the memory concerns Drew brought up -- concerns which were only brought up to explain why there are performance issues -- by using a garbage collector that performs worse.

    It's possible, I suppose, that the performance hit from being more aggressive is less severe than the performance hit from running up against the limitations of memory more often, but if that's the case, Almog really should have said so, since right now it sounds like he's completely undermining his own point.

  • Re:No rebuttal (Score:5, Interesting)

    by slacka ( 713188 ) on Monday July 15, 2013 @04:59PM (#44289097)

    While this post is a valuable addition to Drew's analysis, I feel it's not really a rebuttal at all.

    Yes, JavaScript is slow for the reasons Drew mentioned and yes, the DOM is a nightmare to optimize for responsive UIs. They're both right.

    The key issue here is that these web technologies are being shoehorned into areas they were never designed for. With the Document Object Model being used for applications, and JavaScript, a lightweight scripting language, being used for heavyweight programming, of course the end result is going to be a poor mobile experience.

    If the W3C and WHATWG seriously want to compete with Android and iOS apps, they should consider throwing out the DOM and CSS standards and starting over. JavaScript can be fixed, but DOM/CSS are so technically flawed to the core that the sooner they're deprecated, the sooner the web has a real chance of being a generic platform for applications.

  • by IamTheRealMike ( 537420 ) on Monday July 15, 2013 @05:00PM (#44289105)

    FYI, stack allocation (the optimisation you refer to) has been implemented in the JVM for some time already. It is capable of eliminating large numbers of allocations entirely on hot paths. Of course, there is a lot of memory overhead to all of this - the JVM has to do an escape analysis, and it has to keep around bookkeeping data to let it unoptimize things.

    For some reason they call this optimisation scalar replacement. I'm not sure why. In theory this can help close the gap a lot, because a big part of the reason GC is seen as slow is just because the languages that use it put so much pressure on the heap due to their library and language designs encouraging tons of tiny objects. If you can put them onto the stack then things can get much faster. I use some pretty large and complicated Java apps these days (like IntelliJ) and they seem to perform well, so perhaps things like this have turned the tide somewhat.
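    A minimal sketch of the kind of allocation this targets (illustrative only; whether the JIT actually eliminates any given allocation depends on the VM and its flags): the temporary object below never escapes the method, so an escape-analysis-capable JIT such as HotSpot may scalar-replace it, turning its fields into plain locals with no heap allocation at all.

```java
// Sketch of an allocation that is a candidate for scalar replacement:
// the Point is created, read, and discarded entirely within distSq(),
// so it never "escapes" and need never exist on the heap.
public class ScalarReplacementDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distSq(int ax, int ay, int bx, int by) {
        Point d = new Point(bx - ax, by - ay); // candidate: never stored or returned
        return d.x * d.x + d.y * d.y;          // only its fields are consumed
    }

    public static void main(String[] args) {
        long sum = 0;
        // A hot loop like this is where the JIT would kick in and
        // (potentially) eliminate the Point allocation entirely.
        for (int i = 0; i < 1_000_000; i++) sum += distSq(0, 0, i & 7, i & 3);
        System.out.println(distSq(0, 0, 3, 4)); // 25
    }
}
```

    This is why heap-happy library designs aren't automatically fatal: short-lived, non-escaping temporaries like this one are exactly what the optimisation removes.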
