Former Sun Mobile JIT Engineers Take On Mobile JavaScript/HTML Performance

First time accepted submitter digiti writes "In response to Drew Crawford's article about JavaScript performance, Shai Almog wrote a piece providing a different interpretation of the performance costs (summary: it's the DOM, not the script). He then gives several examples of where mobile Java performs really well on memory-constrained devices. Where do you stand in the debate?"

  • by Hamfist ( 311248 ) on Monday July 15, 2013 @02:30PM (#44287283)

    I dislike the separation of 'Perceived' vs 'Actual' performance. If I perceive it to be slow, it's slow. This reminds me of the Firefox devs who spent years saying that if an add-on makes their browser a memory hog and a slowpoke, it's not their problem because their performance is fine.

    Devs.. If it's slow, it's slow. Call it perceived, call it actual, call it the Pope for all I care. It's a Slow Pope.

  • by 2megs ( 8751 ) on Monday July 15, 2013 @02:31PM (#44287299)

    Crawford brought in lots of data on real-world performance. (e.g. http://sealedabstract.com/wp-content/uploads/2013/05/Screen-Shot-2013-05-14-at-10.15.29-PM.png [sealedabstract.com])

    Almog's rebuttal has a lot of claims with no actual evidence. Nothing is measured; everything he says is based on how he thinks things should work in theory. But the "sufficiently smart GC" is as big a joke as the "sufficiently smart compiler", and he even says "while some of these allocation patterns were discussed by teams when I was at Sun I don't know if these were actually implemented".

    Also:

    (in fact game programmers NEVER allocate during game level execution)....This isn't really hard, you just make sure that while you are performing an animation or within a game level you don't make any allocations.

    I'm a professional game programmer, and I'm laughing at this. If you're making Space Invaders, and there's a fixed board and a fixed number of invaders, that statement is true. If you're making a game for this decade, with meaningful AI, an open world that's continuously streamed in and out of memory, and dynamic, emergent, or player-driven events, that's just silly. For Mr. Almog to even say that shows how much he doesn't know about the subject.
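For context on the "never allocate during a level" technique being argued about here: in practice it means preallocating a fixed pool of objects before the level starts and recycling them, so the frame loop itself never triggers the garbage collector. A minimal sketch in Java (the Bullet class and pool size are made up for illustration, not taken from either article):

        import java.util.ArrayDeque;

        // All Bullet instances are created up front, so nothing is allocated
        // (and nothing becomes garbage) while the level is running.
        final class BulletPool {
            static final class Bullet {
                float x, y, vx, vy;
                boolean active;
            }

            private final ArrayDeque<Bullet> free = new ArrayDeque<>();

            BulletPool(int capacity) {
                for (int i = 0; i < capacity; i++) {
                    free.push(new Bullet());   // allocation happens at load time
                }
            }

            Bullet acquire() {
                Bullet b = free.poll();        // reuse an idle instance if one exists
                if (b == null) {
                    return null;               // pool exhausted: caller must cope without allocating
                }
                b.active = true;
                return b;
            }

            void release(Bullet b) {
                b.active = false;
                free.push(b);                  // hand it back instead of dropping the reference
            }
        }

As the comment above points out, this is straightforward for a bounded set of bullets; it gets much harder when an open world streams content in and out and the working set itself has to change mid-level.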

  • by Anonymous Coward on Monday July 15, 2013 @03:53PM (#44288293)

    You're wrong. But not in the way you think. The ability to write abstractions, and an obsession with design patterns, often result in needless abstraction and a loss of focus.

    Libraries like the C++ STL and Java Collections only make this worse. Why? Because your application is basically one giant data structure. I forget whether it was Dennis Ritchie or Ken Thompson or someone else, but it was once said that good programming is about implementing the correct data structures. Note that this doesn't mean reusing generic data structures; it means drawing upon the universe of conceptual data structures to craft a narrowly focused data structure and algorithm that fits your problem.

    If you start out with generic data structures, you end up working backwards. You write lots of boilerplate code just to use those generic data structures, which you then aggregate and glue together into the same shape you would have built from scratch, only with far more code. Programming is not about splicing hash tables together with trees ad hoc. That is a ridiculously reductive view of what programming is and how good programs are written.

    This is why, almost without fail, real-world applications developed in high-level languages tend to have ridiculously high source line counts. People get carried away with abstraction, including the language implementors with their plethora of generic data structure libraries.

    IME, C is *the* sweet spot if you had to pick a single language. However, you can do better by mixing and matching different languages, especially where they offer unique characteristics--functions as first-class objects plus real closures (i.e. not C++ lambdas), coroutines, exceptional optimizer support (e.g. Fortran for iterative numerical algorithms), and functional-language features that make it easier to express some kinds of data structures and algorithms. This is why I stay away from C++: it's too much of a pain to mix C++ with other languages, whereas almost every language in existence has strong C support.

    If people spent as much time maintaining proficiency in different languages as they spend trying to do everything in a single language (and grappling with the inevitable cognitive dissonance when defending their choices on web forums), the world would be a far more efficient and less bug-prone place.
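To put the "narrowly focused data structure" argument above in concrete terms, here is a small, made-up illustration in Java (my example, not the poster's): counting ASCII character frequencies with a generic map versus a purpose-built array. The generic version boxes an Integer and goes through hashing on every update; the specialized version is just a flat array sized to the actual problem.

        import java.util.HashMap;
        import java.util.Map;

        final class LetterCounts {
            // Generic-container version: every update boxes values and touches a hash entry.
            static Map<Character, Integer> countGeneric(String text) {
                Map<Character, Integer> counts = new HashMap<>();
                for (int i = 0; i < text.length(); i++) {
                    counts.merge(text.charAt(i), 1, Integer::sum);
                }
                return counts;
            }

            // Purpose-built version: the "data structure" is an int[128] sized to
            // the problem (ASCII input), with no boxing and no hashing.
            static int[] countAscii(String text) {
                int[] counts = new int[128];
                for (int i = 0; i < text.length(); i++) {
                    char c = text.charAt(i);
                    if (c < 128) {
                        counts[c]++;
                    }
                }
                return counts;
            }
        }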
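And on the claim above that "almost every language in existence has strong C support": for Java specifically that support is JNI. A rough sketch of what the Java side of such a binding looks like (the library name "fastmath" and the dot function are hypothetical):

        // Java side of a hypothetical JNI binding. The implementation lives in a
        // C shared library; "fastmath" and dot() are made-up names for illustration.
        public final class FastMath {
            static {
                System.loadLibrary("fastmath");   // loads libfastmath.so / fastmath.dll
            }

            // The matching C symbol would be:
            //   JNIEXPORT jdouble JNICALL Java_FastMath_dot(JNIEnv*, jclass, jdoubleArray, jdoubleArray);
            public static native double dot(double[] a, double[] b);
        }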
