Why JavaScript On Mobile Is Slow
An anonymous reader writes "Drew Crawford has a good write-up of the current state of JavaScript in mobile development, and why the lack of explicit memory handling (and a design philosophy that ignores memory issues) leads to massive garbage collection overhead, which prevents HTML5/JS from being deployed for anything besides light-duty mobile web development. Quoting: 'Here’s the point: memory management is hard on mobile. iOS has formed a culture around doing most things manually and trying to make the compiler do some of the easy parts. Android has formed a culture around improving a garbage collector that they try very hard not to use in practice. But either way, everybody spends a lot of time thinking about memory management when they write mobile applications. There’s just no substitute for thinking about memory. Like, a lot. When JavaScript people or Ruby people or Python people hear "garbage collector," they understand it to mean "silver bullet garbage collector." They mean "garbage collector that frees me from thinking about managing memory." But there’s no silver bullet on mobile devices. Everybody thinks about memory on mobile, whether they have a garbage collector or not. The only way to get "silver bullet" memory management is the same way we do it on the desktop–by having 10x more memory than your program really needs.'"
Easy (Score:5, Interesting)
Stop loading dozens of fucking libraries and frameworks and learn to really code.
Re: (Score:3)
That's just not a viable option. (Score:4, Informative)
Since JavaScript is so damn lacking, those libraries are ESSENTIAL for anything beyond the smallest JavaScript app.
Even if you don't use jQuery, for example, you're going to need to find and then use some other library that does the same thing, or write a whole shitload of code yourself to implement the same functionality. Zepto works as an alternative for some people, but even it still has some overhead.
That applies to almost anything you want your app to do. If you want to work with objects, arrays or even strings in any way beyond the simplest of manipulations, you're going to need to use some third-party code, or write a whole lot of it yourself.
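To make the point concrete, here is the sort of thing you ended up hand-rolling in ES5-era JavaScript (my own sketch, with made-up helper names; `String.prototype.trim`, for instance, was missing from older browsers):

```javascript
// Hand-rolled utilities you'd otherwise pull from a library.
// (Illustrative ES5-era code; the helper names are invented.)
function unique(arr) {
  var seen = {};
  var out = [];
  for (var i = 0; i < arr.length; i++) {
    // Prefix with typeof so the number 1 and the string "1" stay distinct.
    var key = typeof arr[i] + ':' + arr[i];
    if (!seen[key]) {
      seen[key] = true;
      out.push(arr[i]);
    }
  }
  return out;
}

function trimString(s) {
  // Fallback for browsers that shipped without String.prototype.trim.
  return s.replace(/^\s+|\s+$/g, '');
}
```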
JavaScript developers are so wholly dependent on these third-party libraries because the JavaScript implementations themselves are so bloody lacking. It's totally different than a language like Python, where there's a rich, yet still compact and efficient, standard library that developers know will be available on just about every user's system. JavaScript programmers have to provide this basic infrastructure with each and every app they write.
Re: (Score:2)
I know it is semantics, but it really bugs me that JS gets such a bad rep because of the browsers' shortcomings.
Re:That's just not a viable option. (Score:5, Insightful)
Even if you don't use jQuery, for example, you're going to need to find and then use some other library that does the same thing
Furthermore, popular libraries like jQuery, jQuery Mobile, etc. are much more likely to have clean, efficient, memory-lite implementations than some "roll-your-own" code. If you choose your libraries carefully, learn to use them, and avoid "rolling your own" unless really necessary, your code will certainly be smaller and cleaner, and will usually be faster and use less memory.
Re:That's just not a viable option. (Score:5, Interesting)
I'm giving up moderation to post this but it has to be said. You keep claiming jQuery is slow and crappy because a few frameworks that exist on top of it are slow. Both jQueryUI and jQueryMobile are designed to completely change (and unify) what the browser controls look like. Of course they are slower than native. I even went to your little site and ran this one: http://jsperf.com/jquery-body-vs-document-body-selector [jsperf.com]. jQuery came up about 5%-6% slower than native. If you give up a unified, well-tested framework, and the 10x reduction in development time it buys you, for a 5% speedup, you are either working on something very special or need to be fired immediately.
jQuery is not a performance killer. If it were, you wouldn't see it on nearly every website more complicated than "hi my name is narcc". What it does do, however, is cut development time considerably, provide a consistent experience across most browsers, gracefully fall back when browsers don't provide native solutions, and offer far more web-specific features than plain JavaScript does (otherwise people wouldn't use it).
I won't claim that jQuery is faster than every native solution. But it is probably faster than your native solution. And infinitely more maintainable.
Just for fun: Just look at how readable and maintainable that code is. I'd love to try to figure out why a $100000 web application written like that isn't working in BrowserX 10 months after somebody else wrote it.
Additionally: if you are doing 2+ million operations, then yes, maybe you should devote some time to writing a specialized function. But normally you're doing fewer than 50 in even a fairly complicated web app, so your benchmarks won't show much of a difference there.
Just for my own fun. What happens if you go : document.querySelector('.menu > a:last') in IE9?
Re:That's just not a viable option. (Score:4, Informative)
Or do you mean code written using jQuery? Now that's impossible to maintain! (For reasons mentioned earlier and later.) Add to it that jQuery code is mostly written by amateurs who don't know any better (or professionals that don't want to face the simple fact that JavaScript is not C# and they'll need to learn some new concepts). When you see jQuery, you can safely assume that the code is a mess anyway.
Pure bullshit. Ignoring the glaring fallacy of "I've seen amateurs do bad things so jQuery is bad" I will just comment on it being difficult to maintain. Javascript in general is difficult to maintain in a large web application. jQuery makes things easier because it provides a fairly consistent syntax for common things you would likely do in a web application.
Oh, and did you hear? They're dropping support for IE8 and below. Not that it did a great job of supporting those browsers anyway, but it's yet another reason that jQuery has LONG outlived its utility.
Their future branch has cut off support for legacy browsers. This is for people who don't need them and to ease production of new features. They still maintain the 1.x line that fully supports IE8. This is where it is clear you are talking from inexperience and flat out lying to avoid admitting you've been less than truthful. It is very rare you ever see jQuery do something inconsistently between the common browsers. Definitely less than you see vanilla javascript being inconsistent.
The ONLY reason you see jQuery used today is that those same developers never bothered to learn JavaScript. They assume jQuery saves them time and effort (it does not, it costs them time both early and in the long term) because that's what they were told years ago.
Explain how jQuery doesn't save time? It is a framework that provides common functionality that people want from Javascript. So without a framework your options are to re-write that functionality every time you want it or build your own framework. Both of those options will be less maintainable in the long-run and far more bug-prone (being less tested etc). You can't just keep claiming "jQuery bad; reinvent wheel good" with some handwaving about performance and amateurs and blathering on about how much smarter you are for using vanilla Javascript instead.
Fortunately, developers are starting to realize that they've been fooled and are actually starting to learn JavaScript.
Or maybe Javascript has started to become consistent enough and functional enough for specific tasks that people can easily use it for simple tasks now. People that don't have to worry about any kind of legacy support that is.
With any luck, by 2015 we might not have jQuery bogging down the web.
With any luck by 2015 we won't have Javascript bogging down the web. It is a language that is not a good match for what it is used for in this case.
Re: (Score:3)
Re: (Score:3)
I've already explained why these libraries are implemented well "enough" and are reasonably fast. Supporting multiple, different implementations of JS across multiple browsers has traditionally been a real pain. That is where all the extra crap comes in on these libraries. You can't just keep reiterating a lie. A 5%-10% slowdown is well worth a 50% reduction in development time.
I experimented with using jQuery to do Ajax calls, but the way to do it is... atrocious. Does it work? Sure, but at what cost?
$.get('url', callback);
Too hard for you? More options are available to you with easy to read and well documented labels. I'd mu
Re: (Score:3, Interesting)
I really do appreciate a good framework... My favorite one currently is Vanilla-js http://vanilla-js.com/ [vanilla-js.com]
Check it out. Amazing performance.
Re: (Score:2, Insightful)
Yet C developers have no problem using C, which is a much more minimal language, to do much more than what you do with JavaScript, and they rarely depend on shitloads of libraries.
Re:That's just not a viable option. (Score:5, Funny)
> Yet C developers... rarely depend on shitloads of libraries.
Re: (Score:2)
You realize you have thousands of packages installed, right? That's not a couple of Javascript apps.
Re:That's just not a viable option. (Score:4, Insightful)
Re: (Score:2)
HAH! In fact, it's pretty much the opposite in practice. C requires shitloads more libraries (from the application developer's point of view) to do the same things that (browser-based) Javascript does, since Javascript/HTML already has shitloads of built-in functionality.
Just try to do "much more than what you do with JavaScript" in C by only linking with libc.
The problem with "libraries" in Javascript is they are really pretty much just script includes, and most Javascript apps just load them all into th
Re: (Score:2)
You simply code what you need instead of depending on a framework that does everything for you.
Re: (Score:2)
Python..... efficient....
First time I've heard those two words together in a sentence without "is not" between them.
Re:That's just not a viable option. (Score:5, Insightful)
Learning a flaky, inconsistent language is only prolonging the problem. The web needs to move to something sane. As I said to someone the other day, it's extremely sad that the two most popular languages used for web development are two of the worst languages around (JavaScript & PHP). It does go a ways towards explaining the quality of web software in general.
Re: (Score:2)
Learning a flaky, inconsistent language
JavaScript is "flaky" and "inconsistent"?
What on earth are you talking about?
Re: (Score:3, Insightful)
Learning a flaky, inconsistent language
JavaScript is "flaky" and "inconsistent"?
What on earth are you talking about?
He means it's gained popularity.
Re:That's just not a viable option. (Score:4, Insightful)
JavaScript is "flaky" and "inconsistent"?
What on earth are you talking about?
I think that he's talking about JavaScript being flaky and inconsistent.
Re:That's just not a viable option. (Score:5, Insightful)
Compared to those other two, Java is a dream language.
Re:That's just not a viable option. (Score:5, Insightful)
You'll find that just about every feature your "essential" library provides has a native equivalent that works across browsers -- even as far back as IE 8.
That's a pretty naive view that over-simplifies the situation. One major use for a framework, for example, is to normalize the behavior of different browsers. Another major use is to provide implementations to create interface elements. Now, obviously, everything is natively supported because the Javascript framework is right there doing it, natively. But why should I write the necessary logic to create a draggable window, or a tree view, or sortable grid, when I can just pull that in from a framework? ExtJS [sencha.com] is the kind of framework I'm thinking of. Why should I implement ajax-style uploads inside an iframe when they already did that for me, and I can just set up a form panel, indicate it is a file upload form, and write the important stuff?
Even though I can use a massive ExtJS application on a phone, we're not talking about massive applications per se, we're talking about mobile Javascript. So there are things like Sencha Touch [sencha.com] for that. Sure, I could write native applications for every device listed under the supported devices section, but why is it smart to do that when I can write a single codebase that I can package for multiple devices?
Or maybe I'm just not "familiar" with Javascript, or development concepts in general. Hopefully you can enlighten me on the merits of reinventing the wheel every time you create something.
Re: (Score:3)
I built an online shopping cart once that was originally 42k of javascript. It handled all the tracking of items, tax, and shipping in a variety of ways, and I even developed a multi-vendor capable version that was still under 55k. It powered an online ordering system for restaurants and coffee shops and it ran amazingly fast. All the server had to do was render pages from the products database in HTML. There weren't any writes to the database until the user clicked "order". Of cou
Re:That's just not a viable option. (Score:4, Informative)
The last major jQuery jump dropped IE8 & older support because there were too many quirks they didn't want to bloat everyone's use of the lib with.
Re: (Score:3, Insightful)
How is this informative? My guess is that it was modded up by people who, like the poster, have only basic experience writing JavaScript.
When you're writing code in any language as a hobby or just getting your feet wet, you love the simplicity of minimal overheads and writing everything yourself, because of course you can do a better job than all the other shmucks. When you're writing code for a living or you advance beyond variations of hello world, you come to realise that you're not actually more enlight
Re: (Score:3)
I think the biggest point against using JS libraries is that most sites load the entire library in order to use one feature. Certainly, these libraries can reduce the amount of JS used on a site/page/app - but when the entire library is loaded for a single trivial feature, the advantage is lost.
Re: (Score:3)
Re: (Score:3)
Re:Easy (Score:5, Insightful)
We already know how to "really code". We just got sick of reinventing the wheel every time we start a new project. Now we let the libraries do the tedious crap, and we focus our attention on where it's actually needed.
You're going to use our library-heavy code, and you're going to like it. You already do, in fact. You're lying when you pretend otherwise.
Re:Easy (Score:5, Insightful)
Stop loading dozens of fucking libraries and frameworks and learn to really code.
If memory management were so easy, we wouldn't have devoted so much of our programming guides, style manuals, etc., to it. It's not a simple matter of "I wave my hand and the problem goes away." It has existed since before there were "dozens of fucking libraries and frameworks" and at a time when people did know how to "really code"... it has existed since the very. first. computer. And it hasn't been solved to this day.
The main reason, I suppose, is the same reason why we haven't yet found the One True Concrete that all things can be built out of, or the One True Operating System upon which everything can run, or the One True... you get the damn idea. Men much smarter than you have devoted their entire careers to trying to solve the problem, and it's incredibly pretentious of you to toss off a one-liner like it's (puts on sunglasses) just a simple matter of programming.
Re: (Score:3)
"... and it's incredibly pretentious of you to toss off a one liner like it's (puts on sunglasses) just a simple matter of programming"
Agree. But the memory management thing really is an issue, too.
Take Android, for example. Android was designed to allow apps to remain in memory until you manually kill them, or the OS gets around to doing it, if ever. And the OS is notoriously lax at doing so. And yes, it was designed that way on purpose. Google doesn't want people killing apps... it cuts off their data stream and ads. Yes, really. So they built the whole OS that way. According to some people I spoke to who worked on Android.
Fortunat
Re: (Score:3)
Re: (Score:3)
There is a strange obsession among many that the only good RAM is empty RAM. Don't shunt stuff out of memory until you need to, and it'll still be in memory next time you need it.
Hmmm, not sure I agree with this as a blanket statement. I guess it depends on what you mean by "until you need to".
I just built my wife a new computer. The old one was only five years old, with a quad-core 64-bit 2.5 GHz CPU, but it had horrible performance issues running Firefox. The problem was that my wife is a "power user"
Re:Easy (Score:5, Funny)
Memory management is easy. Just program in C instead of JavaScript, problem solved.
Re: (Score:3)
Memory management is easy. Just program in C++ using smart pointers instead of JavaScript, problem solved.
FTFY.
Re: (Score:2)
That's for plebs.
Better use real RAII.
Re: (Score:3)
Point is of course, you can't just forget about memory. And garbage collection has no place on a mobile device.
Re: (Score:2)
shared_ptr is reference counting, which is pretty much garbage collection.
Just manage your memory without relying on this; instead, design your application taking into account which objects are responsible for the lifetime of other objects (ownership).
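The ownership idea carries over even to GC'd languages. A hypothetical sketch in JavaScript (all names invented for illustration), where the parent alone is responsible for tearing down what it created:

```javascript
// Ownership sketch: the Session *owns* its connections, so teardown is
// explicit and deterministic instead of "whenever the GC gets to it".
// (Illustrative code only; not from any real library.)
function Connection(name) {
  this.name = name;
  this.closed = false;
}
Connection.prototype.close = function () { this.closed = true; };

function Session() {
  // Nobody else holds these; the Session controls their lifetime.
  this.connections = [new Connection('db'), new Connection('cache')];
}
Session.prototype.dispose = function () {
  for (var i = 0; i < this.connections.length; i++) {
    this.connections[i].close();
  }
  this.connections = null; // drop ownership; the objects become reclaimable
};
```

The point is that `dispose()` runs when the owner decides, not when a collector happens to notice.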
Re: (Score:2)
Re: (Score:2)
This is true, but it's JavaScript we're talking about. Project requirements are rarely scoped and developed from the ground up. Most apps and sites are dependent upon bloated frameworks and libraries that are not tailored for mobile capabilities; said frameworks were developed for the desktop.
Not to mention very few JS developers know how to properly manage memory.
Re:Easy (Score:5, Funny)
Stop loading dozens of fucking libraries and frameworks and learn to really code.
Because *REAL* programmers don't use libraries or frameworks. In fact, *REAL* programmers don't even use wussy text editors like vi or emacs; they use butterflies.
Re: (Score:2)
Only 1 butterfly.
Re: (Score:2)
Thank you so much Ada, you've enlightened an entire generation of developers how wrong we've been our entire careers. Please please teach us the holy grail of never reusing code. We're all listening.
Re: (Score:2)
Thank you so much Ada, you've enlightened an entire generation of developers how wrong we've been our entire careers. Please please teach us the holy grail of never reusing code. We're all listening.
We'll get to that, but first we need to talk about these things called deadlines, managers, and paychecks. After I'm done with the Q&A about those three things, anyone who still wants to seek the Grail may sign up on the sheet here on the desk... (blows away some dust)... Now, open your text books to page 25...
-- Ada
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3)
I know, anything beyond manually flipping switches is overhead. C is a terribly bloated mess.
always (Score:5, Insightful)
You always need to think about memory. Like you need to think about what you're doing.
Too bad for the "write app get rich" idiots.
Re: (Score:2)
You always need to think about memory.
+1. You always have to think about memory, no matter the language; garbage collection or not, memory leaks will still bring your system to a crawl.
Re: (Score:2)
Most high level languages these days don't leak unless you leave explicit permanent handles to things lying around. Is that what you're talking about? Or not-quite garbage collectors which are really just poor-substitute reference-counting solutions? I haven't worried about true 'leaks' in code for years. Occasionally (like yearly, maybe) I see leaks from bad references (generally due to greedy singletons). The most interesting part of memory management I and probably most people deal with is the trad
Re: (Score:2)
Re: (Score:2, Insightful)
Garbage collected languages have live-leaks that can have exactly the same memory bloat consequences that other memory leaks do. It's where you keep around a reference to an object sub-graph that you aren't actually using. This gets extra bad if you end up building up a linked list of such leaked objects and the linked list grows in size as your application runs. So you do need to think about live leaks every time you store a reference, it's just that the consequences are likely to be less dire if you get i
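That live-leak pattern is easy to sketch (a hypothetical illustration with invented names, not code from the parent post):

```javascript
// A "live leak": the GC can only free what is unreachable, and an
// ever-growing log buffer keeps every entry reachable forever.
var log = [];

function handleRequest(req) {
  log.push({ req: req, time: Date.now() }); // reachable => never collected
  return req.toUpperCase();
}

// A bounded buffer fixes it: drop the oldest entry once past a cap,
// so old objects become unreachable and the GC can reclaim them.
var MAX_LOG = 1000;
function handleRequestBounded(req) {
  log.push({ req: req, time: Date.now() });
  if (log.length > MAX_LOG) log.shift(); // release the oldest reference
  return req.toUpperCase();
}
```

Both versions "work"; only the second one stays flat in memory as the app runs.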
Re: (Score:2)
Garbage collection isn't perfect. The system doesn't have any way of knowing the difference between a value that's been saved for a purpose and one that's just been forgotten about.
Sure, the system can figure out when memory isn't accessible any more, but that doesn't cover all the memory that's been allocated and not released.
Re:always (Score:4, Insightful)
I write C code all day everyday and never once worry about whether something will leak or double-free. You quickly learn patterns which make object lifetime management safe and simple. In C++ they generically call it RAII. But the basic patterns are simple and are trivial to follow in C, there's just less syntactic sugar (which often is a good thing because you then tend to economize).
The number of times I've leaked memory in C is probably less than I've leaked memory in a GCd language (yes, you can leak memory in GC). If memory management is anything more than a simple chore, then you're doing it wrong. It does take practice, though.
Don't get me wrong, I also use mark+sweep (e.g. Lua) and reference-counted (e.g. Perl) garbage-collected languages. Good times. You just can't use a GC'd language for anything that will use a lot of memory in a single process. This is why people complain about Java all the time. The time spent walking the object tree grows faster than linearly. The article fails to grasp that point, although the Microsoft C# engineer nailed it when he mentioned that object references can grow exponentially. The reason why more memory makes it seem like GC is faster is simply because with more memory you can just let junk sit around longer, and then destroy the entire heap all at once. At least, for web pages, which tend to be ephemeral. That doesn't work well for long-lived server processes though, in Java or C#, which can't avoid constantly cycling through all memory.
If you must use a GCd language for memory intensive stuff, you need to use multiple processes and IPC, not just threads, so you can partition your heap memory. Partitioning that way will improve GC algorithmic performance better than linearly. Unfortunately Java and C# don't make multiprocess architectures as easy to implement as in Unix+C, where you can create nameless socket pairs, pre-fork, and pass file descriptors, etc, over the channels, all without polluting any external namespaces--i.e. no loopback ports or temporary files.
Re: (Score:3, Insightful)
It's more than that though, and it's not just that it seems faster -- it is faster. And it's not just because your program may end before needin
Re: (Score:2)
Yup. When I worked at Amazon the #1 question on internal mailing lists was "my Java webservice freezes up and breaks SLA whenever GC kicks in, how do I fix this?". GC is not a silver bullet, and you're going to end up thinking about memory on anything non-trivial.
Re: (Score:2)
Re: (Score:2)
You always need to think about memory. Like you need to think about what you're doing. Too bad for the "write app get rich" idiots.
But there's plenty of "good code, crap idea" that won't make you rich either; most apps that have gone viral haven't been massively complicated, state-of-the-art games. They've been simple, fun, and easy to get into while being rather run-of-the-mill technically. Sure, you can't be hopeless, but a lot of developers are sufficiently skilled, and l33t coding skillz won't do any good on their own.
First rule of garbage collections (Score:2)
If you don't release all references to an object, it will never be collected (and that includes circular references if you're dealing with a reference-counting collector, like the ones in some old IE versions).
Re: (Score:3)
Reference counting is not garbage collection, please understand that.
E.g., a group of objects that reference each other, but none of which is referenced from the stack or a global variable, will never be deleted by reference counting. A tracing garbage collector, however, would find those objects and free them.
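A minimal sketch of the distinction (illustrative code, not tied to any particular engine):

```javascript
// Build two objects that reference each other: a cycle.
function makeCycle() {
  var a = {};
  var b = { peer: a };
  a.peer = b;
  return a;
}

var obj = makeCycle();
// Under pure reference counting, a and b each hold a count on the other,
// so even after the last outside reference is dropped their counts never
// reach zero and they are never freed. A tracing GC instead starts from
// the roots (stack, globals), finds nothing pointing at the pair, and
// reclaims both.
obj = null; // no live root references the cycle anymore
```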
intercept memory allocation (Score:2)
But in the real world, i
Re: (Score:2)
Memory management is simple to any C or C++ developer. If you have difficulties with it, you are a bad developer.
In C++, exception-safe memory management can be a bit tricky, but since C++ globally makes things simpler if you follow the right idioms, it still ends up being easier.
I work in high-performance computing, and my focus is on the in-core and shared-memory optimization of numerical code. There is no need to ever go to assembly. Just write the right C code, using attributes or built-ins if necessary
Re: (Score:2)
Microsoft has long provided CRT macros for mapping memory allocations and finding leaks. Turning on _CRTDBG_MAP_ALLOC does exactly what you describe. http://msdn.microsoft.com/en-us/library/10t349zs.aspx [microsoft.com]
Re: (Score:2)
Only a short-term problem (Score:5, Funny)
The only way to get "silver bullet" memory management is the same way we do it on the desktop -- by having 10x more memory than your program really needs
Give it a couple of years and that's exactly what will happen. Problem 'worked around'.
Explicit memory management. (Score:2, Insightful)
Re: (Score:2)
Don't generate garbage; this way you won't have garbage to collect.
Re: (Score:3)
Yet even in languages such as Java/JavaScript you can be smart about your objects so as to minimize the underlying allocations.
The article explains that game developers do exactly this (when they choose to write in Java at all): they allocate all their objects at the beginning, so the garbage collector has nothing to do. In other words, they're being forced to manage memory in an environment that wasn't designed to support it.
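That preallocation pattern can be sketched as a simple object pool (my own illustration with invented names, not code from the article):

```javascript
// Minimal object-pool sketch: allocate everything up front, then recycle
// objects instead of letting the GC churn through short-lived ones
// (e.g. per-frame vectors in a game loop).
function VectorPool(size) {
  this.free = [];
  for (var i = 0; i < size; i++) {
    this.free.push({ x: 0, y: 0 }); // all allocation happens here, once
  }
}

VectorPool.prototype.acquire = function (x, y) {
  var v = this.free.pop() || { x: 0, y: 0 }; // fallback if the pool runs dry
  v.x = x;
  v.y = y;
  return v;
};

VectorPool.prototype.release = function (v) {
  this.free.push(v); // back to the pool; nothing for the GC to collect
};
```

In a game loop you acquire scratch vectors each frame and release them at frame end; after warm-up the steady state allocates nothing, so the collector has nothing to trace.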
Garbage collection is dumb (Score:3)
Breaking news. Full story at 11.
Garbage collection is supposed to stop dumb programmers doing dumb things, but in reality it just gives them different ways to do dumb things.
Re: (Score:2)
Re: (Score:2)
GC is really quite nice.
So long as you don't mind spending weeks trying to eliminate the pauses and bloated memory usage and creating internal caching schemes to avoid having to allocate more objects to be garbage collected.
This is what's going to doom FF OS (Score:5, Insightful)
This is one of the major flaws behind these Web-based mobile OS's; you'd think that after WebOS, beautiful as it was, Mozilla would have learned their lesson. Instead, they're trying to drive underpowered hardware with HTML/JS. All the web technologies are being shoehorned into areas they were never designed for: from the DOM being used for applications, to a lightweight scripting language, JavaScript, being used for apps, to a bloated HTML renderer as the platform's UI toolkit.
JavaScript is a nice little scripting language that's got some nice functional programming features. But when you need to write heavy applications that require performance, low memory usage, and multithreading, it's the wrong choice.
Re: (Score:2)
One of the issues are.. (Score:2)
Everybody is trying to use one library for all platforms (mobile, desktop, etc.).
The first thing is to stop this nonsense.
Use server-side technologies to sniff out the client. When working with mobile phones, create or bastardize a library with the smallest footprint possible to fit your needs.
Re: (Score:2)
Re: (Score:2)
Android GC sucks (Score:3)
The problem isn't that Android phones have "limited ram", the problem is that Android's garbage collection sucks miserably at dealing with short-lived objects -- a problem that was fixed & mostly a non-issue with J2SE by the time 1.4 or 1.5 came out more than a DECADE ago.
10 years ago, when J2SE 1.5 was out and its garbage-collection problem was already a historical footnote, a laptop with 512mb, 32-gig hard drive, and 700-1000MHz CPU was fairly respectable. A Galaxy S3 has a gig of ram, 16 or 32 gigs of internal flash, and a 32-gig class 10 microSD card costs $20 on sale.
Re: (Score:2)
A lot depends on what you're trying to do... (Score:4, Insightful)
Most of the "Apps" I'm being hired to write are basically CRUD form apps designed to read info from tables in a database. Usually the job is to take forms already in use on desktops, written in Java or .Net or in some cases God only knows what, and adapt them for use on mobile devices.
I've frankly found jQueryMobile + HTML5 + Phonegap/Cordova makes this task fairly easy to undertake client side. Actually, in most cases the cost is still in developing and deploying the API side in your choice of server-side scripting language. Often that's based upon a perl script I wrote circa 2000 to take form input, validate it, and then go fetch data from a database and return it -- in XML, YAML, or JSON these days. For other projects, the server side is in PHP or C# or Java. It just depends on what the client already has.
Now I can see that trying to build other types of apps using HTML5/JS is asking for disaster.
Sorry, I'm an old perl guy who thinks use the right tool for the job and there is still more than one way to do it.
Except, its not? (Score:2)
Short of issues with CSS transforms and, in some cases, hardware acceleration in general on Android, it's not all that slow, really.
I have a Nexus 4, which isn't low end but is no Galaxy S4 or iPhone 5 either as far as web performance goes, and for the vast majority of websites it works just fine, with a few delays for ad-heavy sites, sites making heavy use of CSS transforms and animations (which are slow regardless of what you do with JavaScript... I'm told the situation on iOS is much better), and a fe
This makes no sense (Score:2)
You could certainly run Ruby on any mobile device if it had a magic garbage collector that solved everybody's problems. Except there's no such thing that's immune to idiot developers who allocate memory or variables and leave references to them hanging around. The same problems apply to Java on Android.
TFS hasn't inspired me to read TFA, so sorry if it's explained there.
Industry trade secrets revealed (Score:5, Insightful)
I've found the best way to get developers to stop being lazy is to give them shit hardware.
The mistake is buying the latest quad core 2GB android goodness... Give them a 5 yr old piece of shit and the resulting mobile apps will rock.
Rust (Score:3)
Memory management is an issue that has me excited about Rust [rust-lang.org]. Rust memory management is explicit, easy to use, deterministic, efficient and safe. The language designers understand that garbage collection is costly and that endemic use of a garbage collector limits applicability.
Although Rust does have reference counted heap objects on the so-called "exchange" heap, memory is normally allocated on the stack or on a "local" heap (via an "owned" pointer) that has "destructor-based memory management," much like C++ objects but without the leaks and wild pointers.
The result is the vast majority of allocated memory is not managed by the garbage collector. Use of the exchange heap is exceptional and explicit, yet immediately available when necessary. Otherwise, memory "management" is reduced to efficient stack pointer manipulation or simple, deterministic destruction. Compile time checks preclude bad pointers and simple leaks so common with traditional systems languages.
There is a series of detailed blog posts about Rust memory management here [github.io].
Rust was inspired by the need for an efficient, AOT compiled systems programming language that is productive, concise and at least as safe as contemporary "managed" languages. Its memory management scheme goes directly to the point of this story.
Article itself is a waste of memory (Score:3)
GC sucks, real programmers can do memory management, blah blah blah. Tell me the last time a programmer made billions because "he could memory manage" and I'll show you plenty of poorly written websites, apps, software, that suck at memory management yet still managed to become popular and used by millions.
The market decided long ago that fewer programmer hours were better than users waiting a few seconds every day for their device to GC. Users don't exactly like it, but it works, and they get their hands on a more than usable product faster.
But back to the article. In the article there are some fancy charts about how the iPhone 4S only has 512 MB of RAM. OK, a mobile device isn't going to run with a swap file because, well, the manufacturer decided to skimp on flash chip quality, so not only does writing to flash suck, it also runs the risk of forcing the cells to over-provision (meaning shrink in usable capacity). But the iPhone 4S will be 2 years old in 4 months! 2 years = EOL in the phone world.
How about a more current phone? OK, the Google LG Nexus 4, which will be 1 year old in 5 months, comes with a whopping 2 GB of RAM. And it's a relatively cheap phone! That's already half of my 2011 MacBook Air's RAM. Prediction? In 4-5 years, mid-range phones will be shipping with 4 GB of RAM.
OK, let's go the other direction. Let's say we all agree that programmers should sit down and manage memory again. Hurray! Problem solved? No. Because programmers are inherently bad at memory management. Memory will leak. Look at some popular web browser named after a type of animal. So instead of your phone pausing for a second to GC, now your app just crashes! AWESOME.
The standard software engineering practice still applies. Design your system. Build your system. Once it is usable and more importantly has a market, then profile for optimization.
Re:Article itself is a waste of memory (Score:5, Insightful)
The market decided long ago that fewer programmer hours was better than users waiting a few seconds everyday for their device to GC.
No, actually, that's not what happened. As the summary and the story itself (both of which apparently went unread) point out, one of the most successful systems to emerge in the market recently, iOS, is not a GC environment.
Over here [apple.com] you may learn about iOS memory management. Without getting too far into that wall of text one discovers the following:
If you plan on writing code for iOS, you must use explicit memory management (the subject of this guide).
OK, so your claim that GC is the only viable solution for contemporary application development is demonstrably false. Let's look at some other assertions:
programmers are inherently bad at memory management. Memory will leak [if programmers must manage it].
First, the vast number of iOS applications that don't leak shows that a non-GC system doesn't necessarily have to leak. At least not badly enough to compromise the viability of the platform, which is the only meaningful criterion I can think of when it comes to the market.
Second, why assume programmers are inherently bad at a thing when that thing has traditionally been exposed via terrible, error-prone, demonstrably awful mechanisms? It seems to me that among widely used tools we leaped from 'systems' languages with truly heinous MM primitives (C/C++) directly into pervasive GC systems. Aside from Objective-C with ARC there just aren't enough good non-GC systems to make broad generalizations. Thus, you may be right about programmers, but you can't prove it, and I doubt it.
Finally, what proof is there that pervasive GC is better at not leaking than a good explicit MM system? Anyone with an Android system and a bunch of apps will quickly discover that pervasive GC does not eliminate leaks.
[some phone] comes with a whopping 2GB of RAM
Google Glass has 682 MB of RAM. There is always a new platform into which we must fit our software, and the new platform is usually resource constrained, so there will never be a day when questioning the cost of GCs is wrong. Maybe the wearable you eventually put on will have 8 GB of RAM. The computers you swallow or implant or sprinkle around the lawn probably won't. The fact that the next generation of phones can piss away RAM to greedy GCs just isn't particularly informative.
Re: (Score:3, Informative)
Not to say that tracing collectors don't have a bit of a stuttering problem, and I don't want to get into tracing vs. reference counting, but:
1) Decent tracing collectors (and the Java VM has had one for a while) are not nearly as bad as you make them out to be, and
2) You mean "tracing collector" rather than mark-and-sweep. The latter is just the simplest form o
Re: (Score:2)
Generational GC can still be mark and sweep (and most of them are). Generational or not is orthogonal.
Re: (Score:2)
So looking around, it seems like "mark and sweep" does not have a single meaning. I didn't actually know this, and it means that I was wrong to say that the AC I replied to was wrong.
However, I learned "mark and sweep" (and my use is supported by this survey paper [rit.edu] and, I believe, the second edition of the dragon book [stanford.edu] (see, e.g., the lecture 17 slides from here [stanford.edu]) though I can't check right now) to be a specific GC algorithm that basically wor
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
I say again: Memory is cheap.
Blame the vendor.
Re: (Score:2)
Try running Firefox on an old XP box with 512MB of RAM and see how JavaScript chugs.
Re: (Score:2)
Re: (Score:2)
So there is something special about a memory chip built for mobile vs. a memory chip built for desktop?
Server, yes, ECC is fractionally more money. And there is a smaller market for specialized servers that may only have a few tens or hundreds of thousands of customers, so that boosts the price a bit.
But in both the server and mobile spaces, it's primarily an issue of gouging by the vendor. Especially in the mobile space where volume buying is the norm.
Re: (Score:3)
You probably mean "Memory is cheap in terms of dollars".
On a mobile device such as a smartphone, every micrometer of space counts. Every milliwatt of energy consumption counts. It's not about whether another GB of RAM costs ten bucks or so (whatever it is). It is about efficient use of space and battery.
If you add more RAM, it not only uses more energy but because it also requires space, it reduces the space available for either the battery or other components. There are numerous other reasons why memory, i
Re: (Score:3)
Volatile memory is defined as memory that requires power to preserve its contents. All of the 'cheap' memory that you can buy at the moment is volatile memory (and the non-volatile memory is either really expensive, really slow, or both). If you double the amount of RAM, then you double the number of capacitors that must be refreshed every few milliseconds, and so you roughly double the memory's power consumption. You also double the amount of heat that's generated and must be dissipated.
Now, what's the number one compl
Re: Mobile is not a special case (Score:4, Interesting)
10 years ago, desktop computers didn't page to hard drive sectors with half-lives of 100k writes or less, and siphon the bits through a single-bit (SPI/MMC mode) or 4-bit wide (SD-mode) cocktail straw. As a few guys @ XDA learned the hard way, micro-SD wear-leveling is NOWHERE close to being as robust as what's in a desktop SSD, and onboard flash (like what's in the Nexus 4) might not have "SSD" logic *at all* (leaving its management *entirely* up to the OS to cut costs). Vigorously swap to a mostly-full microSD card, and you can *literally* push it into "hard error" land and end up with weakened cells in just one single weekend of aggressive benchmarking. Blindly swapping to internal or microSD flash desktop-style is NOT consequence-free, and is *totally* unsafe to do on any phone without microSD (at least end users can toss & replace a worn-out microSD card).