Gaining RAM For Free, Through Software
wakaramon writes with a piece from IEEE Spectrum about an experimental approach to squeezing more usable storage out of a device's existing RAM; the researchers were using a Linux-based PDA as their testbed, and claim that their software "effectively gives an embedded system more than twice the memory it had originally — essentially for free." "Although the price of RAM has plummeted fast, the need for memory has expanded faster still. But if you could use data-compression software to control the way embedded systems store information in RAM, and do it in a way that didn't sap performance appreciably, the payoff would be enormous."
Software patents (Score:4, Interesting)
I haven't managed to find the patent application yet, but I wonder if Connectix's RAM Doubler product would be considered prior art.
Old tech, with one glaring flaw... (Score:3, Interesting)
Memory compression has one major drawback aside from CPU use (which I suspect we would notice less today, with massively more powerful CPUs that tend to sit at 5-10% load the vast majority of the time): it makes paging (in the 4k sense, not referring to the pagefile) an absolute nightmare, and memory fragmentation goes from an intellectual nuisance that only CS majors care about to a real, practical bottleneck in performance. Consider the behavior of a typical program: allocate a few megs on startup and zero it out, which compresses down to nothing. Now start filling in that space, and your compression ratio drops from 99.9% to potentially 0%.
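To make that concrete, here's a minimal sketch using zlib's compress() (link with -lz). The numbers are illustrative, not from the article: a zero-filled 4K page shrinks to a few dozen bytes, while the same page full of random data barely compresses at all.

```c
/*
 * Minimal sketch of the zero-page effect described above. A freshly
 * zeroed page flatters the compression ratio; the same page after the
 * program fills it in can be nearly incompressible.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define PAGE_SIZE 4096

static unsigned long compressed_size(const unsigned char *page)
{
    unsigned char out[compressBound(PAGE_SIZE)];
    uLongf out_len = sizeof(out);

    if (compress(out, &out_len, page, PAGE_SIZE) != Z_OK)
        return 0;
    return out_len;
}

int main(void)
{
    unsigned char page[PAGE_SIZE];

    /* Freshly allocated and zeroed: compresses to almost nothing. */
    memset(page, 0, sizeof(page));
    printf("zeroed page compresses to %lu bytes\n", compressed_size(page));

    /* The same page filled with real (here, random) data: the ratio
     * can collapse toward 0%. */
    for (size_t i = 0; i < sizeof(page); i++)
        page[i] = (unsigned char)rand();
    printf("random page compresses to %lu bytes\n", compressed_size(page));

    return 0;
}
```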
Personally, I think it could work as an optional, OS-level alternative memory allocator that programs opt into... The programmer can choose to use slightly slower compressed memory where appropriate (loading 200MB of textual customer data, for example), or full-speed uncompressed memory by default (stack frames, hash tables, digital photos, etc).
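A rough sketch of what that opt-in interface might look like. The names malloc_compressed()/free_compressed() are hypothetical, not any real OS API; here they just wrap malloc()/free() to show the shape of the interface a compressed heap could sit behind.

```c
#include <stdlib.h>

/* Hypothetical: a real implementation would return memory backed by a
 * compressed pool, trading access speed for footprint. */
static void *malloc_compressed(size_t size) { return malloc(size); }
static void free_compressed(void *ptr) { free(ptr); }

int main(void)
{
    /* Bulk, compressible, rarely-touched data: worth the slowdown. */
    char *customer_text = malloc_compressed(200u << 20); /* ~200MB */

    /* Hot or already-compressed data (stack frames, hash tables,
     * digital photos) stays in ordinary full-speed memory by default. */
    char *hot = malloc(1 << 20);

    free(hot);
    free_compressed(customer_text);
    return 0;
}
```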
Re:News? (Score:3, Interesting)
When I was at Kodak in the mid-'80s, we were actively investigating lossless image compression to get past disk-speed bottlenecks. Sadly, our lossless algorithms weren't hitting the compression ratios we needed. This wasn't for consumer products, but rather for pre-press shops.
Re:IBM MXT (Score:3, Interesting)
If you do it in hardware, with the data compressed and decompressed between the cache controller and the memory controller, it might work better: you'd gain a bit of latency, but you'd get more throughput (because you'd need less bus bandwidth to move the same amount of uncompressed data between the CPU and RAM), which might make up for it in a lot of cases, particularly on chips with SMT support.
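Back-of-the-envelope version of that tradeoff, with made-up numbers (the 6.4 GB/s bus, 2:1 ratio, and 20 ns cost below are assumptions, not figures from MXT):

```c
/* Illustrative arithmetic only; all three numbers below are assumed. */
#include <stdio.h>

int main(void)
{
    double bus_gbs = 6.4;    /* assumed raw CPU<->RAM bandwidth, GB/s */
    double ratio = 2.0;      /* assumed average compression ratio */
    double extra_ns = 20.0;  /* assumed (de)compression cost per line */

    /* Same physical bus, twice the logical data per transfer... */
    printf("effective bandwidth: %.1f GB/s (raw %.1f GB/s)\n",
           bus_gbs * ratio, bus_gbs);
    /* ...paid for with extra latency on every cache-line fill, which
     * SMT can help hide by running another thread during the stall. */
    printf("added latency per fill: ~%.0f ns\n", extra_ns);
    return 0;
}
```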
The biggest problem I see with doing it in software is that, for it not to be horrendously slow, you need to keep the compression code (and data) in cache at all times. This means that you are reducing the amount of cache available to all other programs, which means you are going to be fetching data from RAM more often, which eliminates much of the use for this.