Software Upgrades Hardware Linux

Gaining RAM For Free, Through Software

Posted by timothy
from the but-ram-doubler-is-old-news dept.
wakaramon writes with a piece from IEEE Spectrum about an experimental approach to squeezing more usable storage out of a device's existing RAM; the researchers were using a Linux-based PDA as their testbed, and claim that their software "effectively gives an embedded system more than twice the memory it had originally — essentially for free." "Although the price of RAM has plummeted fast, the need for memory has expanded faster still. But if you could use data-compression software to control the way embedded systems store information in RAM, and do it in a way that didn't sap performance appreciably, the payoff would be enormous."
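The summary doesn't show how CRAMES itself works, but the core idea - compress pages on their way out of active memory into a pool that still lives in RAM, and decompress them on fault - can be sketched in a few lines. This toy model is my own illustration (the class name, page size, and use of zlib are assumptions, not details from the article):

```python
import zlib

PAGE_SIZE = 4096  # 4 KiB pages, as on most Linux systems

class CompressedPageStore:
    """Toy model of a compressed in-RAM swap area, in the spirit of
    CRAMES (or Linux's later zram): evicted pages are compressed into
    a byte pool instead of being written to slow flash or disk."""

    def __init__(self):
        self.pages = {}  # page number -> compressed bytes

    def swap_out(self, page_no, data):
        assert len(data) == PAGE_SIZE
        self.pages[page_no] = zlib.compress(data)

    def swap_in(self, page_no):
        # Decompress and release the pool entry, as on a page fault.
        return zlib.decompress(self.pages.pop(page_no))

    def stored_bytes(self):
        return sum(len(c) for c in self.pages.values())

store = CompressedPageStore()
page = b"customer record: " * 240       # text-like data, padded to one page
page += bytes(PAGE_SIZE - len(page))
store.swap_out(0, page)
print(store.stored_bytes())             # far less than 4096 for text-like data
assert store.swap_in(0) == page         # round-trips losslessly
```

Whether the scheme pays off depends entirely on whether the compression/decompression cost stays below the cost of the slower storage it displaces - which is the question the comments below argue about.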


  • Software patents (Score:4, Interesting)

    by tepples (727027) <tepples@nOSpAM.gmail.com> on Wednesday August 27, 2008 @09:51AM (#24764477) Homepage Journal
    Perhaps timothy just wanted to set up yet another software patent debate. From page 3 of the article [ieee.org], with my emphasis:

    Translating these gains from the lab bench to the marketplace has not been a trivial undertaking, however. In January 2007 we filed for patents on the process, which we dubbed CRAMES, for Compressed RAM for Embedded Systems.

    I haven't managed to find the patent application yet, but I wonder if Connectix's RAM Doubler product would be considered prior art.

  • by pla (258480) on Wednesday August 27, 2008 @10:07AM (#24764751) Journal
    Lightly compressing RAM to make it appear larger hardly counts as a new idea... I ran a program back on Windows 3.11 that did exactly that - And while it did indeed allow running more things at once without suffering a massive slowdown, It came at the cost of making everything run noticeably (though not unusably, as with swap/pagefile use) slower.

    Memory compression had one major drawback, aside from CPU use (which I suspect we would notice less today, with massively more powerful CPUs that tend to sit at 5-10% load the vast majority of the time)... It makes paging (in the 4K sense, not referring to the pagefile) an absolute nightmare, and memory fragmentation goes from an intellectual nuisance that only CS majors care about to a real, practical performance bottleneck. Consider the behavior of a typical program - allocate a few megabytes on startup and zero them out - that compresses down to nothing. Now start filling in that space, and your compression ratio drops from 99.9% down to potentially 0%.

    Personally, I think it could work as an optional (to programs) OS-level alternative to standard memory allocation... The programmer could choose slightly slower compressed memory where appropriate (loading 200 MB of textual customer data, for example), or full-speed uncompressed memory by default (stack frames, hash tables, digital photos, etc.).
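    The allocate-then-fill effect described above is easy to reproduce with any general-purpose compressor. A quick demonstration (zlib stands in here for whatever compressor an OS scheme would use; the 4 MiB size and random bytes are just illustrative):

    ```python
    import os
    import zlib

    SIZE = 4 * 1024 * 1024  # a 4 MiB allocation

    # A freshly allocated, zeroed buffer compresses to almost nothing...
    zeroed = bytes(SIZE)
    print(len(zlib.compress(zeroed)) / SIZE)  # ratio well under 1%

    # ...but once the program fills it with incompressible data (random
    # bytes stand in for e.g. digital photos), compression gains vanish -
    # the "compressed" copy is actually slightly larger than the original.
    filled = os.urandom(SIZE)
    print(len(zlib.compress(filled)) / SIZE)
    ```

    So the apparent memory gain is workload-dependent: text and sparse data compress well, while already-compressed or random-looking data can make the scheme a net loss.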
  • Re:News? (Score:3, Interesting)

    by notthepainter (759494) <oblique@EEEalum. ... inus threevowels> on Wednesday August 27, 2008 @10:11AM (#24764831) Homepage

    These days, with RAM bandwidth being a major bottleneck, it might actually make a lot of sense if you could do the compression / decompression in hardware between the cache controller and the RAM - you'd get more bandwidth to RAM at the cost of slightly more latency.

    When I was at Kodak in the mid 80s we were actively investigating lossless image compression to get past disk-speed bottlenecks. Sadly, our lossless algorithms weren't hitting the compression ratios we needed. This was not for consumer products, but rather for pre-press shops.

  • Re:IBM MXT (Score:3, Interesting)

    by TheRaven64 (641858) on Wednesday August 27, 2008 @10:51AM (#24765485) Journal
    Doing it in software adds a lot of latency to memory accesses, which can make things significantly slower. My PhD involved techniques for handling swapping more accurately, and with the algorithms I was using as a case study (some fairly memory-intensive rendering techniques) I got around a 20% overall speedup by improving the cache hit rate by under 1%. Taking this the other way, increasing memory latency by a small percentage can have a much larger percentage impact on overall performance.

    If you do it in hardware, with the data compressed and decompressed between the cache controller and the memory controller, it might work better - you'd add a little latency, but you'd gain throughput (because less bandwidth would be needed to move the same amount of uncompressed data between CPU and RAM), which might make up for it in many cases, particularly on chips with SMT support.

    The biggest problem I see with doing it in software is that, for it not to be horrendously slow, you need to keep the compression code (and data) in cache at all times. This means that you are reducing the amount of cache available to all other programs, which means you are going to be fetching data from RAM more often, which eliminates much of the use for this.
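    The sensitivity claimed above follows from basic average-access-time arithmetic. A quick sketch (the hit/miss latencies here are illustrative assumptions, not numbers from the post):

    ```python
    def avg_access_ns(hit_rate, hit_ns=1.0, miss_ns=100.0):
        """Average memory access time: fast cache hits plus a
        heavy penalty on the fraction of accesses that miss."""
        return hit_rate * hit_ns + (1 - hit_rate) * miss_ns

    base = avg_access_ns(0.97)    # 97% cache hit rate
    better = avg_access_ns(0.98)  # hit rate improved by one point
    # One extra point of hit rate cuts average access time by ~25%,
    # because the (rare) misses dominate the average.
    print(base, better)
    ```

    The same arithmetic cuts both ways: software compression that inflates the miss penalty, or evicts other code from cache, gets amplified in overall runtime just as a small hit-rate improvement does.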

"You know, we've won awards for this crap." -- David Letterman
