Gaining RAM For Free, Through Software

wakaramon writes with a piece from IEEE Spectrum about an experimental approach to squeezing more usable storage out of a device's existing RAM; the researchers were using a Linux-based PDA as their testbed, and claim that their software "effectively gives an embedded system more than twice the memory it had originally — essentially for free." "Although the price of RAM has plummeted fast, the need for memory has expanded faster still. But if you could use data-compression software to control the way embedded systems store information in RAM, and do it in a way that didn't sap performance appreciably, the payoff would be enormous."
  • Stack is back (Score:3, Informative)

    by krischik ( 781389 ) <krischik&users,sourceforge,net> on Wednesday August 27, 2008 @09:27AM (#24764149) Homepage Journal

    That's an old idea - using transparent compression to gain more memory...

  • News? (Score:5, Informative)

    by TheRaven64 ( 641858 ) on Wednesday August 27, 2008 @09:29AM (#24764191) Journal
    RAMDoubler and other software did this for DOS almost two decades ago. A poor-man's version could be done with Windows 3.x by making a RAMdisk, using DoubleSpace on it, and then putting swap files on it.

    These days, with RAM bandwidth being a major bottleneck, it might actually make a lot of sense if you could do the compression / decompression in hardware between the cache controller and the RAM - you'd get more bandwidth to RAM at the cost of slightly more latency.
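
    To put rough numbers on that trade-off (all figures below are invented for illustration, not measurements of any real memory controller): an in-line compressor adds a fixed latency to every access but halves the bytes that actually cross the bus, so large transfers win and tiny random accesses lose.

    /* Back-of-envelope model of an in-line RAM compressor. Invented numbers. */
    #include <stdio.h>

    int main(void)
    {
        double bus_bw   = 3.2e9;  /* bytes/s the RAM bus can move (hypothetical)   */
        double base_lat = 60e-9;  /* seconds of latency for an uncompressed access */
        double comp_lat = 20e-9;  /* extra latency added by the (de)compressor     */
        double ratio    = 0.5;    /* compressed size / original size (assumed 2:1) */
        double xfer     = 4096;   /* bytes per transfer (one page)                 */

        double plain  = base_lat + xfer / bus_bw;
        double packed = base_lat + comp_lat + (xfer * ratio) / bus_bw;

        printf("uncompressed page transfer: %.0f ns\n", plain  * 1e9);
        printf("compressed page transfer:   %.0f ns\n", packed * 1e9);
        printf("effective bandwidth gain:   %.2fx\n", plain / packed);
        return 0;
    }

    For a full 4 KB page the halved payload dominates; for a single cache line the extra compressor latency is pure loss, which is exactly the latency cost mentioned above.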

  • Re:News? (Score:3, Informative)

    by LordKronos ( 470910 ) on Wednesday August 27, 2008 @10:10AM (#24764803)

    It depends on the application.

    If you want a stream of data to come through at a certain rate, and you don't care whether it starts now or 1 second later, just as long as it comes through at that rate, then latency isn't a concern.

    On the other hand, if you want to read something from memory and have it available in a few clock cycles, then latency is a concern.

  • by Moraelin ( 679338 ) on Wednesday August 27, 2008 @10:17AM (#24764939) Journal

    Actually, I'd rate that informative rather than funny. I tried a couple of such programs back then, and invariably they were just a fancy way to slow your computer down. (Mildly.)

    Basically, the way it worked was:

    1. Report more RAM to the OS. That's actually what your swap file does too. Virtually any modern processor has _some_ way to pretend it has more memory than physically present, with the extra bytes being in a swap file.

    2. Set aside half the memory as a kind of compressed, virtual (in memory) swap file.

    So at this point, let's say your computer had 4 MB of RAM (hey, back then we didn't measure RAM in gigabytes). Now you'd only have 2 MB of it free as physical memory for your programs, and 2 MB set up as a compressed swap file. But your OS thought you had 8 MB, with 2 MB of free RAM and 6 MB of swap space.

    3. However, you typically still wanted some actual swap space, because you don't know, and can't guarantee, how well the swapped-out data compresses. If you swap out, say, a table of random numbers, you may not be able to compress it at all. Funky things can happen when the OS thinks it has room to swap a page out, but it turns out that the page doesn't fit there. The actual HDD swap file would be, at the very least, the safety net to catch whatever didn't fit into that RAM buffer. (A toy sketch of this whole scheme follows below.)
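
    Here's a minimal sketch of steps 2 and 3 (the names and the trivial run-length "compressor" are mine, purely for illustration; real products used far better algorithms): pages evicted from normal RAM go into a fixed-size compressed pool, and anything that doesn't compress well enough falls through to the real on-disk swap.

    /* Toy compressed in-RAM swap pool. Hypothetical and illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define PAGE_SIZE 4096
    #define POOL_SIZE (2 * 1024 * 1024)   /* the 2 MB "set aside" in the example */

    static unsigned char pool[POOL_SIZE];
    static size_t pool_used;

    /* Trivial run-length encoder standing in for a real compressor. */
    static size_t rle_compress(const unsigned char *src, size_t n,
                               unsigned char *dst, size_t cap)
    {
        size_t out = 0;
        for (size_t i = 0; i < n; ) {
            unsigned char b = src[i];
            size_t run = 1;
            while (i + run < n && src[i + run] == b && run < 255)
                run++;
            if (out + 2 > cap)
                return 0;                 /* didn't fit: treat as incompressible */
            dst[out++] = (unsigned char)run;
            dst[out++] = b;
            i += run;
        }
        return out;
    }

    /* Swap a page out: into the pool if it compresses and fits, else to disk. */
    static void swap_out(const unsigned char *page)
    {
        unsigned char buf[PAGE_SIZE];
        size_t c = rle_compress(page, PAGE_SIZE, buf, sizeof buf);

        if (c > 0 && c < PAGE_SIZE / 2 && pool_used + c <= POOL_SIZE) {
            memcpy(pool + pool_used, buf, c);   /* step 2: compressed in-RAM swap */
            pool_used += c;
            printf("page -> pool (%zu bytes compressed)\n", c);
        } else {
            /* step 3: the safety net -- the raw page goes to the real swap file */
            printf("page -> disk swap (incompressible or pool full)\n");
        }
    }

    int main(void)
    {
        unsigned char zeros[PAGE_SIZE] = {0};          /* compresses extremely well */
        unsigned char noisy[PAGE_SIZE];
        for (int i = 0; i < PAGE_SIZE; i++)
            noisy[i] = (unsigned char)rand();          /* stands in for random data */

        swap_out(zeros);
        swap_out(noisy);
        return 0;
    }

    Run it and the all-zero page lands in the pool at a few dozen bytes, while the noisy page falls through to disk, which is exactly the safety-net case in point 3.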

    Now the thing is:

    A. That virtual compressed swap space was typically faster than HDD (we didn't have 15,000 RPM drives with huge caches, back then), but, here's the important part, _much_ slower than just plain old free RAM. ("Free" as in "available to the OS as it is.") Even the page fault itself, never mind the compression, was _much_ slower than the few cycles required to just read a memory page.

    Compression didn't make it much better. Almost any decent compression algorithm is fast when decompressing but slow when compressing, and when handling a page fault in that context you had to do both: compress the page you want swapped out, and decompress the page you want swapped in. Not only did that take time, it was CPU time, unlike I/O time, which in an ideal world happens via DMA and lets your CPU schedule some other task in the meantime.

    B. However, now you had less free RAM _and_ were encouraged to load more into it. If you had 5 MB of memory in use on the above described computer, without RamDoubling scams, you'd have 4 MB of physical memory in use and 1 MB swapped to disk. With such a RamDoubling scheme, you had 2 MB in actual normal RAM, and 3 MB swapped out.

    In almost all cases, the "ram doubling" inherently increased the number of pages swapped in and out per second. In some cases, dramatically. (E.g., Java's GC didn't play nice at all with swapping anyway. It already tended to push everything else out. Play with it in even less space, and things could get funny.)

    So a lot of the time, sometimes even most of the time, all you'd get for your efforts was slowing your computer down. And a useless number telling you "now you have 8 MB RAM!!!!11oneeleventeen", but not what the cost there is, or even what it really means.

  • Re:SoftRAM (Score:5, Informative)

    by mr_mischief ( 456295 ) on Wednesday August 27, 2008 @10:20AM (#24764993) Journal

    Well, it can work -- sort of. You lose CPU cycles to do the compression and decompression. You have to use part of your uncompressed memory to store the compression and decompression routines.

    Not everything compresses very tightly. With a lot of audio, video, and still-image formats you're trying to compress an already compressed format, and you end up decompressing it twice to work with it (once from the RAM compression, which barely compressed anything anyway, and once in the viewing or editing package) instead of once.
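
    A quick way to see why: already-compressed data looks nearly random byte-for-byte, so a simple entropy estimate lands close to the 8-bits-per-byte ceiling and leaves a second compressor nothing to remove. Sketch below; the two buffers are made-up stand-ins, not real files.

    /* Estimate Shannon entropy (bits per byte) of a buffer -- illustration only. */
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    enum { N = 1 << 16 };
    static unsigned char text_like[N];   /* low-entropy stand-in for plain text  */
    static unsigned char jpeg_like[N];   /* high-entropy stand-in for a JPEG/MP3 */

    static double entropy_bits_per_byte(const unsigned char *buf, size_t n)
    {
        size_t count[256] = {0};
        for (size_t i = 0; i < n; i++)
            count[buf[i]]++;

        double h = 0.0;
        for (int b = 0; b < 256; b++) {
            if (count[b] == 0)
                continue;
            double p = (double)count[b] / (double)n;
            h -= p * log2(p);            /* 8.0 = incompressible, 0.0 = one value */
        }
        return h;
    }

    int main(void)
    {
        for (size_t i = 0; i < N; i++) {
            text_like[i] = "the quick brown fox "[i % 20];
            jpeg_like[i] = (unsigned char)rand();
        }
        printf("text-like buffer: %.2f bits/byte\n", entropy_bits_per_byte(text_like, N));
        printf("JPEG-like buffer: %.2f bits/byte\n", entropy_bits_per_byte(jpeg_like, N));
        return 0;
    }

    The text-like buffer comes out a bit under 4 bits per byte, leaving plenty to squeeze; the random one sits at essentially 8, which is roughly where JPEG and MP3 payloads already live.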

    The thing is, if you're already tight on memory, using a good portion of it to double what's left may not gain you much. Something like this should cost a lower percentage of memory now than it did on a 1-megabyte or 4-megabyte system, though, because the compression and decompression code itself shouldn't need to be much bigger than it was ten or twenty years ago.

    The thing is, even if you're getting double the effective RAM, you're still burning all kinds of cycles if you're doing this on the CPU. If you're doing it in the controller, you'd better be able to do it faster than the CPU needs the bits, because memory throughput is already the bottleneck for most applications.

    People put more memory into their machines for better performance. The virtual memory swapping to disk/flash is a problem long solved, after all. For this scheme to be worthwhile, several things have to work out:

    1. The RAM plus the compression scheme must be cheaper than just buying and slotting twice the RAM.
    2. The compression scheme can't raise too much contention for space on the motherboard vs. other components, or it'll drive the price of the device up past what simply adding more RAM would cost.
    3. It has to perform better and faster than swapping to disk or flash, or be a hell of a lot cheaper.
    4. It has to work flawlessly, because it's an extra layer of complexity that slotting double the RAM doesn't bring. If it has a single bug that bites 0.001% of the time, the cost to productivity outweighs the cost of extra RAM.
    5. The compression and decompression has to work faster than the bus transfers the data to and from the CPU, or you're losing the performance of your RAM and might as well use a swap file.
    6. If they patent it and there's a licensing cost, falling RAM prices will quickly erase whatever price advantage this offers over doubling the physical RAM.
  • Re:Stack is back (Score:5, Informative)

    by Alan Shutko ( 5101 ) on Wednesday August 27, 2008 @10:44AM (#24765355) Homepage

    If you had RTFA, you'd have found that the difference they've made is they've developed a compression scheme that doesn't have the huge performance penalty that old techniques had. (Specifically, they claim "0.2 percent on average and 9.2 percent in the worst case.")

  • Stack is back indeed (Score:5, Informative)

    by DrYak ( 748999 ) on Wednesday August 27, 2008 @11:15AM (#24765873) Homepage

    >If you had RTFA, you'd have found that the difference they've made is they've developed a compression scheme that doesn't have the huge performance penalty that old techniques had.

    Back then, the real Stack (not Microsoft's poorly implemented and unstable clone) didn't have a huge impact on performance either, as it used a big cache and had significantly less data to transfer to and from a hard disk that, back then, didn't shine bandwidth-wise.

    The reason Stack died isn't the performance hit. The reason Stack died is a combination of:
    - Microsoft managing to instill paranoia about real-time compression, thanks to their DoubleSpace crap
    - huge drops in storage prices, which made on-the-fly compression irrelevant
    - newer data formats that are hard to compress anyway (Stack could be efficient back then, when most graphics were RLE-encoded bitmaps; now that everything is stored as JPEGs and MP3s, there's not much an additional layer of compression can do).

    Ultra-fast compression algorithms like LZO aren't something new, and could easily be implemented in a hardware chip for even faster performance.
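
    For what it's worth, using LZO from user space is only a handful of calls. This is a rough sketch against liblzo2's lzo1x interface as I remember it, so treat the header path, buffer sizing and call details as assumptions and check the library's own docs and examples before relying on it:

    /* Rough LZO1X sketch -- verify against liblzo2's documentation before use. */
    #include <stdio.h>
    #include <string.h>
    #include <lzo/lzo1x.h>

    int main(void)
    {
        /* Output buffer sized with the worst-case formula from the LZO docs. */
        static unsigned char in[4096], out[4096 + 4096 / 16 + 64 + 3], back[4096];
        /* Aligned work memory required by the fast lzo1x_1 compressor.       */
        static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                                  / sizeof(lzo_align_t)];
        lzo_uint out_len = 0, back_len = 0;

        if (lzo_init() != LZO_E_OK) {
            fprintf(stderr, "lzo_init failed\n");
            return 1;
        }

        memset(in, 'A', sizeof in);                /* a highly compressible "page" */

        lzo1x_1_compress(in, sizeof in, out, &out_len, wrkmem);
        printf("4096 bytes -> %lu bytes\n", (unsigned long)out_len);

        lzo1x_decompress(out, out_len, back, &back_len, NULL);
        printf("round trip %s\n",
               back_len == sizeof in && memcmp(in, back, sizeof in) == 0 ? "ok" : "FAILED");
        return 0;
    }

    Link with -llzo2 (or drop in miniLZO). The point is only that this class of compressor is cheap enough that putting it in a small hardware block, as suggested above, is entirely plausible.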

    Such compression *could* have been useful a decade ago, when PDAs still had limited memory and cost a lot.
    Now, with the price drops in memory and the increased popularity of solid-state storage (micro-SD cards have just insane capacity these days), it's hard to be short on memory even on an embedded device.

    So it's nice that they've worked through all the technical difficulties to get real-time compression at RAM-level bandwidth.
    But they developed it a decade too late to have any marketable product.

  • by tlhIngan ( 30335 ) <slashdot.worf@net> on Wednesday August 27, 2008 @11:54AM (#24766483)

    I briefly tried RamDoubler on Windows and didn't seem to notice much of a difference, but on MacOS it was very effective; essential, even.

    The reason is that MacOS's memory manager, to be honest, sucked. The basis of the scheme was "Minimum Memory"/"Preferred Memory": the OS would check how much RAM was free, and if it was greater than the minimum it would start the app; if not, it wouldn't. If there was more free RAM than the preferred size, the app got the preferred amount; if not, it got something in between. Mac apps were responsible for monitoring how much RAM was free and for not doing operations when they were running low. The problem was that if you had a big document or a big dataset (an email inbox, say), the minimum often wasn't enough, and you ended up having to enlarge both by hand.
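
    Paraphrased as code (my own sketch of the behaviour described above, not Apple's actual memory manager), the launch-time decision was roughly:

    /* Sketch of the classic MacOS Minimum/Preferred partition decision. */
    #include <stdio.h>

    /* Returns the partition size the app gets, or 0 if the launch is refused. */
    static long partition_for_app(long free_ram, long minimum, long preferred)
    {
        if (free_ram < minimum)
            return 0;                /* not enough free RAM: app won't start     */
        if (free_ram >= preferred)
            return preferred;        /* plenty free: app gets its preferred size */
        return free_ram;             /* in between: app gets whatever is free    */
    }

    int main(void)
    {
        /* Hypothetical app asking for a 2 MB minimum and a 6 MB preferred size. */
        long free_cases[] = { 8L << 20, 4L << 20, 1L << 20 };
        for (int i = 0; i < 3; i++) {
            long got = partition_for_app(free_cases[i], 2L << 20, 6L << 20);
            if (got)
                printf("%ld MB free -> %ld MB partition\n", free_cases[i] >> 20, got >> 20);
            else
                printf("%ld MB free -> launch refused\n", free_cases[i] >> 20);
        }
        return 0;
    }

    The partition was then fixed for the app's lifetime, which is why opening a bigger document than the partition allowed for meant quitting and raising those numbers by hand.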

    So you had a dilemma in MacOS: set the partition too big and you couldn't launch the app when you needed it (or the app didn't need the space and it simply went to waste); set it too small and the app crashed or errored out.

    Using swap wasn't always an option, for it inevitably made MacOS slower, so many people ran MacOS without swap.

    What RAMDoubler did, effectively, was manage this better. It first created a swap file as big as RAM (so it could "double" it if things went badly), then managed the free space more carefully: if an app wasn't using the RAM it had been allocated, RD would reclaim it as buffer space for compressing unused pages. It worked remarkably well: if you kept the apps' "minimum" and "preferred" sizes under the physical memory size, it had a very small impact on system performance (much less than MacOS's swap file). You only got thrashing when you tried to use an app whose memory allocation was set higher than physical memory.

    In the end, the general recommendation was that the tool was there to let you keep more apps running at once, rather than to let you launch apps that needed more physical RAM than you actually had. In that regard it actually succeeded fairly well, but only because of the general awfulness of the MacOS memory model. Not entirely MacOS's fault, for it ran on systems without an MMU.

    On Windows 3.1 (protected mode), I'm not so sure it had that great an effect: sure, the Windows memory manager was horrible, but RD on Windows didn't actually improve matters much.

    MacOS, though, needed it. There was even a hack you could apply to change the multiplier it used, so you could use it for boasting. Of course, that used more disk space for the swap file and caused more slowdowns, but it was still an improvement. Its life ended shortly after Apple moved to the PowerPC chips, when, to the surprise of most people, you actually wanted to turn ON the swap file on MacOS if you had a PowerPC machine: the system ran markedly faster (but only if the swap file was within a certain range of sizes). RD worked (I believe it was available as a fat binary), but it didn't accomplish much over what PowerPC MacOS could do on its own, and the benefit wasn't worth the cost.

  • by TubeSteak ( 669689 ) on Wednesday August 27, 2008 @01:10PM (#24767647) Journal

    Such compression *could* have been useful a decade ago, when PDAs still had limited memory and cost a lot.

    "Limited memory" is relative to the number of programs I want to be running.

    Now, with the price drops in memory and the increased popularity of solid-state storage (micro-SD cards have just insane capacity these days), it's hard to be short on memory even on an embedded device.

    1. memory aka RAM uses electricity, which is an issue in portable devices
    2. micro-SD is not RAM, making it irrelevant to the discussion.

    But they developed it a decade too late to have any marketable product.

    Did you RTFA?
    They've already licensed it to NEC for commercial use.

"May your future be limited only by your dreams." -- Christa McAuliffe

Working...