There have, in a similar vein, been discussions suggesting that it would be "highly valuable" to create something like the Windows/Mac "RAM Doubler" utilities.
These utilities come with two components:
A "memory defragmenter" that goes out and reclaims unneeded allocated memory.
There can be no such component in a Linux equivalent; the "defragmenter" essentially demonstrates that MS-DOS and MacOS contain primitive memory allocation code (and in newer versions, mileage may vary...). Linux already does this job quite well; the functionality is intrinsic to the system.
A memory compression system whereby data compression algorithms are used to "stuff" more data into memory, as well as to reduce the size of chunks of information being thrown out to "swapspace."
The theory is that by compressing the data, the computer will seem to have more memory. Unfortunately, there are two ill effects that consume some or all of the potential savings (a small demonstration follows this list):
Compression algorithms consume CPU resources, offsetting possible benefits, and
Compressed data takes up unpredictable amounts of space, making the task of allocating space for the data complex and error prone.
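To make these costs concrete, here is a small, illustrative C program, a sketch only, using zlib's general-purpose compress() routine rather than any particular RAM-doubler's algorithm. It compresses a 4 KB "page" filled with three kinds of content: a page of zeros shrinks to a handful of bytes, patterned data shrinks moderately, and random data barely shrinks at all. An allocator for compressed pages must cope with that whole range of outcomes, and every call burns CPU time.

    /* Sketch of the compression trade-off described above: compressing a
     * fixed-size "page" yields an unpredictable output size, and the work
     * itself costs CPU time.  Uses zlib's compress()/compressBound().
     * Build with:  cc page_compress.c -lz
     */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096

    static void compress_page(const unsigned char *page, const char *label)
    {
        unsigned long bound = compressBound(PAGE_SIZE);
        unsigned char *out = malloc(bound);
        unsigned long out_len = bound;

        if (compress(out, &out_len, page, PAGE_SIZE) != Z_OK) {
            fprintf(stderr, "compress failed for %s\n", label);
            free(out);
            return;
        }
        /* A page of zeros shrinks dramatically; incompressible data may
         * barely shrink at all -- the allocator must cope with both. */
        printf("%-12s %4d bytes -> %4lu bytes\n", label, PAGE_SIZE, out_len);
        free(out);
    }

    int main(void)
    {
        unsigned char page[PAGE_SIZE];
        int i;

        memset(page, 0, PAGE_SIZE);                /* highly compressible   */
        compress_page(page, "zero page");

        for (i = 0; i < PAGE_SIZE; i++)            /* mildly compressible   */
            page[i] = (unsigned char)(i % 64);
        compress_page(page, "patterned");

        srand(1);
        for (i = 0; i < PAGE_SIZE; i++)            /* nearly incompressible */
            page[i] = (unsigned char)rand();
        compress_page(page, "random");

        return 0;
    }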
Under Windows/MacOS, performance is poor enough that nobody is likely to notice the added overhead, and Windows crashes so often that nobody would ever notice if it became somewhat less reliable. On multiuser systems, system corruptions have more widespread effects and are quite unacceptable.
In any case, any gain in apparent memory capacity is unlikely to be significant enough to offset the losses in both performance and system reliability.
All this being said, there has been some formal research into this; the ACM Special Interest Group on Operating Systems had a paper on the topic in 1997, with experimental results under Digital UNIX using several compression algorithms.
The findings were that overall performance improvements on the order of 10% were available by using a variant of Huffman encoding, keeping swap data on disk in compressed form, and integrating swap with disk caching.
If the recent trend continues whereby CPUs are much faster than dynamic RAM, and enormously faster still than secondary storage, it may become worthwhile to use this sort of approach in production systems.
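As a rough sketch of the compressed-caching idea (not the paper's actual design), the following C program keeps "evicted" pages in RAM in compressed form and decompresses them on demand, so that some page faults can be serviced without touching the swap device at all. It uses zlib's deflate rather than a Huffman variant, the page size and slot count are arbitrary illustration values, and a real implementation would fall back to writing pages to swap when the cache is full or the data does not shrink.

    /* Toy "compressed cache": evicted pages are parked in RAM in
     * compressed form and decompressed again on demand.
     * Build with:  cc compressed_cache.c -lz
     */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <zlib.h>

    #define PAGE_SIZE 4096
    #define SLOTS     16

    struct cslot {
        int            used;
        unsigned long  page_no;      /* which page this slot holds   */
        unsigned long  clen;         /* compressed length in bytes   */
        unsigned char *cdata;        /* compressed copy of the page  */
    };

    static struct cslot cache[SLOTS];

    /* "Evict" a page: compress it and park it in a free slot.
     * Returns 0 on success, -1 if no slot is free or the page did not
     * shrink (a real system would then write the page to swap). */
    static int cache_store(unsigned long page_no, const unsigned char *page)
    {
        unsigned long bound = compressBound(PAGE_SIZE);
        unsigned char *buf = malloc(bound);
        unsigned long clen = bound;
        int i;

        if (compress(buf, &clen, page, PAGE_SIZE) != Z_OK || clen >= PAGE_SIZE) {
            free(buf);               /* not worth caching */
            return -1;
        }
        for (i = 0; i < SLOTS; i++) {
            if (!cache[i].used) {
                cache[i].used = 1;
                cache[i].page_no = page_no;
                cache[i].clen = clen;
                cache[i].cdata = buf;
                return 0;
            }
        }
        free(buf);
        return -1;                   /* cache full: would go to disk */
    }

    /* "Fault" a page back in: decompress it if we have it.
     * Returns 0 on a cache hit, -1 on a miss (would read from swap). */
    static int cache_fetch(unsigned long page_no, unsigned char *page_out)
    {
        int i;
        for (i = 0; i < SLOTS; i++) {
            if (cache[i].used && cache[i].page_no == page_no) {
                unsigned long dlen = PAGE_SIZE;
                if (uncompress(page_out, &dlen, cache[i].cdata,
                               cache[i].clen) != Z_OK || dlen != PAGE_SIZE)
                    return -1;
                free(cache[i].cdata);
                cache[i].used = 0;
                return 0;
            }
        }
        return -1;
    }

    int main(void)
    {
        unsigned char page[PAGE_SIZE], back[PAGE_SIZE];
        memset(page, 'A', PAGE_SIZE);

        if (cache_store(42, page) == 0 && cache_fetch(42, back) == 0)
            printf("page 42 round-tripped via compressed cache: %s\n",
                   memcmp(page, back, PAGE_SIZE) == 0 ? "ok" : "corrupt");
        return 0;
    }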