on-the-fly compression of drives - possible?

2009-01-10 18:20 GMT   |   #1

Since I have so much linux source code (kernel and other stuff), I was
hoping that it would be possible to enable compression. It could also
just be specific places on the filesystem: /opt/* and /usr/src/*
should be compressed...

Compression should happen transparently to the user (me). Is this
possible?

Significant disc space could be put to better use, so I hope this is
possible.
2009-01-11 00:23 GMT   |   #2
I have been curious about this myself.
2009-01-11 07:20 GMT   |   #3

You may put the filesystem into a normal file of the real filesystem and
loopmount it.
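A minimal sketch of that idea (paths and sizes are made up; for actual compression one would typically put a compressed filesystem such as squashfs or cloop inside the file, which is my assumption, not something the post spells out):

```shell
# Create an ordinary file to hold a filesystem image (16 x 1 MiB blocks).
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
echo "image: $img ($(wc -c < "$img") bytes)"

# The remaining steps need root, so they are shown but not run here:
#   mkfs.ext3 "$img"                 # put a filesystem inside the file
#   mount -o loop "$img" /mnt/src    # loop-mount it like a real disk
# For a *compressed* read-only tree, mksquashfs plus
# "mount -t squashfs -o loop" would replace the two commands above.
```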
2009-01-11 07:20 GMT   |   #4
New site:
2009-01-11 09:20 GMT   |   #5
Possible? Most likely, but I would think that the advantages do not
outweigh the disadvantages.

Disc space is also enormously cheap. A 1TB HD can be had for 80 EUR, and
an external one for some 10 EUR more. Both 3.5 inch.

A 2.5 inch 500GB will cost some 100 EUR internal and some 10 EUR more for
an external one.

And with that you can have as much source code as you would need.

So for 100EUR or less, you will have enough storage for the future. My
/usr is just under 10GB. /usr/src is 3.2GB /opt is below 1GB
I compressed /usr/src and that 3.2GB shrank to 732MB, so quite a
lot of compression, but I will lose speed when using it.
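That kind of number is easy to reproduce on one's own tree. A throwaway sketch (the generated file is just repetitive stand-in text, not real kernel source, so the ratio here will be far better than houghi's 3.2GB-to-732MB):

```shell
# Tar up a directory, gzip the tar, and compare byte counts for a ratio.
src=$(mktemp -d)
yes 'static int unused_variable_for_padding;' | head -n 5000 > "$src/gen.c"
tar cf "$src.tar" -C "$src" .
gzip -9 -c "$src.tar" > "$src.tar.gz"
before=$(wc -c < "$src.tar")
after=$(wc -c < "$src.tar.gz")
echo "before=$before after=$after"
```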

I would not bother with it. If I do not have enough space, the only
solution is to get more space.

Obviously I have no idea of how your HD space is used, so I may be
completely wrong. When you do a standard install, / will take up 10GB
and /home will take up the rest. This could mean that /home is largely
empty, while / is almost completely full.

There are two things I can suggest, even though there are many more:
1) Make /home smaller and / larger
2) Make /home smaller and put /usr (all of it) in a separate partition
of say 20GB (or whatever you think it needs, plus some extra to be sure)

The advantage of the second is that you can mount it read-only, just as
it is intended to be. Also you only need one for all the machines you
have. From `man hier`:
/usr This directory is usually mounted from a separate
partition. It should hold only sharable, read-only data,
so that it can be mounted by various machines running Linux.

Source files for different parts of the system, included
with some packages for reference purposes. Don't work here with
your own projects, as files below /usr should be read-only
except when installing software.

This was the traditional place for the kernel source.
Some distributions put here the source for the default kernel they
ship. You should probably use another directory when
building your own kernel.

Much more there in the man page.
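For suggestion 2 above, a read-only /usr might look like this in /etc/fstab (the device name and size are examples, not taken from the thread):

```
/dev/sda3   /usr   ext3   ro,defaults   0   2
```

Installing packages then needs a `mount -o remount,rw /usr` first, and a remount back to `ro` afterwards.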
2009-01-11 11:20 GMT   |   #6
houghi wrote:
I did a compress of /usr/src and that 3.2 GB went to become 732MB, so a
lot of compression, but I will loose speed when using it.

Did you actually measure this, or did you only read about it
some 5 years ago? With today's processor and RAM speeds, even
with the typical 2:1 compression ratio you usually gain more
time on the reduced I/O than you lose on the decompression
process. In the above case I'd dare to say you gained speed
rather than lost it...
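That claim is easy to sanity-check on one's own hardware. A rough sketch (the file is synthetic, and on a warm cache this says little about real disk I/O, so treat the timings as illustrative only):

```shell
# Compare reading a file raw vs. reading it through gzip decompression.
f=$(mktemp)
yes 'a fairly compressible line of source-like text' | head -n 200000 > "$f"
gzip -9 -c "$f" > "$f.gz"
time cat "$f"     > /dev/null   # raw read
time zcat "$f.gz" > /dev/null   # decompress-on-read
```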

So doubling a disk's capacity for free is not such a bad thing... ;-)
2009-01-11 17:34 GMT   |   #7
I agree.

I come from a Novell NetWare background and this is something it
has done for many years. I consider it a valuable feature, because
it does save disk space and the CPU/disk overhead is pretty light.
However, the space savings are considerable. Compare that to
trying to convince the corporate bean-counters that I need to
spend another $250 for a 300GB SCSI drive and I will take the
free option any day.

The NetWare way is to compress the whole drive, minus some
specific data that can't be compressed since it is part of the OS.
The OS decompresses data when a user requests it, and when saved
it is automatically compressed again. Totally transparent to the user.
2009-01-11 18:20 GMT   |   #8
Hi Scott

From memory, the NW compression used to be opportunistic as well; it left
often-accessed files in an uncompressed state.

I however thought the sub-allocation feature much more useful. In its
day it used to make a huge available-space difference when compared to
fixed cluster sizes on Windows server gear.

I left NW at 5.1.... A corporate takeover forced us all to NT4/W2K.
<sigh> It was actually a huge backward step that left us with quite a
few problems. We were using SuSE 6.1 at the time for email/firewall
duties (Just to keep it on topic!). They are still using it 8+ years
later for server based PDF creation and concatenating, as well as
Hylafax! (I left the company 5 years ago)
2009-01-18 22:04 GMT   |   #9
Bob, I agree the sub-allocation feature is great and very useful.
I recently had to make a move to the total Windows world. Grrr...
I have used NetWare 3.x to 6.5 and am moving into Linux (a little
at work, but mostly on my own). It's not surprising that my
first Linux box (Red Hat 4.4) was the most stable server on the
network (compared to both Windows and NetWare).

So far, I've been working with Ubuntu, Suse and OpenSuse. I am
looking seriously into converting several of my own computers over
to one of those (from Windows).