
Re: how to get a silent harddisk?



Joachim Förster wrote:

On Sat, 13 Sep 2003 23:49:16 -0600 Jacob Anawalt <jacob@cachevalley.com> wrote:

Joachim Förster wrote:

Hi!
On Fri, 12 Sep 2003 14:47:04 -0600 (MDT) "Jacob Anawalt" <jacob@cachevalley.com> wrote:
[snip]

For me, squid disk access while someone on my internal network is using
the proxy is not an issue. If squid were spinning up the drive when
'nothing'* is happening, calling sync()/fsync() for some odd reason then
that would be annoying. I'm running a gateway w/ squid right now, but I
haven't tried to stop the disk from spinning when squid is running.
Well, in the end I actually want a PC that doesn't spin up the disk at all, even when somebody is using it. When somebody is using sshd it would be OK, but the use of dhcpd, squid*, isdnutils and dnsmasq should not bring up the disk.
*squid only with RAM cache, no disk cache!
Cool. I look forward to the details on what you had to tweak to get this :) Maybe send some tips to that silent-linux site I referenced.
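For what it's worth, a memory-only setup like that might look roughly like this in squid.conf — just a sketch, assuming a Squid 2.x build with the null store module compiled in (--enable-storeio=null), and the sizes are guesses for a 64MB box:

```
# squid.conf sketch: keep Squid entirely off the disk.
cache_dir null /tmp            # no on-disk object store (null module)
cache_mem 8 MB                 # modest in-RAM cache for a 64MB machine
cache_access_log /dev/null     # logging would otherwise spin the disk up
cache_store_log none
```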

[snip]

I am unclear from Joachim's email if Squid is spinning up the disk all the
time for him, every x seconds, or only when the proxy is being used. If
it's only the latter then for my needs that's OK.
Sorry, for me squid is spinning up the disk all the time, even when not in use.

It still seems odd if writes are spinning up the drive with the read only
setting. Maybe some file squid wants to read keeps being dropped from file
cache between accesses because other programs or more frequently accessed
files are using all the memory? (I.e., with squid set to use X MB of
memory, is there still enough free memory to cache all the files squid
wants to read? Add to that all the other running programs' requirements.)
I don't know. I moved squid's whole /var and /tmp trees to a tmpfs, so the files are in memory, aren't they?
How much memory does this computer have? Hopefully lots.

Hmmm, only 64MB. I think that could be too little.

You've got /var and /tmp in tmpfs, squid is supposed to be doing all its caching in memory, and any other program you're running that's not under inetd/xinetd but is running as a daemon is in memory. Is there enough left to cache all of the files needed from /etc and the binary/data files in /usr for all these programs?
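For reference, tmpfs mounts like the ones you describe would look something like this in /etc/fstab (the sizes here are only guesses for a 64MB machine):

```
# /etc/fstab entries -- tmpfs takes RAM only as it fills up to "size"
tmpfs  /tmp  tmpfs  size=8m,mode=1777  0  0
tmpfs  /var  tmpfs  size=16m           0  0
```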

I think that this is the problem ...

I wouldn't expect squid to be accessing those files except when the proxy is in use and returning those error pages or images. I think that, especially with your goal of everything in memory, even if you get your squid chroot jail working you really need more memory. Maybe you should even step back and think over why you want to run squid.

I run squid at home mainly for one reason: squidGuard. It was a nice place for me to block all the ads and junk I didn't want to see on any of my computers at home. I still need to extend the system with a web interface for adding sites or listing the current ones, rather than pulling up ssh all the time, but it works. The side benefit of helping to reduce internet traffic by caching documents is nice, but my desktop systems generally cache more web content on their disks and in memory than you have total memory in your gateway. My squid cache is currently 628MB.

I don't use it as a transparent proxy, because I want to have the ability to quickly "choose" to use the proxy or not on each machine, with each browser, by setting the proxy address or not. If I were to go to the transparent proxy, and have a cron that ran at night and in the morning to shut down squid and turn off the iptables redirect, then that would be a way for me to have the hard disk spin down at night.
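Sketched out, that would be something like the following — the LAN interface name and the squid-off/squid-on helper scripts are made up for the example:

```
# Transparent redirect of web traffic into Squid (2.4-kernel iptables);
# eth1 as the LAN side and port 3128 are assumptions for this sketch.
iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 \
         -j REDIRECT --to-port 3128

# crontab sketch: proxy off at 23:00, back on at 07:00.  The hypothetical
# squid-off/squid-on scripts would stop/start squid and remove/re-add
# the PREROUTING rule above.
0 23 * * *  /usr/local/sbin/squid-off
0  7 * * *  /usr/local/sbin/squid-on
```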

If my workplace were to decide they wanted to use a proxy server for web data, I would not try to run it 100% out of memory and I'd give it lots of disk space and memory to work with. For higher traffic gateways I've read that "pro" proxy software will switch to "read only" disk cache during periods of high traffic to avoid the penalty of attempting to write each request to the disk, at the cost of less cached data. The rest of the time those solutions cache to disk. Again I guess I could use the transparent proxy/cron trick described above to spin that disk down outside of business hours.

For either transparent proxy scenario, when the proxy is down, the web still works and it is down during the lowest traffic times so having the cache wasn't too much of an advantage anyway. Sounds like a plausible solution if I give up having per-browser proxy options. I hope they get a better solution to that sometime because I like the non-transparent proxy, but would like to spin down the hard disk when the proxy is not being used. It is a pretty minor issue. I could just shut the proxy down at night anyway, and users could disable their proxy setting if they wanted to use their browser when the proxy is down. There are other more important issues for squid to advance on.

My gateway at the moment has 84MB of memory used for cache, and my desktop is using 48MB of memory for cache. On a gateway with only 64MB of RAM, minus the kernel, minus all other running programs, and doing all the caching in memory, it seems there would be a high turnover of cached web objects because there isn't much room to work with. Even if you are only caching small documents, is it really providing a benefit to your network? How is the system doing so much with 64MB of RAM? You aren't putting everything in tmpfs to avoid disk access and then having the kernel swap all the time for squid's memory requests, right?
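A quick way to see how tight things are is to look at /proc/meminfo; if "Cached" shrinks to almost nothing, the kernel has no room left to keep files in memory:

```shell
# Rough check: how much RAM is free vs. sitting in the page cache
# (values are in kB, straight from /proc/meminfo).
awk '/^(MemTotal|MemFree|Cached):/ { printf "%-9s %8d kB\n", $1, $2 }' /proc/meminfo
```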


Squid has lots of files in /usr/lib/squid, like /usr/lib/squid/errors/* and /usr/lib/squid/icons/*. They shouldn't be getting read every minute, which would keep noflushd from spinning the disk down, and even if they were, if there is enough free memory the kernel can cache those reads.
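One quick way to rule those files out is to pre-read them once and then watch the drive's power state (run as root; the paths follow the Debian squid package layout, so adjust to taste):

```
# Pre-read Squid's static files so later accesses hit the page cache.
cat /usr/lib/squid/errors/English/* /usr/lib/squid/icons/* > /dev/null

# A while later, check whether the drive managed to spin down.
hdparm -C /dev/hda     # "drive state is: standby" means it is spun down
```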

The files in /usr/lib/squid are not that huge. Moving them to tmpfs should be no problem. I tried this yesterday (with a chroot jail), but so far I haven't been successful, because there is something wrong with my chroot jail (see my other mail/answer to Stephan's mail, I think).

Getting the chroot jail going may be a good exercise, but I doubt those small files are the issue. They should end up being cached by the kernel if possible. See above.


Unless someone else answers soon with some pertinent "how to run a diskless squid cache" answers, and you have oodles of memory even with all the tmpfs data and whatever you have squid's cache memory set to, I suggest posting to the squid-users list found at

http://www.squid-cache.org/mailing-lists.html#squid-users

I am interested in knowing what you learn. Even though 100% diskless isn't my goal, I don't want squid to keep the disk spinning when no one is accessing the proxy.

Ok.

Since your goal is to have almost 100% no disk access, perhaps the LEAF project mds mentioned would be the best bet. I haven't looked at it, so I don't know how it meets your needs.

Last evening I was on leaf.sourceforge.net and studied the feature lists. The thing which is missing from all of them (or I didn't see it) is ISDN support. Perhaps I have to look at other sites, too ...

Anyway, I think LEAF will be my last try. In fact I don't want to give up Debian (on my gateway machine :).

Cool! That's how I feel about it. :)

Jacob
