Re: Server REALLY slow after console messages



On Tue, Jun 27, 2006 at 11:29:27PM -0400, Carl Fink wrote:
> > > 	dd if=/dev/zero of=/var/spool/swapfile bs=1024 count=262144
> > > 
> > > 	swapon /var/spool/swapfile
> > 
> > Realistically, this isn't likely to help...  He's already used up 5GB
> > of virtual memory -- 2GB of RAM and 3 GB of swap space.  At such a
> > point, the problem is the system is thrashing the swap disk... that
> > is, it is trying to rapidly pull processes back from swap space as the
> > kernel changes context between all the runable processes.  
> 
> I wasn't suggesting it as a long-term solution, just an attempt to
> buy a few minutes of responsiveness in which to kill the exploding
> process.
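
(As an aside: those commands as quoted will fail outright, because
swapon won't accept a file that hasn't been initialized with mkswap.
If you were actually going to try it, it would need to look more
like:

	dd if=/dev/zero of=/var/spool/swapfile bs=1024 count=262144
	mkswap /var/spool/swapfile
	swapon /var/spool/swapfile

-- but read on for why I don't think it would buy you much.)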

Yeah, I know... but the thing is, it almost certainly won't work.
Your dd command is only going to make the disk busier, increasing
contention, though even that doesn't really matter.  If the system is
thrashing with 3GB of swap used up, adding a couple of hundred
megabytes isn't likely to change anything (I'm going to back this up,
down below a bit).  The system might not even be using most of the
swap -- you could have 2GB of the 3GB free.  The system is thrashing
NOT necessarily because it's out of virtual memory (though it might
be), but because the processes that are demanding CPU time don't all
fit in PHYSICAL memory at the same time.  The kernel ends up
repeatedly bouncing the memory pages for those processes in and out
of RAM/swap.  Reading those pages from and
writing them to the disk is what's killing performance -- compared to
memory, disk is SSSLLLOOOWWW.  Once this process of thrashing starts,
it usually just spirals until the system dies a painful death, unless
the admins can catch it before it gets out of hand and kill the
offending process(es).  Please allow me to illustrate:

Let's say you had a system with 1GB of RAM and 4GB of swap, and two
processes actively running on it.  Let's say process A has a
resident set size (roughly the amount of memory which must be in
physical RAM for the process to be runnable) of 600MB, and process B
has an RSS of 500MB.  You only miss having enough physical RAM by 100MB...
but even still, before one of these processes can run, the other one
must be (at least partially) swapped out to disk.  

For the sake of simplicity, let's pretend nothing is using memory
other than these two processes.  Remember that processes don't
actually run simultaneously (on a single-CPU system at least, but
SMP is another discussion), they take turns.   If process A is
running, and it becomes process B's turn to run, the system will need
500MB of free RAM to run B.  But it only has 400MB free, so it will
need to swap 100MB out to disk to make room for process B.  If your
disk can transfer 40MB/s, that swap will take 2.5 seconds.  So you
have to wait 2.5 seconds before process B is able to run.  Then, after
process B runs for 100 milliseconds (or whatever time slice the kernel
allocates to it -- something similarly small, usually), it becomes
process A's turn to run again.  It has to swap 100MB of process B out,
so that it can swap the original 100MB of process A back in.  That's
200 MB!  Now your swap will take about 5 seconds.  And the part that
sucks is, you have 4GB of swap space allocated, but you're only
actually using about 100MB of it!!!  Oh, that, and your system is
taking about 5.1 seconds to do 0.1s worth of useful work.

Now, on a busy system, multiply that by 50 busy processes, all
competing for time...
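
Incidentally, you can actually watch this happening.  Run something
like:

	vmstat 5

and keep an eye on the "si" and "so" columns (memory swapped in from
and out to disk, per second).  On a healthy box they're almost always
zero; on a thrashing box they'll both be busy on sample after sample,
even though the swpd column (or "free") may say most of your swap is
still unused.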

Granted, I chose abnormally large (but far from impossible) resident
set sizes to illustrate the point, and simplified the problem a lot,
but essentially, that's what's going on.  And that's why adding 256 MB
of additional swap when the system is already thrashing probably won't
help at all.  It's also why having a huge amount of swap space is
mostly just a total waste of disk space (not entirely -- there's some
value in swapping out idle processes to make room for buffer cache).
You're much
better off just buying about twice as much RAM as you think you'll
need (or just buy as much as your budget can possibly allow for), and
configuring a small amount of swap (maybe 300-500MB) just for
emergencies, or so the kernel can swap unused processes in order to
fill RAM with buffer cache (to make disk I/O faster, by caching
frequently used blocks in RAM).  That way you can use all those extra
gigabytes of what used to be swap space for storing your pr0n and
MP3's.  ;-)  In addition, having only a small amount of swap probably
will help your system cope with thrashing much better!  Since there's
a lot less virtual memory, processes will be killed by the kernel
sooner, which means it will be a lot harder for the system to get
itself inexorably wedged by a backlog of processes which all need to
be swapped into RAM before they can run.  Instead of being swapped,
they'll just die.  :-D  Now, that's still bad for your users, but at
least the system will perform better for the ones who manage to get
service...  ;-)
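
For reference, it's easy to check what you've actually got configured
and how much of it is really in use:

	swapon -s    # list the active swap areas and how much is used
	free -m      # RAM, buffers/cache, and swap totals, in MB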

The essential point is this: If you are swapping busy processes, you
are pretty much already screwed.  Anything you do to try to fix it
(like logging in) is going to require more memory, which will only
make the problem worse...  If you don't manage to catch it soon
enough, it's game over for your server.  You need to get in quickly,
kill the offending processes, and figure out why they're misbehaving
in order to prevent a recurrence.
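
When you do manage to get a shell, something like this (on a Linux
box with procps) is usually the quickest way to spot the culprit:

	# biggest memory users first
	ps -eo pid,rss,vsz,comm --sort=-rss | head
	# then kill the offender (escalate to kill -9 if it ignores TERM)
	kill <pid>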

Other things *can* help too...  If the machine is a busy network
server, you could unplug it from the network.  Eventually all the
requests it's trying to serve will time out, and the machine will
recover.  But it could take 5-10 minutes, or 5-10 hours... depending
on how busy the server really was.  You probably can't afford to wait
that long. :(  So, while it does help, it's kind of questionable how
useful it is in practice...  Really, it depends on your environment.
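
(A variation on the same idea, if you already happen to have a root
shell on the box:

	ifconfig eth0 down    # "eth0" is just an example; use whatever
	                      # your interface is actually called

Same effect as pulling the cable, without a walk to the machine
room.)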

> Hmm ... depending on how the offending process starts, ulimit might
> be a way to prevent future memory ballooning.

It could "help" -- but the processes will die unexpectedly when they
exceed their limit, just as they would if the kernel's Out Of Memory
killer did the job...  That's not much of an improvement.  From the
end user's perspective, your server is still toast.  But at least it
could allow the system to stay usable enough for the admins to log in
and figure out what's going wrong... sure.  On the whole though, I
don't think this is a condition you want to deal with -- if you set
the limit too low, processes which are NOT misbehaving may well die
when there are plenty of system resources to allocate to them.  That's
not a good thing.
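
If you decide to go that route anyway, the usual place to set the
limit is whatever script starts the service, with something like this
(the daemon name is just a placeholder):

	# cap the process and its children at ~1GB of virtual memory;
	# ulimit -v takes kilobytes
	ulimit -v 1048576
	exec /usr/sbin/some-daemon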

Realistically, renicing the shell as I suggested in my other post also
probably won't help much -- unless you catch the problem early.  But
it's probably your best shot (especially if you DO catch it early).
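
Concretely, that amounts to something like this, as root, the moment
you get a shell:

	renice -10 -p $$    # give this shell a higher priority
	                    # (negative nice values require root)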

-- 
Derek D. Martin
http://www.pizzashack.org/
GPG Key ID: 0x81CFE75D
