This is quite vague at this point, but I'm looking for ideas on how to
track down performance problems.
We moved an application from a development machine to a client's
managed server and noticed it running quite a bit slower. Simple
requests[1] take about three to five times longer. And this happens
when the load average is above one on the fast machine while the load
average is < 0.1 on the client's server.
I'd like to provide the ISP with more info than "It's running slow" so
I'm wondering what tools to use to compare these two machines.
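One crude, self-contained probe (the sizes and file names below are my own assumptions, not anything from the setup above) is to time the same fixed amount of CPU and disk work on both boxes and compare the wall-clock numbers. A 3-5x gap here would point at the hardware or kernel; similar numbers would point at the application stack instead:

```shell
#!/bin/sh
# Crude probe: do a fixed amount of CPU and disk work, print elapsed seconds.
# Run identically on both machines and compare the numbers.

# CPU: checksum 10MB of zeros (hypothetical size, adjust as needed)
cpu_start=$(date +%s)
dd if=/dev/zero bs=1M count=10 2>/dev/null | md5sum > /dev/null
cpu_end=$(date +%s)
echo "cpu: $((cpu_end - cpu_start))s"

# Disk: write and sync a 10MB file, then clean up
disk_start=$(date +%s)
dd if=/dev/zero of=/tmp/iotest bs=1M count=10 2>/dev/null
sync
disk_end=$(date +%s)
rm -f /tmp/iotest
echo "disk: $((disk_end - disk_start))s"
```

Whole seconds are coarse, but they're enough to show a 3-5x difference, and this runs on a stock 2.4 or 2.6 Debian box with no extra packages.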
Some of the basic specs are:
              Development       Client's Server
              ---------------   ----------------------
CPU           Athlon XP1800+    Xeon
MHz           1150.591          1793.936
cache         256KB             512KB
bogomips      2260.99           3565.15
RAM           1GB               .5GB
OS            Deb Unstable      Deb Sarge
Kernel        2.6.6             2.4.28
Tasks         115               214
fs            xfs atime         ext3 noatime, nodiratime
From that alone it would seem like the client's server would be
faster, although I'm sure that's not the entire story. Yet, the
development server can have a load average over 1 and still process
simple requests faster than the client's server does when its load
average is < 0.1.
vmstat shows no swapping, so memory does not seem to be a problem.
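For the record, the same memory numbers can be pulled straight from /proc without any extra tools installed; a minimal sketch, assuming a Linux /proc (which both a 2.4 and a 2.6 kernel provide):

```shell
# Minimal memory check without vmstat: if SwapFree is close to SwapTotal
# and MemFree plus cache is healthy, swapping isn't the bottleneck.
grep -E '^(MemTotal|MemFree|SwapTotal|SwapFree)' /proc/meminfo
```

Capturing this on both machines under load makes for a concrete before/after to hand the ISP, rather than "it's running slow".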
[1] It's a fast_cgi application that's just returning a small file
from the file system -- no database access involved in this request.
--
Bill Moseley
moseley@hank.org