
Performance in scientific applications



Hi,

probably this is an off-topic email, but I would like to ask here
because some of you may be able to guide me.

I'm working with some scientists to evaluate programs that compute the
solution to a problem. They basically run an instance of ILOG CPLEX and
record the time used to calculate the solution or some heuristics.

We use a server to run the calculations, and we have found (obviously)
significant variations that depend on the load of the machine. What we
want is some kind of measure that is independent of the server's load.

One approach has been to use CPU time: in our case, the total number of
CPU-seconds that the process used directly (in user mode). We are not
talking about the elapsed real (wall-clock) time, which obviously is
directly affected by the load of the machine.
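
For reference, this is roughly what I mean by collecting that number.
A minimal sketch in Python; the solver command line is hypothetical
and only stands in for our actual CPLEX invocation:

    import resource
    import subprocess

    # Hypothetical command line; replace with the real solver invocation.
    cmd = ["./solve_instance", "model.lp"]

    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)

    # ru_utime accumulates the user-mode CPU seconds of all waited-for
    # child processes, so the difference isolates this particular run.
    print("user CPU seconds:", after.ru_utime - before.ru_utime)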

However, in several tests we have found that even this measure varies
by about 10%, depending on the load. We have also looked at the CPU
time spent in kernel mode and at the number of involuntary context
switches, but honestly I have not been able to get a clear idea of
what is going on.
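
To be concrete, these are the counters I mean; the same getrusage
structure exposes them (again a sketch, with a hypothetical command
line):

    import resource
    import subprocess

    # Hypothetical command line, as in the sketch above.
    cmd = ["./solve_instance", "model.lp"]

    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)

    # ru_stime: CPU seconds spent in kernel mode (system calls, page faults).
    # ru_nivcsw: involuntary context switches, i.e. how often the scheduler
    # preempted the process in favour of another runnable task.
    print("kernel CPU seconds:          ", after.ru_stime - before.ru_stime)
    print("involuntary context switches:", after.ru_nivcsw - before.ru_nivcsw)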

Another open question is how the number of cores or physical CPUs in
the server affects these measurements.

Have any of you run into these issues and solved them?

Best regards,

Leopold


-- 
Linux User 152692 GPG: 05F4A7A949A2D9AA
Catalonia
-------------------------------------
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing in e-mail?





