
Re: Bug#176267: ITP: mplayer -- Mplayer is a full-featured audio and video player for UN*X-like systems



On Tue, Jan 28, 2003 at 08:12:17AM +0100, Gabucino wrote:
> > or do you just have no understanding of how scheduling works and what the
> > CPU% value means?
> After two years in media players, I thought I knew what I was talking
> about.

Think again, you clearly don't (and where, exactly, does the design
and analysis of process scheduling code fit into "media
players"?). Did you actually *read* my explanation of why your
"benchmark" was fundamentally flawed? Did you have trouble
understanding some parts of it?

> > ..[cut blabla]..
> Yes, top can be very unreliable, especially when usleep()s are in use.
> However just try to play a simple MP3 with aaxine:
> Cpu(s):  44.5% user,  55.5% system,   0.0% nice,   0.0% idle,   0.0% IO-wait
> 
> And with MPlayer:
> Cpu(s):   1.0% user,   3.6% system,   0.0% nice,  95.4% idle,   0.0% IO-wait

Again, meaningless. Why are you interested in "number of CPU cycles
wasted"? These figures will change almost randomly depending on what
else is running on the host at the same time. They certainly don't
reflect how an application will perform in the cases you care about,
especially if (like xine and mplayer) they adapt their performance
profile when system load is high - which any decent multimedia
application should be doing.
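To illustrate the earlier point about usleep()-heavy players: a process
that sleeps most of the time accumulates almost no CPU time even though
it is "running" for the whole interval, which is why a sampled CPU%
display like top's says so little on its own. A minimal Python sketch
(not from the original mail, just an illustration of the principle):

```python
import time

# A usleep()-heavy "player" loop: mostly asleep, briefly busy.
def sleepy_workload(iterations=10):
    for _ in range(iterations):
        time.sleep(0.05)        # idle: consumes wall time, not CPU time
        sum(range(10_000))      # a short burst of real work

wall_start = time.perf_counter()
cpu_start = time.process_time()
sleepy_workload()
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start

# The process was "running" for roughly half a second of wall time but
# consumed only a small fraction of that as actual CPU time - the
# quantity a sampled top display approximates, noisily.
print(f"wall: {wall:.3f}s, cpu: {cpu:.3f}s")
assert cpu < wall / 2
```

The exact cpu figure will vary from run to run and machine to machine,
which is itself part of the point being made above.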

> But I also just did a more "real-life" benchmark:
> time gcc ... postprocess.c -o postprocess.o . It takes 13.821s if no other
> process is running. Then I started MP3 playing for each software, and
> compiled. The results are as follows:
> 
> aaxine:						36.842s
> mplayer:					15.971s
> 
> So with MPlayer running, compilation took 2.1 seconds more.
> With xine running, compilation took 23.021 seconds more.
> (yes, the compilation was always cached to memory.)

This is closer to the domain of real benchmarks, but it's still at the
sort of level you get from a marketing department. (One golden rule:
if a benchmark is quoted without giving the variance, it's neither
objective nor useful.)
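To make the variance point concrete: repeat the timed compile several
times and report the spread, not a single figure. A short Python sketch;
the run times below are invented for illustration (the original mail
gave only one number per player, which is exactly the complaint):

```python
import statistics

# Hypothetical repeated timings of `time gcc ... postprocess.c`
# with a player running in the background, in seconds. These numbers
# are made up for illustration only.
runs = [15.971, 16.412, 15.803, 17.250, 16.101]

mean = statistics.mean(runs)
stdev = statistics.stdev(runs)  # sample standard deviation

# A single 15.971s reading tells you nothing about whether the next
# run will take 16s or 17s; mean plus spread does.
print(f"mean {mean:.3f}s, stdev {stdev:.3f}s over {len(runs)} runs")
```

With a spread this wide, a 2.1-second difference between two single
runs is barely distinguishable from noise.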

> So thanks for your mail, but next time you doubt I can read 'top' output,
> think twice.

Uhh, the whole point was that top is not a useful benchmarking
tool. What mail were you reading?

-- 
  .''`.  ** Debian GNU/Linux ** | Andrew Suffield
 : :' :  http://www.debian.org/ | Dept. of Computing,
 `. `'                          | Imperial College,
   `-             -><-          | London, UK
