
Re: clock(3) non-functional?



On Tue, 2014-07-01 at 14:47 +0200, Samuel Thibault wrote:
> Svante Signell, on Tue 01 Jul 2014 14:40:44 +0200, wrote:
> > > > > $ ./test
> > > > > start = 3870
> > > > > end = 3910
> > > > > cpu_time_used = 0.000040
> > > > 
> > > > I get:
> > > > gcc -g -Wall test_clock.c
> > > > ./a.out
> > > > start = 0
> > > > end = 0
> > > > cpu_time_used = 0.000000
> > > 
> > > Well, yes, as I said, sleep() doesn't consume CPU while sleeping, so
> > > clock() would only account for the small overhead of starting the
> > > sleep, which is very small. Since the granularity is 1/100th of a
> > > second on the Hurd, that rounds down to zero.
> > 
> > Why are the integers start and end zero?
> 
> For the same reason: the program doesn't even need 1/100th of a second
> to start, so the CPU consumption is basically zero.
> 
> >   cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
> > 
> > and on the Hurd:
> > start = 0
> > end = 423
> > cpu_time_used = 0.000423
> 
> It seems there is an inconsistency between the value returned by
> clock() and CLOCKS_PER_SEC. See the implementation of clock() on the
> Hurd in ./sysdeps/mach/hurd/clock.c: it really is in 1/100ths of a
> second. I guess unsubmitted-clock_t_centiseconds.diff should probably
> also fix CLOCKS_PER_SEC.

Unfortunately, CLOCKS_PER_SEC is frozen by POSIX at 1,000,000,
independent of the actual resolution; see e.g. man 3 clock.
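
For reference, the test program itself is not shown in this thread.
Here is a minimal sketch of that kind of test (not the original
test_clock.c): it brackets a sleep() and, for comparison, a CPU-burning
loop with clock(), and prints the raw tick values as well as the
derived seconds:

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int
    main (void)
    {
      clock_t start, end;
      volatile unsigned long i;

      /* sleep() does not consume CPU time, so end - start should be
         (almost) zero here, as explained above.  */
      start = clock ();
      sleep (1);
      end = clock ();
      printf ("sleep:     start = %ld end = %ld cpu = %f\n",
              (long) start, (long) end,
              ((double) (end - start)) / CLOCKS_PER_SEC);

      /* A busy loop does consume CPU time, so end - start should grow
         with the loop length.  */
      start = clock ();
      for (i = 0; i < 100000000UL; i++)
        ;
      end = clock ();
      printf ("busy loop: start = %ld end = %ld cpu = %f\n",
              (long) start, (long) end,
              ((double) (end - start)) / CLOCKS_PER_SEC);

      return 0;
    }

Note that if clock() on the Hurd really returns centiseconds while
CLOCKS_PER_SEC stays at 1,000,000, the derived value is off by a factor
of 10,000: the "end = 423" above would correspond to 4.23 seconds of
CPU time, not 0.000423. Where available, clock_gettime() with
CLOCK_PROCESS_CPUTIME_ID sidesteps the CLOCKS_PER_SEC scaling entirely,
since it reports CPU time in seconds and nanoseconds.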



