
Re: Timeouts



[Adding debian-hppa to the CC, since it may require porter help]

On Tue, Mar 09, 2010 at 06:38:23PM +0000, Iain Lane wrote:
> On Tue, Mar 09, 2010 at 10:46:54AM -0700, dann frazier wrote:
>> On Mon, Mar 08, 2010 at 10:56:53PM +0100, Joachim Breitner wrote:
>>> Hi,
>>>
>>> On Monday, 2010-03-08 at 22:42 +0100, Joachim Breitner wrote:
>>> > > This might be a solution, yes, though I would prefer not to do
>>> > > this. You've managed to get highlighting-kate's build times down, making
>>> > > it build everywhere but on armel (where it got tried on a sloooow
>>> > > buildd, let's see if this gets better on one of the faster boxes). We
>>> > > have similar problem with agda, could you have a look at that?
>>> >
>>> > Maybe Iain Lane can comment on agda.
>>>
>>> speaking of which:
>>> https://buildd.debian.org/fetch.cgi?pkg=agda&arch=hppa&ver=2.2.6-3&stamp=1267580450&file=log&as=raw
>>> says
>>>
>>> Building Agda-2.2.6...
>>> [  1 of 191] Compiling Agda.Auto.NarrowingSearch ( src/full/Agda/Auto/NarrowingSearch.hs, dist-ghc6/build/Agda/Auto/NarrowingSearch.o )
>>> E: Caught signal 'Terminated': terminating immediately
>>> make: *** [build-ghc6-stamp] Terminated
>>> Build killed with signal TERM after 1 minutes of inactivity
>>>
>>> Isn't this time limit a bit too low?
>>
>> I put that in to cause agda to fail and stop retrying, and then bumped
>> it back up.
>>
>> The timeout is normally 300 minutes, but agda would continue to
>> generate output even when it had been wedged for over 12 hours.
>
> I rather suspect that the monotonically increasing memory usage is the  
> problem here. I don't know of a resolution, and the thread with the GHC  
> devs didn't seem to offer anything up. Anyone have any ideas? Dann, did  
> you have a look at the memory usage when the build was going on by any  
> chance?
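
(For reference, that inactivity timeout is essentially a watchdog along
the lines of the sketch below; just the idea, not sbuild's actual code.
Note that the clock resets on every chunk of output, which is exactly
why a wedged build that keeps printing never trips it.)

/* Sketch of a per-output inactivity timeout, not the real buildd code:
 * kill the build only when it produces no output for timeout_min. */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/select.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int timeout_min = 300;            /* the normal limit */
    int fd[2];

    if (argc < 2 || pipe(fd) < 0)
        return 1;

    pid_t child = fork();
    if (child == 0) {                 /* child: run the build command */
        dup2(fd[1], 1);
        dup2(fd[1], 2);
        close(fd[0]);
        execvp(argv[1], argv + 1);
        _exit(127);
    }
    close(fd[1]);

    char buf[4096];
    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(fd[0], &rd);
        struct timeval tv = { timeout_min * 60, 0 };

        /* select() returns 0 only if *nothing* arrived for the whole
         * window; any output at all resets the clock */
        if (select(fd[0] + 1, &rd, NULL, NULL, &tv) == 0) {
            fprintf(stderr, "Build killed with signal TERM after "
                            "%d minutes of inactivity\n", timeout_min);
            kill(child, SIGTERM);
            break;
        }
        ssize_t n = read(fd[0], buf, sizeof buf);
        if (n <= 0)                   /* EOF: build exited on its own */
            break;
        write(1, buf, n);             /* pass the build log through */
    }
    waitpid(child, NULL, 0);
    return 0;
}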

Well, it's stuck again on penalosa, so let me check...

dannf@penalosa:~$ free
             total       used       free     shared    buffers     cached
Mem:       4117004    2402440    1714564          0     735840    1202828
-/+ buffers/cache:     463772    3653232
Swap:      2650684         44    2650640

And it doesn't appear to be increasing.
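
(A one-shot free only proves so much; to watch for a slow leak one
could log the ghc process's VmRSS once a minute, e.g. with a throwaway
sampler like this, purely illustrative:)

/* Illustrative RSS sampler: print a process's VmRSS line from
 * /proc/<pid>/status once a minute until the process exits. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char path[64], line[256];

    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    snprintf(path, sizeof path, "/proc/%s/status", argv[1]);

    for (;;) {
        FILE *f = fopen(path, "r");
        if (!f)                       /* process is gone */
            return 0;
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);  /* e.g. "VmRSS:  203520 kB" */
        fclose(f);
        fflush(stdout);
        sleep(60);
    }
}
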
ghc is taking ~90% of the CPU, and top suggests it's mostly system time:

top - 21:49:22 up 10 days,  3:15,  2 users,  load average: 1.42, 1.45, 1.39
Tasks:  62 total,   2 running,  60 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.7%us, 99.3%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   4117004k total,  2402860k used,  1714144k free,   735840k buffers
Swap:  2650684k total,       44k used,  2650640k free,  1202832k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
  819 buildd    20   0  224m 198m  21m R 92.2  4.9  48:31.18 ghc                
31922 dannf     20   0  3804 1908 1536 R  0.3  0.0   0:00.51 top                
    1 root      20   0  2144  792  652 S  0.0  0.0   0:20.16 init               
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd           
    3 root      20   0     0    0    0 S  0.0  0.0   0:00.13 ksoftirqd/0        

strace shows ghc looping here:

--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
rt_sigreturn()                          = -819
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x40002b28) = -513
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
rt_sigreturn()                          = -819
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x40002b28) = -513
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
rt_sigreturn()                          = -819
clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x40002b28) = -513
--- SIGVTALRM (Virtual timer expired) @ 0 (0) ---
rt_sigreturn()                          = -819
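
The -513 there is ERESTARTNOINTR: the kernel delivers SIGVTALRM (the
GHC RTS drives its scheduler tick off a virtual timer, per the strace),
then transparently restarts the interrupted clone(). If the timer fires
again before clone() can finish, as seems to be happening here, the
restart loop never terminates, which would explain the 99.3% system
time above. A rough illustration of the mechanism (just a sketch, not
the RTS code; on a healthy kernel each fork completes):

/* Arm ITIMER_VIRTUAL with a short interval, then fork() in a loop.
 * Run under strace -f to watch for the SIGVTALRM/clone() pattern:
 * each time the signal interrupts clone(), the kernel restarts the
 * call with ERESTARTNOINTR (-513 above), regardless of SA_RESTART.
 * If the tick is shorter than the time clone() needs, as apparently
 * on this hppa kernel, the restart loop never ends. */
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t ticks;

static void on_vtalrm(int sig)
{
    (void)sig;
    ticks++;                          /* just count the timer signals */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_vtalrm;
    sigaction(SIGVTALRM, &sa, NULL);

    /* 1ms tick in virtual (user CPU) time, similar in spirit to the
     * RTS scheduler timer */
    struct itimerval it = { { 0, 1000 }, { 0, 1000 } };
    setitimer(ITIMER_VIRTUAL, &it, NULL);

    for (;;) {
        pid_t pid = fork();           /* shows up as clone() in strace */
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0)
            _exit(0);                 /* child exits immediately */
        waitpid(pid, NULL, 0);
    }
}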

> Anyway, this seems to be something that was introduced with GHC 6.12.
> I wonder what we can do about this, besides p-a-s
> (Packages-arch-specific).
>
> Iain



-- 
dann frazier

