Bug#318244: segmentation fault with many files
On Fri, 29 Jul 2005, GOTO Masanori wrote:
> Manoj Srivastava wrote:
>> could be something that libc was not prepared for (memory
>> exhausted?).
Now that you mention it, here is some memory information:
> free
             total       used       free     shared    buffers     cached
Mem:        516520     456300      60220          0     142988      49504
-/+ buffers/cache:     263808     252712
Swap:      1004052     173664     830388
> ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
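So nothing memory-related looks exhausted: about 60 MB is free plus most of swap, and every memory-related limit is unlimited. The one value that stands out is the 8192 kB stack size. As a hedged experiment (it is only my assumption that the crash is a stack overflow during wildcard expansion; nothing here confirms it), the limit can be raised in a subshell and make re-run in the directory prepared by the script below:

# Assumption: the segfault is a stack overflow under $(wildcard).
# If make survives with an unlimited stack, the 8192 kB limit is implicated.
( ulimit -s unlimited && make clean )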
> Tuukka, is it easy to reproduce this problem?
Yes, it is easy to reproduce with the script below. Create a new empty
directory and run the script in it:
#!/bin/bash
# FILES=130000: no crash
# FILES=140000: crash
# This script crashes with the following message:
# > ./go.sh
# ./go.sh: line 22: 21020 Segmentation fault      make clean
FILES=140000

# Note: the 'echo hmm' recipe line must begin with a hard tab.
cat >Makefile <<EOF
config.h: \$(wildcard */xxxx)
	echo hmm
EOF

j=0
while [ $j -lt $FILES ]; do
    touch xx-$j
    j=$(($j+1))
done

make clean
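To narrow down where between 130000 and 140000 files the crash starts, a sketch like the following could replace the fixed loop. This is my own addition, not part of the original report, and it assumes a fresh empty directory containing only the Makefile written above. make normally exits with status 2 here (there is no 'clean' rule), so an exit status of 128 or more is taken to mean it was killed by a signal:

# Hedged sketch: add files in batches of 1000 and re-run make each time,
# stopping at the first batch where make dies of a signal (the shell
# reports 128+signal; SIGSEGV gives 139).
j=0
while [ $j -lt 140000 ]; do
    k=$(($j + 1000))
    while [ $j -lt $k ]; do
        touch xx-$j
        j=$(($j + 1))
    done
    make clean >/dev/null 2>&1
    if [ $? -ge 128 ]; then
        echo "make crashed with $j files present"
        break
    fi
done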