Bug Report?! Part II.
Last week, when I raised an issue here, I thought I had made myself
quite clear, but that doesn't seem to have been the case. So I'll try
again and hope to get my message through. I have to apologise from
the start that it will take somewhat longer, but I won't make the
mistake of leaving out possibly essential details.
I've encountered a behaviour of my GNU/Linux system
(Debian 2.0, kernel version 2.0.34) that I consider to be a bug,
and a severe one too.
I wrote a program and ran it on my old OS (Debian 1.3.1, kernel
version 2.0.30). After switching to 2.0, I recompiled the program to
look for inconsistencies with the new version of libc. Through an
accident during backup I had lost the data of the old runs, so I
did a closer review of my source. There I found the line

  for (I = 0; I < 100; I++) Rel_Vals[I] = log(Rel_Vals[I]);

and found it somewhat overkill to calculate the whole table when in
fact I would need far less. So I calculated a new lower limit and
rewrote the line as

  for (I = low_lim; I < 100; I++) Rel_Vals[I] = log(Rel_Vals[I]);

thereby somewhat increasing the I/O demand of my program (less
computation per record means records are read and updated at a higher
rate), since it reads a record from a file before that routine and
updates it afterwards.
Running my program now led to the previously described situation: the
system seemingly clutters itself with I/O demand and becomes unusable
for anything else. But one thing was mysterious: why did it take up
to 15 minutes until that phenomenon appeared? So I took a closer look
by starting top before firing up my program, and now things became
somewhat clearer. After boot (I'm running xdm), when just the X
server is running besides the other system programs, top shows 18000K
free memory (40M total). When I then fired up my own program, I could
observe the figures for free memory dropping slowly but steadily.
Once the numbers finally reach zero, the system of course starts
using swap, and that is the point when the accompanying I/O demands
seem to increase immensely and the system does hardly anything more
than wait for I/O.
Now, there's just one plausible explanation for that: the system is
trying to cache all used files. OK, I can already hear your comments,
from "did you expect anything else" to "I thought that was common
knowledge", not to forget "and where, please, do you see a bug".
I'll tell you: I consider it a bug that the system tries to cache
even pure input files (InPut = fopen(XFileName, "r");), and I can't
see the rationale behind it, especially when the results can be so
disappointing. In this case it's the main input file that is rather
huge, making up more than 95% of all memory used.
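[Editor's note: a program that wants exactly this behaviour — no lasting cache footprint for a pure input file — can ask for it on later kernels via posix_fadvise(); the call did not exist on the 2.0-era systems discussed here, and the function name `read_and_drop` below is illustrative only:]

```c
#define _POSIX_C_SOURCE 200112L
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>

/* Sketch: read a pure input file, then advise the kernel that its
 * cached pages will not be needed again.  The #ifdef guard keeps the
 * sketch compiling on systems without posix_fadvise(). */
int read_and_drop(const char *XFileName)
{
    char chunk[4096];
    FILE *InPut = fopen(XFileName, "r");   /* pure input, as above */
    if (InPut == NULL)
        return -1;
    while (fread(chunk, 1, sizeof chunk, InPut) == sizeof chunk)
        ;                                  /* consume the whole file */
#ifdef POSIX_FADV_DONTNEED
    posix_fadvise(fileno(InPut), 0, 0, POSIX_FADV_DONTNEED);
#endif
    fclose(InPut);
    return 0;
}
```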
Now I'd like your comments on that. But please spare me the
suggestion of getting better hardware; I can't afford it. I'm more
interested in your opinions on whether this could or should be
considered a bug.
In my efforts to devise a solution for myself, I tried to use the
setvbuf function in all possible combinations, but without observable
changes to memory usage. (At least I found a kind of workaround to
avoid blocking my system: splitting the input file into several
smaller ones seems to let the system reclaim memory faster when
needed, e.g. for firing up xemacs, but that's still far from
satisfying.) During those experiments, however, I stumbled on
something that clearly is a bug, but where's the proper place to
report it?
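[Editor's note: for reference, a minimal sketch of the legal setvbuf() call sequence; the demo function and the buffer size are illustrative, as the actual failing code is not shown. The key point: setvbuf() controls only stdio's user-space buffer and has no influence on the kernel's file cache, which is why no change in the figures shown by top was observable:]

```c
#include <assert.h>
#include <stdio.h>

/* setvbuf() must be called after fopen() (here tmpfile(), so the
 * sketch is self-contained) and before any other operation on the
 * stream, and the buffer must outlive all use of the stream --
 * hence static.  A stack buffer that goes out of scope while the
 * FILE is still open is a classic cause of later crashes. */
static char stream_buf[8192];

int setvbuf_demo(void)
{
    FILE *fp = tmpfile();
    if (fp == NULL)
        return -1;
    if (setvbuf(fp, stream_buf, _IOFBF, sizeof stream_buf) != 0)
        return -1;                 /* must precede any I/O on fp */
    fputs("buffered in user space\n", fp);
    fclose(fp);                    /* flushes and releases the stream */
    return 0;
}
```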
I think the definitions were OK, or the compiler would have
complained. In any case, the run ended in gdb with:

  Program received signal SIGSEGV, Segmentation fault.
  0x400903e0 in __xstat ()
Eager to hear your comments,