
understanding ulimit -r -m -v



Hello, 

I'm having a hard time trying to understand what the following limits 
actually do:

-maximum size of a process's data segment

-the maximum resident set size - rss

-the maximum amount of virtual memory


I googled for quite a while and did not find much. That's what I have
figured out so far:

-the maximum resident set size limits the number of pages of a single
process that may be resident in physical RAM at any one time - "how much
RAM a single process may take". The process may still take as much swap
as it likes.
Under Linux processes may be granted more than rss-size pages in RAM;
the limit only controls which processes start swapping earlier.

-Although I know the difference between code and data, heap and stack,
I have no clue what the maximum size of a process's data segment
actually limits.

-Does the maximum amount of virtual memory actually limit the size of
the virtual address space? As far as I can tell from experiments it does
not only affect how much memory a process may use (physical RAM +
swap), but actually constrains the virtual address space of a
process. The kernel still overcommits as much memory as you request
when this limit is in effect. I set it to 100 meg. A test process could
allocate 4 gigs of memory, but only access the first 100 meg. It could
not dereference pointers to any address higher than the 100 meg limit.

My conclusion would be that the rss limit is only useful for fine-tuning
the swapping behaviour of particular processes. Only the maximum amount
of virtual memory (virtual address space?) is suitable for limiting the
memory usage of processes.

My actual problem is that there are many applications that leak memory
or at least use way too much memory when given bad input
(ever tried opening an HTML page with >300 big images in iceweasel?
dillo copes well with it.)
This leads to extensive swapping caused by one bad process and
effectively locks up the machine. Therefore I'd like to limit the total
amount of memory a single process may use (physical RAM + swap space).
I find the virtual memory constraint too harsh. I still like huge
address spaces, and being able to realloc is nice.
Completely turning off swap is not possible, since this is a multiuser
system and there are many sleeping processes lying around.
I'll attach my testing program, in case someone would like to play
around with ulimit.


Any hints, pointers to documentation or better ideas to solve this
problem would be appreciated.


regards,
Christopher Zimmermann
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>

/* Allocate 4 gigs in 1 meg chunks, then touch the first and last
 * chunk, then zero everything - so each ulimit can be seen taking
 * effect at a different stage. */
int main(void) {
    char *mem[4096];

    fputs("mallocating 4 gigs...\n", stderr);
    for (int i = 0; i < 4096; i++) {
        if (i % 32 == 0)
            fprintf(stderr, "mallocated %d megs\n", i);
        mem[i] = malloc(1 << 20);
        if (mem[i] == NULL) {
            fprintf(stderr, "malloc failed after %d megs\n", i);
            return 1;
        }
    }

    fputs("mallocated 4 gigs\n", stderr);
    sleep(1);

    fputs("accessing start of allocated memory\n", stderr);
    mem[0][0] = 0;

    fputs("accessing end of allocated memory\n", stderr);
    mem[4095][0] = 0;

    fputs("initializing 4 gigs...\n", stderr);
    for (int i = 0; i < 4096; i++) {
        if (i % 32 == 0)
            fprintf(stderr, "initialized %d megs\n", i);
        memset(mem[i], 0, 1 << 20);
        putc('.', stderr);
    }
    putc('\n', stderr);

    return 0;
}


