Resident set size
I've been looking at why swap appears not to be working at the
moment. I've identified a small set of relatively recent alterations
to the code in vm_page_seg_evict that, if reverted, restore swap
functionality, but I don't yet know the code well enough to propose a
proper fix.
As part of this research I have also come across another feature that
arises when running low on memory. I have a virtual machine with 2GB
of RAM and a very simple test program (below) that gobbles 400M of
memory per instance. I run 2 of these and, as expected, each process
has a virtual size of ~550M and an RSS of 400M. I then run a 3rd
instance to consume a further 400M. This causes the 1st of my test
programs to have some of its memory paged out, and its RSS drops to
about 150M.
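In case it helps with reproducing this, here is a small program that
asks gnumach directly for a task's sizes through the Mach
task_info()/TASK_BASIC_INFO interface, rather than trusting what ps
reports. It is only a quick sketch and I may well be driving the
interface imperfectly:

#include <stdio.h>
#include <mach.h>

int
main(void)
{
    struct task_basic_info info;
    mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;
    kern_return_t kr;

    /* Ask the kernel for this task's basic accounting. */
    kr = task_info(mach_task_self(), TASK_BASIC_INFO,
                   (task_info_t) &info, &count);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "task_info failed: %d\n", (int) kr);
        return 1;
    }

    printf("virtual:  %lu MiB\n", (unsigned long) (info.virtual_size >> 20));
    printf("resident: %lu MiB\n", (unsigned long) (info.resident_size >> 20));
    return 0;
}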
All of this seems quite reasonable until you discover that no memory
is actually being sent to swap: no messages are sent to the pager at
all. I was so surprised by this that I commented out all of the code
in vm_pageout_scan except for the page balancing, and the behaviour
was unchanged. It is the page balancing that is reducing the resident
memory total for my first process: each page that is moved is marked
VM_PROT_NONE and the task's (pmap) resident_count is decremented
accordingly, but there doesn't seem to be any equivalent increment
for the destination page.
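To make the asymmetry I think I am seeing concrete, here is a toy
model of the accounting. None of this is gnumach code and all of the
names are mine; the real calls I believe correspond are
pmap_page_protect(..., VM_PROT_NONE) on the balancing side (which
unmaps the page and decrements the pmap's resident_count) and
pmap_enter() on the fault path (which increments it):

#include <stdio.h>

/* Stand-in for a task's pmap resident page counter. */
struct toy_pmap {
    long resident_count;
};

/* Fault path: a page is mapped in and the count goes up
   (pmap_enter() in the real kernel, as I understand it). */
static void
toy_fault_in(struct toy_pmap *p)
{
    p->resident_count++;
}

/* Balancing as I read it: the source page is unmapped, which
   decrements the count (pmap_page_protect with VM_PROT_NONE),
   and its contents are copied to a page in another segment, but
   the destination page is never mapped back in, so there is no
   matching increment until the task faults on it again. */
static void
toy_balance_page(struct toy_pmap *p)
{
    p->resident_count--;
    /* ... copy to the destination segment's page here ... */
}

int
main(void)
{
    struct toy_pmap task = { 0 };
    int i;

    for (i = 0; i < 100000; i++)  /* touch 100000 pages, ~400M */
        toy_fault_in(&task);

    for (i = 0; i < 60000; i++)   /* balancing moves 60000 of them */
        toy_balance_page(&task);

    /* Roughly 40000 pages (~160M) remain "resident", in the same
       ballpark as the ~150M RSS I observe, with no pager traffic. */
    printf("resident pages: %ld\n", task.resident_count);
    return 0;
}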
I'm very aware that my knowledge of the gnumach kernel is limited and
that I could be mistaken in my analysis here. I was rather hoping for
some input on this scenario, either to correct my misunderstanding or
to guide me in further research.
Regards,
Mike.
Test program:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <unistd.h>

#define SZ 100000    /* pages: SZ * 4096 bytes, ~400M */

int
main(void)
{
    printf("Usemem: %d\n", (int) getpid());

    /* Allocate ~400M and touch every page so it becomes resident. */
    void *mem = malloc(SZ * 4096);
    if (mem == NULL) {
        perror("malloc");
        return 1;
    }
    memset(mem, 127, SZ * 4096);

    /* Hold the memory while RSS is observed. */
    sleep(3600);

    free(mem);
    return 0;
}