Solaris mmap for memory mapped file failing with ENOMEM


On both Solaris 10 and Linux, I am using the mmap call to create a memory-mapped file and subsequently read it from a separate process. For a large memory-mapped file, mmap fails with ENOMEM during reading (there is no writing). What could be the reason, and what is the remedy or way forward? I thought a memory-mapped file does not occupy memory for its entire size.

I am using the following call:

char *segptr = (char *) mmap(0, sz, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

where sz is the file size and fd is the descriptor of a file opened with open.
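For context, the full setup looks roughly like the sketch below (the function name and the read-only mapping are my choices for illustration; since this process only reads, PROT_READ alone would suffice, and checking for MAP_FAILED is how the ENOMEM shows up):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map an existing file read-only and return the mapping (NULL on error).
 * On success, *out_sz receives the file size. */
void *map_file_ro(const char *path, size_t *out_sz)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1) {
        perror("open");
        return NULL;
    }

    struct stat st;
    if (fstat(fd, &st) == -1) {
        perror("fstat");
        close(fd);
        return NULL;
    }

    void *segptr = mmap(0, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);              /* the mapping stays valid after close */
    if (segptr == MAP_FAILED) {
        perror("mmap");     /* reports the errno, e.g. ENOMEM */
        return NULL;
    }
    *out_sz = st.st_size;
    return segptr;
}
```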

I am getting the ENOMEM failure while mmap tries to reserve space for the entire file.

ulimit -a shows:

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 10
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 29995
virtual memory          (kbytes, -v) unlimited

Can I map only part of the file? If I do, will I still be able to access the whole contents on demand? I have not used setrlimit to set any limit, so I assume the defaults apply (I don't know what the defaults are); should I raise them? Please guide. How do I map the file in smaller chunks to reduce memory usage and thus avoid ENOMEM?
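The chunked approach I have in mind would look something like the sketch below (names are placeholders; the key constraint I am aware of is that the mmap offset must be a multiple of the page size, so the window size is expressed in pages):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Read a file through a sliding mmap window instead of one big mapping.
 * window_pages is the window size in pages, so every offset passed to
 * mmap is page-aligned. Returns a byte checksum just to demonstrate
 * touching every byte; 0 signals an error here for simplicity. */
unsigned long sum_file_windowed(const char *path, size_t window_pages)
{
    int fd = open(path, O_RDONLY);
    if (fd == -1) { perror("open"); return 0; }

    struct stat st;
    if (fstat(fd, &st) == -1) { perror("fstat"); close(fd); return 0; }

    size_t page = (size_t)sysconf(_SC_PAGESIZE);
    size_t window = window_pages * page;   /* page-aligned window size */
    unsigned long sum = 0;

    for (off_t off = 0; off < st.st_size; off += (off_t)window) {
        size_t len = (size_t)(st.st_size - off);
        if (len > window)
            len = window;
        /* map only this window; off is a multiple of the page size */
        unsigned char *p = mmap(0, len, PROT_READ, MAP_SHARED, fd, off);
        if (p == MAP_FAILED) { perror("mmap"); close(fd); return 0; }
        for (size_t i = 0; i < len; i++)
            sum += p[i];
        munmap(p, len);                    /* release before the next window */
    }
    close(fd);
    return sum;
}
```

This keeps the mapped region bounded at one window at a time, at the cost of an mmap/munmap pair per chunk.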


There are 0 answers