Name: clC74495 Date: 10/22/99
If a HotSpot VM runs out of file descriptors while it is trying
to map a new chunk of memory to grow object memory, a fatal
error and core dump occur. It would be preferable to treat this
as an OutOfMemoryError.
In the HotSpot 1.0.1 FCS Solaris sources, os_solaris.cpp
contains this code:
  static char* mmap_chunk(char *addr, jint size, int flags,
                          int prot = PROT_READ | PROT_WRITE | PROT_EXEC) {
    int fd = open("/dev/zero", O_RDWR);
    if (fd < 0)
      fatal1("mmap_chunk: cannot open /dev/zero (%s)", strerror(errno));
    char *b = mmap(addr, size, prot, flags, fd, 0);
    close(fd);
    if (b == MAP_FAILED) {
      return NULL;
    }
    return b;
  }
At Gemstone, we have changed the code as follows. I believe this
change would benefit other users if it were included in the
HotSpot 2.0 source base:
  static char* mmap_chunk(char *addr, jint size, int flags,
                          int prot = PROT_READ | PROT_WRITE | PROT_EXEC) {
    int fd = open("/dev/zero", O_RDWR);
    if (fd < 0) {
      // Gemstone change: fatal to warning, for bug 23226
      warning("mmap_chunk: cannot open /dev/zero (%s)\n %s\n", strerror(errno),
              "May have run out of file descriptors, see /etc/sysdef | grep descript");
      return NULL;
    }
    char *b = mmap(addr, size, prot, flags, fd, 0);
    close(fd);
    if (b == MAP_FAILED) {
      return NULL;
    }
    return b;
  }
(Review ID: 96910)
======================================================================
- duplicates
  - JDK-4277287 Sparc/C2 Volano: cannot open /dev/zero (Too many open files) vm/os_solaris_sparc.c
  - Closed