Issue | Fix Version | Assignee | Priority | Status | Resolution | Resolved In Build |
---|---|---|---|---|---|---|
JDK-8209751 | 12 | Igor Ignatyev | P4 | Resolved | Fixed | b08 |
JDK-8209715 | 11.0.2 | Igor Ignatyev | P4 | Resolved | Fixed | b01 |
JDK-8209726 | 11.0.1 | Igor Ignatyev | P4 | Resolved | Fixed | b07 |
In test/failure_handler/src/share/conf/linux.properties
We use gcore to generate core files:
native.core.app=gcore
native.core.args=-o ./core.%p %p
native.core.params.timeout=3600000
This is problematic since gcore appears to dump all reserved memory to disk, not just the committed or paged-in memory.
For example, if you run the following program:
--- mem.c ---
#include <sys/mman.h>
#include <stdio.h>
int main() {
    // Reserve 2 GB of virtual address space without committing it
    void* mem = mmap(0, 2ULL * 1024 * 1024 * 1024, PROT_NONE,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    if (mem == MAP_FAILED) {
        perror("mmap failed");
        return -1;
    }
    for (;;) {
        // Spin forever
    }
    return 0;
}
---
$ gcc -Wall mem.c
$ ./a.out
and in another terminal run:
$ gcore <pid of a.out>
This generates a 2.1G core file.
If you then run with this instead:
$ kill -SIGABRT <pid of a.out>
The program crashes (as expected) and creates a much smaller 240K core file.
This is indicative of the overhead of using gcore. When running a small Java program with G1 and default flags, the numbers are:
gcore: 5.3GB
kill -SIGABRT: 648M
This usage of gcore is problematic in our testing farm, since it eats up the disk space on our testing machines. It is even more problematic with ZGC, which always reserves (but does not commit) huge memory areas (17 TB).
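Given that a SIGABRT-triggered, kernel-written core is dramatically smaller, one possible mitigation would be to abort the process and let the kernel write the core instead of invoking gcore. A sketch only (the `kill` invocation here is an assumption, not necessarily the fix that was actually made), reusing the existing property keys from linux.properties:

```properties
# Hypothetical alternative: send SIGABRT so the kernel writes the core,
# which includes only committed pages. Note this terminates the process,
# so it can only be the final action taken against it.
native.core.app=kill
native.core.args=-ABRT %p
native.core.params.timeout=3600000
```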
- backported by
  - JDK-8209715 TimeoutHandler generates huge core files (Resolved)
  - JDK-8209726 TimeoutHandler generates huge core files (Resolved)
  - JDK-8209751 TimeoutHandler generates huge core files (Resolved)