Type: Bug
Resolution: Fixed
Priority: P3
Fix Versions: 8-shenandoah, 11-shenandoah, 12, 13
Resolved In Build: b12
Issue | Fix Version | Assignee | Priority | Status | Resolution | Resolved In Build |
---|---|---|---|---|---|---|
JDK-8220605 | 12.0.2 | Aleksey Shipilev | P3 | Resolved | Fixed | b01 |
Take a machine with some amount of memory, say 128G.
Run Shenandoah with an 80G heap:
$ java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx80g -Xms80g -XX:+AlwaysPreTouch -Xlog:gc
<runs fine, allocates 80G on regular heap>
Now reserve some hugetlbfs (50000x2M = 100G):
$ echo 50000 | sudo tee /proc/sys/vm/nr_hugepages
50000
$ cat /proc/meminfo | grep HugePages
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 50000
HugePages_Free: 50000
HugePages_Rsvd: 0
HugePages_Surp: 0
Run Shenandoah again:
$ java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx80g -Xms80g -XX:+AlwaysPreTouch -Xlog:gc
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f8a34000000, 85899345920, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 85899345920 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/shade/temp/alloc/hs_err_pid13903.log
This is expected, because hugetlbfs carved 100G out of the 128G. Okay, so we can try with -XX:+UseLargePages:
$ java -XX:+UnlockExperimentalVMOptions -XX:+UseShenandoahGC -Xmx80g -Xms80g -XX:+AlwaysPreTouch -XX:+UseLargePages -Xlog:gc
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f546a000000, 85899345920, 0) failed; error='Not enough space' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 85899345920 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /home/shade/temp/alloc/hs_err_pid27749.log
So, the 80G heap would fit comfortably in the hugetlbfs pool, but Shenandoah cannot use it, either with or without -XX:+UseLargePages.
Backported by:
- JDK-8220605 Shenandoah should not commit HugeTLBFS memory (Resolved)
Is blocked by:
- JDK-8220153 Shenandoah does not work with TransparentHugePages properly (Resolved)
- JDK-8220350 Refactor ShenandoahHeap::initialize (Resolved)
Relates to:
- JDK-8220153 Shenandoah does not work with TransparentHugePages properly (Resolved)