Type: Enhancement
Resolution: Fixed
Priority: P4
Version/s: 17.0.10, 21.0.2, 22
Resolved In Build: b26
Issue | Fix Version | Assignee | Priority | Status | Resolution | Resolved In Build |
---|---|---|---|---|---|---|
JDK-8331192 | 21.0.4 | Aleksey Shipilev | P4 | Resolved | Fixed | b01 |
There are multiple issues with peak printing.
1) Peak values are not considered when deciding whether allocations fall above the scale threshold:
NMT has logic that omits printing information if the values involved, expressed at the current NMT scale, do not rise above 0. For example, with `jcmd myprog VM.native_memory scale=g` we are only shown allocations that rise above 1 GB. However, we should also show categories that had historically large values, even if they are currently small or even zero. For example, we want to see the peak of compiler memory usage even if current compiler memory usage is small.
2) We decided to make peak printing a release-build feature with JDK-8317772, but peak printing for virtual memory is still debug-only.
3) There is a bug in VirtualMemory::peak that causes the peak value to be wrong: it is updated not from the actual committed total but from the commit increase, so the value shown is too low if we committed in multiple steps. This can be observed whenever "largest_committed" is smaller than "committed", e.g.: `(mmap: reserved=1048576KB, committed=12928KB, largest_committed=64KB)`
4) Finally, we really should have better regression tests for peak accounting.
backported by:
- JDK-8331192 [nmt] Multiple issues with peak accounting (Resolved)

relates to:
- JDK-8293850 need a largest_committed metric for each category of NMT's output (Resolved)
- JDK-8297958 NMT: Display peak values (Resolved)
- JDK-8317772 NMT: Make peak values available in release builds (Resolved)

links to:
- Commit openjdk/jdk21u-dev/262cacb2
- Commit openjdk/jdk/dc256fbc
- Review openjdk/jdk21u-dev/481
- Review openjdk/jdk/16675