JDK / JDK-8320061

[nmt] Multiple issues with peak accounting



    • Type: Enhancement
    • Resolution: Fixed
    • Priority: P4
    • Affects Version: 22
    • Fix Versions: 17.0.10, 21.0.2, 22
    • Component: hotspot
    • Resolved In Build: b26


      There are multiple issues with peak accounting and printing.

      1) Peak values are not considered when deciding whether allocations fall above the scale threshold:

      NMT has logic that omits printing information if the values involved, expressed in the current NMT scale, do not rise above 0. For example, with `jcmd myprog VM.native_memory scale=g` we would only be shown allocations of at least 1 GB. However, we should also show categories that historically had large values, even if they are currently small or even zero. For example, we want to see peaks in compiler memory usage even if current compiler memory usage is small.
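The threshold decision can be sketched as follows. This is a minimal illustration, not HotSpot's actual code; the names `amount_in_scale` and `should_print` are hypothetical:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of NMT's scale-threshold decision: a category is
// printed only if some value, expressed in the current scale unit,
// rounds to at least 1.
static size_t amount_in_scale(size_t bytes, size_t scale) {
  return bytes / scale; // e.g. scale = 1 GB for "scale=g"
}

static bool should_print(size_t reserved, size_t committed,
                         size_t peak, size_t scale) {
  // Before the fix, only the current values were checked; a category
  // whose usage had shrunk back to ~0 vanished from the report even if
  // its historic peak had been large. The fix adds the peak check.
  return amount_in_scale(reserved, scale) > 0 ||
         amount_in_scale(committed, scale) > 0 ||
         amount_in_scale(peak, scale) > 0;   // the added consideration
}
```

With `scale=g`, a category that once peaked at 2 GB but currently commits a few KB would still be printed under this rule.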

      2) We decided to make peak printing a release-build feature with JDK-8317772, but peak printing for virtual memory is still debug-only.

      3) There is a bug in VirtualMemory::peak that causes the peak value to be wrong: it is not updated from the actual committed total but from the commit increase, so the value shown is too low if memory was committed in multiple steps. This can be observed by "largest_committed" being smaller than "committed", e.g.: `(mmap: reserved=1048576KB, committed=12928KB, largest_committed=64KB)`
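The difference between the buggy and the intended update can be sketched like this. The struct and method names are simplified stand-ins, not the real HotSpot `VirtualMemory` code; values are in KB:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Simplified stand-in for NMT's per-category virtual memory counters.
struct VirtualMemorySketch {
  size_t committed = 0; // current committed amount, in KB
  size_t peak = 0;      // reported as "largest_committed"

  // Buggy variant: the peak is compared against the size of the single
  // commit step, so committing in several steps leaves the peak at the
  // largest individual step, not the largest total.
  void commit_buggy(size_t sz) {
    committed += sz;
    peak = std::max(peak, sz);        // wrong: uses the increment
  }

  // Intended variant: the peak tracks the committed total.
  void commit_fixed(size_t sz) {
    committed += sz;
    peak = std::max(peak, committed); // right: uses the total
  }
};
```

With three 64 KB commits, the buggy variant reports a peak of 64 KB while 192 KB are committed, matching the `largest_committed=64KB` symptom quoted above.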

      4) Finally, we really should have better regression tests for peak accounting.
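The invariant such a regression test would exercise can be stated in a self-contained form. This is only a sketch of the property, with a hypothetical `PeakTracker` type, not a proposal for the actual test code: after any sequence of commits and uncommits, the reported peak must equal the maximum total ever committed, and must never be below the current committed amount.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Hypothetical minimal model of peak accounting, used to state the
// invariant a regression test should check.
struct PeakTracker {
  size_t committed = 0;
  size_t peak = 0;
  void commit(size_t sz) {
    committed += sz;
    peak = std::max(peak, committed); // peak follows the running total
  }
  void uncommit(size_t sz) {
    committed -= std::min(sz, committed); // peak is never lowered
  }
};
```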


              Assignee: Thomas Stuefe (stuefe)
              Reporter: Thomas Stuefe (stuefe)