While working on many native-memory customer cases, I miss the following metrics:
- Linux, process: vsize and swap (how much was swapped out for this process, not system-wide)
- Linux, process: the different RSS subtypes (shmem, anon)
- Linux+glibc: memory outstanding and retained (very important, even if only an estimate), number of trims
- Linux, process: number of OS-side threads (important if much larger than the JVM thread count, especially when we run embedded in a custom launcher)
- Linux, process: number of open file descriptors (can we use the fd array size as a quick upper-boundary check? Yes, if the kernel shrinks the fd array back after peaks -> check that)
- NMT: total number of outstanding mallocs (we do have the total committed size, which is a pretty useless number)
- NMT: historical malloc peak
- NMT: unsafe allocation outstanding and peak (important for estimating DBB usage - e.g., Netty)
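Several of the per-process items above can already be scraped from procfs today, even without JVM support. A minimal Linux-only sketch (field names as documented in proc(5); the function name is mine):

```python
from pathlib import Path

def proc_metrics(pid="self"):
    """Collect a few of the wished-for per-process metrics from /proc.

    Returns VmSize, VmSwap, RssAnon, RssShmem (all in kB), the OS-side
    thread count, and the number of currently open file descriptors.
    RssAnon/RssShmem require a reasonably recent kernel (4.5+).
    """
    wanted = {"VmSize", "VmSwap", "RssAnon", "RssShmem", "Threads"}
    metrics = {}
    for line in Path(f"/proc/{pid}/status").read_text().splitlines():
        key, _, value = line.partition(":")
        if key in wanted:
            # size fields look like "  123456 kB"; Threads is a bare integer
            metrics[key] = int(value.split()[0])
    # Open fds: count entries in /proc/<pid>/fd. This is the real count,
    # not the fd array capacity, so it needs no kernel-shrink caveat.
    metrics["OpenFDs"] = sum(1 for _ in Path(f"/proc/{pid}/fd").iterdir())
    return metrics
```

Note this answers the swap question per process (VmSwap), unlike the system-wide numbers from /proc/meminfo. The glibc-retained-memory and NMT items have no procfs equivalent and would still need support inside glibc/the JVM.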