JDK-8300729: Humongous Metaspace allocations



    • Type: Bug
    • Resolution: Unresolved
    • Priority: P4
    • Fix Version: tbd
    • Affects Versions: 17, 21
    • Component: hotspot
    • Understanding: Fix Understood
    • Introduced In Version: 16


      JEP 387 redesigned metaspace. The new metaspace defragments a lot better than the old one, but a side effect was a new limit on the maximum size of an individual allocation: the root chunk size of 4M. At the time this was believed to be harmless. But it turns out that inefficiently formed yet still valid class files (typically generated by something other than javac) can require larger metaspace allocations. This is very rare, which is why we did not encounter it before. See e.g. JDK-8294677, where a StackMapTable is slightly larger than 4M and therefore fails to load.

      Allocations like this must be allowed to proceed. As a workaround for JDK-8294677, we increased the root chunk size to 16M. That works, but it is just a band-aid.

      For one, it is unclear what size would be enough. In theory, a StackMapTable could become very large (if each entry described a fully occupied local variable array and an equally full operand stack, and there were many such entries). Moreover, increasing the root chunk size has subtle effects. It increases the memory and CPU footprint of tests. It also affects the granularity at which metaspace can be reserved (note: not committed), and the granularity of start addresses of metaspace mappings. The second point does not matter on 64-bit, but would hurt on 32-bit, where we may run out of address space.

      So we need to allow humongous metaspace allocations like in the olden days.


      Design goals:

      - Should work for both class space and non-class metaspace.
      - On class loader death, humongous allocations should be reclaimed. If someone were to load and unload a giant class repeatedly, this must not lead to an ever-growing memory footprint.
      - Humongous allocations should be prematurely releasable (Metaspace::deallocate) like any other allocation.
      - Metrics and such should, of course, work.

      Not design goals:

      Humongous allocations are expected to be rare and isolated, so we don't need to optimize performance (much) or care about fragmentation (much). The solution can be as simple as possible.


      I am currently working on a prototype that introduces supra-blocks: blocks that span multiple chunks. This turns out to be a rather easy solution that feels very organic. It needs a new allocation path, but all the rest (deallocation, release, purging, metrics and measuring) works out of the box and needs no changes.


              Assignee: Thomas Stuefe (stuefe)
              Reporter: Thomas Stuefe (stuefe)