JDK-8046132

JEP 142: Reduce Cache Contention on Specified Fields


    • Type: JEP
    • Resolution: Delivered
    • Priority: P4
    • Fix Version: 8
    • Component: hotspot
    • Labels: None
    • Jesper Wilhelmsson, Tony Printezis
    • JEP Type: Feature
    • Exposure: Open
    • Subcomponent: gc
    • Scope: Implementation
    • Discussion: hotspot dash dev at openjdk dot java dot net
    • Effort: M
    • JEP Number: 142

      Summary

      Define a way to specify that one or more fields in an object are likely to be highly contended across processor cores so that the VM can arrange for them not to share cache lines with other fields, or other objects, that are likely to be independently accessed.

      Description

      Memory contention occurs when two memory locations that are in use by two different cores end up on the same cache line and at least one of the cores is performing writes. For highly contended memory locations this can be a serious performance and scalability issue. The aim of this enhancement is to avoid memory contention between cores, at least on fields we can easily identify during development.
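
      As an illustration, the sketch below (class, field, and thread names are made up for this example) puts two frequently-written long fields next to each other in one object, so they will normally land on the same cache line; each field is written by a different thread, and every write invalidates the other core's copy of the line even though the two threads never touch the same variable:

          // Sketch of the problem: 'a' and 'b' will typically share a cache line,
          // so the two writer threads contend even though each writes only its
          // own field ("false sharing").
          final class SharedCounters {
              volatile long a;   // written only by t1
              volatile long b;   // written only by t2
          }

          public class FalseSharingDemo {
              public static void main(String[] args) throws InterruptedException {
                  SharedCounters c = new SharedCounters();
                  Thread t1 = new Thread(() -> { for (long i = 0; i < 100_000_000L; i++) c.a++; });
                  Thread t2 = new Thread(() -> { for (long i = 0; i < 100_000_000L; i++) c.b++; });
                  long start = System.nanoTime();
                  t1.start(); t2.start();
                  t1.join();  t2.join();
                  System.out.println("elapsed ms: " + (System.nanoTime() - start) / 1_000_000);
              }
          }

      Padding the two fields apart so that each sits on its own cache line typically lets such a workload scale with the number of writer threads.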

      The idea is to add padding before and after each field that might experience contention, to make sure that no other field (or other object) can end up on the same cache line. In the general case, where no object alignment is guaranteed, the padding needs to be as large as the cache lines of the machine we're running on. If specific object alignment can be guaranteed, we can decrease the amount of padding needed. For example, if the first field of an object is always guaranteed to start at the beginning of a cache line, then we need to pad only enough before a field to make sure the field also starts at the beginning of a cache line, and then pad enough after it so that the following field starts on the subsequent cache line.
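
      The same layout can be approximated by hand at the source level; the sketch below assumes 64-byte cache lines and 8-byte longs, and the field names are arbitrary. (The JVM is free to reorder fields, which is precisely why padding inserted by the VM itself, as proposed here, is more dependable than hand-written dummy fields.)

          // Hand-written approximation of the proposed padding: enough dummy
          // 8-byte fields before and after the hot field to cover a 64-byte
          // cache line on each side.
          class PaddedCounter {
              long p01, p02, p03, p04, p05, p06, p07;   // padding before
              volatile long value;                      // the contended field
              long p11, p12, p13, p14, p15, p16, p17;   // padding after
          }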

      This padding can be implemented reasonably easily at class loading time by introducing enough dummy fields to the class. Changing the class layout afterwards would be much more challenging, especially after instances of that class have been allocated and/or some of its methods have been JITed.

      If we want to cut down on the memory wasted on this padding we will have to ensure specific object alignment. However, this is a much more involved change that, apart from class loading, also touches several other parts of the JVM: the allocation code (to make sure allocations of such objects are correctly aligned, and to tag them as aligned so that the alignment is maintained in the future), the JIT compilers (to know which allocations need to be aligned and to emit the right allocation sequence, or call a special runtime method), the GC (to make sure that any object that needs to be aligned remains aligned when moved), and so on. Given that alignment would probably only reduce the memory footprint wasted on padding, and assuming that the objects which need padding are not numerous, introducing this alignment requirement may yield diminishing returns.

      The main challenge is how to allow the developers to specify which fields might experience contention. One general-purpose way to do this is to use annotations (although that does require access to the source code). This way the JVM can handle the specified fields in the best way possible (i.e., by either just padding them, or by a combination of padding and aligned allocation as discussed earlier).
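
      This is essentially the approach that was eventually delivered: JDK 8 added a VM-recognized @sun.misc.Contended annotation (moved to jdk.internal.vm.annotation.Contended in later releases) that can be placed on a field, or on a class to pad the whole object. A minimal sketch, with illustrative class and field names:

          import sun.misc.Contended;   // JDK 8 location of the annotation

          class RequestStats {
              @Contended                 // ask the VM to pad this field onto its own cache line(s)
              volatile long hitCount;    // hot: incremented by many threads

              long createdAt;            // cold: written once, needs no padding
          }

      By default the VM honors the annotation only for trusted (bootstrap) classes; application code needs to run with -XX:-RestrictContended for the annotation to take effect, and -XX:ContendedPaddingWidth controls how much padding is used.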

      If it is important to reduce cache contention on objects whose source code is not available (say, instances of standard library classes), it could be possible to provide developers with a special factory method that does the right alignment and padding so that the allocated objects do not share cache lines with other objects.

      Impact

      • Performance/scalability: The goal is to improve performance for multithreaded applications and allow them to scale better. Padding objects does imply higher memory usage.

            Assignee: Tony Printezis
            Reporter: Tony Printezis
            Doug Lea
            Mikael Vidstedt
            Votes: 0
            Watchers: 3
