JDK-8255978

[windows] os::release_memory may not release the full range



    • b26
    • windows
    • Verified


      On Windows, os::release_memory(p, size) may not actually release the whole region if it contains multiple mappings. This may cause memory bloat or runaway leaks, or errors that look like failed mappings at specific requested addresses.


      On Windows, memory mappings are established with VirtualAlloc() [1] and released with VirtualFree() [2]. In contrast to POSIX munmap(), VirtualFree() can only release a single full range established with VirtualAlloc(). It cannot release multiple ranges, or parts of a range.

      The Windows implementation of os::release_memory(p, size) [3] calls VirtualFree(p, 0, MEM_RELEASE) - it ignores the size parameter and releases whatever mapping happens to start at p:

      bool os::pd_release_memory(char* addr, size_t bytes) {
        return VirtualFree(addr, 0, MEM_RELEASE) != 0;
      }

      ... which assumes that the given range size corresponds to the size of the mapping starting at addr.
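      The effect of ignoring the size argument can be sketched with a small platform-independent model (all names here are hypothetical, not the real Win32 or HotSpot API): reservations live in a table keyed by base address, and a release only removes the one entry starting exactly at the given address, just as VirtualFree(p, 0, MEM_RELEASE) does.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <map>

      // Toy reservation table: base address -> size of the reservation.
      static std::map<char*, size_t> g_reservations;

      // Models VirtualFree(addr, 0, MEM_RELEASE): the size argument is
      // ignored; only a mapping that starts exactly at addr is released.
      static bool model_release(char* addr, size_t /*bytes - ignored*/) {
        return g_reservations.erase(addr) == 1;
      }

      static bool demo_partial_release() {
        static char area[4096];
        // Two adjacent reservations, as NUMA striping would produce.
        g_reservations[area] = 2048;
        g_reservations[area + 2048] = 2048;
        // The caller believes it releases all 4096 bytes...
        bool ok = model_release(area, 4096);
        // ...the call "succeeds", but the second stripe is still mapped.
        return ok && g_reservations.size() == 1 &&
               g_reservations.count(area + 2048) == 1;
      }

      int main() {
        assert(demo_partial_release());
        return 0;
      }
      ```

      The point of the model: the return value signals success even though half the range is still mapped, which is exactly why the bug is silent.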

      This may be incorrect:

      1) For NUMA-friendly allocation, we allocate memory in stripes, each stripe individually allocated.
      2) For +UseLargePagesIndividualAllocation we do the same.
      3) Apart from that, the given region size may just be wrong. Since we never check the size, mismatches may have gone unnoticed. I am currently running tests to find out whether we have other mismatched releases.

      For cases (1) and (2), we would just release the first stripe in that striped range, leaving the rest of the mappings intact. This is not immediately noticeable, since VirtualFree() returns success. And even if it did not, we usually ignore the return code of os::release_memory().

      The problem is aggravated since, on Windows, we often employ an "optimistically-release-and-remap" approach: since mappings are indivisible, if one wants to change a mapping's size, split it, or similar, one has to follow this sequence:

      a) release old allocation
      b) place one or more new allocations into the now-vacated address range

      This is not guaranteed to work, since between (a) and (b) another thread may grab the address space. We live with that since there is no way to do this differently.

      When used on a range which contains multiple mappings, this technique is almost guaranteed to fail. In that case, (a) would only release the first mapping in the range. (b) would almost certainly fail since most of the original range would still be mapped.
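      The failure mode of steps (a) and (b) on a striped range can be shown with the same kind of toy model (hypothetical names, not the real Win32 API): the release removes only the first stripe, so the subsequent reservation at a fixed address collides with the surviving stripe and fails.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <map>

      // Toy reservation table: base address -> size.
      static std::map<char*, size_t> g_map;

      // VirtualFree-like: frees only the mapping starting exactly at p.
      static bool free_at(char* p) { return g_map.erase(p) == 1; }

      // VirtualAlloc-like with an explicit address: fails if any byte of
      // [p, p + s) is already mapped.
      static bool reserve_at(char* p, size_t s) {
        for (const auto& e : g_map) {
          if (p < e.first + e.second && e.first < p + s) return false;
        }
        g_map[p] = s;
        return true;
      }

      static bool demo_remap_fails() {
        static char area[4096];
        g_map[area] = 2048;        // stripe 1
        g_map[area + 2048] = 2048; // stripe 2
        // (a) "release" the whole range - only the first stripe goes away.
        bool released = free_at(area);
        // (b) re-reserving the supposedly vacated range fails, because
        // stripe 2 is still mapped.
        bool remapped = reserve_at(area, 4096);
        return released && !remapped;
      }

      int main() {
        assert(demo_remap_fails());
        return 0;
      }
      ```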

      Examples of this technique in os_windows.cpp:
      - os::split_reserved_memory() (see also [4])
      - map_or_reserve_memory_aligned()
      - os::replace_existing_mapping_with_file_mapping()

      This can manifest as a small memory leak or an inability to map at a given requested address. It could also result in a vicious cycle ([5], [6]) and lead to ballooning and native OOMs.


      The solution would be to change os::release_memory() to use VirtualQuery to query the mappings in that range and release them individually. We should do this only for cases where we know multi-map reservations can exist, e.g. NUMA or LP. Otherwise we should assert (guarantee?) that the range given to os::release_memory() has an exact match at the OS level.
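      The direction of that fix can be sketched against a toy reservation table standing in for VirtualQuery/VirtualFree (all names hypothetical, not the proposed patch): walk the range, release each mapping that starts inside it, and fail if the range does not correspond exactly to a contiguous run of mappings.

      ```cpp
      #include <cassert>
      #include <cstddef>
      #include <map>

      // Toy reservation table: base address -> size.
      static std::map<char*, size_t> g_mappings;

      // Release every mapping in [addr, addr + bytes). Returns false if the
      // range does not match a contiguous sequence of mappings exactly.
      static bool release_multiple(char* addr, size_t bytes) {
        char* p = addr;
        char* end = addr + bytes;
        while (p < end) {
          auto it = g_mappings.find(p);
          if (it == g_mappings.end()) {
            return false; // no mapping starts here - caller-given range is wrong
          }
          size_t s = it->second;
          g_mappings.erase(it);
          p += s; // advance past the mapping just released
        }
        return p == end; // last mapping must end exactly at addr + bytes
      }

      static bool demo_full_release() {
        static char area[4096];
        g_mappings[area] = 2048;        // stripe 1
        g_mappings[area + 2048] = 2048; // stripe 2
        return release_multiple(area, 4096) && g_mappings.empty();
      }

      int main() {
        assert(demo_full_release());
        return 0;
      }
      ```

      In the real implementation the loop body would query the next region with VirtualQuery and release it with VirtualFree; the exact-match check corresponds to the proposed assert for the non-multi-map cases.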


      AFAICS this is an old issue, dating back to at least JDK 8.

      [1] https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-virtualalloc
      [2] https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-virtualfree
      [3] https://github.com/openjdk/jdk/blob/5dfb42fc68099278cbc98e25fb8a91fd957c12e2/src/hotspot/os/windows/os_windows.cpp#L3394
      [4] https://bugs.openjdk.java.net/browse/JDK-8253649
      [5] https://github.com/openjdk/jdk/blob/5dfb42fc68099278cbc98e25fb8a91fd957c12e2/src/hotspot/os/windows/os_windows.cpp#L3150
      [6] https://bugs.openjdk.java.net/browse/JDK-8255954


      Reported and assigned: Thomas Stuefe (stuefe)