Type: Enhancement
Resolution: Unresolved
Priority: P5
OS: generic
CPU: generic
A service has observed that an abundance of "regular" free memory can cause GC to sit idle even though few or no regions are entirely free, or none of the entirely free regions are adjacent to one another. When the next humongous allocation request arrives, it cannot be satisfied and we degenerate.
It would have been better to trigger GC sooner so that humongous regions can be recycled and regular regions can be compacted.
This requested enhancement is most relevant to services that have reasonably high rates of humongous allocation and whose humongous objects have relatively short lifetimes.
Note that compaction of humongous objects can only happen during stop-the-world full GC. The idea is to trigger GC when:
humongous_allocation_rate * average_gc_time >= number-of-contiguous-free-regions * region_size
The hope is that we will reclaim enough humongous objects, and that the chosen collection set will free enough neighboring regions, to expand the amount of memory available for humongous allocations. Ideally, that memory is found concurrently, without a stop-the-world pause to slide humongous objects around.
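A minimal sketch of what such a trigger check might look like follows. The inputs (allocation rate tracking, average cycle time, the free-set scan that yields the largest contiguous free run) would come from the collector's heuristics and are not shown; none of these names correspond to existing HotSpot API.

  #include <cstddef>

  // Hypothetical sketch of the proposed trigger condition.
  bool should_trigger_for_humongous_runway(double humongous_alloc_rate_bytes_per_sec,
                                           double average_gc_time_sec,
                                           size_t max_contiguous_free_regions,
                                           size_t region_size_bytes) {
    // Humongous bytes expected to be allocated during one average GC cycle.
    double expected_humongous_bytes =
        humongous_alloc_rate_bytes_per_sec * average_gc_time_sec;

    // Conservative runway: only the single largest run of contiguous free
    // regions is guaranteed to satisfy a humongous allocation.
    double runway_bytes =
        (double) max_contiguous_free_regions * (double) region_size_bytes;

    return expected_humongous_bytes >= runway_bytes;
  }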
This proposed trigger is "conservative" in that the number-of-contiguous-free-regions is only a conservative approximation of how much humongous memory can be reclaimed. Suppose, for example, that the free pool has a cluster of 37 contiguous free regions, but it also has another cluster of 15 contiguous regions and another of 13. We would trigger as soon as 37 regions is not sufficient runway, even though the real runway is 37+15+13 regions for humongous objects that do not span more than 1 or 2 regions at a time. Perhaps the heuristic needs to be smarter and track the "expected" humongous object sizes as part of its allocation-rate representation.
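To illustrate the difference, here is a hedged sketch of a less conservative runway estimate that accounts for an "expected" humongous object size (in regions); the cluster list and the notion of an expected size are illustrative assumptions, not an existing mechanism:

  #include <cstddef>
  #include <vector>

  // Hypothetical: count how many humongous objects of the expected size
  // (measured in regions) fit into the free clusters, e.g. {37, 15, 13}.
  size_t humongous_runway_in_objects(const std::vector<size_t>& free_cluster_sizes,
                                     size_t expected_humongous_regions) {
    if (expected_humongous_regions == 0) return 0;  // avoid division by zero
    size_t fits = 0;
    for (size_t cluster : free_cluster_sizes) {
      // A cluster of N contiguous free regions can host
      // N / expected_humongous_regions objects of the expected size.
      fits += cluster / expected_humongous_regions;
    }
    return fits;
  }

For the {37, 15, 13} example and an expected size of 2 regions, this yields 18 + 7 + 6 = 31 placements, versus 18 if only the largest cluster is counted as runway.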
Triggering for out-of-humongous-memory conditions could cause GC thrashing if humongous triggers fire so much more frequently than regular triggers that there is not enough time between GC passes for recently allocated objects to become garbage. The implementation of this capability should consider mechanisms to throttle humongous-memory triggers when the resulting collections are not productive.
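One possible throttle, sketched below with purely illustrative inputs and thresholds (productivity measure, back-off parameters, and unproductive-cycle count are all assumptions, not existing collector state):

  #include <cstddef>

  // Hypothetical back-off to avoid thrashing: suppress further humongous
  // triggers when the previous humongous-triggered cycle was not productive.
  bool humongous_trigger_allowed(double now_sec,
                                 double last_humongous_trigger_time_sec,
                                 size_t regions_reclaimed_by_last_humongous_gc,
                                 size_t min_productive_regions,
                                 unsigned consecutive_unproductive_cycles,
                                 double base_backoff_sec) {
    if (regions_reclaimed_by_last_humongous_gc >= min_productive_regions) {
      return true;  // The last cycle was productive; no throttling needed.
    }
    // Unproductive last cycle: require an exponentially growing quiet period
    // before another humongous trigger may fire. Cap the exponent to keep
    // the shift well-defined.
    unsigned exp = consecutive_unproductive_cycles < 16 ? consecutive_unproductive_cycles : 16;
    double backoff_sec = base_backoff_sec * (double) (1u << exp);
    return (now_sec - last_humongous_trigger_time_sec) >= backoff_sec;
  }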