Enhancement
Resolution: Won't Fix
P3
hs25, 8
At the moment, G1 scan closures use the standard devirtualization mechanism implemented in HotSpot; for every reference location, the do_oop_work_nv method is eventually called.
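For illustration only, here is a minimal self-contained sketch of that pattern; the class and function names below are invented for the example and are not the actual HotSpot closure hierarchy. The point is that the iteration code is instantiated per concrete closure type, so the non-virtual do_oop_nv worker is bound statically and can be inlined.

// Simplified sketch of the "_nv" devirtualization pattern; names are
// illustrative, not the real HotSpot ones.
#include <cstddef>

typedef void* oop;   // stand-in for HotSpot's oop type

class OopClosureBase {
 public:
  virtual void do_oop(oop* p) = 0;       // generic, virtual entry point
};

class ScanClosureLike : public OopClosureBase {
 public:
  // Non-virtual worker; a real closure does the full per-reference work here.
  void do_oop_nv(oop* p) { /* evacuate/forward *p, update remembered sets, ... */ }
  virtual void do_oop(oop* p) { do_oop_nv(p); }
};

// Specialized iteration, instantiated once per concrete closure type:
// do_oop_nv is dispatched statically and can be inlined by the compiler.
template <class ClosureType>
void oop_iterate_specialized(oop* refs, size_t n, ClosureType* cl) {
  for (size_t i = 0; i < n; i++) {
    cl->do_oop_nv(&refs[i]);
  }
}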
It would be faster to do what other collectors do: first split out the check whether the reference is actually going to be scavenged, and only then call do_oop_work_nv.
E.g. in parallel scavenge, instanceKlass.cpp:
void InstanceKlass::oop_push_contents(PSPromotionManager* pm, oop obj) {
  InstanceKlass_OOP_MAP_REVERSE_ITERATE( \
    obj, \
    // fast-path check whether there is any point in doing the work at all
    if (PSScavenge::should_scavenge(p)) { \
      // push the reference
      pm->claim_or_forward_depth(p); \
    }, \
    assert_nothing )
}
This fast-path check is small and likely to be inlined (as opposed to the full do_oop_work_nv method), allowing unnecessary work to be skipped quickly for references the collector is not interested in.
In the case of G1, this check can be done (relatively) quickly using the _in_cset_fast_test array of G1CollectedHeap.
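For illustration, here is a self-contained sketch of the kind of fast-path filter such a table makes possible; the class, constant, and function names below are invented for the example and do not match the actual G1 sources, where the table is maintained by G1CollectedHeap and filled in when the collection set is chosen.

// Illustrative sketch only: a per-region lookup table indexed by
// (address - heap base) >> region shift answers "does this reference point
// into the collection set?" before any expensive closure work is done.
#include <cstdint>
#include <cstddef>
#include <vector>

typedef void* oop;                       // stand-in for HotSpot's oop type
static const int RegionShiftBytes = 20;  // assume 1 MB regions for the example

class InCSetFastTest {
  std::vector<bool> _table;              // one entry per heap region
  uintptr_t _heap_base;
 public:
  InCSetFastTest(uintptr_t heap_base, size_t num_regions)
    : _table(num_regions, false), _heap_base(heap_base) {}

  void set_in_cset(size_t region_index) { _table[region_index] = true; }

  // Small, branch-light check that can be inlined at every reference
  // location, unlike the full do_oop_work_nv closure body.
  bool in_cset(oop obj) const {
    size_t index = ((uintptr_t)obj - _heap_base) >> RegionShiftBytes;
    return index < _table.size() && _table[index];
  }
};

// Hypothetical scan loop: only references into the collection set are pushed
// onto the task queue; all other references are skipped on the fast path.
template <class PushFn>
void scan_refs(oop* refs, size_t n, const InCSetFastTest& cset, PushFn push) {
  for (size_t i = 0; i < n; i++) {
    if (refs[i] != NULL && cset.in_cset(refs[i])) {
      push(&refs[i]);                    // analogous to claim_or_forward_depth
    }
  }
}

The point of the sketch is that the per-reference cost of the filter is only an index computation and a load, which is what makes it worthwhile to hoist it out of the full closure.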
relates to: JDK-8027553 Change the in_cset_fast_test functionality to use the G1BiasedArray abstraction (Resolved)