The SA doesn't support bitmap operations on heaps spanning large virtual memory address ranges. The current bitmap implementation uses an int-indexed array to store the bits.
This becomes problematic when scaling up to large heaps.
For example, this is the code in MarkBits.java:
    // FIXME: will have trouble with larger heap sizes
    long idx = handle.minus(start) / VM.getVM().getOopSize();
    if ((idx < 0) || (idx >= bits.size())) {
      System.err.println("MarkBits: WARNING: object " + handle + " outside of heap, ignoring");
      return false;
    }
    int intIdx = (int) idx;
    if (bits.at(intIdx)) {
I propose that we add an interface that uses longs instead of ints, and implement a minimal segmented bitmap adhering to that interface. With this we can clean out these casts and FIXMEs in the GC code.
In a separate patch, ZGC will implement its own, discontiguous, bitmap.
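As a rough illustration of the direction, here is a minimal sketch of what a long-indexed bitmap interface and a lazily-allocated segmented implementation could look like. All names here (LongBitmap, SegmentedLongBitmap, the segment size) are hypothetical placeholders, not the actual SA API:

```java
// Hypothetical sketch only; names and layout are illustrative, not the SA's API.
public class SegmentedLongBitmap {

    // A long-indexed replacement for the int-indexed bitmap interface.
    public interface LongBitmap {
        boolean at(long idx);
        void atPut(long idx, boolean value);
        long size();
    }

    // Bits are stored in fixed-size long[] segments allocated lazily,
    // so a large heap does not require one contiguous array whose
    // length is capped at Integer.MAX_VALUE.
    public static class Segmented implements LongBitmap {
        private static final int SEGMENT_SHIFT = 20;            // 2^20 words per segment
        private static final int SEGMENT_WORDS = 1 << SEGMENT_SHIFT;
        private final long sizeInBits;
        private final long[][] segments;

        public Segmented(long sizeInBits) {
            this.sizeInBits = sizeInBits;
            long words = (sizeInBits + 63) >>> 6;               // 64 bits per long word
            int nSegments = (int) ((words + SEGMENT_WORDS - 1) >>> SEGMENT_SHIFT);
            segments = new long[nSegments][];
        }

        public long size() { return sizeInBits; }

        public boolean at(long idx) {
            long word = idx >>> 6;
            long[] seg = segments[(int) (word >>> SEGMENT_SHIFT)];
            if (seg == null) return false;                      // untouched segment reads as all zeros
            return (seg[(int) (word & (SEGMENT_WORDS - 1))] & (1L << (idx & 63))) != 0;
        }

        public void atPut(long idx, boolean value) {
            long word = idx >>> 6;
            int segIdx = (int) (word >>> SEGMENT_SHIFT);
            if (segments[segIdx] == null) {
                segments[segIdx] = new long[SEGMENT_WORDS];     // allocate on first touch
            }
            int wordIdx = (int) (word & (SEGMENT_WORDS - 1));
            if (value) {
                segments[segIdx][wordIdx] |= 1L << (idx & 63);
            } else {
                segments[segIdx][wordIdx] &= ~(1L << (idx & 63));
            }
        }
    }

    public static void main(String[] args) {
        // An index beyond Integer.MAX_VALUE, which the current int-based code cannot address.
        Segmented bm = new Segmented(1L << 33);
        bm.atPut((1L << 32) + 5, true);
        System.out.println(bm.at((1L << 32) + 5));              // true
        System.out.println(bm.at((1L << 32) + 6));              // false
    }
}
```

With such an interface, MarkBits could call `bits.at(idx)` directly on the long index and drop both the `(int)` cast and the FIXME.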