Stanley helped me with some runs to explore ways to avoid Full GC for CRMFuse.
The gc logs are on sca00amp: /u01/export5/crmFuse/8u40/g1gc/logdir.g1gc.8u40.b01.200u.3hrs.nosoft.2ser.noFull<1/2/3>
The difference between those 3 runs:
1. -XX:G1OldCSetRegionThresholdPercent=6: 1 Full GC
2. -XX:G1OldCSetRegionThresholdPercent=16: 26 Full GCs
3. -XX:G1OldCSetRegionThresholdPercent=16 -XX:MaxGCPauseMillis=60: 1 Full GC
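The Full GC counts above can be pulled out of each run's log with a small helper like this (a sketch assuming standard 8u40 -XX:+PrintGCDetails output, where full collections are logged as "Full GC"; the log file name is a placeholder, point it at the actual files under the logdir.g1gc.8u40.b01.200u.3hrs.nosoft.2ser.noFull<n> directories):

```shell
# Count Full GCs and to-space exhaustion events in one G1 log file.
count_gc_events() {
  printf '%s: %s Full GC, %s to-space exhausted\n' \
    "$1" "$(grep -c 'Full GC' "$1")" "$(grep -c 'to-space exhausted' "$1")"
}
```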
The reason for trying different values for the maximum number of old regions added to the CSet: I noticed in the CRMFuse GC log that the mixed GCs with to-space exhaustion had the maximum number of old regions added to the CSet. If I reduce this, I might be able to avoid the Full GC.
Experiment 1: did not avoid the Full GC, but did not hurt either.
Experiment 2: collected more space during mixed GCs, but caused many more Full GCs. So increasing the threshold increases the chance of evacuation overflow.
Experiment 3: the pause time goal is added so that, when constrained by the pause goal, G1 does not add the maximum number of old regions to the CSet. Compared to Experiment 1, this collects more heap during mixed GCs, but when a mixed GC gets more expensive, fewer old regions are added. I noticed the Eden for mixed GCs is 84M. Maybe lowering it with G1NewSizePercent=1 would allow mixed GCs to collect more.
While doing this, I am thinking maybe there is a way to avoid 'to-space exhausted' for this case. When old regions are added to the CSet, we check whether the maximum is reached, the pause time goal is exceeded, etc. Is there a way to predict at most how much we can evacuate? Something like _free_regions_at_the_end_of_collection, so that instead of adding the maximum number of old regions, we add some value less than the max?
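To make that idea concrete: instead of adding old regions up to the G1OldCSetRegionThresholdPercent cap, selection could also stop once the predicted bytes to copy would exceed the free space expected to be available for to-space. A standalone sketch of that capping logic (the function, names, and numbers are illustrative, not actual HotSpot code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative stand-in for G1's per-region liveness estimate:
// predicted_live_bytes[i] is the predicted number of bytes that survive
// evacuation for candidate old region i (candidates sorted by gc efficiency).
// Returns how many candidates can go into the CSet without the predicted
// copied bytes exceeding the free space we expect to have for to-space
// (in the spirit of _free_regions_at_the_end_of_collection).
static size_t max_old_regions_to_add(const std::vector<size_t>& predicted_live_bytes,
                                     size_t free_bytes_for_to_space,
                                     size_t max_regions /* threshold cap */) {
  size_t added = 0;
  size_t predicted_copy = 0;
  for (size_t live : predicted_live_bytes) {
    if (added >= max_regions) break;                              // existing cap
    if (predicted_copy + live > free_bytes_for_to_space) break;   // new: would risk to-space exhaustion
    predicted_copy += live;
    ++added;
  }
  return added;
}
```

With this, a high threshold like G1OldCSetRegionThresholdPercent=16 would still be safe under memory pressure, because the free-space check, not the threshold, becomes the binding limit.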
Duplicates: JDK-8142935 Adding old gen regions does not consider available free space (Closed)