Currently G1 uses a predictor based on a decaying average, adding a safety margin derived from the sequence's variance/standard deviation.
On workloads/situations with recent high variance, the predictors in the G1Analytics class may return unexpectedly large negative or positive values because they are not clamped to a useful range.
Such errors can influence prediction accuracy significantly (e.g. I have seen predictions for the number of bytes survived in the ~2^63 range due to overflow, which are then propagated further and result in completely impossible overall time predictions).
This is a day-one bug as far as I understand; only in very few cases do consumers of the predictions clamp values "manually" already, e.g. the G1Policy::predict_yg_surv_rate() method.
It would be better if the G1Analytics predict_* methods performed the clamping themselves.
Is blocked by: JDK-8233702 "Introduce helper function to clamp value to range" (Resolved)