I'm seeing the "CodeCache is full" message, even though more than CodeCacheMinimumFreeSpace is available. This is the message I'm seeing:
Java HotSpot(TM) Server VM warning: CodeCache is full. Compiler has been disabled.
Java HotSpot(TM) Server VM warning: Try increasing the code cache size using -XX:ReservedCodeCacheSize=
CodeCache: size=20480Kb used=15994Kb max_used=19169Kb free=4485Kb
bounds [0xb5eed000, 0xb72ed000, 0xb72ed000]
total_blobs=3807 nmethods=3561 adapters=158
compilation: enabled
Notice there is over 4m free. I did some debugging and found that compilation of very large methods (100k to 1m) is what most commonly triggers the problem. Since a large enough contiguous block cannot be found, nmethod::new_nmethod() fails. This results in a call to CompileBroker::handle_full_code_cache(), which prints the above message and then calls NMethodSweeper::handle_full_code_cache(true). Passing in true for the is_full argument results in the compiler being disabled.
I think the proper fix is to only pass true to NMethodSweeper::handle_full_code_cache(bool is_full) if the code cache is actually full (less than CodeCacheMinimumFreeSpace free). Otherwise pass in false. This will still trigger the sweeper, but will keep the compiler enabled.
However, we actually do want to disable compilation if we have just a little above CodeCacheMinimumFreeSpace free. Otherwise the compiler will keep trying to run, and keep failing to allocate nmethods, even small ones. What we don't want is for a failed attempt at compiling a large method to result in the compiler being disabled. I was thinking something like a 10k buffer would be good. For example:
void CompileBroker::handle_full_code_cache() {
  ...
  if (UseCodeCacheFlushing) {
    bool is_full = CodeCache::unallocated_capacity() < CodeCacheMinimumFreeSpace + 10*K;
    NMethodSweeper::handle_full_code_cache(is_full);
  } else {
    ...
  }
  ...
}
10*K should probably be replaced with some global, but at the moment I'm at a loss for a good name.
Duplicates:
- JDK-8021827 HotSpot big performance regression (Closed)

Relates to:
- JDK-8022968 Some codecache allocation failures don't result in invoking the sweeper (Resolved)