Type: Bug
Resolution: Fixed
Priority: P3
Affects Version(s): 12, 13, 14
Resolved In Build: b08
The following assertion failure happens when SymbolTableSize is set to a value larger than 2^17 (131,072):
bin/java -XX:+UnlockExperimentalVMOptions -XX:SymbolTableSize=2221221
# To suppress the following error report, specify this argument
# after -XX: or in .hotspotrc: SuppressErrorAt=/concurrentHashTable.inline.hpp:1005
#
# A fatal error has been detected by the Java Runtime Environment:
#
# Internal Error (/usr/local/google/home/jianglizhou/openjdk/jdk/src/hotspot/share/utilities/concurrentHashTable.inline.hpp:1005), pid=248464, tid=248465
# assert(log2size_limit >= log2size) failed: bad ergo
#
# JRE version: (14.0) (slowdebug build )
# Java VM: OpenJDK 64-Bit Server VM (slowdebug 14-internal+0-adhoc.jianglizhou.jdk, mixed mode, sharing, tiered, compressed oops, g1 gc, linux-amd64)
# Problematic frame:
# V [libjvm.so+0x12b9ec8] ConcurrentHashTable<SymbolTableConfig, (MemoryType)10>::ConcurrentHashTable(unsigned long, unsigned long, unsigned long)+0x182
#
# No core dump will be written. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# ...dk/hs_err_pid248464.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
However, the SymbolTableSize option's range allows values up to 111*32768 (3,637,248). The underlying ConcurrentHashTable allows an even bigger range, with 2^30 as the upper bound.
  experimental(uintx, SymbolTableSize, defaultSymbolTableSize,              \
          "Number of buckets in the JVM internal Symbol table")              \
          range(minimumSymbolTableSize, 111*defaultSymbolTableSize)          \
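For reference, a minimal standalone sketch (not HotSpot code) of the three limits involved, assuming defaultSymbolTableSize = 32768 and the 2^17 cap described below:

#include <cstdio>

int main() {
  // Values taken from the flag definition and the bug description.
  const unsigned long default_size  = 32768;              // defaultSymbolTableSize
  const unsigned long flag_max      = 111 * default_size; // upper bound of the flag range
  const unsigned long cht_max       = 1UL << 30;          // ConcurrentHashTable upper bound
  const unsigned long hardcoded_max = 1UL << 17;          // END_SIZE = 17 => 131,072 buckets

  printf("flag range max          : %lu\n", flag_max);      // 3637248
  printf("ConcurrentHashTable max : %lu\n", cht_max);        // 1073741824
  printf("hard-coded END_SIZE cap : %lu\n", hardcoded_max);  // 131072
  // The flag accepts values far above the hard-coded cap; that mismatch
  // is what leads to the "bad ergo" assertion.
  return 0;
}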
The 2^17 (131,072) limit is hard-coded by the following:
const size_t END_SIZE = 17;
Large applications can see a performance improvement from a symbol table with a large initial size, so the hard-coded END_SIZE above probably should be fixed.
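A rough sketch of why the assertion fires, assuming the requested bucket count is rounded up to the next power of two before the check (the ceil_log2 helper below is for illustration only, not the HotSpot function):

#include <cstdio>

// Return the log2 of the smallest power of two >= n.
static unsigned ceil_log2(unsigned long n) {
  unsigned log2 = 0;
  while ((1UL << log2) < n) {
    log2++;
  }
  return log2;
}

int main() {
  const unsigned long requested = 2221221;               // -XX:SymbolTableSize=2221221
  const unsigned log2size       = ceil_log2(requested);  // 22
  const unsigned log2size_limit = 17;                    // END_SIZE

  printf("log2size = %u, log2size_limit = %u\n", log2size, log2size_limit);
  if (log2size > log2size_limit) {
    // This is the condition that trips assert(log2size_limit >= log2size).
    printf("assert(log2size_limit >= log2size) would fail\n");
  }
  return 0;
}

With SymbolTableSize=2221221 the requested size rounds up to 2^22, which exceeds the 2^17 limit, so the ConcurrentHashTable constructor asserts.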
Relates to:
  JDK-8228855 Test runtime/CommandLine/OptionsValidation/TestOptionsWithRanges fails after JDK-8227123 (Closed)