E.g., given a very large classlist file:
$ grep -v '@' classlist | wc
80242 261120 5817467
$ java -Xshare:dump -XX:SharedClassListFile=classlist
[24.343s][error ][cds,heap] [ 0] {0x000000060eb8a790} jdk.internal.loader.ArchivedClassLoaders::platformLoader (offset = 16)
[24.343s][error ][cds,heap] [ 1] {0x000000060e8a5be0} jdk.internal.loader.ClassLoaders$PlatformClassLoader::parallelLockMap (offset = 40)
[24.343s][error ][cds,heap] [ 2] {0x000000060e8a5ce8} java.util.concurrent.ConcurrentHashMap
[24.343s][error ][cds,heap] Cannot archive the sub-graph referenced from [Ljava.util.concurrent.ConcurrentHashMap$Node; object (0x000000061283e530) size 524304, skipped.
[24.343s][error ][cds ] An error has occurred while writing the shared archive file.
Proposed fix (draft):
https://github.com/iklam/jdk/commit/0367e6801229197e346c961982e19516cb2b7841
Root cause:
The old code, such as parallelLockMap.clear(), empties the map but does not shrink ConcurrentHashMap's internal table[] array. Since CDS imposes a limit on the maximum size of an archivable object, archive creation fails.
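The behavior can be illustrated with a deliberately simplified stand-in (this MiniMap is not the real ConcurrentHashMap; it only mimics the relevant property that clear() nulls the slots without reallocating the backing array):

```java
import java.util.Arrays;

// Tiny illustration of the root cause: a structure whose backing array grows
// on insertion, but whose clear() empties the slots without shrinking the array.
public class ClearKeepsCapacity {
    static class MiniMap {
        Object[] table = new Object[16];
        int size = 0;

        void put(Object v) {
            if (size == table.length) {
                // Grow by doubling, analogous to a hash table resize.
                table = Arrays.copyOf(table, table.length * 2);
            }
            table[size++] = v;
        }

        void clear() {
            Arrays.fill(table, null); // entries are gone...
            size = 0;                 // ...but table.length is unchanged
        }
    }

    public static void main(String[] args) {
        MiniMap m = new MiniMap();
        for (int i = 0; i < 100_000; i++) m.put(i);
        int before = m.table.length;
        m.clear();
        System.out.println("capacity before clear = " + before
                + ", after clear = " + m.table.length);
    }
}
```

After 100,000 insertions the array has doubled well past 100,000 elements, and clear() leaves it at that size. In the real ConcurrentHashMap the analogous table[] retained after parallelLockMap.clear() is what exceeds the CDS object-size limit.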
As there are no public APIs for shrinking the size of the underlying arrays used by ConcurrentHashMap, the options are:
- Reinitialize the parallelLockMap field (but this requires removing its `final` attribute -- see https://git.openjdk.org/jdk/pull/21797)
- Add a package-private API to ArrayList and ConcurrentHashMap to shrink the internal array. Add a jdk.internal.access backdoor for the java.lang.Class class to call this API.
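The first option can be sketched with a standalone holder class (illustrative only; the real field is java.lang.ClassLoader.parallelLockMap, which is currently `final`, and the method names here are hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;

// Sketch of option 1: replace the map instead of clear()-ing it, so the grown
// table[] becomes garbage and the archived graph contains a small, fresh table.
public class LockMapHolder {
    // In ClassLoader this field is final today; option 1 requires making it
    // assignable (see PR 21797 for the implications of removing `final`).
    private ConcurrentHashMap<String, Object> parallelLockMap = new ConcurrentHashMap<>();

    Object lockFor(String className) {
        return parallelLockMap.computeIfAbsent(className, k -> new Object());
    }

    // Hypothetical CDS dump-time hook: reinitialize rather than clear().
    void resetForArchiving() {
        parallelLockMap = new ConcurrentHashMap<>();
    }

    int size() { return parallelLockMap.size(); }
}
```

The trade-off between the two options is visible here: reinitialization is a one-line change at each use site but weakens the field's immutability guarantee, while a package-private shrink API keeps fields `final` at the cost of a jdk.internal.access hook into java.base internals.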