Details
- Type: Bug
- Resolution: Fixed
- Priority: P3
- Fix Version: 11
Backports
Issue | Fix Version | Assignee | Priority | Status | Resolution | Resolved In Build |
---|---|---|---|---|---|---|
JDK-8207429 | 11.0.2 | Coleen Phillimore | P3 | Resolved | Fixed | b01 |
JDK-8207644 | 11.0.1 | Coleen Phillimore | P3 | Resolved | Fixed | b02 |
JDK-8207368 | 11 | Coleen Phillimore | P3 | Resolved | Fixed | b23 |
Description
While working on the SymbolTable changes, we are seeing a crash in
test/hotspot/jtreg/vmTestbase/metaspace/staticReferences/StaticReferences.java
From the crash, it looks like the crashing thread is cleaning concurrently (all line numbers below refer to concurrentHashTable.inline.hpp), here:
509 if (!HaveDeletables<IsPointer<VALUE>::value, EVALUATE_FUNC>::
510 have_deletable(bucket, eval_f, prefetch_bucket)) {
511 // Nothing to remove in this bucket.
512 continue;
513 }
and further down, in the prefetching code:
266 if (next->next() != NULL) {
267 Prefetch::read(*next->next()->value(), 0);
268 }
The next->value() pointer is 0x8.
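That bogus 0x8 is consistent with value() being computed on a NULL node pointer: if the value storage sits immediately after an 8-byte next pointer, then &node->_value for a null node is address 0x8. A minimal sketch (the Node layout here is an assumption mirroring the CHT node, not the real class):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical layout mirroring the CHT node: the value storage sits
// immediately after the _next pointer (assumption, not the real class).
struct Node {
  Node* _next;
  void* _value;
};

// The address value() would hand back for a given node pointer, computed
// arithmetically so a NULL node can be examined without dereferencing it.
uintptr_t value_addr(Node* n) {
  return reinterpret_cast<uintptr_t>(n) + offsetof(Node, _value);
}
```

On an LP64 build offsetof(Node, _value) is 8, so value_addr(nullptr) is exactly the 0x8 seen in the crash.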
Another thread is trying to insert and has decided to clean during the fast-insert path, at line 926:
922 } else if (i == 0 && clean) {
923 // We only do cleaning on fast inserts.
924 Bucket* bucket = get_bucket_locked(thread, lookup_f.get_hash());
925 assert(bucket->is_locked(), "Must be locked.");
926 delete_in_bucket(thread, bucket, lookup_f);
927 bucket->unlock();
In delete_in_bucket(), the other thread is calling write_synchronize() at line 562:
562 GlobalCounter::write_synchronize();
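For context, write_synchronize() waits for a grace period: every reader that entered its critical section before the call must leave it before the writer may free removed nodes. A rough sketch of that idea (an illustration of the RCU-style scheme, with invented names; not HotSpot's actual GlobalCounter):

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>
#include <vector>

// GlobalCounter-style grace period, heavily simplified. Readers publish the
// global epoch on enter and 0 on exit; write_synchronize bumps the epoch and
// waits until no reader is still inside an older epoch.
struct GlobalCounterSketch {
  std::atomic<uint64_t> epoch{2};
  std::vector<std::atomic<uint64_t>*> readers;  // one published slot per reader

  void read_enter(std::atomic<uint64_t>& slot) {
    slot.store(epoch.load(std::memory_order_acquire), std::memory_order_release);
  }
  void read_exit(std::atomic<uint64_t>& slot) {
    slot.store(0, std::memory_order_release);  // 0 = not in a critical section
  }
  void write_synchronize() {
    uint64_t new_epoch = epoch.fetch_add(2) + 2;
    for (std::atomic<uint64_t>* r : readers) {
      // Spin until this reader has left, or re-entered under the new epoch.
      while (true) {
        uint64_t e = r->load(std::memory_order_acquire);
        if (e == 0 || e >= new_epoch) break;
        std::this_thread::yield();
      }
    }
  }
};
```

The point for this bug: the scheme only protects node memory from being freed under a reader; it does not stop the list from changing shape between two loads inside the reader's critical section.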
I've had and discarded several theories about the lock/critical-section ordering. I'm trying to see whether it still reproduces without the prefetching, because one thread might be prefetching entries from a bucket while the inserting thread is deleting those same entries. But the bucket linked-list pointers are updated with CAS, so that should be OK (?)
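One concrete hazard that CAS on the list links does not cover: the prefetch code at lines 266-267 loads next->next() twice. CAS keeps each individual load consistent, but a concurrent unlink can land between the NULL check and the second load, so value() ends up computed on a NULL node. A sketch of the pattern (RacyNextLoader is a deterministic stand-in for the concurrent remover, not real CHT code):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical node layout mirroring the CHT node (assumption).
struct Node {
  Node* _next;
  void* _value;
};

// Deterministic stand-in for a concurrent unlink: the first load of the
// next pointer sees the old node, the second load sees NULL.
struct RacyNextLoader {
  Node* first = nullptr;
  int calls = 0;
  Node* next() { return (calls++ == 0) ? first : nullptr; }
};

// Double-load pattern, shaped like the crashing code: the address is built
// from the *second* load, which the remover has nulled out in between.
// (Computed arithmetically here to avoid actually dereferencing NULL.)
uintptr_t racy_prefetch_addr(RacyNextLoader& l) {
  if (l.next() != nullptr) {
    Node* stale = l.next();  // second load: now NULL
    return reinterpret_cast<uintptr_t>(stale) + offsetof(Node, _value);
  }
  return 0;
}

// Load-once pattern: the NULL check and the use see the same snapshot.
uintptr_t safe_prefetch_addr(RacyNextLoader& l) {
  Node* nn = l.next();
  if (nn != nullptr) {
    return reinterpret_cast<uintptr_t>(nn) + offsetof(Node, _value);
  }
  return 0;
}
```

Under this simulation the double-load variant produces the 0x8-style address while the load-once variant returns the live node's value address.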
Issue Links
- backported by
  - JDK-8207368 Race with ConcurrentHashTable deleting items on insert with cleanup thread (Resolved)
  - JDK-8207429 Race with ConcurrentHashTable deleting items on insert with cleanup thread (Resolved)
  - JDK-8207644 Race with ConcurrentHashTable deleting items on insert with cleanup thread (Resolved)
- relates to
  - JDK-8195097 Make it possible to process StringTable outside safepoint (Resolved)
  - JDK-8206922 Show backtrace of all threads, not just the one that crashed (Closed)