Type: Bug
Resolution: Unresolved
Priority: P4
Affects Version/s: 8, 9, 11, 17, 21, 24, 25
We logged several bugs on the LinkedBlockingDeque. This aggregates them into a single bug report and PR.
1. LinkedBlockingDeque does not immediately throw InterruptedException in put/take
The LinkedBlockingDeque does not behave consistently with other concurrency components. If we call putFirst(), putLast(), takeFirst(), or takeLast() on a thread that is interrupted, it does not immediately throw an InterruptedException the way that ArrayBlockingQueue and LinkedBlockingQueue do, because it calls lock() instead of lockInterruptibly(). It only throws an InterruptedException if the queue is full (on put) or empty (on take). Since interruption is frequently used as a shutdown mechanism, this might prevent code from ever shutting down.
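A minimal sketch of the difference (the class and method names of the demo are ours, purely for illustration): a thread whose interrupt flag is already set calls put on a queue that still has spare capacity.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.LinkedBlockingDeque;

// Illustrative only: compares the behaviour described above when the
// current thread's interrupt flag is already set and the queue is not full.
public class InterruptedPutDemo {
    public static void main(String[] args) {
        Thread.currentThread().interrupt();
        try {
            new ArrayBlockingQueue<Integer>(10).put(1);
            System.out.println("ArrayBlockingQueue.put: returned normally");
        } catch (InterruptedException e) {
            // lockInterruptibly() notices the pending interrupt immediately.
            System.out.println("ArrayBlockingQueue.put: InterruptedException");
        }

        Thread.currentThread().interrupt();
        try {
            new LinkedBlockingDeque<Integer>(10).putFirst(1);
            // Reported behaviour: the element is inserted and no exception is
            // thrown, because lock() ignores the pending interrupt.
            System.out.println("LinkedBlockingDeque.putFirst: returned normally");
        } catch (InterruptedException e) {
            System.out.println("LinkedBlockingDeque.putFirst: InterruptedException");
        }
    }
}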
2. LinkedBlockingDeque.clear() should preserve weakly-consistent iterators
LinkedBlockingDeque.clear() should preserve weakly-consistent iterators by linking f.prev and f.next back to f, allowing iterators to continue from the first or last node, respectively. This would be consistent with how the other node-based weakly-consistent queues (LinkedBlockingQueue, LinkedTransferQueue, ConcurrentLinkedQueue/Deque) work.
The LinkedBlockingDeque already uses self-linking in its unlinkFirst() and unlinkLast() methods, and the iterators and spliterator therefore already handle self-linked nodes.
This can be fixed very easily by having clear() link both f.prev and f.next back to f, as sketched below.
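The following is a sketch of that idea, not the actual patch in the linked review; the field and helper names (first, last, count, lock, notFull, Node) are assumed to follow the existing LinkedBlockingDeque internals, and the rest of the method follows the existing structure of clear().

// Sketch of clear() with the proposed self-linking of unlinked nodes.
public void clear() {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        for (Node<E> f = first; f != null; ) {
            f.item = null;
            Node<E> n = f.next;
            f.prev = f;   // self-link instead of nulling, so an iterator
            f.next = f;   // positioned at f can detect removal and restart
            f = n;        // from the current first/last node
        }
        first = last = null;
        count = 0;
        notFull.signalAll();
    } finally {
        lock.unlock();
    }
}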
3. LinkedBlockingDeque offer() creates nodes even if capacity has been reached
In the JavaDoc of LinkedBlockingDeque, it states: "Linked nodes are dynamically created upon each insertion unless this would bring the deque above capacity." However, in the current implementation, nodes are always created, even if the deque is full. This is because count is not volatile and the capacity check only happens inside the linkFirst()/linkLast() methods, by which point we have already acquired the lock and created the Node. Instead, count could be made volatile and checked before locking.
In the current version, calling offer() on a full LinkedBlockingDeque creates unnecessary objects and causes unnecessary lock contention. Similarly, poll() and peek() could return before locking by checking the count field.
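A sketch of the suggested early exit for one of the offer paths, assuming count is made volatile; the names again follow the existing LinkedBlockingDeque internals, and this illustrates the idea rather than the actual patch.

// Sketch only: with a volatile count we can bail out before allocating the
// Node or touching the lock when the deque is already full.
public boolean offerLast(E e) {
    if (e == null) throw new NullPointerException();
    if (count >= capacity)          // cheap volatile read: deque is full,
        return false;               // so no Node allocation and no lock contention
    Node<E> node = new Node<E>(e);
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        return linkLast(node);      // linkLast() still re-checks capacity under the lock
    } finally {
        lock.unlock();
    }
}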
4. LinkedBlockingDeque allows us to overflow size with addAll()
In LinkedBlockingDeque.addAll() we first build up the chain of nodes and then append that chain in bulk to the existing nodes. We count the nodes in "int n" and then, while holding the lock, check that we have not exceeded the capacity with "if (count + n <= capacity)". However, if we pass in a collection with more than Integer.MAX_VALUE items, n overflows and becomes negative. Since "count + n" is then also negative, the check passes, the chain is appended after the last node, and we end up with a LinkedBlockingDeque holding more than Integer.MAX_VALUE items and a negative size(). stream().count() still gives the correct number of items.
This happens both via the bulk add constructor LinkedBlockingDeque(Collection) and when we call addAll(Collection) directly.
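An actual reproduction needs a Collection whose iterator yields more than Integer.MAX_VALUE elements and a correspondingly huge heap (one Node per element), but the arithmetic behind the failed guard is easy to show in isolation; the variable names below mirror the ones quoted above and the class name is ours.

// Illustrates why "count + n <= capacity" stops protecting once n wraps around.
public class AddAllOverflowCheck {
    public static void main(String[] args) {
        int capacity = Integer.MAX_VALUE;     // default capacity of the deque
        int count = 0;                        // deque starts empty
        long added = (long) Integer.MAX_VALUE + 5;
        int n = (int) added;                  // what the node-counting loop ends up with

        System.out.println(n);                       // negative after the overflow
        System.out.println(count + n <= capacity);   // true, so the oversized chain is linked in
        System.out.println(count + n);               // the negative count the deque ends up with
    }
}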
duplicates
JDK-8354138 LinkedBlockingDeque allows us to overflow size with addAll() (Open)
JDK-8349543 LinkedBlockingDeque does not immediately throw InterruptedException in put/take (Open)
JDK-8354060 LinkedBlockingDeque.clear() should preserve weakly-consistent iterators (Open)
JDK-8354076 LinkedBlockingDeque offer() creates nodes even if capacity has been reached (New)

links to
Review(master) openjdk/jdk/24925