Details
Type: Bug
Status: Closed
Priority: P4
Resolution: Fixed
Affects Version/s: 6u18, 7
Resolved In Build: b94
CPU: generic, x86
OS: generic, linux
Verification: Not verified
Description
see discussion on the core-libs-dev mailing list:
http://mail.openjdk.java.net/pipermail/core-libs-dev/2010-March/003694.html
Excerpt from this discussion that describes the problem:
"I've noticed bugs in java.util.ArrayList, java.util.Hashtable and
java.io.ByteArrayOutputStream which arise when the capacities of the data
structures reach a particular threshold. More below.
When the capacity of an ArrayList reaches (2/3)*Integer.MAX_VALUE, its size
reaches its capacity, and an add or an insert operation is invoked, the
capacity is increased by only one element. Notice that in the following
excerpt from ArrayList.ensureCapacity the new capacity is set to (3/2) *
oldCapacity + 1 unless this value would not suffice to accommodate the
required capacity in which case it is set to the required capacity. If the
current capacity is at least (2/3)*Integer.MAX_VALUE, then (oldCapacity *
3)/2 + 1 overflows and resolves to a negative number resulting in the new
capacity being set to the required capacity. The major consequence of this
is that each subsequent add/insert operation results in a full resize of the
ArrayList causing performance to degrade significantly.
    int newCapacity = (oldCapacity * 3)/2 + 1;
    if (newCapacity < minCapacity)
        newCapacity = minCapacity;
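The wraparound in this excerpt can be reproduced in isolation. A minimal sketch, using a hypothetical capacity of 1,400,000,000 (past the ~(2/3)*Integer.MAX_VALUE threshold) to stand in for a huge ArrayList:

```java
public class ArrayListGrowthOverflow {
    public static void main(String[] args) {
        // Hypothetical values: a capacity in the overflow region described above
        int oldCapacity = 1_400_000_000;
        int minCapacity = oldCapacity + 1;           // room for one more element

        int newCapacity = (oldCapacity * 3) / 2 + 1; // 32-bit multiply wraps around
        System.out.println(newCapacity);             // -47483647

        if (newCapacity < minCapacity)
            newCapacity = minCapacity;               // falls back to exactly size + 1
        System.out.println(newCapacity);             // 1400000001
    }
}
```

Each subsequent add repeats this, so every insertion copies the whole backing array.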
Hashtable breaks entirely when the size of its backing array reaches (1/2) *
Integer.MAX_VALUE and a rehash is necessary as is evident from the following
excerpt from rehash. Notice that rehash will attempt to create an array of
negative size if the size of the backing array reaches (1/2) *
Integer.MAX_VALUE since oldCapacity * 2 + 1 overflows and resolves to a
negative number.
    int newCapacity = oldCapacity * 2 + 1;
    HashtableEntry newTable[] = new HashtableEntry[newCapacity];
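The failure mode here is harder than the ArrayList case: the doubled capacity goes negative and array allocation throws. A minimal sketch with a hypothetical backing-array length of 2^30 (Object[] standing in for the package-private HashtableEntry[]):

```java
public class HashtableRehashOverflow {
    public static void main(String[] args) {
        // Hypothetical backing-array length just over (1/2)*Integer.MAX_VALUE
        int oldCapacity = 1 << 30;              // 1073741824
        int newCapacity = oldCapacity * 2 + 1;  // 2^31 wraps to Integer.MIN_VALUE, +1
        System.out.println(newCapacity);        // -2147483647

        try {
            Object[] newTable = new Object[newCapacity];
        } catch (NegativeArraySizeException e) {
            // This is the point at which rehash breaks entirely
            System.out.println("rehash fails: " + e);
        }
    }
}
```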
When the capacity of the backing array in a ByteArrayOutputStream reaches
(1/2) * Integer.MAX_VALUE, its size reaches its capacity, and a write
operation is invoked, the capacity of the backing array is increased only by
the required number of elements. Notice that in the following excerpt from
ByteArrayOutputStream.write(int) the new backing array capacity is set to 2
* buf.length unless this value would not suffice to accommodate the required
capacity in which case it is set to the required capacity. If the current
backing array capacity is at least (1/2) * Integer.MAX_VALUE + 1, then
buf.length << 1 overflows and resolves to a negative number resulting in the
new capacity being set to the required capacity. The major consequence of
this, like with ArrayList, is that each subsequent write operation results
in a full resize of the ByteArrayOutputStream causing performance to degrade
significantly.
    int newcount = count + 1;
    if (newcount > buf.length) {
        buf = Arrays.copyOf(buf, Math.max(buf.length << 1, newcount));
    }
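The left shift in this excerpt has the same wraparound. A minimal sketch with a hypothetical buffer length of 2^30, showing how Math.max silently selects the one-byte growth:

```java
public class ByteArrayOutputStreamOverflow {
    public static void main(String[] args) {
        // Hypothetical full buffer of 2^30 bytes, i.e. (1/2)*Integer.MAX_VALUE + 1
        int bufLength = 1 << 30;
        int count = bufLength;                 // buffer is full
        int newcount = count + 1;

        int doubled = bufLength << 1;          // shifts into the sign bit
        System.out.println(doubled);           // -2147483648

        // Math.max picks newcount, so the buffer grows by a single byte
        int newLength = Math.max(doubled, newcount);
        System.out.println(newLength);         // 1073741825
    }
}
```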
It is interesting to note that any statements about the amortized time
complexity of add/insert operations, such as the one in the ArrayList
javadoc, are invalidated by these performance-related bugs. One solution to
the above situations is to set the new capacity of the backing array to
Integer.MAX_VALUE when the initial size calculation results in a negative
number during a resize."
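The clamping fix suggested at the end of the quoted post can be sketched as follows. This mirrors the poster's proposal only (clamp to Integer.MAX_VALUE when the growth calculation goes negative); it is not the code that actually shipped in b94:

```java
public class ClampedGrowth {
    // Sketch of the suggested fix: if the 1.5x calculation overflows to a
    // negative value, clamp the new capacity to Integer.MAX_VALUE instead of
    // collapsing to minCapacity.
    static int newCapacity(int oldCapacity, int minCapacity) {
        int newCapacity = (oldCapacity * 3) / 2 + 1;
        if (newCapacity < 0)                   // overflow occurred
            newCapacity = Integer.MAX_VALUE;
        if (newCapacity < minCapacity)
            newCapacity = minCapacity;
        return newCapacity;
    }

    public static void main(String[] args) {
        System.out.println(newCapacity(10, 11));                       // 16
        System.out.println(newCapacity(1_400_000_000, 1_400_000_001)); // 2147483647
    }
}
```

With the clamp in place, growth stays geometric until the array can no longer double, preserving the amortized-constant-time guarantee up to that point.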
Issue Links
 duplicates

JDK-6787888 Optimization for java.util.ArrayList :: ensureCapacity()
 Closed
 relates to

JDK-6992121 StringBuilder.ensureCapacity(int minCap) throws OutOfMemoryError with minCap=Integer.MIN_VALUE
 Resolved

JDK-8055949 ByteArrayOutputStream capacity should be maximal array size permitted by VM
 Closed

JDK-6955504 (str) String[Builder/Buffer].append(char[],int,int) throws OutOfMemoryError in b94
 Closed

JDK-6952330 Fix for 6933217 broke contract of StringBuffer.ensureCapacity
 Closed