Bug
Resolution: Unresolved
P4
7
None
Fix Understood
generic
generic
Once upon a time, HotSpot had great trouble optimizing code generation for
types with two different implementations. Here we are thinking specifically
about the DirectByteBuffer and HeapByteBuffer implementations of ByteBuffer.
For this reason, encodeLoop has generally used parallel, massively duplicated,
and error-prone implementations, as follows:
    protected final CoderResult encodeLoop(CharBuffer src,
                                           ByteBuffer dst)
    {
        if (src.hasArray() && dst.hasArray())
            return encodeArrayLoop(src, dst);
        else
            return encodeBufferLoop(src, dst);
    }
At some point, support for "bi-morphic" implementations was added to the server
compiler, so this optimization is no longer effective there (in fact it is a
small pessimization), but as of this writing it is still a big win for the
client compiler. Soon (2008-2009?), however, tiered compilation will be
standard in the JVM, and the code duplication should be removed, as advised by
HotSpot engineering.
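For illustration, a de-duplicated encodeLoop would keep a single loop written
against the public CharBuffer/ByteBuffer API and rely on the JIT to specialize
for the concrete buffer type. The sketch below is hypothetical (the class name
SingleLoopEncoder and the trivial ASCII-only mapping are stand-ins, not any
actual charset implementation); it only shows the single-loop shape:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CoderResult;

// Hypothetical single-loop encoder body: one loop handles both heap and
// direct buffers through the public buffer API, with no array fast path.
// The "ASCII-only" mapping stands in for real charset logic.
final class SingleLoopEncoder {
    static CoderResult encodeLoop(CharBuffer src, ByteBuffer dst) {
        while (src.hasRemaining()) {
            char c = src.get(src.position());   // peek without consuming
            if (c > '\u007F')
                return CoderResult.unmappableForLength(1);
            if (!dst.hasRemaining())
                return CoderResult.OVERFLOW;
            src.get();                          // now consume the char
            dst.put((byte) c);
        }
        return CoderResult.UNDERFLOW;
    }
}
```

Note that input is consumed only after the output-space check, so on OVERFLOW
the source position still marks the first unencoded character, as the
CharsetEncoder contract requires.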