Name: krC82822 Date: 05/07/2001
java version "1.3.1-rc2"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.1-rc2-b23)
Java HotSpot(TM) Client VM (build 1.3.1-rc2-b23, mixed mode)
ladybird (1.3.1) contains a fix for bug 4336459
("writing/reading to TCP socket is inefficient under NT/Linux").
The fix has two problems (see the repro sketch below):
1. The buffer size is clipped at 64K.
2. Socket.getReceiveBufferSize() and getSendBufferSize()
report a buffer size that does not match the size actually in effect.
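
A minimal repro sketch of what I mean. The 128K request is illustrative,
and the comments describe the behavior reported above, not what a fixed
runtime should do:

    import java.net.ServerSocket;
    import java.net.Socket;

    public class BufferSizeRepro {
        public static void main(String[] args) throws Exception {
            // Loopback connection so the repro needs no external server.
            ServerSocket server = new ServerSocket(0);
            Socket s = new Socket("127.0.0.1", server.getLocalPort());

            s.setSendBufferSize(128 * 1024);    // request 128K
            s.setReceiveBufferSize(128 * 1024);

            // Per problem 2, these report the requested 128K even though
            // the underlying buffers were clipped at 64K (problem 1), so
            // the clipping cannot be detected from Java.
            System.out.println("reported send buffer:    " + s.getSendBufferSize());
            System.out.println("reported receive buffer: " + s.getReceiveBufferSize());

            s.close();
            server.close();
        }
    }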
Since setReceiveBufferSize() is a "hint" (the following is
from the 1.3 docs)...
"Because SO_RCVBUF is a hint, applications that want to verify what size the
buffers were set to should call getReceiveBufferSize()."
...you could argue that the 64K limit is acceptable, if you could also establish
that the limit was imposed by the OS and not the Java runtime.
However, I don't believe you can argue that getReceiveBufferSize() and
getSendBufferSize() should return incorrect information about the buffer size.
I propose the following:
1. getSendBufferSize() and getReceiveBufferSize() should tell the truth,
i.e. report the buffer size actually in effect.
2. In the absence of a real platform limit, setSendBufferSize() and
setReceiveBufferSize() should set the buffers to the requested size
(see the sketch after this list).
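
Under that contract, the verify-after-set pattern the docs already
recommend becomes meaningful. A sketch of how an application could then
negotiate a buffer size (the method name and the println are mine,
purely illustrative):

    import java.net.Socket;
    import java.net.SocketException;

    class BufferNegotiation {
        // Verify-after-set, as the 1.3 docs recommend. The pattern is
        // only useful if getReceiveBufferSize() reports the size
        // actually in effect, which is what proposal 1 asks for.
        static int requestReceiveBuffer(Socket s, int requested)
                throws SocketException {
            s.setReceiveBufferSize(requested);
            int actual = s.getReceiveBufferSize();
            if (actual < requested) {
                // A genuine OS limit: the caller can adapt, e.g. by
                // issuing smaller transfers. With the behavior described
                // above this branch never fires, because the getter
                // echoes the request.
                System.out.println("OS granted only " + actual + " bytes");
            }
            return actual;
        }
    }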
I care about this because my product relays large chunks of data between
boxes over links with inherent latency. If there's 128K sitting on one box,
I want to transfer it all back in one request/response round trip (yes, I
am aware of TCP/IP packet size limits).
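
For concreteness, a sketch of the kind of relay I mean (host names, ports,
and the 128K chunk size are all illustrative). With honest 128K buffers a
full chunk can be in flight per round trip; silently clipped to 64K, it
cannot:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class Relay {
        public static void main(String[] args) throws Exception {
            Socket src = new Socket("source.example.com", 9000); // hypothetical
            Socket dst = new Socket("sink.example.com", 9000);   // hypothetical

            // Request 128K buffers so a whole chunk fits in the TCP
            // window; with the clipping described above this silently
            // degrades to 64K.
            src.setReceiveBufferSize(128 * 1024);
            dst.setSendBufferSize(128 * 1024);

            InputStream in = src.getInputStream();
            OutputStream out = dst.getOutputStream();
            byte[] chunk = new byte[128 * 1024];
            int n;
            while ((n = in.read(chunk)) != -1) {
                out.write(chunk, 0, n);
            }
            out.flush();
            src.close();
            dst.close();
        }
    }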
(Review ID: 123867)
======================================================================
- duplicates
JDK-4397070 Socket.setSendBufferSize and setReceiveBufferSize do not properly delegate to OS (Resolved)