JDK-4211025

InputStream.close() takes time proportional to bytes left to read


    • Type: Bug
    • Resolution: Fixed
    • Priority: P4
    • Fix Version: 1.4.0
    • Affects Versions: 1.1.6, 1.1.7
    • Component: core-libs
    • Resolved In Build: beta
    • CPU: generic, x86
    • OS: generic, windows_nt



      Name: dbT83986 Date: 02/11/99


      When closing an InputStream connected to a URL pointing to a large file (possibly over a slow connection), it becomes apparent
      that the close operation takes time proportional to the number of bytes left to read on the stream before end of file is
      reached. This can be a very long time if the connection is slow (in effect, it is the time that would be needed to read
      the entire remaining file through the current connection). The example code demonstrates this: the output window displays "Closing...", then
      a long pause, then "Closed OK.". Try this with different URLs pointing to larger or smaller files, or over faster or slower
      connections. The more data is read() before closing the stream, the shorter the pause between the "Closing..." and "Closed OK."
      messages. I tested this on Macintosh with MRJ 2.0 (thus Java 1.1.3) and on Windows NT with JRE 1.1.6.

      The close() operation can be interrupted via a call to Thread.interrupt(). The resulting InterruptedException's stack trace might
      reveal the source of the problem:

      java.lang.InterruptedException: operation interrupted
      at java.net.SocketInputStream.read(SocketInputStream.java:89)
      at java.net.SocketInputStream.skip(SocketInputStream.java:122)
      at sun.net.www.MeteredStream.skip(MeteredStream.java:87)
      at sun.net.www.http.KeepAliveStream.close(KeepAliveStream.java:70)
      at java.io.FilterInputStream.close(FilterInputStream.java:185)
      at MyThread.run(TrivialApplication.java:38)
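      The interruption shown in the stack trace can be reproduced with a pattern like the following: close() runs in a worker thread, and another thread interrupts it if it takes too long. This is a sketch of the technique only; the class and method names are illustrative and not taken from the report.

```java
import java.io.IOException;
import java.io.InputStream;

public class InterruptibleClose {
    // Close the stream in a worker thread; interrupt it if close()
    // has not returned within timeoutMillis.
    static void closeWithTimeout(final InputStream in, long timeoutMillis)
            throws InterruptedException {
        Thread closer = new Thread(new Runnable() {
            public void run() {
                try {
                    in.close();
                } catch (IOException e) {
                    System.out.println("Exception thrown while closing: " + e);
                }
            }
        });
        closer.start();
        closer.join(timeoutMillis); // wait for close() to finish, up to the timeout
        if (closer.isAlive()) {
            closer.interrupt(); // aborts the blocking skip() inside close()
        }
    }
}
```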

      It appears that MeteredStream skips the remaining data on the stream before closing it. The skip() method in turn calls SocketInputStream.read(), which presumably reads all the remaining data just to throw it away afterwards (I didn't actually see the source code). This is an awful waste of time: in a Web application, the user might want to close a stream before reading all the data precisely because the connection is too slow. If the data is then read anyway, the point of the operation is lost.
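      The draining behavior described above can be sketched as follows. This is a simplified illustration of a close() that reads out the remaining bytes so the underlying keep-alive connection can be reused; it is not the actual sun.net.www source, and the class name is illustrative.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Simplified sketch of a stream whose close() drains the unread
// remainder of an HTTP response body. Only single-byte reads are
// metered here, for brevity.
class DrainingStream extends FilterInputStream {
    private long remaining; // bytes of the response body not yet read

    DrainingStream(InputStream in, long contentLength) {
        super(in);
        this.remaining = contentLength;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b >= 0) remaining--;
        return b;
    }

    @Override
    public void close() throws IOException {
        // Drain whatever is left so the socket could be reused for the
        // next request on the same keep-alive connection. On a slow
        // link this costs time proportional to the unread bytes --
        // the delay the report complains about.
        while (remaining > 0) {
            long skipped = in.skip(remaining);
            if (skipped <= 0) break; // end of stream or no progress
            remaining -= skipped;
        }
    }
}
```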

      I didn't test this with InputStreams accessing local files, so this behavior might be relevant for Internet (socket) streams only.
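      One possible workaround in the reporter's scenario (an assumption on my part, not something proposed in the report) is to cast the URLConnection to HttpURLConnection and call disconnect() instead of closing the stream: disconnect() tears down the underlying socket outright rather than draining it for keep-alive reuse, at the cost of not reusing the connection.

```java
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class AbortDownload {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.example.com/huge_file.dat");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        InputStream in = connection.getInputStream();

        byte[] buffer = new byte[200000];
        in.read(buffer); // read only part of the body

        // disconnect() closes the socket instead of draining the
        // remaining bytes, so it returns promptly even on a slow link.
        connection.disconnect();
    }
}
```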

      -----------

      import java.net.*;
      import java.io.*;

      public class Main
      {
          public static void main(String args[])
          {
              InputStream input_stream = null;

              try
              {
                  URL url = new URL("http://www.example.com/huge_file.dat");
                  URLConnection connection = url.openConnection();
                  input_stream = connection.getInputStream();

                  byte[] buffer = new byte[200000];
                  input_stream.read(buffer);
              }
              catch (Throwable e)
              {
                  e.printStackTrace();
              }
              finally
              {
                  if (input_stream != null)
                  {
                      try
                      {
                          System.out.println("Closing ...");
                          input_stream.close();
                          System.out.println("Closed OK.");
                      }
                      catch (Throwable e)
                      {
                          System.out.println("Exception thrown while closing: " + e.toString());
                      }
                  }
              }
          }
      }
      (Review ID: 38231)
      ======================================================================

            jccollet Jean-Christophe Collet (Inactive)
            dblairsunw Dave Blair (Inactive)