OOME caused by ChunkedInputStream implementation change in 1.4.2

    • Type: Bug
    • Resolution: Duplicate
    • Priority: P3
    • Affects Version/s: 1.4.2_12
    • Fix Version/s: 1.4.2_12
    • Component/s: core-libs
    • CPU: x86
    • OS: solaris_10

      The OOME became frequent when one of our licensees migrated their
      application from JDK 1.3.1_06 to 1.4.2_12.

      OS : Solaris 8 -> Solaris 10
      JDK: 1.3.1_06 -> 1.4.2_12

      They sent a test case in which a JSP file runs on Tomcat, with
      Apache running as the front-end web server. URLConnectTest.java
      runs on the client side.
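
      For illustration only, a minimal client along the lines of
      URLConnectTest.java might look like the sketch below. The URL,
      buffer size, and output are assumptions; the licensee's actual
      test case is only referenced in this report.

      ----->
      import java.io.InputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class URLConnectTest {
          public static void main(String[] args) throws Exception {
              // Hypothetical endpoint; the licensee's JSP served ~107 MB
              // through Apache/Tomcat with "Transfer-Encoding: chunked".
              URL url = new URL("http://localhost:8080/test/chunked.jsp");
              HttpURLConnection conn = (HttpURLConnection) url.openConnection();

              InputStream in = conn.getInputStream();
              try {
                  byte[] buf = new byte[4096];
                  long total = 0;
                  int n;
                  // The client drains the response in small reads; what
                  // matters for this bug is how much the runtime buffers
                  // the chunked stream underneath this loop.
                  while ((n = in.read(buf)) != -1) {
                      total += n;
                  }
                  System.out.println("read " + total + " bytes");
              } finally {
                  in.close();
              }
          }
      }
      <-----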

      Attached is a -Xprof log collected by the licensee that shows an
      OOME being thrown.

      Below is their investigation.

      Data of 107,161,907 bytes was transmitted with HTTP chunked
      encoding. In JDK 1.3.1_06, if the previously read data is
      sufficient, the following snippet does not read any further.

      ----->
      private int read1(byte[] b, int off, int len) throws IOException {
          int avail = count - pos;
          if (avail <= 0) {
              // Refill the internal buffer only once it is fully consumed.
              fill();
              avail = count - pos;
              if (avail <= 0) return -1;
          }
          int cnt = (avail < len) ? avail : len;
          System.arraycopy(buf, pos, b, off, cnt);
          pos += cnt;
          return cnt;
      }
      <-----

      In JDK 1.4.2, the implementation below appears to hoard the data
      it reads into chunkData. Repeated expansion of chunkData causes
      frequent GC, and because the application's reads cannot keep up,
      the ever-growing buffer eventually results in an OOME. Why was
      the implementation changed in this way?

      ----->
          /*
           * Expand or compact chunkData if needed.
           */
          if (chunkData.length < chunkCount + copyLen) {
              int cnt = chunkCount - chunkPos;
              if (chunkData.length < cnt + copyLen) {
                  byte tmp[] = new byte[cnt + copyLen];
                  System.arraycopy(chunkData, chunkPos, tmp, 0, cnt);
                  chunkData = tmp;
              } else {
                  System.arraycopy(chunkData, chunkPos, chunkData, 0, cnt);
              }
              chunkPos = 0;
              chunkCount = cnt;
          }
          /*
           * Copy the chunk data into chunkData so that it's available
           * to the read methods.
           */
          System.arraycopy(rawData, rawPos, chunkData, chunkCount, copyLen);
          rawPos += copyLen;
          chunkCount += copyLen;
          chunkRead += copyLen;
      <-----
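
      The effect can be reproduced outside the JDK with a simplified
      model of that expansion logic. Everything below (class name,
      sizes, the slow-consumer loop) is illustrative rather than taken
      from the licensee's report.

      ----->
      public class ChunkGrowthDemo {
          static byte[] chunkData = new byte[4096];
          static int chunkPos = 0;
          static int chunkCount = 0;

          // Mirrors the 1.4.2 expand-or-compact logic above: when the
          // buffer is too small for the unread bytes plus the new data,
          // a larger array is allocated, so the buffer grows with the
          // gap between producer and consumer.
          static void append(byte[] rawData, int copyLen) {
              if (chunkData.length < chunkCount + copyLen) {
                  int cnt = chunkCount - chunkPos;
                  if (chunkData.length < cnt + copyLen) {
                      byte[] tmp = new byte[cnt + copyLen];
                      System.arraycopy(chunkData, chunkPos, tmp, 0, cnt);
                      chunkData = tmp;
                  } else {
                      System.arraycopy(chunkData, chunkPos, chunkData, 0, cnt);
                  }
                  chunkPos = 0;
                  chunkCount = cnt;
              }
              System.arraycopy(rawData, 0, chunkData, chunkCount, copyLen);
              chunkCount += copyLen;
          }

          public static void main(String[] args) {
              byte[] network = new byte[8192]; // one fast "network" read
              for (int i = 0; i < 1000; i++) {
                  append(network, network.length); // producer outruns consumer
                  chunkPos += 1024;                // application reads slowly
              }
              // With no read-ahead limit the buffer holds the entire
              // backlog: roughly 1000 * (8192 - 1024) bytes, about 7 MB
              // here, and unbounded for a 107 MB transfer.
              System.out.println("buffer grew to " + chunkData.length + " bytes");
          }
      }
      <-----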

      The licensee also provided a suggested fix, shown below, which
      limits how much data is read ahead:

      ----->
      ***************
      *** 558,563 ****
      --- 558,564 ----
             * <code>chunkData</code> or we need to determine how many bytes
             * are available on the input stream.
             */
      +     static private int MAX_READAHEAD_SIZE = 8192;
            private int readAhead(boolean allowBlocking) throws IOException {

                /*
      ***************
      *** 568,573 ****
      --- 569,581 ----
                }

                /*
      +          * If more than MAX_READAHEAD_SIZE bytes are available now,
      +          * we don't need any more for the time being.
      +          */
      +         if (chunkCount - chunkPos > MAX_READAHEAD_SIZE)
      +             return chunkCount - chunkPos;
      +
      +         /*
                 * Reset position/count if data in chunkData is exhausted.
                 */
                if (chunkPos >= chunkCount) {
      <-----
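
      Applying the same guard to the toy model above shows the intended
      effect: once the cap is exceeded, the read-ahead returns the
      buffered count instead of pulling more off the wire, and the
      buffer stops growing. Again a sketch, not the actual JDK change.

      ----->
      public class BoundedGrowthDemo {
          static final int MAX_READAHEAD_SIZE = 8192;

          static byte[] chunkData = new byte[4096];
          static int chunkPos = 0;
          static int chunkCount = 0;

          // Same expansion logic as before, preceded by the patch's check.
          static void append(byte[] rawData, int copyLen) {
              if (chunkCount - chunkPos > MAX_READAHEAD_SIZE) {
                  return; // enough buffered already; skip the network read
              }
              if (chunkData.length < chunkCount + copyLen) {
                  int cnt = chunkCount - chunkPos;
                  if (chunkData.length < cnt + copyLen) {
                      byte[] tmp = new byte[cnt + copyLen];
                      System.arraycopy(chunkData, chunkPos, tmp, 0, cnt);
                      chunkData = tmp;
                  } else {
                      System.arraycopy(chunkData, chunkPos, chunkData, 0, cnt);
                  }
                  chunkPos = 0;
                  chunkCount = cnt;
              }
              System.arraycopy(rawData, 0, chunkData, chunkCount, copyLen);
              chunkCount += copyLen;
          }

          public static void main(String[] args) {
              byte[] network = new byte[8192];
              for (int i = 0; i < 1000; i++) {
                  append(network, network.length); // producer now throttled
                  chunkPos += 1024;                // same slow consumer
              }
              // The buffer stabilizes near 2 * MAX_READAHEAD_SIZE instead
              // of scaling with the total amount of data transferred.
              System.out.println("buffer capped at " + chunkData.length + " bytes");
          }
      }
      <-----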

      Assignee: Chris Hegarty
      Reporter: Xiaojun Zhang (Inactive)