JDK-4447092: Relieve the server-side bottleneck in TCPTransport.run()


    • Type: Enhancement
    • Resolution: Duplicate
    • Priority: P4
    • Fix Version/s: None
    • Affects Version/s: 1.3.0
    • Component/s: core-libs
    • CPU: generic
    • OS: generic



      Name: krC82822 Date: 04/18/2001


      java version "1.3.0"
      Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0-C)
      Java HotSpot(TM) Client VM (build 1.3.0-C, mixed mode)

      According to RMI "folklore", the RMI RTS can handle at most about 300 RMI calls
      per second.

      One reason for this is that Java RMI has a major server-side bottleneck around
      the site of the myServer.accept() call in TCPTransport.run().

      For each incoming connection, the RMI RTS has to create a thread and do
      housekeeping (getting the client address and setting it into a ThreadLocal,
      configuring the connected socket, etc.) before the connection can start
      communicating with the client and before the listening thread can proceed to
      the next accept().
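
      For illustration, a minimal sketch of this one-thread-per-connection shape
      (the class and method names are invented for the example; this is not the
      actual sun.rmi.transport.tcp.TCPTransport code):

          import java.io.IOException;
          import java.net.ServerSocket;
          import java.net.Socket;

          // Sketch of the current architecture: one listening thread, one new
          // thread per accepted connection. All names are illustrative only.
          public class SingleAcceptorSketch {
              public static void main(String[] args) throws IOException {
                  ServerSocket myServer = new ServerSocket(1099);
                  while (true) {
                      final Socket socket = myServer.accept();  // listener blocks here
                      // Housekeeping (client address, socket setup, thread creation)
                      // all happens before the listener can return to accept().
                      Thread handler = new Thread(new Runnable() {
                          public void run() {
                              serve(socket);                    // talk to the client
                          }
                      });
                      handler.setName("RMI TCP Connection-"
                              + socket.getInetAddress().getHostAddress());
                      handler.start();
                  }
              }

              static void serve(Socket socket) {
                  // placeholder: read the call, dispatch it, write the reply
                  try { socket.close(); } catch (IOException e) { /* ignored */ }
              }
          }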

      This is in fact the simplest possible server-side architecture. The bottleneck
      can be relieved with only a little more sophistication in the design of the
      listening engine.

      The RMI RTS should pre-create a fixed number of threads, all of which are
      looping around a myServer.accept()/ConnectionHandler.run() loop. In other words,
      they are initially all in the "accept" state. When one of them gets a connection
      it immediately processes it directly without starting another thread, and
      returns to the "accept" state when the connection terminates, only exiting when
      the underlying ServerSocket is finally closed. This is all in addition to the
      present processing which starts a new thread per accepted connection.
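
      A minimal sketch of that fixed accept pool, again with invented names
      (AcceptPoolSketch, serve()) rather than the actual RMI transport classes:

          import java.io.IOException;
          import java.net.ServerSocket;
          import java.net.Socket;

          // Sketch of the proposal: N threads all looping around accept(), each
          // serving its connection directly and then returning to accept().
          public class AcceptPoolSketch {
              public static void main(String[] args) throws IOException {
                  final ServerSocket myServer = new ServerSocket(1099);
                  int poolSize = 8;   // fixed pool size; see the property sketch below
                  for (int i = 0; i < poolSize; i++) {
                      new Thread(new Runnable() {
                          public void run() {
                              while (true) {
                                  Socket socket;
                                  try {
                                      // initially all pool threads sit in accept()
                                      socket = myServer.accept();
                                  } catch (IOException e) {
                                      return;  // ServerSocket closed: the thread exits
                                  }
                                  // handle the call directly, without starting a new
                                  // thread, then loop back to accept()
                                  serve(socket);
                              }
                          }
                      }, "RMI Accept-" + i).start();
                  }
                  // The present listener, which starts a new thread per accepted
                  // connection, could keep running alongside this pool as described.
              }

              static void serve(Socket socket) {
                  // placeholder: read the call, dispatch it, write the reply
                  try { socket.close(); } catch (IOException e) { /* ignored */ }
              }
          }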

      These fixed threads constitute a thread pool of fixed size, which could be
      externally controllable, e.g. via a system property such as
      sun.rmi.server.threadPoolSize, and which should have a default value of some
      reasonable number like 4, 8, 16, ...
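
      Wiring the pool size in the sketch above to that property could look like this
      (sun.rmi.server.threadPoolSize is the name proposed here, not an existing RMI
      property, and 8 is an arbitrary default):

          // Read the proposed (hypothetical) property, falling back to a default of 8.
          int poolSize =
                  Integer.getInteger("sun.rmi.server.threadPoolSize", 8).intValue();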

      An incoming connection has an equal chance of being dispatched into any of the
      concurrent myServer.accept() calls, and only the one pre-existing listening
      thread still creates a new thread per connection. Therefore the overhead
      presently imposed by creating a thread per accept is reduced by
      threadPoolSize/(threadPoolSize+1): for a pool of 4 threads, by 4/5 or 80%.

      This is a very simple improvement to implement, basically similar to the NFSD
      server architecture, which also relies on N concurrent accept()s on the same
      passive socket (and N concurrent recv()s on the same datagram socket). In this
      case there are concurrent threads rather than concurrent processes, but the
      principle is the same.

      This server architecture is described in Stevens, Unix Network Programming, vol.
      i, 27.11, and the change proposed is equivalent to moving from Stevens' 27.10 to
      27.11. Stevens says in 27.13 that on some platforms a mutex is required to
      ensure that only one thread is actually inside accept() at a time, while other
      platforms support concurrent accept()s. The PlainSocketImpl.accept() method is
      synchronized, which seems to take care of this issue.

      A more complex and better fix would be to return each ConnectionHandler thread
      arising out of the current implementation to a dynamic thread pool when the
      connection is closed, and to allocate new connection threads from the pool where
      possible, rather than creating new threads (something like Stevens' 27.12). This
      dynamic thread pool could have a maximum size or an idle expiry time, or both;
      it might also reasonably have a *minimum* size (of 4 or 8 as above), which can
      be enforced by not exiting on expiry if the minimum pool size has been reached.
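
      The dynamic pool described above could be sketched roughly as follows, using
      the later java.util.concurrent API (which did not exist in 1.3) purely to show
      the shape of the idea; the class name and pool parameters are illustrative:

          import java.net.Socket;
          import java.util.concurrent.SynchronousQueue;
          import java.util.concurrent.ThreadPoolExecutor;
          import java.util.concurrent.TimeUnit;

          // Dynamic pool sketch: a minimum of 4 threads is kept alive, at most 64
          // exist at once, and idle threads above the minimum expire after 60s.
          public class DynamicPoolSketch {
              private final ThreadPoolExecutor connectionPool = new ThreadPoolExecutor(
                      4,                      // minimum ("core") pool size
                      64,                     // maximum pool size
                      60L, TimeUnit.SECONDS,  // idle expiry time
                      new SynchronousQueue<Runnable>());

              // Each accepted connection is handed to the pool instead of to a
              // freshly created Thread.
              void dispatch(final Socket socket) {
                  connectionPool.execute(new Runnable() {
                      public void run() {
                          serve(socket);
                      }
                  });
              }

              void serve(Socket socket) {
                  // placeholder: read the call, dispatch it, write the reply
              }
          }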

      However, the 27.11-based solution is so simple to implement that I am requesting
      it instead.

      The only minor difficulty I can see is that you have to call Thread.setName()
      at some point in these threads; otherwise you lose the association between the
      thread name and the client address for the threads in the fixed pool.
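
      Concretely, each pooled thread could re-label itself for the duration of one
      connection along these lines (sketch only; serve() is a hypothetical
      per-connection handler):

          import java.net.Socket;

          // Give the pooled thread a name recording the client address while it
          // serves the connection, and restore the old name before going back to
          // accept().
          class ThreadNamingSketch {
              static void serveWithClientName(Socket socket) {
                  Thread current = Thread.currentThread();
                  String oldName = current.getName();
                  current.setName("RMI TCP Connection-"
                          + socket.getInetAddress().getHostAddress());
                  try {
                      serve(socket);
                  } finally {
                      current.setName(oldName);
                  }
              }

              static void serve(Socket socket) {
                  // placeholder: read the call, dispatch it, write the reply
              }
          }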
      (Review ID: 120904)
      ======================================================================

            Assignee: Peter Jones (peterjones) (Inactive)
            Reporter: Kevin Ryan (kryansunw) (Inactive)