The GC runs well for about 48 hours (relative to load) and then fails to reclaim all memory; soon afterward the application runs out of memory. The application has been profiled with OptimizeIt and there appears to be no memory leak. In any case, if a leak existed, it would have been evident during the first 48 hours as well.
The application is a SIP Proxy Server (like a call switch) and runs under
JRE 1.2.2, 1.3.0, or 1.3.1 - all exhibit the same problem. The server runs for a LONG time before things appear to go wrong. By a long time, I mean 20-30 hours. It processes incoming call requests in a protocol called SIP.
We're talking about 15-20 million messages here. The cycle of partial GCs followed
by a full GC repeats many times, with the GC returning all the memory each time - it's not
as if the GC is slowly losing memory (which I would immediately recognize as a memory leak).
I have enclosed a log file of output shown using the following flags to java:
java -DJARDIR=$INSTALL_DIR/lib/ -verbosegc -Xms64M -Xmx64M
Near the end you can see that it suddenly becomes unable to reclaim memory and quickly dies.
It turned out that this was due to a race condition caused by thread contention, which has since been fixed using thread priorities. This bug may be closed now.
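For reference, a minimal sketch of what "fixed using thread priorities" might look like in Java. The thread names and the specific priority offset are assumptions for illustration; the report does not include the actual fix.

```java
// Hypothetical sketch: raising the priority of a cleanup/housekeeping
// thread above the default message-processing threads, so that under
// heavy load it is still scheduled and contention does not starve it.
public class PrioritySketch {
    public static void main(String[] args) throws InterruptedException {
        Thread cleanup = new Thread(() -> {
            // Work that must not be starved by message-processing threads
            // (e.g. releasing resources so the GC can reclaim them).
        }, "cleanup-worker");

        // NORM_PRIORITY is 5; bump the cleanup thread two levels higher.
        // Thread priorities are a scheduling hint, not a guarantee.
        cleanup.setPriority(Thread.NORM_PRIORITY + 2);
        cleanup.start();
        cleanup.join();

        System.out.println("cleanup priority was " + cleanup.getPriority());
    }
}
```

Note that priorities only bias the scheduler; a more robust fix for a race condition would normally use explicit synchronization rather than relying on scheduling order.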