The garbage collector does not behave as expected when the -Xincgc option is used with JDK 1.2.2_004 and higher, as well as with the JDK 1.3 RC, on Solaris. The -Xincgc option is documented to cost roughly 10% in throughput in exchange for shorter pauses; instead it is both increasing the pause times and degrading performance by roughly 50%.
Carlos.Lucasius@Canada (September 16, 2000):
COMMENTS FROM GEMSTONE AND IDEAS TOWARDS A FIX:
In Hotspot 1.0.1, we observed in some of our tests that the object
heap seemed to grow an awful lot before the Train GC would attempt
a mark-sweep. We added the logic described below, which has helped
some.
In MarkSweep, remember the capacity of the old generation in an int at the
end of each mark sweep. Add to markSweep.hpp:
static int _last_old_capacity;
along with appropriate accessor methods, etc.
In globals.hpp, we added:
product(int, MaxOldHeapExpansion, 8*M, "Max expansion of old gen without full GC if Train GC in use")
In scavenge.cpp, near the comment "Check if we had to expand old generation",
we have code like this:
----------
// Check if we had to expand old generation in order to complete scavenge
// If so, do full gc on next scavenge and (possibly) shrink old generation again
if (!UseTrainGC) {
  int actual_old_capacity_expansion = Universe::old_gen()->capacity() - prev_old_capacity;
  if (actual_old_capacity_expansion > 0) {
    _full_gc_revert_count = MarkSweep::invoke_count();
  }
} else {
  // Do not use prev_old_capacity here.
  // Gemstone additions: be more aggressive about doing full mark sweeps if we are
  // flushing a lot of objects and getting a lot of object memory growth.
  int old_capacity_expansion =
      Universe::old_gen()->capacity() - MarkSweep::last_old_capacity();
  if (old_capacity_expansion > MaxOldHeapExpansion) {
    _full_gc_revert_count = MarkSweep::invoke_count();
  }
}
---------------
This change helped for Java programs that created a lot of objects that
survived for 5 or 10 scavenges and were then dereferenced (a rough Java
sketch of that allocation pattern follows below). If a mark sweep ran often
enough the system would stabilize, but Train GC could not cope otherwise.
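As an illustration only, the sketch below approximates the workload described above:
batches of objects stay reachable across several young-generation collections and are
then dropped. The class name, batch sizes, and loop counts are assumptions made for
the sketch, not values taken from the original report.
----------
import java.util.LinkedList;

// Hypothetical workload sketch: objects survive a few scavenges, then are dereferenced.
public class MidLifeChurn {
    public static void main(String[] args) {
        // Assumption: retaining ~8 batches keeps objects alive across several scavenges.
        final int RETAINED_BATCHES = 8;
        LinkedList retained = new LinkedList();

        for (int i = 0; i < 10000; i++) {
            // Allocate a batch of medium-sized objects.
            byte[][] batch = new byte[1000][];
            for (int j = 0; j < batch.length; j++) {
                batch[j] = new byte[512];
            }
            retained.addLast(batch);

            // Dereference the oldest batch once it has aged past several collections.
            if (retained.size() > RETAINED_BATCHES) {
                retained.removeFirst();
            }
        }
        System.out.println("done; retained batches: " + retained.size());
    }
}
----------
Running a program of this shape with -Xincgc and -verbosegc is the kind of setup in
which the old generation was observed to grow substantially before Train GC attempted
a mark sweep.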
I doubt that our change is a complete or correct solution.
We are still not happy with the performance of Train GC.
Allen Otis <###@###.###>
Name: yyT116575 Date: 12/07/2000
java -server -version
java version "1.3.0beta_refresh"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0beta_refresh-b09)
Java HotSpot(TM) Server VM (build 1.3.0beta-b07, mixed mode)
In the Java HotSpot documentation it states that -Xincgc is a pauseless
garbage collector. Unfortunately, this is not the case. I have an application that
loads 950MB of data into RAM and has one trivial method exposed over RMI. The
data is stored in a HashMap which has 800K entries. Incremental garbage
collection does tend to increase the number of small GC runs (as shown using
the -verbosegc flag), but full GC calls are still being made, and they take 30+
seconds to run. This is with totally optimized code that allocates
practically no new objects during its lifetime. After rearchitecting the
storage I brought my memory footprint down to 650MB and reduced the GC delay to 4 seconds
(by serializing the Object[] used into a byte[] and storing only the byte[]; a rough
sketch of that technique follows below).
This is hardly pauseless garbage collection. During this time no threads can
be serviced and my RMI calls hang. If too many threads hit the server during
this time I get a seg fault and java exits.
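As a rough illustration only (the reporter's actual data structures are not shown in
this report), the sketch below demonstrates the kind of rearchitecture described above:
each record's Object[] is serialized into a byte[], so the HashMap holds a single flat
array per entry instead of a graph of live objects for the collector to trace. The
class name, record contents, and sizes are assumptions.
----------
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Hypothetical sketch: store each record as a serialized byte[] rather than as live objects.
public class PackedStore {

    // Serialize an Object[] (whose elements must be Serializable) into a byte[].
    static byte[] pack(Object[] record) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(record);
        out.close();
        return bytes.toByteArray();
    }

    // Deserialize the byte[] back into an Object[] only when the record is actually needed.
    static Object[] unpack(byte[] packed) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(packed));
        return (Object[]) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Object[] record = new Object[] { "key-42", "some payload", new long[] { 1, 2, 3 } };
        byte[] packed = pack(record);        // store only this byte[] in the HashMap
        Object[] restored = unpack(packed);  // rebuild the record on demand
        System.out.println(restored.length + " fields, " + packed.length + " bytes packed");
    }
}
----------
The point is that a byte[] is a single object with no outgoing references, so a full
mark sweep has far less to trace, at the cost of deserializing the record on access.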
The current command to launch the rmiregistry is:
rmiregistry -J-Xincgc -J-Xmx100M -J-Djava.security.policy=java.policy &
The command to launch the process is:
java -server -Xincgc -verbosegc -Xms1900M -Xmx1900M myClass
This was also attempted on Windows 2000 with Java 1.3.0 and the separate
HotSpot Server VM 2.0, with identical results on the same code.
I have tried this on systems with 1-2GB of RAM (with appropriate reductions in
the -Xms and -Xmx flags for the smaller boxes) running Redhat 6.1, Redhat 6.2,
SuSE (version unknown), and Redhat 6.2 Enterprise Oracle Edition.
(Review ID: 108537)
======================================================================
- duplicates:
  JDK-4459148 Incremental GC has delays upto 13 seconds under jdk1.2.2 / 1.3.1 / 1.4.0-b64 (Closed)
- relates to:
  JDK-4374139 Serverside apps:some objects need not get scanned during the entire life of app. (Closed)