Type: Bug
Resolution: Not an Issue
Priority: P1
Release: 1.4.2_09
CPU: sparc
OS: solaris_8
We are getting many OutOfMemoryErrors with Java 1.4.1_05 at our customer site. Each error is immediately followed by a core dump/crash. There are two variants:
Exception java.lang.OutOfMemoryError: requested 41943040 bytes
Exception java.lang.OutOfMemoryError: requested 20971520 bytes
Those byte values correspond to exactly 5120 and 2560 8 KB pages, respectively. The first variant happens more often than the second.
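For what it's worth, the arithmetic behind that observation (assuming the standard 8 KB Solaris/SPARC base page size) is simply:

// Sanity check: the requested sizes divided by an assumed 8192-byte page size.
public class PageMath {
    public static void main(String[] args) {
        System.out.println(41943040L / 8192); // 5120 pages
        System.out.println(20971520L / 8192); // 2560 pages
    }
}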
However, we don't see how the process could possibly be in an out-of-memory situation. There are GBs of swap space and physical memory left. In one recent example, the VM was using 2.7 GB (RSS), so it is far from reaching the 4 GB 32-bit process limit. The heap size in that example was reported as 900 MB used of 1.6 GB total (700 MB free), and the heap ceiling is allowed to grow to 3 GB.
Knowing that Sun doesn't support 1.4.1 properly anymore, I tried to reproduce this behavior in 1.4.2_07. I was able to get the following error message:
Exception java.lang.OutOfMemoryError: requested 41943040 bytes for GrET* in /export/jdk142-update/ws/fcs/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space?
The heap has plenty of space, the VM size is safely within the 4 GB limit, and we are running on V880s with plenty of available physical memory and swap: Solaris 8 on 4-CPU V880 machines with 16 GB of RAM and 20 GB of swap, with several GB free at all times.
The command-line parameters are:
-server -Xms512m -Xmx2536m
-XX:SoftRefLRUPolicyMSPerMB=15000
-XX:+OverrideDefaultLibthread
-XX:+UseSignalChaining
-XX:+UseParallelGC
-XX:+UseAdaptiveSizePolicy
The -Xms and -Xmx values can change but are generally quite high, as shown; 3072 MB is the largest -Xmx setting we use.
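For context, a representative launch command with these options looks roughly like this (the main class name is a placeholder, not our actual one):

java -server -Xms512m -Xmx2536m \
     -XX:SoftRefLRUPolicyMSPerMB=15000 \
     -XX:+OverrideDefaultLibthread \
     -XX:+UseSignalChaining \
     -XX:+UseParallelGC \
     -XX:+UseAdaptiveSizePolicy \
     com.example.MyApp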
In general, there are no core files produced for this particular crash.
The attached log files were collected from 1.4.2_09 with the settings you suggested, but with -Xms512m -Xmx3072m (which is the configuration our customer wants us to test). There was no core file. I was hoping that 1.4.2_09 would produce a core file in this crash scenario, since one of the bugs listed in the bug parade suggested that improvements had been made to the handling of VM crashes, but that was not the case. Could you comment on that?
We log the free/total heap memory periodically in our app. This is what it said the last time it logged before the crash:
70 / 855 MB free/total memory. Note that the heap ceiling (855 MB) is _far_ below the maximum of 3072 MB, and that the machine had 5 GB of physical memory free and 20 GB of swap free around the time of the crash.
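For reference, the periodic heap logging in the app amounts to something like the following sketch (the class name and logging interval are illustrative assumptions, not our exact code):

// Illustrative sketch of the periodic free/total heap logging; names and interval are assumptions.
public class HeapLogger implements Runnable {
    public void run() {
        Runtime rt = Runtime.getRuntime();
        while (true) {
            long freeMB = rt.freeMemory() / (1024 * 1024);   // free space within the current heap
            long totalMB = rt.totalMemory() / (1024 * 1024); // current heap size (the ceiling, not -Xmx)
            System.out.println(freeMB + " / " + totalMB + " MB free/total memory");
            try {
                Thread.sleep(60000); // log once a minute (interval is an assumption)
            } catch (InterruptedException e) {
                return;
            }
        }
    }
}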
I was running prstat at the time; its log file is attached as well, along with a line from 'top' showing the memory on the machine. The attached sm_oom.log captures stdout from the application; grep for OutOfMemoryError to see the actual error message before the crash (the entries in that file after the OutOfMemoryError are from the app restarting immediately after the crash).
The attached gc_sm file is the file pointed to by -Xloggc. Note that the last line shows the VM in the middle of a Full GC.
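For completeness, GC logging of this kind would be enabled by adding options along these lines to the command line above (only -Xloggc:gc_sm is confirmed here; the Print* flags are an assumption about the suggested settings):

-Xloggc:gc_sm -XX:+PrintGCDetails -XX:+PrintGCTimeStamps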
Another test was run with the same -Xms and -Xmx values; the error file from the resulting crash is attached as hs_err_pid21701.log, and the GC log as gc_21701.log.
After another 4 hours, one more crash occurred, again with no core file and an OutOfMemoryError logged immediately prior to the crash:
Exception java.lang.OutOfMemoryError: requested 2048000 bytes for GrET* in /export1/jdk142-update/ws/fcs/hotspot/src/share/vm/utilities/growableArray.cpp. Out of swap space?
As before, there is plenty of physical memory left. Immediately prior to the crash, prstat reports the VM as SIZE 3744M, RSS 3697M. The GC output is attached as gc_22188.log; again, the VM was apparently in the middle of a Full GC.
The customer has run tests against the early-access Java 1.4.2_10 without any other changes and has had several crashes. The attached hs_err files are:
(1) hs_err_pid14405.log (2) hs_err_pid23728.log (3) hs_err_pid2937.log