JDK-7026932

G1: No need to abort VM when card count cache expansion fails


    • Type: Enhancement
    • Resolution: Fixed
    • Priority: P4
    • Fix Version: hs21
    • Affects Version: 7
    • Component: hotspot
    • Subcomponent: gc
    • Resolved In Build: b08
    • OS: generic
    • CPU: generic
    • Verification: Not verified

        Steffan Friberg reported the following crash during G1 performance runs:

        #
        # There is insufficient memory for the Java Runtime Environment to continue.
        # Native memory allocation (malloc) failed to allocate 9831128 bytes for CardEpochCacheEntry in /HUDSON/workspace/jdk7-2-build-linux-i586-product/jdk7/hotspot/src/share/vm/gc_implementation/g1/concurrentG1Refine.cpp
        # Possible reasons:
        # The system is out of physical RAM or swap space
        # In 32 bit mode, the process size limit was hit
        # Possible solutions:
        # Reduce memory load on the system
        # Increase physical memory or swap space
        # Check if swap backing store is full
        # Use 64 bit Java on a 64 bit OS
        # Decrease Java heap size (-Xmx/-Xms)
        # Decrease number of Java threads
        # Decrease Java thread stack sizes (-Xss)
        # Set larger code cache with -XX:ReservedCodeCacheSize=
        # This output file may be truncated or incomplete.
        #
        # Out of Memory Error (allocation.inline.hpp:44), pid=27123, tid=3022015376
        #
        # JRE version: 6.0_25-b03
        # Java VM: Java HotSpot(TM) Server VM (21.0-b02 mixed mode linux-x86 )
        # Core dump written. Default location: /localhome/tests/specjapp04/wls1032/sthx6434/wlsdomain/wls103/specdomain/core or core.27123
        #

        --------------- T H R E A D ---------------

        Current thread (0x08dfd000): GCTaskThread [stack: 0x00000000,0x00000000] [id=27125]

        Stack: [0x00000000,0x00000000], sp=0xb42017b0, free space=2951173k
        Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
        V [libjvm.so+0x6c709b] VMError::report_and_die()+0x19b
        V [libjvm.so+0x2c8c2e] report_vm_out_of_memory(char const*, int, unsigned int, char const*)+0x4e
        V [libjvm.so+0x14a93b] AllocateHeap(unsigned int, char const*)+0x4b
        V [libjvm.so+0x2967a2] ConcurrentG1Refine::clear_and_record_card_counts()+0x92
        V [libjvm.so+0x35bd9c] G1RemSet::oops_into_collection_set_do(OopsInHeapRegionClosure*, int)+0x13c
        V [libjvm.so+0x347808] G1CollectedHeap::g1_process_strong_roots(bool, SharedHeap::ScanningOption, OopClosure*, OopsInHeapRegionClosure*, OopsInGenClosure*, int)+0x288
        V [libjvm.so+0x34f26d] G1ParTask::work(int)+0x98d
        V [libjvm.so+0x6d6289] GangWorker::loop()+0x99
        V [libjvm.so+0x6d5c08] GangWorker::run()+0x18
        V [libjvm.so+0x587411] java_start(Thread*)+0x111
        C [libpthread.so.0+0x5832] abort@@GLIBC_2.0+0x5832

        The full hs_err is attached.
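        The stack above shows the failure path: AllocateHeap() treats the failed malloc as a fatal out-of-memory error and calls VMError::report_and_die(), so a failed attempt to grow the card count cache brings down the whole VM. The enhancement is to treat that expansion failure as non-fatal and keep using the existing, smaller cache. The code below is a minimal standalone sketch of that idea, not HotSpot code; the CardCountCache/CardCountCacheEntry types and the try_expand() helper are hypothetical names used only for illustration.

        // Standalone C++ sketch (hypothetical types, not the ConcurrentG1Refine API):
        // grow the card count cache when possible, but fall back to the current
        // cache instead of aborting the VM when the allocation fails.
        #include <cstddef>
        #include <cstdio>
        #include <cstdlib>
        #include <cstring>

        struct CardCountCacheEntry {   // stand-in for a per-card count/epoch entry
          unsigned count;
          unsigned epoch;
        };

        struct CardCountCache {
          CardCountCacheEntry* entries;
          size_t               capacity;

          // Attempt to grow the cache to new_capacity entries. Returns false and
          // leaves the current cache untouched if the allocation fails.
          bool try_expand(size_t new_capacity) {
            if (new_capacity <= capacity) return true;   // already big enough

            // Plain malloc stands in for a "may fail" VM allocation; the crash in
            // this report comes from using the aborting variant instead.
            CardCountCacheEntry* new_entries = static_cast<CardCountCacheEntry*>(
                malloc(new_capacity * sizeof(CardCountCacheEntry)));
            if (new_entries == nullptr) {
              // Expansion failed: refinement keeps working with the smaller cache,
              // so there is no need to exit the VM.
              return false;
            }

            // Old counts are simply dropped here; the sketch assumes the cache is
            // cleared and repopulated after a resize anyway.
            memset(new_entries, 0, new_capacity * sizeof(CardCountCacheEntry));
            free(entries);
            entries  = new_entries;
            capacity = new_capacity;
            return true;
          }
        };

        int main() {
          CardCountCache cache{nullptr, 0};
          cache.try_expand(1024);                      // initial sizing
          if (!cache.try_expand(size_t(1) << 28)) {    // a large request that may fail
            printf("expansion failed, continuing with %zu entries\n", cache.capacity);
          }
          free(cache.entries);
          return 0;
        }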

              Assignee: John Cuthbertson (johnc)
              Reporter: John Cuthbertson (johnc)
