JDK-6922992

G1: assert(Universe::heap()->is_in_or_null(*loc),"found non oop pointer")

    • Type: Bug
    • Resolution: Duplicate
    • Priority: P2
    • Affects Version/s: None
    • Fix Version/s: hs17
    • Component: hotspot
    • Subcomponent: gc
    • CPU: x86
    • OS: generic

      See logs and rerun.sh here /net/sqenfs-1.sfbay/export1/tools/gtee/results/JDK6/ADHOC/VM/2010-02-02_01/vm/solaris-i586/server/mixed/solaris-i586_server_mixed_vm.gc.testlist/ResultDir/StringGC

      The crash reproduced on vm-x4100-01 with both fastdebug and product builds. It does not reproduce without G1.

      The hs_err is:
      ;; Using jvm: "/net/sqenfs-1.sfbay/export1/jdk/vm/hsx/17/pit/b08/jdk7b81/fastdebug/solaris-i586/jre/lib/i386/server/libjvm.so"
      #
      # A fatal error has been detected by the Java Runtime Environment:
      #
      # Internal Error (/tmp/jprt/P1/B/230259.et151817/source/src/share/vm/compiler/oopMap.cpp:425), pid=24252, tid=11
      # Error: assert(Universe::heap()->is_in_or_null(*loc),"found non oop pointer")
      #
      # JRE version: 7.0-b80
      # Java VM: Java HotSpot(TM) Server VM (17.0-b08-2010-01-22-230259.et151817.hs17b08-fastdebug mixed mode solaris-x86 )
      # If you would like to submit a bug report, please visit:
      # http://java.sun.com/webapps/bugreport/crash.jsp
      #

      --------------- T H R E A D ---------------

      Current thread (0x09224800): VMThread [stack: 0xb677f000,0xb67ff000] [id=11]

      Stack: [0xb677f000,0xb67ff000], sp=0xb67fe100, free space=1fcb67ff000k
      Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
      V [libjvm.so+0x1753b6e];; void VMError::report(outputStream*)+0x5be
      V [libjvm.so+0x1754cf6];; void VMError::report_and_die()+0x586
      V [libjvm.so+0x7e5c29];; void report_assertion_failure(const char*,int,const char*)+0x61
      V [libjvm.so+0x131c890];; void OopMapSet::all_do(const frame*,const RegisterMap*,OopClosure*,void(*)(oop*,oop*),OopClosure*)+0x790
      V [libjvm.so+0x131c0f5];; void OopMapSet::oops_do(const frame*,const RegisterMap*,OopClosure*)+0x31
      V [libjvm.so+0x8b9d44];; void frame::oops_code_blob_do(OopClosure*,CodeBlobClosure*,const RegisterMap*)+0x34
      V [libjvm.so+0x8bb1a8];; void frame::oops_do_internal(OopClosure*,CodeBlobClosure*,RegisterMap*,bool)+0x8c
      V [libjvm.so+0x1667820];; void JavaThread::oops_do(OopClosure*,CodeBlobClosure*)+0x290
      V [libjvm.so+0x166f7df];; void Threads::possibly_parallel_oops_do(OopClosure*,CodeBlobClosure*)+0x53
      V [libjvm.so+0x14996e6];; void SharedHeap::process_strong_roots(bool,bool,SharedHeap::ScanningOption,OopClosure*,CodeBlobClosure*,OopsInGenClosure*)+0xc2
      V [libjvm.so+0x909592];; void G1MarkSweep::mark_sweep_phase3()+0x96
      V [libjvm.so+0x908dd2];; void G1MarkSweep::invoke_at_safepoint(ReferenceProcessor*,bool)+0xc6
      V [libjvm.so+0x8c9dc5];; void G1CollectedHeap::do_collection(bool,bool,unsigned)+0x519
      V [libjvm.so+0x8cac55];; HeapWord*G1CollectedHeap::satisfy_failed_allocation(unsigned)+0x4d
      V [libjvm.so+0x178d1c6];; void VM_G1CollectForAllocation::doit()+0x3a
      V [libjvm.so+0x178b029];; void VM_Operation::evaluate()+0xe1
      V [libjvm.so+0x1789813];; void VMThread::evaluate_operation(VM_Operation*)+0xeb
      V [libjvm.so+0x1789dfa];; void VMThread::loop()+0x446
      V [libjvm.so+0x178946a];; void VMThread::run()+0xb2
      V [libjvm.so+0x132d87f];; java_start+0x117
      C [libc.so.1+0xa7045];; _thr_setup+0x4e
      C [libc.so.1+0xa7330];; _lwp_start+0x0

      JavaThread 0x09333c00 (nid = 21) was being processed
      Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
      v ~RuntimeStub::_new_array_Java
      J gc.gctests.StringGC.StringGC$StringAdder.run()V
      J nsk.share.runner.ThreadsRunner$ManagedThread.run()V
      v ~StubRoutines::call_stub
      VM_Operation (0xb432c330): G1CollectForAllocation, mode: safepoint, requested by thread 0x09334800
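
      For context, the assertion fires while the GC scans a compiled frame: the oop map verification walks every location recorded as holding an oop, dereferences it, and checks that the value points into the heap (or is NULL). A minimal paraphrase of that loop in OopMapSet::all_do() (share/vm/compiler/oopMap.cpp), abbreviated rather than quoted exactly:

      // Abbreviated paraphrase of the oop-scanning loop in OopMapSet::all_do();
      // the surrounding handling (derived oops, value entries, the state dump
      // printed on failure) is omitted.
      for (OopMapStream oms(map, OopMapValue::oop_value); !oms.is_done(); oms.next()) {
        OopMapValue omv = oms.current();
        // Map the recorded register or stack slot to an address in this frame.
        oop* loc = fr->oopmapreg_to_location(omv.reg(), reg_map);
        if (loc != NULL) {
          // In a debug VM the value found there must be NULL or point into the
          // Java heap; here it points into a freed heap region, so the assert
          // fires and the "Dumping state at failure" output follows.
          assert(Universe::heap()->is_in_or_null(*loc), "found non oop pointer");
          oop_fn->do_oop(loc);
        }
      }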
      I ran the test case 100 times in a loop (with a JVM that does not contain the fix for 6755988) on vm-x4100-01 and got 4 failures. Added instrumentation (to display when we allocate a TLAB that is >= the humongous threshold, along with the resulting heap region; the threshold itself is illustrated in a sketch after the analysis below) shows:

      ========== Run: 29 / Fails: 0 ==========
      Stress time: 60
      Stress iterations factor: 1
      Stress threads factor: 1
      Max memory: 1040187392
      Sleep time: 500
      Iterations: 0
      Number of threads: 4
      Seed: 1269308115803
      Run GC thread: false
      Run mem diag thread: false
      Run forever: false
      Starting Thread[gc.gctests.StringGC.StringGC$StringAdder@1de3f2d,5,main]
      Starting Thread[gc.gctests.StringGC.StringGC$StringAdder@1f9dc36,5,main]
      Starting Thread[gc.gctests.StringGC.StringGC$StringAdder@e86da0,5,main]
      Starting Thread[gc.gctests.StringGC.StringGC$StringAdder@1754ad2,5,main]
      ####### Humongous TLAB [0xd4273a30 , 0xd42f3a30)
      VM option '+UnlockExperimentalVMOptions'
      VM option '+UseG1GC'
      VM option '+UnlockDiagnosticVMOptions'
      VM option '+PrintHeapAtGCExtended'
      VM option '-G1PrintHeapRegions'
      ####### Region:
             Y 106 space 1024K, 95% used [0xd4200000, 0xd42f3a30, 0xd4300000)
      ####### Humongous TLAB [0xc927c680 , 0xc92fc680)
      ####### Region:
             Y 107 space 1024K, 98% used [0xc9200000, 0xc92fc680, 0xc9300000)
      ####### Humongous TLAB [0xc22449f0 , 0xc22c49f0)
      ####### Region:
             Y 108 space 1024K, 76% used [0xc2200000, 0xc22c49f0, 0xc2300000)
      ####### Humongous TLAB [0xbda79940 , 0xbdaf9940)
      ####### Region:
             Y 108 space 1024K, 97% used [0xbda00000, 0xbdaf9940, 0xbdb00000)
      ####### Humongous TLAB [0xd0e3aa40 , 0xd0ebaa40)
      ####### Region:
             Y 109 space 1024K, 97% used [0xd0e00000, 0xd0ef9590, 0xd0f00000)
      ####### Humongous TLAB [0xc0300000 , 0xc0380000)
      ####### Region:
       HS 116 space 1024K, 50% used [0xc0300000, 0xc0380000, 0xc0400000)
      ####### Humongous TLAB [0xe4b00000 , 0xe4b80000)
      ####### Region:
       HS 118 space 1024K, 50% used [0xe4b00000, 0xe4b80000, 0xe4c00000)
      ####### Humongous TLAB [0xef200000 , 0xef280000)
      ####### Region:
       HS 120 space 1024K, 50% used [0xef200000, 0xef280000, 0xef300000)
      # Found non oop pointer. Dumping state at failure
      ------
      OopMap{ebp=Oop [12]=Oop [20]=Oop [40]=Oop [44]=Oop off=660}bci: 1
      Compiled frame (sp=0xb437d890 unextended sp=0xb437d890, fp=0xef27f158, pc=0xf94ca134)
             7 nmethod (2) (method)
             7 nmethod (2) (method)
      Register map
      ebp [0xb437d888] = 0xef27f158
      ------
      ebp=Oop register r
      ebploc = b437d888 *loc = ef27f158

      The output above indicates that we are operating on the compiled frame at sp=0xb437d890. The ebp register contains the address 0xb437d888 which, when dereferenced, is supposed to contain an oop. The value of the oop is 0xef27f158. The added tracing also shows that this oop is actually part of a TLAB that ended up in a region that was tagged as humongous (6755988):

      ####### Humongous TLAB [0xef200000 , 0xef280000)
      ####### Region:
       HS 120 space 1024K, 50% used [0xef200000, 0xef280000, 0xef300000)

      However, if we look at the heap region in the heap dump in the hs_err file:

                F 120 space 1024K, 0% used [0xef200000, 0xef200000, 0xef300000)

      the region has been freed. In fact, the heap dump in the hs_err file does NOT contain any humongous regions at all, even though the tracing above shows that some regions were certainly tagged with the startsHumongous attribute.
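
      To make the threshold concrete: every "Humongous TLAB" in the trace spans exactly 0x80000 bytes (512K), half of a 1024K region, which is why each one lands in a region tagged HS at 50% used. The following standalone sketch is not HotSpot code; the constants are read off the trace, and the >= comparison follows the "TLAB that is >= the humongous threshold" wording above:

      #include <cstdio>
      #include <cstddef>

      // Constants taken from the trace: 1024K heap regions, and the rule that
      // an allocation of at least half a region counts as humongous.
      static const size_t kRegionBytes = 1024 * 1024;
      static const size_t kHumongousThresholdBytes = kRegionBytes / 2;

      static bool is_humongous(size_t byte_size) {
        return byte_size >= kHumongousThresholdBytes;
      }

      int main() {
        // 0xef280000 - 0xef200000 = 0x80000 = 512K: exactly at the threshold,
        // matching "####### Humongous TLAB [0xef200000 , 0xef280000)" above.
        size_t tlab_bytes = 0xef280000UL - 0xef200000UL;
        printf("TLAB of %zu bytes is humongous: %s\n",
               tlab_bytes, is_humongous(tlab_bytes) ? "yes" : "no");
        return 0;
      }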

      The stack trace is the same as that in the original report:

      Stack: [0xb677f000,0xb67ff000], sp=0xb67fe000, free space=1fcb67ff000k
      Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code)
      V [libjvm.so+0x176baae];; __1cHVMErrorGreport6MpnMoutputStream__v_+0x5be
      V [libjvm.so+0x176cc36];; __1cHVMErrorOreport_and_die6M_v_+0x586
      V [libjvm.so+0x7f14b1];; __1cYreport_assertion_failure6Fpkci1_v_+0x61
      V [libjvm.so+0x132d1c0];; __1cJOopMapSetGall_do6FpknFframe_pknLRegisterMap_pnKOopClosure_pFpnDoop_9B_v8_v_+0x790
      V [libjvm.so+0x132ca25];; __1cJOopMapSetHoops_do6FpknFframe_pknLRegisterMap_pnKOopClosure__v_+0x31
      V [libjvm.so+0x8c6bb4];; __1cFframeRoops_code_blob_do6MpnKOopClosure_pnPCodeBlobClosure_pknLRegisterMap__v_+0x34
      V [libjvm.so+0x8c8018];; __1cFframeQoops_do_internal6MpnKOopClosure_pnPCodeBlobClosure_pnLRegisterMap_b_v_+0x8c
      V [libjvm.so+0x167f2b8];; __1cKJavaThreadHoops_do6MpnKOopClosure_pnPCodeBlobClosure__v_+0x290
      V [libjvm.so+0x16871b3];; __1cHThreadsZpossibly_parallel_oops_do6FpnKOopClosure_pnPCodeBlobClosure__v_+0x53
      V [libjvm.so+0x14aeb62];; __1cKSharedHeapUprocess_strong_roots6Mbbn0AOScanningOption_pnKOopClosure_pnPCodeBlobClosure_pnQOopsInGenClosure__v_+0xc2
      V [libjvm.so+0x913fd2];; __1cLG1MarkSweepRmark_sweep_phase36F_v_+0x96
      V [libjvm.so+0x913702];; __1cLG1MarkSweepTinvoke_at_safepoint6FpnSReferenceProcessor_b_v_+0xc6
      V [libjvm.so+0x8d6fd9];; __1cPG1CollectedHeapNdo_collection6MbbI_v_+0x519
      V [libjvm.so+0x8d7e69];; __1cPG1CollectedHeapZsatisfy_failed_allocation6MI_pnIHeapWord__+0x4d
      V [libjvm.so+0x17a537a];; __1cZVM_G1CollectForAllocationEdoit6M_v_+0x3a
      V [libjvm.so+0x17a310d];; __1cMVM_OperationIevaluate6M_v_+0xe1
      V [libjvm.so+0x17a1907];; __1cIVMThreadSevaluate_operation6MpnMVM_Operation__v_+0xeb
      V [libjvm.so+0x17a1eb8];; __1cIVMThreadEloop6M_v_+0x410
      V [libjvm.so+0x17a155e];; __1cIVMThreadDrun6M_v_+0xb2
      V [libjvm.so+0x133e3d7];; java_start+0x117

      And the GC is currently processing a thread with the following stack trace:

      JavaThread 0x09332c00 (nid = 21) was being processed
      Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
      v ~RuntimeStub::_new_array_Java
      J java.lang.StringBuilder.append(Ljava/lang/String;)Ljava/lang/StringBuilder;
      J gc.gctests.StringGC.StringGC$StringAdder.run()V
      J nsk.share.runner.ThreadsRunner$ManagedThread.run()V
      v ~StubRoutines::call_stub
      VM_Operation (0xb54fc330): G1CollectForAllocation, mode: safepoint, requested by thread 0x092e7c00

      The next failure in the run has a similar pattern:

      ####### Humongous TLAB [0xc3900000 , 0xc3980000)
      ####### Region:
       HS 117 space 1024K, 50% used [0xc3900000, 0xc3980000, 0xc3a00000)
      # Found non oop pointer. Dumping state at failure
      ------
      OopMap{ebp=Oop [12]=Oop [20]=Oop [40]=Oop [44]=Oop off=660}bci: 1
      Compiled frame (sp=0xb437da90 unextended sp=0xb437da90, fp=0xf51ed9a0, pc=0xf94ca1b4)
             7 nmethod (2) (method)
             7 nmethod (2) (method)
      Register map
      ebp [0xb437da88] = 0xf51ed9a0
      ------
      [12]=Oop register r
      [12]loc = b437da9c *loc = c397f1c8

      JavaThread 0x0932a800 (nid = 21) was being processed
      Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
      v ~RuntimeStub::_new_array_Java
      J java.lang.StringBuilder.append(Ljava/lang/String;)Ljava/lang/StringBuilder;
      J gc.gctests.StringGC.StringGC$StringAdder.run()V
      J nsk.share.runner.ThreadsRunner$ManagedThread.run()V
      v ~StubRoutines::call_stub
      VM_Operation (0xb42db3f0): G1CollectForAllocation, mode: safepoint, requested by thread 0x0932d000

      and the region containing 0xc397f1c8

                F 117 space 1024K, 0% used [0xc3900000, 0xc3900000, 0xc3a00000)

      is free.
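
      Putting the pieces together, this is consistent with 6755988: a TLAB request at or above the humongous threshold goes down the humongous allocation path, so its region is tagged startsHumongous even though it actually holds many ordinary objects rather than one large one. At full GC time G1 reclaims a humongous region based only on the single object assumed to start at its bottom. A paraphrased sketch of that assumed mechanism (the names here are illustrative, not quoted from the sources):

      // Illustrative sketch (assumed mechanism, hypothetical names): during a
      // full GC, a startsHumongous region is treated as containing exactly one
      // object at bottom(), and the whole region is reclaimed if that object
      // is unmarked.
      void free_humongous_region_if_dead(HeapRegion* hr) {
        assert(hr->startsHumongous(), "precondition");
        oop first = oop(hr->bottom());
        if (!first->is_gc_marked()) {
          // Correct for a genuine humongous object, but fatal for a TLAB that
          // was mistakenly placed here: live objects elsewhere in the region
          // (e.g. 0xef27f158 and 0xc397f1c8 above) are freed with it, leaving
          // compiled frames holding pointers into a now-free region.
          free_region(hr);
        }
      }

      That would match what the hs_err dumps show: by the time the assert fires, the regions that held the oversized TLABs are "F ... 0% used".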

            Assignee: Unassigned
            Reporter: Leonid Mesnik (lmesnik)