
JDK-4710319 : HashMap.resize(int) aborts when it shouldn't


    • Type: Enhancement
    • Resolution: Fixed
    • Priority: P4
    • Fix Version/s: 1.4.2
    • Affects Version/s: 1.4.0, 1.4.1
    • Component/s: core-libs
    • Resolved In Build: mantis
    • CPU: generic, x86
    • OS: solaris_8, windows_nt
    • Verification: Verified



      Name: gm110360 Date: 07/01/2002


      FULL PRODUCT VERSION :
      java version "1.4.0"
      Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-b92)
      Java HotSpot(TM) Client VM (build 1.4.0-b92, mixed mode)
      ------------------------------------------
      "Issue" also present in 1.4.1 beta sources (have not yet installed the binaries)

      FULL OPERATING SYSTEM VERSION : ALL (source code issue)


      A DESCRIPTION OF THE PROBLEM :
      This "issue" does not cause malfunctions or problems but
      degrades performance (or rather: renders performance
      improvement measures non-functional)

      So here we go:
      In "HashMap.resize(int newCapacity)", we find the following
      code:
      int oldCapacity = oldTable.length;

      // check if needed
      if (size < threshold || oldCapacity > newCapacity)
          return;

      The "size < threshold" is the problem: This renders the
      "newCapacity" parameter useless! The method returns when
      there's "no need" to expand yet, so the capacity will always
      either remain unchanged or double.
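
      As a sketch only (not the actual 1.4.2 fix, whose final form
      may differ), a guard that honors an explicit capacity request
      could drop the load-factor test and compare capacities alone;
      the field names below follow the 1.4 sources:

      void resize(int newCapacity) {
          int oldCapacity = table.length;
          // Bail out only when the table is already at least as
          // large as requested; the internal callers in put() only
          // invoke resize() once size >= threshold anyway.
          if (oldCapacity >= newCapacity)
              return;
          // ...allocate the new table and transfer the entries...
      }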

      Let's take a look at HashMap.putAll():

      public void putAll(Map t) {
          // Expand enough to hold t's elements without resizing.
          int n = t.size();
          if (n == 0) return;
          if (n >= threshold) {
              ... calculations ...
              resize(capacity);
          }

      The resize(capacity) statement will never have any effect :-(
      I know this is "small stuff", but considering the cost of
      repeated, useless re-hashing, it is still worth adjusting.
      After all, somebody put some thought into "putAll"...

      Speaking of which: I think the check should be
      "if (n + size > threshold)" (see the sketch below). OK,
      IdentityHashMap says "// conservatively pre-expand", but still:
      we should not expect a 100% overlap between old and new
      entries, and in some cases that is not even possible. Maybe
      something in between would do. The low cost of a few int
      operations compared to re-hashing makes this seem worthwhile.
      On the other hand, the whole check only makes a real difference
      when there are far more new entries than old ones
      (newNeededCapacity > 4*currentCapacity). Still: re-hashing is
      cheaper while the map holds fewer entries! Well, whatever: I
      just wanted to point this out because it looked strange to
      me...
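
      A sketch of the adjusted check, assuming the 1.4 field names
      (size, threshold, loadFactor, table); the sizing arithmetic is
      paraphrased from the sources rather than quoted:

      public void putAll(Map t) {
          int n = t.size();
          if (n == 0)
              return;
          // Account for the entries already present instead of
          // assuming they completely overlap the incoming ones.
          if (n + size > threshold) {
              int needed = (int) ((n + size) / loadFactor + 1);
              int capacity = table.length;
              while (capacity < needed)
                  capacity <<= 1;
              resize(capacity);  // only helps if resize() honors newCapacity
          }
          // ...the individual puts follow as before...
      }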


      P.S.: Exactly the same story in WeakHashMap


      REPRODUCIBILITY :
      This bug can always be reproduced.
      (Review ID: 158763)
      ======================================================================

            Assignee: jjb Josh Bloch
            Reporter: gmanwanisunw Girish Manwani (Inactive)