• Type: Sub-task
• Resolution: Future Project
• Priority: P3
• Component: core-libs

      zlib still seems to be slow, even after fixing this subtask's two sibling subtasks.
      Using the following flags makes it warm up a bit:
        -XX:MaxNodeLimit=10000000 -XX:-ClipInlining -XX:-DontCompileHugeMethods
      Throughput goes from a stable 1 op/min to 3-4 ops/min with these flags. Most of the time seems to be spent in the functions a1() and a8().
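
      For reference, here is roughly how the benchmark can be run with these flags through jjs; the run-zlib.js driver name is just a placeholder, and the -J prefix only forwards the options to the underlying JVM:
        # run-zlib.js is a placeholder for whatever script drives the zlib benchmark
        jjs -J-XX:MaxNodeLimit=10000000 -J-XX:-ClipInlining -J-XX:-DontCompileHugeMethods run-zlib.js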

      a1 is split. I tried to eliminate its splitting, but it's really _huge_. Using -Dnashorn.compiler.splitter.threshold=161719 (a number found through binary search; it could still be somewhat larger, but not as large as 162500), it generates a 48k bytecode method for one of its splits. It simply can't fit under the 64k method size limit in its current form. It also contains 473(!) local variables, and since the method is split, they all end up in scope. As an asm.js method, it starts out by initializing all of them to zero whenever it is invoked.
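
      For completeness, the same run with the splitter threshold raised looks roughly like this (run-zlib.js is again a placeholder; nashorn.compiler.splitter.threshold is an ordinary system property, so it goes through jjs as -J-D...):
        # run-zlib.js is a placeholder driver script
        jjs -J-Dnashorn.compiler.splitter.threshold=161719 \
            -J-XX:MaxNodeLimit=10000000 -J-XX:-ClipInlining -J-XX:-DontCompileHugeMethods \
            run-zlib.js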

      It's not entirely clear what the current bottleneck is. A JFR recording (attached to the parent issue) seems to imply that there's a bunch of call sites within it (see the screenshot attached to the parent issue) that, when expanded further, show calls to LambdaForm$MH.linkToCallSite, although there doesn't appear to be any actual linking going on all the time.
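
      For anyone trying to reproduce the recording, something along these lines should produce a comparable JFR file (the run-zlib.js driver is a placeholder, and on 8u the commercial-features unlock is needed before JFR can be enabled):
        # run-zlib.js is a placeholder driver script; adjust duration/filename as needed
        jjs -J-XX:+UnlockCommercialFeatures -J-XX:+FlightRecorder \
            -J-XX:StartFlightRecording=duration=120s,filename=zlib.jfr \
            -J-XX:MaxNodeLimit=10000000 -J-XX:-ClipInlining -J-XX:-DontCompileHugeMethods \
            run-zlib.js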

      The parent issue also has a prettified version of zlib-data.js attached, which was used as the basis for the JFR recording. It is easier to debug with the prettified version.

            Assignee: Attila Szegedi
            Reporter: Attila Szegedi