- Type: Bug
- Resolution: Fixed
- Priority: P3
- Fix Version: 9
- Resolved In Build: b151
- OS: generic
- CPU: generic
Issue | Fix Version | Assignee | Priority | Status | Resolution | Resolved In Build |
---|---|---|---|---|---|---|
JDK-8183887 | 8u161 | Hannes Wallnoefer | P3 | Resolved | Fixed | b01 |
JDK-8172667 | 8u152 | Hannes Wallnoefer | P3 | Resolved | Fixed | b01 |
JDK-8192730 | emb-8u161 | Hannes Wallnoefer | P3 | Resolved | Fixed | b01 |
A report on nashorn-dev shows that our current way of populating the underlying dense array when setting elements in a SparseArrayData instance can lead to huge performance problems:
http://mail.openjdk.java.net/pipermail/nashorn-dev/2016-December/006713.html
When running the code with the data provided later in the thread, the following line causes the function's execution speed to drop by more than 100x:
response[summoner.id] = summoner;
The reason is that summoner.id is a numeric id that is sometimes smaller than the current max length for dense arrays (1024 x 1024), causing allocation of large dense arrays when the index is set.
The solution seems to be to never grow the underlying dense array in a sparse array data, even if the index is considered suitable for dense arrays. An alternative would be to grow the dense array only when the growth is sequential, i.e., when setting the element at index underlying.length(). However, that leads to complications because we would need to check the sparse map for existing elements at that index, so the first approach is the better and simpler solution.
An initial implementation of the first approach shows a speedup of more than 100x for the test script, from ~130,000 ms for 1000 concurrent iterations to under 1,000 ms.
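The pathological access pattern can be sketched as a small script (the ids below are hypothetical; the original thread uses real summoner data). One store past the 1024 * 1024 dense limit switches the array's backing store to SparseArrayData; before the fix, subsequent stores at dense-eligible indices could each grow the underlying dense array up to that index:

```javascript
// Hypothetical repro sketch of the pattern from the report.
var DENSE_LIMIT = 1024 * 1024;  // Nashorn's maximum dense array length

var response = [];
// An index beyond the dense limit makes the backing store sparse.
response[DENSE_LIMIT + 1] = { id: DENSE_LIMIT + 1 };

// These indices are below the limit, i.e. "dense-eligible". Before the
// fix, a store near the limit could allocate a ~1M-element dense array
// inside the sparse data; after the fix, such stores go to the sparse
// map and the dense part never grows.
var ids = [3, 500000, 1048570];
for (var i = 0; i < ids.length; i++) {
  var summoner = { id: ids[i] };
  response[summoner.id] = summoner;  // the line from the report
}
```

Note that the memory and speed behavior described here is specific to Nashorn's array data implementation; other engines handle the same script differently.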
- backported by
  - JDK-8172667 SparseArrayData should not grow its underlying dense array data (Resolved)
  - JDK-8183887 SparseArrayData should not grow its underlying dense array data (Resolved)
  - JDK-8192730 SparseArrayData should not grow its underlying dense array data (Resolved)