Enhancement · Resolution: Unresolved · P4 · 23
It would be desirable if Unsafe::setMemory enjoyed the same level of optimization (e.g. redundant store elimination) as Arrays::copyOf.
This is particularly important in the context of allocators in the FFM API. Segment allocators provided by the JDK perform memory zeroing before returning the allocated segment back to clients, for safety reasons. However, the FFM API allows for implementors that want to provide non-zeroing allocation behavior (e.g. for performance reasons).
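To make the non-zeroing case concrete, here is a minimal sketch of such an allocator: a hypothetical `BumpAllocator` (the name and implementation are illustrative, not part of the FFM API) that hands out slices of one pre-allocated block without clearing them. After a `reset()`, recycled slices can contain stale data, so clients cannot assume the returned memory is zeroed.

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.SegmentAllocator;
import java.lang.foreign.ValueLayout;

// Hypothetical non-zeroing allocator: bump-allocates slices out of one
// large block. No per-allocation zeroing is performed, so after reset()
// a returned slice may still contain whatever was written there before.
class BumpAllocator implements SegmentAllocator {
    private final MemorySegment block;
    private long offset = 0;

    BumpAllocator(Arena arena, long capacity) {
        this.block = arena.allocate(capacity); // zeroed once, up front
    }

    @Override
    public MemorySegment allocate(long byteSize, long byteAlignment) {
        // Align relative to the block base (a sketch: assumes the base
        // itself is sufficiently aligned for the requested alignment).
        long start = ((offset + byteAlignment - 1) / byteAlignment) * byteAlignment;
        MemorySegment slice = block.asSlice(start, byteSize); // no zeroing here
        offset = start + byteSize;
        return slice;
    }

    void reset() { offset = 0; } // reuse the block; old contents remain
}
```

A library receiving a `SegmentAllocator` parameter cannot tell whether it got a JDK zeroing allocator or something like the above, which is exactly the friction described next.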
This creates friction: a library might want to allow clients to provide an allocator, so that clients can override allocation and deallocation policy. But at the same time the library might rely on the fact that memory is zeroed after it is allocated. If that's the case, the only way a program can ensure that a freshly allocated segment is correctly zeroed is to zero it explicitly, like so:
```
void allocateAndZero(SegmentAllocator allocator) {
    MemorySegment segment = allocator.allocate(100);
    segment.fill((byte) 0);
    // ...
}
```
But doing so can result in double zeroing if the provided allocator is already a zeroing allocator. Fixing this at the API level is not an option: ideally, the primitive allocation method would NOT zero memory, but that was deemed too unsafe for a Java SE API. Given these constraints, it would be helpful if patterns like this, where redundant zeroing occurs, could be detected and eliminated as an optimization step (as happens for redundant array copies).
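The redundant-store pattern in question can be reproduced with JDK allocators alone. In the sketch below (a plain confined `Arena`, which already returns zeroed memory), the explicit `fill((byte) 0)` is exactly the kind of store that the JIT would ideally recognize as redundant and eliminate:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;

public class RedundantZeroing {
    public static void main(String[] args) {
        try (Arena arena = Arena.ofConfined()) {
            // Arena.allocate already returns zeroed memory...
            MemorySegment segment = arena.allocate(100);
            // ...so this defensive fill re-writes 100 zero bytes.
            // This is the redundant store the issue asks the JIT to
            // eliminate, analogous to redundant array-copy elimination.
            segment.fill((byte) 0);
            assert segment.get(ValueLayout.JAVA_BYTE, 0) == 0;
        }
    }
}
```

The program is semantically correct either way; the cost of the second zeroing is pure overhead, which is why this is filed as a compiler optimization rather than an API change.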
- relates to:
  - JDK-8351140 RISC-V: Intrinsify Unsafe::setMemory (Open)
  - JDK-8329331 Intrinsify Unsafe::setMemory (Resolved)