Type: Enhancement
Resolution: Unresolved
Priority: P4
FreeListAllocator::release adds the released node to an internal pending list, and then calls try_transfer_pending, which conditionally transfers nodes from the pending list to the free list. That conditional is based on the number of entries in the pending list, in order to batch the transfers and reduce the number of synchronizations.
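The release/transfer batching described above can be sketched as follows. This is a simplified, single-threaded model for illustration only: the real FreeListAllocator synchronizes the transfer, and the class shape and the threshold value here are assumptions.

```cpp
#include <cassert>
#include <cstddef>

struct Node {
  Node* next = nullptr;
};

// Simplified sketch of the batching behavior; not the real HotSpot class.
class FreeListAllocator {
  static const size_t kTransferThreshold = 10;  // assumed batch size
  Node* _free_list = nullptr;
  Node* _pending_list = nullptr;
  size_t _pending_count = 0;

public:
  // release() pushes the node onto the pending list, then conditionally
  // transfers the pending nodes to the free list.
  void release(Node* node) {
    node->next = _pending_list;
    _pending_list = node;
    ++_pending_count;
    try_transfer_pending();
  }

  // Transfer only once enough nodes have accumulated, to batch the
  // (in the real allocator, synchronized) transfers.
  void try_transfer_pending() {
    if (_pending_count < kTransferThreshold) return;
    // Splice the whole pending list onto the free list at once.
    Node* tail = _pending_list;
    while (tail->next != nullptr) tail = tail->next;
    tail->next = _free_list;
    _free_list = _pending_list;
    _pending_list = nullptr;
    _pending_count = 0;
  }

  size_t pending_count() const { return _pending_count; }
  size_t free_count() const {
    size_t n = 0;
    for (Node* p = _free_list; p != nullptr; p = p->next) ++n;
    return n;
  }
};
```

With a threshold of 10, releasing 25 nodes one at a time produces two separate transfers plus 5 leftover pending nodes, which is the repeated-small-batch pattern the description complains about.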
However, some clients might be releasing an unknown but potentially fairly large number of nodes at once (typically on some phase transition). This can lead to repeated smaller batch transfers and synchronizations within the series of release operations. In some of those use cases the client even knows that no synchronization is needed at all, because there won't be any concurrent allocations.
It would be nice if such a client could have more control over the batching of those transfers.
One idea is to add a "more coming" parameter to release(), defaulting to false, to indicate that more release operations are coming in short order and transfers should be deferred for now. The release operation would skip the transfer attempt when that argument is true.
To go with that, add a "flush" operation to indicate that such a batch of release operations is complete and a transfer should be considered. It could take an argument (or there could be two flushing functions) indicating whether the usual batching should apply or the transfer attempt should be unconditional (i.e. don't suppress the transfer when the size of the pending list is below the configured batch threshold).
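A sketch of that proposed pair, a "more coming" flag on release() plus a flush() whose argument selects batched versus unconditional transfer. The names, default values, and threshold are assumptions, not an actual HotSpot API, and synchronization is omitted.

```cpp
#include <cassert>
#include <cstddef>

struct Node { Node* next = nullptr; };

// Sketch of the proposed batching-control API; simplified and single-threaded.
class FreeListAllocator {
  static const size_t kTransferThreshold = 10;  // assumed batch size
  Node* _free_list = nullptr;
  Node* _pending_list = nullptr;
  size_t _pending_count = 0;

  // Splice the whole pending list onto the free list.
  void transfer() {
    if (_pending_list == nullptr) return;
    Node* tail = _pending_list;
    while (tail->next != nullptr) tail = tail->next;
    tail->next = _free_list;
    _free_list = _pending_list;
    _pending_list = nullptr;
    _pending_count = 0;
  }

public:
  // more_coming == true: skip the transfer attempt entirely, because the
  // caller promises more releases are imminent.
  void release(Node* node, bool more_coming = false) {
    node->next = _pending_list;
    _pending_list = node;
    ++_pending_count;
    if (!more_coming && _pending_count >= kTransferThreshold) {
      transfer();
    }
  }

  // unconditional == false: transfer only if the batch threshold is met.
  // unconditional == true: transfer whatever is pending right now.
  void flush(bool unconditional = false) {
    if (unconditional || _pending_count >= kTransferThreshold) {
      transfer();
    }
  }

  size_t pending_count() const { return _pending_count; }
};
```

A client doing a phase-transition cleanup would then call release(n, /*more_coming*/ true) in a loop and finish with a single flush(true), paying for at most one transfer instead of one per threshold-sized chunk.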
A different idea is to provide a release operation that takes a list of elements as a tuple of head, tail, and length.
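The alternative could look like the following: a bulk release that accepts a pre-linked chain as (head, tail, length) and splices it in with one operation. Again a hypothetical sketch with assumed names, omitting the synchronization the real allocator would need.

```cpp
#include <cassert>
#include <cstddef>

struct Node { Node* next = nullptr; };

// Sketch of a bulk-release API; simplified and single-threaded.
class FreeListAllocator {
  Node* _pending_list = nullptr;
  size_t _pending_count = 0;

public:
  // head..tail must be a linked chain of exactly `length` nodes.
  void release(Node* head, Node* tail, size_t length) {
    // One splice for the whole batch, so at most one transfer attempt
    // (which would follow here) rather than one attempt per node.
    tail->next = _pending_list;
    _pending_list = head;
    _pending_count += length;
  }

  size_t pending_count() const { return _pending_count; }
};
```

The client builds the chain itself while tearing down its data structure, so the allocator sees the entire batch in a single call.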