Enhancement
Resolution: Unresolved
P4
None
Currently, when using HugeTLBFS large pages (via -XX:+UseLargePages), some collectors require an up-front commit of all the memory.
This is an issue when one does not want to commit the whole heap up front and also does not want to use THP.
The reason is that, at least historically, when committing parts of the memory failed, the reservation would be lost as well.
With kernel 5.14+, the madvise flag MADV_POPULATE_WRITE can be used to commit large pages without losing the reservation: use mprotect and madvise to fault in the memory, and if that does not work (e.g. because the system is out of large pages), madvise simply fails with EFAULT without losing the reservation.
In that case one could fall back to committing small pages.
(Brought up by [~stefank] in some internal discussion)
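A minimal C sketch of the idea, assuming Linux 5.14+ and a 2 MiB default huge page size; the names reserve_huge and commit_huge are illustrative only and not HotSpot code. The point is that a failed MADV_POPULATE_WRITE leaves the HugeTLB reservation intact, so the caller can still fall back to small pages:
{code:c}
#define _GNU_SOURCE
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_POPULATE_WRITE
#define MADV_POPULATE_WRITE 23   /* available since Linux 5.14 */
#endif

/* Reserve address space backed by HugeTLB pages without committing it. */
static void* reserve_huge(size_t bytes) {
  void* addr = mmap(NULL, bytes, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | MAP_NORESERVE,
                    -1, 0);
  return addr == MAP_FAILED ? NULL : addr;
}

/* Try to commit (fault in) part of the reservation with large pages.
 * Returns 0 on success. On failure the mapping, i.e. the reservation,
 * is still intact, so the caller can fall back to small pages. */
static int commit_huge(void* addr, size_t bytes) {
  if (mprotect(addr, bytes, PROT_READ | PROT_WRITE) != 0) {
    return -1;
  }
  if (madvise(addr, bytes, MADV_POPULATE_WRITE) != 0) {
    /* Out of large pages: madvise fails (EFAULT) instead of the process
     * taking a SIGBUS on first touch, and the reservation survives. */
    fprintf(stderr, "populate failed: %s\n", strerror(errno));
    mprotect(addr, bytes, PROT_NONE);   /* roll back the protection change */
    return -1;
  }
  return 0;
}

int main(void) {
  size_t bytes = 2 * 1024 * 1024;      /* one huge page, assuming 2 MiB default size */
  void* heap = reserve_huge(bytes);
  if (heap == NULL) {
    perror("mmap");
    return 1;
  }
  if (commit_huge(heap, bytes) != 0) {
    /* Fall back here, e.g. commit the range with small pages instead. */
  }
  munmap(heap, bytes);
  return 0;
}
{code}
The mprotect step is needed because MADV_POPULATE_WRITE requires the range to be writable; on failure the protection is simply reverted and the range can then be committed with small pages instead.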