The jdk.ThreadDump event is currently written when a chunk begins and when it ends (everyChunk), but when it is written at the beginning, it may trigger another rotation within one second. This can cause other relevant data to be flushed out very quickly; with the default max size of 250 MB, data may be retained for only about 15 seconds.
Reproducer:

$ java Reproducer 1400 300 20

import java.util.concurrent.Semaphore;
import jdk.jfr.Configuration;
import jdk.jfr.Recording;

public class Reproducer {
    public static void main(String[] args) throws Exception {
        int threadCount = Integer.parseInt(args[0]);
        int stackDepth = Integer.parseInt(args[1]);
        int sleepTime = Integer.parseInt(args[2]);
        Semaphore semaphore = new Semaphore(0);
        // Spawn many threads with deep stacks to make each thread dump large.
        for (int i = 0; i < threadCount; i++) {
            Thread t = new Thread(() -> stack(stackDepth, semaphore));
            t.setDaemon(true);
            t.start();
        }
        // Wait until all threads have built their stacks and parked.
        semaphore.acquire(threadCount);
        Configuration c = Configuration.getConfiguration("default");
        try (Recording r = new Recording(c)) {
            r.start();
            Thread.sleep(sleepTime * 1000);
        }
    }

    static void stack(int depth, Semaphore semaphore) {
        if (depth > 0) {
            stack(depth - 1, semaphore);
        }
        if (depth == 0) {
            semaphore.release();
            try {
                // Park forever at the bottom of the deep stack.
                Thread.sleep(Long.MAX_VALUE);
            } catch (InterruptedException ignored) {}
        }
    }
}
Short-term, we could change the implementation so that jdk.ThreadDump is only emitted when a recording starts and a chunk ends. Such a change may be suitable for backporting.
Longer-term, we might want to address this more generically, so that the problem cannot occur with other events either, including user-defined ones. Three alternatives:
1) Redefine "everyChunk" so that it only emits when a recording starts and when a chunk ends.
2) Create a new keyword, e.g. "rotation", with the same semantics as option 1, but keep "everyChunk" as is.
3) Broaden the setting so it can accept a combination of recording- or chunk-specific settings.
Regardless of approach, the new semantics must support cases where two recordings are in use at the same time with different settings.
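To illustrate how option 2 would look to users, the sketch below enables jdk.ThreadDump with the existing "everyChunk" period and, as a commented-out line, with the hypothetical "rotation" value. Note that "rotation" is a proposed keyword from this issue and does not exist in any JDK today.

```java
import jdk.jfr.Recording;

public class PeriodSketch {
    public static void main(String[] args) throws Exception {
        try (Recording r = new Recording()) {
            // Current semantics: "everyChunk" emits the event at both chunk
            // begin and chunk end, which is what can trigger the rotation loop.
            r.enable("jdk.ThreadDump").with("period", "everyChunk");

            // Hypothetical option 2: a new "rotation" value would emit only
            // when a recording starts and when a chunk ends.
            // r.enable("jdk.ThreadDump").with("period", "rotation");

            r.start();
            r.stop();
        }
        System.out.println("ok");
    }
}
```

With two concurrent recordings, each call to enable(...).with("period", ...) applies per recording, which is why the chosen semantics must compose when the recordings use different settings.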