Type: Bug
Resolution: Won't Fix
Priority: P4
Fix Version: None
Affects Version: 24
CPU: x86_64
OS: windows
ADDITIONAL SYSTEM INFORMATION :
Windows implementations of JDK 11 and later, possibly earlier versions as well. Linux doesn’t seem to be affected.
A DESCRIPTION OF THE PROBLEM :
Time measurements relative to a starting point, taken with nanoTime vs. currentTimeMillis, diverge after at most three minutes.
Since the most likely culprit is Windows and I believe currentTimeMillis to be the stable time function while nanoTime is off, I’d like to know how Java obtains those timestamps.
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Run the attached class and let it run for a few minutes.
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
Under Linux, we see differences stemming from the way the millis are scaled up to nanos. The differences are well behaved and stay within millisecond accuracy.
ACTUAL -
Under Windows, however, they diverge. On all my physical machines the differences become negative, with a relative drift of about 0.001%; the exception is a KVM Windows guest, where they become positive at 0.011% and more.
Since both values are computed against their respective starting points, taken within a few µs of each other, one would expect them to stay within millisecond accuracy, as they do under Linux. Irregularities in the timer cadence shouldn’t matter, as the scheduled task only provides a slow, readable progression of measurements where exact timing is irrelevant.
---------- BEGIN SOURCE ----------
import java.text.*;
import java.util.*;
import java.util.concurrent.*;

public class ClockWatch {

    // Starting points of both clocks, taken within a few microseconds of each other.
    static final long StartNS = System.nanoTime();
    static final long StartMS = System.currentTimeMillis();

    public static void main(String[] args) {
        NumberFormat nf = NumberFormat.getNumberInstance(Locale.US);
        nf.setMaximumFractionDigits(3);
        nf.setGroupingUsed(true);

        DateFormat df = new SimpleDateFormat("HH:mm:ss");
        df.setTimeZone(TimeZone.getTimeZone("GMT"));

        ScheduledExecutorService ses = new ScheduledThreadPoolExecutor(1);

        Runnable compareJob = () -> {
            // Elapsed time since start according to each clock.
            long nowNS = System.nanoTime() - StartNS;
            long nowMS = System.currentTimeMillis() - StartMS;
            // How long the two reads above took, in nanoseconds.
            long tookNS = (System.nanoTime() - StartNS) - nowNS;
            // Difference between the two elapsed times, in nanoseconds.
            long diffNS = 1_000_000 * nowMS - nowNS;
            double driftPercent = 100.0 * diffNS / nowNS;
            System.out.println(df.format(new Date(nowMS))
                    + " currentTimeMillis is "
                    + (diffNS < 0 ? "" : " ")
                    + nf.format(1e-6 * diffNS)
                    + " ms faster than nanoTime, took "
                    + nf.format(tookNS) + " ns, drift "
                    + nf.format(driftPercent) + " % since start");
        };

        ses.scheduleAtFixedRate(compareJob, 1, 3, TimeUnit.SECONDS);
    }
}
---------- END SOURCE ----------
CUSTOMER SUBMITTED WORKAROUND :
open for ideas
The reason this bothers me is that I want to reuse an existing nanoTime timestamp and convert it into an equivalent of currentTimeMillis without calling the latter (see the sketch below). I know, the call itself only takes about 4 ns under Windows on recent hardware.
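For reference, here is a minimal sketch of the kind of conversion meant above, assuming one simply anchors both clocks once at startup and offsets later nanoTime readings. The class and method names are illustrative only, and on Windows the result inherits exactly the drift described in this report.

import java.util.concurrent.TimeUnit;

// Illustrative only: anchor both clocks once, then map nanoTime readings onto
// the currentTimeMillis scale. This inherits any drift between the two clocks.
public class NanoToMillis {

    // Anchor taken once; the two reads should happen as close together as possible.
    private static final long ANCHOR_MS = System.currentTimeMillis();
    private static final long ANCHOR_NS = System.nanoTime();

    // Converts a System.nanoTime() reading into an approximate epoch-millis value.
    static long toEpochMillis(long nanoTimestamp) {
        return ANCHOR_MS + TimeUnit.NANOSECONDS.toMillis(nanoTimestamp - ANCHOR_NS);
    }

    public static void main(String[] args) {
        long nano = System.nanoTime();
        System.out.println("approx. millis: " + toEpochMillis(nano));
        System.out.println("actual  millis: " + System.currentTimeMillis());
    }
}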
FREQUENCY : always