- Bug
- Resolution: Duplicate
- P4
- None
- 8u201
- x86_64
- windows_10
ADDITIONAL SYSTEM INFORMATION :
I've tested it using Java 8 Update 201, Java 8 Update 92 and Java 12 on Windows 10. The issue appears when using each of these versions.
I've also tested it using Java 8 Update 201 inside a Linux VM, where the issue did not appear at all.
A DESCRIPTION OF THE PROBLEM :
While creating an application which records and analyzes audio in real time, I ran into the same issue as described in JDK-8211428. However, in my case the delay didn't start appearing after several days of runtime (which would be totally fine for my use case), but after only several hours.
While experimenting with different audio formats I found that the time until the latency is introduced depends on the frame size and the sample rate of the audio format in use: doubling either the number of channels, the sample size or the sample rate halves the runtime after which the latency starts appearing. This suggests that the latency is introduced after a fixed number of bytes has been read from the TargetDataLine. In fact, someone on Stack Overflow speculated that an internal sample counter might be overflowing. While this is only speculation about the cause, the numbers are consistent with it: the latency starts appearing consistently after somewhere between 6.5 and 7 hours of runtime when using a 16-bit stereo audio format with a sample rate of 44100 Hz. Assuming there is an internal 32-bit integer counting the number of bytes read, it would overflow after 2^32 bytes / (2 bytes [sample size in bytes] * 2 [number of channels] * 44100 Hz [sample rate]) = 24348 seconds ≈ 6.76 hours, which lies inside the interval I've narrowed the start of the latency down to.
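For illustration, here is a small Kotlin sketch of the calculation above. The 32-bit byte counter is only my assumption about the cause (I haven't verified it in the JDK sources); the function simply mirrors the formula.

import javax.sound.sampled.AudioFormat

//Computes after how many seconds a hypothetical 32-bit byte counter would wrap around
//for the given audio format. The counter itself is an assumption about the cause of the
//delay; the formula only restates the calculation from the description above.
fun secondsUntilCounterOverflow(format: AudioFormat): Double {
    val bytesPerSecond = format.frameSize * format.frameRate.toDouble() //frame size [bytes] * frames per second
    return 4294967296.0 / bytesPerSecond                                //2^32 bytes until the counter wraps
}

//For 16-bit signed PCM, stereo, 44100 Hz, little endian (frame size of 4 bytes):
//secondsUntilCounterOverflow(AudioFormat(44100.0f, 16, 2, true, false)) = 24348.0 seconds ≈ 6.76 hours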
It's also worth mentioning that this issue doesn't seem to happen when using Linux. The delay only appeared for me when using Windows.
In the following Stack Overflow question I've described everything I've tested so far (it contains the non-aggregated results of my investigation): https://stackoverflow.com/questions/55482552/sudden-delay-while-recording-audio-over-long-time-periods-inside-the-jvm
This is the speculation about an overflowing counter that I was referring to: https://chat.stackoverflow.com/transcript/message/45929177#45929177
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
As in JDK-8211428, you will need some audio input device, such as a microphone. Run the attached code sample and select your audio input device, an audio format, a sample rate and a buffer size to use. The application will output the RMS of the currently captured audio samples to the console. If you make some loud noise (e.g. clap in front of your microphone), you should see this number rise without any noticeable delay.
Leave the application running for several hours. The time that has to pass until the delay is introduced seems to depend on the audio format used, so it is useful to select an audio format with a high frame size and a high sample rate in order to shorten the wait.
Make some loud noise again and watch the output of the application. On both computers I've tested, the RMS number was delayed by about one to two seconds.
My code sample can also be found in the following GitHub repository: https://github.com/FabianB98/audio-input-delay-test
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
The displayed RMS number should align with the actual noise with a minimal delay even after several hours of runtime.
ACTUAL -
The displayed RMS number is noticeably delayed by somewhere between one and two seconds.
---------- BEGIN SOURCE ----------
//Note: The following code sample is written in Kotlin, but I can confirm that the issue also appears if written in Java. In case code samples written in Kotlin are not desired, I'm also willing to port my code sample to Java.
//Main.kt
package audiotest
import java.nio.ByteBuffer
import java.nio.ByteOrder
import javax.sound.sampled.AudioFormat
import javax.sound.sampled.AudioSystem
import kotlin.math.ceil
import kotlin.math.sqrt
const val SAMPLE_SIZE = 1024
const val AUTOFLUSH = true
const val AUTOFLUSH_INTERVAL = 10 * 60 * 1000L //10 minutes
fun main() {
//Get all audio input devices.
val inputs = AudioInput.getAudioInputs()
//Show a list of all audio inputs.
println("Found the following audio inputs:")
inputs.forEachIndexed { index, audioInput -> println(" - $index: $audioInput") }
println()
//Ask the user to select an input device.
println("Enter the index of the input device to use...")
val inputDeviceIndex = readLine()?.toIntOrNull() ?: 0
val input = inputs[inputDeviceIndex]
println()
//Show all supported audio formats of the selected input device.
val inputFormats = input.getSupportedAudioFormats()
println("Desired input device supports the following audio formats:")
inputFormats.forEachIndexed { index, audioFormat -> println(" - $index: $audioFormat") }
println()
//Let the user select an audio format.
println("Enter the index of the audio format to use...")
val inputFormatIndex = readLine()?.toIntOrNull() ?: 0
var inputFormat = inputFormats[inputFormatIndex]
println()
//Let the user specify a sample rate if the audio format doesn't have one.
if (inputFormat.sampleRate == AudioSystem.NOT_SPECIFIED.toFloat()) {
println("Desired audio format doesn't specify a sample rate. Please enter the sample rate to use...")
val sampleRate = readLine()?.toFloatOrNull() ?: 44100.0f
inputFormat = AudioFormat(
inputFormat.encoding,
sampleRate,
inputFormat.sampleSizeInBits,
inputFormat.channels,
inputFormat.frameSize,
sampleRate,
inputFormat.isBigEndian
)
println()
}
//Let the user specify a buffer size.
println("Enter the desired buffer size...")
val bufferSize = readLine()?.toIntOrNull() ?: 4096
println()
//Start capturing audio data.
println("Starting to capture audio samples as soon as you're ready (Press enter to start). To pause the console " +
"output while the application is capturing audio, press enter again.")
readLine()
input.open(inputFormat, bufferSize)
input.start()
val listener = object : AudioInputListener {
@Volatile
var output: Boolean = true
private val dataAsInts = IntArray(SAMPLE_SIZE * inputFormat.channels)
private val converter = ByteToIntConverter(inputFormat)
private var nextAutoFlush = System.currentTimeMillis() + AUTOFLUSH_INTERVAL
override fun audioFrameCaptured(data: ByteArray) {
if (output) {
//Calculate and print the RMS value of the captured audio samples.
converter.bytesToInts(data, dataAsInts)
val quadraticSum = dataAsInts.sumByDouble { it.toDouble() * it.toDouble() }
val rms = sqrt(quadraticSum / dataAsInts.size)
println("RMS: $rms")
}
if (AUTOFLUSH && nextAutoFlush <= System.currentTimeMillis()) {
println("Auto flushing audio input...")
input.flush()
nextAutoFlush = System.currentTimeMillis() + AUTOFLUSH_INTERVAL
}
}
}
val thread = input.AudioCaptureThread(listener, SAMPLE_SIZE)
//Handle user inputs.
loop@
while (true) {
readLine()
listener.output = false
println("Output paused. Enter \"flush\", \"restart\", \"reopen\", \"stop\", \"quit\" or nothing...")
val line = readLine()?.trim()
when (line) {
"flush" -> {
println("Flushing audio input...")
input.flush()
}
"restart" -> {
println("Restarting audio input...")
input.stop()
input.start()
}
"reopen" -> {
println("Reopening audio input...")
input.stop()
input.close()
input.open(inputFormat, bufferSize)
input.start()
}
"stop", "quit" -> {
println("Stopping...")
break@loop
}
}
println("Output unpaused.")
listener.output = true
}
//Clean up.
thread.interrupt()
thread.join()
input.stop()
input.close()
}
class ByteToIntConverter(format: AudioFormat) {
//Determine the amount of bytes per integer, the byte order and the offset for the conversion.
private val bytesPerInt = ceil(format.sampleSizeInBits.toFloat() / 8.0f).toInt()
private val order = if (format.isBigEndian) ByteOrder.BIG_ENDIAN else ByteOrder.LITTLE_ENDIAN
private val srcPos = if (format.isBigEndian) 4 - bytesPerInt else 0
//Determine the formula to use for getting the most significant (i.e. highest order) byte of a number.
private val getHighestByte = if (format.isBigEndian)
{ intIndex: Int -> intIndex * bytesPerInt }
else
{ intIndex: Int -> (intIndex + 1) * bytesPerInt - 1 }
private val signed = format.encoding == AudioFormat.Encoding.PCM_SIGNED
//Create a buffer for converting exactly four bytes into an integer.
private val localBytes = ByteArray(4)
private val byteBuffer = ByteBuffer.wrap(localBytes).order(order)
fun bytesToInts(data: ByteArray, result: IntArray) {
//Perform the actual conversion.
for (i in 0 until data.size / bytesPerInt) {
//Determine if the highest bit of the current sample is set.
val highestBit = data[getHighestByte(i)].toInt() and 0x80 != 0
//Create exactly four bytes that represent the same number.
val initialValue = if (highestBit && signed) 0xFF.toByte() else 0x00.toByte()
for (j in 0 until 4)
localBytes[j] = initialValue
for (j in 0 until bytesPerInt)
localBytes[srcPos + j] = data[i * bytesPerInt + j]
//Convert the four bytes into an integer.
byteBuffer.position(0)
result[i] = byteBuffer.int
}
}
}
//AudioInput.kt
package audiotest
import javax.sound.sampled.*
import kotlin.math.min
/**
* Represents an audio input device.
*/
class AudioInput(val mixer: Mixer, val line: TargetDataLine) {
/**
* Gets all supported [AudioFormat]s for this audio input device.
*/
fun getSupportedAudioFormats(): Array<AudioFormat> = (line.lineInfo as DataLine.Info).formats
/**
* Opens the audio input device with the given [format] and the given [bufferSize]. [bufferSize] might be negative
* if the line's default buffer size should be used.
*/
fun open(format: AudioFormat, bufferSize: Int = -1) {
val bufferSizeInBytes = bufferSize * format.frameSize
if (!line.isOpen) {
//Open the line.
if (bufferSize > 0)
line.open(format, bufferSizeInBytes)
else
line.open(format)
}
//Check if the buffer size was set correctly.
if (bufferSize > 0 && line.bufferSize != bufferSizeInBytes)
System.err.println(
"Couldn't set the buffer size to the desired $bufferSizeInBytes bytes! Actual buffer " +
"size is ${line.bufferSize} bytes instead..."
)
}
/**
* Closes the audio input device.
*/
fun close() {
line.close()
}
/**
* Starts the audio input device, so it may engage in data I/O.
*/
fun start() {
line.start()
}
/**
* Stops the audio input device.
*/
fun stop() {
line.stop()
}
/**
* Flushes the audio input device's internal buffer.
*/
fun flush() {
line.flush()
}
override fun toString(): String = mixer.mixerInfo.name
protected fun finalize() {
stop()
close()
}
/**
* A thread that continuously captures audio samples in packets of [sampleSize] audio frames until interrupted.
*/
inner class AudioCaptureThread(val listener: AudioInputListener, val sampleSize: Int) : Thread() {
private var running = true
init {
start()
}
override fun run() {
//Create a buffer for the audio samples.
val bytesToRead = sampleSize * line.format.frameSize
val data = ByteArray(bytesToRead)
while (!isInterrupted && running) {
//Read the next set of audio samples.
var bytesRead = 0
while (bytesRead < bytesToRead) {
bytesRead += line.read(data, bytesRead, min(bytesToRead, bytesToRead - bytesRead))
}
//Update the listener.
listener.audioFrameCaptured(data)
}
}
override fun interrupt() {
super.interrupt()
running = false
}
}
companion object {
/**
* Gets a list of all available [AudioInput]s.
*/
fun getAudioInputs(): ArrayList<AudioInput> {
val result = ArrayList<AudioInput>()
//Iterate over all available Mixers.
for (info in AudioSystem.getMixerInfo()) {
val mixer = try {
AudioSystem.getMixer(info)
} catch (e: SecurityException) {
System.err.println("Couldn't access Mixer \"${info.name}\" due to security restrictions!")
e.printStackTrace()
continue
}
//Iterate over all available TargetDataLines of the current Mixer.
for (lineInfo in mixer.targetLineInfo) {
val line = try {
mixer.getLine(lineInfo)
} catch (e: LineUnavailableException) {
System.err.println(
"Couldn't get the TargetDataLine for Mixer \"${info.name}\" as it is " +
"currently unavailable!"
)
e.printStackTrace()
continue
} catch (e: SecurityException) {
System.err.println(
"Couldn't get the TargetDataLine for Mixer \"${info.name}\" due to " +
"security restrictions!"
)
e.printStackTrace()
continue
}
if (line is TargetDataLine)
result.add(AudioInput(mixer, line))
}
}
return result
}
}
}
//AudioInputListener.kt
package audiotest
/**
* A listener that gets called when an [AudioInput] has captured some audio samples.
*/
interface AudioInputListener {
/**
* This method will get called whenever a new set of audio samples has been captured.
*/
fun audioFrameCaptured(data: ByteArray)
}
---------- END SOURCE ----------
CUSTOMER SUBMITTED WORKAROUND :
As described in JDK-8211428, I can confirm that closing and reopening the TargetDataLine gets rid of the delay. However, this is not ideal, since doing so results in some arbitrary amount of time during which I can't capture audio samples. Furthermore, the Javadoc states that some lines can't be reopened at all after being closed, so this workaround might not work on all computers.
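For completeness, here is a minimal Kotlin sketch of how the workaround could be applied, reusing the AudioInput class from the source above. The reopen interval is an arbitrary value I chose for illustration; it only needs to be shorter than the time after which the delay starts appearing for the chosen audio format.

import javax.sound.sampled.AudioFormat

//Illustrative only: reopen interval of 6 hours, chosen arbitrarily.
const val REOPEN_INTERVAL = 6 * 60 * 60 * 1000L

//Periodically closes and reopens the line via the AudioInput wrapper from the sample above.
//Any audio arriving while the line is closed is lost, which is why this is only a workaround.
fun reopenPeriodically(input: AudioInput, format: AudioFormat, bufferSize: Int) {
    var nextReopen = System.currentTimeMillis() + REOPEN_INTERVAL
    while (!Thread.currentThread().isInterrupted) {
        if (System.currentTimeMillis() >= nextReopen) {
            input.stop()
            input.close()
            input.open(format, bufferSize)
            input.start()
            nextReopen = System.currentTimeMillis() + REOPEN_INTERVAL
        }
        try {
            Thread.sleep(1000)
        } catch (e: InterruptedException) {
            break
        }
    }
}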
FREQUENCY : always
duplicates:
- JDK-8211428: Unexplained latency on audio retrieved from microphone (Open)