FULL PRODUCT VERSION :
java version "1.7.0-ea"
Java(TM) SE Runtime Environment (build 1.7.0-ea-b74)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b03, mixed mode)
ADDITIONAL OS VERSION INFORMATION :
Microsoft Windows Vista Business (64-bit) service pack 2
Microsoft Windows [Version 6.0.6002]
EXTRA RELEVANT SYSTEM CONFIGURATION :
Intel(R) Core(TM)2 Quad CPU Q6600, 2.40 GHz, 8 GB RAM
A DESCRIPTION OF THE PROBLEM :
Using the FileChannel.map method to fill a file larger than the amount of RAM may cause the system to "die" due to disk swapping. I already reported this problem in March 2007 (report #935035), but I have now discovered that it is much more serious than I thought.
Consider the following simple test:
package net.algart.arrays.demo.jre;

import java.io.*;
import java.nio.*;
import java.nio.channels.FileChannel;
import java.security.*;
import java.lang.reflect.Method;

public class SimpleMappingNewFileTest {
    static final int BLOCK_SIZE = 256 * 1024 * 1024; // 256 MB

    private static void unsafeUnmap(final MappedByteBuffer mbb) throws PrivilegedActionException {
        AccessController.doPrivileged(new PrivilegedExceptionAction<Object>() {
            public Object run() throws Exception {
                Method getCleanerMethod = mbb.getClass().getMethod("cleaner");
                getCleanerMethod.setAccessible(true);
                Object cleaner = getCleanerMethod.invoke(mbb); // sun.misc.Cleaner instance
                Method cleanMethod = cleaner.getClass().getMethod("clean");
                cleanMethod.invoke(cleaner);
                return null;
            }
        });
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            System.out.println("Usage: " + SimpleMappingNewFileTest.class.getName()
                + " tempFileNameBegin fileSize numberOfTests [-force]");
            return;
        }
        String tempFileNameBegin = args[0];
        long fileLength = Long.parseLong(args[1]);
        int numberOfTests = Integer.parseInt(args[2]);
        boolean doForce = args.length > 3 && args[3].equals("-force");
        long numberOfBlocks = (fileLength + BLOCK_SIZE - 1) / BLOCK_SIZE;
        fileLength = numberOfBlocks * BLOCK_SIZE; // round up to the nearest multiple of BLOCK_SIZE
        ByteBuffer pattern = ByteBuffer.allocateDirect(BLOCK_SIZE);
        for (int j = 0; j < BLOCK_SIZE; j++) {
            pattern.put((byte) j);
        }
        for (int count = 0; count < numberOfTests; count++) {
            File file = new File(tempFileNameBegin + "_" + count);
            if (file.exists()) {
                file.delete();
            }
            RandomAccessFile raf = new RandomAccessFile(file, "rw");
            raf.setLength(fileLength);
            for (long i = 0; i < numberOfBlocks; i++) {
                long pos = i * BLOCK_SIZE;
                MappedByteBuffer mbb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, pos, BLOCK_SIZE);
                pattern.rewind();
                mbb.put(pattern);
                if (doForce) {
                    mbb.force();
                }
                // unsafeUnmap(mbb);
                // System.gc();
                System.out.printf("\r%s %d MB...", file, (pos + BLOCK_SIZE) / 1048576);
            }
            raf.close();
        }
        System.out.println(" done");
    }
}
It creates numberOfTests files and fills them with a sequence of non-zero bytes, with or without a "force()" call after filling each MappedByteBuffer. The problem appears if the total size of all filled files is greater than the amount of RAM in the computer. Namely, the test occupies more and more Windows memory (visible on the "Performance" tab of Windows Task Manager), and when all physical memory is occupied, Windows begins to take memory from other processes (visible in the "Working Set" columns on the "Processes" tab of Task Manager). After several seconds, the system "dies": it becomes inoperable for tens of minutes or even hours. In this situation it is almost impossible to stop the "bad" Java program, and usually the only way to revive the computer is a cold reboot.
On my computer there are 8 GB of physical memory (~6 GB free), and the following calls demonstrate the problem:
java net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
or
java net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
Here "java" means calling 64-bit JVM. These call should create and fill 10 files testfile_0, testfile_1, ..., testfile_9, the size of every file is 1 GB. But after filling first 6-7 files the system "dies" due to catastrophic swapping. Note that force() call does not help: in any case, the Java garbage collector does not unmap mapped byte buffers and the system "dies". This problem occurs both in Java 1.7 (build 1.7.0-ea-b74, Java HotSpot(TM) 64-Bit Server VM build 17.0-b03, mixed mode) and 1.6 (build 1.6.0_16-b01, Java HotSpot(TM) 64-Bit Server VM build 14.2-b01, mixed mode).
In the 32-bit JVM 1.7 there is a similar problem if the "-force" flag is not used. Namely, in Java 1.7 an attempt to create and fill 10 files of 1 GB each without using the force() method (the call listed above) leads to system "death" after the first 6-7 GB. In 32-bit Java 1.6 we get either a "Map failed" error (if the block size is 256 MB: see bug #6776490) or "Cleaner terminated abnormally" (if we reduce the block size to 8 MB: see bugs #6521677, #4938372).
Unfortunately, I don't see a suitable solution to this problem: it effectively blocks development of 64-bit Java applications that need to map more than several gigabytes of memory. Any 64-bit application that keeps mapping large files will lead to a system crash after mapping the first 10-20 gigabytes.
I have found only two solutions, neither of them good. The first is calling the "unsafeUnmap" method listed in my test, which is a "hack" of the current Java API. The second is calling "System.gc()" very frequently, which, however, does not help without also calling the "force()" method.
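For illustration, here is a minimal sketch of how this second variant would look as the inner loop of the test above (only a sketch; it assumes force() and System.gc() are called for every block, and it still depends on unpredictable GC timing):
            for (long i = 0; i < numberOfBlocks; i++) {
                long pos = i * BLOCK_SIZE;
                MappedByteBuffer mbb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, pos, BLOCK_SIZE);
                pattern.rewind();
                mbb.put(pattern);
                mbb.force();  // without force(), frequent System.gc() does not help
                mbb = null;   // drop the only reference so the buffer becomes unreachable
                System.gc();  // encourage the collector to finalize and unmap old buffers
            }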
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Please compile the test listed above and run it (NN is an integer larger than the amount of your RAM in gigabytes, for example "10" for an 8-gigabyte computer):
"C:\Program Files\Java\jdk1.7.0\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 NN
"C:\Program Files\Java\jdk1.7.0\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 NN -force
"C:\Program Files\Java\jdk1.6.0_16\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
"C:\Program Files\Java\jdk1.6.0_16\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
"C:\Program Files (x86)\Java\jdk1.7.0\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
(the first 4 calls are 64-bit, the 5th is 32-bit) lead to the system "dying";
"C:\Program Files (x86)\Java\jdk1.6.0_16\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
leads to "Map failed" exception or to fatal termilation of JVM "Cleaner terminated abnormally", depending on the block size;
"C:\Program Files (x86)\Java\jdk1.7.0\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
"C:\Program Files (x86)\Java\jdk1.6.0_16\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
Only these two 32-bit calls work fine.
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
Stable behaviour. The garbage collector should "know" about the amount of physical RAM and unmap unused MappedByteBuffer instances in time.
ACTUAL -
The computer "dies" with extreme swapping until the program will finish. While this period, it is difficult or even impossibly to stop the test (via Ctrl+C or Task Manager), because OS almost does not react to keyboard and mouse.
REPRODUCIBILITY :
This bug can be reproduced always.
---------- BEGIN SOURCE ----------
package net.algart.arrays.demo.jre;

import java.io.*;
import java.nio.*;
import java.nio.channels.FileChannel;
import java.security.*;
import java.lang.reflect.Method;

public class SimpleMappingNewFileTest {
    static final int BLOCK_SIZE = 256 * 1024 * 1024; // 256 MB

    private static void unsafeUnmap(final MappedByteBuffer mbb) throws PrivilegedActionException {
        AccessController.doPrivileged(new PrivilegedExceptionAction<Object>() {
            public Object run() throws Exception {
                Method getCleanerMethod = mbb.getClass().getMethod("cleaner");
                getCleanerMethod.setAccessible(true);
                Object cleaner = getCleanerMethod.invoke(mbb); // sun.misc.Cleaner instance
                Method cleanMethod = cleaner.getClass().getMethod("clean");
                cleanMethod.invoke(cleaner);
                return null;
            }
        });
    }

    public static void main(String[] args) throws Exception {
        if (args.length < 3) {
            System.out.println("Usage: " + SimpleMappingNewFileTest.class.getName()
                + " tempFileNameBegin fileSize numberOfTests [-force]");
            return;
        }
        String tempFileNameBegin = args[0];
        long fileLength = Long.parseLong(args[1]);
        int numberOfTests = Integer.parseInt(args[2]);
        boolean doForce = args.length > 3 && args[3].equals("-force");
        long numberOfBlocks = (fileLength + BLOCK_SIZE - 1) / BLOCK_SIZE;
        fileLength = numberOfBlocks * BLOCK_SIZE; // round up to the nearest multiple of BLOCK_SIZE
        ByteBuffer pattern = ByteBuffer.allocateDirect(BLOCK_SIZE);
        for (int j = 0; j < BLOCK_SIZE; j++) {
            pattern.put((byte) j);
        }
        for (int count = 0; count < numberOfTests; count++) {
            File file = new File(tempFileNameBegin + "_" + count);
            if (file.exists()) {
                file.delete();
            }
            RandomAccessFile raf = new RandomAccessFile(file, "rw");
            raf.setLength(fileLength);
            for (long i = 0; i < numberOfBlocks; i++) {
                long pos = i * BLOCK_SIZE;
                MappedByteBuffer mbb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, pos, BLOCK_SIZE);
                pattern.rewind();
                mbb.put(pattern);
                if (doForce) {
                    mbb.force();
                }
                // unsafeUnmap(mbb);
                // System.gc();
                System.out.printf("\r%s %d MB...", file, (pos + BLOCK_SIZE) / 1048576);
            }
            raf.close();
        }
        System.out.println(" done");
    }
}
---------- END SOURCE ----------
CUSTOMER SUBMITTED WORKAROUND :
The only stable workaround known to me is the "unsafeUnmap" method.
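For clarity, here is a sketch of how this workaround could be applied in the inner loop of the test above (it relies on reflective access to the non-public sun.misc.Cleaner class, so it may break in future JDK versions):
            for (long i = 0; i < numberOfBlocks; i++) {
                long pos = i * BLOCK_SIZE;
                MappedByteBuffer mbb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, pos, BLOCK_SIZE);
                pattern.rewind();
                mbb.put(pattern);
                mbb.force();       // optional: flush dirty pages to disk before unmapping
                unsafeUnmap(mbb);  // release the mapping immediately instead of waiting for GC
                mbb = null;        // the buffer must not be accessed after unmapping: the JVM may crash
            }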
java version "1.7.0-ea"
Java(TM) SE Runtime Environment (build 1.7.0-ea-b74)
Java HotSpot(TM) 64-Bit Server VM (build 17.0-b03, mixed mode)
ADDITIONAL OS VERSION INFORMATION :
Microsoft Windows Vista Business (64-bit) service pack 2
Microsoft Windows [Version 6.0.6002]
EXTRA RELEVANT SYSTEM CONFIGURATION :
Intel(R) Core(TM)2 Quad CPU Q6600, 2.4O GHz, 8 GB RAM
A DESCRIPTION OF THE PROBLEM :
Using FileChannel.map method for filling a file, larger than amount of RAM, may lead to "dying" the system due to disk swapping. I already reported about this problem in March 2007 (report #935035), but now I've detected that this problem is much more serious that I thought.
Let consider the following simple test:
package net.algart.arrays.demo.jre;
import java.io.*;
import java.nio.*;
import java.nio.channels.FileChannel;
import java.security.*;
import java.lang.reflect.Method;
public class SimpleMappingNewFileTest {
static final int BLOCK_SIZE = 256 * 1024 * 1024; // 256 MB
private static void unsafeUnmap(final MappedByteBuffer mbb) throws PrivilegedActionException {
AccessController.doPrivileged(new PrivilegedExceptionAction<Object>() {
public Object run() throws Exception {
Method getCleanerMethod = mbb.getClass().getMethod("cleaner");
getCleanerMethod.setAccessible(true);
Object cleaner = getCleanerMethod.invoke(mbb); // sun.misc.Cleaner instance
Method cleanMethod = cleaner.getClass().getMethod("clean");
cleanMethod.invoke(cleaner);
return null;
}
});
}
public static void main(String[] args) throws Exception {
if (args.length < 3) {
System.out.println("Usage: " + SimpleMappingNewFileTest.class.getName()
+ " tempFileNameBegin fileSize numberOfTests [-force]");
return;
}
String tempFileNameBegin = args[0];
long fileLength = Long.parseLong(args[1]);
int numberOfTests = Integer.parseInt(args[2]);
boolean doForce = args.length > 3 && args[3].equals("-force");
long numberOfBlocks = (fileLength + BLOCK_SIZE - 1) / BLOCK_SIZE;
fileLength = numberOfBlocks * BLOCK_SIZE; // increasing to nearest number k*BLOCK_SIZE
ByteBuffer pattern = ByteBuffer.allocateDirect(BLOCK_SIZE);
for (int j = 0; j < BLOCK_SIZE; j++) {
pattern.put((byte)j);
}
for (int count = 0; count < numberOfTests; count++) {
File file = new File(tempFileNameBegin + "_" + count);
if (file.exists()) {
file.delete();
}
RandomAccessFile raf = new RandomAccessFile(file, "rw");
raf.setLength(fileLength);
for (long i = 0; i < numberOfBlocks; i++) {
long pos = i * BLOCK_SIZE;
MappedByteBuffer mbb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, pos, BLOCK_SIZE);
pattern.rewind();
mbb.put(pattern);
if (doForce) {
mbb.force();
}
// unsafeUnmap(mbb);
// System.gc();
System.out.printf("\r%s %d MB...", file, (pos + BLOCK_SIZE) / 1048576);
}
raf.close();
}
System.out.println(" done");
}
}
It creates numberOfTests files and fills them by some sequence of non-zero bytes, with or without "force()" call after filling each MappedByteBuffer. The problem appears if the summary size of all filled files is greater than the amount of RAM of the computer. Namely, the test occupies more and more Windows memory (you can see it at "Performance" tab of Windows Task Manager) and, when all physical memory is occupied, Windows begins to take memory from other processes (you can see it at "Processes" tab of the Task Manager, "Working Set" columns). After several seconds, the system "dies": becomes inoperable for tens or minutes or even hours. In this situation, it is almost impossible to stop the "bad" Java program, and usually the only way to revive the computer is the cold reboot.
On my computer there is 8 GB physical memory (~6 GB free), and the following calls allow to demostrate the problem:
java net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
or
java net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
Here "java" means calling 64-bit JVM. These call should create and fill 10 files testfile_0, testfile_1, ..., testfile_9, the size of every file is 1 GB. But after filling first 6-7 files the system "dies" due to catastrophic swapping. Note that force() call does not help: in any case, the Java garbage collector does not unmap mapped byte buffers and the system "dies". This problem occurs both in Java 1.7 (build 1.7.0-ea-b74, Java HotSpot(TM) 64-Bit Server VM build 17.0-b03, mixed mode) and 1.6 (build 1.6.0_16-b01, Java HotSpot(TM) 64-Bit Server VM build 14.2-b01, mixed mode).
In 32-bit JVM 1.7, there is the similar problem, if we don't use "-force" flag. Namely, in Java 1.7 an attempt to create and fill 10 files per 1 GB without using force() method (the call listed above) leads to system "death" after first 6-7 GB. In 32-bit Java 1.6 we have either "Map failure" error (if the block size is 256 MB: see the bug #6776490), or "Cleaner terminated abnormally" (if we reduce the block size to 8 MB: see the bugs #6521677, #4938372).
Unfortunately, I don't see suitable solution of this problem: it actually blocks developing 64-bit Java applications that should map more than several gigabytes of memory. Any 64-bit application, which maps and maps large files, will lead to system crash after mapping first 10-20 gigabytes.
I found only two solutions, both not good. The 1st is calling "unsafeUnmap" method, listed in my test, which is a "hack" of current Java API. The 2nd is very frequent calling "System.gc()", that, however, does not help without calling "force()" method.
STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Please compile the test listed above and run it (NN is an integer larger than the amount of your RAM in gigabytes, for example, "10" for 8-gigabyte computer):
"C:\Program Files\Java\jdk1.7.0\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 NN
"C:\Program Files\Java\jdk1.7.0\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 NN -force
"C:\Program Files\Java\jdk1.6.0_16\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
"C:\Program Files\Java\jdk1.6.0_16\jre\bin\java" net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
"C:\Program Files (x86)\Java\jdk1.7.0\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
(first 4 calls are 64-bit, the 5th is 32-bit) lead to "dying" the system;
"C:\Program Files (x86)\Java\jdk1.6.0_16\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10
leads to "Map failed" exception or to fatal termilation of JVM "Cleaner terminated abnormally", depending on the block size;
"C:\Program Files (x86)\Java\jdk1.7.0\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
"C:\Program Files (x86)\Java\jdk1.6.0_16\jre\bin\java" -Xmx512m net.algart.arrays.demo.jre.SimpleMappingNewFileTest testfile 1000000000 10 -force
only these 32-bit calls work fine.
EXPECTED VERSUS ACTUAL BEHAVIOR :
EXPECTED -
Stable behaviour. The garbage collector should "know" about the amount of physical RAM and unmap unused instances of MappedByteBuffer in time.
ACTUAL -
The computer "dies" with extreme swapping until the program will finish. While this period, it is difficult or even impossibly to stop the test (via Ctrl+C or Task Manager), because OS almost does not react to keyboard and mouse.
REPRODUCIBILITY :
This bug can be reproduced always.
---------- BEGIN SOURCE ----------
package net.algart.arrays.demo.jre;
import java.io.*;
import java.nio.*;
import java.nio.channels.FileChannel;
import java.security.*;
import java.lang.reflect.Method;
public class SimpleMappingNewFileTest {
static final int BLOCK_SIZE = 256 * 1024 * 1024; // 256 MB
private static void unsafeUnmap(final MappedByteBuffer mbb) throws PrivilegedActionException {
AccessController.doPrivileged(new PrivilegedExceptionAction<Object>() {
public Object run() throws Exception {
Method getCleanerMethod = mbb.getClass().getMethod("cleaner");
getCleanerMethod.setAccessible(true);
Object cleaner = getCleanerMethod.invoke(mbb); // sun.misc.Cleaner instance
Method cleanMethod = cleaner.getClass().getMethod("clean");
cleanMethod.invoke(cleaner);
return null;
}
});
}
public static void main(String[] args) throws Exception {
if (args.length < 3) {
System.out.println("Usage: " + SimpleMappingNewFileTest.class.getName()
+ " tempFileNameBegin fileSize numberOfTests [-force]");
return;
}
String tempFileNameBegin = args[0];
long fileLength = Long.parseLong(args[1]);
int numberOfTests = Integer.parseInt(args[2]);
boolean doForce = args.length > 3 && args[3].equals("-force");
long numberOfBlocks = (fileLength + BLOCK_SIZE - 1) / BLOCK_SIZE;
fileLength = numberOfBlocks * BLOCK_SIZE; // increasing to nearest number k*BLOCK_SIZE
ByteBuffer pattern = ByteBuffer.allocateDirect(BLOCK_SIZE);
for (int j = 0; j < BLOCK_SIZE; j++) {
pattern.put((byte)j);
}
for (int count = 0; count < numberOfTests; count++) {
File file = new File(tempFileNameBegin + "_" + count);
if (file.exists()) {
file.delete();
}
RandomAccessFile raf = new RandomAccessFile(file, "rw");
raf.setLength(fileLength);
for (long i = 0; i < numberOfBlocks; i++) {
long pos = i * BLOCK_SIZE;
MappedByteBuffer mbb = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, pos, BLOCK_SIZE);
pattern.rewind();
mbb.put(pattern);
if (doForce) {
mbb.force();
}
// unsafeUnmap(mbb);
// System.gc();
System.out.printf("\r%s %d MB...", file, (pos + BLOCK_SIZE) / 1048576);
}
raf.close();
}
System.out.println(" done");
}
}
---------- END SOURCE ----------
CUSTOMER SUBMITTED WORKAROUND :
The only stable workaround, known to me, is "unsafeUnmap" method.
- duplicates
JDK-4724038 (fs) Add unmap method to MappedByteBuffer (Closed)