Reputation: 53
I am reading a 10 MB log file with a BufferedReader, and the total execution time is 12944 ms. How can I improve the performance and bring this time down? Also, does NIO give better performance than BufferedReader, and how could the same thing be achieved with NIO?
public class BufferedRedeem
{
    public static void main(String[] args)
    {
        BufferedReader br = null;
        long startTime = System.currentTimeMillis();
        try
        {
            String sCurrentLine;
            br = new BufferedReader(new FileReader("C://abc.log"));
            while ((sCurrentLine = br.readLine()) != null)
            {
            }
            long elapsedTime = System.currentTimeMillis() - startTime;
            System.out.println("Total execution time taken in millis: " + elapsedTime);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
        finally
        {
            try
            {
                if (br != null)
                    br.close();
            }
            catch (IOException ex)
            {
                ex.printStackTrace();
            }
        }
    }
}
Upvotes: 1
Views: 2420
Reputation: 533500
Since the OP is keen to see how this could be done using NIO, here is a comparison. As the file is small, it is hard to see the difference, but it can be measured.
import java.io.*;
import java.nio.*;
import java.nio.channels.FileChannel;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;

public static void main(String... args) throws IOException {
    // generate a ~10 MB test file: 100,000 lines of 100 chars each
    PrintWriter pw = new PrintWriter("abc.log");
    for (int i = 0; i < 100 * 1000; i++) {
        pw.println("0123456789012345678901234567890123456789012345678901234567890123456789012345678901234567890123456789");
    }
    pw.close();

    // plain IO: BufferedReader
    long start2 = System.nanoTime();
    int count2 = 0;
    BufferedReader br = new BufferedReader(new FileReader("abc.log"));
    while (br.readLine() != null) count2++;
    br.close();
    long time2 = System.nanoTime() - start2;
    System.out.printf("IO: Took %,d ms to read %,d lines%n", time2 / 1000 / 1000, count2);

    // NIO: read the whole file into a direct buffer and decode as UTF-8
    long start = System.nanoTime();
    FileChannel fc = new FileInputStream("abc.log").getChannel();
    ByteBuffer bb = ByteBuffer.allocateDirect((int) fc.size());
    fc.read(bb);
    fc.close();
    bb.flip();
    CharBuffer cb = ByteBuffer.allocateDirect(bb.remaining() * 2).order(ByteOrder.nativeOrder()).asCharBuffer();
    CharsetDecoder cd = Charset.forName("UTF-8").newDecoder();
    cd.decode(bb, cb, true);
    cb.flip();
    StringBuilder sb = new StringBuilder();
    int count = 0;
    while (cb.remaining() > 0) {
        char ch = cb.get();
        if (isEndOfLine(cb, ch)) {
            // process sb
            count++;
            sb.setLength(0);
        } else {
            sb.append(ch);
        }
    }
    long time = System.nanoTime() - start;
    System.out.printf("NIO as UTF-8: Took %,d ms to read %,d lines%n", time / 1000 / 1000, count);

    // NIO: memory-map the file and treat each byte as an ISO-8859-1 char
    long start3 = System.nanoTime();
    FileChannel fc2 = new FileInputStream("abc.log").getChannel();
    MappedByteBuffer bb2 = fc2.map(FileChannel.MapMode.READ_ONLY, 0, fc2.size());
    // a mapped buffer is already positioned for reading; no flip() needed
    StringBuilder sb3 = new StringBuilder();
    int count3 = 0;
    while (bb2.remaining() > 0) {
        char ch = (char) bb2.get();
        if (isEndOfLine(bb2, ch)) {
            // process sb
            count3++;
            sb3.setLength(0);
        } else {
            sb3.append(ch);
        }
    }
    fc2.close();
    long time3 = System.nanoTime() - start3;
    System.out.printf("NIO as ISO-8859-1: Took %,d ms to read %,d lines%n", time3 / 1000 / 1000, count3);
}

private static boolean isEndOfLine(CharBuffer cb, char ch) {
    if (ch == '\r') {
        // consume a following '\n' if there is one, otherwise push back what we read
        if (cb.remaining() >= 1) {
            if (cb.get() == '\n') return true;
            cb.position(cb.position() - 1);
        }
        return true;
    } else if (ch == '\n') {
        return true;
    }
    return false;
}

private static boolean isEndOfLine(ByteBuffer bb, char ch) {
    if (ch == '\r') {
        if (bb.remaining() >= 1) {
            if (bb.get() == '\n') return true;
            bb.position(bb.position() - 1);
        }
        return true;
    } else if (ch == '\n') {
        return true;
    }
    return false;
}
Each line is 102 bytes long, so the file is ~10 MB. This prints:
IO: Took 112 ms to read 100,000 lines
NIO as UTF-8: Took 207 ms to read 100,000 lines
NIO as ISO-8859-1: Took 87 ms to read 100,000 lines
As I mentioned before, it's unlikely to be worth the extra complexity of using NIO to save 35 ms.
BTW: If you have an HDD and the file is not in memory, only the speed of your drive will matter.
Upvotes: 4
Reputation: 328608
The file size is of 10MB since it is a log file
Then if you have a decent computer, reading the whole file at once should not be an issue (requires Java 7):
public static void main(String[] args) {
    try {
        long start = System.nanoTime();
        List<String> lines = Files.readAllLines(Paths.get("C:/temp/test.log"), Charset.forName("UTF-8"));
        System.out.println("Lines read: " + lines.size());
        System.out.println("Total execution time taken in millis: "
                + ((System.nanoTime() - start) / 1000000));
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}
Note: reading a 6MB file takes 75 ms on my computer with that method.
Upvotes: 0
Reputation: 22514
You have a System.out.println(sCurrentLine);
inside your loop. This is usually VERY inefficient, as it basically flushes the output on every call.
Can you try just putting the lines in an ArrayList instead of printing them, and measure that time? Does it take a similar amount of time that way?
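The suggestion above can be sketched as follows. This is a minimal, self-contained example: it writes a small made-up sample file (the file name and contents are hypothetical, not the OP's log) and then collects the lines into an ArrayList with no println inside the loop.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class Main {
    public static void main(String[] args) throws IOException {
        // write a small sample log so the example is self-contained
        Path log = Files.createTempFile("sample", ".log");
        Files.write(log, "line one\nline two\nline three\n".getBytes("UTF-8"));

        long start = System.nanoTime();
        List<String> lines = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader(log.toFile()))) {
            String line;
            while ((line = br.readLine()) != null) {
                lines.add(line); // collect instead of printing each line
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Read " + lines.size() + " lines in " + elapsedMs + " ms");
    }
}
```

Printing a single summary at the end, rather than one line per iteration, removes the per-call flushing cost from the measured loop.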
Upvotes: 1
Reputation: 4847
Your execution time is mostly due to the System.out.println(sCurrentLine);
. Instead of just printing with sysout, I assume you would want to do some processing or filtering.
If you want to check the speed of BufferedReader, use a counter to count the number of lines read and just print the count.
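A minimal sketch of the counter approach, under the same assumptions as before (a made-up sample file so the snippet is self-contained; the real log path would replace it):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class Main {
    public static void main(String[] args) throws IOException {
        // write a tiny sample log so the example is self-contained
        Path log = Files.createTempFile("sample", ".log");
        Files.write(log, "a\nb\nc\nd\n".getBytes("UTF-8"));

        long start = System.nanoTime();
        int count = 0;
        try (BufferedReader br = new BufferedReader(new FileReader(log.toFile()))) {
            while (br.readLine() != null) {
                count++; // count only; no per-line println inside the loop
            }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Lines: " + count + ", took " + elapsedMs + " ms");
    }
}
```

This isolates the cost of BufferedReader itself from the cost of console output.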
Upvotes: 0