Chief DMG

Reputation: 143

How to quickly search a large file for a String in Java?

I am trying to search a large text file (400MB) for a particular String using the following:

File file = new File("fileName.txt");
try {
    int count = 0;
    Scanner scanner = new Scanner(file);
    while(scanner.hasNextLine()) {
        if(scanner.nextLine().contains("particularString")) {
            count++;
            System.out.println("Number of instances of String: " + count);
        }
    }
} catch (FileNotFoundException e){
    System.out.println(e);
}

This works fine for small files; however, for this particular file and other large ones it takes far too long (>10 mins).

What would be the quickest, most efficient way of doing this?

I have now changed to the following and it completes within seconds -

try {
    int count = 0;
    FileReader fileIn = new FileReader(file);
    BufferedReader reader = new BufferedReader(fileIn);
    String line;
    while ((line = reader.readLine()) != null) {
        if (line.contains("particularString")) {
            count++;
            System.out.println("Number of instances of String " + count);
        }
    }
} catch (IOException e) {
    System.out.println(e);
}

Upvotes: 13

Views: 21407

Answers (3)

mtj

Reputation: 3554

Scanner is simply not useful in this case. Under the hood, it does all kinds of input parsing, checking, caching and whatnot. If your case is simply "iterate over all lines of a file", use something that is based on a simple BufferedReader.

In your particular case, I recommend using Files.lines.

Example:

try (Stream<String> lines = Files.lines(Paths.get("testfile.txt"))) {
    long count = lines
        .filter(s -> s.contains("particularString"))
        .count();
    System.out.println(count);
}

(Note that this particular use of the streaming API probably does not cover what you are actually trying to achieve; unfortunately your question does not indicate what the result of the method should be.)

On my system, Files.lines() or a buffered reader runs in about 15% of the Scanner runtime.

Upvotes: 2

radai

Reputation: 24192

First, figure out how long it takes you to actually read the entire file's contents versus how long it takes to scan them for your pattern.
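One rough way to measure that split (a hedged sketch, not part of the original answer; the file name and search string are the ones from the question) is to time a pass that only reads lines against a pass that reads and searches:

// Hedged sketch: time a read-only pass vs. a read-and-search pass.
// "fileName.txt" and "particularString" are taken from the question.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ReadVsSearchTiming {
    public static void main(String[] args) throws IOException {
        long readStart = System.nanoTime();
        try (BufferedReader reader = new BufferedReader(new FileReader("fileName.txt"))) {
            while (reader.readLine() != null) {
                // read only, no searching
            }
        }
        System.out.println("Read only: " + (System.nanoTime() - readStart) / 1_000_000 + " ms");

        long searchStart = System.nanoTime();
        long count = 0;
        try (BufferedReader reader = new BufferedReader(new FileReader("fileName.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains("particularString")) {
                    count++;
                }
            }
        }
        System.out.println("Read + search: " + (System.nanoTime() - searchStart) / 1_000_000
                + " ms, " + count + " matches");
    }
}

Keep in mind the second pass will likely benefit from the OS file cache after the first, so run both a few times before drawing conclusions.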

If your results are dominated by the read time (and assuming you read it properly, so channels or at the very least buffered readers), there's not much to do.

If it's the scanning time that dominates, you could read all lines and then ship small batches of lines to a work queue, where multiple threads pick up line batches and search them (a rough sketch is at the end of this answer).

Ballpark figures:

  • Assuming 50 MB/sec as the hard drive read speed (and that's slow by modern standards), you should be able to read the entire 400 MB file into memory in under 10 seconds.
  • Looking at MD5 hashing speed benchmarks (example here) shows that the hashing rate can be at least as fast as (and often faster than) the disk read speed. Also, string searching is faster, simpler, and parallelizes better than hashing.

Given those two estimates, I think a proper implementation can easily land you a run time on the order of 10 seconds (if you start kicking off search jobs as you read line batches), largely dominated by your disk read time.
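A minimal sketch of that batching idea, assuming a fixed thread pool and a batch size of 10,000 lines (both arbitrary choices, not from the answer), with the file name and search term taken from the question:

// Hedged sketch: one thread reads line batches, a thread pool searches them.
// Batch size and pool size are arbitrary assumptions.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class BatchedSearch {
    public static void main(String[] args) throws IOException, InterruptedException {
        final String needle = "particularString";
        final int batchSize = 10_000;
        ExecutorService pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        LongAdder count = new LongAdder();

        try (BufferedReader reader = new BufferedReader(new FileReader("fileName.txt"))) {
            List<String> batch = new ArrayList<>(batchSize);
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == batchSize) {
                    submit(pool, batch, needle, count);
                    batch = new ArrayList<>(batchSize);
                }
            }
            if (!batch.isEmpty()) {
                submit(pool, batch, needle, count);
            }
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
        System.out.println("Number of instances of String: " + count.sum());
    }

    // Search one batch on a worker thread and add its matches to the shared counter.
    private static void submit(ExecutorService pool, List<String> batch, String needle, LongAdder count) {
        pool.submit(() -> {
            long local = 0;
            for (String s : batch) {
                if (s.contains(needle)) {
                    local++;
                }
            }
            count.add(local);
        });
    }
}

Whether this beats a plain single-threaded BufferedReader loop depends on whether the disk or the searching dominates, which is exactly what the timing step above is meant to tell you.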

Upvotes: 8

Mindaugas Nakrošis

Reputation: 120

Use the Scanner method findWithinHorizon. Scanner will internally create a FileChannel to read the file, and for pattern matching it will end up using a Boyer-Moore algorithm for efficient string searching.
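A hedged sketch of how that might look for the question's count-the-occurrences case (findWithinHorizon treats its argument as a regular expression, and a horizon of 0 means the search is unbounded):

// Hedged sketch: count occurrences with Scanner.findWithinHorizon.
// The pattern is a regex, so special characters would need quoting (Pattern.quote).
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class FindWithinHorizonCount {
    public static void main(String[] args) throws FileNotFoundException {
        int count = 0;
        try (Scanner scanner = new Scanner(new File("fileName.txt"))) {
            // A horizon of 0 removes the bound; each successful call advances past the match.
            while (scanner.findWithinHorizon("particularString", 0) != null) {
                count++;
            }
        }
        System.out.println("Number of instances of String: " + count);
    }
}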

Upvotes: -1
