Grandstack

Reputation: 33

Extracting data from thousands of lines

I have an .obj file; so far I have been using a messy tokenizer to split the lines, which I've found to be really inefficient.

public static String getSpecificToken(String s, int t) {
    // A new Scanner is created for every call, which is part of the cost.
    Scanner tokens = new Scanner(s);
    String token = "";

    // Advance to the t-th whitespace-delimited token.
    for (int i = 0; i < t; i++) {
        if (!tokens.hasNext()) { break; }
        token = tokens.next();
    }

    tokens.close();
    return token;
}

The object file's formatting looks like this; I'm struggling to find the most efficient way to split it.

v 1.4870 0.3736 2.2576
v 1.5803 0.3451 2.1859
v 1.6275 0.3111 2.2261
v 1.6343 0.0783 2.4352
v 1.5180 0.0644 2.5398
v 1.4568 0.0720 2.5205
v 1.3953 0.0795 2.5013

Upvotes: 1

Views: 87

Answers (2)

trashgod

Reputation: 205775

This example compares java.util.Scanner with java.io.StreamTokenizer, suggesting a slight edge for the latter. You might be able to use a similar approach to profile your use case.
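A rough sketch of what the StreamTokenizer approach could look like for the vertex data in the question (the class and method names here are my own, not from the linked example):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StreamTokenizer;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class VertexScan {

    /** Collects every "v x y z" vertex record from OBJ-formatted text. */
    static List<double[]> readVertices(String obj) throws IOException {
        StreamTokenizer st = new StreamTokenizer(
            new BufferedReader(new StringReader(obj)));
        List<double[]> vertices = new ArrayList<>();
        while (st.nextToken() != StreamTokenizer.TT_EOF) {
            // A vertex record starts with the bare word "v"
            // ("vt" and "vn" come through as distinct words and are skipped).
            if (st.ttype == StreamTokenizer.TT_WORD && "v".equals(st.sval)) {
                double[] v = new double[3];
                for (int i = 0; i < 3; i++) {
                    st.nextToken();   // parseNumbers() is on by default: nval holds the value
                    v[i] = st.nval;
                }
                vertices.add(v);
            }
        }
        return vertices;
    }

    public static void main(String[] args) throws IOException {
        String obj = "v 1.4870 0.3736 2.2576\nv 1.5803 0.3451 2.1859\n";
        for (double[] v : readVertices(obj)) {
            System.out.println(v[0] + " " + v[1] + " " + v[2]);
        }
    }
}
```

One pass over a `Reader` avoids creating a `Scanner` per line, which is where much of the overhead in the question's code comes from.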

Upvotes: 3

Rohit Jain

Reputation: 213193

Simply use String#split to split on whitespace; there is no need for a tokenizer here. In fact, you should avoid StringTokenizer as far as possible:

String[] tokenArr = s.split("\\s+");  // split on one or more whitespace characters

for (String token : tokenArr) {
    System.out.println(token);
}
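For the OBJ lines in the question, one way this could look when pulling the coordinates out as doubles (a minimal sketch; the variable names are my own):

```java
public class SplitVertex {
    public static void main(String[] args) {
        String line = "v 1.4870 0.3736 2.2576";
        String[] parts = line.trim().split("\\s+"); // split on runs of whitespace
        if (parts.length == 4 && "v".equals(parts[0])) {
            double x = Double.parseDouble(parts[1]);
            double y = Double.parseDouble(parts[2]);
            double z = Double.parseDouble(parts[3]);
            System.out.println(x + ", " + y + ", " + z);
        }
    }
}
```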

Upvotes: 3
