Reputation: 49553
Here is the current code in my application:
String[] ids = str.split("/");
When profiling the application, I found that a non-negligible amount of time is spent splitting strings. Also, the split
method takes a regular expression, which is superfluous here.
What alternative can I use in order to optimize the string splitting? Is StringUtils.split
faster?
(I would've tried and tested myself but profiling my application takes a lot of time.)
Upvotes: 54
Views: 63743
Reputation: 36456
StringTokenizer
is much faster for simple parsing like this (I did some benchmarking a while back and saw huge speedups).
StringTokenizer st = new StringTokenizer("1/2/3", "/");
String[] arr = new String[st.countTokens()];
for (int i = 0; st.hasMoreTokens(); i++) {
    arr[i] = st.nextToken();
}
If you want to eke out a little more performance, you can do it manually as well:
String s = "1/2/3";
char[] c = s.toCharArray();
LinkedList<String> ll = new LinkedList<String>();
int index = 0;
for (int i = 0; i < c.length; i++) {
    if (c[i] == '/') {
        ll.add(s.substring(index, i));
        index = i + 1;
    }
}
ll.add(s.substring(index)); // don't drop the trailing token
String[] arr = new String[ll.size()];
Iterator<String> iter = ll.iterator();
for (index = 0; iter.hasNext(); index++)
    arr[index] = iter.next();
Upvotes: 9
Reputation: 18320
String.split(String)
won't compile a regex if your pattern is only one character long. When splitting by a single character, it uses specialized code which is pretty efficient. StringTokenizer
is not much faster in this particular case.
This was introduced in OpenJDK7/OracleJDK7. Here's a bug report and a commit. I've made a simple benchmark here.
$ java -version
java version "1.8.0_20"
Java(TM) SE Runtime Environment (build 1.8.0_20-b26)
Java HotSpot(TM) 64-Bit Server VM (build 25.20-b23, mixed mode)
$ java Split
split_banthar: 1231
split_tskuzzy: 1464
split_tskuzzy2: 1742
string.split: 1291
StringTokenizer: 1517
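As a quick illustration of the fast path: in OpenJDK 7+, a one-character literal separator (or a two-character escaped metacharacter such as "\\.") bypasses the regex machinery entirely. A minimal sketch:

```java
import java.util.Arrays;

public class SplitFastPath {
    public static void main(String[] args) {
        // One-char literal separator: handled by the fast path, no Pattern is compiled.
        String[] a = "1/2/3".split("/");
        System.out.println(Arrays.toString(a)); // [1, 2, 3]

        // A single escaped metacharacter is also special-cased in OpenJDK.
        String[] b = "1.2.3".split("\\.");
        System.out.println(Arrays.toString(b)); // [1, 2, 3]
    }
}
```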
Upvotes: 63
Reputation: 198093
If you can use third-party libraries, Guava's Splitter
doesn't incur the overhead of regular expressions when you don't ask for it, and is very fast as a general rule. (Disclosure: I contribute to Guava.)
Iterable<String> split = Splitter.on('/').split(string);
(Also, Splitter
is as a rule much more predictable than String.split
.)
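If you need a materialized List rather than an Iterable, a sketch (assuming Guava 15+ on the classpath, where splitToList is available):

```java
import com.google.common.base.Splitter;
import java.util.List;

public class SplitterDemo {
    public static void main(String[] args) {
        // A Splitter is configured once and can be reused; no regex is
        // involved when splitting on a literal char.
        Splitter splitter = Splitter.on('/');
        List<String> parts = splitter.splitToList("1/2/3");
        System.out.println(parts); // [1, 2, 3]

        // Unlike String.split, trailing empty strings are kept by default,
        // which is part of why its behavior is more predictable.
        System.out.println(splitter.splitToList("1/2/")); // [1, 2, ]
    }
}
```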
Upvotes: 24
Reputation: 791
Use Apache Commons Lang 3.0's
StringUtils.splitByWholeSeparator("ab-!-cd-!-ef", "-!-") = ["ab", "cd", "ef"]
If you need a non-regex split and want the results in a String array, use StringUtils. I compared StringUtils.splitByWholeSeparator with Guava's Splitter and Java's String.split, and found StringUtils to be the fastest.
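A minimal sketch of the calls (assuming commons-lang3 on the classpath):

```java
import java.util.Arrays;
import org.apache.commons.lang3.StringUtils;

public class CommonsSplitDemo {
    public static void main(String[] args) {
        // Multi-character literal separator, no regex involved.
        String[] a = StringUtils.splitByWholeSeparator("ab-!-cd-!-ef", "-!-");
        System.out.println(Arrays.toString(a)); // [ab, cd, ef]

        // Single-char variant, also regex-free.
        String[] b = StringUtils.split("1/2/3", '/');
        System.out.println(Arrays.toString(b)); // [1, 2, 3]
    }
}
```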
Upvotes: 1
Reputation: 431
Seeing as I am working at large scale, I thought it would help to provide some more benchmarking, including a few of my own implementations (I split on spaces, but the results should illustrate how long splitting takes in general):
I'm working with a 426 MB file with 2,622,761 lines. The only whitespace characters are normal spaces (" ") and newlines ("\n").
First I replace all newlines with spaces, and benchmark parsing one huge line:
.split(" ")
Cumulative time: 31.431366952 seconds
.split("\\s")
Cumulative time: 52.948729489 seconds
splitStringChArray()
Cumulative time: 38.721338004 seconds
splitStringChList()
Cumulative time: 12.716065893 seconds
splitStringCodes()
Cumulative time: 1 minute, 21.349029036 seconds
splitStringCharCodes()
Cumulative time: 23.459840685 seconds
StringTokenizer
Cumulative time: 1 minute, 11.501686095 seconds
Then I benchmark splitting line by line (meaning that the functions and loops are done many times, instead of all at once):
.split(" ")
Cumulative time: 3.809014174 seconds
.split("\\s")
Cumulative time: 7.906730124 seconds
splitStringChArray()
Cumulative time: 4.06576739 seconds
splitStringChList()
Cumulative time: 2.857809996 seconds
Bonus: splitStringChList(), but creating a new StringBuilder every time (the average difference is actually more like 0.42 seconds):
Cumulative time: 3.82026621 seconds
splitStringCodes()
Cumulative time: 11.730249921 seconds
splitStringCharCodes()
Cumulative time: 6.995555826 seconds
StringTokenizer
Cumulative time: 4.500008172 seconds
Here is the code:
// Use a char array, and count the number of instances first.
public static String[] splitStringChArray(String str, StringBuilder sb) {
char[] strArray = str.toCharArray();
int count = 0;
for (char c : strArray) {
if (c == ' ') {
count++;
}
}
String[] splitArray = new String[count+1];
int i = 0;
for (char c : strArray) {
if (c == ' ') {
splitArray[i++] = sb.toString();
sb.delete(0, sb.length());
} else {
sb.append(c);
}
}
splitArray[i] = sb.toString(); // don't drop the trailing token
sb.delete(0, sb.length());
return splitArray;
}
// Use a char array but collect into an ArrayList, without counting beforehand.
public static ArrayList<String> splitStringChList(String str, StringBuilder sb) {
ArrayList<String> words = new ArrayList<String>();
words.ensureCapacity(str.length()/5);
char[] strArray = str.toCharArray();
for (char c : strArray) {
if (c == ' ') {
words.add(sb.toString());
sb.delete(0, sb.length());
} else {
sb.append(c);
}
}
words.add(sb.toString()); // trailing word
sb.delete(0, sb.length());
return words;
}
// Iterate through code points and return an ArrayList.
// (OfInt here is java.util.PrimitiveIterator.OfInt.)
public static ArrayList<String> splitStringCodes(String str) {
ArrayList<String> words = new ArrayList<String>();
words.ensureCapacity(str.length()/5);
IntStream is = str.codePoints();
OfInt it = is.iterator();
int cp;
StringBuilder sb = new StringBuilder();
while (it.hasNext()) {
cp = it.next();
if (cp == ' ') {
words.add(sb.toString());
sb.delete(0, sb.length());
} else {
sb.appendCodePoint(cp); // append as a character, not its numeric value
}
}
words.add(sb.toString()); // trailing word
return words;
}
// This one is for compatibility with supplementary (surrogate-pair) characters, by using Character.codePointAt().
public static ArrayList<String> splitStringCharCodes(String str, StringBuilder sb) {
char[] strArray = str.toCharArray();
ArrayList<String> words = new ArrayList<String>();
words.ensureCapacity(str.length()/5);
int cp;
int len = strArray.length;
int i = 0;
while (i < len) {
cp = Character.codePointAt(strArray, i);
if (cp == ' ') {
words.add(sb.toString());
sb.delete(0, sb.length());
} else {
sb.appendCodePoint(cp); // append as a character, not its numeric value
}
i += Character.charCount(cp); // skip both halves of a surrogate pair
}
words.add(sb.toString()); // trailing word
return words;
}
This is how I used StringTokenizer:
StringTokenizer tokenizer = new StringTokenizer(file.getCurrentString());
words = new String[tokenizer.countTokens()];
int i = 0;
while (tokenizer.hasMoreTokens()) {
words[i] = tokenizer.nextToken();
i++;
}
Upvotes: 8
Reputation: 484
You can write the split function yourself, which will be the fastest. Here is a link to a question that proves it; it worked for me too and made my code about 6x faster:
StringTokenizer - reading lines with integers
Split: 366 ms
IndexOf: 50 ms
StringTokenizer: 89 ms
GuavaSplit: 109 ms
IndexOf2 (a super-optimized solution given in the question above): 14 ms
CsvMapperSplit (mapping row by row): 326 ms
CsvMapperSplit_DOC (building one doc and mapping all rows in one go): 177 ms
Upvotes: 0
Reputation: 3796
StringTokenizer is faster than any other splitting method, and having the tokenizer return the delimiters along with the tokenized string improves performance by something like 50%. That is achieved by using the constructor java.util.StringTokenizer.StringTokenizer(String str, String delim, boolean returnDelims)
. Here are some other insights on that matter: Performance of StringTokenizer class vs. split method in Java
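A minimal sketch of the returnDelims variant:

```java
import java.util.StringTokenizer;

public class TokenizerWithDelims {
    public static void main(String[] args) {
        // Passing true as the third argument makes the tokenizer
        // return the delimiters themselves as tokens.
        StringTokenizer st = new StringTokenizer("1/2/3", "/", true);
        while (st.hasMoreTokens()) {
            System.out.print(st.nextToken() + " "); // 1 / 2 / 3
        }
        System.out.println();
    }
}
```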
Upvotes: 1
Reputation: 4443
String's split method is probably the safer choice. As of at least Java 6 (though the API reference quoted here is from Java 7), the Javadoc explicitly discourages use of StringTokenizer:
"StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead."
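If the regex overhead of String.split is the concern, the java.util.regex route the Javadoc points to can be made cheap by compiling the pattern once and reusing it, as a sketch:

```java
import java.util.Arrays;
import java.util.regex.Pattern;

public class PrecompiledSplit {
    // Compile once and reuse; avoids re-parsing the pattern on every call.
    private static final Pattern SLASH = Pattern.compile("/");

    public static void main(String[] args) {
        String[] parts = SLASH.split("1/2/3");
        System.out.println(Arrays.toString(parts)); // [1, 2, 3]
    }
}
```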
Upvotes: 0
Reputation: 691735
Guava has a Splitter which is more flexible than the String.split()
method, and doesn't (necessarily) use a regex. OTOH, String.split()
has been optimized in Java 7 to avoid the regex machinery when the separator is a single char, so the performance should be similar in Java 7.
Upvotes: 2