Reputation:
Currently I have this implementation working, reading from a byte stream and writing out to a file. I am wondering if this is particularly dangerous or discouraged; in the essence of time I'm unable to test all the different implementations of this mechanism, and this seems to be working. Any advice would be greatly appreciated.
SharedByteArrayInputStream stream = (SharedByteArrayInputStream) content;
ArrayList<Byte> bites = new ArrayList<Byte>();
int bite;
// read() returns an int; compare before casting, or the byte 0xFF is mistaken for end-of-stream
while ((bite = stream.read()) != -1) {
    bites.add((byte) bite);
}
byte[] bytes = new byte[bites.size()];
for (int x = 0; x < bites.size(); x++) {
    bytes[x] = bites.get(x);
}
String aloha = new String(bytes, Charset.forName("ISO-8859-1"));
writer.append(aloha + "\n");
stream.close();
I know it looks silly but it works.
Thanks again for any input
Upvotes: 1
Views: 3194
Reputation: 8405
I assume you're only creating the temporary ArrayList because you cannot determine the length of the input. Try using ByteArrayOutputStream instead.
Consider the following code:
SharedByteArrayInputStream stream = (SharedByteArrayInputStream) content;
ByteArrayOutputStream bOut = new ByteArrayOutputStream();
// Reading in chunks performs better than reading one byte at a time.
int r;
byte[] buffer = new byte[32 * 1000];
// Read from the input and write into the ByteArrayOutputStream.
while ((r = stream.read(buffer)) != -1) {
    bOut.write(buffer, 0, r);
}
String aloha = new String(bOut.toByteArray(), Charset.forName("ISO-8859-1"));
writer.append(aloha + "\n");
stream.close();
Your code uses far more memory than necessary and iterates through two loops when only one is required, making it very inefficient. ByteArrayOutputStream is a better implementation: it is both faster and has a smaller memory footprint.
Upvotes: 1
Reputation: 11006
I see a few problems, I'll list them in order of importance.
You are reading an entire byte stream into bites, then writing bites to another stream (possibly the disk). This is bad because the intermediate structure roughly doubles the memory you consume. It also takes more CPU to perform this operation, so it's slower.
You are not closing your writer. Be sure to close all streams after use.
Use the overload of read that accepts a byte array instead of reading a byte at a time. Reading one byte at a time is comparatively slow, and when processing large amounts of data it's noticeable.
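On the second point (closing streams): since Java 7 the usual way to guarantee this is a try-with-resources block, which closes the streams automatically even when an exception is thrown mid-copy. A minimal sketch — the class and method names here are mine, not from the question:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class CopyExample {
    // Reads every byte from the stream; try-with-resources closes both
    // streams automatically, even if read() or write() throws.
    static byte[] readAll(InputStream in) throws IOException {
        try (InputStream input = in;
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buffer = new byte[1024];
            int count;
            while ((count = input.read(buffer)) != -1) {
                out.write(buffer, 0, count);
            }
            return out.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes(StandardCharsets.ISO_8859_1);
        byte[] copy = readAll(new ByteArrayInputStream(data));
        System.out.println(copy.length); // prints 5
    }
}
```

Resources declared in the try header are closed in reverse order of declaration, so this replaces the explicit close() calls below.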
Here's your code with the changes I suggest:
EDIT: As pointed out by c.s., you are writing to a file and needn't convert your bytes to a string at all, as they'll just wind up as bytes again in the file. (I misread and wasn't sure you were writing to a file, so I didn't include this originally.)
Instead of a writer, use a file output stream. I also suggest you not append a \n to your data, as it's unnecessary.
FileOutputStream fileOutputStream = new FileOutputStream(filepath);
SharedByteArrayInputStream stream = (SharedByteArrayInputStream) content;
byte[] buffer = new byte[1024];
// read(byte[]) returns the number of bytes read as an int, not a byte.
int count;
// Here we're reading more than one byte at a time.
while ((count = stream.read(buffer)) != -1) {
    // Write to the file output stream instead; no newline character appended.
    fileOutputStream.write(buffer, 0, count);
}
stream.close();
// Close the output stream when you're done.
fileOutputStream.close();
This solution will work with any size of data, and will be much faster than your previous code.
Upvotes: 1
Reputation: 2236
File f = new File(filePath);   // filePath: your destination path
FileOutputStream fOut = new FileOutputStream(f);
InputStream is = in;           // in: your InputStream
byte[] data = new byte[1024];
int count;
while ((count = is.read(data)) != -1) {
    fOut.write(data, 0, count);
}
fOut.flush();
fOut.close();
is.close();
That's my code, and it works perfectly.
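For what it's worth, on Java 7 and later the same copy loop can be done in a single call with java.nio.file.Files.copy. A sketch — the input stream and target path here are placeholders of my own:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NioCopyExample {
    public static void main(String[] args) throws IOException {
        // Placeholder input: in real code this is your InputStream.
        InputStream is = new ByteArrayInputStream(new byte[]{1, 2, 3});
        // Placeholder target: in real code this is your destination path.
        Path target = Files.createTempFile("copy-demo", ".bin");
        // Copies every byte from the stream into the file, returning the count.
        long copied = Files.copy(is, target, StandardCopyOption.REPLACE_EXISTING);
        is.close();
        System.out.println(copied); // prints 3
        Files.delete(target);
    }
}
```

Files.copy handles the buffer loop internally, though you still close the input stream yourself.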
Upvotes: 2