Reputation: 173
I'm trying to upload large files to a web application using the Spark framework, but I'm running into out-of-memory errors. It appears that Spark is caching the request body in memory. I'd like either to cache file uploads on disk or to read the request as a stream.
I've tried using the streaming support of Apache Commons FileUpload, but it appears that calling request.raw().getInputStream() causes Spark to read the entire body into memory and return an InputStream view of that chunk of memory, as done by this code. Based on the comment in the file, this is so that getInputStream can be called multiple times. Is there any way to change this behavior?
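For reference, here is roughly what my Commons FileUpload streaming attempt looks like (the "/upload" path and the destination directory are placeholders):

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.apache.commons.fileupload.FileItemIterator;
import org.apache.commons.fileupload.FileItemStream;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
import static spark.Spark.post;

post("/upload", (request, response) -> {
    // getItemIterator should stream each part without buffering it, but
    // request.raw() has already been read into memory by Spark at this point.
    FileItemIterator iterator = new ServletFileUpload().getItemIterator(request.raw());
    while (iterator.hasNext()) {
        FileItemStream item = iterator.next();
        if (!item.isFormField()) {
            try (InputStream stream = item.openStream()) {
                Files.copy(stream, Paths.get("/tmp", item.getName()));
            }
        }
    }
    return "OK";
});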
Upvotes: 2
Views: 1415
I recently had the same problem and I figured out that you could bypass the caching. I do so with the following function:
import java.io.IOException;
import javax.servlet.ServletInputStream;
import javax.servlet.ServletRequestWrapper;
import javax.servlet.http.HttpServletRequest;
import spark.Request;

public ServletInputStream getInputStream(Request request) throws IOException {
    final HttpServletRequest raw = request.raw();
    if (raw instanceof ServletRequestWrapper) {
        // Unwrap Spark's caching wrapper and read straight from the original request.
        return ((ServletRequestWrapper) raw).getRequest().getInputStream();
    }
    return raw.getInputStream();
}
This has been tested with Spark 2.4.
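As a hypothetical usage example (the "/upload" path and target file are placeholders), you could stream the raw body straight to disk from a route:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import static spark.Spark.post;

post("/upload", (request, response) -> {
    // Copy the body to disk without holding it all in memory.
    try (ServletInputStream body = getInputStream(request)) {
        Files.copy(body, Paths.get("/tmp/upload.bin"), StandardCopyOption.REPLACE_EXISTING);
    }
    return "OK";
});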
I'm not familiar with the inner workings of Spark, so one potential, minor downside of this function is that you don't know whether you get the cached InputStream or not: the cached version is reusable, the non-cached one is not.
To get around this downside, I suppose you could implement a function similar to the following:
// Returns true when the raw request is not a wrapper we can unwrap,
// i.e. when getInputStream(request) above may hand back the cached stream.
public boolean hasCachedInputStream(Request request) {
    return !(request.raw() instanceof ServletRequestWrapper);
}
Upvotes: 1
Reputation: 4181
Short answer: not that I can see.
SparkServerFactory builds the JettyHandler, which has a private static class HttpRequestWrapper that reads the InputStream into memory.
All that static stuff means there's no way to extend it.
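To illustrate the pattern (a simplified sketch, not Spark's actual source; commons-io is used here for brevity): the wrapper buffers the whole body once so that getInputStream() can be called repeatedly, which is exactly what blows up on large uploads.

import java.io.ByteArrayInputStream;
import java.io.IOException;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;
import org.apache.commons.io.IOUtils;

class CachingRequestWrapper extends HttpServletRequestWrapper {
    private byte[] cachedBody;

    CachingRequestWrapper(HttpServletRequest request) {
        super(request);
    }

    @Override
    public ServletInputStream getInputStream() throws IOException {
        if (cachedBody == null) {
            // The entire body lands in memory here, hence the OOM on large uploads.
            cachedBody = IOUtils.toByteArray(super.getInputStream());
        }
        final ByteArrayInputStream buffer = new ByteArrayInputStream(cachedBody);
        return new ServletInputStream() {
            @Override public int read() { return buffer.read(); }
            @Override public boolean isFinished() { return buffer.available() == 0; }
            @Override public boolean isReady() { return true; }
            @Override public void setReadListener(ReadListener listener) { /* not needed for a sketch */ }
        };
    }
}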
Upvotes: 0