Nikola Anusev

Reputation: 7088

WCF REST, streamed upload of files and httpRuntime maxRequestLength property

I have created a simple WCF service to prototype file uploading. The service:

[ServiceContract]
public class Service1
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "/Upload")]
    public void Upload(Stream stream)
    {
        // Copy the incoming request body straight to a file on disk.
        using (FileStream targetStream = new FileStream(@"C:\Test\output.txt", FileMode.Create, FileAccess.Write))
        {
            stream.CopyTo(targetStream);
        }
    }
}

It uses webHttpBinding with transferMode set to "Streamed" and maxReceivedMessageSize, maxBufferPoolSize and maxBufferSize all set to 2GB. httpRuntime has maxRequestLength set to 10MB.
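In web.config terms, the setup looks roughly like this (the binding name is a placeholder, and maxRequestLength is expressed in KB, so 10240 means 10 MB):

<system.web>
  <!-- maxRequestLength is in KB: 10240 KB = 10 MB -->
  <httpRuntime maxRequestLength="10240" />
</system.web>

<system.serviceModel>
  <bindings>
    <webHttpBinding>
      <!-- "streamedBinding" is a placeholder name -->
      <binding name="streamedBinding"
               transferMode="Streamed"
               maxReceivedMessageSize="2147483647"
               maxBufferPoolSize="2147483647"
               maxBufferSize="2147483647" />
    </webHttpBinding>
  </bindings>
</system.serviceModel>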

The client issues HTTP requests in the following way:

HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(@"http://.../Service1.svc/Upload");

request.Method = "POST";
// Chunked transfer encoding with client-side buffering disabled,
// so the body is streamed instead of being held in memory.
request.SendChunked = true;
request.AllowWriteStreamBuffering = false;
request.ContentType = MediaTypeNames.Application.Octet;

using (FileStream inputStream = new FileStream(@"C:\input.txt", FileMode.Open, FileAccess.Read))
{
    using (Stream outputStream = request.GetRequestStream())
    {
        inputStream.CopyTo(outputStream);
    }
}

Now, finally, what's wrong:

When uploading a 100 MB file, the server returns HTTP 400 (Bad Request). I've tried enabling WCF tracing, but it shows no error. When I increase httpRuntime.maxRequestLength to 1 GB, the file uploads without problems. MSDN says that maxRequestLength "specifies the limit for the input stream buffering threshold, in KB".

This leads me to believe that the whole file (all 100 MB of it) is first stored in an "input stream buffer", and only then is it made available to my Upload method on the server. I can actually see that the size of the file on the server does not grow gradually (as I would expect); instead, at the moment it is created it is already 100 MB.

The question: How can I get this to work so that the "input stream buffer" stays reasonably small (say, 1 MB), and my Upload method is called whenever it overflows? In other words, I want the upload to be truly streamed, without the whole file being buffered anywhere.

EDIT: I have now discovered that httpRuntime contains another setting that is relevant here: requestLengthDiskThreshold. It seems that when the input buffer grows beyond this threshold, it is no longer kept in memory but is stored on the filesystem instead. So at least the whole 100 MB file is not held in memory (which is what I was most afraid of); however, I would still like to know whether there is some way to avoid this buffer altogether.
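For reference, the threshold is configured on httpRuntime and, like maxRequestLength, is expressed in KB (the values below are just illustrative):

<httpRuntime maxRequestLength="1048576" requestLengthDiskThreshold="1024" />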

Upvotes: 18

Views: 5172

Answers (2)

cvlad

Reputation: 608

If you are using .NET 4 and hosting your service in IIS7+, you may be affected by an ASP.NET bug which is described in the following blog post:

http://blogs.microsoft.co.il/blogs/idof/archive/2012/01/17/what-s-new-in-wcf-4-5-improved-streaming-in-iis-hosting.aspx

Basically, for streamed requests, the ASP.NET handler in IIS buffers the whole request before handing control over to WCF, and that handler obeys the maxRequestLength limit.

As far as I know, there is no direct workaround for the bug, so you have the following options:

  • upgrade to .NET 4.5
  • self-host your service instead of using IIS (see the sketch after this list)
  • use a binding that is not based on HTTP, so that the ASP.NET handler is not involved
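For the self-hosting route, here is a minimal sketch of what that could look like; the address and setup are illustrative and assume the Service1 contract from the question:

// Self-hosting sketch: outside IIS the ASP.NET handler is not involved,
// so httpRuntime limits such as maxRequestLength do not apply.
using System;
using System.ServiceModel;
using System.ServiceModel.Web;

class Program
{
    static void Main()
    {
        WebHttpBinding binding = new WebHttpBinding
        {
            TransferMode = TransferMode.Streamed,
            MaxReceivedMessageSize = int.MaxValue
        };

        // WebServiceHost adds the webHttp behavior required by WebInvoke automatically.
        using (WebServiceHost host = new WebServiceHost(typeof(Service1), new Uri("http://localhost:8080/")))
        {
            host.AddServiceEndpoint(typeof(Service1), binding, "");
            host.Open();
            Console.WriteLine("Listening on http://localhost:8080/Upload - press Enter to stop.");
            Console.ReadLine();
        }
    }
}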

Upvotes: 9

Chris Walter

Reputation: 2466

This may be a bug in the streaming implementation. I found an MSDN forum thread that suggests doing exactly what you are describing, at http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/fb9efac5-8b57-417e-9f71-35d48d421eb4/. Unfortunately, the Microsoft employee suggesting the fix then found a bug in the implementation and didn't follow up with details on a fix.

That said, it looks like the implementation is broken, which you could test by profiling your code with a memory profiler and verifying whether or not the entire file is being stored in memory. If it is, you won't be able to fix this issue unless somebody finds a configuration problem with your code.

And while using requestLengthDiskThreshold could technically work, it will dramatically increase your write times: each file has to be written first as temp data, read back from that temp data, written again as the final file, and finally the temp data has to be deleted. As you have already said, you are dealing with extremely large files, so I doubt such a solution is acceptable.

Your best bet is to use a chunking framework and manually reconstruct the file. I found instructions on how to write such logic at http://aspilham.blogspot.com/2011/03/file-uploading-in-chunks-using.html but have not had the time to check them for accuracy; a rough sketch of the idea follows.
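To illustrate the idea only (this is a sketch, not the framework from the link: the UriTemplate, the UploadChunk name, and the paths are invented, and it assumes chunks are sent sequentially):

// Server side: a hypothetical operation that appends each chunk to the
// target file. A real implementation should validate fileName.
[OperationContract]
[WebInvoke(Method = "POST", UriTemplate = "/UploadChunk?file={fileName}")]
public void UploadChunk(string fileName, Stream chunk)
{
    using (FileStream target = new FileStream(Path.Combine(@"C:\Test", fileName), FileMode.Append, FileAccess.Write))
    {
        chunk.CopyTo(target);
    }
}

// Client side: read the file in 1 MB pieces and POST each one separately,
// so no single request ever comes near maxRequestLength.
byte[] buffer = new byte[1024 * 1024];

using (FileStream input = new FileStream(@"C:\input.txt", FileMode.Open, FileAccess.Read))
{
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
            @"http://.../Service1.svc/UploadChunk?file=output.txt");
        request.Method = "POST";
        request.ContentType = MediaTypeNames.Application.Octet;
        request.ContentLength = bytesRead;

        using (Stream body = request.GetRequestStream())
        {
            body.Write(buffer, 0, bytesRead);
        }

        // Dispose each response so the connection can be reused.
        using (request.GetResponse()) { }
    }
}

Each request stays far below the 10 MB limit, so the ASP.NET buffering never gets in the way, and only one chunk is ever held in memory at a time.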

I'm sorry I can't tell you why your code isn't working as documented, but something along the lines of the chunking approach above should work without ballooning your memory footprint.

Upvotes: 7
