Reputation: 17786
Consider the following code:
var container = new BlobContainerClient(...);
// fileStream is a stream delivering 10 MB of data
await container.UploadBlobAsync("name-of-blob", fileStream);
Watching the HTTP requests with Fiddler, I can see that this results in four HTTP PUT requests (the address is 127.0.0.1 because I am testing locally against the Azurite emulator):
The first two requests (603 and 607) are 4 MB each, the third (613) is 2 MB, and the fourth (614) finally commits all the sent blocks.
Instead of making three requests (4 MB + 4 MB + 2 MB) for the data, is it somehow possible to stream the 10 MB in a single request to save some overhead?
As the data is sent in 4 MB chunks, does this mean the Azure Storage client waits until it has read 4 MB from the fileStream before it starts sending, i.e. that 4 MB of RAM is used for buffering? My intention in using a fileStream was to reduce memory usage by passing it straight through to Azure Blob Storage.
I am using Azure.Storage.Blobs version 12.8.0 (the latest stable version as I write this).
Upvotes: 0
Views: 1411
Reputation: 2683
Ad 1) The maximum size of a single block in a PUT operation depends on the Azure Storage server version (see here). For testing purposes I created a new storage account in Azure and started an UploadBlobAsync() operation with a 10.5 MB video file; Fiddler shows me this:
A single PUT operation with 10544014 bytes. Note the x-ms-version request header, which lets the client specify which API version it wants to use (see here). I suppose your local emulator is just using an older API version.
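If you want more control over when the client switches to chunked uploads, the SDK also exposes StorageTransferOptions, which you can pass via BlobUploadOptions. A minimal sketch (the connection string, container name, and file path are placeholders; InitialTransferSize is the documented knob for the largest amount the client will attempt in a single request):

```csharp
using System.IO;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

class SingleRequestUpload
{
    static async Task Main()
    {
        var container = new BlobContainerClient(
            "UseDevelopmentStorage=true",  // Azurite; replace with your connection string
            "my-container");               // hypothetical container name
        await container.CreateIfNotExistsAsync();

        var blob = container.GetBlobClient("name-of-blob");
        using var fileStream = File.OpenRead("video.mp4"); // hypothetical local file

        var options = new BlobUploadOptions
        {
            TransferOptions = new StorageTransferOptions
            {
                // Try to send up to 16 MB in one request before chunking.
                InitialTransferSize = 16 * 1024 * 1024,
                // Block size to use if the client does fall back to chunking.
                MaximumTransferSize = 8 * 1024 * 1024
            }
        };

        await blob.UploadAsync(fileStream, options);
    }
}
```

Whether the single-request path is actually taken still depends on the API version the server (or emulator) supports, so this only raises the client-side limit.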
Ad 2) Yes, for larger files UploadBlobAsync() will chunk the request: it reads a set of bytes from the stream, performs a PUT, reads the next set of bytes, performs another PUT, and so on.
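The loop you observed in Fiddler corresponds to the Put Block / Put Block List pattern, which you can also drive manually through BlockBlobClient. A rough sketch of what happens internally (the 4 MB block size mirrors your capture; the GUID-based block IDs are just one valid choice):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Azure.Storage.Blobs.Specialized;

class ChunkedUploadSketch
{
    static async Task UploadInBlocksAsync(BlockBlobClient blob, Stream source)
    {
        const int blockSize = 4 * 1024 * 1024; // 4 MB, as seen in the Fiddler capture
        var blockIds = new List<string>();
        var buffer = new byte[blockSize];      // this buffer is the ~4 MB of RAM in question
        int read;

        while ((read = await source.ReadAsync(buffer, 0, blockSize)) > 0)
        {
            // Block IDs must be base64-encoded and all the same length.
            string blockId = Convert.ToBase64String(Guid.NewGuid().ToByteArray());

            using var blockStream = new MemoryStream(buffer, 0, read);
            await blob.StageBlockAsync(blockId, blockStream); // one PUT per block
            blockIds.Add(blockId);
        }

        await blob.CommitBlockListAsync(blockIds); // the final PUT commits the blob
    }
}
```

So the stream is never loaded whole into memory, but one block's worth of data is buffered before each PUT, which matches the behaviour you measured.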
Upvotes: 2