Reputation: 133
I'm using the Azure Storage Client for PHP
I would like to be able to stream data chunk by chunk, to reduce the memory overhead associated with reading the entire file into memory. I have looked through the library docs, but I haven't found anything that would help.
At the moment I use fopen(), but as far as I'm aware it reads the entire file into memory. Would it be possible to do something along the lines of:
while ($data = fread($file, 8192)) {
    // some library function
}
Any help would be greatly appreciated!
Upvotes: 0
Views: 588
Reputation: 136196
It is certainly possible to do so. The methods you would want to use are createBlobBlock (or createBlobBlockAsync) and commitBlobBlocks (or commitBlobBlocksAsync).
Essentially the idea is that you read a chunk of data from the source and upload that chunk (known as a block in Azure Storage). For that you would use either createBlobBlock or createBlobBlockAsync. Each block must be assigned a unique id.
Once all blocks are uploaded, you would commit them to create the blob. To commit the blocks, you would use either commitBlobBlocks or commitBlobBlocksAsync.
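Putting the two steps together, a minimal sketch might look like the following. It assumes the microsoft/azure-storage-blob SDK; the connection string, container name, blob name, and source file path are placeholders, and the exact commitBlobBlocks signature should be checked against the SDK version you have installed.

```php
<?php
require_once 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobRestProxy;
use MicrosoftAzure\Storage\Blob\Models\Block;

// Placeholder connection string, container, and blob names.
$client    = BlobRestProxy::createBlobServiceWithConnectionString($connectionString);
$container = 'mycontainer';
$blob      = 'myblob.dat';

$blocks  = [];
$counter = 0;

$file = fopen('/path/to/source.dat', 'rb');
while (!feof($file)) {
    // Read one chunk at a time so the whole file never sits in memory.
    $chunk = fread($file, 4 * 1024 * 1024);
    if ($chunk === false || $chunk === '') {
        break;
    }
    // Block ids must be base64-encoded and of equal length before encoding.
    $blockId = base64_encode(sprintf('block-%06d', $counter));
    $client->createBlobBlock($container, $blob, $blockId, $chunk);
    $blocks[] = new Block($blockId, 'Uncommitted');
    $counter++;
}
fclose($file);

// Committing the block list assembles the uploaded blocks into the final blob.
$client->commitBlobBlocks($container, $blob, $blocks);
```

The zero-padded counter keeps every block id the same length, which the service requires of the pre-encoding ids within a single blob.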
For more details, please see these two links: Put Block and Put Block List.
Upvotes: 1