MikeS

Reputation: 707

SQLFileStream with a chunked file

I'm a little stuck trying to upload files into our SQL database using FILESTREAM. I've followed this example http://www.codeproject.com/Articles/128657/How-Do-I-Use-SQL-File-Stream but the difference is that we upload the file in 10 MB chunks.

On the first chunk a record is created in the DB with empty content (so that a file is created) and then OnUploadChunk is called for each chunk.
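For reference, the first-chunk insert is roughly the following (a sketch; the table and column names are illustrative, not our actual schema):

// Roughly what the first chunk does: insert a row with a zero-length
// FILESTREAM value (0x) so SQL Server creates the backing file on disk.
// Assumes: using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand(
    "INSERT INTO Documents (DocID, FileData) VALUES (@DocID, 0x)", conn))
{
    cmd.Parameters.AddWithValue("@DocID", DocID);
    conn.Open();
    cmd.ExecuteNonQuery();
}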

The file uploads OK, but when I check the FILESTREAM data directory, a new file has been created for each chunk: for a 20 MB file, for example, I have one file of 0 KB, another of 10 MB, and a final one of 20 MB. I'm expecting a single 20 MB file.

I'm guessing this is something to do with how I'm getting the transaction context, or with my use of TransactionScope, which I don't quite fully grasp yet. I presume the transaction context may be different for each chunk, since each chunk is a separate round trip from client to server.

Here is the method that is called every time a chunk is sent from the client (we're using PlupLoad, if that's of any relevance).

protected override bool OnUploadChunk(Stream chunkStream, string DocID)
{
    BinaryReader b = new BinaryReader(chunkStream);
    byte[] binData = b.ReadBytes((int)chunkStream.Length);

    using (TransactionScope transactionScope = new TransactionScope())
    {
        // Folder path the file is sitting in
        string FilePath = GetFilePath(DocID);

        // Gets size of file that has been uploaded so far
        long currentFileSize = GetCurrentFileSize(DocID);

        // Essentially this is just SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()
        byte[] transactionContext = GetTransactionContext();

        SqlFileStream filestream = new SqlFileStream(FilePath, transactionContext, FileAccess.ReadWrite);

        filestream.Seek(currentFileSize, SeekOrigin.Begin);
        filestream.Write(binData, 0, binData.Length);
        filestream.Close();

        transactionScope.Complete();
    }

    return true;
}
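For completeness, GetFilePath and GetTransactionContext are along these lines (a sketch with an illustrative table/column name and an already-open connection enlisted in the ambient transaction):

private string GetFilePath(string DocID)
{
    // PathName() returns the logical path that SqlFileStream expects.
    using (SqlCommand cmd = new SqlCommand(
        "SELECT FileData.PathName() FROM Documents WHERE DocID = @DocID", connection))
    {
        cmd.Parameters.AddWithValue("@DocID", DocID);
        return (string)cmd.ExecuteScalar();
    }
}

private byte[] GetTransactionContext()
{
    // Only valid inside an active transaction; the context ties the
    // SqlFileStream handle to that transaction.
    using (SqlCommand cmd = new SqlCommand(
        "SELECT GET_FILESTREAM_TRANSACTION_CONTEXT()", connection))
    {
        return (byte[])cmd.ExecuteScalar();
    }
}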

UPDATE:

I've done a little research, and I believe the issue is this:

FILESTREAM does not currently support in-place updates. Therefore an update to a column with the FILESTREAM attribute is implemented by creating a new zero-byte file, which then has the entire new data value written to it. When the update is committed, the file pointer is then changed to point to the new file, leaving the old file to be deleted at garbage collection time. This happens at a checkpoint for simple recovery, and at a backup or log backup.

So do I just have to wait for the garbage collector to remove the leftover per-chunk files? Or should I instead be uploading the file somewhere on the file system first and then copying it across in a single write?

Upvotes: 0

Views: 473

Answers (1)

user875318

Reputation: 586

Yes, you will have to wait for SQL Server to clean up the old files for you.

Unless you have other system constraints, you should be able to stream the entire file all at once. This will give you a single file on the SQL Server side.
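Something along these lines should work (a sketch that reuses the helper names from your question; sourceStream stands in for the full upload stream):

// Write the whole upload in one transaction with a single SqlFileStream
// handle, so only one FILESTREAM file is produced on the server.
using (TransactionScope scope = new TransactionScope())
{
    string filePath = GetFilePath(docId);
    byte[] txContext = GetTransactionContext();

    using (SqlFileStream dest = new SqlFileStream(filePath, txContext, FileAccess.Write))
    {
        byte[] buffer = new byte[81920];
        int read;
        while ((read = sourceStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            dest.Write(buffer, 0, read);
        }
    }

    scope.Complete();
}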

Upvotes: 1
