Reputation: 232
Is it possible to optimize this code using Parallel.ForEach or something similar?
using (var zipStream = new ZipOutputStream(OpenWriteArchive()))
{
    zipStream.CompressionLevel = CompressionLevel.Level9;
    foreach (var document in documents)
    {
        zipStream.PutNextEntry(GetZipEntryName(type));
        using (var targetStream = new MemoryStream()) // document stream
        {
            DocumentHelper.SaveDocument(document.Value, targetStream, type);
            targetStream.Position = 0;
            targetStream.CopyTo(zipStream);
        }
        GC.Collect();
    }
}
The problem is that DotNetZip's and SharpZipLib's ZipOutputStream classes don't support seeking or changing the position.
Writing to the zip stream from multiple threads leads to errors. It's also impossible to accumulate the result streams in a ConcurrentStack, because the application can work with 1000+ documents and has to compress the streams and save them to the cloud on the fly.
Is there any way to solve this?
Upvotes: 2
Views: 2725
Reputation: 232
Resolved by using a ProducerConsumerQueue (producer-consumer pattern); a sketch of such a queue is shown after the handler below.
using (var queue = new ProducerConsumerQueue<byte[]>(HandlerDelegate))
{
    Parallel.ForEach(documents, document =>
    {
        using (var documentStream = new MemoryStream())
        {
            // saving document here ...
            queue.EnqueueTask(documentStream.ToArray());
        }
    });
}
protected void HandlerDelegate(byte[] content)
{
    ZipOutputStream.PutNextEntry(Guid.NewGuid() + ".pdf");
    using (var stream = new MemoryStream(content))
    {
        stream.Position = 0;
        stream.CopyTo(ZipOutputStream);
    }
}
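ProducerConsumerQueue<T> is not a BCL type; here is a minimal sketch of one built on BlockingCollection<T>, assuming a single consumer task that invokes the handler delegate, so all writes to the zip stream stay on one thread:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Minimal single-consumer work queue: items are handled one at a time on a
// dedicated task, so the handler can safely write to a shared ZipOutputStream.
public sealed class ProducerConsumerQueue<T> : IDisposable
{
    private readonly BlockingCollection<T> _items = new BlockingCollection<T>();
    private readonly Task _consumer;

    public ProducerConsumerQueue(Action<T> handler)
    {
        _consumer = Task.Run(() =>
        {
            foreach (var item in _items.GetConsumingEnumerable())
                handler(item);
        });
    }

    public void EnqueueTask(T item) => _items.Add(item);

    public void Dispose()
    {
        _items.CompleteAdding();   // let the consumer drain what is already queued
        _consumer.Wait();
        _items.Dispose();
    }
}
Disposing the queue at the end of the outer using block completes adding and waits for the consumer, so the archive isn't closed while entries are still being written.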
Upvotes: 1
Reputation: 66
Try to declare the zipStream inside the Parallel.ForEach, like:
Parallel.ForEach(documents, (document) =>
{
    using (var zipStream = new ZipOutputStream(OpenWriteArchive()))
    {
        zipStream.CompressionLevel = CompressionLevel.Level9;
        zipStream.PutNextEntry(GetZipEntryName(type));
        using (var targetStream = new MemoryStream()) // document stream
        {
            DocumentHelper.SaveDocument(document.Value, targetStream, type);
            targetStream.Position = 0;
            targetStream.CopyTo(zipStream);
        }
        GC.Collect();
    }
});
Bye!
Upvotes: 0