Reputation: 1891
I have a Queue on which the Enqueue operation would be performed by one thread and Dequeue would be performed by another. Needless to say, I had to implement some thread safety for it.
I first tried using a lock on the queue around each Enqueue/Dequeue, as it gives better control over the locking mechanism. It worked well, but my curious mind led me to test some more.
I then tried the Queue.Synchronized wrapper, keeping everything else the same. Now, I am not sure if it's true, but performance does seem a tad faster with this approach.
Do you think there actually is some difference in performance between the two, or am I just imagining things here..? :)
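For context, here is a simplified sketch of the two variants I am comparing (member names are just illustrative):

// Variant 1: plain Queue guarded by my own lock
private readonly object _sync = new object();
private readonly Queue _queue = new Queue();

public void Produce(object item)
{
    lock (_sync) _queue.Enqueue(item);
}

public object Consume()
{
    lock (_sync) return _queue.Dequeue();
}

// Variant 2: the thread-safe wrapper returned by Queue.Synchronized
private readonly Queue _wrappedQueue = Queue.Synchronized(new Queue());

public void ProduceWrapped(object item)
{
    _wrappedQueue.Enqueue(item); // wrapper locks internally
}

public object ConsumeWrapped()
{
    return _wrappedQueue.Dequeue();
}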
Upvotes: 8
Views: 12266
Reputation: 38444
When you call Queue.Synchronized you get back a SynchronizedQueue, which simply takes a lock around each call to Enqueue and Dequeue on an inner queue. Therefore, the performance should be the same as using a Queue and doing the locking yourself around Enqueue and Dequeue with your own lock.
You are indeed imagining things - they should be the same.
Update
There is actually the fact that when using a SynchronizedQueue you are adding a layer of indirection, as you have to go through the wrapper methods to get to the inner queue which it is managing. If anything this should slow things down very fractionally, as you've got an extra frame on the stack that needs to be managed for each call. God knows if in-lining cancels this out though. Whatever - it's minimal.
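To illustrate, the wrapper behaves roughly along these lines (a conceptual sketch only, not the actual framework source):

// Rough sketch of what the synchronized wrapper does: take a lock, then
// forward to the inner queue - that forwarded call is the extra layer
// of indirection mentioned above.
public class SynchronizedQueueSketch
{
    private readonly Queue _inner;
    private readonly object _root = new object();

    public SynchronizedQueueSketch(Queue inner)
    {
        _inner = inner;
    }

    public void Enqueue(object item)
    {
        lock (_root) _inner.Enqueue(item);
    }

    public object Dequeue()
    {
        lock (_root) return _inner.Dequeue();
    }
}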
Update 2
I have now benchmarked this, and as predicted in my previous update:
"Queue.Synchronized" is slower than "Queue+lock"
I carried out a single-threaded test as they both use the same locking technique (i.e. lock), so testing pure overhead in a "straight line" seems reasonable.
My benchmark produced the following results for a Release build:
Iterations :10,000,000
Queue+Lock :539.14ms
Queue+Lock :540.55ms
Queue+Lock :539.46ms
Queue+Lock :540.46ms
Queue+Lock :539.75ms
SynchronizedQueue:578.67ms
SynchronizedQueue:585.04ms
SynchronizedQueue:580.22ms
SynchronizedQueue:578.35ms
SynchronizedQueue:578.57ms
Using the following code:
private readonly object _syncObj = new object();

[Test]
public object measure_queue_locking_performance()
{
    const int TestIterations = 5;
    const int Iterations = (10 * 1000 * 1000);

    // Runs a test delegate TestIterations times and prints each elapsed time.
    Action<string, Action> time = (name, test) =>
    {
        for (int i = 0; i < TestIterations; i++)
        {
            TimeSpan elapsed = TimeTest(test, Iterations);
            Console.WriteLine("{0}:{1:F2}ms", name, elapsed.TotalMilliseconds);
        }
    };

    object itemOut, itemIn = new object();
    Queue queue = new Queue();
    Queue syncQueue = Queue.Synchronized(queue);

    // Approach 1: plain Queue with explicit locking around each call.
    Action test1 = () =>
    {
        lock (_syncObj) queue.Enqueue(itemIn);
        lock (_syncObj) itemOut = queue.Dequeue();
    };

    // Approach 2: the synchronized wrapper, which locks internally.
    Action test2 = () =>
    {
        syncQueue.Enqueue(itemIn);
        itemOut = syncQueue.Dequeue();
    };

    Console.WriteLine("Iterations:{0:0,0}\r\n", Iterations);
    time("Queue+Lock", test1);
    time("SynchronizedQueue", test2);

    return itemOut;
}
[SuppressMessage("Microsoft.Reliability", "CA2001:AvoidCallingProblematicMethods", MessageId = "System.GC.Collect")]
private static TimeSpan TimeTest(Action action, int iterations)
{
Action gc = () =>
{
GC.Collect();
GC.WaitForFullGCComplete();
};
Action empty = () => { };
Stopwatch stopwatch1 = Stopwatch.StartNew();
for (int j = 0; j < iterations; j++)
{
empty();
}
TimeSpan loopElapsed = stopwatch1.Elapsed;
gc();
action(); //JIT
action(); //Optimize
Stopwatch stopwatch2 = Stopwatch.StartNew();
for (int j = 0; j < iterations; j++) action();
gc();
TimeSpan testElapsed = stopwatch2.Elapsed;
return (testElapsed - loopElapsed);
}
Upvotes: 18
Reputation: 241651
We can't answer this for you. Only you can answer it for yourself, by getting a profiler and testing both scenarios (Queue.Synchronized vs. lock) on real-world data from your application. It might not even be a bottleneck in your application.
That said, you should probably just be using ConcurrentQueue.
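As a minimal sketch of how that might look for a single producer and single consumer (assuming .NET 4 or later, System.Collections.Concurrent):

ConcurrentQueue<object> queue = new ConcurrentQueue<object>();

// Producer thread
queue.Enqueue(new object());

// Consumer thread - TryDequeue needs no explicit locking by the caller
object item;
if (queue.TryDequeue(out item))
{
    // process item
}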
Upvotes: 6