Reputation: 6105
I'm sending many messages using cloudQueue.BeginAddMessage and EndAddMessage. I limit the number of outstanding Begin calls (those whose End hasn't completed yet) to 500. Yet I'm getting an exception with error code 10048, which indicates socket exhaustion.
Microsoft.WindowsAzure.Storage.StorageException: Unable to connect to the remote server ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted
Every solution I found while searching advises modifying the registry, but since this is planned to run in an Azure worker role, I can't do that.
I have other functions that insert into the Table service; they operate just as fast but don't have any problems. It almost seems as if EndAddMessage doesn't close the connection or something similar (I have a limited understanding of sockets).
My question: is there a bug on Azure's side here? What can I do to fix this, other than artificially slowing the adding of messages down to a crawl?
Here's the test function I use to send messages. In my case, after about 16,500 messages have been added and their callbacks have completed properly and stably, it slows down and after a little while throws the exception mentioned above.
I am sorry for the long code, but this should be copy paste for you to reproduce the problem.
The exception is thrown from the AsyncCallback endAddCallback.
static void Main()
{
    Console.SetBufferSize(205, Int16.MaxValue - 1);

    // Set the maximum number of concurrent connections (12 * 6 in my case)
    ServicePointManager.DefaultConnectionLimit = 12 * Environment.ProcessorCount;
    // Setting UseNagleAlgorithm to true reduces network traffic by buffering small packets
    // and transmitting them as a single packet, but false can significantly reduce latency for small packets.
    ServicePointManager.UseNagleAlgorithm = false;
    // If true, an "Expect: 100-continue" header is sent to ensure a call can be made. This costs
    // an entire round trip to the service point (Azure), so false sends the call directly.
    ServicePointManager.Expect100Continue = false;

    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(__CONN_STRING);
    CloudQueueClient client = storageAccount.CreateCloudQueueClient();
    CloudQueue queue = client.GetQueueReference(__QUEUE_NAME);
    queue.CreateIfNotExists();

    List<Guid> ids = new List<Guid>();
    for (Int32 i = 0; i < 40000; i++)
        ids.Add(Guid.NewGuid());

    SendMessages(queue, ids.Select(id => new CloudQueueMessage(id.ToString())).ToList().AsReadOnly());
}
public static void SendMessages(CloudQueue queue, IReadOnlyCollection<CloudQueueMessage> messages)
{
    List<CloudQueueMessage> toSend = messages.ToList();
    Object exceptionSync = new Object();
    Exception exception = null;
    CountdownEvent cde = new CountdownEvent(toSend.Count);

    AsyncCallback endAddCallback = asyncResult =>
    {
        Int32 endedItem = (Int32)asyncResult.AsyncState;
        try
        {
            queue.EndAddMessage(asyncResult);
            Console.WriteLine("SendMessages: Ended\t\t{0}\t/{1}", endedItem + 1, toSend.Count);
        }
        catch (Exception e)
        {
            Console.WriteLine("SendMessages: Error adding {0}/{1} to queue: \n{2}", endedItem + 1, toSend.Count, e);
            lock (exceptionSync)
            {
                if (exception == null)
                    exception = e;
            }
        }
        finally { cde.Signal(); }
    };

    for (Int32 i = 0; i < toSend.Count; i++)
    {
        lock (exceptionSync)
        {
            if (exception != null)
                throw exception;
        }

        // If the number of begun-but-not-ended calls exceeds the maximum, yield and check again.
        while (true)
        {
            Int32 currentOngoing = i - (cde.InitialCount - cde.CurrentCount);
            if (currentOngoing > 500)
                Thread.Sleep(5);
            else
                break;
        }

        Console.WriteLine("SendMessages: Beginning\t{0}\t/{1}", i + 1, toSend.Count);
        queue.BeginAddMessage(toSend[i], endAddCallback, i);
    }

    cde.Wait();

    if (exception != null)
        throw exception;

    Console.WriteLine("SendMessages: Done.");
}
Upvotes: 3
Views: 2096
Reputation: 6105
This has now been solved in Storage Client Library 2.0.5.1.
Alternatively, there is also a workaround: uninstalling KB2750149.
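If upgrading the library isn't an option, the hotfix can typically be removed with the Windows Update Standalone Installer from an elevated command prompt. A minimal sketch (Windows-only; the /quiet and /norestart switches are optional and added here as assumptions, not part of the answer):

```
wusa /uninstall /kb:2750149 /quiet /norestart
```

A reboot may still be required for the uninstall to take full effect.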
Upvotes: 0
Reputation: 364
The Cloud[Blob|Table|Queue]Client does not maintain state and can be shared across many objects.
This issue is related to ServicePointManager becoming overloaded. Queue stress scenarios tend to exacerbate this behavior since they perform many small requests (in your case a GUID, which is quite small). There are a few mitigations you can do that should alleviate this issue.
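The mitigations themselves aren't spelled out above, so here is a hedged sketch of the usual ServicePoint-level tunings (the question's Main method already sets some of these globally; the limit value of 100 is an illustrative assumption, not a recommendation from this answer):

```
// Global defaults, set before any request is issued:
ServicePointManager.DefaultConnectionLimit = 100;   // raise the per-endpoint connection cap
ServicePointManager.UseNagleAlgorithm = false;      // small queue messages suffer under Nagle buffering
ServicePointManager.Expect100Continue = false;      // avoid the extra 100-continue round trip

// Or tune only the ServicePoint for the storage queue endpoint:
ServicePoint queueServicePoint = ServicePointManager.FindServicePoint(storageAccount.QueueEndpoint);
queueServicePoint.ConnectionLimit = 100;
queueServicePoint.UseNagleAlgorithm = false;
```

Raising the connection limit lets more concurrent requests run without queuing on a saturated ServicePoint, which is what tends to happen under many small, fast requests.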
Also, regarding your comment about table entities not showing the same behavior: the current wire protocol the Table service supports is AtomPub, which can be quite chatty (XML etc.). Therefore a simple entity insert is much larger than a small queue message containing a GUID. Essentially, due to this size difference the table traffic does a better job of utilizing the TCP layer below it, so this isn't a true apples-to-apples comparison.
If these solutions do not work for you, it would be helpful to get a few more pieces of information regarding your account so we can look at this on the back end.
joe
Upvotes: 2
Reputation: 61473
I suspect it's because CloudQueueClient isn't meant for multithreaded (async) access in the way you're using it.
Try recreating the CloudQueue inside SendMessages, like this:

CloudQueueClient client = storageAccount.CreateCloudQueueClient();
CloudQueue queue = client.GetQueueReference(__QUEUE_NAME);
I've read in numerous forums that a CloudXXClient is meant to be used once and disposed; that principle might apply here. There isn't much efficiency to be gained by reusing one, as the client's constructor doesn't send a request to the queue service, and sharing an instance has threading issues.
Upvotes: 0