Reputation: 402
I tried to upload a JSON file containing a list of around 5,000 JSON documents to Azure Cosmos DB with the Azure Migration Tool, and that worked: all 5,000 items were uploaded.
However, when I try to do the same from a .NET application using the following code, the upload fails and the Azure portal shows an error message.
Code:
public static async Task BulkImport()
{
    string json = File.ReadAllText(@"C:\Temp.json");
    List<StudentInfo> lists = JsonConvert.DeserializeObject<List<StudentInfo>>(json);

    CosmosClientOptions options = new CosmosClientOptions() { ConnectionMode = ConnectionMode.Gateway, AllowBulkExecution = true };
    CosmosClient cosmosClient = new CosmosClient(EndpointUrl, AuthorizationKey, options);
    try
    {
        Database database = await cosmosClient.CreateDatabaseIfNotExistsAsync(DatabaseName);
        Console.WriteLine(database.Id);
        Container container = await database.CreateContainerIfNotExistsAsync(ContainerName, "/id");
        Console.WriteLine(container.Id);

        // Fire one CreateItemAsync task per item, then await them all.
        List<Task> tasks = new List<Task>();
        foreach (StudentInfo item in lists)
        {
            // The continuation's generic type must match the item type
            // (StudentInfo, not FunctionInfo), or the code will not compile.
            tasks.Add(container.CreateItemAsync(item, new PartitionKey(item.id))
                .ContinueWith((Task<ItemResponse<StudentInfo>> task) =>
                {
                    Console.WriteLine("Status: " + task.Result.StatusCode + " Resource: " + task.Result.Resource.id);
                }));
        }
        await Task.WhenAll(tasks);
    }
    catch (Exception ex)
    {
        Console.WriteLine("Exception = " + ex.Message);
    }
}
Message :
I tried running the code with a list containing only 100 JSON documents, and it works fine!
Please help me with this. Thanks in advance!
Upvotes: 1
Views: 1348
Reputation: 5549
It is not an error, just a warning. You were creating documents with too many concurrent requests, which consumed too many Request Units (RUs).
The Azure Cosmos DB API probably implements the throttling pattern, so when you hit your provisioned limit, your requests get throttled.
Azure also monitors this event and surfaces the notification on the portal. You can check the RUs consumed on the Metrics page, and you can increase the provisioned throughput to allow more concurrency.
But if you do not want to increase throughput, you may consider the following:
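For example, here is a sketch of that idea, assuming the .NET SDK v3: let the client retry throttled (HTTP 429) responses via the SDK's rate-limit retry options, and bound the number of in-flight writes with a SemaphoreSlim. The retry values and the concurrency limit of 20 are arbitrary illustration values, not recommendations; `container`, `lists`, and `StudentInfo` are taken from the question's code.

```csharp
// Sketch only: assumes Microsoft.Azure.Cosmos (v3) and the question's
// container / lists / StudentInfo. Values here are illustrative.
CosmosClientOptions options = new CosmosClientOptions
{
    AllowBulkExecution = true,
    // Make the SDK retry 429 (rate-limited) responses instead of failing.
    MaxRetryAttemptsOnRateLimitedRequests = 10,
    MaxRetryWaitTimeOnRateLimitedRequests = TimeSpan.FromSeconds(60)
};

// Bound concurrency so 5,000 items do not hit the RU budget all at once.
SemaphoreSlim throttle = new SemaphoreSlim(20);
List<Task> tasks = new List<Task>();
foreach (StudentInfo item in lists)
{
    await throttle.WaitAsync();
    tasks.Add(Task.Run(async () =>
    {
        try
        {
            await container.CreateItemAsync(item, new PartitionKey(item.id));
        }
        finally
        {
            throttle.Release();
        }
    }));
}
await Task.WhenAll(tasks);
```

This trades raw speed for staying under the provisioned throughput, which is usually preferable to retrying large bursts of throttled requests.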
Upvotes: 1