Reputation: 343
I have checked multiple paging threads on Stack Overflow, but I was not able to make a functional replica of the code.
I know that in the following case Graph returns 200 items per page, and if I want to iterate over
all of the items I need to move to the next page. Please check the following code and tell me what I am doing wrong.
The call to NextPageRequest.GetAsync() always throws an exception, which lands in the catch block.
The result of this code is that only 200 of 3xx items are removed.
var deleteListItems = new List<Microsoft.Graph.ListItem>();
var awaitTasks = new List<Task>();
var deleteListItemsPage = await graphServiceClient
    .Sites[siteUrl]
    .Lists[listName]
    .Items
    .Request(deleteQueryOptions)
    .GetAsync();

deleteListItems.AddRange(deleteListItemsPage.CurrentPage);

do
{
    foreach (var deleteItem in deleteListItemsPage)
    {
        awaitTasks.Add(graphServiceClient
            .Sites[siteUrl]
            .Lists[listName]
            .Items[deleteItem.Id]
            .Request()
            .DeleteAsync());
    }

    try
    {
        Task.WaitAll(awaitTasks.ToArray());
    }
    catch
    {
        Console.WriteLine("ERROR executing the tasks");
    }

    Console.WriteLine("Another PAGE of List Items successfully deleted");

    try
    {
        deleteListItemsPage = await deleteListItemsPage.NextPageRequest.GetAsync();
        deleteListItems.AddRange(deleteListItemsPage.CurrentPage);
    }
    catch
    {
        Console.WriteLine("There is no NextPageRequest for deleting items or an ERROR occurred.");
    }
} while (deleteListItemsPage.NextPageRequest != null);
Upvotes: 0
Views: 1260
Reputation: 33132
A few observations:
The Task.WaitAll pattern is technically correct, but in practice it is prone to human error. You should always use the async/await pattern unless there is a very compelling technical reason to do otherwise. It will save you a lot of headaches.
You should never use an empty catch { } clause unless you want to swallow any exception without warning or context (for the record, you don't want this). At the very least, you should catch the generic System.Exception and assign it to a variable (catch (Exception e) { }), even if the only thing you do with it is log the exception to the console (catch (Exception e) { Console.WriteLine(e); }).
When you know an object has the potential to be null, you should always check for null rather than relying on a try/catch. The try/catch pattern relies on exceptions, which are a lot more expensive than a simple null check.
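Applied to the paging code in the question, the check is trivial; this sketch assumes the same deleteListItemsPage object as above:

```csharp
// Cheap: a simple null check decides whether another page exists,
// instead of letting NextPageRequest.GetAsync() throw on null
// and catching the (expensive) exception
if (deleteListItemsPage.NextPageRequest != null)
{
    deleteListItemsPage = await deleteListItemsPage.NextPageRequest.GetAsync();
}
```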
For a scenario like this, a better pattern would be something along these lines:
private async Task DeleteItems()
{
    // Construct the HTTP request but DO NOT execute it just yet
    var deleteListItemsRequest = graphServiceClient
        .Sites[siteUrl]
        .Lists[listName]
        .Items
        .Request(deleteQueryOptions);

    // This loop is for iterating over the pages
    do
    {
        // Populate the page by executing the HTTP request
        var deleteListItemsPage = await deleteListItemsRequest.GetAsync();

        // Iterate over the current page
        foreach (var listItem in deleteListItemsPage.CurrentPage)
        {
            // Execute the delete and wait for the response
            await graphServiceClient
                .Sites[siteUrl]
                .Lists[listName]
                .Items[listItem.Id]
                .Request()
                .DeleteAsync();
        }

        // Check for additional pages
        if (deleteListItemsPage.NextPageRequest != null)
        {
            // If we have an additional page, assign it to the
            // same HTTP request object. Again, we DO NOT execute
            // it yet. It will get executed at the top of the loop.
            deleteListItemsRequest = deleteListItemsPage.NextPageRequest;
        }
        else
        {
            // If we don't have an additional page, set the current
            // request to null so we can use this to trigger an exit
            // from the loop
            deleteListItemsRequest = null;
        }
        // Check if we have a Graph request; if not, break out of the loop
    } while (deleteListItemsRequest != null);
}
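If deleting each item one await at a time is too slow, the first observation can be combined with this pattern by collecting the deletes for a page and awaiting them together. A sketch, assuming the same graphServiceClient, siteUrl, and listName as above (and a using System.Linq; directive):

```csharp
// Inside the do/while loop, replace the per-item await with:
var pageDeleteTasks = deleteListItemsPage.CurrentPage
    .Select(listItem => graphServiceClient
        .Sites[siteUrl]
        .Lists[listName]
        .Items[listItem.Id]
        .Request()
        .DeleteAsync());

// Await the whole page of deletes at once; unlike Task.WaitAll,
// this does not block the calling thread
await Task.WhenAll(pageDeleteTasks);
```

Be aware that firing off a full page of requests at once can run into Graph throttling (HTTP 429), so the sequential version above is the safer default.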
Upvotes: 1