Reputation: 1293
I am using Azure Functions with queue, timer, and blob triggers. I publish the code from Visual Studio and run it in the portal. It runs for days (say, over a weekend) on the development site.
The timer function picks up records from the database and adds them to a queue; for each queue message, requests go out and pull the data; the data is pushed to blob storage, and the files in blob storage then get processed... all good.
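For reference, the pipeline has roughly this shape (a minimal sketch in the Azure Functions Python v2 programming model; the schedule, queue name, and container name are placeholders, not my real configuration):

    import azure.functions as func

    app = func.FunctionApp()

    # Timer trigger: look up pending records and enqueue one message each.
    @app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer")
    @app.queue_output(arg_name="msg", queue_name="work-items",
                      connection="AzureWebJobsStorage")
    def enqueue_records(timer: func.TimerRequest, msg: func.Out[str]) -> None:
        msg.set("record-id-placeholder")  # would come from the database

    # Queue trigger: pull the data for a record and push it to blob storage.
    @app.queue_trigger(arg_name="item", queue_name="work-items",
                       connection="AzureWebJobsStorage")
    @app.blob_output(arg_name="outblob", path="incoming/{queueTrigger}.json",
                     connection="AzureWebJobsStorage")
    def fetch_and_store(item: func.QueueMessage, outblob: func.Out[str]) -> None:
        outblob.set('{"data": "placeholder"}')

    # Blob trigger: process each file that lands in the container.
    @app.blob_trigger(arg_name="blob", path="incoming/{name}",
                      connection="AzureWebJobsStorage")
    def process_blob(blob: func.InputStream) -> None:
        print(f"processing {blob.name}, {blob.length} bytes")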
On Monday morning, I stop the process in the portal.
I make some changes to the code and test by running it from Visual Studio; all of the blob storage, queue, and timer configurations point to the same endpoints.
It looks like the Visual Studio run goes back in time, looks up records in blob storage, and processes them again, possibly going back to the last time the Visual Studio project ran.
My question is: when the blob storage trigger runs, does it place a mark on processed blobs that depends on where the project is running from, or is the mark simply a record of which blobs have been processed and which have not?
It looks like every time I run the project from Visual Studio, blobs that were already processed via the portal get processed again. Any suggestions?
Upvotes: 0
Views: 302
Reputation: 29940
This is a known issue; you can refer to this similar Stack Overflow question, and there is also an open issue in the Azure Functions GitHub repository.
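For background on why this happens (as I understand the trigger's bookkeeping): the blob trigger records each blob it has processed as a "blob receipt" in the azure-webjobs-hosts container of the AzureWebJobsStorage account, under a path that includes the host ID. A local Visual Studio run and the deployed function app use different host IDs, so neither sees the other's receipts, and the blobs get picked up again. You can inspect the receipts yourself; here is a minimal sketch using the azure-storage-blob Python package (it assumes the AzureWebJobsStorage connection string is available as an environment variable, as it is in local Functions development):

    import os

    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string(
        os.environ["AzureWebJobsStorage"])

    # Receipts live under azure-webjobs-hosts/blobreceipts/<host id>/...;
    # a different <host id> per environment means a separate set of receipts.
    receipts = service.get_container_client("azure-webjobs-hosts")
    for blob in receipts.list_blobs(name_starts_with="blobreceipts/"):
        print(blob.name)

If the same blobs show up under two different host-ID folders, that confirms the local and portal runs are keeping separate records of what has been processed.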
Upvotes: 1