Reputation: 1236
I have a function app written in C# in Azure that is triggered by an HttpTrigger.
I am trying to log several custom events to capture the interesting points in time for analyzing performance.
What I have done is create the TelemetryClient in the static constructor:
private static readonly TelemetryClient telemetryClient;

static ThumbnailGenerator()
{
    telemetryClient = new TelemetryClient(TelemetryConfiguration.CreateDefault());
}
And then in the function:
[FunctionName("Upload")]
[StorageAccount("AzureWebJobsStorage")]
public static async Task<IActionResult> Upload(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "upload/{name}")] HttpRequest req,
string name,
ILogger log,
ExecutionContext context)
{
var evt = new EventTelemetry("Upload function called");
evt.Context.User.Id = name;
telemetryClient.TrackEvent(evt);
telemetryClient.Flush();
....
DateTime start = DateTime.UtcNow;
// Log a custom dependency in the dependencies table.
var dependency = new DependencyTelemetry
{
Name = "Upload-Image-Operation",
Timestamp = start,
Duration = DateTime.UtcNow - start,
Success = true
};
telemetryClient.TrackDependency(dependency);
telemetryClient.Flush();
return new OkObjectResult(name + "Uploaded successfully.");
}
Application Insights correctly shows the default RequestTelemetry and the traces for the function every time. However, the custom event and the DependencyTelemetry are very unstable: sometimes they show up in Application Insights and sometimes they do not appear at all.
I did a lot of research and added calls like:
telemetryClient.Flush();
But the instability is still more or less the same.
The libraries I am using are:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<AzureFunctionsVersion>v3</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Azure.Storage.Blobs" Version="12.9.0" />
<PackageReference Include="Microsoft.ApplicationInsights" Version="2.14.0" />
<PackageReference Include="Microsoft.Azure.WebJobs.Extensions.Storage" Version="5.0.0-beta.2" />
<PackageReference Include="Microsoft.Azure.WebJobs.Logging.ApplicationInsights" Version="3.0.25" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />
<PackageReference Include="SixLabors.ImageSharp" Version="1.0.3" />
<PackageReference Include="System.Diagnostics.DiagnosticSource" Version="5.0.1" />
</ItemGroup>
<ItemGroup>
<None Update="host.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
<None Update="local.settings.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
<CopyToPublishDirectory>Never</CopyToPublishDirectory>
</None>
</ItemGroup>
</Project>
Does anyone know why? It seems Azure is really unstable and buggy in this area. Please give me some tips.
Upvotes: 2
Views: 1934
Reputation: 29711
Make sure your telemetry configuration has the proper instrumentation key set. I am not sure that TelemetryConfiguration.CreateDefault() gets the right value.
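If you do keep constructing the client yourself, a minimal sketch (assuming the standard APPINSIGHTS_INSTRUMENTATIONKEY application setting is configured for your function app) would be to set the key explicitly instead of relying on the defaults:

var configuration = TelemetryConfiguration.CreateDefault();
// Assumption: the APPINSIGHTS_INSTRUMENTATIONKEY app setting is present.
configuration.InstrumentationKey =
    Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY");
telemetryClient = new TelemetryClient(configuration);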
Also, I suggest having the TelemetryClient injected through Dependency Injection; it is already available out of the box. That way you do not have to create the instance yourself, and you do not have to worry about setting the instrumentation key correctly:
[FunctionName("Upload")]
[StorageAccount("AzureWebJobsStorage")]
public static async Task<IActionResult> Upload(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = "upload/{name}")] HttpRequest req,
string name,
ILogger log,
TelemetryClient telemetryClient,
ExecutionContext context)
{
telemetryClient.XXX();
...
}
Flushing the client should not be necessary; it is only needed on termination, and even then you should wait a moment afterwards because sending the telemetry is an asynchronous process.
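If you do flush manually, for example right before the host shuts down, a small sketch of that wait-after-flush pattern (the delay length here is an arbitrary choice) could look like this:

telemetryClient.Flush();
// Flush() only hands the buffered items to the channel; transmission happens in the
// background, so give it a moment before the process exits.
await Task.Delay(TimeSpan.FromSeconds(5));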
Upvotes: 3
Reputation: 112795
It's possible that you are running into the Sampling feature of Application Insights.
[Sampling] is the recommended way to reduce telemetry traffic, data costs, and storage costs, while preserving a statistically correct analysis of application data.
This page also has some more details that are specific to Azure Functions:
When the rate of incoming executions exceeds a specified threshold, Application Insights starts to randomly ignore some of the incoming executions.
This sounds like the behavior that you're experiencing. You can use the host.json file to configure which types of telemetry are excluded from sampling; try adding Event to excludedTypes:
{
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "maxTelemetryItemsPerSecond": 20,
        "excludedTypes": "Request;Exception;Event"
      }
    }
  }
}
Upvotes: 1