I'm currently using a .NET 6.0 isolated Azure Function with an HTTP trigger.
Because the Activity is not being populated, I've added middleware that creates an Activity based on the W3C traceparent header.
My expectation based on MS docs is that when using Application Insights, the operation_ParentId would relate directly to the ParentSpanId of an Activity where logging is done.
Yet that is not what I'm seeing. Here is what I observe:
Application A sends a request using traceparent = 00-3abe9f15e940badc5f1521e6eb1eb411-bfd30439c918c783-00
In middleware of application B, an activity is started and a message is logged. I can also validate that the ParentId of the activity is equal to 00-3abe9f15e940badc5f1521e6eb1eb411-bfd30439c918c783-00. The ParentSpanId is equal to bfd30439c918c783
using var requestActivity = new Activity(context.FunctionDefinition.Name);
requestActivity.SetParentId(traceParent);
requestActivity.Start();
_logger.LogInformation("Invoking '{Name}'", context.FunctionDefinition.Name);
In Application Insights I see the OperationId being equal to the W3C trace-id 3abe9f15e940badc5f1521e6eb1eb411, as expected. However, operation_ParentId is a span ID I've not seen before. It is neither requestActivity.SpanId nor requestActivity.ParentSpanId.
What is happening that I do not understand? Does Application Insights not use the active Activity when logging?
My app configuration:
var host = new HostBuilder()
    .ConfigureFunctionsWorkerDefaults(worker =>
    {
        worker.UseMiddleware<TracingMiddleware>();
    })
    .ConfigureServices(collection =>
    {
    })
    .ConfigureLogging(x => x.AddApplicationInsights())
    .Build();

host.Run();
My middleware function:
public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
{
    using var requestActivity = new Activity(context.FunctionDefinition.Name);
    SetParentId(context, requestActivity);
    requestActivity.Start();

    _logger.LogInformation("Invoking '{Name}'", context.FunctionDefinition.Name);

    var t = Activity.Current;
    System.Console.WriteLine();
    System.Console.WriteLine($"Activity.TraceId: {t.TraceId}");
    System.Console.WriteLine($"Activity.ParentId: {t.ParentId}");
    System.Console.WriteLine($"Activity.SpanId: {t.SpanId}");

    await next(context);

    var statusCode = (context.Items != null)
        ? context.GetHttpResponseData()?.StatusCode
        : System.Net.HttpStatusCode.OK;

    _logger.LogInformation(
        "Executed '{Name}', Result {Result}, Duration = {Duration}ms",
        context.FunctionDefinition.Name,
        statusCode,
        (DateTime.UtcNow - requestActivity.StartTimeUtc).TotalMilliseconds);

    requestActivity.Stop();
}
As per the MS docs, the Operation ID is equal to the trace-id, and the operation_ParentId is a combination of the trace-id and the parent-id, in the following format:
The operation_ParentId field is in the format
<trace-id>.<parent-id>, where both trace-id and parent-id are
taken from the trace header that was passed in the request.
I have noticed that it is not actually the same. I have tried many possible ways, and I end up with an operation_Id that is not equal to the trace-id and an operation_ParentId that is not equal to <trace-id>.<parent-id>.
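For reference, this is what I expected to see based on the documented format, built from the Activity in the question (just a minimal sketch; the variable names are mine):

using System;
using System.Diagnostics;

// Minimal sketch: compose the operation_Id / operation_ParentId values that the
// quoted docs describe, from the Activity started in the middleware.
var activity = Activity.Current;                       // the middleware's request activity
var expectedOperationId = activity.TraceId.ToString(); // the W3C trace-id
var expectedOperationParentId = $"{activity.TraceId}.{activity.ParentSpanId}"; // <trace-id>.<parent-id>
Console.WriteLine($"expected operation_Id:       {expectedOperationId}");
Console.WriteLine($"expected operation_ParentId: {expectedOperationParentId}");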
In my middleware I have logged the current Activity context like below:
ILogger logger = context.GetLogger<MyCustomMiddleware>();
//logger.LogInformation("From function: {message}", message);
using var requestActivity = new Activity(context.FunctionDefinition.Name);
//t.SetParentId(context, requestActivity);
requestActivity.Start();
logger.LogInformation("Invoking '{Name}'", context.FunctionDefinition.Name);
//requestActivity.SetParentId(Convert.ToString(requestActivity.RootId));
requestActivity.SetParentId(requestActivity.TraceId, requestActivity.SpanId, ActivityTraceFlags.None);
var t = Activity.Current;
//Activity.SetParentId(context, requestActivity,);
//t.SetParentId(context, requestActivity)
logger.LogInformation($"Activity.TraceId: {t.TraceId}");
logger.LogInformation($"Activity.ParentId: {t.ParentId}");
logger.LogInformation($"Activity.SpanId: {t.SpanId}");
logger.LogInformation($"Activity.Id: {t.Id}");
logger.LogInformation($"Activity.ParentSpanId: {t.ParentSpanId}");
logger.LogInformation($"Activity.RootId: {t.RootId}");
logger.LogInformation($"Activity.Parent: {t.Parent}");
Result I got.
Refer here for one of my workarounds with more details.
Related
In a signup custom policy, after the user is created, I want to add him or her to a group. I tried to do it the same way I get the group membership in my signin policy, with a custom Azure Function that calls the Graph API.
For testing purposes, I first tried calling the Graph API with Postman to see if it works. I got it working by following the docs and came up with this query:
POST https://graph.microsoft.com/v1.0/groups/{{b2c-beneficiaire-group-id}}/members/$ref

Body:
{
    "@odata.id": "https://graph.microsoft.com/v1.0/users/{{b2c-user-id}}"
}
And that works just fine. I get a 204 response and the user is in fact now a member of the group.
Now here's the part where I try to replicate it in my Azure Function:
var url = $"https://graph.microsoft.com/v1.0/groups/{groupId}/members/$ref)";
var keyOdataId = "#odata.id";
var valueODataId = $"https://graph.microsoft.com/v1.0/users/{userId}";
var bodyObject = new List<KeyValuePair<string, string>>
{
new KeyValuePair<string, string>(keyOdataId, valueODataId)
};
var jsonData = $#"{{ ""{keyOdataId}"": ""{valueODataId}"" }}";
var groupBody = new StringContent(jsonData, Encoding.UTF8, "application/json");
log.LogInformation($"{url} + body:{await groupBody.ReadAsStringAsync()}");
using (var response = await httpClient.PostAsync(url, groupBody))
{
log.LogInformation("HttpStatusCode=" + response.StatusCode.ToString());
if (!response.IsSuccessStatusCode)
{
throw new InvalidOperationException($"{response.StatusCode} - Reason:{response.ReasonPhrase}. Content:{await response.Content.ReadAsStringAsync()}");
}
}
I've tried a few variations (with HttpRequest and other things) but I always end up with an OData error:
"BadRequest","message":"The request URI is not valid. Since the segment 'members' refers to a collection,
this must be the last segment in the request URI or it must be followed by an function or action
that can be bound to it otherwise all intermediate segments must refer to a single resource."
From what I see it is related to the OData query (the $ref part). Do you have any idea what I have to do to make it work?
It looks like a typo in your URL, which ends with a stray closing parenthesis:
var url = $"https://graph.microsoft.com/v1.0/groups/{groupId}/members/$ref)";
I want to schedule a Splunk report to an Azure webhook and persist it into Cosmos DB (after some processing). This tutorial gave me some insight on how to process and persist data into Cosmos DB via Azure Functions (in Java). To solve the next part of the puzzle, I'm reaching out for some advice on how to go about:
How do I set up and host a webhook on Azure?
Should I set up an HttpTrigger inside the @EventHubOutput function and deploy it to the function app? Or should I use the webhook from Azure Event Grid? (I'm not clear on how to do this.) I'm NOT looking to stream any heavy volumes of data and want to keep the consumption cost low. So, which route should I take here? Any pointers to tutorials would help.
How do I handle the webhook data processing on @EventHubOutput (referring to the Java example in the tutorial)? What setup and configuration do I need to do here? Any working examples would be of help.
I ended up using just the @HttpTrigger and binding the output using @CosmosDBOutput to persist the data. Something like this; I would like to know if there are any better approaches.
public class Function {
    @FunctionName("PostData")
    public HttpResponseMessage run(
            @HttpTrigger(
                name = "req",
                methods = {HttpMethod.GET, HttpMethod.POST},
                authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            @CosmosDBOutput(
                name = "databaseOutput",
                databaseName = "SplunkDataSource",
                collectionName = "loginData",
                connectionStringSetting = "CosmosDBConnectionString")
            OutputBinding<String> document,
            final ExecutionContext context) {

        context.getLogger().info("Java HTTP trigger processed a request.");

        // Parse the payload
        String data = request.getBody().get();
        if (data == null) {
            return request.createResponseBuilder(HttpStatus.BAD_REQUEST).body(
                "Please pass a name on the query string or in the request body").build();
        } else {
            // Write the data to the Cosmos document.
            document.setValue(data);
            context.getLogger().info("Persisting payload to db :" + data);
            return request.createResponseBuilder(HttpStatus.OK).body(data).build();
        }
    }
}
I'm looking to get proper correlation in our distributed event logging. For our web applications this seems to be automatic. Example of correlated logs from one of our App Services APIs:
However, for our other (non-ASP, non-web-app) services, where we use Log4Net and the App Insights appender, our logs are not correlated. I tried following the instructions here: https://learn.microsoft.com/en-us/azure/azure-monitor/app/correlation
Even after adding unique operation_Id attributes to each operation, we're not seeing log correlation (I also tried "Operation Id"). Example of a non-correlated log entry:
Any help on how to achieve this using log4net would be appreciated.
Cheers!
Across services, correlation IDs are mostly propagated through headers. When AI is enabled for a web application, it reads the IDs/context from the incoming headers and then updates the outgoing headers with the appropriate IDs/context. Within a service, the operation is tracked with an Activity object, and every telemetry item emitted is associated with this Activity, thus sharing the necessary correlation IDs.
In the case of Service Bus / Event Hub communication, propagation is also supported in recent versions (IDs/context propagate as metadata).
If the service is not web-based and AI's automatic correlation propagation is not working, you may need to manually get the incoming ID information from whatever metadata exists, restore/initiate an Activity, and start an AI operation with this Activity. Any telemetry item generated in the scope of that Activity will then get the proper IDs and will be part of the overarching trace. With that, if telemetry is generated from a Log4net trace that executed in the scope of the AI operation context, that telemetry should get the right IDs.
Code sample to access correlation from headers:
public class ApplicationInsightsMiddleware : OwinMiddleware
{
    // you may create a new TelemetryConfiguration instance, reuse one you already have
    // or fetch the instance created by Application Insights SDK.
    private readonly TelemetryConfiguration telemetryConfiguration = TelemetryConfiguration.CreateDefault();
    private readonly TelemetryClient telemetryClient = new TelemetryClient(telemetryConfiguration);

    public ApplicationInsightsMiddleware(OwinMiddleware next) : base(next) {}

    public override async Task Invoke(IOwinContext context)
    {
        // Let's create and start RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = $"{context.Request.Method} {context.Request.Uri.GetLeftPart(UriPartial.Path)}"
        };

        // If there is a Request-Id received from the upstream service, set the telemetry context accordingly.
        if (context.Request.Headers.ContainsKey("Request-Id"))
        {
            var requestId = context.Request.Headers.Get("Request-Id");
            // Get the operation ID from the Request-Id (if you follow the HTTP Protocol for Correlation).
            requestTelemetry.Context.Operation.Id = GetOperationId(requestId);
            requestTelemetry.Context.Operation.ParentId = requestId;
        }

        // StartOperation is a helper method that allows correlation of
        // current operations with nested operations/telemetry
        // and initializes start time and duration on telemetry items.
        var operation = telemetryClient.StartOperation(requestTelemetry);

        // Process the request.
        try
        {
            await Next.Invoke(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            telemetryClient.TrackException(e);
            throw;
        }
        finally
        {
            // Update status code and success as appropriate.
            if (context.Response != null)
            {
                requestTelemetry.ResponseCode = context.Response.StatusCode.ToString();
                requestTelemetry.Success = context.Response.StatusCode >= 200 && context.Response.StatusCode <= 299;
            }
            else
            {
                requestTelemetry.Success = false;
            }

            // Now it's time to stop the operation (and track telemetry).
            telemetryClient.StopOperation(operation);
        }
    }

    public static string GetOperationId(string id)
    {
        // Returns the root ID from the '|' to the first '.' if any.
        int rootEnd = id.IndexOf('.');
        if (rootEnd < 0)
            rootEnd = id.Length;

        int rootStart = id[0] == '|' ? 1 : 0;
        return id.Substring(rootStart, rootEnd - rootStart);
    }
}
Code sample for manual correlated operation tracking in isolation:
async Task BackgroundTask()
{
    var operation = telemetryClient.StartOperation<DependencyTelemetry>(taskName);
    operation.Telemetry.Type = "Background";
    try
    {
        int progress = 0;
        while (progress < 100)
        {
            // Process the task.
            telemetryClient.TrackTrace($"done {progress++}%");
        }
        // Update status code and success as appropriate.
    }
    catch (Exception e)
    {
        telemetryClient.TrackException(e);
        // Update status code and success as appropriate.
        throw;
    }
    finally
    {
        telemetryClient.StopOperation(operation);
    }
}
Please note that the most recent versions of the Application Insights SDK are switching to the W3C correlation standard, so the header names and expected format will differ, as per the W3C specification.
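For example, if an upstream service sends the W3C traceparent header rather than Request-Id, the equivalent of GetOperationId above would be to pull the trace-id and the parent span id out of that header. A minimal sketch (plain string parsing, not an SDK API), assuming the standard 00-<trace-id>-<span-id>-<flags> layout:

// Rough sketch: parse a W3C traceparent header of the form
// "00-<32 hex trace-id>-<16 hex parent-span-id>-<2 hex flags>".
public static (string OperationId, string ParentSpanId)? ParseTraceParent(string traceParent)
{
    if (string.IsNullOrWhiteSpace(traceParent))
        return null;

    var parts = traceParent.Split('-');
    if (parts.Length < 4)
        return null;

    // parts[1] is the trace-id, parts[2] is the parent span id.
    return (parts[1], parts[2]);
}

The two values could then be assigned to requestTelemetry.Context.Operation.Id and requestTelemetry.Context.Operation.ParentId in the same place as the Request-Id handling above.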
I have implemented REST API calls using a standalone C# console application. The API returns JSON, which I'm deserializing and then storing in the database.
Now I want to implement the entire logic on the Azure platform so that it can be invoked by passing a start date, an end date, and a store location (it should run for three locations). Below is the code:
static void Main()
{
    MakeInventoryRequest();
}

static async void MakeInventoryRequest()
{
    using (var client = new HttpClient())
    {
        var queryString = HttpUtility.ParseQueryString(string.Empty);

        // Request headers
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "5051fx6yyy124hhfyuscf34f57ce9");

        // Request parameters
        queryString["query.locationNumbers"] = "4638";
        queryString["availableFromDate"] = "2019-01-01";
        queryString["availableToDate"] = "2019-03-07";

        var uri = "https://api-test.location.cloud/api/v1/inventory?" + queryString;

        using (var request = new HttpRequestMessage(HttpMethod.Get, uri))
        using (var response = await client.SendAsync(request))
        {
            var stream = await response.Content.ReadAsStreamAsync();
            if (response.IsSuccessStatusCode == true)
            {
                List<Inventory> l1 = DeserializeJsonFromStream<List<Inventory>>(stream);
                InsertInventoryRecords(l1);
            }
            if (response.IsSuccessStatusCode == false)
            {
                throw new Exception("Error Response Code: " + response.StatusCode.ToString() + "Content is: " + response.Content.ReadAsStringAsync().Result.ToString());
            }
        }
    }
}
Please suggest the best possible design using Azure components.
With the information in hand, I think you have multiple options; you need to find out which works best for you. You can use a Cloud Service to host the console app (you will have to change it to a worker role; Visual Studio will help you convert it). I am not sure about the load you are expecting, but you can always increase or decrease the instances, and they can be deployed to different geographies.
I see that you are persisting the data; for that you can use one of the many SQL offerings. For invoking the REST API you can also use Azure Functions or ADF, for example along the lines of the sketch below.
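For instance, a rough sketch of wrapping the existing call in an HTTP-triggered Azure Function (the function name, query parameters, and the InventoryApiKey app setting are placeholders, not from your code) could look like this:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class InventoryFunction
{
    private static readonly HttpClient client = new HttpClient();

    [FunctionName("FetchInventory")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Dates and location come in as query parameters instead of being hard-coded.
        string location = req.Query["location"];
        string fromDate = req.Query["availableFromDate"];
        string toDate = req.Query["availableToDate"];

        var uri = "https://api-test.location.cloud/api/v1/inventory" +
                  $"?query.locationNumbers={location}" +
                  $"&availableFromDate={fromDate}&availableToDate={toDate}";

        using var request = new HttpRequestMessage(HttpMethod.Get, uri);
        request.Headers.Add("Ocp-Apim-Subscription-Key",
            Environment.GetEnvironmentVariable("InventoryApiKey"));

        using var response = await client.SendAsync(request);
        if (!response.IsSuccessStatusCode)
            return new StatusCodeResult((int)response.StatusCode);

        var json = await response.Content.ReadAsStringAsync();
        log.LogInformation("Fetched inventory for location {Location}", location);

        // Deserialize and insert into the database here, as in the console app,
        // and call the function once per location (or loop over the three locations).
        return new OkObjectResult(json);
    }
}

You could also put this behind a timer trigger if it needs to run on a schedule rather than on demand.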
Please feel free to comment if you want any more details on the same.
I am trying to integrate Azure App Insights with an Azure Function App (HTTP triggered). I want to add my own keys and values in the "customDimensions" object of the requests table. Right now it only shows the following:
On query
requests
| where iKey == "449470fb-****" and id == "5e17e23e-****"
I get this:
LogLevel: Information
Category: Host.Results
FullName: Functions.FTAID
StartTime: 2017-07-14T14:24:10.9410000Z
param__context: ****
HttpMethod: POST
param__req: Method: POST, Uri: ****
Succeeded: True
TriggerReason: This function was programmatically called via the host APIs.
EndTime: 2017-07-14T14:24:11.6080000Z
I want to add more key values such as:
EnvironmentName: Development
ServiceLine: Business
Based on this answer, I implemented the ITelemetryInitializer interface as follows:
public class CustomTelemetry : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        var requestTelemetry = telemetry as RequestTelemetry;
        if (requestTelemetry == null) return;

        requestTelemetry.Context.Properties.Add("EnvironmentName", "Development");
    }
}
Here is what the run.csx code for the Azure Function App looks like:
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ExecutionContext context, TraceWriter log)
{
    // Initialize the App Insights Telemetry
    TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
    TelemetryConfiguration.Active.TelemetryInitializers.Add(new CustomTelemetry());
    TelemetryClient telemetry = new TelemetryClient();

    var jsonBody = await req.Content.ReadAsStringAsync();
    GetIoItemID obj = new GetIoItemID();
    JArray output = obj.GetResponseJson(jsonBody, log, telemetry);

    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Content = new StringContent(output.ToString(), System.Text.Encoding.UTF8, "application/json");
    return response;
}
But this did not work...
I believe that since you're creating the TelemetryClient yourself in this example, you don't need to bother with the telemetry initializer; you could just do
var telemetry = new TelemetryClient();
telemetry.Context.Properties["EnvironmentName"] = "Development";
directly, and everything sent by that instance of that telemetry client will have those properties set.
You'd only need the telemetry initializer if you don't have control over who's creating the telemetry client and want to touch every item of telemetry created anywhere.
I don't know how that TelemetryClient instance gets used downstream in Azure Functions, though, so I'm not entirely positive.
Edit: the Azure Functions post about this says:
We’ll be working hard to get Application Insights ready for production
workloads. We’re also listening for any feedback you have. Please file
it on our GitHub. We’ll be adding some new features like better
sampling controls and automatic dependency tracking soon. We hope
you’ll give it a try and start to gain more insight into how your
Functions are behaving. You can read more about how it works at
https://aka.ms/func-ai
and the example from that func-ai link does a couple of things:
1) it creates the telemetry client statically up front once (instead of in each call to the function)
private static TelemetryClient telemetry = new TelemetryClient();
private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
and inside the function it is doing:
telemetry.Context.Operation.Id = context.InvocationId.ToString();
to properly correlate events you might create with your telemetry client, so you might want to do that too.
2) it appears that you can use the telemetry client you create, but they create their own telemetry client and send data there, so anything you touch in your telemetry client's context isn't seen by Azure Functions itself.
So, to me that leads to something you can try:
Add a static constructor in your class, and in that static constructor, do the telemetry initializer registration you were doing above. Possibly this gets your telemetry initializer added to the configuration before Azure Functions starts creating its request and calling your method. Something like the sketch below:
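One way to express that from run.csx is a small helper class with a static constructor (a rough sketch; the class and method names are placeholders):

public static class TelemetryBootstrap
{
    // Runs once, before the first use of the class, so the initializer is
    // registered before any request telemetry is created.
    static TelemetryBootstrap()
    {
        TelemetryConfiguration.Active.TelemetryInitializers.Add(new CustomTelemetry());
    }

    // Calling this no-op from Run() forces the static constructor to execute.
    public static void EnsureInitialized() { }
}

Then call TelemetryBootstrap.EnsureInitialized(); at the top of Run instead of adding the initializer on every invocation.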
If that doesn't work, you might need to post on their GitHub or email the person listed in the article for more details on how to do this?