Blazor PWA cache shared between Azure deployment slots?

I have an Azure App Service with 2 deployment slots, each with a version of a Blazor PWA.
Depending on a global variable in each slot, functionality and API endpoints differ.
I push build number 10 to the main deployment slot and build number 10.1 to the other slot. My issue is that both slots then end up serving the same build number, and therefore the same functionality/endpoints.
Here is my service worker:
const CACHE_VERSION = 10.1; // This is changed depending on slot

self.addEventListener('fetch', (event) => { return; });

self.addEventListener('activate', function (event) {
    event.waitUntil(
        caches.keys().then(function (names) {
            for (let name of names) {
                caches.delete(name);
                console.log('Deleted cache: ' + name);
            }
        })
    );
});
I need my service worker to not cache anything and pull network only.
How do I fix this?
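One way to make the worker strictly network-only (a sketch, assuming it replaces the file above): an empty `fetch` handler only falls back to the browser's default handling, which may still serve from the HTTP cache, so to guarantee the network is hit you have to call `event.respondWith` yourself.

```javascript
self.addEventListener('install', (event) => {
  // Don't wait for old tabs to close before this worker takes over.
  self.skipWaiting();
});

self.addEventListener('activate', (event) => {
  event.waitUntil(
    caches.keys()
      // Actually wait for every cache to be deleted before finishing activation.
      .then((names) => Promise.all(names.map((name) => caches.delete(name))))
      // Take control of already-open pages immediately.
      .then(() => self.clients.claim())
  );
});

self.addEventListener('fetch', (event) => {
  // Always go to the network and bypass the HTTP cache as well;
  // nothing is ever read from or written to the Cache Storage API.
  event.respondWith(fetch(event.request, { cache: 'no-store' }));
});
```

Note that the browser only installs the new worker if the file's bytes differ, so changing `CACHE_VERSION` per slot still matters as a cache-buster for the worker file itself.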

Related

How to query Azure App Insights logs using Node.JS

There are examples of how to query Log Analytics Workspace logs, or metrics for individual resources, using Node.js. But I could not find whether there is an option to query logs from App Insights, or from a resource directly.
I need this to automate performance reporting, so I am planning to query the requests table (we send logs using https://github.com/microsoft/ApplicationInsights-Java). Currently the report is done manually using the Performance blade of App Insights, checking the average and 99th percentile for requests with specific filters on URL.
You can set up log ingestion from your application to a Log Analytics Workspace using Diagnostic Settings. For example, if you host your app in a Web App you can send AppServiceHTTPLogs. Then, in your Node.js app, you can use the @azure/monitor-query package with a query similar to:
let dataset = AppServiceHTTPLogs
| where CsHost == 'PUT_YOUR_HOSTNAME_HERE'
| where ScStatus == 200
| where CsUriStem contains 'FILTER_BY_URL_IF_YOU_NEED_IT';
dataset
| summarize arg_max(TimeTaken, CsUriStem)
| union (dataset
    | summarize avg(TimeTaken), percentiles(TimeTaken, 99)
    | extend CsUriStem = 'Overall')
This is a close approximation of the Performance blade in App Insights.
Your whole app could then be:
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus, LogsTable } from "@azure/monitor-query";

const azureLogAnalyticsWorkspaceId = "WORKSPACE_ID";
const logsQueryClient = new LogsQueryClient(new DefaultAzureCredential());

export async function runWebAppPerformance(startDate: Date, endDate: Date) {
    const query = "PUT_YOUR_QUERY_HERE";
    const result = await logsQueryClient.queryWorkspace(
        azureLogAnalyticsWorkspaceId,
        query,
        { startTime: startDate, endTime: endDate }
    );
    if (result.status === LogsQueryResultStatus.Success) {
        const tablesFromResult: LogsTable[] = result.tables;
        if (tablesFromResult.length === 0) {
            console.log(`No results for query`);
            return;
        }
        processTables(tablesFromResult);
    } else {
        console.log(`Error processing the query - ${result.partialError}`);
    }
}

async function processTables(tablesFromResult: LogsTable[]) {
    const table = tablesFromResult[0];
    const urlIndex = table.columnDescriptors.findIndex(c => c.name === "CsUriStem");
    const timeTakenIndex = table.columnDescriptors.findIndex(c => c.name === "TimeTaken");
    const avgIndex = table.columnDescriptors.findIndex(c => c.name === "avg_TimeTaken");
    const ninetyNineIndex = table.columnDescriptors.findIndex(c => c.name === "percentile_TimeTaken_99");
    for (const row of table.rows) {
        if (row[urlIndex] === "Overall") {
            console.log(`${row[urlIndex]} (ms):`);
            console.log(`Average: ${row[avgIndex]}; \t 99%: ${row[ninetyNineIndex]}`);
        } else {
            console.log(`MAX (ms)`);
            console.log(`${row[urlIndex]}: \t ${row[timeTakenIndex]}`);
        }
    }
}
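Since the query returns the per-URL maximums and the "Overall" row in a single table, the row handling in `processTables` can be sanity-checked locally with a hand-built object of the same shape as a `LogsTable` (the column names match the query above; the values are made up):

```javascript
// Mocked LogsTable-shaped result: columnDescriptors + rows, no workspace call.
const table = {
  columnDescriptors: [
    { name: 'CsUriStem' },
    { name: 'TimeTaken' },
    { name: 'avg_TimeTaken' },
    { name: 'percentile_TimeTaken_99' }
  ],
  rows: [
    ['Overall', null, 153.2, 410.0],    // produced by the summarize/extend branch
    ['/api/projects', 512, null, null]  // produced by the arg_max branch
  ]
};

// Column positions are looked up by name, exactly as processTables does.
const urlIndex = table.columnDescriptors.findIndex(c => c.name === 'CsUriStem');
const avgIndex = table.columnDescriptors.findIndex(c => c.name === 'avg_TimeTaken');

for (const row of table.rows) {
  if (row[urlIndex] === 'Overall') {
    console.log(`Overall average (ms): ${row[avgIndex]}`); // 153.2
  }
}
```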
In the Azure portal, create an Application Insights instance and copy the instrumentation key from the overview page.
Create a sample Node.js web app in Visual Studio Code.
You can add the instrumentation key on localhost, or update it after the Node.js application is deployed to Azure. Here I have added the required Application Insights setting and deployed the app.
In server.js, add
let appInsights = require('applicationinsights');
appInsights.setup("cc580d32-a7eb-41d7-b0e0-90ea0889fd10");
appInsights.start();
From the root folder of the Application, open the terminal and run
npm install applicationinsights --save
Deploy the Application to Azure
Browse the Application
View Logs in Application Insights
Application Insights queries are based on KQL
Navigate to Azure Portal => Your Application Insights instance => Logs under Monitoring => Click on traces
Metrics for individual resources using Node.Js
Navigate to metrics under Monitoring
Please refer to Node.js services with Application Insights for more information.

Azure Functions Dependency Tracking for SQL Server and Service Bus Into Application Insights

Previously I had an Azure Web App (.NET Core), and it successfully tracked the SQL Server and Service Bus dependencies in Application Insights. Somehow it is not working with Azure Functions.
Environment
dotnet 6
dotnet-isolated mode
log level default set to "Information".
Azure Environment using Consumption plan for Azure Functions.
Application Insights key is configured.
I have Azure API Management in front, and the backend is an Azure Function that calls SQL Server and Service Bus.
The API Management to Azure Function dependency is resolved successfully, but dependencies from the Azure Function to the other components are not tracked.
I know I am posting my own answer. There is also a chance that in the future a better solution appears, or that this gets integrated the way it is in in-process mode.
Until then, follow these steps.
Add Package
Microsoft.ApplicationInsights.WorkerService
In Program.cs, when configuring the host, add:
services.AddApplicationInsightsTelemetryWorkerService();
More info at
https://learn.microsoft.com/en-us/azure/azure-monitor/app/worker-service
The only way I've managed to solve this issue so far was by setting up a custom middleware:
.ConfigureFunctionsWorkerDefaults(config =>
{
config.UseMiddleware<AiContextMiddleware>();
})
In the IServiceCollection you simply need to add
.AddApplicationInsightsTelemetryWorkerService()
public class AiContextMiddleware : IFunctionsWorkerMiddleware
{
    private readonly TelemetryClient _client;
    private readonly string _hostname;

    public AiContextMiddleware(TelemetryClient client)
    {
        _client = client;
        _hostname = Environment.GetEnvironmentVariable("AI_CLOUD_ROLE_NAME");
    }

    public async Task Invoke(FunctionContext context, FunctionExecutionDelegate next)
    {
        var operationId = ExtractOperationId(context.TraceContext.TraceParent);

        // Let's create and start RequestTelemetry.
        var requestTelemetry = new RequestTelemetry
        {
            Name = context.FunctionDefinition.Name,
            Id = context.InvocationId,
            Properties =
            {
                { "ai.cloud.role", _hostname },
                { "AzureFunctions_FunctionName", context.FunctionDefinition.Name },
                { "AzureFunctions_InvocationId", context.InvocationId },
                { "AzureFunctions_OperationId", operationId }
            },
            Context =
            {
                Operation =
                {
                    Id = operationId,
                    ParentId = context.InvocationId,
                    Name = context.FunctionDefinition.Name
                },
                GlobalProperties =
                {
                    { "ai.cloud.role", _hostname },
                    { "AzureFunctions_FunctionName", context.FunctionDefinition.Name },
                    { "AzureFunctions_InvocationId", context.InvocationId },
                    { "AzureFunctions_OperationId", operationId }
                }
            }
        };

        var operation = _client.StartOperation(requestTelemetry);
        try
        {
            await next(context);
        }
        catch (Exception e)
        {
            requestTelemetry.Success = false;
            _client.TrackException(e);
            throw;
        }
        finally
        {
            _client.StopOperation(operation);
        }
    }

    private static string ExtractOperationId(string traceParent)
        => string.IsNullOrEmpty(traceParent) ? string.Empty : traceParent.Split("-")[1];
}
It's definitely not a perfect solution, as you then get two starting log entries, but as the end result you get all log traces plus dependencies correlated to an operation.
I solved the issue this way in the first place; now I'm revisiting whether there are better approaches.
Let me know whether you managed to solve this issue on your side too.

Why are release annotations automatically created in my release pipelines?

I noticed that release annotations are now added for a couple of our Application Insights instances when we execute our release pipelines. We have not done any of the things mentioned in the article. Where are those annotations coming from?
It seems to happen only on our application release pipelines, not infrastructure. Most of our applications run on App Service, but not all have release annotations.
The Azure App Service Deploy Task automatically creates release annotations.
The only prerequisite is that the App Service has a valid APPINSIGHTS_INSTRUMENTATIONKEY in its AppSettings.
This is the relevant code block:
https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/Common/AzureRmDeploy-common/operations/ReleaseAnnotationUtility.ts
export async function addReleaseAnnotation(endpoint: AzureEndpoint, azureAppService: AzureAppService, isDeploymentSuccess: boolean): Promise<void> {
    try {
        var appSettings = await azureAppService.getApplicationSettings();
        var instrumentationKey = appSettings && appSettings.properties && appSettings.properties.APPINSIGHTS_INSTRUMENTATIONKEY;
        if (instrumentationKey) {
            let appinsightsResources: ApplicationInsightsResources = new ApplicationInsightsResources(endpoint);
            var appInsightsResources = await appinsightsResources.list(null, [`$filter=InstrumentationKey eq '${instrumentationKey}'`]);
            if (appInsightsResources.length > 0) {
                var appInsights: AzureApplicationInsights = new AzureApplicationInsights(endpoint, appInsightsResources[0].id.split('/')[4], appInsightsResources[0].name);
                var releaseAnnotationData = getReleaseAnnotation(isDeploymentSuccess);
                await appInsights.addReleaseAnnotation(releaseAnnotationData);
                console.log(tl.loc("SuccessfullyAddedReleaseAnnotation", appInsightsResources[0].name));
            }
        }
    } catch (error) {
        // catch body truncated in the original quote; the task logs the failure
        // and continues rather than failing the deployment
    }
}

'@google-cloud/resource' cloud function not listing all projects in response

I am using a Cloud Function written in Node.js to list projects; this is the index.js file containing the method. When I trigger this function, only one project is printed: ProjectA (the Cloud Function also resides in ProjectA). I have another project, ProjectB, which is not printed even though it is also in ACTIVE mode. I have the Owner permission on both projects.
const { Resource } = require('@google-cloud/resource');

const resource = new Resource();

async function getProjects() {
    try {
        // Lists all current projects
        const [projects] = await resource.getProjects();
        console.log(`success in getProjects() call`);

        // Set a uniform endTime for all the resulting messages
        const endTime = new Date();
        const endTimeStr = endTime.toISOString(); // sample: 2019-11-12T17:58:26.068483Z

        for (var i = 0; i < projects.length; i++) {
            console.log("Total Projects ", projects.length); // Prints 1 instead of the correct 2

            // Only publish messages for active projects
            if (projects[i]["metadata"]["lifecycleState"] === config.ACTIVE) {
                // Construct a Pub/Sub message
                console.log(`About to send Pub/Sub message ${projects[i]}}`);
                const pubsubMessage = {
                    "token": config.METRIC_EXPORT_PUBSUB_VERIFICATION_TOKEN,
                    "project_id": projects[i]["id"],
                    "end_time": endTimeStr
                };
            }
        }
    } catch (err) {
        console.error("Error in getProjects()");
        console.error(err);
        throw err;
    }
}
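As a side note, the `lifecycleState` filter in the loop can be sanity-checked locally with mocked project objects (the shape is assumed from the code above; no API call is made):

```javascript
// Hypothetical config object standing in for the external `config` used above.
const config = { ACTIVE: 'ACTIVE' };

// Mocked projects mirroring the metadata shape read by the loop.
const projects = [
  { id: 'project-a', metadata: { lifecycleState: 'ACTIVE' } },
  { id: 'project-b', metadata: { lifecycleState: 'DELETE_REQUESTED' } }
];

// Same predicate as the `if` inside the loop, applied as a filter.
const active = projects.filter(p => p.metadata.lifecycleState === config.ACTIVE);
console.log(active.map(p => p.id)); // [ 'project-a' ]
```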
However, if I try the Google API link
https://cloud.google.com/resource-manager/reference/rest/v1/projects/list#try-it
I get 2 projects in the response, which is what I have access to.
When you execute a Cloud Function you choose a service account to run it as; normally it's the App Engine default service account (project-id@appspot.gserviceaccount.com).
The API call from the API Explorer is authorized with your user account, not the service account used to execute the Cloud Function; that's why it shows all your projects.
To fix the issue, grant the service account you use to execute the Cloud Function a role on all your projects. The Project Owner role works, although lesser roles (like Project Viewer) are enough to list a project.

Calling CosmosDB server from Azure Cloud Function

I am working on an Azure Cloud Function (runs on node js) that should return a collection of documents from my Azure Cosmos DB for MongoDB API account. It all works fine when I build and start the function locally, but fails when I deploy it to Azure. This is the error: MongoNetworkError: failed to connect to server [++++.mongo.cosmos.azure.com:++++] on first connect ...
I am new to CosmosDB and Azure Cloud Functions, so I am struggling to find the problem. I looked at the Firewall and virtual networks settings in the portal and tried out different variations of the connection string.
As it seems to work locally, I assume it could be a configuration setting in the portal. Can someone help me out?
1. Set up the connection
I used the primary connection string provided by the portal.
import * as mongoClient from 'mongodb';
import { cosmosConnectionStrings } from './credentials';
import { Context } from '@azure/functions';

// The MongoDB Node.js 3.0 driver requires encoding special characters in the Cosmos DB password.
const config = {
    url: cosmosConnectionStrings.primary_connection_string_v1,
    dbName: "****"
};

export async function createConnection(context: Context): Promise<any> {
    let db: mongoClient.Db;
    let connection: any;
    try {
        connection = await mongoClient.connect(config.url, {
            useNewUrlParser: true,
            ssl: true
        });
        context.log('Do we have a connection? ', connection.isConnected());
        if (connection.isConnected()) {
            db = connection.db(config.dbName);
            context.log('Connected to: ', db.databaseName);
        }
    } catch (error) {
        context.log(error);
        context.log('Something went wrong');
    }
    return {
        connection,
        db
    };
}
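The comment about the driver requiring encoded special characters can be illustrated with `encodeURIComponent` (the account name and password below are made-up placeholders, not real credentials):

```javascript
// Cosmos DB keys routinely contain '/', '+' and '=' characters, which are not
// valid raw inside a MongoDB connection string's password segment.
const user = 'myaccount';
const password = 'p@ss/word==';  // hypothetical key with special characters
const host = 'myaccount.mongo.cosmos.azure.com:10255';

// Percent-encode the password before splicing it into the URL.
const encoded = encodeURIComponent(password);
const url = `mongodb://${user}:${encoded}@${host}/?ssl=true`;

console.log(encoded); // p%40ss%2Fword%3D%3D
```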
2. The main function
The main function executes the query and returns the collection.
import { AzureFunction, Context, HttpRequest } from '@azure/functions';

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('Get all projects function processed a request.');
    try {
        const { db, connection } = await createConnection(context);
        if (db) {
            const projects = db.collection('projects');
            const res = await projects.find({});
            const body = await res.toArray();
            context.log('Response projects: ', body);
            connection.close();
            context.res = {
                status: 200,
                body
            };
        } else {
            context.res = {
                status: 400,
                body: 'Could not connect to database'
            };
        }
    } catch (error) {
        context.log(error);
        context.res = {
            status: 400,
            body: 'Internal server error'
        };
    }
};
I had another look at the firewall and private network settings and read the official documentation on configuring an IP firewall. By default, the current IP address of your local machine is added to the IP whitelist; that's why the function worked locally.
Based on the documentation I tried all the options described below. They all worked for me. However, it remains unclear why I had to perform a manual action to make this work, and I am not sure which option is best.
Set Allow access from to All networks
All networks (including the internet) can access the database (obviously not advised).
Add the inbound and outbound IP addresses of the cloud function project to the whitelist
This could be challenging if the IP addresses change over time; if you are on the consumption plan, this will probably happen.
Check the Accept connections from within public Azure datacenters option in the Exceptions section
If you access your Azure Cosmos DB account from services that don't provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the Accept connections from within Azure datacenters option.
This option configures the firewall to allow all requests from Azure, including requests from the subscriptions of other customers deployed in Azure. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. Use this option only if your requests don’t originate from static IPs or subnets in virtual networks. Choosing this option automatically allows access from the Azure portal because the Azure portal is deployed in Azure.