What is AppStateTracker in an Azure App Service?

An extension called AppStateTracker is causing issues on my Azure web app. What is this extension, and why are we only seeing it on one of our services? What differs between that service and the rest of our services? I see it in the Activity Log when I check the JSON for the "Update website extensions" operation.

AppStateTracker is an update that enables config-level tracking for your web app from the Application Change Analysis blade; it simply collects data from the environment. Frequent changes to your application will therefore create frequent updates, but they will have no adverse effects on your application.
AppStateTracker is a dormant extension; it gets activated when Azure makes PUT calls to the application. It will wake up your process if Always On is not enabled, but in terms of actual impact on the application there is nothing invasive that can affect anything. It only scans environment variables and settings; it never attaches to or interferes with the running process, nor does it modify anything. It is part of Change Analysis, which is a completely independent product.
If you want to see fewer of these updates, you can disable file and configuration change tracking for the web app by following the instructions here:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/change-analysis#application-change-analysis-in-the-diagnose-and-solve-problems-tool.
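If you prefer scripting to the portal, this kind of toggle typically comes down to an app setting, which you can flip with the Azure CLI. This is only a sketch: the resource names are placeholders, and WEBSITE_DISABLE_CHANGE_ANALYSIS is a hypothetical setting name, not a documented flag; the linked docs describe the actual toggle exposed in the Diagnose and solve problems tool.

# Sketch only - resource names and the setting name are placeholders
az webapp config appsettings set --resource-group MyResourceGroup --name my-web-app --settings WEBSITE_DISABLE_CHANGE_ANALYSIS=true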

Related

The best way to publish a new version to Azure apps/services?

Say I have one Azure app which calls one Azure API service. Now I need to update both applications to a newer version, in the most extreme case: the database is not compatible, and the API has changes to existing method signatures that are not compatible with old-version invocations either. I use Visual Studio's publish profile to update directly. The problem I've been facing is that during the publish process, although it only takes a few seconds, there are still active end users doing things on the web app and making API calls. I've personally seen results in such situations that are unstable and unpredictable, and the saved data might simply be corrupt.
So is there a better way to achieve some sort of 'flash update' which causes absolutely no side effects for end users? Thanks.
You should look at a different deployment strategy. First update the database, perhaps by accepting null values, then deploy the new API next to the current one. Validate it. Switch the traffic from the current version to the new one, and do the same for the website. This is the blue-green deployment strategy; it requires some more effort, but it solves the downtime and errors. https://www.martinfowler.com/bliki/BlueGreenDeployment.html
For the web app, you should use deployment slots: deploy your new version to a staging slot and, once you are ready, it is a matter of pointing the site URL to the new slot. This takes almost no time at all.
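As a minimal sketch with the Azure CLI (the resource group and app names are placeholders; deploy your new bits to the slot between the two commands):

# Create a staging slot, then swap it into production once validated
az webapp deployment slot create --resource-group MyResourceGroup --name my-web-app --slot staging
az webapp deployment slot swap --resource-group MyResourceGroup --name my-web-app --slot staging --target-slot production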
For the database, I believe you should freeze updates, take a backup, and let the users work in read-only mode; once you finish all your DB migration and changes, point the application to the new database and that is it.

How can I configure a website to start automatically using TFS release management?

[I'm posting this to record what I actually found out after hours of painful trial-and-error.]
I have a website that I need to be "always running" (because in this case it has a Hangfire job that's responsible for kicking off a scheduled task every 5 minutes), and by default, websites are only started up when the first request is received.
So, how can I ensure that the website is started automatically? And, how can I configure this via the TFS release management tool?
[This answer isn't specific to Hangfire, but see the Hangfire documentation's discussion of this issue for details of how it affects Hangfire; note that the recommended work-around there is somewhat involved, and much more complex than the solution below. See also a separate and quite comprehensive discussion on the Hangfire support forum that gives several alternative solutions.]
In IIS, each website is associated with an Application Pool (App Pool). You can configure your App Pool to start automatically via IIS Manager by changing the "Start Mode" to AlwaysRunning in "Advanced Settings" for the App Pool.
However, starting the App Pool doesn't start the website (or websites) associated with it. The website does not get loaded until the first request is received.
In IIS 8 (or IIS 7.5 with an extension), a new setting was added that allows us to work around this. You can ensure that the website gets sent a request as soon as the App Pool starts by setting "Preload Enabled" to True in "Advanced Settings" for the website.
The combination of these settings ensures that the website will automatically start up when IIS starts, and immediately after the App Pool is recycled, etc.
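For reference, here is roughly what those two settings look like once written to applicationHost.config; the pool and site names are placeholders, and unrelated attributes are trimmed:

<applicationPools>
  <add name="myAppPoolName" startMode="AlwaysRunning" />
</applicationPools>
<sites>
  <site name="mySiteName">
    <application path="/" applicationPool="myAppPoolName" preloadEnabled="true" />
  </site>
</sites>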
But, how can you get those settings applied automatically as part of a TFS release pipeline, rather than having to remember to set them manually?
In your release definition, you presumably have an "IIS Web App Management" task, which sets up the App Pool and the website. In the configuration panel for this step, there should be an "Advanced" box with an "Additional AppCmd.exe Commands" entry field. You can use AppCmd to apply the settings described above.
AppCmd has the most confusing command-line syntax I've yet seen outside of code-golf competitions, but here's the incantation that worked for me:
set config /section:applicationPools -[name='myAppPoolName'].startMode:AlwaysRunning
set app "mySiteName/" /preloadEnabled:true
Note that if you have configuration variables defined for your App Pool name and website name, then you can use those rather than hard-coding the name, such as:
set config /section:applicationPools -[name='$(appPoolName)'].startMode:AlwaysRunning
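To double-check that the settings actually landed, you can also query them back with AppCmd directly on the server; dumping all attributes with /text:* should show startMode and preloadEnabled among them (names as above):

%windir%\system32\inetsrv\appcmd.exe list apppool "myAppPoolName" /text:*
%windir%\system32\inetsrv\appcmd.exe list app "mySiteName/" /text:*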
I hope this helps somebody... Thanks for listening :-)

Application Insights Reporting Duplicate Events for each Server Request

I have an API App running under Azure App Service, with Application Insights installed to track server side telemetry of API calls. When viewing Application Insights in the Azure portal, I am seeing two events for every one server call. Each event has an exact duplicate with the same timestamp, response time, telemetry, etc. I have verified that only one event is in the web server logs, so I'm not accidentally calling the same function twice from the client.
What could be causing this? And how can I fix it?
There is one known scenario that may lead to this data duplication:
An application that is not onboarded to the AI SDK is deployed as an Azure Web App.
The AI Extension is installed on the app; after this step you start to receive data without needing to modify your code.
Later on, you decide to use more powerful features of AI, let's say custom event tracking, so you onboard your application to AI from Visual Studio and re-deploy.
Now you may end up in a situation where the HTTP module is registered twice, and you start to receive duplicate request data. This happens because the AI NuGet packages add an HTTP module definition in web.config, while the extension installation drops an additional assembly into your application's bin folder that registers the HTTP module dynamically during app start: Microsoft.AI.HttpModule.dll (Microsoft.ApplicationInsights.Extensibility.HttpModule.dll in previous versions). To handle this case correctly, you need to remove the extension leftovers during your application deployment by choosing "Settings -> Remove additional files at destination" when deploying from Visual Studio.
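If you want to see where the duplication comes from, check web.config for the module entry the NuGet packages add; it looks something like this (the exact type and assembly names vary between SDK versions), while the extension registers an equivalent module dynamically from the bin folder:

<system.webServer>
  <modules>
    <add name="ApplicationInsightsWebTracking"
         type="Microsoft.ApplicationInsights.Web.ApplicationInsightsHttpModule, Microsoft.AI.Web"
         preCondition="managedHandler" />
  </modules>
</system.webServer>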

Can I modify an app manifest and re-sign the SharePoint .app file?

I am building a SharePoint 2013 provider-hosted app using the high-trust model. This allows a customer to deploy the .app to their App Catalog and make it available to all SharePoint Sites. The provider-hosted portion of the app runs in an IIS box (cluster) which the customer also deploys (on-premise) with setup instructions and automated tools.
The .app file structure includes the application manifest, which specifies the precise endpoint where the provider-hosted portion resides, and also specifies whitelisted endpoints which the add-in can call. These are all specified by entering URLs, hostnames, and port numbers into edit fields in Visual Studio in the 'Deploy App' form just before the .app file is built and digitally signed.
This seems to work just fine for a single app built by IT folks internally, if the org is small enough... but I really want to be able to distribute this solution to more than one customer. In order to do so, I would have to ask the customer for their respective endpoints, enter them into my build tools, and rebuild the .app for them. This just doesn't seem right... no customer wants to talk to the developer first and have a custom-built app. And why should they? No code is changing...
Upon investigation into the .app file format, it turns out it is really just a simple .zip file - and inside (voila!) there is the app manifest! Unfortunately, if you edit the app manifest and re-zip the file, the digital signature is broken, and the .app no longer works. (grrrr...)
What I want to do is simply reconfigure the app manifest to match the environment where it is deployed. This can happen programmatically during setup/installation time, or perhaps even just prior to download, but cannot be a process that involves developers typing into visual studio and pressing Rebuild. That simply won't scale.
Is there a tool that exists that can help with this problem? If not, does anyone have experience with the signing of .app files programmatically? I'm open to skinning this cat in any way possible.
This is a wild idea and maybe not even possible:
Create a web UI where clients enter their endpoints.
Have an internal process that invokes MSBuild/TFS to package the app with those endpoints, changing the app manifest with a pre-build PowerShell script.
Then provide the app via email or download.
http://www.sharepointconfig.com/2013/10/building-sharepoint-2013-apps-with-tfs-2013/
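As a rough sketch of that pre-build PowerShell step, something like the following could patch the remote endpoint in AppManifest.xml before packaging; StartPage under App/Properties is part of the standard app manifest schema, but the path and URL here are placeholders:

# Hypothetical pre-build step: patch the remote web endpoint in AppManifest.xml
$path = Join-Path $PWD 'AppManifest.xml'
[xml]$manifest = Get-Content $path
# StartPage lives under App/Properties in the app manifest schema
$manifest.App.Properties.StartPage = 'https://customer.example.com/Home.aspx?{StandardTokens}'
$manifest.Save($path)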
This is more of a workaround than a true answer - but it would work:
For on-premise deployments of high-trust SharePoint 2013 apps - build the application with "known endpoints" - essentially hard-coded endpoints that can be deployed locally. Then instruct the customer to redirect those endpoints using DNS records or hosts file entries. In addition, the client would need to generate a local wildcard certificate signed by their own trusted root in order to satisfy the SharePoint 2013 app model requirements for appdomain and server-to-server communication.
This is by no means ideal, but for certain environments it might be the most practical approach. This also allows scaling for the IIS WebApp to occur at the customer-site, where it realistically belongs for a high-trust app.
This approach avoids the need to automate build tools and also avoids building a separate instance for every customer - both of which are somewhat undesirable. It might, for those reasons, be slightly less costly - but it also pushes some responsibility to the customer. Namely - hard-coding a DNS entry locally for machines in the topology.
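For example, if the app was signed against a hard-coded endpoint, the customer would add something like this to each machine's hosts file, or publish an equivalent internal DNS record (the hostname and address below are hypothetical):

# C:\Windows\System32\drivers\etc\hosts - hypothetical names/addresses
10.0.0.25    app.vendorproduct.local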

Best practices for applying changes to a SharePoint application

I feel like I need a better defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, I feel like it's a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. What features are actually installed and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like this. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive -- not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... the Application Development Lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key Timer Jobs (too many times we would do a deployment only to find one WFE that did not get the deployment).
Checks the state of key Services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs).
Shows the file version and date of selected assemblies from the GAC (and does this across all web front ends). We have seen problems before where assemblies did not get installed correctly across the farm.
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks the status of the WSP package after deployment or upgrade.
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
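On MOSS 2007 you can also pull some of this from the command line; stsadm reports each WSP's deployment state, which partly covers the "which WFE missed the deployment" check (the solution name below is a placeholder):

stsadm -o enumsolutions
stsadm -o displaysolution -name mysolution.wsp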
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed using solution packages, you can use solution management from Central Admin to keep track of versions and what gets deployed to which site collection.
Features are scoped at all levels, from the farm down to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc.; I really hope that the next version will address a lot of these common developer concerns.
Isn't the only way to handle this to have a planned/controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily Builds on a development server
When we release something to test, we merge the code to the Main branch in the version management system (TFS)
When it is tested and ready for production, we merge the Main branch to the Release branch
Using this structured approach we always know what is deployed in which environment, and we can also track all changes based on environment or on changes in requirements (these are also tracked in TFS)
