Say I have one Azure web app which calls one Azure API service. Now I need to update both applications to a newer version in the most drastic case: the database is not backward compatible, and the API has changes to existing method signatures that old-version invocations cannot use either. I use Visual Studio's publish profile to update directly. The problem I've been facing is that during the publish process, although it only takes a few seconds, there are still active end users doing things on the web app and making API calls. I have personally seen the results in such situations be unstable and unpredictable, and the saved data can simply end up corrupt.
So is there a better way to achieve some sort of 'flash update' that causes absolutely no side effects for end users? Thanks.
You should look at a different deployment strategy. First update the database in a backward-compatible way, for example by accepting null values in new columns; then deploy the new API next to the current one, validate it, and switch the traffic from the current version to the new one. Do the same for the website. This is a blue-green deployment strategy; it requires some more effort, but it avoids the downtime and the errors. https://www.martinfowler.com/bliki/BlueGreenDeployment.html
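To make the "accepting null values" step concrete, here is a minimal sketch of a backward-compatible migration, assuming EF Core; the table and column names are made up for illustration:

```csharp
using Microsoft.EntityFrameworkCore.Migrations;

// Expand-style migration: the old application version can keep running against
// the database because the new column is nullable and nothing existing changes.
public partial class AddNotesColumn : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "Notes",      // hypothetical new column
            table: "Orders",    // hypothetical table
            nullable: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropColumn(name: "Notes", table: "Orders");
    }
}
```

Only after both the old and new versions have been retired from traffic do you apply the "contract" migration that makes the column required or removes what the old version needed.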
For the web app, you should use deployment slots: deploy your new version to a staging slot, and once you are ready it is just a matter of swapping so that the site URL points at the new slot. This takes hardly any time at all.
For the database, I believe you should freeze updates: take a backup, let the users work in read-only mode, and once you have finished all your DB migrations and changes, point the application to the new database and that is it.
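One way to sketch that read-only window, assuming ASP.NET Core and a hypothetical ReadOnlyMode setting you flip on in the App Service configuration just before the cut-over:

```csharp
// Minimal sketch: while "ReadOnlyMode" is true, GET requests keep working
// but anything that would write data is refused with 503.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    var readOnly = app.Configuration.GetValue<bool>("ReadOnlyMode");
    if (readOnly && !HttpMethods.IsGet(context.Request.Method))
    {
        context.Response.StatusCode = StatusCodes.Status503ServiceUnavailable;
        await context.Response.WriteAsync("Maintenance in progress; the site is read-only.");
        return;
    }
    await next();
});

app.MapGet("/", () => "Reads still work during the migration window.");
app.Run();
```

The check is deliberately naive (HEAD and OPTIONS are blocked too); tune it to whatever "read" means for your app.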
I just created a simple ASP.NET MVC project to list blood pressure measurements. I opted to use SQLite as the database because it is (supposedly) embedded in the project, eliminating the need for an external database, which is expensive and the reason I went with SQLite. That way I would only need to host the web app, which is free if I choose the free tier, F1.
Publishing through VS2022 is successful, and the app displays correctly, except that it shows none of the measurements, which renders the app (no pun intended) useless, at least as a cloud app. I have done some research and changed the publish settings a couple of times; this is how they look right now:
Configuration: Release
Target Framework: net6.0
Deployment Mode: Self-contained
Target Runtime: win-x86
File Publish Options: None of the options chosen
Databases: Default Connection - Use this connection string at runtime:
=> Data source=bloodpressuremeasurements.db
Entity Framework Migrations: BloodPressureContext (name of the DbContext)
- Apply this migration on publish: NOT chosen, since it gave me an exception and publish failed
Site Extension Options: Install ASP.NET Core Logging Integration Site Extension
- NOT chosen
I also tried changing the option for the db file to Copy To Output Directory: Copy always.
That didn't change a thing. What am I missing?
The website now works as intended, with all the data shown. It looks like the problem stemmed from scaffolding read and write methods, which made Visual Studio 2022 pull in EntityFrameworkCore.SqlServer, which is not what I wanted, since I'm using SQLite.
That in turn created some service dependencies under Connected Services, one of them being a SQL Server dependency. It also appeared under the Publish menu, and it seems to have caused the tooling to treat the connection string as a SQL Server database connection.
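For reference, a minimal sketch of a registration that keeps everything on SQLite, assuming the default ASP.NET Core Program.cs layout, the BloodPressureContext from above, and a hypothetical "DefaultConnection" entry in appsettings.json (requires the Microsoft.EntityFrameworkCore.Sqlite package):

```csharp
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Register the context against the SQLite provider explicitly, so nothing
// falls back to the SQL Server provider that scaffolding pulled in.
builder.Services.AddDbContext<BloodPressureContext>(options =>
    options.UseSqlite(
        builder.Configuration.GetConnectionString("DefaultConnection")
        ?? "Data Source=bloodpressuremeasurements.db"));

builder.Services.AddControllersWithViews();

var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();
```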
I created a new app and copied the code from the first one. I was careful not to scaffold, as I only need a Get method to show all measurements; I need none of the other CRUD methods, neither Post, Delete, nor Update. I will add new measurements by running the app locally again and reading them from a CSV file (I did that in the beginning). Then I will publish the app anew, with the updated SQLite database.
An extension called AppStateTracker is causing issues on my Azure web app. What extension is this?
What is it, and why are we only seeing it on one of our services? What makes that service different from the rest of our services? I see it in the Activity Log when I check the JSON for the "Update website extensions" operation.
AppStateTracker is an extension that enables configuration-level tracking for your web app from the Application Change Analysis blade; it simply collects data from the environment. Frequent changes to your application will therefore create frequent updates, but they have no adverse effect on your application.
AppStateTracker is a dormant extension; it gets activated when Azure makes PUT calls to the application. It will wake up your process if Always On is not enabled, but in terms of actual impact on the application there is nothing invasive that can affect anything. It only scans environment variables and settings; it never attaches to or interferes with the running process, and it does not modify anything. It is part of Change Analysis, which is a completely independent product.
If you want to see less of these updates, you can choose to disable file and configuration change tracking on web app following the instructions below:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/change-analysis#application-change-analysis-in-the-diagnose-and-solve-problems-tool.
I have a reasonably complex Access 2013 web app which is now in production on hosted O365 SharePoint. I would like to take a backup (package snapshot) into a test environment and then migrate it to production once development is complete (I certainly don't want to do development on the production system!). The problem is that the snapshot also backs up all the data, so uploading the new package over the top of the existing package in the SharePoint app repository reverts the data to the time of the snapshot as well. Alternatively, rolling back to the original snapshot if there are issues would lose all data entered after the new package was applied.
I can easily get a second version of the app going by saving it as a new application etc., but this creates a new product ID etc. in the app store. We also use an Access 2013 desktop accdb front end that hooks directly into the Azure SQL database to do all the stuff that the web app can't provide (formatted reports etc.), so I can't just create a new app every time without dealing with all of the credential and database renaming issues.
So my question is: does anybody know how to safely operate a test environment for Access 2013 web app development? One needs to be able to apply an updated version, or roll back to the old one if there are problems, without rolling back the data. With the desktop client I can obviously just save a new copy of the accdb file every time. I don't mind creating a new instance or link to the app on SharePoint each time, but this obviously generates a totally new database (server name, DB location, login IDs, etc.) as well. You would hope there is a way to upload and replace your app without touching the data; otherwise, how can you develop without working directly in production?
Any answers would be really appreciated.
Thanks.
During a regular publish to Azure with Web Deploy, I had checked Execute Code First Migrations, which I had done before.
But this time the "Use this connection string at runtime" option was also checked, and I published without noticing it. As a result, the remote Azure DB was wiped and is now seeded with what looks like a default database, with ASP.NET membership tables and a migrations history table that only has migrations related to the identity tables.
The production data and DB structures are gone, and I had not yet set up backups on Azure (doing that now).
Is there a way to restore the database from some sort of automatic backup on Azure? I have the Web edition with the 1 GB size selected, and I do not see any options.
This suggests that the Web edition does not have any daily backup, but also that the Web edition was discontinued as of April, although I still have it: http://msdn.microsoft.com/en-us/library/jj650016.aspx
And another question: I understand everything that happened, but it seems extremely dangerous that it is so easy to wipe out the whole database, with no warning from VS and no notification when publishing to Azure. Is there anything that can be done to prevent dumb but very costly errors like this?
TIA
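One common safeguard (a sketch assuming EF6 Code First; the context and names are made up) is to keep automatic migrations off and make the migrator refuse anything that would drop data:

```csharp
using System.Data.Entity;
using System.Data.Entity.Migrations;

// Hypothetical context, purely for illustration.
public class ShopContext : DbContext { }

internal sealed class Configuration : DbMigrationsConfiguration<ShopContext>
{
    public Configuration()
    {
        // Never let the migrator invent schema changes on its own...
        AutomaticMigrationsEnabled = false;
        // ...and make it throw instead of dropping columns or tables.
        AutomaticMigrationDataLossAllowed = false;
    }
}
```

Keeping a separate publish profile per environment also makes it harder to point "Use this connection string at runtime" at production by accident.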
I feel like I need a better defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, I feel like it's a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. What features are actually installed and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like this. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive -- not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... the application development lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key timer jobs (too many times we would do a deployment only to find one WFE that did not get the deployment).
Checks the state of key services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs).
Shows the file version and date of selected assemblies from the GAC (across all web front ends); we have seen problems before where assemblies did not get installed correctly across the farm (see the sketch after this list).
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks the status of WSP packages after deployment or upgrade.
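A sketch of that assembly version check, with made-up server names and a made-up assembly path under the GAC reached via the administrative share:

```csharp
using System;
using System.Diagnostics;
using System.IO;

// Report the file version and last-write time of a key assembly on every
// web front end. Server names and the GAC path are assumptions.
class GacVersionCheck
{
    static void Main()
    {
        string[] frontEnds = { "WFE1", "WFE2" };
        const string relativePath =
            @"c$\Windows\assembly\GAC_MSIL\MyCompany.Intranet\1.0.0.0__0123456789abcdef\MyCompany.Intranet.dll";

        foreach (string server in frontEnds)
        {
            string path = @"\\" + server + @"\" + relativePath;
            FileVersionInfo info = FileVersionInfo.GetVersionInfo(path);
            Console.WriteLine("{0}: {1} ({2})", server, info.FileVersion, File.GetLastWriteTime(path));
        }
    }
}
```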
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
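The WSP status check can be done against the farm's solution store; a rough sketch using the SharePoint object model (run on a farm server, with a reference to Microsoft.SharePoint.dll):

```csharp
using System;
using Microsoft.SharePoint.Administration;

// List every solution in the farm: whether it is deployed, how many servers
// it is deployed to, and whether a timer job exists for it.
class SolutionStatusCheck
{
    static void Main()
    {
        foreach (SPSolution solution in SPFarm.Local.Solutions)
        {
            Console.WriteLine("{0}: deployed={1}, servers={2}, jobExists={3}",
                solution.Name,
                solution.Deployed,
                solution.DeployedServers.Count,
                solution.JobExists);
        }
    }
}
```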
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed using solution packages, you can use solution management from central admin to keep track of versions, and what gets deployed to which site collection.
Features are scoped at all levels, from the farm down to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc.; I really hope the next version will address a lot of these common developer concerns.
Isn't the only way to do this to have a planned/controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily builds on a development server
When we release something to test, we merge the code into the main branch in the version management system (TFS)
When it is tested and ready for production, we merge the main branch into the release branch
Using this structured approach we always know what is deployed in which environment, and we can also track all changes by environment or by changes in requirements (which are also tracked in TFS).