I'm working on a project that utilises Azure DevOps for building our .NET Core application suite.
Over the last couple of days we've noticed the builds being queued for hours. We've paid for additional build time on the Azure-hosted build agents, but there's still a long wait for the builds.
It seems to be random, as sometimes the builds happen right away. We don't have any Azure support plan, so I thought I'd ask whether anyone is experiencing similar issues.
This is most likely due to a temporary outage that occurred recently and is now showing as resolved: https://status.dev.azure.com/_event/231083118
You'll first need to establish that you're running into a concurrency limit - fortunately, there are now analytics for that in preview in Azure DevOps:
First, enable the preview feature (bottom of the screenshot):
Then, go to Project Settings -> Agent Pools -> Azure Pipelines -> Analytics
It would seem to me that if you're not crossing the "Concurrency" line when this is occurring, you could open a ticket with Microsoft. If you are crossing the line, you would need to determine whether to purchase more parallel jobs, or to self-host.
I am trying to create a deployment plan for my web application in Azure that can support two environments (Dev/Staging). Basically, I want the code checked in by developers to be deployed to the Dev machine at the end of the day. Then the latest Dev changes should be merged into the staging branch, and if no merge conflict happens, a new publish goes to the staging machine. Can anyone help me with where to start and which feature in Azure I can use to serve this?
You could use Azure DevOps to trigger scheduled builds (at the end of the day) to perform the routine you are describing. You could also use releases and trigger them automatically from the builds that happen on that schedule.
You would also need to deploy VSTS (Azure Pipelines) agents on the machines, or allow the agents to somehow talk to the machines, in order to upload code to them.
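As a rough illustration of the scheduled-build part of this, a YAML pipeline with a schedule trigger might look like the sketch below. The branch, service connection, and web app names are placeholders, and the deploy step assumes the Dev environment is an Azure Web App; adjust it if you are deploying to your own machines via self-hosted agents.

```yaml
# azure-pipelines.yml sketch: build on a weekday-evening schedule and push to Dev.
# 'develop', 'my-service-connection', and 'my-dev-webapp' are placeholder names.
trigger: none                    # no CI trigger; builds run on the schedule below (or manually)

schedules:
- cron: "0 22 * * 1-5"           # 22:00 UTC, Monday to Friday
  displayName: End-of-day Dev build
  branches:
    include:
    - develop                    # placeholder branch name
  always: false                  # skip the run if nothing changed since the last one

steps:
- task: DotNetCoreCLI@2
  displayName: Publish the app
  inputs:
    command: 'publish'
    publishWebProjects: true
    arguments: '--configuration Release --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: true

- task: AzureWebApp@1
  displayName: Deploy to the Dev web app
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder service connection
    appName: 'my-dev-webapp'                     # placeholder app name
    package: '$(Build.ArtifactStagingDirectory)/**/*.zip'
```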
I am updating my Azure App Service from Azure DevOps. Currently, my release is like this:
Stop the App Service,
Update the App Service, and
Start the App Service.
My question is whether it is reasonable to stop the App Service during the update. When I select a release template from Azure DevOps for Azure App Service, there aren't any stop/start steps, only the update step. So I am wondering if the stop/start is even needed?
What we have done mostly is:
Stop staging slot
Deploy to slot
Start slot
Swap staging to production
Stop staging slot
Martin's suggestion on "Take App Offline" is also a good one!
We prefer to deploy to slots and then swap so we incur minimal impact to production and can also rollback easily.
Stopping/taking app offline can prevent file locking issues.
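For reference, a minimal sketch of those steps as Azure DevOps tasks in YAML is below. The service connection, resource group, app, and slot names are placeholders, and input names can vary slightly between task versions, so check them against the task reference you have installed.

```yaml
# Sketch of stop -> deploy -> start -> swap -> stop using placeholder names
# ('my-service-connection', 'my-webapp', 'my-rg', slot 'staging').
steps:
- task: AzureAppServiceManage@0
  displayName: Stop staging slot
  inputs:
    azureSubscription: 'my-service-connection'
    Action: 'Stop Azure App Service'
    WebAppName: 'my-webapp'
    SpecifySlotOrASE: true
    ResourceGroupName: 'my-rg'
    Slot: 'staging'

- task: AzureWebApp@1
  displayName: Deploy to staging slot
  inputs:
    azureSubscription: 'my-service-connection'
    appName: 'my-webapp'
    deployToSlotOrASE: true
    resourceGroupName: 'my-rg'
    slotName: 'staging'
    package: '$(Pipeline.Workspace)/drop/**/*.zip'

- task: AzureAppServiceManage@0
  displayName: Start staging slot
  inputs:
    azureSubscription: 'my-service-connection'
    Action: 'Start Azure App Service'
    WebAppName: 'my-webapp'
    SpecifySlotOrASE: true
    ResourceGroupName: 'my-rg'
    Slot: 'staging'

- task: AzureAppServiceManage@0
  displayName: Swap staging into production
  inputs:
    azureSubscription: 'my-service-connection'
    Action: 'Swap Slots'
    WebAppName: 'my-webapp'
    ResourceGroupName: 'my-rg'
    SourceSlot: 'staging'
    SwapWithProduction: true

- task: AzureAppServiceManage@0
  displayName: Stop staging slot after swap
  inputs:
    azureSubscription: 'my-service-connection'
    Action: 'Stop Azure App Service'
    WebAppName: 'my-webapp'
    SpecifySlotOrASE: true
    ResourceGroupName: 'my-rg'
    Slot: 'staging'
```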
It probably depends on your app. If you don't have any issues when you just update your app (such as a file-in-use issue), you can consider using the Take App Offline flag, which will place an app_offline.htm file in the root directory of the App Service during the update (then it will be removed). This way users will recognize that something is happening with the app.
However, I often ended up doing the same as you: Stop, Update, Start 😉
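If you go the Take App Offline route, a hedged sketch of what that flag could look like on the Azure App Service Deploy task is below. The connection and app names are placeholders; verify the input names against the task version you actually use.

```yaml
# Sketch: Azure App Service Deploy task with the Take App Offline flag enabled.
steps:
- task: AzureRmWebAppDeployment@4
  displayName: Deploy with app_offline.htm during the update
  inputs:
    ConnectionType: 'AzureRM'
    azureSubscription: 'my-service-connection'   # placeholder
    appType: 'webApp'
    WebAppName: 'my-webapp'                      # placeholder
    packageForLinux: '$(Pipeline.Workspace)/drop/**/*.zip'
    enableCustomDeployment: true
    DeploymentType: 'webDeploy'
    TakeAppOfflineFlag: true    # places app_offline.htm for the duration of the deploy
```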
There are five options for safe deployment (atomic updates) to Azure Web Apps. Here is my preferred order, ranked by priority and feature richness:
Run-from-Package + ZipDeploy (makes site read-only)
ZipDeploy (using kudu REST api - automatically takes site offline)
Azure CLI (az webapp)
msdeploy (-enableRule:AppOffline, or stop/start site to enforce atomicity)
FTP (using publish profile, make sure to upload app_offline.htm)
There are numerous other deployment options like cloud sync, github continuous, local git, etc - but they are all built upon Kudu APIs (as is Azure CLI).
Note: If you're using Azure DevOps, it supports nearly all of these options; leverage the Azure App Service Deploy task.
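As an illustration of the first two options, a rough sketch using the AzureWebApp@1 task is below. The connection and app names are placeholders, and the deploymentMethod values should be checked against your task version.

```yaml
# Sketch of option 1 (Run-from-Package); switch deploymentMethod to 'zipDeploy' for option 2.
steps:
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    appName: 'my-webapp'                         # placeholder
    package: '$(Pipeline.Workspace)/drop/app.zip'
    deploymentMethod: 'runFromPackage'           # mounts the package read-only as the site content
```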
Agree with both Martin and juunas. If you want to deploy without impacting users then you need to use the slot swap approach. juunas brings up the great point of easily rolling back too. Our approach includes another slot we call "hotfix". This adds a few benefits:
Having an environment with production configs so that you can optionally do additional testing before actually doing the swap.
Roll back in prod even when devs have already deployed into a staging environment.
Allows you to test bugs in the current and previous versions of the code. Helpful when someone says "well it worked before this deployment"...
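As a rough sketch of that slot layout (placeholder names, wrapped here in a pipeline step, though the same az commands work from any shell), creating the extra hotfix slot and using a swap for rollback could look like this:

```yaml
steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Create staging and hotfix slots alongside production
      az webapp deployment slot create --resource-group my-rg --name my-webapp --slot staging
      az webapp deployment slot create --resource-group my-rg --name my-webapp --slot hotfix
      # One way to roll production back: swap the previous version back in from hotfix
      az webapp deployment slot swap --resource-group my-rg --name my-webapp \
        --slot hotfix --target-slot production
```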
When the MVC application is deployed to the Azure environment, page loading is slow and the response time of the web site is delayed for a few seconds once the deployment is done.
When the application is deployed to the production environment, this slowness makes for a bad user experience.
Automated test scripts fail due to the delay in the site's response immediately after deployment.
Deployment is done from Visual Studio 2013 to Azure Web App services using the Visual Studio Publish option settings.
What we have tried:
Deployment is scheduled only once every 30 days, and at midnight; however, users in other parts of the world face issues when the deployment happens.
Can someone help me resolve this issue so that there is no difference for users when a deployment happens in production?
There is a slight increase in loading time, for the first request, after an application is deployed to an Azure Web App. What happens behind the scenes is that the underlying web application must pre-compile the MSIL into machine code before it can serve the site. See https://msdn.microsoft.com/en-us/library/ms366723.aspx for more details.
The application pool used by the Web App is also regularly recycled in case of inactivity. The same pre-compilation happens then as well. This downtime can be minimized by enabling "Always-On" for the Web App. See https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/ for more details on how to enable it. The Always-On feature regularly pings the site to keep it from going inactive.
Also, to minimize downtime when doing a deployment to an Azure Web App, have a look at using deployment slots: https://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/. The idea is that you first deploy to the deployment slot (its own web app, in effect, which you can warm up) and then swap it to be the production slot, achieving minimal downtime for the Web App. To automate this process there is a feature called Auto Swap (https://azure.microsoft.com/en-us/documentation/articles/web-sites-staged-publishing/#configure-auto-swap-for-your-web-app) that does this for you.
Deployment slots are available for standard and premium apps while always-on is available for basic, standard and premium apps.
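For reference, a rough sketch of enabling Always-On and Auto Swap with the Azure CLI is below, wrapped in a pipeline step. The resource group, app, slot, and connection names are placeholders, and the same az commands can be run from any shell.

```yaml
steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # Keep the app warm so the app pool is not recycled for inactivity
      az webapp config set --resource-group my-rg --name my-webapp --always-on true
      # Configure auto swap from the 'staging' slot into production
      az webapp deployment slot auto-swap --resource-group my-rg --name my-webapp --slot staging
```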
We have a bug in our WebJob running in our live environment. I have identified the bug and fixed it, and I can verify this in our Dev environment. I published my WebJob as an "Azure WebJob" to our live environment, but the bug is still present. To add to the confusion, the bug now only occurs sometimes. So for some reason the old code is running somewhere, some of the time.
Can someone please help me understand this?
I had a similar problem. We deploy using a stage environment in Azure, and it turned out that the "old" WebJobs (running code with an old version of the Entity Framework model) were still running on the queue. These jobs were then fetching messages and consuming them. To add to the problem, the exception was swallowed in a try/catch and the status of the WebJob was success.
Check if you have a stage environment (add "-stage" to the web app name) and, if so, go into the Azure management portal and stop them.
Note: it is not enough to stop the web app; you must stop the WebJobs directly. This is done (in the new portal) under Settings -> WebJobs, then right-clicking on the WebJob's name and selecting Stop.
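If you would rather script this than click through the portal, a rough sketch using the Azure CLI (placeholder names, wrapped in a pipeline step) is below.

```yaml
steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # List the continuous WebJobs on the stage app, then stop the stale one
      az webapp webjob continuous list --resource-group my-rg --name my-webapp-stage
      az webapp webjob continuous stop --resource-group my-rg --name my-webapp-stage \
        --webjob-name my-webjob
```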
I spent ages looking into this problem. It turns out I had the WebJob project running in a console on my PC at work! No matter what I did in Azure, the presence of this exe running and using the same storage for the WebJobs meant that the old code running on my work PC picked up the jobs before Azure did. Easy fix: just make sure no exes are running outside of Azure!
In our case the web app was published to the physical path /site/www instead of the default /site/wwwroot. Because of this, the Azure portal adds the WebJobs to the folder /site/jobs, but Web Deploy via Visual Studio or Azure still tries to publish the WebJobs inside /site/www.
More details at Publishing WebJobs with Azure Pipelines
We have had a strange issue a couple of times now with our Azure service.
We have a cloud service installed that has a web application running on it.
The service was created sometime around the beginning of December, and the first deployment was done at that time as well. After that we did multiple deployments to the cloud service, but (it has happened a couple of times now) sometimes Azure decides to roll back the deployment to the initial one, the one that was made two months ago. This happened again this midnight, and we see that the file creation date on the "restored" or "rolled back" instance is 12/5/2013, which seems to be the date when we did the initial deployment.
A question:
1) Why does that happen?
2) How can we determine what caused this rollback?
3) How can we prevent the rollback?
or
4) How can we make a "snapshot" of the cloud service so when the Rollback happens, it actually rolls back to the latest stable image?
Thanks,
Denis
How are you doing "After that we did multiple deployments to the cloud service"? Are you doing this via WebDeploy or via RDP to the Azure VM?
PaaS cloud service VMs are stateless. The code that is running your website will frequently be rebuilt from the original .cspkg that was uploaded. See http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for a bit more info.
If you want to make changes to your webrole then you need to upload a new cspkg. See http://msdn.microsoft.com/en-us/library/windowsazure/hh472157.aspx for more information.
If you are deploying via WebDeploy you should know that these changes are only intended for development/testing cycle and that the changes are only temporary. See http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx for more information, in particular the "For development and testing purposes only" section.