Force Azure Service Fabric to update services with updated version numbers

Basically, I am looking for a way to deploy Service Fabric applications where services are upgraded based on nothing but their version numbers.
I am deploying a Service Fabric application via Azure DevOps. I have written a script that does a diff and updates the version numbers in ApplicationManifest.xml and ServiceManifest.xml. This script has been tested, and it updates the versions correctly for the services that have been changed.
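For context, a minimal sketch of the kind of version bump such a script performs (the manifest path and new version value are illustrative, not the actual script):

# Illustrative only: bump the version attributes in one service manifest.
# The real script diffs first and only touches manifests of changed services.
$manifestPath = ".\MyMicroServicePkg\ServiceManifest.xml"
$newVersion = "1.0.112"
[xml]$manifest = Get-Content $manifestPath
$manifest.ServiceManifest.Version = $newVersion
$manifest.ServiceManifest.CodePackage.Version = $newVersion
$manifest.Save((Resolve-Path $manifestPath).Path)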
Now, when I try to deploy, I get the following error message:
##[error]The content in CodePackage Name:Code and Version:1.0.111 in Service Manifest 'MyMicroServicePkg' has changed, but the version
number is the same.
This error appears for one service after another until I have updated the version on every single package; in effect, it forces me to bump every package whether it changed or not.
Here is the publish profile I am using:
<?xml version="1.0" encoding="utf-8"?>
<PublishProfile xmlns="http://schemas.microsoft.com/2015/05/fabrictools">
  <ClusterConnectionParameters .... />
  <ApplicationParameterFile Path="..\ApplicationParameters\PublishProfName.xml" />
  <CopyPackageParameters CompressPackage="true" />
  <UpgradeDeployment Mode="Monitored" Enabled="true">
    <Parameters FailureAction="Rollback" Force="True" />
  </UpgradeDeployment>
</PublishProfile>
Here is the DevOps task in YAML:
- task: ServiceFabricDeploy@1
  inputs:
    publishProfilePath: $(publishProfilePath)
    applicationPackagePath: $(applicationPackagePath)
    serviceConnectionName: ${{ parameters.connection }}
    overrideApplicationParameter: true
    upgradeMode: Monitored
    FailureAction: Rollback
    # 60 min timeout
    UpgradeTimeoutSec: 3600
I have looked up this issue online. People generally discuss how to make sure that all services with code changes have their versions updated. In my case, I am positive the versions are updated for the changed services.
How do I configure the deployment so that it does no code comparison and upgrades only the services with an updated version?

The gist of the answer, based on my experience, is that Service Fabric only deploys the entire application as a single versioned collection of its internal packages, but leaves the specifics of service deployment open for you to customize. So even if you haven't changed a single service within that application, it will still be bundled up in the application deployed to the cluster, and you will still have to decide whether or not to deploy each service (or use the provided Deploy-FabricInstaller.ps1 script to handle such deployments).
My suggestion (based on my own pipeline): if you really trust your script to detect changed versions properly, and you really want to avoid upgrades of services you haven't changed (presumably to cut down on deployment time), then shift your approach. Don't fight the application deployment; optimize service installation instead.
Build the solution as you normally would, and use the ServiceFabricUpdateManifests@2 task to let it replace the versions of each service automatically.
At this point, I generally diverge from the standard pipeline.
I've got a script that removes the DefaultServices from ApplicationManifest.xml, so no services are deployed automatically.
I run a separate script that checks whether the application already exists, to determine whether this is a fresh deployment or an upgrade.
Depending on whether it's an install or an upgrade, I take a different deployment-task route and capture information about the existing services in the cluster.
Once the application has been fully deployed, a custom script of mine (similar to the included Deploy-FabricApplication.ps1 script) iterates through the services and installs, upgrades or removes each one based on its state in the new deployment, using ServiceTemplates to populate configuration values; a rough sketch follows at the end of this answer.
Because I control the whole of the actual service install process, I can infer, from data I collect and pass through the build pipeline, what should and shouldn't actually be deployed, independent of the application deployment.
You might find the customization process a bit easier going this route than with the approach you're taking.
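To make the last two steps concrete, here is a heavily simplified sketch of that per-service reconciliation pass, using the standard Service Fabric PowerShell cmdlets. The $desiredServices list, the cluster endpoint and all names are hypothetical stand-ins for data collected earlier in the pipeline; this is a sketch of the idea, not my actual script:

# Sketch only: reconcile cluster services against the new deployment manually.
# Assumes $desiredServices (name + service type) was built by the pipeline.
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.example.com:19000"

$appName = "fabric:/MyApplication"
$existing = Get-ServiceFabricService -ApplicationName $appName

foreach ($svc in $desiredServices) {
    $serviceName = "$appName/$($svc.Name)"
    if ($existing.ServiceName -notcontains $serviceName) {
        # New in this deployment: create it explicitly (stateless example).
        New-ServiceFabricService -ApplicationName $appName `
            -ServiceName $serviceName `
            -ServiceTypeName $svc.TypeName `
            -Stateless -PartitionSchemeSingleton -InstanceCount -1
    }
    # Unchanged services keep running; changed ones were carried by the upgrade.
}

# Remove services that are no longer part of the new deployment.
foreach ($svc in $existing) {
    $shortName = $svc.ServiceName.AbsoluteUri.Split('/')[-1]
    if ($desiredServices.Name -notcontains $shortName) {
        Remove-ServiceFabricService -ServiceName $svc.ServiceName -Force
    }
}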

Related

Azure DevOps artefact retention

I’ve got a monorepo with 10 separate CI/CD pipelines written in YAML.
I’ve noticed lately that we’ve lost a vast number of runs, some of which had successful production releases.
Am I right in thinking that the project retention settings apply to all pipelines rather than to individual ones?
I’ve been reading the Microsoft docs, and I think that in order to retain runs going forward, I have to use the API via a PowerShell script.
I assume said script needs to run after a successful deployment to production.
I’m quite surprised that there isn’t a global option to say ‘keep all production releases’.
The project retention policy settings apply to all pipeline runs, not to individual pipelines, so you cannot use them to retain specific successful production releases directly.
To achieve this, you can use a PowerShell script to retain specific runs, gated by a condition. Add the PowerShell script as the last task of your deployment, so it runs only when the release should be retained. Refer to this official doc: https://learn.microsoft.com/en-us/azure/devops/pipelines/build/run-retention?view=azure-devops
Here is an example that retains a run effectively forever, based on a condition:
- powershell: |
   $contentType = "application/json";
   $headers = @{ Authorization = 'Bearer $(System.AccessToken)' };
   $rawRequest = @{ daysValid = 365000; definitionId = $(System.DefinitionId); ownerId = 'User:$(Build.RequestedForId)'; protectPipeline = $false; runId = $(Build.BuildId) };
   $request = ConvertTo-Json @($rawRequest);
   $uri = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/retention/leases?api-version=6.0-preview.1";
   Invoke-RestMethod -uri $uri -method POST -Headers $headers -ContentType $contentType -Body $request;
  displayName: 'PowerShell Script'
  # Example condition: only retain successful runs of the production branch
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))

How to restore NuGet packages in an Azure Pipeline?

I am new to Azure DevOps and am trying to create my first Azure pipeline. I have an ASP.NET MVC project, and a few NuGet packages need to be restored before the MSBuild step.
Unfortunately, the NuGet restore is failing with the following error:
The pipeline is not valid. Job Job_1: Step 'NuGetCommand' references
task 'NuGetCommand' at version '2.194.0' contains an execution handler
that relies on NodeJS version '6' which is restricted by your
administrator.
NodeJS 6 came disabled out of the box so we are not going to enable it.
My Questions:
Is there an alternative to NuGet restore that does not use NodeJS?
Is there a way to update the NodeJS6 to a higher version?
Update 23-Nov-2021
I have found a workaround for the time being: a custom PowerShell script that restores the NuGet packages and builds the Visual Studio project.
# Path to the MSBuild that ships with Visual Studio 2019 Enterprise
$msBuildExe = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe'
Write-Host "Restoring NuGet packages" -ForegroundColor Green
# Restore via MSBuild's restore target; $path points at the project file
& "$($msBuildExe)" "$($path)" /p:Configuration=Release /p:platform=x86 /t:restore
Note: $path here is the path to my .csproj file
Apparently, other people are facing the same issue, and it is presumably just a matter of time before the tasks are updated by the open-source community.
Here are some similar issues being reported in other tasks as well:
https://github.com/microsoft/azure-pipelines-tasks/issues/15526
https://github.com/microsoft/azure-pipelines-tasks/issues/15511
https://github.com/microsoft/azure-pipelines-tasks/issues/15516
https://github.com/microsoft/azure-pipelines-tasks/issues/15525
It's Azure DevOps' NuGetCommand task that uses NodeJS, not NuGet itself. Therefore, the workaround is to find a way to restore without the NuGetCommand task.
Idea 1: use the DotNetCoreCLI task instead. However, this probably won't work for you, since you said your project is ASP.NET MVC rather than ASP.NET Core. It also appears to need NodeJS to run.
Idea 2: Use MSBuild restore. You can test whether this works on your local machine by clearing your global packages folder, or temporarily configuring NuGet to use a different path, and then running msbuild -t:restore My.sln from a Developer PowerShell for Visual Studio prompt. If your project uses packages.config rather than PackageReference, you'll also need to pass -p:RestorePackagesConfig=true (although that may currently be broken). I'm not an expert on Azure Pipelines tasks, so I don't know what it means that this task defines both PowerShell and Node execution entry points, but it may mean the task works even when your CI agent doesn't allow NodeJS.
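A quick sketch of that local test (the solution name and throwaway folder are illustrative):

# Point NuGet at a throwaway packages folder so the restore does real work
$env:NUGET_PACKAGES = Join-Path $env:TEMP "nuget-restore-test"
# PackageReference projects
msbuild -t:restore My.sln
# packages.config projects need the extra switch
msbuild -t:restore -p:RestorePackagesConfig=true My.sln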
Idea 3: Don't use any of the built-in tasks; just use - script: or - task: PowerShell@2. Even that is a little questionable, since the powershell task also defines a Node execution entry point. I'm guessing it will work, but I don't have access to a CI agent where NodeJS is forbidden, so I couldn't test even if I wanted to. If it does work, you can run MSBuild yourself (though it might then be your responsibility to locate msbuild.exe if it's not on the path), or you can download nuget.exe yourself and execute it from your script. The point is: once you can get Azure Pipelines' script task working, you can run any script and do everything you need yourself.
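For example, a minimal sketch of downloading nuget.exe and restoring inside such a script step (the solution path is illustrative):

# Download the latest nuget.exe from the official distribution endpoint
Invoke-WebRequest -Uri "https://dist.nuget.org/win-x86-commandline/latest/nuget.exe" -OutFile nuget.exe
# Restore the solution (handles both packages.config and PackageReference)
.\nuget.exe restore .\MySolution.sln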
Idea 4: Use Microsoft-hosted agents. Microsoft has documented all the software pre-installed on those machines, which includes NodeJS. The downside is that once you exceed the free quota it costs money, and I've worked for companies where it's easier to get money for a once-off hardware purchase (pretending that maintenance of the server is free, even though it reduces team productivity) than for a monthly service. So I'll totally understand if this is not an option for you.
Idea 5: Talk to whoever maintains your CI agents and convince them to allow and install NodeJS. It's clearly a fundamental part of Azure Pipelines. The tasks are open source on GitHub, and you can see that pretty much all of them use NodeJS to orchestrate whatever work they do. Frankly, I thought the agent software itself was a NodeJS application, so I'm surprised it runs without NodeJS at all.

Azure release management with Visual Studio Online

I'm looking for some advice concerning release management in Azure.
I've found a lot of resources, but I still have some questions.
I have an ASP.NET 4 solution (to keep it simple: one ASP.NET project, one database project, one test project).
I'm using Git in Visual Studio Online.
At the moment I have one App Service and one SQL Server database in Azure.
I have a build that downloads NuGet packages, builds the solution, deploys a DACPAC for the database, runs the tests from the test project (I have integration tests that use the database) and finally deploys the app to an Azure App Service.
What I want to do seems like standard practice:
I want to create the build, then deploy it to a "dev" environment in Azure, then to "qa", then to "staging" and finally to "prod".
In my web project, I created a different web.config transformation for each environment.
I've seen the releases feature in Visual Studio Online, and I get that it handles the deployment part across the different environments.
What I still have questions about:
In Azure:
Do I create one App Service per environment, or do I create a single App Service and use slots?
Do I create one SQL Server per environment, or is it better to use a single SQL Server with one database per environment?
In Visual Studio Online:
How do I set up the tasks?
In the build part, which configuration do I use? Which environment do I select?
In the build, how do I handle the database project? I think the deployment part belongs in the release part, but I don't see how to configure the connection string.
For the tests: do I execute them in the release part? How do I change the connection strings and app settings? There are no transformations for the app.config.
I've seen that there are test plans as well, but I don't really get what they're for.
Can somebody help me see all of this a little more clearly?
I cannot answer all of these, but I can answer a few.
I would create separate Web App instances for your separate environments. With slots, your slots exist on the same Web App and share computing resources. If something goes horribly wrong (your staging code pegs CPU to 100% or eats all of your RAM), this will cause problems for your production slot. I see slots as part of A/B testing or to aid in deployment.
You will likely need to create a separate database per environment as well. This is almost always required if you will be upgrading your database schema at any point in the future and introducing breaking changes. For example, what happens if your production code requires a specific field in a database table, but the next version of your database removes that field?
You mentioned you're using web.config transforms, but I want to throw out another option that we've found easier, with fewer moving parts and sources of error. If you're only changing connection strings and AppSettings, consider using the Web App's application settings, set per environment. These override whatever is in your web.config. That way you can forget about web.config transforms and have one less thing that could go wrong in a deployment.
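For what it's worth, a sketch of setting those per-environment values with the Az PowerShell module (the resource and setting names are illustrative):

# Sketch: per-environment overrides live on the Web App, not in web.config.
# Note: Set-AzWebApp -AppSettings replaces the whole collection, so merge
# with the existing settings first in a real script.
Set-AzWebApp -ResourceGroupName "rg-qa" -Name "myapp-qa" `
    -AppSettings @{ "Environment" = "QA"; "ApiBaseUrl" = "https://myapp-qa-api.example.com" }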
Since you're using a Database project, to deploy your database, check out the VSTS Azure SQL Database Deployment task. It'll use your database project to create a DACPAC, and then deploy that to your target server.
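As far as I know, that task essentially wraps SqlPackage.exe; for reference, a minimal sketch of the equivalent manual publish (the SqlPackage path, DACPAC path and connection string are all illustrative):

# Sketch: publish a DACPAC with SqlPackage.exe (paths/target are illustrative)
& "C:\Program Files\Microsoft SQL Server\150\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:"bin\Release\MyDatabase.dacpac" `
    /TargetConnectionString:"Server=tcp:myserver.database.windows.net;Database=mydb-qa;User ID=deploy;Password=<secret>"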

Deploy two VSTS repositories to one Azure web app

Let's say I have an Azure App Service web app at foo.azurewebsites.net. The code for the web app (a simple Node.js server and React frontend) is hosted on VSTS, and a custom deployment script is configured to build and deploy the web app every time code is pushed to the repository's master branch. In other words, the standard web app configuration.
Now, all of my API code (just a Node.js server) is in another repository on VSTS. I'd like to be able to do the following:
Have all requests to foo.azurewebsites.net/api be handled by the API server (an implication of this, which I would nonetheless like to state explicitly, is that the server can ask the browser to set cookies that the web app can then read, and vice versa).
Set up similar continuous deployment for the API server, such that it gets redeployed whenever there are code changes in the API repo.
Be able to maintain the web app and API repositories completely separately.
This seems like a fairly standard scenario... is there an accepted solution? I came across this, but it seems like a pretty hacky way to do it, not to mention that I have no idea what the correct URL for the VSTS web hook is and can't seem to find any information on it. Also, that example doesn't cover how to deal with point (1) above.
EDIT: Additional clarification
Note that the accepted answer on this question is not what I'm looking for. It describes how to pull from a second repository at deployment time, but not how to have that second repository trigger deployments, or how to handle the fact that the second repository is its own server. Additionally, it introduces a dependency between the two repositories, since the deploy.cmd is presumably under source control in the first repository.
EDIT: Virtual Directories
Thanks to @CtrlDot for pointing out that Virtual Directories are the way to solve (1). Still seeking guidance on (2) and (3).
I think the concept you are referring to is called Virtual Directories.
I'm not sure which VSTS task you are using to deploy, but based on the article provided, you should be able to configure it to target only the virtual directory you want to deploy to.
EDIT
Sorry for not being more clear. The AzureRmWebAppDeployment task has a parameter for the virtual application name. Simply set it in your deployment pipeline for the API project (/api), and leave it blank for the main project.
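For completeness, one way to create the /api virtual application itself with Az PowerShell; this is only a sketch, and the resource names, API version and paths are illustrative:

# Sketch: add an /api virtual application to an existing Web App.
$config = Get-AzResource -ResourceGroupName "rg-prod" `
    -ResourceType "Microsoft.Web/sites/config" `
    -ResourceName "foo/web" -ApiVersion "2018-02-01"

# Append the new virtual application to the site configuration
$config.Properties.virtualApplications += @{
    virtualPath    = "/api"
    physicalPath   = "site\api"
    preloadEnabled = $true
}

Set-AzResource -ResourceId $config.ResourceId -Properties $config.Properties `
    -ApiVersion "2018-02-01" -Force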

Deploy the same Azure binaries to multiple subscriptions

We are trying to work out a good continuous deployment setup using TFS, Visual Studio and Azure. At our company, each developer has their own Azure subscription that we use for testing, as well as shared QA1/QA2/PROD subscriptions that we can deploy to. We have matching TFS XAML build definitions for each of these, running Powershell scripts with parameters and PublishSettings files.
This all gives us a set of .cspkg and .cscfg files, and in theory we can deploy the right .cspkg with the correct .cscfg to any Azure subscription.
The problem we are encountering now, though, is that we want to start using the Redis Cache service. Installing the NuGet package writes subscription-specific settings into the web.config, to point at the cache. This means the .cspkg is now compiled specifically for one Azure subscription.
We could use SlowCheetah to merge web.config files on build, but this means that we would have to compile the package for each build definition, and as the number of developers increases this is obviously going to become unsustainable.
I am looking for a way to keep our old generic packages and still use the Redis Cache. We can connect to the cache in code during app_start, but then we can't use it to store IIS session state. I understand that the Azure Load Balancer is meant to keep users on the same server, but I'm unsure how that will work as we swap servers in/out.
It feels like we are approaching the problem wrong and there should be a simple solution that we are overlooking.
We are using Azure Tools 2.6, Visual Studio 2013, TFS 2015r2.
I think there are always three ways of doing this.
The 1st is configuring during build: you build one package per target environment, as you described, which is undesirable in most scenarios.
The 2nd is configuring during deployment: you open the cspkg file, change the config, then put it back before upload, without recompiling.
The 3rd is configuring after deployment: a configuration-management tool adjusts the config file for you on the fly.
We use Octopus Deploy to achieve #2: our CI tool feeds Octopus the .cspkg and .cscfg, and Octopus handles the rest (a sketch of the idea follows below). I would definitely not go for #1, but #3 is a valid option too.
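A minimal sketch of the #2 idea, rewriting a setting in a .cscfg for the target subscription before upload (the file name, setting key and value are illustrative):

# Sketch: rewrite one setting in a .cscfg for the target environment.
$cscfgPath = ".\ServiceConfiguration.QA.cscfg"
[xml]$cscfg = Get-Content $cscfgPath

# Locate the setting by name and swap in the environment-specific value
$setting = $cscfg.ServiceConfiguration.Role.ConfigurationSettings.Setting |
    Where-Object { $_.name -eq "RedisCache.ConnectionString" }
$setting.value = "myqa-cache.redis.cache.windows.net:6380,password=<secret>,ssl=True"

$cscfg.Save((Resolve-Path $cscfgPath).Path)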
As of today we store all our connection settings in .cscfg files. For security reasons, we avoid storing any production connection strings in source control, only QA ones, and accordingly we have CI for QA but not for production. This works well for us; we just maintain a different .cscfg for each environment (subscription).
However, in the near future I think we will move to Key Vault for this.
