Deploy the same Azure binaries to multiple subscriptions

We are trying to work out a good continuous-deployment setup using TFS, Visual Studio and Azure. At our company, each developer has their own Azure subscription that we use for testing, as well as shared QA1/QA2/PROD subscriptions that we can deploy to. We have matching TFS XAML build definitions for each of these, running PowerShell scripts with parameters and PublishSettings files.
This all gives us a set of .cspkg and .cscfg files, and in theory we can deploy the right .cspkg with the correct .cscfg to any Azure system.
The problem we are encountering now, though, is that we want to start using the Redis Cache service. Installing the NuGet package writes subscription-specific settings into the web.config to point at the cache. This means that the .cspkg is now compiled specifically for one Azure subscription.
We could use SlowCheetah to merge web.config files on build, but this means that we would have to compile the package for each build definition, and as the number of developers increases this is obviously going to become unsustainable.
I am looking for a way to keep our old generic packages and still use the Redis Cache. We can connect to the cache in code during app_start, but then we can't use it to store IIS session state. I understand that the Azure Load Balancer is meant to keep users on the same server, but I'm unsure how that will work as we swap servers in/out.
It feels like we are approaching the problem wrong and there should be a simple solution that we are overlooking.
We are using Azure Tools 2.6, Visual Studio 2013, TFS 2015r2.

I think there are generally three ways of doing this.
The first is config during build: you build one package per environment, as you described, which is not desirable in most scenarios.
The second is config during deployment: you open the cspkg file, change the config, then repack it before upload, without recompiling.
The third is config after deployment: a configuration management tool adjusts the config file for you on the fly.
We use Octopus Deploy to achieve #2 above: our CI tool feeds Octopus the cspkg and cscfg, and Octopus handles the rest. I would definitely not go with #1, but #3 is a valid option too.
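For illustration, here is a minimal sketch of option #2 in PowerShell using the .NET zip APIs (the package name and the Redis hostnames are made up; note that a real .cspkg may nest the site payload inside an inner archive, which tools like Octopus take care of for you):

Add-Type -AssemblyName System.IO.Compression.FileSystem

# Open the package in place; a .cspkg is a zip-based container.
$zip = [System.IO.Compression.ZipFile]::Open('MyApp.cspkg', 'Update')
try {
    $entry = $zip.Entries | Where-Object { $_.FullName -like '*web.config' } | Select-Object -First 1
    $reader = New-Object System.IO.StreamReader($entry.Open())
    $xml = $reader.ReadToEnd()
    $reader.Dispose()

    # Swap the environment-specific cache endpoint (hostnames are placeholders).
    $xml = $xml -replace 'dev\.redis\.cache\.windows\.net', 'qa1.redis.cache.windows.net'

    $stream = $entry.Open()
    $stream.SetLength(0)
    $writer = New-Object System.IO.StreamWriter($stream)
    $writer.Write($xml)
    $writer.Dispose()
}
finally {
    $zip.Dispose()
}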

As of today, we store all our connection settings in .cscfg files. For security reasons, we avoid storing any production connection strings in source control; only the QA ones are checked in. We have CI for QA, but not for production. This works well for us: we just maintain a different .cscfg for each environment (subscription).
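For what it's worth, the deployment step then looks roughly like this with the classic service-management cmdlets (subscription, service and file names below are placeholders):

# Import credentials and pick the target subscription.
Import-AzurePublishSettingsFile '.\QA1.publishsettings'
Select-AzureSubscription 'QA1'

# Same .cspkg every time; only the .cscfg differs per environment.
New-AzureDeployment -ServiceName 'myservice-qa1' `
    -Package '.\MyApp.cspkg' `
    -Configuration '.\ServiceConfiguration.QA1.cscfg' `
    -Slot Production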
However, in the near future I think we will move to Key Vault for this.

Related

Deploying a DLL from a pipeline to the root folder of an App Service

We have a site design that makes use of modules that are developed separately from the master site. Through reflection, we pick up the modules when the main app starts.
This works fine in local development and on a normal web server. But in the Azure environment, when we try to use FTP to deploy the modules to our Azure-hosted site, we are unable to, because the main Azure deployment is read-only (it is running from a package).
Is it possible to not have the main site running from a package? Is it acceptable to run it that way?
Is there another way to deploy Dlls to the Azure-hosted site without having them be part of the main site's build and deploy? Ultimately we are trying to avoid rebuilding the main site every time we want to add a module.
...our Azure-hosted site we are unable to because the main Azure deployment is read-only (because it is running from a package).
You could set WEBSITE_RUN_FROM_PACKAGE=0 in your app settings to make the site writable; WEBSITE_RUN_FROM_PACKAGE=1 makes it read-only.
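A sketch of flipping that setting with Az PowerShell (the resource group and app names are placeholders):

# Caution: -AppSettings replaces the whole collection, so include your
# existing settings alongside the one you are changing.
Set-AzWebApp -ResourceGroupName 'my-rg' -Name 'my-site' `
    -AppSettings @{ 'WEBSITE_RUN_FROM_PACKAGE' = '0' }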
Is it possible to not have the main site running from a package? Is it acceptable to run it that way?
You could consider switching your deployment method to Zip Deploy so that your Azure-hosted site is not read-only.
Refer to this doc.
What can help in this situation is to build and publish the modules using Azure Artifacts.
There are several approaches; please check the best-practices page here:
https://learn.microsoft.com/en-us/azure/devops/artifacts/concepts/best-practices?view=azure-devops
Depending on your approach, the build, the release, and also local development can use these published modules.
Example
You can, for example, use a private NuGet feed:
Publish the modules from your modules pipeline:
https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/nuget?toc=%2Fazure%2Fdevops%2Fartifacts%2Ftoc.json&view=azure-devops&tabs=yaml
Consume these from Visual Studio:
https://learn.microsoft.com/en-us/azure/devops/artifacts/nuget/consume?view=azure-devops&tabs=windows
And consume them from the website pipeline; this can be a build, but also a release if you want to side-load them:
https://learn.microsoft.com/en-us/azure/devops/pipelines/packages/nuget-restore?source=recommendations&view=azure-devops
To keep a record of what modules are used in the website, I advise building or releasing the website whenever the modules change.
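As a rough sketch, publishing a module to a private feed can be as little as this (assuming an SDK-style project and the Azure Artifacts credential provider; the project path, organization and feed names are placeholders):

# Pack the module and push it to the private feed. In a pipeline, run the
# NuGetAuthenticate task first; --api-key just needs any non-empty value.
dotnet pack .\Modules\MyModule.csproj -c Release -o .\packages
dotnet nuget push .\packages\*.nupkg `
    --source 'https://pkgs.dev.azure.com/myorg/_packaging/modules/nuget/v3/index.json' `
    --api-key az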

How to restore NuGet package in Azure Pipeline?

I am new to Azure DevOps and trying to create my first Azure pipeline. I have an ASP.NET MVC project, and there are a few NuGet packages that need to be restored before the MSBuild step.
Unfortunately, the NuGet restore is failing with the following error:
The pipeline is not valid. Job Job_1: Step 'NuGetCommand' references
task 'NuGetCommand' at version '2.194.0' contains an execution handler
that relies on NodeJS version '6' which is restricted by your
administrator.
NodeJS 6 came disabled out of the box so we are not going to enable it.
My Questions:
Is there an alternative to NuGet restore that does not use NodeJS?
Is there a way to update NodeJS 6 to a higher version?
Update 23-Nov-2021
I have found a workaround for the time being: I am using a custom PowerShell script to restore the NuGet packages and build the Visual Studio project.
# Path to MSBuild from the local Visual Studio 2019 installation.
$msBuildExe = 'C:\Program Files (x86)\Microsoft Visual Studio\2019\Enterprise\MSBuild\Current\Bin\MSBuild.exe'
Write-Host "Restoring NuGet packages" -ForegroundColor Green
# Restore packages for the project; a second invocation with /t:Build then builds it.
& "$($msBuildExe)" "$($path)" /p:Configuration=Release /p:platform=x86 /t:restore
Note: $path here is the path to my .csproj file
Apparently, other people are also hitting the same issue, and it is just a matter of time before the task is updated by the open-source community.
Here are some similar issues being faced in other tasks as well:
https://github.com/microsoft/azure-pipelines-tasks/issues/15526
https://github.com/microsoft/azure-pipelines-tasks/issues/15511
https://github.com/microsoft/azure-pipelines-tasks/issues/15516
https://github.com/microsoft/azure-pipelines-tasks/issues/15525
It's Azure DevOps' NuGetCommand task that uses NodeJS, not NuGet itself. Therefore, you can find a way to restore without using Azure DevOps' NuGetCommand task.
Idea 1: Use the DotNetCoreCLI task instead. However, this probably won't work for you, since you said your project is ASP.NET MVC rather than ASP.NET Core. It also appears to need NodeJS to run.
Idea 2: Use MSBuild restore. You can test on your local machine whether or not this works by clearing your global packages folder, or temporarily configuring NuGet to use a different path, and then running msbuild -t:restore My.sln from a Developer PowerShell For Visual Studio prompt. If your project uses packages.config, rather than PackageReference, you'll need to also pass -p:RestorePackagesConfig=true (although maybe this is currently broken). I'm not an expert on Azure Pipelines tasks, so I don't know what it means that this task defines both PowerShell and Node execution entry points, but maybe it means it will work even if your CI agent doesn't allow NodeJS.
Idea 3: Don't use any of the built-in tasks, just use - script: or - task: PowerShell@2 (see the sketch after this list), but even that is a little questionable, since even the powershell task defines a Node execution entry point. I'm guessing it will work, but I don't have access to a CI agent where NodeJS is forbidden, so I couldn't test even if I wanted to. Anyway, if this works, then you can run MSBuild yourself (but it might also be your responsibility to find msbuild.exe if it's not on the path). Or you can download nuget.exe yourself and execute it in your script. The point is, if you can get Azure Pipeline's script task working, you can run any script and do everything you need yourself.
Idea 4: Use Microsoft Hosted agents. They have documented all the software they pre-install on the machines, which includes Node JS. Downside is that once you exceed the free quota it costs money, and I've worked for companies where it's easier to get money to buy hardware once-off, and pretend that maintenance of the server is free, even though it reduces team productivity, rather than pay for a monthly service. So, I'll totally understand if this is not an option for you.
Idea 5: Talk to whoever maintains your CI agents and convince them to allow & install NodeJS. It's clearly a fundamental part of Azure Pipelines. The tasks are open source on github, and you can see that pretty much all of them use NodeJS to orchestrate whatever work it does. Frankly, I thought the agent software itself was a NodeJS application, so I'm surprised that it runs without NodeJS.
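Here is the sketch promised under Idea 3: a plain PowerShell step that locates MSBuild with vswhere.exe (which ships with the Visual Studio installer) and restores without any built-in task (the solution name is a placeholder):

$vswhere = "${env:ProgramFiles(x86)}\Microsoft Visual Studio\Installer\vswhere.exe"
$msbuild = & $vswhere -latest -requires Microsoft.Component.MSBuild `
    -find 'MSBuild\**\Bin\MSBuild.exe' | Select-Object -First 1
# -p:RestorePackagesConfig=true only matters for packages.config projects.
& $msbuild 'My.sln' -t:restore -p:RestorePackagesConfig=true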

Azure Data Factory: Visual Studio

I am learning Azure Data Factory and would really like to do its development in a Visual Studio environment. I have VS 2019 installed on my machine and I don't see an option to develop ADF in it.
Is there any version of VS that ADF can be developed in, or are we stuck with developing it in the web UI for the time being?
I know BI development tools needed an additional plug-in to the VS environment to work. Does ADF need something similar to that too?
If not, how can we back up our work done in the ADF web UI? Is there an option to link it with Azure Repos or Git?
Starting with ADF V2, development is really intended to be done completely in the web interface. I had the same question as you at the time, but now the web tools are quite good and I don't give it a second thought. While I'm sure there are other options for developing and deploying the ARM templates, do yourself a favor and use the web UI.
By default, Data Factory only saves code changes on "Publish". An optional configuration allows source control via Git integration; you can use either Azure DevOps or GitHub. I highly recommend this approach, even if you only ever work in the main branch (fine for lone developers, a bad idea for collaboration). In this case, Publish takes the current state of the main branch and surfaces your artifacts to the ADF service. That means you will still need to Publish for your changes to be live.
NOTE: Git integration is also supported in Azure Synapse, where it has tremendous value for collaboration across a wide variety of artifact types.

Azure Data Factory V2 multiple environments like in SSIS

I'm coming from a long SSIS background. We're looking to use Azure Data Factory v2, but I'm struggling to find any (clear) way of working with multiple environments. In SSIS we would have project parameters tied to the Visual Studio project configuration (e.g. development/test/production etc...); say there were two parameters for SourceServerName and DestinationServerName, these would point to different servers depending on whether we were in development or test.
From my initial playing around I can't see any way to do this in Data Factory. I've searched Google, of course, but any information I've found seems to be about CI/CD, then talks about Git 'branches' and is difficult to follow.
I'm basically looking for a very simple explanation and example of how this would be achieved in Azure Data Factory v2 (if it is even possible).
It works differently. You create an instance of data factory per environment and your environments are effectively embedded in each instance.
So here's one simple approach:
Create three data factories: dev, test, prod
Create your linked services in the dev environment pointing at dev sources and targets
Create the same named linked services in test, but of course these point at your test systems
Now when you "migrate" your pipelines from dev to test, they use the same logical name (just like a connection manager)
So you don't designate an environment at execution time or map variables or anything... everything in test just runs against test, because that's the way the linked services have been defined.
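As a sketch with the Az.DataFactory cmdlets (resource names and definition files below are placeholders), the same logical linked service gets pushed to each factory with a per-environment definition:

# Same linked-service name in every factory; only the definition differs.
Set-AzDataFactoryV2LinkedService -ResourceGroupName 'rg-dev' -DataFactoryName 'adf-dev' `
    -Name 'SqlSource' -DefinitionFile '.\SqlSource.dev.json' -Force
Set-AzDataFactoryV2LinkedService -ResourceGroupName 'rg-test' -DataFactoryName 'adf-test' `
    -Name 'SqlSource' -DefinitionFile '.\SqlSource.test.json' -Force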
That's the first step.
The next step is to connect only the dev ADF instance to Git. If you're a newcomer to Git it can be daunting but it's just a version control system. You save your code to it and it remembers every change you made.
Once your pipeline code is in git, the theory is that you migrate code out of git into higher environments in an automated fashion.
If you go through the links provided in the other answer, you'll see how you set it up.
I do have an issue with this approach though: you have to look up all of your environment values in a key store (such as Key Vault), which to me is silly, because why do we need to designate the test server's hostname every time we deploy to test?
One last thing: if you have a pipeline that doesn't use a linked service (say, a REST pipeline), I haven't found a way to make it environment-aware. I ended up building logic around the current data factory's name to dynamically change endpoints.
This is a bit of a brain dump, but feel free to ask questions.
Although it's not recommended - yes, you can do it.
Take a look at the Linked Service; in this case, I have a connection to an Azure SQL Database.
You can use dynamic content for both the server name and the database name.
Just add a parameter to your pipeline, pass it to the Linked Service and use it in the required field.
Let me know whether I explained it clearly enough.
Yes, it's possible, although not as simple as it was in VS for SSIS.
1) First of all: there is no desktop application for developing ADF, only the browser.
Therefore developers should make their changes in the DEV environment, and for many reasons the best way to do that is to work with a Git repository connected.
2) Then, you "only" need to:
a) Publish the changes (this creates/updates the adf_publish branch in Git)
b) With Azure DevOps, deploy the code from adf_publish, replacing the required parameters for the target environment (see the sketch below).
I know that at the beginning it sounds horrible, but the sooner you set up an environment like this, the more time you save while developing pipelines.
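For a flavor of step (b), here is a sketch with Az PowerShell (resource and parameter-file names are placeholders; ARMTemplateForFactory.json is what ADF writes to adf_publish):

New-AzResourceGroupDeployment -ResourceGroupName 'rg-adf-test' `
    -TemplateFile '.\ARMTemplateForFactory.json' `
    -TemplateParameterFile '.\ARMTemplateParametersForFactory.test.json' `
    -Mode Incremental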
How to do these things step by step?
I describe all the steps in the following posts:
- Setting up Code Repository for Azure Data Factory v2
- Deployment of Azure Data Factory with Azure DevOps
I hope this helps.

Azure release management with Visual Studio Online

I'm looking for some advice concerning release management in Azure.
I've found a lot of resources, but I still have some questions.
I have an ASP.NET 4 solution (to make it simple: one ASP.NET project, one database project, one test project).
I'm using Git in Visual Studio Online.
At this moment I have one App Service and one SQL Server database in Azure.
I have a build that downloads NuGet packages, builds, deploys a DACPAC for the database, executes the tests from the project (I have integration tests that use the database) and finally deploys the app to an Azure App Service.
What I want to do seems like "normal stuff":
I want to create the build, then deploy it to a "dev" environment in Azure, then to a "qa" environment, then to "staging" and "prod".
In my web project, I created different web.config transformations (one for each environment).
I've seen the releases in Visual Studio Online, and I get that they are for the deployment part to the different environments.
What I have questions about:
In Azure:
Do I create one App Service per environment, or do I create a single App Service and use slots?
Do I create one SQL Server for each environment, or is it best to use a single SQL Server and have one database per environment?
In Visual Studio Online:
How do I do the tasks?
In the build part, what configuration do I use? Which environment do I select?
In the build, how do I manage the database project? I think the deployment part should be in the release part, but I don't see how to configure the connection string.
For the tests: do I execute them in the release part? How do I change the connection strings and app settings? There are no transformations for the app.config.
I've seen that there are test plans as well, but I don't really get it.
Can somebody help me see all of this a little more clearly?
I cannot answer all of these, but I can answer a few.
I would create separate Web App instances for your separate environments. With slots, your slots exist on the same Web App and share computing resources. If something goes horribly wrong (your staging code pegs CPU to 100% or eats all of your RAM), this will cause problems for your production slot. I see slots as part of A/B testing or to aid in deployment.
You will likely need to create a separate database per environment as well. This is almost always required if you will be upgrading your database schema at any point in the future and introducing breaking changes. For example, what happens if your production code requires a specific field in a database table, but the next version of the database removes that field?
You mentioned you're using web.config transforms, but I want to throw out another option that we've found to be easier and have fewer moving parts and sources for error. If you're just changing connection strings and AppSettings, consider using the Web App's application settings per environment. These override whatever is in your web.config. Doing so means you can forget about web.config transforms and not have one more thing that could possibly go wrong in a deployment.
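A sketch of what that looks like with Az PowerShell (names and values are placeholders; app settings set here override matching appSettings keys in web.config, and the Web App's connection strings section does the same for connectionStrings):

# Caution: -AppSettings replaces the entire collection, so pass every setting.
Set-AzWebApp -ResourceGroupName 'rg-qa' -Name 'mysite-qa' -AppSettings @{
    'Smtp.Host' = 'smtp-qa.example.com'
    'LogLevel'  = 'Verbose'
}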
Since you're using a Database project, to deploy your database, check out the VSTS Azure SQL Database Deployment task. It'll use your database project to create a DACPAC, and then deploy that to your target server.
