I'd like to clarify the following gap regarding Azure ARM templates:
Suppose I have a master template with the following inside:
App Service plan creation
Azure SQL server creation
SQL elastic pool creation (using previously created Azure SQL server)
This template will be used for the initial creation of my cloud infrastructure.
Next, I will add a child (nested or linked) template to my master template.
The child template will contain the App Service Web App + SQL creation:
Web App creation (using App Service Plan defined in master template)
Azure SQL database creation (using Azure SQL server defined in master template)
Adding Azure SQL database to elastic pool (defined in master template)
I will omit several details, like the initial creation of the Azure Key Vault and the creation and storage in this vault of required credentials, such as the SQL admin username/password or the SSL certificates for my Web App.
So, what I want to have at the end of the template deployment execution is:
first template deployment
Creation of the basic infrastructure (an App Service plan for the web apps, and a SQL server with an elastic pool)
A single instance of an app service (web app+SQL) using previously created app service plan and elastic pool (where my SQL database will be placed)
second template deployment
A single (second) instance of an app service (web app+SQL) will be created using the existing infrastructure
N-th template deployment
A single (N-th) instance of an app service (web app+SQL) will be deployed <...>
The questions are:
Should I use nested or linked templates? What's the exact difference in my case?
Is my overall solution correct, or should I modify it / find another approach?
I've already found the following post saying, for example, that I can use a resource lock (to prevent deletion) or incremental deployment mode (to keep existing resources); however, this doesn't answer my question about the overall approach.
Nested and linked templates can be used interchangeably; they are essentially the same thing. One might argue that nested templates are inline templates while linked templates are actually linked from a URI, but it doesn't really matter: they are implemented slightly differently in the template, but the result is the same. "Child templates" is really what you want to call both.
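To make the difference concrete, here is a minimal sketch of a parent template containing both flavors (the child names, contents, and the URI are placeholders, not your actual resources):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2021-04-01",
      "name": "nestedChild",
      "properties": {
        "mode": "Incremental",
        "template": {
          "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
          "contentVersion": "1.0.0.0",
          "resources": []
        }
      }
    },
    {
      "type": "Microsoft.Resources/deployments",
      "apiVersion": "2021-04-01",
      "name": "linkedChild",
      "properties": {
        "mode": "Incremental",
        "templateLink": {
          "uri": "https://mystorageaccount.blob.core.windows.net/templates/webapp-sql.json",
          "contentVersion": "1.0.0.0"
        }
      }
    }
  ]
}
```

Both are Microsoft.Resources/deployments resources; the only structural difference is the inline template body versus a templateLink URI that must be reachable at deployment time.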
As for the actual questions:
Why do you want to use child templates at all? I don't see a use case for them.
I don't see anything wrong with the approach, apart from using child templates just for the sake of using them.
If you want your approach to be "modular" (hence the child-template usage), you could just as well use configuration and implementation separation to achieve the same result (the DRY method).
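As a hypothetical sketch of that separation (the environment names and SKU values here are made up for illustration), a single template can pull its per-environment configuration from a variable map instead of being duplicated per environment:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environment": {
      "type": "string",
      "allowedValues": [ "dev", "prod" ],
      "defaultValue": "dev"
    }
  },
  "variables": {
    "settings": {
      "dev":  { "planSku": "B1",   "capacity": 1 },
      "prod": { "planSku": "P1v2", "capacity": 2 }
    },
    "current": "[variables('settings')[parameters('environment')]]"
  },
  "resources": [],
  "outputs": {
    "planSku": {
      "type": "string",
      "value": "[variables('current').planSku]"
    }
  }
}
```

The implementation (the resources) stays identical across environments; only the configuration map changes.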
Related
I have a requirement to set up multiple environments at a time, so that we can test multiple projects independently at once. Ideally, we should be able to spin these environments up and down as necessary.
We have a microservice-based architecture and mostly use Azure PaaS services in our infrastructure.
Currently, I have automated our infrastructure through Terraform, and it's almost done, but the next step is deployment of the code. Since the services are not containerized, I tried using Azure Pipelines, but it's a huge task. Is there a better approach for how we could do this?
You should look at leveraging Azure Pipeline templates. Once a template is defined, it can be reused everywhere. For instance, with Terraform we created a template for doing the plan and apply that just needs to be fed the directory the Terraform code is located in. This saved time across all projects, as we just needed to reference our template and the rest was taken care of.
In terms of your other question, the ability to spin environments up and down can easily be achieved if the application is architected with that in mind. Keep in mind that for deployment certain things must have unique names (storage accounts, app services), while other things are potentially shared (e.g. the network).
The other piece to consider is how to ensure these ad hoc environments are actually being spun down. I would recommend something like a tagging strategy, or a process that cleans up resources that haven't been deployed to in x days.
I want to provision resources in Azure using ARM templates where the user can select required input parameters, like a VNet or Function App, that are already provisioned.
AWS, for example, has parameter types such as AWS::EC2::VPC::Id to list the VPCs available in a region. For the AWS reference, see https://aws.amazon.com/blogs/devops/using-the-new-cloudformation-parameter-types/
Is there something similar we can do in Azure too?
Similar, yes; the same, no. You can author a UI definition file that will allow you to restrict input. For some resources there are controls you can leverage, but there's also the capability to write a control that uses semi-custom logic (one that could call an Azure API to list SKUs, for example).
This covers the UI definition:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/managed-applications/create-uidefinition-functions
And you can bundle it with a deployment template like this:
https://preview.portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F100-marketplace-sample%2Fazuredeploy.json/createUIDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2F100-marketplace-sample%2FcreateUiDefinition.json
The feature is in preview right now, so you need to use preview.portal.azure.com instead of portal.azure.com, but the rollout will finish in a few weeks.
No, this is not possible in ARM templates. If you were doing managed applications with ARM templates you'd have some of the pickers (a very limited set), but with regular ARM templates you can't do that. You can create a PowerShell script that mimics that for you.
This is possible (now; maybe not when this question was asked) by using the Microsoft.Solutions.ResourceSelector UI element.
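As a minimal sketch of a createUiDefinition.json using that element (the resource type and output name are just an example), the selector lets the user pick an already-provisioned resource:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/0.1.2-preview/CreateUIDefinition.MultiVm.json#",
  "handler": "Microsoft.Azure.CreateUIDef",
  "version": "0.1.2-preview",
  "parameters": {
    "basics": [
      {
        "name": "vnetPicker",
        "type": "Microsoft.Solutions.ResourceSelector",
        "label": "Existing virtual network",
        "resourceType": "Microsoft.Network/virtualNetworks",
        "options": {
          "filter": {
            "subscription": "onBasics",
            "location": "onBasics"
          }
        }
      }
    ],
    "steps": [],
    "outputs": {
      "vnetId": "[basics('vnetPicker').id]"
    }
  }
}
```

The output (here, the resource ID of the selected VNet) can then be passed as a parameter into the deployment template it is bundled with.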
I have an ARM template to recreate a resource group with resources and their settings. This works fine.
Use case:
Some developer goes to the Azure portal and updates some settings for some resource. Is there a way to get the exact changes, so that they can be applied to my template (i.e., to update the template in source control)?
If I go to the automation script in the resource group, I can see all resources, but my template in source control is different (parameters, conditions, variables, multiple templates linked together, ...). I can't see at first glance what changes were made, and I can't use any diff.
Maybe I missed something completely, but how are you solving this issue?
Thanks.
It is not easy to see changes to resources by comparing templates from within the portal. Best practice is to always use ARM templates (and CI/CD pipelines) to provision resources, and to keep those ARM templates under source control so changes can be tracked.
Further than that, I think you have two main options to track these changes:
1) You can use the Azure Activity Log to track the changes. The Azure Activity Log is a subscription log that provides insight into subscription-level events that have occurred in Azure. This includes a range of data, from Azure Resource Manager operational data to updates on Service Health events.
2) Write a little intelligent code against the Management Plane API. A good starting point is https://resources.azure.com/subscriptions. You could write a small extract that pulls all your resources out daily and commits them to a git repo; commits will only appear when something has changed. You can then analyse the delta as and when you need.
Conceptually, the developer should never 'go to the Azure portal and update some settings for some resource', except for his own development / unit-testing work. He should then produce an updated ARM template for deployment in the TST etc. environments, and document his unit-tested changes with the new template. If his update collides with your resources in TST, he will probably come to you to explain his changes and discuss the resolution.
I'm new to Azure, and a little confused about cloud services.
I'm making a testing environment that consists of multiple instances (of the same VM), where each instance has a REST API server (consisting of two API functions: GetResults and SendFileForTesting) and a load balancer that distributes the requests across the VMs.
In each VM there is also a worker that processes the received files and saves the results in a shared DB.
The goal is for the file processing to be distributed across the available VMs and for the results to be saved in a shared place (so that the "GetResults" request would send all of the results to the client).
This is how it looks:
[LoadBalancer]
|
[Multiple VM nodes] - (API: GetResults, SendFileForTesting)
|
[Shared Result DB]
The question is, what is the best way to deploy this on azure?
Right now, I'm trying to create a load-balancer that has 3 clones of the same VM with the same REST API server and another VM that holds the shared DB.
Is there a better way to do this?
Thanks
In my opinion, a VM scale set (VMSS) is the best way to deploy this.
First, create two Azure VMs: one is the shared DB, the other is the API server. Configure the API server to connect to the shared DB, then capture the API server VM. After the capture completes, you can use a template to deploy a VM scale set with this image.
For more information about creating a custom image, please refer to this link.
For more information about using a template to create a VMSS with a custom image, please refer to this link.
(This template's LB rules only cover port 80; if you need more ports, please edit the template.)
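The part of such a template that ties the scale set to the captured image is the imageReference. As a fragment-level sketch (the image name is illustrative), inside the Microsoft.Compute/virtualMachineScaleSets resource you would point it at the managed image captured from the configured API server VM:

```json
{
  "virtualMachineProfile": {
    "storageProfile": {
      "imageReference": {
        "id": "[resourceId('Microsoft.Compute/images', 'apiServerImage')]"
      }
    }
  }
}
```

The rest of the scale set definition (sku capacity, network profile, LB rules) comes from the template linked above.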
I'm trying to set up Staging and Live environments in Azure (September toolkit), and I want a separate Staging and Live database, with different connection strings. Obviously I can do this with web.config transformations back in Visual Studio, but is there a way I can automate a change of connection string during a VIP swap, so that the staging site points to staging data and the live site to live data? I'd prefer not to have to deploy twice.
With the management APIs and the PowerShell Cmdlets, you can automate a large amount of the Azure platform and this can include coordinating a VIP switch and a connection string change.
This is the approach:
1) Add your database connection string to your ServiceConfiguration file.
2) Modify your app logic to read the connection string from the Azure-specific config by using RoleEnvironment.GetConfigurationSettingValue rather than the more typical .NET ConfigurationManager.ConnectionStrings API.
3) Handle RoleEnvironmentChanging so that your logic is notified if the Azure service configuration ever changes. Add code there to update your app's connection string, again using RoleEnvironment.GetConfigurationSettingValue.
4) Deploy to staging with a ServiceConfiguration setting for your "staging" DB connection string.
5) Write a PowerShell script that will invoke the VIP switch (built around the Move-Deployment cmdlet from the Windows Azure Platform PowerShell Cmdlets 2.0) and invoke a configuration change with a new ServiceConfiguration file that includes your "production" DB connection string (see Set-DeploymentConfiguration).
Taken together, these steps mean step 5 performs the VIP switch and the connection string update in a single automated operation.
I don't believe anything changes as far as the role is concerned when you do a VIP swap; rather, it alters the load balancer configuration.
So nothing happens in your app to cause it to change configuration. The only thing I can think of is that the URL changes between the two environments. You could implement code that chooses one of two connection strings based on the URL with which the app was accessed (assuming we're only talking about a web role), but it seems messy.
Fundamentally, I think the issue is that staging isn't a separate test environment; it's a stepping stone into production. Thus, Microsoft's assumption is that the configuration doesn't change.