I want to deploy an azure resource manager template with 2 VMs, one Windows and the other Linux. I read about using copy variable, but that's basically deploying the same resource multiple times. I couldn't figure out a way to deploy 2 different instances of the same resource in the same template. Need your help. Thanks!
You could use a bunch of variables for that, but since Windows and Linux VM inputs are quite different, I'd suggest you don't: it's way too much customization. It's easier to just deploy the two VMs as two individual resources.
You can use arrays to achieve your goal:
"osType": [
"windows",
"linux"
]
and then you would define variables like osimagewindows and osimagelinux, accessing them like this:
variables(concat('osimage', variables('osType')[copyIndex()]))
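Pulled together, a minimal sketch of what that copy loop could look like (the image references and VM name here are hypothetical, and required VM properties such as hardwareProfile, osProfile and networkProfile are omitted for brevity):

```json
{
  "variables": {
    "osType": [ "windows", "linux" ],
    "osimagewindows": {
      "publisher": "MicrosoftWindowsServer",
      "offer": "WindowsServer",
      "sku": "2019-Datacenter",
      "version": "latest"
    },
    "osimagelinux": {
      "publisher": "Canonical",
      "offer": "UbuntuServer",
      "sku": "18.04-LTS",
      "version": "latest"
    }
  },
  "resources": [
    {
      "type": "Microsoft.Compute/virtualMachines",
      "apiVersion": "2019-07-01",
      "name": "[concat('myvm-', variables('osType')[copyIndex()])]",
      "location": "[resourceGroup().location]",
      "copy": { "name": "vmLoop", "count": 2 },
      "properties": {
        "storageProfile": {
          "imageReference": "[variables(concat('osimage', variables('osType')[copyIndex()]))]"
        }
      }
    }
  ]
}
```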
P.S. That's too much trouble for the value you're getting; don't bother (unless you want to do this as an exercise).
If your VMs are largely identical except for the OS disk, take a look at this sample:
https://github.com/Azure/azure-quickstart-templates/blob/master/101-vm-simple-zones/azuredeploy.json#L194
If you wanted to add in something like Password vs. SSH authn, see: https://github.com/Azure/azure-quickstart-templates/blob/master/100-marketplace-sample/azuredeploy.json#L298-L299
To use the copy loop, you'd need arrays for those conditionals, but when you start to add up the duplicate resources you'd otherwise have (e.g. NICs, public IPs), dealing with the arrays and conditionals in a copy loop may very well be simpler than duplicating the resources.
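For instance, inside the loop the authentication-related properties can be switched with if(); this is a hypothetical fragment along the lines of the marketplace sample, where variables('linuxConfiguration') is assumed to hold an SSH public-key configuration:

```json
"osProfile": {
  "computerName": "[concat('myvm-', variables('osType')[copyIndex()])]",
  "adminUsername": "[parameters('adminUsername')]",
  "adminPassword": "[if(equals(variables('osType')[copyIndex()], 'windows'), parameters('adminPassword'), json('null'))]",
  "linuxConfiguration": "[if(equals(variables('osType')[copyIndex()], 'linux'), variables('linuxConfiguration'), json('null'))]"
}
```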
That help?
Related
I use terraform to initialize some OpenStack cloud resources.
I have a scenario where I would need to initialize/prepare a volume disk using a temporary compute resource. Once the volume is fully initialized, I would no longer need the temporary compute resource, but would need to attach the volume to another compute resource (different network configuration and other settings make reuse of the first one impossible). As you might have guessed, I cannot reach the intended long-term state directly without this intermediary step.
I know I could drive a state machine or some sort of processing queue from outside Terraform to achieve this, but I wonder whether it is possible to do it nicely in one single run of Terraform.
The best I could think of is that a main Terraform script would trigger creation/destruction of the intermediate compute resource by launching another Terraform instance responsible just for the intermediate resources (using terraform apply followed by terraform destroy). However, it requires extra care, such as ensuring a unique working folder to deal with concurrent "main" resource initializations, and it makes the whole thing a bit messy, I think.
I wonder whether it is possible to do it nicely in one single run of Terraform.
Sadly, no. Any "solution" you could possibly implement for that in a single Terraform run (e.g. running custom scripts through local-exec, etc.) will only be a convoluted mess, and will lead to more issues than it solves in the long term.
The proper way, as you wrote, is to use a dedicated CI/CD pipeline for a multi-stage deployment. Alternatively, don't use Terraform at all, and use another IaC tool.
I am working for an organisation which doesn't allow the use of features that are still under development, hence my problem.
I am running everything through Azure Pipelines, so I can't store variables I get from the Azure CLI and then use PowerShell, for example, to perform operations on them.
The issue specifically lies with lists (yes, really, one of the most common and well-documented structures in all of computer programming).
I am trying to get the available IP addresses of the VNet I created. Keep in mind that, this being a big organisation, I am not able to specify these myself; creating the VNet is a fairly common task with boilerplate YAML code.
Hence, I try running the Azure CLI command right after it in the pipeline:
az network vnet list-available-ips -g MyResourceGroup -n MyVNet
This correctly returns the available ip-addresses that I am looking for.
HOWEVER, storing one of these values seems to be impossible. I am not allowed to run
--query [0]
after the command, as this functionality is currently under development.
I do not seem to be able to perform ANY action on the variable in which I stored this list. I am at a loss here. How do I get access to one of the results in this list and then store it as a separate variable? I need to be able to store this value in my variable library for further steps in my development pipeline.
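One possible workaround (a sketch, not tested against a real pipeline): ask the CLI for tsv output, which prints one address per line, index it with ordinary shell tools instead of --query, and then publish the result with a pipeline logging command:

```shell
# In the pipeline, the list would normally come from
# (note -o tsv: plain text, one address per line, no --query needed):
#   IPS=$(az network vnet list-available-ips -g MyResourceGroup -n MyVNet -o tsv)
# Hypothetical sample output, used here so the sketch is self-contained:
IPS='10.0.0.4
10.0.0.5
10.0.0.6'

# Pick the first entry with plain shell tools instead of --query:
FIRST_IP=$(printf '%s\n' "$IPS" | head -n 1)

# Publish it as a pipeline variable for later steps:
echo "##vso[task.setvariable variable=firstAvailableIp]$FIRST_IP"
```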
I'm trying to set up a situation where I drop files into a folder on one Azure VM, and they're automatically copied to another Azure VM. I was thinking about mapping a drive from the receiver to the sender and using a file watch/copy program to send the files over the mapped drive.
What's a good recommendation for a file watch/copy program that's simple and efficient, and what security setups do I need to get the two Azure boxes to "talk" to each other? They're in the same account/resource group/etc, so I'm not going outside of a virtual network or anything like that.
By default, VMs in the same virtual network can talk to each other (this is true even if default NSGs are applied). So you wouldn't have to do anything special to get that type of communication working.
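Concretely, every NSG ships with a default AllowVnetInBound rule (priority 65000) that permits this traffic; it looks roughly like this:

```json
{
  "name": "AllowVnetInBound",
  "properties": {
    "priority": 65000,
    "direction": "Inbound",
    "access": "Allow",
    "protocol": "*",
    "sourceAddressPrefix": "VirtualNetwork",
    "sourcePortRange": "*",
    "destinationAddressPrefix": "VirtualNetwork",
    "destinationPortRange": "*"
  }
}
```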
To answer the second part, you might want to consider just using built-in FCI rules to execute a short script that does the copy. See this link for a short intro to FCI rules.
Alternatively, you could use a service such as Azure Files to share files between those servers over CIFS. It really depends on why you are trying to have a copy of the file on two servers.
Hope that helps!
I have created 5 x ARM templates that combined deploy my application. Currently I have separate template/parameter files for the various assets (1 x Service Bus, 1 x SQL Server, 1 x Event Hub, etc.).
Is this OK or should I merge them into 1 x template, 1 x parameter file that deploys everything?
Pros & cons? What is best practice here?
It's always advisable to keep the template and its parameters in separate JSON files: azuredeploy.json and azuredeploy.parameters.json.
Reason:
azuredeploy.json is the file which actually holds your resources, and parameters.json holds your parameters. You can have one azuredeploy.json file and multiple parameters.json files. For example, say you have different environments, Dev/Test/Prod; then you have separate azuredeploy-Dev.parameters.json, azuredeploy-Test.parameters.json, and so on and so forth; you get the idea.
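For illustration, a hypothetical azuredeploy-Dev.parameters.json is just a thin wrapper mapping values onto the parameters declared in azuredeploy.json (the parameter names here are made up):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "environmentName": { "value": "Dev" },
    "sqlServerName": { "value": "mysqlserver-dev" },
    "serviceBusSku": { "value": "Standard" }
  }
}
```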
You can either keep separate JSON files (one for Service Bus, one for VMs, etc.), which helps when you want multiple people to work on separate sections of your resource group, or you can merge them together.
Bottom line: you are the architect; do it whichever way makes your life easy.
You should approach this from the deployment view.
First, answer yourself a few questions:
How do separate resources such as Service Bus, SQL Server, and Event Hub impact your app? Can your app run independently while all of the above are unavailable?
How often do you plan to deploy? I assume you are going to implement some sort of continuous deployment.
How often will you provision a new environment?
So, long story short:
Anything that gives your app minimum (ideally zero) downtime during deployment/disaster recovery should be considered, along with the goal that anyone off the street can take your scripts and have your app running in a reasonable time, say 30 minutes max.
I want to import a number of private virtual machines that only I can launch using the ARM REST API.
How do I do that? I cannot find instructions.
This question is a little unclear - do you have a number of already pre-defined virtual machine images that you want to start up, is it multiple copies of the same machine for a load-balanced scenario, or something else?
Also, you say "only I can launch"; what do you mean by that? By definition, when you describe your resources using Azure Resource Manager, you're essentially making a desired-state configuration that you then deploy to Azure, and it will create all those machines for you.
If it's simply a question of creating the configuration file, you can try out tools such as http://Armviz.io. Alternatively, if you already have a group of resources that you'd like to capture as a script, go here:
http://capturegroup.azurewebsites.net