To store state remotely, Terraform has a config block that lets you point to a cloud resource. This article shows how to do it in Azure.
In a nutshell, it instructs the reader to use some Az PowerShell commands to provision the resources.
Isn't the idea of Terraform to do the opposite of manual creation of resources? If this is true, what is a better way to do this? If it's not, and I've misunderstood something, please clarify what it is that I missed.
Background: I'm working on a greenfield application with nothing provisioned other than a git repo to store the tf files, and I'm attempting to provision an Azure static web site.
It's normal. To use a remote backend on azurerm you need an Azure Blob container, so it must exist before you can use it as a remote backend.
You don't have to use any PowerShell commands to create the container. Use separate TF code, the Azure portal, an SDK, or whatever you prefer to create it.
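For example, here is a minimal sketch: a small bootstrap configuration (kept in its own state, typically local) that creates the storage, and the backend block that the main configuration then points at. All names, locations and the state key below are placeholders, and exact arguments may vary with your azurerm provider version.

    # Bootstrap configuration: creates the storage that will hold the state.
    resource "azurerm_resource_group" "state" {
      name     = "rg-terraform-state"
      location = "westeurope"
    }

    resource "azurerm_storage_account" "state" {
      name                     = "tfstateacct"
      resource_group_name      = azurerm_resource_group.state.name
      location                 = azurerm_resource_group.state.location
      account_tier             = "Standard"
      account_replication_type = "LRS"
    }

    resource "azurerm_storage_container" "state" {
      name                  = "tfstate"
      storage_account_name  = azurerm_storage_account.state.name
      container_access_type = "private"
    }

    # Main configuration: points at the container created above.
    terraform {
      backend "azurerm" {
        resource_group_name  = "rg-terraform-state"
        storage_account_name = "tfstateacct"
        container_name       = "tfstate"
        key                  = "app.terraform.tfstate"
      }
    }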
Terraform scaffold for Azure:
https://github.com/whiteducksoftware/terraform-scaffold-for-azure
We are looking to "reset" a resource group, deleting everything but the necessary infrastructure in it. The problem is we are still immature in our IaC practices and a lot of resources are deployed via the portal. My initial thought is to have only the necessary infra defined in an ARM template and run it in complete mode when we want to reset. Does Terraform have a complete mode feature? From what I understand, Terraform will only manage stuff in state. Since we won't really be respecting the state after the initial deployment, the resources deployed via the portal won't be destroyed on a TF destroy. Any thoughts? Thanks!
Does Terraform have a complete mode feature?
AFAIK, no, Terraform doesn't have a complete mode like ARM templates have.
From what I understand, Terraform will only manage stuff in state. Since we won't really be respecting the state after initial deployment, the resources deployed via the portal won't be destroyed on a TF destroy.
Yes, you are correct: Terraform will only manage the resources that are in the state file.
So, by default Terraform only records the resources it deployed in the state file, but if you have also created resources from the portal, you can use Terraform's import feature, after which Terraform will be able to manage the resources created from Terraform and from the portal as well.
Reference:
Import - Terraform by HashiCorp
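For illustration, a hedged sketch of the import workflow for a resource group created in the portal (the resource group name and subscription ID are placeholders):

    # A matching resource block must already exist in your configuration:
    resource "azurerm_resource_group" "portal_created" {
      name     = "rg-created-in-portal"
      location = "westeurope"
    }

    # Then bring the existing Azure resource under Terraform management:
    # terraform import azurerm_resource_group.portal_created /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-created-in-portal

After importing, terraform plan should show no changes if the configuration matches what is actually deployed.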
No, Terraform does not have such a feature.
There is a feature request that mainly covers the "reporting" aspect, but would also allow acting upon it.
You might be able to build something around the import feature of Terraform, as suggested here. However, this would require some effort.
You could also use Terraform to deploy an ARM template in complete mode, but then you might lose most of the reasons you wanted to use Terraform in the first place.
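If you did want to go that route, a rough sketch (the names and the template file below are placeholders, not from the answer) could look like this, using the azurerm provider's ARM template deployment resource with complete mode:

    # Deploys baseline.json in Complete mode: resources in the resource group
    # that are not described by the template get removed by Azure.
    resource "azurerm_resource_group_template_deployment" "reset" {
      name                = "reset-baseline"
      resource_group_name = "rg-to-reset"
      deployment_mode     = "Complete"
      template_content    = file("${path.module}/baseline.json")
    }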
We use Azure Pipelines to implement our continuous integration pipeline. The app is deployed on virtual machines that we need to provision and configure. There are tons of libraries, patches, configurations, and applications that we need to deploy on the target VMs before we get our code onto them.
The question is: what is the best tool to provision and configure these virtual machines? I was thinking of using Ansible AWX. Basically, Azure Pipelines would make a call to the AWX API, which would then take it from there and finalize things.
There is an Azure Pipelines extension that allows me to execute a playbook: https://github.com/microsoft/azure-pipelines-extensions/blob/master/Extensions/Ansible/Src/readme.md. But I would like to use AWX instead so that my Ansible/deployment code is decoupled from my pipeline.
Any suggestions?
As far as I know, Ansible allows you to automate the deployment and configuration of resources in your environment. It could meet your needs.
As you said, Azure Pipelines supports running the playbook via the Ansible task (Ansible extension).
So I think you can complete the VM configuration and code deployment directly in the Azure pipeline.
If you want to separate these two steps, you can split them into two pipelines (VM configuration and code deployment). To avoid confusion between configuration and deployment code, you can also split them into two repos.
On the other hand, if you run the playbook in the Azure pipeline, the pipeline also supports adding tasks to change parameters in the playbook (e.g. Replace Tokens).
Here is an operations guide about using Ansible in Azure Pipelines.
By the way, if the virtual machine is an Azure VM, you could also use an ARM template to update the Azure VM resource.
Personally, I would drop the AWX requirement. It's something else to manage and maintain, and an entirely separate interface too. Instead, just do your whole pipeline in one place: Azure DevOps. Pick one or the other. Tower doesn't have built-in source control, so I recommend ADO over it, but they'll both run Ansible and they'll both do it on your own control nodes. There's no reason to take an extra step with another tool; it adds way too much complexity.
I am new to Terraform and was wondering if we can use Terraform to implement a kind of disaster recovery for Azure API manager.
I know there is disaster recovery implementation by Microsoft for API manager but I wanted to explore if I can just recreate the whole thing using Terraform.
I am able to recreate the API manager using Terraform with the same configuration/APIs etc.
The only thing that is unclear to me is how to back up and recreate the same subscriptions/products in API manager using Terraform.
For example, if someone deletes the API manager, I want to recreate it using Terraform and import all the existing products/subscriptions (keys).
Any ideas?
Similar to using ARM templates, you can use Terraform to deploy Azure APIM as well. You can refer to the azurerm provider docs for more information.
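A minimal sketch of what that can look like (the publisher details, names and the referenced resource group are placeholder assumptions, not from the answer):

    resource "azurerm_api_management" "example" {
      name                = "apim-example"
      location            = azurerm_resource_group.example.location
      resource_group_name = azurerm_resource_group.example.name
      publisher_name      = "Contoso"
      publisher_email     = "admin@contoso.com"
      sku_name            = "Developer_1"
    }

    # Products (and similarly APIs) can be declared alongside it:
    resource "azurerm_api_management_product" "example" {
      product_id            = "example-product"
      api_management_name   = azurerm_api_management.example.name
      resource_group_name   = azurerm_resource_group.example.name
      display_name          = "Example Product"
      subscription_required = true
      published             = true
    }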
But for all runtime data like users & subscriptions, you will have to consider setting up a backup/restore system utilizing the built-in feature.
After deploying APIM using Terraform, you will have to restore the runtime data separately. Also, depending on your Recovery Time Objective, you will have to take frequent backups.
PS: Logic Apps are a great way to set up automatic backups. There is an official sample that you can refer to for this.
To deploy my infrastructure, I need to deploy a VM with a custom script extension. The only purpose of the VM is to execute the script. After the script has executed, the VM should be deleted automatically.
How can this be done?
Additional information:
This is an Azure Resource Manager deployment
The deletion should work in the Azure Marketplace environment as well.
This probably means you are doing something wrong. You can use Azure Container Instances to run the script and shut down. It should work with the Marketplace as well (as far as I know you can have a custom container in Marketplace offerings).
The Marketplace only allows you to use ARM templates to deploy resources, so you cannot really do what you are asking with an ARM template. Well, you might be able to hack something like that together with nested deployments and complete mode, but I doubt that would pass Marketplace moderation.
Technically, you can make the VM delete itself as part of the script; again, not something I would advise.
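If Terraform happens to be an option for you, a rough sketch of the ACI approach mentioned above could look like this (the resource group, image and command are placeholder assumptions; for a pure ARM/Marketplace deployment the same container group would be expressed as a Microsoft.ContainerInstance resource in the template):

    resource "azurerm_container_group" "script_runner" {
      name                = "run-once-script"
      location            = azurerm_resource_group.example.location
      resource_group_name = azurerm_resource_group.example.name
      os_type             = "Linux"
      restart_policy      = "Never" # run the script once, then stop

      container {
        name     = "script"
        image    = "mcr.microsoft.com/azure-cli:latest"
        cpu      = "0.5"
        memory   = "1.0"
        commands = ["/bin/bash", "-c", "echo 'run provisioning script here'"]
      }
    }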
I have two Azure VMs running in a cloud service. They contain almost the same thing. Some TCP ports are also opened between them.
Is it possible to create a deployment package from this existing setup so that at a later time I can deploy this setup in an easy way? I.e. I want to be able to do this:
1. Create a deployment package from the existing setup*
2. Delete whole existing cloud service including VM's
3. Deploy the package from step 1 and have everything created again.
*I can save one of the VMs to my Azure storage and use it as a template for both of them if that is easier.
How to accomplish this if it is possible?
Yes, you can take what you have as a template and use it to stand up multiple silos. But in IaaS, there isn't a notion of a deployment package. There are a few things you'll need to do...
1) understand how to take an existing VM and turn it into an image
2) use PowerShell or another DevOps-style automation suite (Chef/Puppet/etc.) to define and deploy your silo.
You seem specifically interested in how to create an image, so I'd recommend the tutorial we have published on this: http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-capture-image-windows-server/ This does of course presume you're running Windows Server, but a Linux version can be found at: http://www.windowsazure.com/en-us/documentation/articles/virtual-machines-linux-capture-image/
The automation of a deployment depends on a great many things, so I'd suggest, as a starting point, familiarizing yourself with the management API: http://msdn.microsoft.com/en-us/library/windowsazure/ee460799.aspx
With the introduction of Resource Manager, you can now easily use JSON templates to deploy and redeploy resources in Azure. There are also starter templates available: https://azure.microsoft.com/en-us/documentation/templates/
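If you were doing this today with Terraform instead of raw JSON templates, a heavily simplified sketch of standing a VM up from a captured image might look like this (the captured image, network interface and all names are assumptions that would have to exist elsewhere in the configuration):

    resource "azurerm_linux_virtual_machine" "silo" {
      name                  = "silo-vm-1"
      resource_group_name   = "rg-silo"
      location              = "westeurope"
      size                  = "Standard_B2s"
      admin_username        = "azureuser"
      network_interface_ids = [azurerm_network_interface.silo.id]
      source_image_id       = azurerm_image.captured.id # the captured image

      admin_ssh_key {
        username   = "azureuser"
        public_key = file("~/.ssh/id_rsa.pub")
      }

      os_disk {
        caching              = "ReadWrite"
        storage_account_type = "Standard_LRS"
      }
    }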