Terraform use case for App Service - Azure

I have been asked to create Terraform scripts for our Azure infrastructure setup. For starters I am creating Terraform scripts for App Service. I am confused by the whole IaC paradigm and would like to know how it is done in an enterprise environment.
1) Do we need to create separate Terraform scripts for each App Service, or should we create one script and set the values as runtime variables?
2) Should Terraform run in its own pipelines, or should it run along with the application deployment pipelines? That is, before each application deployment, do we need to check for configuration drift via Terraform?
Thanks in advance !

This question is largely opinion based. As per my experience,
you can have a single script and have the values injected dynamically at run time (a minimal sketch follows below)
you can run Terraform along with your deployment pipeline
Since you have mentioned that you are confused by the IaC paradigm, this article might give more clarity: https://thorsten-hans.com/terraform-the-definitive-guide-for-azure-enthusiasts#the-terraform-lifecycle
Hope this helps.
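As a minimal sketch of the "single script, values injected at run time" idea (the variable names, SKU, and region below are placeholder assumptions, not taken from the answer above), one parameterised configuration can be reused for every App Service:

```hcl
# One parameterised App Service definition, reused for every app/environment.
provider "azurerm" {
  features {}
}

variable "app_name" {
  type = string
}

variable "resource_group_name" {
  type = string
}

variable "location" {
  type    = string
  default = "westeurope"
}

resource "azurerm_app_service_plan" "plan" {
  name                = "${var.app_name}-plan"
  location            = var.location
  resource_group_name = var.resource_group_name

  sku {
    tier = "Standard"
    size = "S1"
  }
}

resource "azurerm_app_service" "app" {
  name                = var.app_name
  location            = var.location
  resource_group_name = var.resource_group_name
  app_service_plan_id = azurerm_app_service_plan.plan.id
}
```

The per-app and per-environment values then come from tfvars files or -var flags at run time, for example terraform apply -var-file=qa.tfvars. For the second question, one common option is to run terraform plan -detailed-exitcode as a gate in the application pipeline: exit code 0 means no drift, 2 means there are pending changes to review before deploying.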

Related

Does Terraform have an Azure ARM "complete" mode?

We are looking to "reset" a resource group, deleting everything but the necessary infrastructure in it. The problem is we are still immature in our IaC practices and a lot of resources are deployed via the portal. My initial thought is to have only the necessary infra defined in an ARM template and run it in complete mode when we want to reset. Does Terraform have a complete mode feature? From what I understand, Terraform will only manage stuff in state. Since we won't really be respecting the state after the initial deployment, the resources deployed via the portal won't be destroyed on a TF destroy. Any thoughts? Thanks!
Does Terraform have a complete mode feature?
AFAIK, no, Terraform doesn't have a complete mode like ARM templates have.
From what I understand, Terraform will only manage stuff in state. Since we won't really be respecting the state after the initial deployment, the resources deployed via the portal won't be destroyed on a TF destroy.
Yes, you are correct: Terraform will only manage the resources that are in its state file.
By default Terraform only records the resources it deploys in the state file, but if some resources have been created from the portal, you can use Terraform's import feature to bring them under management too. After importing, Terraform can manage resources created through Terraform and through the portal alike.
Reference:
Import - Terraform by HashiCorp
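As a hedged illustration of that import workflow (the resource names, resource group, and subscription ID below are made-up placeholders): write a resource block that matches the portal-created resource, then import it into state.

```hcl
# Placeholder block describing an App Service that was created in the portal.
resource "azurerm_app_service" "portal_created" {
  name                = "my-portal-app"
  location            = "westeurope"
  resource_group_name = "rg-existing"
  app_service_plan_id = "/subscriptions/<sub-id>/resourceGroups/rg-existing/providers/Microsoft.Web/serverfarms/my-plan"
}

# Then bring the existing Azure resource into the state file:
#   terraform import azurerm_app_service.portal_created \
#     /subscriptions/<sub-id>/resourceGroups/rg-existing/providers/Microsoft.Web/sites/my-portal-app
```

Once the resource is in state, a terraform destroy (or a plan that no longer contains the block) treats it like any other Terraform-managed resource.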
No, Terraform does not have such a feature.
There is a feature request that mainly covers the "reporting" aspect, but would also allow acting on it.
You might be able to build something around the import feature of Terraform, as suggested here. However, this would require some effort.
You could also use Terraform to deploy an ARM template in complete mode, but then you might lose most of why you wanted to use Terraform in the first place.
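For that last option, a minimal sketch (the resource group name and template file are placeholder assumptions) of driving an ARM complete-mode deployment from Terraform:

```hcl
# Hands the "reset" to an ARM deployment in Complete mode: anything in the
# resource group that is not described in baseline.json gets deleted.
resource "azurerm_resource_group_template_deployment" "reset" {
  name                = "reset-to-baseline"
  resource_group_name = "rg-sandbox"
  deployment_mode     = "Complete"
  template_content    = file("${path.module}/baseline.json")
}
```

Complete mode gives exactly the reset behaviour asked about, but at that point the baseline infrastructure lives in the ARM template rather than in Terraform, which is the trade-off mentioned above.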

Azure Pipeline to Ansible AWX

We use Azure Pipelines to implement our continuous integration pipeline. The app is deployed on virtual machines that we need to provision and configure. There are tons of libraries, patches, configurations, and applications that we need to deploy on the target VMs before we get our code onto them.
The question is: what is the best tool to provision and configure these virtual machines? I was thinking of using Ansible AWX. Basically, the Azure pipeline would make a call to the AWX API, which would then take it from there and finalize things.
There is an Azure Pipelines extension that allows me to execute a playbook: https://github.com/microsoft/azure-pipelines-extensions/blob/master/Extensions/Ansible/Src/readme.md. But I would like to use AWX instead so that my Ansible/deployment code is decoupled from my pipeline.
Any suggestions?
As far as I know, Ansible allows you to automate the deployment and configuration of resources in your environment. It could meet your needs.
As you said, Azure Pipelines supports running a playbook via the Ansible task (Ansible extension).
So I think you can complete both the VM configuration and the code deployment directly in the Azure pipeline.
If you want to separate these two steps, you can split them into two pipelines (VM configuration and code deployment). To avoid confusion between configuration and deployment code, you can also split them into two repos.
On the other hand, if you run the playbook in the Azure pipeline, the pipeline also supports adding tasks to change parameters in the playbook (e.g. Replace Tokens).
Here is an operation guide about using Ansible in Azure Pipelines.
By the way, if the virtual machine is an Azure VM, you could also use an ARM template to update the Azure VM resource.
Personally, I would drop the AWX requirement. It's something else to manage and maintain, and an entirely separate interface too. Instead, just do your whole pipeline in one place: Azure DevOps. Pick one or the other. Tower doesn't have built-in source control, so I recommend ADO over it, but they'll both run Ansible and they'll both do it on your own control nodes. There's no reason to take an extra step with another tool; it adds way too much complexity.

How to maintain many Azure resources and deployments in one git repo?

I have a project that consists of an Azure webapp, a PostgreSQL on Azure, and multiple Azure functions for background ETL workflows. I also have a local Python package that I need to access from both the webapp and the Azure functions.
How can I structure configuration and script deployment for those resources from a single git repo?
Any suggestions or pointers to good examples or tutorials would be very appreciated.
All the Azure tutorials that I've seen are only for small and simple projects.
For now, I've hand-written an admin.py script that does e.g. the webapp and function deployments by creating a Python package, creating ZIP files for each resource and doing ZIP deployments. This is getting messy, and now I want to have QA and PROD versions, and I need to pass secrets so that the DB is reachable, so it's getting more complex. Is there either a nice way to structure this packaging / deployment, or a tool to help with it? For me, putting everything in Kubernetes is not the solution; for one thing, the DB already exists. Also, Azure DevOps is not an option, we are using GitLab CI, so eventually I want a solution that can run on CI/CD there.
Not sure if this will help completely, but here we go.
Instead of using a hand-written admin.py script, try using a YAML pipeline flow. For GitLab, there is https://docs.gitlab.com/ee/ci/yaml/ that you can use to get started. From what you've indicated, I would recommend having several job steps in your YAML pipeline that build and package your web and function apps. For deployment, you can make use of environments. Have a look at https://docs.gitlab.com/ee/ci/multi_project_pipelines.html as well, which illustrates how you can create downstream pipelines.
From a deployment standpoint, the current integration I've found between Azure and GitLab leaves me with two recommendations:
Leverage the script keyword in the YAML to continue zipping your artifacts, and use the Azure CLI (I would assume you can install the tools during the pipeline) to do the zip deploy.
Keep your code inside the GitLab repo and utilize Azure Pipelines to handle the CI/CD for you.
I hope you find this helpful.

How do I run a Terraform plan in multiple steps / phases?

I have a wonderful Terraform plan that perfectly describes my infrastructure in Google Cloud Platform. However, I have a problem: since my repository isn't perfectly private, some parts of my plan are encrypted and must be decrypted using Google Key Management Service.
This means my plan must be broken down into two Terraform phases:
Set up the Google Cloud project and create a key ring and key (after this, I encrypt the secrets and put them in a variables.tf file)
Apply the entire plan.
Does Terraform support a way to break down my plan into phases? How should I go around implementing this?
Though Terraform enables us to automate resource creation, some preliminary steps need to be done manually, like account creation, billing setup, etc. Similarly for the Google Cloud setup, the project needs to be created prior to running the Terraform scripts, since the Terraform Google provider requires the project details.
The project creation and the Terraform variables for the keys (as environment variables) can be generated through shell scripts. The shell script and the Terraform scripts can then be sequenced using a makefile.
The link below might help you create a GCP project through shell scripts.
https://medium.com/google-cloud/how-to-automate-project-creation-using-gcloud-4e71d9a70047
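As a rough sketch of what the first phase could look like in Terraform itself (the project variable, key ring, and key names here are placeholders, not from the answer above), the KMS pieces can live in their own small configuration that is applied before the main plan:

```hcl
# Phase 1: create only the KMS key ring and key, so that secrets can be
# encrypted and committed before the full configuration is applied.
provider "google" {
  project = var.project_id
}

variable "project_id" {
  type = string
}

resource "google_kms_key_ring" "secrets" {
  name     = "app-secrets"
  location = "global"
}

resource "google_kms_crypto_key" "config" {
  name     = "config-key"
  key_ring = google_kms_key_ring.secrets.id
}
```

If you prefer a single configuration instead of two directories, a similar split can be approximated with resource targeting, e.g. terraform apply -target=google_kms_crypto_key.config for the first phase and a plain terraform apply afterwards, with the makefile from the answer above sequencing the two runs.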

Deployment of Azure Data Factory version2 Pipelines

What are the different ways to deploy ADF v2 pipelines to different environments? What is the best approach for fast, repeatable, reliable deployments of pipelines?
Thanks in advance.
Currently in v2 there aren't really any best practices to follow, as the development tools are still in private preview. But as somebody with access to the new dev UI, I can offer assurances that the things you seek are coming. Probably later this month, but I'm guessing.
With regard to repeatability and automation of your deployments, you have two options:
Script the deployments using PowerShell. This works like the deployment of Data Factory v1, but you'll have to use the v2 cmdlets, and you'll have to add triggers. You can use PowerShell to parameterize connections to linked services and other environment-specific values.
Deploy using ARM templates and run them from a PowerShell / TFS deployment. You can find an example here. In this case it is possible to use ARM template parameters to parameterize connections to linked services and other environment-specific values.
