Terraform: Multiple providers, each with its own statefile

I know it's possible to combine multiple providers in a single Terraform project.
Would it be possible, though, to declare a different statefile per provider? In our use case we will be deploying infrastructure with one part in the client's cloud provider account and the other part in our own cloud provider account.
We'd like to keep the statefiles separate (the client's TF state vs. ours), in order to allow smoother future migration of either our part of the infrastructure or the client's part.
We also know that this can be achieved with Terragrunt on top of Terraform, but for the moment we'd prefer to avoid introducing a new tool into our stack. Hence we're looking for a TF-only solution (if one exists).

The simplest solution would be to use separate folders for your infrastructure and your client's infrastructure.
Is there a specific reason why you would want to keep them in one folder? Even if you need to share some values between them, you can easily read them using the terraform_remote_state data source.
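For illustration, a minimal sketch of that pattern (all names and backend settings here are assumptions, not prescriptions). The client's configuration keeps its own backend and exposes whatever values need to be shared as outputs:

```hcl
# client/ configuration: its own state, stored in the client's account
terraform {
  backend "azurerm" {
    resource_group_name  = "client-tfstate-rg"
    storage_account_name = "clienttfstate"
    container_name       = "tfstate"
    key                  = "client.terraform.tfstate"
  }
}

# Expose a value the other configuration needs
# (assumes an azurerm_virtual_network "main" defined in this configuration)
output "vnet_id" {
  value = azurerm_virtual_network.main.id
}
```

Your configuration uses a separate backend and reads the client's outputs through terraform_remote_state:

```hcl
# ours/ configuration: a separate state file in your own account
data "terraform_remote_state" "client" {
  backend = "azurerm"
  config = {
    resource_group_name  = "client-tfstate-rg"
    storage_account_name = "clienttfstate"
    container_name       = "tfstate"
    key                  = "client.terraform.tfstate"
  }
}

# Shared values are then available as, e.g.:
# data.terraform_remote_state.client.outputs.vnet_id
```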

Related

Creating automated snapshots of Azure Resources and their dependencies to deploy at a later date

I'm currently busy with an internship, in which I need to create a program that automatically creates "snapshots" of the current state of Azure resources (and sometimes their dependencies) so they can be deployed to another environment, e.g. Acceptance -> Production. These snapshots must then be deployed to the new environment at a later date coordinated with the client.
A solution can consist of more than 100 Azure resources, ranging from API Management instances to Logic Apps, Cosmos DBs, etc. When a customer accepts or says "OK" to a few resources (i.e. a part of the total solution), a snapshot needs to be made of those resources in the specific state they were in when the client said OK. That means I also have to snapshot the dependencies of each such resource (a Logic App can depend on a Cosmos DB, Key Vault, etc.).
And I can't just keep a reference to the resource in the Acceptance environment; I need to bring that dependency over to Production as well, since another developer might continue working on said dependency, which might break things.
I am a bit at a loss as to which direction to take here. I don't have much experience with ARM (templates) and I have been making prototypes for a month now.
I first tried to generate my own ARM (and Bicep) files by gathering information from the Azure REST API, but I soon discovered this is not viable, because I cannot extract all of the information needed to create such a file from that API.
I then looked into modifying the ARM files generated by Azure itself. While this is an option, these files contain a lot of information that I neither need nor want to transfer to another environment. It is also very hard to determine which parts of a generated ARM file must be deleted, updated, copied, or left alone. And then I would still need to recursively fetch the ARM templates of the dependencies and process those in an automated way as well.
Is modifying existing ARM templates the best route to go here? Or does a similar product already exist which might help achieve my goal?
Thank you!!
In this case I would not modify exported ARM templates; I would instead take an infrastructure-as-code approach: create ARM templates that are as granular as possible (at minimum, one template per resource), store that infrastructure code in a source repository, and version it if required so it can be used across different environments. The reason for recommending one template per resource is to handle the dependencies in a complex environment. This might look like a big activity for the first-time implementation, but once the templates are integrated into a continuous integration and continuous deployment (CI/CD) tool such as Azure DevOps, all of it can be automated with release pipelines for fast and reliable application and infrastructure updates. For more information in this regard, please refer to this and this Azure document.

How to Organize Directory Structure for Terraform

As a novice, I am trying to set up some structure for Terraform projects. Our team will be using Terraform to build infrastructure for AWS, Azure, and containers. I will be building infra both for dedicated application teams and for individual cloud services. My challenge is to set up a consistent directory structure that can be used for all types of cloud and for the dedicated application teams. How can I set up a standard directory structure, and how can I manage state files for the application teams and the individual services?
Thanks for all your knowledge and lessons on this.
I can see one generic and one specific question here!
Generic question: "how can I set up a standard directory structure?"
This purely depends on what your infrastructure looks like and which cloud services it uses. I would recommend starting with this link to get an idea.
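Purely as an illustrative starting point (directory names here are assumptions, not a standard), many teams separate clouds and environments into their own root configurations, each with its own state, and factor shared code into modules:

```
terraform/
├── modules/              # reusable building blocks shared by all roots
│   ├── network/
│   └── compute/
├── aws/
│   ├── prod/             # one root configuration (and state) per env
│   └── staging/
├── azure/
│   ├── prod/
│   └── staging/
└── teams/
    └── app-team-a/       # roots owned by dedicated application teams
```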
Specific question: "how can I manage state files for application teams and individual services?"
As you might be aware, your state file holds the details of your infrastructure's state in its entirety and must be stored safely. Moreover, since multiple teams are going to use the Terraform code to update the infrastructure, store the state file in an S3 bucket (or the equivalent object storage service in another cloud) so it is fetched every time someone runs terraform plan or terraform apply. Reference
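For example, a minimal backend sketch for AWS (bucket, key, and table names are assumptions; the DynamoDB table is optional but gives you state locking so two teams can't apply at once):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-company-terraform-state"   # assumed bucket name
    key            = "app-team-a/terraform.tfstate" # one key per team/service
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"              # assumed table, for locking
    encrypt        = true
  }
}
```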

What is the recommended way to store environment variables in Azure Functions for different environments?

Currently, I'm storing all key/value pairs in Application Settings, but I'm not happy with this approach. What is the recommended way to store settings for dev, test, stage, and prod? I need to make sure that prod settings are not visible to developers. Is there a way to create 4 different JSON files and define access permissions on them? Or do I need to create 4 different Function apps (or subscriptions)?
Azure App Configuration is a relatively new service that sounds like it could help here: it manages config values centrally, with more control than individual instances' App Settings.
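As a hedged sketch of how that could look if provisioned with Terraform's azurerm provider (all names are assumptions): one App Configuration store per environment, so prod values live in a store that developers simply have no RBAC access to:

```hcl
# One store per environment; grant developers RBAC access to the dev
# store only, keeping prod values out of their reach.
resource "azurerm_app_configuration" "dev" {
  name                = "example-appconfig-dev"   # assumed name
  resource_group_name = "example-rg"
  location            = "westeurope"
  sku                 = "standard"
}

resource "azurerm_app_configuration_key" "db_conn" {
  configuration_store_id = azurerm_app_configuration.dev.id
  key                    = "Database:ConnectionString"
  value                  = "Server=dev-db;..."    # placeholder value
}
```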
Beyond that, you could build segregation by limiting devs to pushing code only, without access to the hosting environment (Azure portal, etc.). The layer in between would be something like Azure DevOps or GitHub Actions, which has access to Azure, while devs are limited to pushing code that triggers a deployment.
It's also worth remembering that devs ultimately have a lot of access by virtue of writing the code. If they want to get at runtime data, they can, somehow. If you consider the devs untrusted, you may have bigger problems; if it's just a matter of preventing mistakes, a solid DevOps process is the key.

Terraform conditional creation

I have a Terraform file that creates a resource group, storage account, storage shares, database, and VMs on Azure. My proposed use case is that in production, once the resource group, storage account, storage shares, and database are created, they should stay in place. However, there are cases where the VMs may need to be destroyed and re-created with different specs. I know that I can run the file once to create everything, then taint the VMs and re-create them with an apply, but that doesn't seem like the ideal method.
In regular use, changes to infrastructure should be made by changing the configuration, rather than by running imperative commands like terraform taint. If you change something about the configuration of a VM that requires it to be re-created, the underlying provider should produce a plan to replace that object automatically, leaving the others unchanged.
When you have different objects that need to change on different rhythms -- particularly when some of them are stateful objects like databases -- a good way to model this in Terraform is to decompose the problem into multiple separate Terraform configurations. You can use data sources in one configuration to retrieve information about objects created in another.
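For instance, a sketch of what the VM-side configuration might look like (resource and subnet names are assumptions): the long-lived objects are created by another configuration, and this one only looks them up:

```hcl
# Look up objects managed by the other (stable) configuration
data "azurerm_resource_group" "core" {
  name = "example-core-rg"   # assumed, created by the other configuration
}

data "azurerm_subnet" "vms" {
  name                 = "vm-subnet"
  virtual_network_name = "core-vnet"
  resource_group_name  = data.azurerm_resource_group.core.name
}

# VM resources defined below can be freely changed or replaced;
# an apply here can never touch the database or storage configuration.
```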
Splitting into at least two separate configurations reduces the risk of running terraform apply on either of them, because the scope of actions it can take is limited to the objects managed in that particular configuration. Although in principle you can carefully review a Terraform plan to spot a change that would be harmful, splitting into multiple configurations is extra insurance many teams use to reduce the possible impact of human error.

How can you prevent GCP console/cloud shell changes by "Owners" conflicting with the terraform code?

I understand the objective of deploying infrastructure as code and appreciate the benefit of being able to enforce peer review of code before deployment. From a security perspective, this technical control assures me that changes made to an environment are peer-reviewed.
However, wouldn't it still be possible for someone with the relevant permissions (e.g. the Owner role) to make changes directly in the console/Cloud Shell? Such a change would not be peer-reviewed.
I just want to check what controls, if any, there are to prevent this. Of course, I understand that one control would be to restrict IAM permissions at the project or org level so that only the Terraform service account can make changes, but I want to understand whether there are any other controls.
Nothing will stop a user from creating/updating/deleting a resource manually (by manually, I mean via the Console or Cloud Shell) if they have the IAM permissions to do so.
In the case of a manual resource update: if the resource is managed by Terraform, running terraform plan will alert you that a modification has been made, because Terraform will see a difference between the resource description in your .tf files and reality. If you apply the changes, it will revert the modifications made manually by the user.
Running periodic checks to verify whether modifications have been made outside of Terraform (on resources managed by Terraform) could be a good way to be alerted when someone changes something manually.
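A minimal sketch of such a check, run on a schedule from CI (terraform plan's -detailed-exitcode flag exits with 2 when the plan contains changes):

```sh
#!/bin/sh
terraform plan -detailed-exitcode > /dev/null
case $? in
  0) echo "No drift detected." ;;
  2) echo "Drift detected: managed resources were modified outside Terraform." ;;
  *) echo "terraform plan failed." >&2; exit 1 ;;
esac
```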
But in the case of resources newly created outside of Terraform: unless a resource is imported into Terraform after creation (terraform import), you'll never know that it has been created, and you won't be able to track any modifications to it.
The only way to prevent resource creation is to restrict IAM permissions. For example, if nobody except the Terraform service account has the storage.buckets.create permission, then nobody else will be able to create a bucket. The same applies to resource updates.
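As a sketch of that restriction expressed in Terraform itself (project ID, service account, and role names are assumptions):

```hcl
# Only the Terraform service account gets bucket create/update rights.
resource "google_project_iam_member" "terraform_storage_admin" {
  project = "example-project"   # assumed project ID
  role    = "roles/storage.admin"
  member  = "serviceAccount:terraform@example-project.iam.gserviceaccount.com"
}

# Humans get a narrow custom role: read/list buckets, but no create.
resource "google_project_iam_custom_role" "bucket_viewer" {
  project     = "example-project"
  role_id     = "bucketViewer"
  title       = "Bucket Viewer"
  permissions = ["storage.buckets.get", "storage.buckets.list"]
}
```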
If you want all your resources to be managed by Terraform, remove the create/update IAM permissions from all users except the Terraform service account. But be aware that:
you can't create/update all GCP resources with Terraform. Even though the Terraform providers grow fast, there will always be some delay between the release of a new GCP product and its implementation in the Terraform GCP provider. Some time ago, I remember waiting for the Cloud Composer resource in Terraform, which appeared in version 1.18.0 on 2018/09/17, even though Cloud Composer had been available since 2018/05/01. If I had chosen to create resources only with Terraform, I would have had to wait 4 months before starting to use Cloud Composer (one example among others)
you may sometimes want to create resources outside of Terraform, for testing purposes for example. If Terraform is required for creating/updating every resource across your organization, this won't be possible. Think about non-technical users who want to create some temporary resources to run tests: they probably won't learn how to use Terraform, so they'll either give up or ask someone to create the resources for them. As your number of users increases, this becomes cumbersome
reasoning ad absurdum: do you want to manage every available resource with Terraform? If so, you would also have to manage Storage objects with Terraform, because there is a Terraform resource google_storage_bucket_object. Except in some very specific cases, you don't want to manage that kind of resource with Terraform (in the case of Storage objects, think of huge files)
In conclusion, managing all the resources across your organization with Terraform, and allowing only the Terraform service account to create/update/delete resources, is definitely a goal to aim for and should be done as much as you can, but in reality it is not always completely possible. Critical resources must be protected, and so the IAM permissions for updating/deleting them must be restricted. Also, the Owner role is not the only one that allows creating/updating/deleting resources. You will have to be very careful about the roles you give to your users to ensure that they don't have such permissions, and you will probably need to rely on custom roles, because predefined roles are often too broad.