How to create a file with a timestamp in `terraform`?

I use the below configuration to generate a filename with a timestamp, which will be used in many different places:
variable "s3-key" {
default = "deploy-${timestamp()}.zip"
}
but got an Error: Function calls not allowed error. How can I use a timestamp for a variable?

Variable defaults in particular are constant values, but local values allow for arbitrary expressions derived from variables:
variable "override_s3_key" {
default = ""
}
locals {
s3_key = var.override_s3_key != "" ? var.override_s3_key : "deploy-${timestamp()}.zip"
}
You can then use local.s3_key elsewhere in the configuration to access this derived value.
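For example, here is a minimal sketch of consuming the derived key from a resource, assuming a hypothetical aws_s3_object upload (the bucket name and file are made up):
resource "aws_s3_object" "deploy_package" {
  # Hypothetical upload that consumes the derived key.
  bucket = "my-deploy-bucket"
  key    = local.s3_key
  source = "deploy.zip"

  # Note: timestamp() changes on every run, so this object would be
  # replaced on each terraform apply.
}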
With that said, Terraform is intended for creating long-running infrastructure objects, so including timestamps is often (but not always!) indicative of a design problem. In this particular case, it looks like you're using Terraform to create application artifacts for deployment, which is something Terraform can do, but it is often not the best tool for this sort of job.
Instead, consider splitting your build and deploy into two separate steps, where the build step is implemented using any separate tool of your choice -- possibly even just a shell script -- and produces a versioned (or timestamped) artifact in S3. Then you can parameterize your Terraform configuration with that version or timestamp to implement the "deploy" step:
variable "artifact_version" {}
locals {
artifact_s3_key = "deploy-${var.artifact_version}.zip"
}
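The version can then be supplied at apply time (for example with -var="artifact_version=20240510"), and the resulting key consumed by whatever "deploy" resource you use. As a sketch only, if the artifact happened to be a Lambda package it might look something like this (the bucket, role, and runtime are hypothetical):
resource "aws_lambda_function" "app" {
  # Deploy step: selects an existing, pre-built artifact by version.
  function_name = "app"
  s3_bucket     = "my-artifact-bucket"
  s3_key        = local.artifact_s3_key
  role          = aws_iam_role.app.arn # hypothetical IAM role defined elsewhere
  handler       = "index.handler"
  runtime       = "nodejs18.x"
}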
An advantage of this separation is that, by keeping the versioned artifacts separate from the long-lived Terraform objects, you will by default retain the historical artifacts. If you deploy and find a problem, you can switch back to a known-good existing artifact just by re-running the deploy step (Terraform) with an older artifact version. If you instead manage the artifacts directly with Terraform, Terraform will delete your old artifact before creating a new one, because that's Terraform's intended usage model.
There's more detail on this model in the HashiCorp guide Serverless Applications with AWS Lambda and API Gateway. You didn't say that the .zip file here is destined for Lambda, but a similar principle applies for any versioned artifact. This is analogous to the workflow for other deployment models, such as building a separate Docker image or AMI for each release; in each case, Terraform is better employed for the process of selecting an existing artifact built by some other tool rather than for creating those artifacts itself.

Related

Including Terraform Configuration from Another Gitlab Project

I have a couple of apps that use the same GCP project. There are dev, stage, and prod projects, but they're basically the same, apart from project IDs and project numbers. I would like to have a repo in GitLab, like config, where I keep these IDs in a dev.tfvars, stage.tfvars, and prod.tfvars. Currently each app's repo has a config/{env}.tfvars directory, which is really repetitive.
Googling for importing or including Terraform resources just gets me results about Terraform state, so it hasn't been fruitful.
I've considered:
Using a group-level GitLab variable as a key=val env file, having my gitlab-ci YAML source the correct environment's file, and then including what I need using -var="key=value" in my plan and apply commands.
Creating a Terraform module that either uses TF_WORKSPACE or an input prop to return the correct variables (a rough sketch of this idea is shown below). I think this may be possible, but I'm new to TF, so I'm not sure how to return data back from a module, or whether this type of "side-effects only" solution is an abusive workaround for something there's a better way to achieve.
Is there any way to include terraform variables from another Gitlab project?
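For the second option above, a rough sketch of a module that returns per-environment values via outputs might look like this (all project IDs, numbers, and paths are made up for illustration):
# modules/env_config/main.tf (hypothetical shared module, e.g. pulled
# from the central "config" repo via a git source)
variable "environment" {
  default = ""
}

locals {
  # Fall back to the current workspace if no environment is passed in.
  env = var.environment != "" ? var.environment : terraform.workspace

  settings = {
    dev   = { project_id = "my-app-dev",   project_number = "111111111111" }
    stage = { project_id = "my-app-stage", project_number = "222222222222" }
    prod  = { project_id = "my-app-prod",  project_number = "333333333333" }
  }
}

output "project_id" {
  value = local.settings[local.env].project_id
}

output "project_number" {
  value = local.settings[local.env].project_number
}
A calling configuration would then read module.env_config.project_id and so on, though whether that is cleaner than plain tfvars files is a matter of taste.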

Custom environments in terraform on aws

I need to create a setup with Terraform and AWS where I could have the traditional Dev, Stage and Prod environments. I also need to allow the creation of custom copies of individual services in those environments for developers to work on. I will use workspaces for Dev, Stage and Prod, but can't figure out how to achieve these custom copies of services in environments.
An example would be a Lambda in the dev environment that would be called "test-lambda-dev". It would be deployed with the dev environment. The developer with initials "de" would need to work on some changes in this lambda's code and would need a custom version of it deployed alongside the regular one. So he would need to create a "test-lambda-dev-de".
I have tried to solve it by introducing a suffix after the resource's name that would normally be empty, but someone could provide a value signalling that a custom version is needed. Terraform destroyed the regular resource and replaced it with the custom one, so it has not worked as intended.
Is there a way to get it to work? If not how can developers work on resources without temporarily breaking the environment?
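For reference, a minimal sketch of the suffix idea described above (the resource and variable names are made up); because the suffix changes the existing function's name in place, Terraform plans a destroy-and-recreate rather than an additional copy:
variable "developer_suffix" {
  # Normally empty; a developer would set e.g. "-de" for a personal copy.
  default = ""
}

resource "aws_lambda_function" "test_lambda" {
  # Changing function_name on this same resource is what makes Terraform
  # replace the regular deployment instead of adding a second one.
  function_name = "test-lambda-${terraform.workspace}${var.developer_suffix}"
  role          = aws_iam_role.lambda.arn # hypothetical IAM role
  handler       = "index.handler"
  runtime       = "nodejs18.x"
  filename      = "deploy.zip"
}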

Using variables when specifying location of the module in Terraform

I am trying to run this code:
locals {
  terraform_modules_git    = "git::ssh://....#vs-ssh.visualstudio.com/v3/...../terraform-modules"
  terraform_modules_module = "resource_group?ref=v15.0.0"
}

module "MyModuleCall" {
  source = "${local.terraform_modules_git}/${local.terraform_modules_module}"
}
My goal was to consolidate all tag references in one place and not duplicate the long string with the name of the repo containing all my modules numerous times.
And I get this error:
Error: Variables not allowed

  on main.tf line 12, in module "MyModuleCall":
  12: source = "${local.terraform_modules_git}/${local.terraform_modules_module}"

Variables may not be used here.
Does anybody know why they have put this limitation? What is wrong with using variables?
Does anybody see any workaround?
You can't dynamically generate source. You must explicitly hardcode it, as explained in the docs:
This value must be a literal string with no template sequences; arbitrary expressions are not allowed.
Sadly, I'm not aware of any workaround, except pre-processing templates before using them. The pre-processing would just find and replace the source with what you want.
Dependencies in Terraform are handled statically before executing the program, because the runtime needs to have access to all of the involved code (in Terraform's case, both modules and providers) before it can create a runtime context in order to execute any code. This is similar to most other programming languages, where you'd typically install dependencies using a separate command like pip install or npm install or go get before you can run the main program. In Terraform's case, the dependency installer is terraform init, and "running the program" means running terraform plan or terraform apply.
For this reason, Terraform cannot and does not permit dynamically-constructed module or provider source addresses. If you need to abstract the physical location and access method of your modules from the address specified in calling modules then one option is to use a module registry to tell Terraform how to map a local address like yourcompany.example.com/yourteam/resource-group/azure to the more complex Git URL that Terraform will actually use to fetch it.
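For example, with such a registry entry in place, the calling module can use a fixed, literal source address (this address and version are illustrative, matching the example above):
module "resource_group" {
  source  = "yourcompany.example.com/yourteam/resource-group/azure"
  version = "15.0.0"
}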
However, in practice most teams prefer to specify their Git URLs directly because it results in a simpler overall system, albeit at the expense of it being harder to move your modules to a new location at a later date. A compromise between these two is to use a hosted service which provides Terraform Registry services, such as Terraform Cloud, but of course that comes at the expense of introducing another possible point of failure into your overall system.

How to update ADF Pipeline level parameters during CICD

Being a novice to ADF CI/CD, I am currently exploring how we can update the pipeline-scoped parameters when we deploy the pipeline from one environment to another.
Here is the detailed scenario -
I have a simple ADF pipeline with a copy activity moving files from one blob container to another
Example - below there is a copy activity, and the pipeline has two parameters named:
1- SourceBlobContainer
2- SinkBlobContainer
with their default values.
Here is how the dataset is configured to consume these pipeline-scoped parameters.
Since this is the development environment, the default values are OK. But the Test environment will have containers with altogether different names (like "TestSourceBlob" & "TestSinkBlob").
Having said that, when CI/CD happens, the process should handle this by updating the default values of these parameters.
Reading the documents, I found nothing that handles such a use case.
Here are some links which I referred to -
http://datanrg.blogspot.com/2019/02/continuous-integration-and-delivery.html
https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment
Thoughts on how to handle this will be much appreciated. :-)
There is another approach, as opposed to the ARM templates located in the 'ADF_Publish' branch.
Many companies leverage that workaround, and it works great.
I have spent several days and built a brand new PowerShell module to publish the whole Azure Data Factory code from your master branch or directly from your local machine. The module resolves all the pains that have existed so far in any other solution, including:
replacing any property in a JSON file (ADF object),
deploying objects in an appropriate order,
deploying only a part of the objects,
deleting objects that no longer exist in the source,
stopping/starting triggers, etc.
The module is publicly available in PS Gallery: azure.datafactory.tools
Source code and full documentation are in GitHub here.
Let me know if you have any questions or concerns.
There is a "new" way to do ci/cd for ADF that should handle this exact use case. What I typically do is add global parameters and then reference those everywhere (in your case from the pipeline parameters). Then in your build you can override the global parameters with the values that you want. Here are some links to references that I used to get this working.
The "new" ci/cd method following something like what is outlined here Azure Data Factory CI-CD made simple: Building and deploying ARM templates with Azure DevOps YAML Pipelines. If you have followed this, something like this should work in your yaml:
overrideParameters: '-dataFactory_properties_globalParameters_environment_value "new value here"'
Here is an article that goes into more detail on the overrideParameters: ADF Release - Set global params during deployment
Here is a reference on global parameters and how to get them exposed to your CI/CD pipeline: Global parameters in Azure Data Factory

Ideal terraform workspace project structure

I'd like to set up Terraform to manage dev/stage/prod environments. The infrastructure is the same in all environments, but there are differences in the variables in every environment.
What does an ideal Terraform project structure look like now that workspaces have been introduced in Terraform 0.10? How do I reference the workspace when naming/tagging infrastructure?
I wouldn't recommend using workspaces (previously 'environments') for static environments because they add a fair bit of complexity and are harder to keep track of.
You could get away with using a single folder structure for all environments, using workspaces to separate the environments, and then using conditional values based on the workspace to set the differences. In practice (especially with more than 2 environments, which leads to nested ternary statements), you'll probably find this difficult to manage.
Instead, I'd still advocate for separate folders for every static environment, using symlinks to keep all your .tf files the same across all environments and a terraform.tfvars file in each environment to provide any differences.
I would recommend workspaces for dynamic environments such as short lived review/lab environments as this allows for a lot of flexibility. I'm currently using them to create review environments in Gitlab CI so every branch can have an optionally deployed review environment that can be used for manual integration or exploratory testing.
In the old world you might have passed in the var 'environment' when running terraform, which you would interpolate in your .tf files as "${var.environment}".
When using workspaces, there is no need to pass in an environment variable; you just make sure you are in the correct workspace and then interpolate inside your .tf files with "${terraform.workspace}".
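For example, to cover the naming/tagging part of the question, a resource name or tag can embed the workspace directly (the resource type and names here are just illustrative):
resource "aws_s3_bucket" "artifacts" {
  # Becomes "myapp-artifacts-dev", "myapp-artifacts-preprod", etc.
  bucket = "myapp-artifacts-${terraform.workspace}"

  tags = {
    Environment = "${terraform.workspace}"
  }
}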
As for how you'd manage all of the variables, I'd recommend using a variable map, like so:
variable "vpc_cidr" {
type = "map"
default = {
dev = "172.0.0.0/24"
preprod = "172.0.0.0/24"
prod = "172.0.0.0/24"
}
}
This would then be referenced in an aws_vpc resource using a lookup:
"${lookup(var.vpc_cidr, terraform.workspace)}"
The process of creating and selecting workspaces is pretty easy:
terraform workspace
Usage: terraform workspace

  Create, change and delete Terraform workspaces.

Subcommands:

    show      Show the current workspace name.
    list      List workspaces.
    select    Select a workspace.
    new       Create a new workspace.
    delete    Delete an existing workspace.
So to create a new workspace for pre-production, you'd do the following:
terraform workspace new preprod
If you then ran a plan, you'd see that there should be no resources. What this does in the backend is create a new folder to manage the state for 'preprod'.
