As part of my requirement I am trying to create an AWS CodePipeline with Terraform and trigger it manually once it has been created. Unfortunately, the pipeline is triggered automatically as soon as Terraform creates it.
I am trying to find a way to stop the CodePipeline from triggering automatically as soon as it is deployed from Terraform.
I was trying one of the two approaches below, but could not find anything relevant for either:
Find a parameter which sets the automatic trigger to false.
Enable "DisableInboundStageTransitions" between the Source stage and the first stage, so the pipeline stages will not run even if the source has started.
I see "DisableInboundStageTransitions" is available in CloudFormation but not in Terraform.
It Would be really great if someone can let me know that is it even possible to do the above from Terraform?
Is there any workaround to Achieve the same?
Thanks in Advance.
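I'm not aware of a first-class Terraform argument that prevents the initial automatic run, but one possible workaround (only a sketch, not a documented feature) is to have Terraform disable the inbound transition into the stage after Source as soon as the pipeline exists, using the AWS CLI from a local-exec provisioner. The resource name aws_codepipeline.example and the stage name Deploy below are assumptions, and the AWS CLI must be available wherever Terraform runs:

resource "null_resource" "disable_inbound_transition" {
  # Re-run whenever the pipeline is (re)created.
  triggers = {
    pipeline = aws_codepipeline.example.id
  }

  provisioner "local-exec" {
    command = <<-EOT
      aws codepipeline disable-stage-transition \
        --pipeline-name ${aws_codepipeline.example.name} \
        --stage-name Deploy \
        --transition-type Inbound \
        --reason "Disabled until manually released"
    EOT
  }
}

This does not stop the Source stage itself from running once, but it keeps the downstream stages from executing until the transition is re-enabled (in the console or with aws codepipeline enable-stage-transition).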
Learning Terraform, and in one of the tutorials for Terraform with Azure a requirement was to log in with the az client. My understanding is that this was to create a Service Principal.
I was trying this with GitHub Actions, and my assumption was that the properties obtained for the Service Principal would be enough. When I ran terraform plan everything worked out fine.
However, when I tried to do terraform apply it failed until I explicitly added an az login step to the GitHub workflow job.
What am I missing here? Does terraform plan only compare the new configuration file against the state file, not the actual account? Or does it verify the state against the resource-group/subscription in Azure?
I was a little confused by the documentation on terraform plan.
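For what it's worth, the azurerm provider can authenticate with the Service Principal credentials alone, without an az login step, as long as the job exposes them. A minimal sketch, assuming the credentials are stored as GitHub secrets and exported as the ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID and ARM_SUBSCRIPTION_ID environment variables the provider reads:

# Reads the Service Principal from the ARM_* environment variables;
# the commented arguments show the equivalent explicit configuration.
provider "azurerm" {
  features {}

  # client_id       = var.client_id
  # client_secret   = var.client_secret
  # tenant_id       = var.tenant_id
  # subscription_id = var.subscription_id
}

As for plan vs. apply: by default terraform plan also refreshes the state against Azure, so both commands normally need working credentials; the difference you saw is more likely about which credentials each step could actually find.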
We're in the middle of working on a small proof-of-concept project which will deploy infrastructure to Azure using Terraform. Our Terraform source is held in GitHub and we're using Terraform Cloud as the backend to store our state, secrets etc.
Within Terraform Cloud we've created two workspaces, one for the staging environment and one for the production environment.
So far we've used the guide in the Terraform docs to develop a GitHub Action which triggers on a push to the main branch and deploys our infrastructure to the staging environment. This all works great and we can see our state held in Terraform Cloud.
The next hurdle is to promote our changes into the production environment.
Unfortunately we've hit a brick wall trying to figure out how to dynamically change the Terraform Cloud workspace within the GitHub Action so it operates on production rather than staging. I've spent most of the day looking into this with little joy.
For reference, the Terraform backend is currently configured as follows:
terraform {
  backend "remote" {
    organization = "terraform-organisation-name"

    workspaces {
      name = "staging-workspace-name"
    }
  }
}
The action itself does an init and then an apply.
Obviously, with the workspace name hardcoded this will only work on staging. Ultimately the question comes down to: how do we parameterise or dynamically change the Terraform Cloud workspace from the command line?
I feel I'm missing something fundamental and any help or suggestions would be greatly appreciated.
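One option worth trying (a sketch rather than a verified setup): the remote backend also accepts a workspace prefix instead of a fixed name, and the concrete workspace is then selected at runtime, for example via the TF_WORKSPACE environment variable or terraform workspace select. The prefix app- and the workspace names app-staging/app-production below are assumptions:

terraform {
  backend "remote" {
    organization = "terraform-organisation-name"

    workspaces {
      # Matches the Terraform Cloud workspaces "app-staging" and
      # "app-production"; the pipeline picks one of them at run time.
      prefix = "app-"
    }
  }
}

The GitHub Action can then run TF_WORKSPACE=staging terraform init for staging and TF_WORKSPACE=production terraform init for production (or use terraform workspace select after init), so the same configuration targets either environment.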
Terraform will try to deploy all resources defined in the Terraform configuration files. There are a lot of resources in my application, like Lambda, API Gateway, ECS etc. I wonder whether I can deploy only a single resource. For example, I want to deploy one Lambda function only and don't want to apply the other resources. How can I do that in Terraform?
terraform apply -target=aws_lambda_function.test_function
More information on the usage of -target can be found in the terraform apply documentation.
When using Terraform to deploy our AWS infra, if we use the AWS console to redeploy an API Gateway deployment, it creates a new deployment ID. Afterwards, when running terraform apply again, Terraform lists aws_api_gateway_stage.deployment_id as an in-place update. Running terraform refresh does not update the state; it continues to show this as a delta. Am I missing something?
Terraform computes the delta based on what it already has saved in the backend and what exists in AWS at the moment of the apply. You can use Terraform to apply changes to AWS, but not the other way around, i.e. you cannot change your AWS configuration by hand and expect Terraform to pull those changes into its state.
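If the deployment really is redeployed outside Terraform on purpose and you just want Terraform to stop reporting that attribute, one option (a sketch beyond the answer above; the resource names are assumptions) is to ignore changes to deployment_id on the stage:

resource "aws_api_gateway_stage" "example" {
  stage_name    = "prod"
  rest_api_id   = aws_api_gateway_rest_api.example.id
  deployment_id = aws_api_gateway_deployment.example.id

  lifecycle {
    # Keep whatever deployment the stage currently points at, even if it
    # was created from the console.
    ignore_changes = [deployment_id]
  }
}

Otherwise the cleaner fix is to trigger redeployments through Terraform (a new aws_api_gateway_deployment) rather than from the console.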
I am working on a project that will be deployed to my client's Microsoft Azure, so I am currently testing Terraform to assist me when the time comes.
The goal is to create an Azure Function with Terraform that will trigger on blob storage input data.
My question is how to add the Azure Function's JavaScript/C# code to the Terraform script so it is deployed automatically.
I checked the Terraform docs, but they weren't of much help:
https://www.terraform.io/docs/providers/azurerm/r/function_app.html
Any ideas?
Terraform doesn't handle pushing code to Azure resources; that's usually done in a following step in the pipeline (e.g. 1. run Terraform, 2. deploy the code).
However, the Azure Function App does have the ability to connect directly to your repo, and the Terraform azurerm_function_app resource exposes the source_control property.
Terraform's azurerm_function_app documentation
So with Terraform you can configure the function app to pull the code directly from the repo when a change is detected.
Microsoft's Azure Function Continuous Deployment documentation
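A rough sketch of that setup is below. The source_control arguments (repo_url, branch) are an assumption based on the related App Service resources, and the referenced resources (azurerm_resource_group.example, azurerm_app_service_plan.example, azurerm_storage_account.example) are placeholders, so check the azurerm_function_app documentation for the exact schema your provider version supports:

resource "azurerm_function_app" "example" {
  name                       = "example-function-app"
  location                   = azurerm_resource_group.example.location
  resource_group_name        = azurerm_resource_group.example.name
  app_service_plan_id        = azurerm_app_service_plan.example.id
  storage_account_name       = azurerm_storage_account.example.name
  storage_account_access_key = azurerm_storage_account.example.primary_access_key

  # Pull the function code straight from the repository (assumed schema).
  source_control {
    repo_url = "https://github.com/your-org/your-function-repo"
    branch   = "main"
  }
}

With something like this in place, pushes to the configured branch are picked up by the Function App's continuous deployment rather than by Terraform itself.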