Is there any way I can use Terraform to copy folders from a local server to a Google Cloud Storage bucket?
I have tried the file provisioner, but it only works for VM instances, not Cloud Storage.
It appears that there is a resource called google_storage_bucket_object which will copy a local file to a GCS object.
See here for details:
https://www.terraform.io/docs/providers/google/r/storage_bucket_object.html
Another thought might be to run the local-exec provisioner and run gsutil to copy a directory of files.
https://www.terraform.io/docs/provisioners/local-exec.html
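A minimal sketch of that approach, assuming gsutil is installed and authenticated on the machine running Terraform, and that the folder and bucket names below are placeholders:
resource "null_resource" "upload_folder" {
  # Re-run the copy on every apply (timestamp() changes each run)
  triggers = {
    always_run = timestamp()
  }

  provisioner "local-exec" {
    # Recursively copy the local folder into the bucket
    command = "gsutil -m cp -r ${path.module}/my-local-folder gs://my-bucket-name/"
  }
}
gsutil rsync -r would be an alternative to cp if you want the bucket to mirror the folder contents.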
Another possibility is to upload a group of files to a bucket:
variable "files" {
type = map(string)
default = {
# sourcefile = destfile
"folder/file1" = "somefolder/file1",
"folder2/file2" = "somefolder/file2"
}
}
resource "google_storage_bucket_object" "my-config-objects" {
for_each = var.files
name = each.value
source = "${path.module}/${each.key}"
bucket = my-bucket-name
}
I am using the Terraform data source below to import shared state from S3. Terraform is giving me the error "No stored state was found for the given workspace in the given backend". I am expecting Terraform to pick up the workspace "dev-use1", as I have set the workspace using terraform workspace select "dev-use1".
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "analyticsjobs.tfstate"
workspace_key_prefix = "pipeline/v2/db"
region = "us-east-1"
}
}
Version = Terraform v1.1.9 on darwin_arm64
After enabling debug logging in Terraform by setting TF_LOG="DEBUG", I can see that the S3 API call is returning a 404 error, and from the request XML I can see that the prefix is wrong.
As a workaround I have made the changes below to the data source.
I'm not sure this is the recommended way of doing it, but it works. The docs are not very clear on this: https://www.terraform.io/language/state/remote-state-data
data "terraform_remote_state" "shared_jobs_state" {
backend = "s3"
config = {
bucket = "cicd-backend"
key = "pipeline/v2/db/${terraform.workspace}/analyticsjobs.tfstate"
region = "us-east-1"
}
}
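An alternative that keeps workspace_key_prefix intact would be to pass the workspace to the data source through its top-level workspace argument instead of hard-coding it into the key; a sketch, reusing the values from the original configuration:
data "terraform_remote_state" "shared_jobs_state" {
  backend   = "s3"
  workspace = "dev-use1" # or terraform.workspace

  config = {
    bucket               = "cicd-backend"
    key                  = "analyticsjobs.tfstate"
    workspace_key_prefix = "pipeline/v2/db"
    region               = "us-east-1"
  }
}
This way the data source should build the same pipeline/v2/db/dev-use1/analyticsjobs.tfstate key itself.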
I have a requirement to attach drives to a Windows Server VM in GCP, and this has to be done in Terraform. I am using Terraform 0.12.
We have 3 database servers that we need to get into Terraform. The existing servers have drives mapped like this:
Data: E
Log: F
Backup: G
Currently, the servers that I am building have the drives attached in the wrong order, with the wrong letters assigned:
Log: D
Backup: E
Data: F
This is the code that I am using to create and attach the volumes:
// Create Data Disk
resource "google_compute_disk" "datadisk_instance1" {
  name                      = var.data_disk_name_instance1
  type                      = var.disk_type
  size                      = var.data_disk_size
  zone                      = var.zone1
  snapshot                  = var.data_snapshot_name_instance1
  physical_block_size_bytes = 4096
}

// Create Log Disk
resource "google_compute_disk" "logdisk_instance1" {
  name                      = var.log_disk_name_instance1
  type                      = var.disk_type
  size                      = var.log_disk_size
  zone                      = var.zone1
  snapshot                  = var.log_snapshot_name_instance1
  physical_block_size_bytes = 4096
}

// Create Backup Disk
resource "google_compute_disk" "backupdisk_instance1" {
  name                      = var.backup_disk_name_instance1
  type                      = var.disk_type
  size                      = var.backup_disk_size
  zone                      = var.zone1
  snapshot                  = var.backup_snapshot_name_instance1
  physical_block_size_bytes = 4096
}

// Attach Data Disk
resource "google_compute_attached_disk" "datadiskattach_instance1" {
  disk     = google_compute_disk.datadisk_instance1.id
  instance = google_compute_instance.instance1.id
}

// Attach Log Disk
resource "google_compute_attached_disk" "logdiskattach_instance1" {
  disk     = google_compute_disk.logdisk_instance1.id
  instance = google_compute_instance.instance1.id
}

// Attach Backup Disk
resource "google_compute_attached_disk" "backupdiskattach_instance1" {
  disk     = google_compute_disk.backupdisk_instance1.id
  instance = google_compute_instance.instance1.id
}
The disks are being created from snapshots and the data must be preserved.
How can I attach these disks in the correct order and assign the correct drive letters?
In Azure we achieve this by running a custom script extension, which basically downloads a PowerShell script inside the VM and executes it.
I don't know GCP, but a quick Google search tells me Google Compute Engine lets you set up startup scripts.
You could run PowerShell as a startup script to do the disk initialization and formatting.
The Azure documentation has the PowerShell documented (you may need to build on top of this, by adding checks like: are there partitions with type RAW? etc.):
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/attach-disk-ps#initialize-the-disk
The Terraform docs have a simple example of adding a startup script; you may need to tinker with the syntax to get it up and running with PowerShell:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance
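As a rough sketch of how that could look on the instance resource: GCE Windows images run a PowerShell startup script supplied via the windows-startup-script-ps1 metadata key. Everything below (name, machine type, image family, script path) is a placeholder, and the disk-initialization script itself still has to be written:
resource "google_compute_instance" "instance1" {
  name         = "db-instance1"  # placeholder
  machine_type = "n1-standard-4" # placeholder
  zone         = var.zone1

  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2019" # placeholder Windows image family
    }
  }

  network_interface {
    network = "default"
  }

  metadata = {
    # Runs at boot on Windows images; put the disk initialization,
    # partitioning and drive-letter assignment logic in this script.
    windows-startup-script-ps1 = file("${path.module}/scripts/init-disks.ps1")
  }
}
The attached-disk resources from the question stay as they are; the startup script is what maps each disk to the expected drive letter.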
What specific syntax needs to be used in the example below in order for Terraform to source the AWS provider from a given path in the local file system instead of requesting a cloud copy from the remote Terraform Registry?
provider "aws" {
region = var._region
access_key = var.access_key
secret_key = var.secret_access_key
}
Something like src=C:\path\to\terraform\aws\provider\binary
I recall Mitchell Hashimoto explaining that this is a new feature during HashiConf, but I cannot seem to find the documentation.
You should be able to set it in the CLI configuration as described in the documentation:
provider_installation {
  filesystem_mirror {
    path    = "/usr/share/terraform/providers"
    include = ["example.com/*/*"]
  }
  direct {
    exclude = ["example.com/*/*"]
  }
}
The mirror path has to follow the directory layout Terraform expects for providers, i.e. HOSTNAME/NAMESPACE/TYPE/VERSION/TARGET under the mirror root (for example example.com/hashicorp/aws/4.0.0/linux_amd64/ containing the unpacked provider binary).
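To actually resolve a provider from that mirror, the configuration's required_providers source address has to match the include pattern; a sketch, assuming a provider package unpacked under example.com/hashicorp/aws in the mirror directory:
terraform {
  required_providers {
    aws = {
      # Must match the filesystem_mirror include pattern ("example.com/*/*")
      source  = "example.com/hashicorp/aws"
      version = ">= 4.0"
    }
  }
}
If you want to mirror the stock hashicorp/aws provider instead, the mirror patterns and directory would use registry.terraform.io/hashicorp/aws rather than a custom hostname.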
I am using S3 as the remote state backend and a DynamoDB table for locking.
Both platform1 and platform2 use shared infrastructure from the shared platform.
If I try to create platform1 first, it fails because the dependencies in shared haven't been created; the same goes for platform2. But if I create the shared platform first and then platform1 and platform2, all infrastructure builds without issues.
Is this correct?
How can I build the shared environment first when trying to build one of the platform environments?
I have tried creating the shared environment first.
The root terragrunt.hcl file, i.e. under the tst1 folder:
# Configure Terragrunt to automatically store tfstate files in an S3 bucket
remote_state {
  backend = "s3"
  config = {
    encrypt        = true
    bucket         = "automation-terraform-state"
    key            = "tst1/${path_relative_to_include()}/terraform.tfstate"
    region         = "ap-southeast-2"
    dynamodb_table = "tst-terraform-locks"
  }
}

# Configure root level variables that all resources can inherit. This is especially helpful with multi-account configs
# where terraform_remote_state data sources are placed directly into the modules.
inputs = {
  aws_region = "ap-southeast-2"
  ami_id     = "ami-0aa5848a455c3ec32"
  vpc_id     = "vpc-7e49e81a"
}
The terragrunt.hcl inside platform1:
terraform {
  source = "git::git@github.com:acme/infrastructure-modules.git//application_lb"
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders()
}

inputs = {
  ...
  ...
  ...
}
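On the ordering question: one way to have Terragrunt build the shared environment before the platforms is to declare it as an explicit dependency in each platform's terragrunt.hcl; a sketch, assuming the shared configuration lives in a sibling folder named shared:
# In platform1/terragrunt.hcl (and likewise in platform2)
dependencies {
  paths = ["../shared"]
}
With that in place, terragrunt run-all apply from the tst1 folder should apply shared before platform1 and platform2.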
Is there any way I can use a Terraform template's output as input to another Terraform template?
Ex: I have a Terraform template which creates an ELB, and I have another Terraform template which is going to create an auto scaling group which needs the ELB information as an input variable.
I know I can use a shell script to grep and feed in the ELB information, but I'm looking for a Terraform way of doing this.
Have you tried using remote state to populate your second template?
Declare it like this:
resource "terraform_remote_state" "your_state" {
backend = "s3"
config {
bucket = "${var.your_bucket}"
region = "${var.your_region}"
key = "${var.your_state_file}"
}
}
And then you should be able to pull out your resource directly like this:
your_elb = "${terraform_remote_state.your_state.output.your_output_resource}"
If this doesn't work for you, have you tried implementing your ELB in a module and then just using the output?
https://github.com/terraform-community-modules/tf_aws_elb is a good example of how to structure the module.
Looks like in newer versions of Terraform you'd access the output var like this:
your_elb = "${data.terraform_remote_state.your_state.your_output_resource}"
All the rest is the same; it's just how you reference it.
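For reference, in current Terraform (0.12 and later) the remote state lookup is a data source with config = { ... }, and its values are read through the outputs attribute; a sketch, assuming the same S3 backend variables as above:
data "terraform_remote_state" "your_state" {
  backend = "s3"

  config = {
    bucket = var.your_bucket
    region = var.your_region
    key    = var.your_state_file
  }
}

locals {
  # Outputs are exposed under .outputs in 0.12+
  your_elb = data.terraform_remote_state.your_state.outputs.your_output_resource
}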
The question is about ELB, but I will give an example with S3; it is less to write.
If you don't know how to store Terraform state on AWS, read the article.
Let's suppose you have two independent projects: project-1 and project-2. They are located in two different directories (two different repositories)!
Terraform file /tmp/project-1/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b1"
  acl    = "private"
}

// Output. It will be available in s3://multi-terraform-project-state-bucket/p1.tfstate
output "bucket_name_p1" {
  value = aws_s3_bucket.main_bucket.bucket
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p1.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init and terraform apply.
After that, move on to the Terraform file /tmp/project-2/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b2"
  acl    = "private"

  tags = {
    // Get the S3 bucket name from another terraform state file. In this case it is s3://multi-terraform-project-state-bucket/p1.tfstate
    p1-bucket = data.terraform_remote_state.state1.outputs.bucket_name_p1
  }
}

// Get data from another state file
data "terraform_remote_state" "state1" {
  backend = "s3"
  config = {
    bucket = "multi-terraform-project-state-bucket"
    key    = "p1.tfstate"
    region = "eu-central-1"
  }
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p2.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init and terraform apply.
Now check the tags on my-epic-test-b2. There you will find the name of the bucket from project-1.
When you are integrating Terraform with Jenkins, you can simply define a variable in the Jenkinsfile you are creating. Suppose you want to run an EC2 instance using Terraform and a Jenkinsfile. When you need to get the public IP address of the instance, you can use this command inside your Jenkinsfile:
script {
    def public_ip = sh(script: 'terraform output public_ip | cut -d \'"\' -f2', returnStdout: true).trim()
}
This strips the surrounding quotes and saves only the IP address in the public_ip variable. For this to work, you have to define an output block in the Terraform script that outputs the public IP.
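For completeness, the matching output block could look like this; aws_instance.example is a hypothetical resource name standing in for whatever instance the configuration actually creates:
output "public_ip" {
  description = "Public IP address of the instance"
  value       = aws_instance.example.public_ip
}
On Terraform 0.14 and later, terraform output -raw public_ip would avoid the quote handling entirely.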