Terraform: get a tfvars file from a remote repository

I have a .tfvars file in a remote repository for some other infrastructure, and I want to reuse that config file in the new infrastructure I am working on. So far I have tried the following, but I am not sure how to read the file I got via the HTTP URL. Is there a better way of getting remote config?
data "http" "tfvars" {
  url = "https://some#dev.azure.com/some/_git/sampleinfra?path=/infra/variable_folders.auto.tfvars"
  request_headers = {
    Accept = "application/json"
  }
}
I want to add that the content of the file looks like this (HCL map syntax, not actually a JSON array):
folderList = {
  "main" = {
    process = true
  }
  "base" = {
    process = false
  }
}
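For completeness, a sketch of how the fetched content could be consumed, under the assumption that the remote file can be published as JSON (e.g. a .tfvars.json variant) — Terraform's jsondecode() cannot parse HCL-syntax tfvars, and the http data source exposes the body as response_body (just body in older provider versions):

```hcl
locals {
  # Works only if the remote file is JSON, e.g. variable_folders.auto.tfvars.json;
  # Terraform has no built-in function to parse HCL-syntax .tfvars at runtime.
  folder_list = jsondecode(data.http.tfvars.response_body)["folderList"]

  # Names of folders whose "process" flag is true.
  folders_to_process = [for name, cfg in local.folder_list : name if cfg.process]
}
```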

Related

Is it possible to use 1password for Terraform provider credentials?

I'm trying to set up a Terraform configuration for Sonatype Nexus (among other things). Rather than providing my passwords directly, I want to get them from my 1Password system. The advantage of doing this is that this Terraform config will live alongside my broader infrastructure configuration, which includes the setup of the 1Password Connect deployment.
My infrastructure CI/CD therefore already has environment variables set for the 1password credentials out of necessity, and it would be nice to make those the only variables I would need for anything. Hence trying to access this password from 1Password.
Below is my Terraform setup. As you can see, it gets the Nexus admin password from 1Password and tries to use it in the provider. However, when I run this Terraform script, it fails with a 401 response from Nexus when trying to create the blobstore.
To be honest, the 1Password Terraform documentation leaves much to be desired. I don't even know if I can configure a provider with data from another provider to begin with.
terraform {
  backend "kubernetes" {
    secret_suffix = "nexus-state"
    config_path   = "~/.kube/config"
  }
  required_providers {
    nexus = {
      source  = "datadrivers/nexus"
      version = "1.21.0"
    }
    onepassword = {
      source  = "1Password/onepassword"
      version = "1.1.4"
    }
  }
}
provider "onepassword" {
  url   = "https://my-1password"
  token = var.onepassword_token
}
data "onepassword_item" "nexus_admin" {
  vault = "VAULT_UUID"
  uuid  = "ITEM_UUID"
}
provider "nexus" {
  insecure = true
  password = data.onepassword_item.nexus_admin.password
  username = "admin"
  url      = "https://my-nexus"
}
resource "nexus_blobstore_file" "npm_private" {
  name = "npm-private"
  path = "/nexus-data/npm-private"
}
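On the question of whether a provider can be configured with data from another provider: Terraform does allow provider arguments to reference data source attributes. To check whether the 401 comes from a wrong value rather than from the wiring, a hypothetical debugging output (the name is illustrative) can surface what the 1Password data source actually returned:

```hcl
# Illustrative debugging aid: after an apply, inspect the fetched credential with
#   terraform output nexus_admin_password
# and compare it to the admin password that works when logging in manually.
output "nexus_admin_password" {
  value     = data.onepassword_item.nexus_admin.password
  sensitive = true
}
```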

GCP Cloud Monitoring using Terraform: allowing "Response Code Classes" in uptime check creation

So I have a Terraform file where I have created my uptime check; it checks the SSL certificate rather than uptime, configured just to check certificate expiry.
By default, the acceptable HTTP response code 200 is allowed, but I want to allow 404 as well, so that my test still passes if the website returns a 404 response. How can I allow that in the Terraform code?
For example:
resource "google_monitoring_uptime_check_config" "https" {
  display_name = "https-uptime-check"
  timeout      = "60s"
  http_check {
    path         = "/some-path"
    port         = "443"
    use_ssl      = true
    validate_ssl = true
  }
  monitored_resource {
    type = "uptime_url"
    labels = {
      project_id = "my-project-name"
      host       = "192.168.1.1"
    }
  }
  content_matchers {
    content = "example"
    matcher = "MATCHES_JSON_PATH"
    json_path_matcher {
      json_path    = "$.path"
      json_matcher = "REGEX_MATCH"
    }
  }
}
This passes if I click the test option, but I need the test to also pass when the response is 404.
Can anyone please help me with the correct code to allow both 200 and 404 under Acceptable HTTP Response Codes → Response Code Classes?
You can specify accepted status classes as well as exact response codes using accepted_response_status_codes blocks:
http_check {
  path         = "/some-path"
  port         = "443"
  use_ssl      = true
  validate_ssl = true
  accepted_response_status_codes {
    status_class = "STATUS_CLASS_2XX"
  }
  accepted_response_status_codes {
    status_value = 404
  }
  accepted_response_status_codes {
    status_value = 302
  }
}

Terraform - GCP - link an ip address to a load balancer linked to a cloud storage bucket

What I want:
I would like to have a static.example.com DNS record that points to a bucket in GCS containing my static images.
As I manage my DNS through Cloudflare, I think I need to use the fact that GCP can assign me an anycast IP, link that IP to a GCP load balancer, and link the load balancer to the bucket.
What I currently have:
a bucket already created manually, named "static-images"
the load balancer pointing to said bucket, created with:
resource "google_compute_backend_bucket" "image_backend" {
  name        = "example-static-images"
  bucket_name = "static-images"
  enable_cdn  = true
}
the routing to link to my bucket
resource "google_compute_url_map" "urlmap" {
  name            = "urlmap"
  default_service = "${google_compute_backend_bucket.image_backend.self_link}"
  host_rule {
    hosts        = ["static.example.com"]
    path_matcher = "allpaths"
  }
  path_matcher {
    name            = "allpaths"
    default_service = "${google_compute_backend_bucket.image_backend.self_link}"
    path_rule {
      paths   = ["/static"]
      service = "${google_compute_backend_bucket.image_backend.self_link}"
    }
  }
}
an ip created with:
resource "google_compute_global_address" "my_ip" {
  name = "ip-for-static-example-com"
}
What I'm missing:
the Terraform equivalent of the "frontend configuration" step when creating a load balancer from the web console
Looks like you're just missing a forwarding rule and target proxy.
The Terraform docs on google_compute_global_forwarding_rule have a good example, e.g.:
resource "google_compute_global_forwarding_rule" "default" {
  name       = "default-rule"
  target     = "${google_compute_target_http_proxy.default.self_link}"
  port_range = "80" // or e.g. "443" for SSL
  ip_address = "${google_compute_global_address.my_ip.address}"
}
resource "google_compute_target_http_proxy" "default" { // or an HTTPS proxy
  name        = "default-proxy"
  description = "an HTTP proxy"
  url_map     = "${google_compute_url_map.urlmap.self_link}"
}
hope this helps!
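If the frontend should terminate TLS instead, a sketch of the HTTPS variant (resource names are illustrative; the managed certificate assumes static.example.com already resolves to the reserved IP so Google can provision it):

```hcl
resource "google_compute_managed_ssl_certificate" "static" {
  name = "static-example-com"
  managed {
    domains = ["static.example.com"]
  }
}

resource "google_compute_target_https_proxy" "default" {
  name             = "default-https-proxy"
  url_map          = "${google_compute_url_map.urlmap.self_link}"
  ssl_certificates = ["${google_compute_managed_ssl_certificate.static.self_link}"]
}

resource "google_compute_global_forwarding_rule" "https" {
  name       = "https-rule"
  target     = "${google_compute_target_https_proxy.default.self_link}"
  port_range = "443"
  ip_address = "${google_compute_global_address.my_ip.address}"
}
```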

Get a signed URL of google storage object via Terraform

I am trying to get a signed URL for an object (e.g. abc.png) in a Google bucket via a Terraform .tf script, but I am not getting any output on the console.
I have installed Terraform on my local Linux machine and am providing a service account JSON key as credentials, but I am not getting the signed URL. Please check my script below:
provider "google" {
  credentials = "account.json"
}
data "google_storage_object_signed_url" "get_url" {
  bucket       = "my bucket"
  path         = "new.json"
  content_md5  = "pRviqwS4c4OTJRTe03FD1w=="
  content_type = "text/plain"
  duration     = "2h"
  credentials  = "account.json"
  extension_headers = {
    x-goog-if-generation-match = 1
  }
}
Please let me know what I am doing wrong.
If you need to see output values, add an output block as below:
output "signed_url" {
  value = "${data.google_storage_object_signed_url.get_url.signed_url}"
}

GitHub webhook created twice when using Terraform aws_codebuild_webhook

I'm creating an AWS CodeBuild webhook using the following (partial) Terraform configuration:
resource "aws_codebuild_webhook" "webhook" {
  project_name  = "${aws_codebuild_project.abc-web-pull-build.name}"
  branch_filter = "master"
}
resource "github_repository_webhook" "webhook" {
  name       = "web"
  repository = "${var.github_repo}"
  active     = true
  events     = ["pull_request"]
  configuration {
    url          = "${aws_codebuild_webhook.webhook.payload_url}"
    content_type = "json"
    insecure_ssl = false
    secret       = "${aws_codebuild_webhook.webhook.secret}"
  }
}
For some reason, two webhooks are created on GitHub for that project: one with the events pull_request and push, and a second with only pull_request (the only one I expected).
I've tried removing the first block (aws_codebuild_webhook), even though the Terraform documentation gives an example with both:
https://www.terraform.io/docs/providers/aws/r/codebuild_webhook.html
But then I'm in a pickle, because there is no other way to acquire the payload_url the webhook requires, which I currently get from aws_codebuild_webhook.webhook.payload_url.
I'm not sure what the right approach is here; I'd appreciate any suggestion.
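A likely cause (an assumption, not confirmed here): when the CodeBuild project's source is GITHUB with OAuth credentials, aws_codebuild_webhook registers a webhook on the GitHub repository itself, so the explicit github_repository_webhook resource creates a duplicate. A minimal sketch that keeps only the CodeBuild-managed webhook, using the filter_group syntax that newer provider versions use in place of the deprecated branch_filter:

```hcl
# Sketch, assuming the CodeBuild project's source is GITHUB with OAuth;
# aws_codebuild_webhook then creates the GitHub webhook on its own, and no
# github_repository_webhook resource (or payload_url wiring) is needed.
resource "aws_codebuild_webhook" "webhook" {
  project_name = "${aws_codebuild_project.abc-web-pull-build.name}"

  filter_group {
    filter {
      type    = "EVENT"
      pattern = "PULL_REQUEST_CREATED, PULL_REQUEST_UPDATED"
    }
    filter {
      type    = "HEAD_REF"
      pattern = "master"
    }
  }
}
```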