Terraform OpenStack: Attach network interface during creation

I want to create an instance in OpenStack with only a pre-defined network interface attached. I have access to OpenStack and I know the network interface's ID/name.
After creating the instance I could simply attach the interface. But that way the instance first gets a randomly assigned IP from the pool and only afterwards gets the network interface attached. That's not what I want.
As stated at the beginning, I want to attach the interface while I build the instance.
Edit - Example code:
Host Creation:
resource "openstack_compute_instance_v2" "example_host" {
count = 1
name = example-host
image_name = var.centos_7_name
flavor_id = "2"
key_pair = "key"
}
Interface attaching:
resource "openstack_compute_interface_attach_v2" "example_interface_attach" {
instance_id = openstack_compute_instance_v2.example_host[0].id
port_id = "bd858b4c-d6de-4739-b125-314f1e7041ed"
}
This won't work. Terraform returns an error:
Error: Error creating OpenStack server: Expected HTTP response code []
when accessing [POST servers], but got 409 instead
{"conflictingRequest": {"message": "Multiple possible networks found,
use a Network ID to be more specific.", "code": 409}}
Back to my initial query: I want to deploy a new host and attach a network interface. The result should be a host with only one IP address, the one I've attached to it.

The error seems to be generated by the instance launch. OpenStack (not Terraform) insists on a network if more than one network is available. From an OpenStack perspective, you have several solutions. Off the cuff, I see three:
Since microversion 2.37, the Nova API allows you to specify "none" as a network, in which case the instance runs, but is not connected after the launch.
Or launch the instance on a port instead of a network, after putting an IP address on the port. Using the openstack client:
openstack port create --network <network> --fixed-ip subnet=<subnet>,ip-address=<ip-address>
openstack server create ... --port <port-id> ...
I consider that the best solution.
Another solution would be specifying a network and a fixed IP address while launching the instance. CLI:
openstack server create ... --nic NET-UUID,v4-fixed-ip=172.16.7.8 ...
Unfortunately, I can't tell whether Terraform supports any of these solutions. I would try adding the port_id to the resource "openstack_compute_instance_v2" "example_host" block.
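In Terraform terms, a minimal sketch of the port-based approach might look like the following, assuming the OpenStack provider's openstack_networking_port_v2 resource; the network ID, subnet ID and fixed IP are placeholders:
resource "openstack_networking_port_v2" "example_port" {
  name       = "example-port"
  network_id = "<network-id>"
  fixed_ip {
    subnet_id  = "<subnet-id>"
    ip_address = "172.16.7.8" # placeholder address
  }
}

resource "openstack_compute_instance_v2" "example_host" {
  name       = "example-host"
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"
  network {
    port = openstack_networking_port_v2.example_port.id
  }
}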

I've found the solution and it's incredibly simple: you can simply add the port ID to the network block. I had tried that before and it failed; chances are I provided the wrong ID.
## Create hosts
resource "openstack_compute_instance_v2" "test_host" {
  count      = 1
  name       = format("test-host-%02d", count.index + 1)
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"
  network {
    port = "<port-id>"
  }
}
Here's an alternative that removes the risk of providing a wrong ID.
## Create hosts
resource "openstack_compute_instance_v2" "test_host" {
  count      = 1
  name       = format("test-host-%02d", count.index + 1)
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"
  network {
    port = data.openstack_networking_port_v2.port_1.id
  }
}

data "openstack_networking_port_v2" "port_1" {
  name = "switch-port-208.37"
}

Related

How do I attach drives and assign drive letters to Windows servers in GCP using Terraform?

I have a requirement to attach drives to a Windows Server VM in GCP, and this has to be done in Terraform. I am using Terraform 0.12.
We have three database servers that we need to get into Terraform. The existing servers have drives mapped like this:
Data: E
Log: F
Backup: G
Currently, the servers I am building have the drives attached in the wrong order and with the wrong letters assigned:
Log: D
Backup: E
Data: F
This is the code that I am using to create and attach the volumes:
// Create Data Disk
resource "google_compute_disk" "datadisk_instance1" {
  name                      = var.data_disk_name_instance1
  type                      = var.disk_type
  size                      = var.data_disk_size
  zone                      = var.zone1
  snapshot                  = var.data_snapshot_name_instance1
  physical_block_size_bytes = 4096
}

// Create Log Disk
resource "google_compute_disk" "logdisk_instance1" {
  name                      = var.log_disk_name_instance1
  type                      = var.disk_type
  size                      = var.log_disk_size
  zone                      = var.zone1
  snapshot                  = var.log_snapshot_name_instance1
  physical_block_size_bytes = 4096
}

// Create Backup Disk
resource "google_compute_disk" "backupdisk_instance1" {
  name                      = var.backup_disk_name_instance1
  type                      = var.disk_type
  size                      = var.backup_disk_size
  zone                      = var.zone1
  snapshot                  = var.backup_snapshot_name_instance1
  physical_block_size_bytes = 4096
}

// Attach Data Disk
resource "google_compute_attached_disk" "datadiskattach_instance1" {
  disk     = google_compute_disk.datadisk_instance1.id
  instance = google_compute_instance.instance1.id
}

// Attach Log Disk
resource "google_compute_attached_disk" "logdiskattach_instance1" {
  disk     = google_compute_disk.logdisk_instance1.id
  instance = google_compute_instance.instance1.id
}

// Attach Backup Disk
resource "google_compute_attached_disk" "backupdiskattach_instance1" {
  disk     = google_compute_disk.backupdisk_instance1.id
  instance = google_compute_instance.instance1.id
}
The disks are being created from snapshots and the data must be preserved.
How can I attach these disks in the correct order and assign the correct drive letters?
In Azure we achieve this by running a custom script extension, which basically downloads a PowerShell script inside the VM and executes it.
I don't know GCP, but a quick Google search tells me Google Compute lets you set up startup scripts.
You could run PowerShell as a startup script that does the disk initialization and formatting.
The Azure documentation has the PowerShell documented (you may need to build on top of it, e.g. by adding checks such as whether there are partitions of type RAW):
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/attach-disk-ps#initialize-the-disk
The Terraform docs have a simple example of adding a startup script; you may need to tinker with the syntax to get it up and running with PowerShell:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance
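A rough sketch of that idea is below. It assumes the volumes restored from the snapshots carry the file system labels "Data", "Log" and "Backup" and that re-lettering them by label is acceptable; the windows-startup-script-ps1 metadata key is what GCE executes as PowerShell at boot on Windows images, and the matching logic is something you would need to adapt to your environment:
resource "google_compute_instance" "instance1" {
  // ... existing machine configuration ...

  metadata = {
    // Assumption: volumes are identified by their file system labels.
    "windows-startup-script-ps1" = <<-EOT
      # Desired mapping from the question; re-letter Backup first so that E
      # becomes free for Data, and F becomes free for Log.
      $map = [ordered]@{ "Backup" = "G"; "Data" = "E"; "Log" = "F" }
      foreach ($label in $map.Keys) {
        $vol = Get-Volume -FileSystemLabel $label -ErrorAction SilentlyContinue
        if ($vol -and $vol.DriveLetter -ne $map[$label]) {
          Get-Partition -DriveLetter $vol.DriveLetter |
            Set-Partition -NewDriveLetter $map[$label]
        }
      }
    EOT
  }
}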

AWS WAF not blocking requests using aws_wafregional_regex_pattern_set

I'm kind of surprised that I'm running into this problem. I created an aws_wafregional_regex_pattern_set to block incoming requests that contain php in their URI. I expected all requests with php in them to be blocked. However, the requests are still making it through. Perhaps I'm misunderstanding what this resource actually does? I have attached some sample code below.
resource "aws_wafregional_rule" "block_uris_containining_php" {
name = "BlockUrisContainingPhp"
metric_name = "BlockUrisContainingPhp"
predicate {
data_id = "${aws_wafregional_regex_match_set.block_uris_containing_php.id}"
negated = false
type = "RegexMatch"
}
}
resource "aws_wafregional_regex_match_set" "block_uris_containing_php" {
name = "BlockUrisContainingPhp"
regex_match_tuple {
field_to_match {
type = "URI"
}
regex_pattern_set_id = "${aws_wafregional_regex_pattern_set.block_uris_containing_php.id}"
text_transformation = "NONE"
}
}
resource "aws_wafregional_regex_pattern_set" "block_uris_containing_php" {
name = "BlockUrisContainingPhp"
regex_pattern_strings = [ "php$" ]
}
This code creates a string and regex matching condition in AWS WAF, so I know it's at least getting created. I used CloudWatch to check for blocked requests as I sent requests containing php to the load balancer, but each request went through successfully. Any help with this would be greatly appreciated.
I can't tell from the snippet, but did you add the rule to a web ACL and set the rule action to block?
Also, you should try using WAFv2 instead of WAF Regional, as WAFv2 comes with new features and makes rules easier to express.
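In case that wiring is the missing piece, here is a hedged sketch of attaching the rule to a WAF Regional web ACL with a BLOCK action and associating it with a load balancer; the ACL names and the load balancer ARN are placeholders:
resource "aws_wafregional_web_acl" "block_php_acl" {
  name        = "BlockPhpAcl"
  metric_name = "BlockPhpAcl"

  default_action {
    type = "ALLOW"
  }

  rule {
    priority = 1
    rule_id  = "${aws_wafregional_rule.block_uris_containining_php.id}"
    type     = "REGULAR"

    action {
      type = "BLOCK"
    }
  }
}

resource "aws_wafregional_web_acl_association" "alb" {
  resource_arn = "<your-load-balancer-arn>"
  web_acl_id   = "${aws_wafregional_web_acl.block_php_acl.id}"
}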

Terraform - GCP - link an IP address to a load balancer linked to a Cloud Storage bucket

What I want:
I would like to have a static.example.com DNS record that points to a bucket in GCS containing my static images.
As I manage my DNS through Cloudflare, I think I need to use the fact that GCP can assign me an anycast IP, link that IP to a GCP load balancer, and link the load balancer to the bucket.
What I currently have:
a bucket already created manually, named "static-images"
the load balancer linking to said bucket, created with:
resource "google_compute_backend_bucket" "image_backend" {
name = "example-static-images"
bucket_name = "static-images"
enable_cdn = true
}
the routing to link to my bucket
resource "google_compute_url_map" "urlmap" {
name = "urlmap"
default_service = "${google_compute_backend_bucket.image_backend.self_link}"
host_rule {
hosts = ["static.example.com"]
path_matcher = "allpaths"
}
path_matcher {
name = "allpaths"
default_service = "${google_compute_backend_bucket.image_backend.self_link}"
path_rule {
paths = ["/static"]
service = "${google_compute_backend_bucket.image_backend.self_link}"
}
}
}
an IP created with:
resource "google_compute_global_address" "my_ip" {
  name = "ip-for-static-example-com"
}
What I'm missing:
the Terraform equivalent of the "frontend configuration" step when creating a load balancer from the web console
Looks like you're just missing a forwarding rule and target proxy.
The Terraform docs on google_compute_global_forwarding_rule have a good example.
e.g.:
resource "google_compute_global_forwarding_rule" "default" {
name = "default-rule"
target = "${google_compute_target_http_proxy.default.self_link}"
port_range = 80 // or other e.g. for ssl
ip_address = "${google_compute_global_address.my_ip.address}"
}
resource "google_compute_target_http_proxy" "default" { // or https proxy
name = "default-proxy"
description = "an HTTP proxy"
url_map = "${google_compute_url_map.urlmap.self_link}"
}
hope this helps!

GitHub webhook created twice when using Terraform aws_codebuild_webhook

I'm creating an AWS CodeBuild project using the following (partial) Terraform configuration:
resource "aws_codebuild_webhook" "webhook" {
project_name = "${aws_codebuild_project.abc-web-pull-build.name}"
branch_filter = "master"
}
resource "github_repository_webhook" "webhook" {
name = "web"
repository = "${var.github_repo}"
active = true
events = ["pull_request"]
configuration {
url = "${aws_codebuild_webhook.webhook.payload_url}"
content_type = "json"
insecure_ssl = false
secret = "${aws_codebuild_webhook.webhook.secret}"
}
}
For some reason two webhooks are created on GitHub for that project: one with the events pull_request and push, and a second with only pull_request (the one I expected).
I've tried removing the first block (aws_codebuild_webhook), even though the Terraform documentation gives an example with both:
https://www.terraform.io/docs/providers/aws/r/codebuild_webhook.html
but then I'm in a pickle, because there is no other way to acquire the payload_url the webhook requires, which I currently get from aws_codebuild_webhook.webhook.payload_url.
I'm not sure what the right approach is here; I'd appreciate any suggestions.

How to use terraform output as input variable of another terraform template

Is there any way I can use one Terraform template's output as another Terraform template's input?
For example: I have a Terraform template which creates an ELB, and I have another Terraform template which is going to create an Auto Scaling group that needs the ELB information as an input variable.
I know I can use a shell script to grep and feed in the ELB information, but I'm looking for a Terraform way of doing this.
Have you tried using remote state to populate your second template?
Declare it like this:
resource "terraform_remote_state" "your_state" {
backend = "s3"
config {
bucket = "${var.your_bucket}"
region = "${var.your_region}"
key = "${var.your_state_file}"
}
}
And then you should be able to pull out your resource directly like this:
your_elb = "${terraform_remote_state.your_state.output.your_output_resource}"
If this doesn't work for you, have you tried implementing your ELB in a module and then just using the output?
https://github.com/terraform-community-modules/tf_aws_elb is a good example of how to structure the module.
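If you go the module route, a minimal sketch of the idea looks like this; the module source path and the output name (elb_name) are hypothetical and depend on how the module is written:
module "elb" {
  source = "./modules/elb"
  // ... module inputs ...
}

// Reference an output the module exposes, e.g. a hypothetical "elb_name":
your_elb = "${module.elb.elb_name}"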
Looks like in newer versions of Terraform you'd access the output variable like this:
your_elb = "${data.terraform_remote_state.your_state.your_output_resource}"
All the rest is the same; only the way you reference it changes.
The question is about ELB, but I will give an example with S3, since it involves less to write.
If you don't know how to store Terraform state on AWS, read the article.
Let's suppose you have two independent projects: project-1 and project-2. They are located in two different directories (two different repositories).
Terraform file /tmp/project-1/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b1"
  acl    = "private"
}

// Output. It will be available in s3://multi-terraform-project-state-bucket/p1.tfstate
output "bucket_name_p1" {
  value = aws_s3_bucket.main_bucket.bucket
}

// Store Terraform state on AWS. The S3 bucket and DynamoDB table should be created before running Terraform.
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p1.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init and terraform apply.
After that, you move to the Terraform file /tmp/project-2/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b2"
  acl    = "private"

  tags = {
    // Get the S3 bucket name from the other Terraform state file, in this case s3://multi-terraform-project-state-bucket/p1.tfstate
    p1-bucket = data.terraform_remote_state.state1.outputs.bucket_name_p1
  }
}

// Get data from another state file
data "terraform_remote_state" "state1" {
  backend = "s3"
  config = {
    bucket = "multi-terraform-project-state-bucket"
    key    = "p1.tfstate"
    region = "eu-central-1"
  }
}

// Store Terraform state on AWS. The S3 bucket and DynamoDB table should be created before running Terraform.
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p2.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init and terraform apply.
Now check the tags on my-epic-test-b2. There you will find the name of the bucket from project-1.
When you are integrating Terraform with Jenkins, you can simply define a variable in the Jenkinsfile you are creating. Suppose you want to launch an EC2 instance using Terraform and a Jenkinsfile; when you need to get the public IP address of the instance, you can use this command inside your Jenkinsfile:
script {
  def public_ip = sh(script: "terraform output public_ip | cut -d '\"' -f2", returnStdout: true).trim()
}
This strips the quotes and keeps only the IP address in the public_ip variable. For this to work, you have to define an output block in your Terraform configuration that exposes the public IP.
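For example, a minimal output block, assuming the instance is an aws_instance resource named "example" (the resource name is a placeholder):
output "public_ip" {
  // Expose the instance's public IP so `terraform output public_ip` can read it
  value = aws_instance.example.public_ip
}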
