How to provision an RDS instance inside a VPC via Puppet

I'm trying to provision an RDS instance inside an existing VPC using the puppetlabs/aws module. I'm able to provision an RDS instance in non-VPC mode using the following resource declaration:
rds_instance { $instance_name:
  ensure               => present,
  allocated_storage    => $rds::params::allocated_storage,
  db_instance_class    => $rds::params::db_instance_class,
  db_name              => $db_name,
  engine               => $db_data['engine'],
  license_model        => $db_data['license_model'],
  db_security_groups   => $db_security_groups,
  master_username      => $master_username,
  master_user_password => $master_user_password,
  region               => $region,
  skip_final_snapshot  => $rds::params::skip_final_snapshot,
  storage_type         => $rds::params::storage_type,
}
However, when I try to add an additional attribute called db_subnet, the following error occurs when running puppet apply:
Error: Could not set 'present' on ensure: unexpected value at params[:subnet_group_name]
I'm aware this error comes from the aws-sdk rather than the Puppet module itself.
If I'm correct, I need to pass a subnet group name for the db_subnet attribute, which I've done, but it results in the error above. Any idea what I'm doing wrong?
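For reference, this is roughly the change, with a placeholder subnet group name in place of my real one:
rds_instance { $instance_name:
  # ...same attributes as above...
  db_subnet => 'my-db-subnet-group',  # placeholder for my actual DB subnet group name
}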
Thanks in advance

Related

How to use AWS SSO temporary credentials in AWS SDK for local, but Instance Role for prod?

I understand how to use AWS SSO temporary credentials in my SDK, from this question and this question and this documentation. It's pretty simple, really:
import { fromSSO } from "@aws-sdk/credential-providers";
const client = new FooClient({ credentials: fromSSO({ profile: "my-sso-profile" }) });
And this code will work perfectly fine on my local computer and on my teammates' computers (as long as they are logged in to AWS SSO with the AWS CLI). However, it's not at all clear to me how to modify this so that it works both on our local computers with AWS SSO and on an EC2 instance with an instance role. It's obvious that the EC2 instance should not have access to AWS SSO, but rather it should get its permissions from the IAM policies attached to its associated instance role. But what would my code look like, to account for both scenarios?
I'm taking a wild stab at this:
import { fromInstanceMetadata, fromSSO } from "@aws-sdk/credential-providers";
const isEc2Instance = "I HAVE NO IDEA WHAT TO DO HERE";
let credentials;
if (isEc2Instance) {
  credentials = fromInstanceMetadata({
    // Optional. The connection timeout (in milliseconds) to apply to any remote requests.
    // If not specified, a default value of `1000` (one second) is used.
    timeout: 1000,
    // Optional. The maximum number of times any HTTP connections should be retried. If not
    // specified, a default value of `0` will be used.
    maxRetries: 0,
  });
} else {
  credentials = fromSSO({ profile: "my-sso-profile" });
}
const client = new FooClient({ credentials: credentials });
The code for getting the credentials from instance metadata is from here, so it's probably correct. But as line 2 says, I have no idea how to determine whether I should be using AWS SSO, instance metadata, or something else (perhaps for other platforms; what if this code is deployed to EC2 for the dev environment and ECS/EKS for prod?).
Maybe this if statement is not the right approach. I would gladly consider another option.
So, what would be the correct way to write code that gets the AWS credentials from the correct source depending on the platform where it's running?
And yes, since these credentials will be the same for any AWS SDK Client anywhere in the app, the code that gets the credentials should be abstracted away from this, and 5 if statements in a CredentialsHelper doesn't sound so bad, but I didn't want to overcomplicate this question.
The code is JavaScript and I'm looking for something that works in Node.js, but I think the logic would be the same in any language.
You need to use the DefaultAWSCredentialsProviderChain in your code.
This will pick up your local credentials when running locally and the instance credentials when deployed on EC2 or EKS.
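A minimal sketch of the equivalent in the JavaScript SDK v3, assuming the FooClient from the question and the my-sso-profile name used above:
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

// The default Node.js chain tries environment variables, the shared config/credentials
// files (including SSO profiles), and finally container/instance metadata, so the same
// code works locally and on EC2 without an explicit if statement.
const credentials = fromNodeProviderChain({
  // Optional: only used when resolving from the shared config files locally.
  profile: "my-sso-profile",
});

const client = new FooClient({ credentials });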
Moreover, here is a snippet (in Scala) that picks a provider based on configuration:
// Requires the AWS SDK for Java v1 and cats. `AWSConfig` is the caller's own
// config type with `accessKey` and `secretKey` fields.
import cats.syntax.either._
import com.amazonaws.auth._

private def getProvider(awsConfig: AWSConfig): Either[Throwable, AWSCredentialsProvider] = {
  def isDefault(key: String): Boolean = key == "default"
  def isIam(key: String): Boolean = key == "iam"
  def isEnv(key: String): Boolean = key == "env"

  ((awsConfig.accessKey, awsConfig.secretKey) match {
    case (a, s) if isDefault(a) && isDefault(s) =>
      new DefaultAWSCredentialsProviderChain().asRight
    case (a, s) if isDefault(a) || isDefault(s) =>
      "accessKey and secretKey must both be set to 'default' or neither".asLeft
    case (a, s) if isIam(a) && isIam(s) =>
      InstanceProfileCredentialsProvider.getInstance().asRight
    case (a, s) if isIam(a) || isIam(s) =>
      "accessKey and secretKey must both be set to 'iam' or neither".asLeft
    case (a, s) if isEnv(a) && isEnv(s) =>
      new EnvironmentVariableCredentialsProvider().asRight
    case (a, s) if isEnv(a) || isEnv(s) =>
      "accessKey and secretKey must both be set to 'env' or neither".asLeft
    case _ =>
      new AWSStaticCredentialsProvider(
        new BasicAWSCredentials(awsConfig.accessKey, awsConfig.secretKey)
      ).asRight
  }).leftMap(new IllegalArgumentException(_))
}

Terraform Openstack: Attach network interface during creation

I want to create an instance in OpenStack with only a pre-defined network interface attached. I have access to OpenStack, and I know the network interface ID/name.
After creating an instance I can simply attach the interface, but that way it first gets a randomly assigned IP from the pool and only afterwards gets the network interface attached. That's not what I want.
As stated in the beginning, I want to attach the interface while I build the instance.
Edit - Example code:
Host Creation:
resource "openstack_compute_instance_v2" "example_host" {
count = 1
name = example-host
image_name = var.centos_7_name
flavor_id = "2"
key_pair = "key"
}
Interface attaching:
resource "openstack_compute_interface_attach_v2" "example_interface_attach" {
instance_id = openstack_compute_instance_v2.example_host[0].id
port_id = "bd858b4c-d6de-4739-b125-314f1e7041ed"
}
This won't work. Terraform returns an error:
Error: Error creating OpenStack server: Expected HTTP response code []
when accessing [POST servers], but got 409 instead
{"conflictingRequest": {"message": "Multiple possible networks found,
use a Network ID to be more specific.", "code": 409}}
Back to my initial query: I want to deploy a new host and attach a network interface. The result should be a host with only one IP-Address, the one I've attached to it.
The error seems to be generated by the instance launch. OpenStack (not Terraform) insists on a network if more than one network is available. From an OpenStack perspective, you have several solutions. Off the cuff, I see three:
Since microversion 2.37, the Nova API allows you to specify "none" as a network, in which case the instance runs, but is not connected after the launch.
Or launch the instance on a port instead of a network, after putting an IP address on the port. Using the openstack client:
openstack port create --network <network> --fixed-ip subnet=<subnet>,ip-address=<ip-address>
openstack server create ... --port <port-id> ...
I consider that the best solution.
Another solution would be specifying a network and a fixed IP address while launching the instance. CLI:
openstack server create ... --nic NET-UUID,v4-fixed-ip=172.16.7.8 ...
Unfortunately, I can't tell whether Terraform supports any of these solutions. I would try adding the port_id to the resource "openstack_compute_instance_v2" "example_host" block.
I've found the solution and it's incredibly simple. You can simply add the port ID to the network block. I had tried it before and it failed; chances are I provided the wrong ID.
## Create hosts
resource "openstack_compute_instance_v2" "test_host" {
  count      = 1
  name       = format("test-host-%02d", count.index + 1)
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"

  network {
    port = "<port-id>"
  }
}
Here's an additional solution that removes the risk of providing a wrong ID.
## Create hosts
resource "openstack_compute_instance_v2" "test_host" {
  count      = 1
  name       = format("test-host-%02d", count.index + 1)
  image_name = var.centos_7_name
  flavor_id  = "2"
  key_pair   = "key"

  network {
    port = data.openstack_networking_port_v2.port_1.id
  }
}

data "openstack_networking_port_v2" "port_1" {
  name = "switch-port-208.37"
}

Terraform backend SignatureDoesNotMatch

I'm pretty new to Terraform, but I'm stuck trying to set up a Terraform backend to use S3.
INIT:
terraform init -backend-config="access_key=XXXXXXX" -backend-config="secret_key=XXXXX"
TERRAFORM BACKEND:
resource "aws_dynamodb_table" "terraform_state_lock" {
name = "terraform-lock"
read_capacity = 5
write_capacity = 5
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
resource "aws_s3_bucket" "bucket" {
bucket = "tfbackend"
}
terraform {
backend "s3" {
bucket = "tfbackend"
key = "terraform"
region = "eu-west-1"
dynamodb_table = "terraform-lock"
}
}
ERROR:
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.
status code: 403, request id: xxxx-xxxx
I'm really at a loss because these same credentials are used for my Terraform infrastructure and work perfectly fine there. The IAM user on AWS also has permissions for both DynamoDB and S3.
Am I supposed to tell Terraform to use a different authentication method?
Remove .terraform/ and try again, and really double-check your credentials.
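For example, clearing the cached backend settings and re-initializing with the same flags as the init command above:
rm -rf .terraform
terraform init -backend-config="access_key=XXXXXXX" -backend-config="secret_key=XXXXX"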
I regenerated the access keys and secret, and it works fine now.

Swarm mode token - Puppet module

The documentation of a swarm mode setup seems to be missing something important.
It looks like to manage a swarm with Puppet I need to provide a token.
But to get the token I need to go to the manager node, run docker swarm join-token -q worker, copy the output and paste it into Puppet?
Am I missing something? Or there's some automated way to do that?
What I would expect is this:
if (host_has_label("my-swarm-manager")) {
  docker::swarm { 'cluster_manager':
    init           => true,
    advertise_addr => current_host_ip(),
    listen_addr    => current_host_ip(),
    swarm_name     => 'my-swarm',
  }
} else if (host_has_label("my-swarm-worker")) {
  docker::swarm { 'cluster_worker':
    join           => true,
    advertise_addr => current_host_ip(),
    listen_addr    => current_host_ip(),
    manager_ip     => get_ip_by_swarm_name('my-swarm'),
    token          => get_token_by_swarm_name('my-swarm'),
  }
}

How to use terraform output as input variable of another terraform template

Is there any way I can use one Terraform template's output as another Terraform template's input?
Example: I have a Terraform template which creates an ELB, and I have another Terraform template which is going to create an auto scaling group that needs the ELB information as an input variable.
I know I can use a shell script to grep and feed in the ELB information, but I'm looking for a Terraform way of doing this.
Have you tried using remote state to populate your second template?
Declare it like this:
resource "terraform_remote_state" "your_state" {
backend = "s3"
config {
bucket = "${var.your_bucket}"
region = "${var.your_region}"
key = "${var.your_state_file}"
}
}
And then you should be able to pull out your resource directly like this:
your_elb = "${terraform_remote_state.your_state.output.your_output_resource}"
If this doesn't work for you, have you tried implementing your ELB in a module and then just using the output?
https://github.com/terraform-community-modules/tf_aws_elb is a good example of how to structure the module.
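Roughly like this (the module inputs and the output name are illustrative; check the module's own outputs for the exact names):
module "elb" {
  source = "github.com/terraform-community-modules/tf_aws_elb"
  # ... ELB settings ...
}

# then reference one of the module's outputs, for example:
your_elb = "${module.elb.elb_dns_name}"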
Looks like in newer versions of Terraform you'd access the output var like this
your_elb = "${data.terraform_remote_state.your_state.your_output_resource}"
The declaration also changes from a resource block to a data block; otherwise it's the same, just how you reference it.
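For example, matching the declaration above:
data "terraform_remote_state" "your_state" {
  backend = "s3"

  config {
    bucket = "${var.your_bucket}"
    region = "${var.your_region}"
    key    = "${var.your_state_file}"
  }
}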
The question is about ELB, but I will give an example with S3, since it is less to write.
If you don't know how to store terraform state on AWS, read the article.
Let's suppose you have two independent projects: project-1, project-2. They are located in two different directories (two different repositories)!
Terraform file /tmp/project-1/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b1"
  acl    = "private"
}

// Output. It will be available in s3://multi-terraform-project-state-bucket/p1.tfstate
output "bucket_name_p1" {
  value = aws_s3_bucket.main_bucket.bucket
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p1.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init, and terraform apply.
After that, you move to the Terraform file /tmp/project-2/main.tf:
// Create an S3 bucket
resource "aws_s3_bucket" "main_bucket" {
  bucket = "my-epic-test-b2"
  acl    = "private"

  tags = {
    // Get the S3 bucket name from another terraform state file. In this case it is s3://multi-terraform-project-state-bucket/p1.tfstate
    p1-bucket = data.terraform_remote_state.state1.outputs.bucket_name_p1
  }
}

// Get data from another state file
data "terraform_remote_state" "state1" {
  backend = "s3"

  config = {
    bucket = "multi-terraform-project-state-bucket"
    key    = "p1.tfstate"
    region = "eu-central-1"
  }
}

// Store terraform state on AWS. The S3 bucket and DynamoDB table should be created before running terraform
terraform {
  backend "s3" {
    bucket         = "multi-terraform-project-state-bucket"
    key            = "p2.tfstate"
    dynamodb_table = "multi-terraform-project-state-table"
    region         = "eu-central-1" // AWS region of state resources
  }
}

provider "aws" {
  profile = "my-cli-profile" // User profile defined in ~/.aws/credentials
  region  = "eu-central-1"   // AWS region
}
You run terraform init, and terraform apply.
Now check the tags on the my-epic-test-b2 bucket. There you will find the name of the bucket from project-1.
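You can also verify it from the command line, for example:
aws s3api get-bucket-tagging --bucket my-epic-test-b2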
When you are integrating Terraform with Jenkins, you can simply define a variable in the Jenkinsfile you are creating. Suppose you want to launch an EC2 instance using Terraform and a Jenkinsfile. When you need to get the public IP address of the instance, you can use this command inside your Jenkinsfile.
script {
  def public_ip = sh(script: "terraform output public_ip | cut -d '\"' -f2", returnStdout: true).trim()
}
This strips the surrounding quotes and saves only the IP address in the public_ip variable. For that to work, you have to define an output block in the Terraform script that outputs the public IP.
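A minimal sketch of such an output block, assuming an aws_instance resource named example:
output "public_ip" {
  value = aws_instance.example.public_ip
}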
