I tried to use Terraform without any cloud instance, only to install a cloudflared tunnel locally, using this construction:
resource "null_resource" "tunell_install" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "/home/uzer/script/tunnel.sh"
}
}
instead of something like:
provider "google" {
project = var.gcp_project_id
}
but after running
$ terraform apply -auto-approve
it successfully created /etc/cloudflared/cert.json with this content:
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
But as I understand it, there should be actual values there instead of the raw template variables? It seems that metadata_startup_script from instance.tf is only applied to the Google instance. How can I change this so that Terraform installs and runs the Cloudflare tunnel locally? Maybe I also need to use templatefile, but in another .tf file? The current metadata_startup_script block:
// This is where we configure the server (aka instance). Variables like web_zone take a terraform variable and provide it to the server so that it can use them as a local variable
metadata_startup_script = templatefile("./server.tpl",
  {
    web_zone    = var.cloudflare_zone,
    account     = var.cloudflare_account_id,
    tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
    tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
    secret      = random_id.tunnel_secret.b64_std
  })
Content of the server.tpl file:
# Script to install Cloudflare Tunnel
# cloudflared configuration
cd
# The package for this OS is retrieved
wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
# A local user directory is first created before we can install the tunnel as a system service
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
# Another heredoc is used to dynamically populate the JSON credentials file
cat > ~/.cloudflared/cert.json << "EOF"
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
EOF
# Same concept with the Ingress Rules the tunnel will use
cat > ~/.cloudflared/config.yml << "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info
ingress:
  - hostname: ssh.${web_zone}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello-world
EOF
# Now we install the tunnel as a systemd service
sudo cloudflared service install
# The credentials file does not get copied over so we'll do that manually
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/
# Now we can start the tunnel
sudo service cloudflared start
argo.tf contains this code:
data "template_file" "init" {
template = file("server.tpl")
vars = {
web_zone = var.cloudflare_zone,
account = var.cloudflare_account_id,
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id,
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
secret = random_id.tunnel_secret.b64_std
}
}
If you are asking about how to create the file locally and populate the values, here is an example:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = "webzone"
account = "account"
tunnel_id = "id"
tunnel_name = "name"
secret = "secret"
}
)
filename = "${path.module}/server.sh"
}
For this to work, you would have to assign real values to all of the template variables listed above. From what I see, you already have examples of how to supply those values. In other words, instead of hardcoding the template variable values, you could use the variables and resource attributes you already have:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = var.cloudflare_zone
account = var.cloudflare_account_id
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name
secret = random_id.tunnel_secret.b64_std
}
)
filename = "${path.module}/server.sh"
}
This code will populate all the values and create a server.sh script in the same directory you are running the Terraform code from.
You could complement this code with the null_resource you wanted:
resource "null_resource" "tunnel_install" {
depends_on = [
local_file.cloudflare_tunnel_script,
]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "${path.module}/server.sh"
}
}
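Note that local-exec runs the rendered file directly, so it has to be executable (and ideally start with a shebang). If that is not the case on your system, a small adjustment, a sketch assuming the file_permission argument of the hashicorp/local provider, is to set the permissions explicitly, or to run the script through a shell instead:

resource "local_file" "cloudflare_tunnel_script" {
  # ... same content and filename as above ...
  file_permission = "0755" # make sure the rendered script is executable
}

provisioner "local-exec" {
  # alternative: invoke the script through bash so the file mode does not matter
  command = "bash ${path.module}/server.sh"
}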
Terraform v1.2.8
I have a generic script that executes the passed-in shell script on my remote AWS EC2 instance, which I've also created in Terraform.
resource "null_resource" "generic_script" {
connection {
type = "ssh"
user = "ubuntu"
private_key = file(var.ssh_key_file)
host = var.ec2_pub_ip
}
provisioner "file" {
source = "../modules/k8s_installer/${var.shell_script}"
destination = "/tmp/${var.shell_script}"
}
provisioner "remote-exec" {
inline = [
"sudo chmod u+x /tmp/${var.shell_script}",
"sudo /tmp/${var.shell_script}"
]
}
}
Now I want to be able to modify it so it runs on
all nodes
this node but not that node
that node but not this node
So I created variables in the variables.tf file
variable "run_on_THIS_node" {
type = boolean
description = "Run script on THIS node"
default = false
}
variable "run_on_THAT_node" {
type = boolean
description = "Run script on THAT node"
default = false
}
How can I put a condition to achieve what I want to do?
resource "null_resource" "generic_script" {
count = ???
...
}
You could use the ternary operator for this. For example, based on the defined variables, the condition would look like:
resource "null_resource" "generic_script" {
count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.all_nodes) # or var.number_of_nodes
...
}
The missing piece of the puzzle is a variable (or a number) that tells the script to run on all the nodes. It does not have to use the length function; you could define it as a plain number. However, this is only part of the code you would have to add or edit, because there also has to be a way to pick the host based on the index. That means you would probably have to turn var.ec2_pub_ip into a list, as sketched below.
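For example, a minimal sketch, assuming a hypothetical var.ec2_pub_ips list that is not part of the original code:

variable "ec2_pub_ips" {
  type        = list(string)
  description = "Public IPs of all nodes the script may run on"
}

resource "null_resource" "generic_script" {
  count = (var.run_on_THIS_node || var.run_on_THAT_node) ? 1 : length(var.ec2_pub_ips)

  connection {
    type        = "ssh"
    user        = "ubuntu"
    private_key = file(var.ssh_key_file)
    # pick the node matching this instance of the resource; for a single THIS/THAT
    # run you would select that specific node's IP here instead
    host        = var.ec2_pub_ips[count.index]
  }

  provisioner "file" {
    source      = "../modules/k8s_installer/${var.shell_script}"
    destination = "/tmp/${var.shell_script}"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo chmod u+x /tmp/${var.shell_script}",
      "sudo /tmp/${var.shell_script}"
    ]
  }
}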
I'm trying to set up FluentBit for my EKS cluster in Terraform, via this module, and I have a couple of questions:
cluster_identity_oidc_issuer - what is this? Frankly, I was just told to set this up, so I have very little knowledge about FluentBit, but I assume this "issuer" provides an identity with the needed permissions. For example, Okta? We use Okta, so what would I use as the value here?
cluster_identity_oidc_issuer_arn - no idea what this value is supposed to be.
worker_iam_role_name - as in the role with autoscaling capabilities (oidc)?
This is what eks.tf looks like:
module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = "DevOpsLabs"
cluster_version = "1.19"
cluster_endpoint_private_access = true
cluster_endpoint_public_access = true
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
vpc_id = "xxx"
subnet_ids = ["xxx","xxx", "xxx", "xxx" ]
self_managed_node_groups = {
bottlerocket = {
name = "bottlerocket-self-mng"
platform = "bottlerocket"
ami_id = "xxx"
instance_type = "t2.small"
desired_size = 2
iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]
pre_bootstrap_user_data = <<-EOT
echo "foo"
export FOO=bar
EOT
bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"
post_bootstrap_user_data = <<-EOT
cd /tmp
sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
sudo systemctl enable amazon-ssm-agent
sudo systemctl start amazon-ssm-agent
EOT
}
}
}
And for the role.tf:
data "aws_iam_policy_document" "cluster_autoscaler" {
statement {
effect = "Allow"
actions = [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeLaunchTemplateVersions",
]
resources = ["*"]
}
}
module "config" {
source = "github.com/ahmad-hamade/terraform-eks-config/modules/eks-iam-role-with-oidc"
cluster_name = module.eks.cluster_id
role_name = "cluster-autoscaler"
service_accounts = ["kube-system/cluster-autoscaler"]
policies = [data.aws_iam_policy_document.cluster_autoscaler.json]
tags = {
Terraform = "true"
Environment = "dev-test"
}
}
Since you are using a Terraform EKS module, you can access attributes of the created resources by looking at the Outputs tab [1]. There you can find the following outputs:
cluster_id
cluster_oidc_issuer_url
oidc_provider_arn
They are accessible by using the following syntax:
module.<module_name>.<output_id>
In your case, you would get the values you need using the following syntax:
cluster_id -> module.eks.cluster_id
cluster_oidc_issuer_url -> module.eks.cluster_oidc_issuer_url
oidc_provider_arn -> module.eks.oidc_provider_arn
and assign them to the inputs from the FluentBit module:
cluster_name = module.eks.cluster_id
cluster_identity_oidc_issuer = module.eks.cluster_oidc_issuer_url
cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
For the worker role I didn't see an output from the eks module, so I think that could be an output of the config module [2]:
worker_iam_role_name = module.config.iam_role_name
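Put together, the wiring could look roughly like the following sketch; the module name and source here are placeholders standing in for the FluentBit module you linked, and only the input names come from your question:

module "fluentbit" {
  # placeholder source; use the FluentBit module referenced in the question
  source = "<fluentbit-module-source>"

  cluster_name                     = module.eks.cluster_id
  cluster_identity_oidc_issuer     = module.eks.cluster_oidc_issuer_url
  cluster_identity_oidc_issuer_arn = module.eks.oidc_provider_arn
  worker_iam_role_name             = module.config.iam_role_name
}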
The OIDC part of the configuration comes from the EKS cluster [3]. Another blog post going into the details can be found here [4].
[1] https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest?tab=outputs
[2] https://github.com/ahmad-hamade/terraform-eks-config/blob/master/modules/eks-iam-role-with-oidc/outputs.tf
[3] https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html
[4] https://aws.amazon.com/blogs/containers/introducing-oidc-identity-provider-authentication-amazon-eks/
I have an EKS cluster deployed in AWS, and I use Terraform to deploy components to that cluster.
To get authenticated, I'm using the following EKS data sources, which provide the cluster API authentication:
data "aws_eks_cluster_auth" "cluster" {
name = var.cluster_id
}
data "aws_vpc" "eks_vpc" {
id = var.vpc_id
}
I then use the token inside several local-exec provisioners (apart from other resources) to deploy components:
resource "null_resource" "deployment" {
provisioner "local-exec" {
working_dir = path.module
command = <<EOH
kubectl \
--server="${data.aws_eks_cluster.cluster.endpoint}" \
--certificate-authority=./ca.crt \
--token="${data.aws_eks_cluster_auth.cluster.token}" \
apply -f test.yaml
EOH
}
}
The problem I have is that some resources take a little while to deploy, and at some point, when Terraform executes the next resource, I get this error because the token has expired:
exit status 1. Output: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Is there a way to force re-creation of the data before running the local-execs?
UPDATE: example moved to https://github.com/aidanmelen/terraform-kubernetes-rbac/blob/main/examples/authn_authz/main.tf
The data.aws_eks_cluster_auth.cluster_auth.token creates a token with a non-configurable 15 minute timeout.
One way to get around this is to use the STS token to create a long-lived service-account token and use that to configure the Terraform Kubernetes provider for long-running Kubernetes resources.
I created a module called terraform-kubernetes-service-account to capture this common behavior of creating a service account, giving it some permissions, and outputting the auth information, i.e. token, ca.crt, namespace.
For example:
module "terraform_admin" {
source = "aidanmelen/service-account/kubernetes"
name = "terraform-admin"
namespace = "kube-system"
cluster_role_name = "terraform-admin"
cluster_role_rules = [
{
api_groups = ["*"]
resources = ["*"]
resource_names = ["*"]
verbs = ["*"]
},
]
}
provider "kubernetes" {
alias = "terraform_admin_service_account"
host = "https://kubernetes.docker.internal:6443"
cluster_ca_certificate = module.terraform_admin.auth["ca.crt"]
token = module.terraform_admin.auth["token"]
}
data "kubernetes_namespace_v1" "example" {
metadata {
name = kubernetes_namespace.ex_complete.metadata[0].name
}
}
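Long-running resources can then be pinned to the aliased provider so they use the long-lived service-account token instead of the short-lived EKS one; a minimal sketch (the namespace resource here is just an illustration):

resource "kubernetes_namespace" "long_running_example" {
  # use the provider configured with the service-account token
  provider = kubernetes.terraform_admin_service_account

  metadata {
    name = "long-running-example"
  }
}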
I'm setting up an OpenStack instance using Terraform. I'm writing the returned IP to a file, but for some reason it's always empty (I have looked at the instance in the OpenStack console and everything is correct: IP, security groups, etc.).
resource "openstack_compute_instance_v2" "my-deployment-web" {
count = "1"
name = "my-name-WEB"
flavor_name = "m1.medium"
image_name = "RHEL7Secretname"
security_groups = [
"our_security_group"]
key_pair = "our-keypair"
network {
name = "public"
}
metadata {
expire = "2",
owner = ""
}
connection {
type = "ssh"
user = "vagrant"
private_key = "config/vagrant_private.key"
agent = "false"
timeout = "15m"
}
##Create Ansible host in staging inventory
provisioner "local-exec" {
command = "echo -e '\n[web]\n${openstack_compute_instance_v2.my-deployment-web.network.0.floating_ip}' > ../ansible/inventories/staging/hosts"
interpreter = ["sh", "-c"]
}
}
The generated hosts file only gets [web] but no IP. Does anyone know why?
[web]
Modifying the attribute reference from
${openstack_compute_instance_v2.my-deployment-web.network.0.floating_ip}
to
${openstack_compute_instance_v2.my-deployment-web.network.0.access_ip_v4}
solved the problem. Thank you @Matt Schuchard
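Note that access_ip_v4 is the instance's fixed/access address. If an actual floating IP is needed in the inventory, one option (a sketch only, assuming the standard OpenStack provider resources; the pool name "public" is an assumption) is to allocate and associate it explicitly and interpolate that resource's address in the local-exec instead:

resource "openstack_networking_floatingip_v2" "web_fip" {
  # floating IP pool name is an assumption; use the pool available in your cloud
  pool = "public"
}

resource "openstack_compute_floatingip_associate_v2" "web_fip" {
  floating_ip = "${openstack_networking_floatingip_v2.web_fip.address}"
  instance_id = "${openstack_compute_instance_v2.my-deployment-web.id}"
}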
Let's assume we have some DO tags:
resource "digitalocean_tag" "foo" {
name = "foo"
}
resource "digitalocean_tag" "bar" {
name = "bar"
}
And we have configured swarm worker nodes with the mentioned tags.
resource "digitalocean_droplet" "swarm_data_worker" {
name = "swarm-worker-${count.index}"
tags = [
"${digitalocean_tag.foo.id}",
"${digitalocean_tag.bar.id}"
]
// swarm node config stuff
provisioner "remote-exec" {
inline = [
"docker swarm join --token ${data.external.swarm_join_token.result.worker} ${digitalocean_droplet.swarm_manager.ipv4_address_private}:2377"
]
}
}
I want to label the created swarm nodes with the corresponding resource (droplet) tags.
To label worker nodes, we need to run this on the swarm master:
docker node update --label-add foo --label-add bar worker-node
How can we automate this with Terraform?
Got it! This is probably not the best way to solve the issue, but until Terraform is released with full swarm support, I can't find anything better.
The main idea is to use a pre-installed DO SSH key:
variable "public_key_path" {
description = "DigitalOcean public key"
default = "~/.ssh/hcmc_swarm/key.pub"
}
variable "do_key_name" {
description = "Name of the key on Digital Ocean"
default = "terraform"
}
resource "digitalocean_ssh_key" "default" {
name = "${var.do_key_name}"
public_key = "${file(var.public_key_path)}"
}
Then we can provision the manager:
resource "digitalocean_droplet" "swarm_manager" {
...
ssh_keys = ["${digitalocean_ssh_key.default.id}"]
provisioner "remote-exec" {
inline = [
"docker swarm init --advertise-addr ${digitalocean_droplet.swarm_manager.ipv4_address_private}"
]
}
}
And finally, once a worker is ready, we can connect to the swarm_manager over SSH from that worker:
# Docker swarm labels list
variable "swarm_data_worker__labels" {
  type    = "list"
  default = ["type=data-worker"]
}

resource "digitalocean_droplet" "swarm_data_worker" {
  ...
  provisioner "remote-exec" {
    inline = [
      "ssh -o StrictHostKeyChecking=no root@${digitalocean_droplet.swarm_manager.ipv4_address_private} docker node update --label-add ${join(" --label-add ", var.swarm_data_worker__labels)} ${self.name}",
    ]
  }
}
Please, if you know a better approach to solve this issue, don't hesitate to point it out in a new answer or comment.
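For what it's worth, one possible alternative (a sketch only, in the same 0.11-style syntax, not something from the original answer) is to run the labeling from a separate null_resource whose connection targets the manager, so the workers never need SSH access to it; the count variable and private key path below are assumptions:

resource "null_resource" "label_workers" {
  count = "${var.swarm_data_worker_count}" # hypothetical variable matching the worker droplet count

  connection {
    type        = "ssh"
    host        = "${digitalocean_droplet.swarm_manager.ipv4_address}"
    user        = "root"
    private_key = "${file("~/.ssh/hcmc_swarm/key")}" # private half of the key registered above (assumption)
  }

  provisioner "remote-exec" {
    inline = [
      "docker node update --label-add ${join(" --label-add ", var.swarm_data_worker__labels)} ${element(digitalocean_droplet.swarm_data_worker.*.name, count.index)}",
    ]
  }
}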