So I have the following Terraform setup. First, the templatefile:
#cloud-config
users:
%{ for user in user_list ~}
  - name: ${user.username}
    gecos: ${user.username}
    ssh-authorized-keys:
      - ${user.ssh_public_key}
    lock_passwd: true
    groups: sudo
    shell: /bin/bash
%{ endfor }
packages:
  jp
Then the Terraform code that uses it:
...
output "users" {
  value = [for k in local.external_users : k.username]
}

data "template_cloudinit_config" "config" {
  gzip          = true
  base64_encode = true

  part {
    filename     = "cloud-init.cfg"
    content_type = "text/cloud-config"
    content = templatefile("${path.module}/templates/userdataloop.yaml", {
      user_list = local.external_users
    })
  }
}
…
resource "azurerm_virtual_machine" "jphostvm" {
...
os_profile {
computer_name = "${var.environment}-jp-${format("%02d", count.index + 1)}"
admin_username = "deploy"
custom_data = data.template_cloudinit_config.config.rendered
}
and finally the following users (declared in a different .tf file):
locals {
  external_users = [
    {
      username       = "ian"
      ssh_public_key = data.vault_generic_secret.ian.data["key"]
    },
    {
      username       = "fred"
      ssh_public_key = data.vault_generic_secret.fred.data["key"]
    },
    {
      username       = "ken"
      ssh_public_key = data.vault_generic_secret.ken.data["key"]
    }
  ]
}
I simply cannot get more than one user created. Note that the users output above displays all the usernames. Passing a single user as follows works perfectly:
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
filename = "cloud-init.cfg"
content_type = "text/cloud-config"
content = templatefile("${path.module}/templates/userdataloop.yaml", {
username = "fred"
ssh_public_key = data.vault_generic_secret.fred.data["key"]
})
}
and using the following template:
#cloud-config
users:
  - default
  - name: ${username}
    gecos: ${username}
    ssh-authorized-keys:
      - ${ssh_public_key}
    lock_passwd: true
    groups: sudo
    shell: /bin/bash
Can anyone help with this? I just can't figure out what is wrong, and the code fails silently, so there's no error message to point me in the right direction. The VM uses a Debian image. Thanks.
When configuring a system with cloud-init, any errors during processing are written to the cloud-init logs inside your virtual machine's filesystem. Where exactly you'll find those logs depends on how your image is configured, but the default location is /var/log/cloud-init.log.
You've shown two different templates in your question, one of which contains a for directive while the other contains only interpolation.
Neither of them follows the advice in Generating JSON or YAML from a template, so it's possible that your attempt to generate YAML by string concatenation produces something that isn't valid YAML syntax, or that doesn't have the meaning you intended.
The following template uses yamlencode instead, to guarantee that the result will always be valid YAML syntax:
#cloud-config
${yamlencode({
  users = [
    for user in user_list : {
      name                = user.username
      gecos               = user.username
      ssh_authorized_keys = [user.ssh_public_key]
      lock_passwd         = true
      groups              = "sudo"
      shell               = "/bin/bash"
    }
  ]
  packages = [
    "jp",
  ]
})}
I also changed your ssh-authorized-keys key to be ssh_authorized_keys instead, because that seems to be the name used in the relevant cloud-config example.
If you use a template like the one above, so that the result is guaranteed to be valid YAML syntax, and it still doesn't work, that suggests there's something wrong with the data the YAML represents rather than with its syntax. In that case you'll need to refer to the cloud-init log in your virtual machine to see what happened when it tried to interpret your cloud-config data structure.
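A quick way to see exactly what cloud-init will receive is to render the same template into a Terraform output and review the YAML by eye before it ever reaches the VM. A minimal sketch, assuming the same template path and user_list local as in the question (the output name here is made up):

output "rendered_cloud_config" {
  # Render the same template with the same variables so the generated YAML
  # can be inspected before it reaches the VM.
  value = templatefile("${path.module}/templates/userdataloop.yaml", {
    user_list = local.external_users
  })

  # The vault-sourced keys are marked sensitive, so this output must be too;
  # view it with `terraform output -raw rendered_cloud_config`.
  sensitive = true
}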
Related
I'm trying to run a cloud-init file by passing it in the Terraform config. The terraform apply command creates all the resources, but when I spin up the VM, none of the changes from the cloud-init are visible in the VM.
Here is the cloud-init file, with a .tpl extension:
users:
  - name: ansible
    gecos: Ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: [users, admin]
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1.......
And here is the main.tf file:
data "template_file" "users_data" {
template = file("./sshPass.tpl")
}
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = data.template_file.users_data.rendered
}
resource "azurerm_linux_virtual_machine" "poc-vm" {
name = var.vm_name
resource_group_name = azurerm_resource_group.poc_rg.name
location = azurerm_resource_group.poc_rg.location
size = var.virtual_machine_size
admin_username = var.vm_username
network_interface_ids = [azurerm_network_interface.poc_nic_1.id]
admin_ssh_key {
username = var.vm_username
public_key = tls_private_key.poc_key.public_key_openssh
}
os_disk {
caching = var.disk_caching
storage_account_type = var.storage_type
}
source_image_reference {
publisher = var.image_publisher
offer = var.image_offer
sku = var.image_sku
version = var.image_version
}
user_data = data.template_cloudinit_config.config.rendered
}
Try this:
data "template_cloudinit_config" "config" {
gzip = true
base64_encode = true
part {
content_type = "text/cloud-config"
content = "${data.template_file.users_data.rendered}"
}
In this example I changed the line content = data.template_file.users_data.rendered to content = "${data.template_file.users_data.rendered}".
Hope this helps!
Found the errors:
- Changed the file extension to '.cfg'.
- Used 'custom_data' instead of 'user_data'.
- Added '#cloud-config' as the first line of the file.
- Made sure I removed any trailing spaces from my SSH key.
I also suspect I was using the wrong SSH key to log in the whole time. Anyway, those changes fixed it for me.
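For reference, a minimal sketch of how those fixes fit together, assuming the same resource and file names as in the question (only the renamed file and the custom_data line differ):

data "template_file" "users_data" {
  # The file now has a .cfg extension and starts with the "#cloud-config" line.
  template = file("./sshPass.cfg")
}

data "template_cloudinit_config" "config" {
  gzip          = true
  base64_encode = true

  part {
    content_type = "text/cloud-config"
    content      = data.template_file.users_data.rendered
  }
}

resource "azurerm_linux_virtual_machine" "poc-vm" {
  # ... other arguments as in the question ...

  # cloud-init on this image picked up custom_data rather than user_data.
  custom_data = data.template_cloudinit_config.config.rendered
}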
I'm trying to use Terraform without any cloud instance, only to install the cloudflared tunnel locally, using this construction:
resource "null_resource" "tunell_install" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "/home/uzer/script/tunnel.sh"
}
}
instead of something like:
provider "google" {
project = var.gcp_project_id
}
but after running
$ terraform apply -auto-approve
/etc/cloudflared/cert.json is successfully created, but with this content:
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
but as I understand it, there must be values here instead of variables? It seems that metadata_startup_script from instance.tf is only applied to a Google instance. How can I change this so that Terraform installs and runs the CF tunnel locally? Maybe I also need to use templatefile, but in another .tf file? The current metadata_startup_script block:
// This is where we configure the server (aka instance). Variables like web_zone take a Terraform variable and provide it to the server so that it can use them as a local variable
metadata_startup_script = templatefile("./server.tpl",
  {
    web_zone    = var.cloudflare_zone,
    account     = var.cloudflare_account_id,
    tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
    tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
    secret      = random_id.tunnel_secret.b64_std
  })
Content of server.tpl file:
# Script to install Cloudflare Tunnel
# cloudflared configuration
cd
# The package for this OS is retrieved
wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
# A local user directory is first created before we can install the tunnel as a system service
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
# Another heredoc is used to dynamically populate the JSON credentials file
cat > ~/.cloudflared/cert.json << "EOF"
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
EOF
# Same concept with the Ingress Rules the tunnel will use
cat > ~/.cloudflared/config.yml << "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info
ingress:
  - hostname: ssh.${web_zone}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello-world
EOF
# Now we install the tunnel as a systemd service
sudo cloudflared service install
# The credentials file does not get copied over so we'll do that manually
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/
# Now we can start the tunnel
sudo service cloudflared start
In argo.tf there is this code:
data "template_file" "init" {
template = file("server.tpl")
vars = {
web_zone = var.cloudflare_zone,
account = var.cloudflare_account_id,
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id,
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
secret = random_id.tunnel_secret.b64_std
}
}
If you are asking about how to create the file locally and populate the values, here is an example:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = "webzone"
account = "account"
tunnel_id = "id"
tunnel_name = "name"
secret = "secret"
}
)
filename = "${path.module}/server.sh"
}
For this to work, you would have to assign the real values for all the template variables listed above. From what I see, there are already examples of how to use variables for those values. In other words, instead of hardcoding the values for the template variables, you could use your existing variables and resource references:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = var.cloudflare_zone
account = var.cloudflare_account_id
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name
secret = random_id.tunnel_secret.b64_std
}
)
filename = "${path.module}/server.sh"
}
This code will populate all the values and create a server.sh script in the same directory you are running the Terraform code from.
You could complement this code with the null_resource you wanted:
resource "null_resource" "tunnel_install" {
depends_on = [
local_file.cloudflare_tunnel_script,
]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "${path.module}/server.sh"
}
}
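As a small variation (same resource names as above), you could key the trigger off the rendered script instead of timestamp(), so the local-exec only re-runs when the template output actually changes; referencing the local_file attributes also makes the explicit depends_on unnecessary:

resource "null_resource" "tunnel_install" {
  triggers = {
    # Re-run only when the rendered script changes, not on every apply.
    script_sha1 = sha1(local_file.cloudflare_tunnel_script.content)
  }

  provisioner "local-exec" {
    # Referencing the resource here creates the implicit dependency.
    command = local_file.cloudflare_tunnel_script.filename
  }
}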
I have a question regarding terraform remote states. Maybe I am using the remote state wrong or there is another possible solution:
In my scripts I create a DB instance. The created endpoint, port, etc. should be saved in the remote state. The DB scripts are split out into a module.
I want to reuse the endpoint and port values and pass them to the docker container environment:
environment = [
  {
    name : "SPRING_DATASOURCE_URL",
    value : "jdbc:postgresql://${data.terraform_remote_state.foo.outputs.db_endpoint}:${data.terraform_remote_state.foo.outputs.db_port}"
  }
]
These scripts are also outsourced into a separate module.
On each first run, Terraform states that these values are not present. Therefore I have to comment out the environment values, run terraform apply, and after everything has been created, rerun terraform apply - this time with the values for the environment.
Is there another possible (and better) solution to pass the created DB values to the service and task that contain the Docker container environment?
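(For context: a reference like data.terraform_remote_state.foo.outputs.db_endpoint implies a data source roughly along these lines, where the backend type and names are purely illustrative, and the DB state itself has to declare db_endpoint and db_port as outputs:)

data "terraform_remote_state" "foo" {
  backend = "s3" # illustrative; whichever backend holds the DB state

  config = {
    bucket = "my-terraform-state"   # illustrative
    key    = "db/terraform.tfstate" # illustrative
    region = "eu-central-1"         # illustrative
  }
}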
Edit 21-05-2021
I suggested to @smsnheck that if a module block is used, one can reference output variables via module.module_name.output_name. E.g.:
module/main.tf
resource "aws_db_instance" "example" {
// Arguments taken from example
allocated_storage = 10
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
output "db_host" {
value = aws_db_instance.example.address
}
output "db_port" {
value = aws_db_instance.example.port
}
main.tf
module "db" {
source = "./module"
}
// ...
resource "some_docker_provider" "example" {
// ...
environment = [
{
name: "SPRING_DATASOURCE_URL",
value: "jdbc:postgresql://${module.db.db_host}:${module.db.db_port}"
}
]
// ...
}
Old answer
I presume that you are creating the database from the same Terraform scripts. If so, you should not use a data source, because data sources are read before any resources are created. So what's happening is that you are trying to get the DB host and port before the instance exists.
Assuming you are creating the DB instance with aws_db_instance, you should reference the port and host like this:
resource "aws_db_instance" "example" {
// Arguments taken from example
allocated_storage = 10
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
name = "mydb"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
// ...
// Locals are not required, you may use aws_db_instance.example. values directly
locals {
db_host = aws_db_instance.example.address
db_port = aws_db_instance.example.port
}
// ...
resource "some_docker_provider" "example" {
// ...
environment = [
{
name: "SPRING_DATASOURCE_URL",
value: "jdbc:postgresql://${local.db_host}:${local.db_port}"
}
]
// ...
}
This way, Terraform will know that the DB instance must be created first, because .address and .port are values that are only known after the DB instance is created (the DB instance will now be a dependency of the Docker container).
To get more information about the values that are returned after creation of the resource, refer to the Attributes reference in the provider's documentation. For instance, here is the reference for aws_db_instance: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_instance#attributes-reference
I've created a JSON string via template/interpolation.
I need to pass that to local-exec, which in turn uses a Powershell template to make a CLI call.
Originally I tried just referencing the JSON template in the PowerShell command itself:
--cli-input-json file://lfsetup.tpl
However, the template does not get interpolated.
Next, I tried setting the JSON to a local. However, that is multi-line and the CLI does not like it. Maybe I could convert it to a single line?
Any suggestions or guidance welcome!
Thanks
JSON (.tpl or variable)
{
  "CatalogId": "${account_id}",
  "DataLakeSettings": {
    "DataLakeAdmins": [
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role1"
      },
      {
        "DataLakePrincipalIdentifier": "arn:aws:iam::${account_id}:role/Role2"
      }
    ],
    "CreateDatabaseDefaultPermissions": [],
    "CreateTableDefaultPermissions": []
  }
}
.tf
locals {
  assume_role_arn  = "arn:aws:iam::${local.account_id}:role/role_to_assume"
  lf_json_settings = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
  cli_region       = "region"
}

resource "null_resource" "settings" {
  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.lf_json_settings, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
.ps1
$ErrorActionPreference = "Stop"
$json = aws sts assume-role --role-arn ${role_arn} --role-session-name sessionname
$accessTokens = ConvertFrom-Json (-join $json)
$env:AWS_ACCESS_KEY_ID = $accessTokens.Credentials.AccessKeyId
$env:AWS_SECRET_ACCESS_KEY = $accessTokens.Credentials.SecretAccessKey
$env:AWS_SESSION_TOKEN = $accessTokens.Credentials.SessionToken
aws lakeformation put-data-lake-settings --cli-input-json file://lfsetup.tpl --region ${region}
$env:AWS_ACCESS_KEY_ID = ""
$env:AWS_SECRET_ACCESS_KEY = ""
$env:AWS_SESSION_TOKEN = ""
Output:
For these I put the template output into a local and passed the local to PowerShell, then tried variations with and without jsonencode and with replacing '\n'. I got strange results in some cases.
Use a file provisioner to create a .json file from the rendered .tpl file:
locals {
  ...
  settings_json_file = "/tmp/lfsetup.json"
}

resource "null_resource" "settings" {
  provisioner "file" {
    content     = templatefile("${path.module}/lfsetup.tpl", { account_id = local.account_id })
    destination = local.settings_json_file
  }

  provisioner "local-exec" {
    command     = templatefile("${path.module}/scripts/settings.ps1", { role_arn = local.assume_role_arn, json_settings = local.settings_json_file, region = local.cli_region })
    interpreter = ["pwsh", "-Command"]
  }
}
Update your .ps1 file, replacing file://lfsetup.tpl with file://${json_settings}:
aws lakeformation put-data-lake-settings --cli-input-json file://${json_settings} --region ${region}
You may also use the jsonencode function.
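A sketch of that variant, reusing the structure from the question's .tpl and the existing local.account_id: jsonencode always emits a compact, single-line JSON string, which also sidesteps the multi-line concern.

locals {
  lf_json_settings = jsonencode({
    CatalogId = local.account_id
    DataLakeSettings = {
      DataLakeAdmins = [
        { DataLakePrincipalIdentifier = "arn:aws:iam::${local.account_id}:role/Role1" },
        { DataLakePrincipalIdentifier = "arn:aws:iam::${local.account_id}:role/Role2" },
      ]
      CreateDatabaseDefaultPermissions = []
      CreateTableDefaultPermissions    = []
    }
  })
}

The resulting local.lf_json_settings can then be handed to the PowerShell template in place of the templatefile call, without worrying about line breaks.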
In the following code block I'm trying to pass an array of server names to the attributes_json block:
resource "aws_instance" "consul-server" {
ami = var.consul-server
instance_type = "t2.nano"
key_name = var.aws_key_name
iam_instance_profile = "dna_inst_mgmt"
vpc_security_group_ids = [
"${aws_security_group.yutani_consul.id}",
"${aws_security_group.yutani_ssh.id}"
]
subnet_id = "${aws_subnet.public_1_subnet_us_east_1c.id}"
associate_public_ip_address = true
tags = {
Name = "consul-server${count.index}"
}
root_block_device {
volume_size = "30"
delete_on_termination = "true"
}
connection {
type = "ssh"
user = "chef"
private_key = "${file("${var.aws_key_path}")}"
timeout = "2m"
agent = false
host = self.public_ip
}
count = var.consul-server_count
provisioner "chef" {
attributes_json = <<-EOF
{
"consul": {
"servers": ["${split(",",aws_instance.consul-server[count.index].id)}"]
}
}
EOF
use_policyfile = true
policy_name = "consul_server"
policy_group = "aws_stage_enc"
node_name = "consul-server${count.index}"
server_url = var.chef_server_url
recreate_client = true
skip_install = true
user_name = var.chef_username
user_key = "${file("${var.chef_user_key}")}"
version = "14"
}
}
Running this gives me an error:
Error: Cycle: aws_instance.consul-server[1], aws_instance.consul-server[0]
(This is after declaring a count of 2 in a variable for var.consul-server_count)
Can anyone tell me what the proper way is to do this?
There are two issues here: (1) how to interpolate a comma-separated list into a JSON string; and (2) what is causing the cyclic dependency error.
How to interpolate a list to make a valid JSON array
Use jsonencode
The cleanest method is to not use a heredoc at all and just use the jsonencode function.
You could do this:
locals {
  arr = ["host1", "host2", "host3"]
}

output "test" {
  value = jsonencode(
    {
      "consul" = {
        "servers" = local.arr
      }
    })
}
And this yields as output:
Outputs:
test = {"consul":{"servers":["host1","host2","host3"]}}
Use the join function and a heredoc
The Chef provisioner's docs suggest using a heredoc for the JSON string, so you can also do this:
locals {
  arr = ["host1", "host2", "host3"]
  sep = "\", \""
}

output "test" {
  value = <<-EOF
    {
      "consul": {
        "servers": ["${join(local.sep, local.arr)}"]
      }
    }
  EOF
}
If I apply that:
Outputs:

test = {
  "consul": {
    "servers": ["host1", "host2", "host3"]
  }
}
Some things to pay attention to here:
- You are trying to join your hosts so that they become valid JSON in the context of a JSON array. You need to join them with the separator ", " including its surrounding double quotes, not just a bare comma; that's why I've defined a local variable sep = "\", \"".
- You seem to be trying to split there when you apparently need join (see the short comparison below).
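For clarity, a quick side-by-side of the two functions (standard Terraform, illustrative values):

locals {
  from_string = split(",", "host1,host2,host3")         # produces ["host1", "host2", "host3"]
  to_string   = join(", ", ["host1", "host2", "host3"]) # produces "host1, host2, host3"
}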
Cyclic dependency issue
The cause of the error message:
Error: Cycle: aws_instance.consul-server[1], aws_instance.consul-server[0]
is that you have a cyclic dependency. Consider this simplified example:
resource "aws_instance" "example" {
count = 3
ami = "ami-08589eca6dcc9b39c"
instance_type = "t2.micro"
user_data = <<-EOF
hosts="${join(",", aws_instance.example[count.index].id)}"
EOF
}
Or you could use splat notation there for the same result, i.e. aws_instance.example.*.id.
Terraform plan then yields:
▶ terraform012 plan
...
Error: Cycle: aws_instance.example[2], aws_instance.example[1], aws_instance.example[0]
So you get a cycle error there because aws_instance.example.*.id depends on aws_instance.example being created, meaning the resource depends on itself. In other words, you can't use a resource's exported values inside the resource itself.
What to do
I don't know much about Consul, but all the same, I'm a bit confused as to why you want the EC2 instance IDs in the servers field. Wouldn't the Consul config expect IP addresses or hostnames there?
In any case, you probably need to determine the host names yourself outside of this resource, either as a static input parameter or as something you can compute elsewhere. I imagine you'll end up with something like:
variable "host_names" {
type = list
default = ["myhost1"]
}
resource "aws_instance" "consul_server" {
...
provisioner "chef" {
attributes_json = jsonencode(
{
"consul" = {
"servers" = var.host_names
}
})
}
}