Terraform windows vs linux issue - linux

So this issue is a bit convoluted, but I need this for a very specific case in Azure. I'm trying to create an APIM subnet inside an Azure k8s vnet, but I haven't been able to find a return value from the k8s Terraform call that gives me the ID/name of the vnet. Instead I used a PowerShell command to query Azure and get the name of the new vnet. I was working on this code locally on my Windows box and it works fine:
data "external" "cluster_vnet_name" {
  program    = [var.runPSVer6 == true ? "pwsh" : "powershell", "Select-AzSubscription '${var.subscriptionName}' | Out-Null; Get-AzVirtualNetwork -ResourceGroupName '${module.kubernetes-service.k8ResourceGroup}' | Select Name | ConvertTo-Json"]
  depends_on = [module.kubernetes-service]
}
I have a toggle in my variables for runPSVer6 so that when I run on a Linux machine it will change powershell to pwsh. Now, this is where it starts getting a little weird. When I run this on my Windows machine it's not an issue; however, when I run this from a Linux box I get the following error:
can't find external program "pwsh"
I have tried a number of different workarounds (such as using the full snap path /snap/bin/powershell and putting the commands in a .ps1 file) to no avail. Every single time it throws the error that it can't find pwsh as an external program.
I use this same runPSVer6 toggle for local-exec Terraform commands with no issue, but here I need the name of the vnet as a response.
Anyone have any ideas what I'm missing here?
ADDED AFTER SEPT 30th
So I tried the alternative way of firing commands:
variable "runPSVer6" {
  type    = bool
  default = true
}
variable "subscriptionName" {
  type = string
}
variable "ResourceGroup" {
  type = string
}
data "external" "runpwsh" {
  program = [var.runPSVer6 == true ? "pwsh" : "powershell", "test.ps1"]
  query = {
    subscriptionName = var.subscriptionName
    ResourceGroup    = var.ResourceGroup
  }
}
output "vnet" {
  value = data.external.runpwsh.result.name
}
and this appears to allow the command to execute; however, it's not pulling back the result of the JSON response (even though I confirmed that I do get a response).
This is what I'm using for my .ps1:
Param($subscriptionName,$ResourceGroup)
$subscription = Select-AzSubscription $subscriptionName
$name = (Get-AzVirtualNetwork -ResourceGroupName $ResourceGroup | Select Name).Name
Write-Output "{`n`t""name"":""$name""`n}"
When I don't use the .name in the output, this is what I get:
data.external.runpwsh: Refreshing state...
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
Outputs:
vnet = { "name" = "" }
And this is the output from the .ps1:
{
"name":"vnettest"
}
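For reference: Terraform's external data source passes the query block to the program as a JSON object on stdin rather than as command-line arguments, and expects a flat JSON object of strings back on stdout. A minimal shell sketch of that protocol (the fixed "vnettest" value is just a placeholder, and the function name is an assumption):

```shell
# respond_to_query: read the query JSON Terraform pipes in on stdin and
# print a flat JSON object of strings on stdout, as the protocol requires.
respond_to_query() {
  query=$(cat)   # e.g. {"subscriptionName":"s","ResourceGroup":"rg"}
  # A real program would parse $query and call Azure; this sketch emits a fixed map.
  printf '{"name":"vnettest"}'
}
```

Because the values arrive on stdin, a PowerShell Param() block invoked this way receives nothing unless the script explicitly reads and parses stdin.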

Can you check if pwsh is working in the terminal? It should bring up the PowerShell prompt.
The path of pwsh must be added to PATH. /usr/bin is in my PATH, as you can see below.
ubuntu@myhost:~$ whereis pwsh
pwsh: /usr/bin/pwsh
ubuntu@myhost:~$
ubuntu@myhost:~$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
ubuntu@myhost:~$ pwsh
PowerShell 7.0.3
Copyright (c) Microsoft Corporation. All rights reserved.
https://aka.ms/powershell
Type 'help' to get help.
PS /home/ubuntu>
PS /home/ubuntu> exit
ubuntu@myhost:~$
============
Added later after 29 Sep 2020.
I tried again on Ubuntu 20.
Terraform 0.13 was downloaded as a zip from the main site.
PowerShell 7.0.3 was installed with snap install powershell --classic.
I tried the test code below, which worked.
variable "runPSVer6" {
  default = true
}
data "external" "testps" {
  program = [var.runPSVer6 == true ? "pwsh" : "powershell", "/tmp/testScript.ps1"]
}
output "ps_out" {
  value = data.external.testps.result
}
The output was like...
Outputs:
ps_out = {
  "name"   = "My Resource Name"
  "region" = "West Europe"
}
/tmp/testScript.ps1 was a simple output statement:
Write-Output '{ "name" : "My Resource Name", "region" : "West Europe" }'
I tried to null out the PATH variable just to see if I would get the error message you mentioned. I did, as expected:
ubuntu@ip-172-31-53-128:~$ ./terraform apply
data.external.test_ps: Refreshing state...
Error: can't find external program "pwsh"
on main.tf line 5, in data "external" "test_ps":
5: data "external" "test_ps" {
but when I used the full path, it worked again (even /snap/bin/powershell works):
program = [var.runPSVer6 == true ? "/snap/bin/pwsh" : "powershell","/tmp/testScript.ps1"]
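To avoid hard-coding the platform-specific interpreter inline, the full path could itself be a variable. A sketch (the variable name pwshPath is an assumption):

```hcl
variable "pwshPath" {
  type    = string
  default = "/snap/bin/pwsh" # set to "powershell" on a Windows runner
}

data "external" "testps" {
  program = [var.pwshPath, "/tmp/testScript.ps1"]
}
```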
I earlier wrongly blamed snap for my issue, but snap does work now.
None of this pinpoints the issue you are having, but maybe try a couple of things just to be sure:
1.) Issue "pwsh" in the current directory and see that the PowerShell prompt does come up. Not sure if you already checked this, but sometimes other characters in the path can cause an issue.
2.) Can you run terraform once after exporting PATH=/snap/bin? (Do it inside a subshell and exit afterwards so that you are back on the old path, or re-export the correct path after the test.)
3.) If you used the full path, the error message must have been different from "Error: can't find external program "pwsh"". Can you cross-check whether there was a different error message?
This is how the pwsh binary and the symlinks look on my machine:
ubuntu@ip-172-31-53-128:~$ /usr/bin/ls -lt /snap/bin/pwsh
lrwxrwxrwx 1 root root 10 Sep 29 15:40 /snap/bin/pwsh -> powershell
ubuntu@ip-172-31-53-128:~$
ubuntu@ip-172-31-53-128:~$ /usr/bin/ls -lt /snap/bin/powershell
lrwxrwxrwx 1 root root 13 Sep 29 15:40 /snap/bin/powershell -> /usr/bin/snap
ubuntu@ip-172-31-53-128:~$

Related

Create gitlab project from template with terraform provider

I want to create a GitLab project from a template via Terraform code.
resource "gitlab_project" "services_projects" {
  for_each                        = local.service_projects
  name                            = "${each.key}"
  default_branch                  = "main"
  description                     = ""
  issues_enabled                  = false
  merge_requests_enabled          = false
  namespace_id                    = "${gitlab_group.services_group.id}"
  snippets_enabled                = false
  visibility_level                = "private"
  wiki_enabled                    = false
  use_custom_template             = "${each.value.use_custom_template}"
  template_project_id             = "${each.value.template_project_id}"
  group_with_project_templates_id = var.group_with_project_templates_id
}
This works: all my projects were created, and the project to be created from a template was created from the template. But then Terraform errors with:
module.services["test_tf_1"].gitlab_project.services_projects["values"]: Destruction complete after 5s
module.services["test_tf_1"].gitlab_project.services_projects["values"]: Creating...
╷
│ Error: error while waiting for project "values" import to finish: unexpected state 'failed', wanted target 'finished'. last error: %!s(<nil>)
│
│ with module.services["test_tf_bb_1"].gitlab_project.services_projects["values"],
│ on modules/gitlab/main.tf line 132, in resource "gitlab_project" "services_projects":
│ 132: resource "gitlab_project" "services_projects" {
│
╵
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: command terminated with exit code 1
Does anyone know where this error comes from or how I can solve it?
I think it has something to do with the missing template_link, but I don't really understand the concept. Creating one does not work.
https://registry.terraform.io/providers/gitlabhq/gitlab/latest/docs/resources/group_project_file_template
I think it's an issue with GitLab itself, not your code or the Terraform provider.
Please see https://gitlab.com/gitlab-org/gitlab/-/issues/208452
It might be a flaky bug; in my case it helped to simply re-run the same code.

how to use template_file resource in terraform

I want to launch 5 VMs and, as soon as each one launches, save its IP in a file.
That is the high-level idea of what I want to do: launch 5 instances and collect all the IPs in a single file on one VM.
I think template_file will work here, but I am not sure how to implement this scenario.
I tried:
#!/bin/bash
touch myip.txt
private_ip=$(google_compute_instance.default.network_interface.0.network_ip)
echo "$private_ip" >> /tmp/ip.sh
resource "null_resource" "coderunner" {
  provisioner "file" {
    source      = "autoo.sh"
    destination = "/tmp/autoo.sh"
    connection {
      host        = google_compute_address.static.address
      type        = "ssh"
      user        = var.user
      private_key = file(var.privatekeypath)
    }
  }
  connection {
    host        = google_compute_address.static.address
    type        = "ssh"
    user        = var.user
    private_key = file(var.privatekeypath)
  }
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/autoo.sh",
      "sh /tmp/autoo.sh",
    ]
  }
  depends_on = ["google_compute_instance.default"]
}
but it is not working; as soon as the script runs, it throws an error:
null_resource.coderunner (remote-exec): /tmp/autoo.sh: line 3: google_compute_instance.default.network_interface.0.network_ip: command not found
There are 2 kinds of template files. One is template_file, which is a data source, and the other is templatefile, which is a function.
template_file is used when you have some file you want to transfer from your machine to the provisioned instance, changing some parameters according to that machine. For example:
data "template_file" "temp_file" {
  template = file("template.yaml")
  vars = {
    "local_ip" = "my_local_ip"
  }
}
(If you want a more detailed explanation of what I did in this example, just ask in the comments, but I think it's not for your use case.)
This is good because you can change this file for each instance you have, if you iterate over it with count for example.
As you can see, this template doesn't do what you want to do. It's a completely different thing.
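For reference, the function form mentioned above, templatefile, renders a template inline without declaring a data source. A sketch reusing the same assumed file name and variable (requires Terraform 0.12+):

```hcl
locals {
  # Render template.yaml directly; no data source block is needed.
  rendered = templatefile("template.yaml", {
    local_ip = "my_local_ip"
  })
}
```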
To do what you want, it's best to use 2 provisioners:
1st, a file provisioner to copy the script (which executes, for example, ip a, along with some other commands to cut and filter only the data you need),
and 2nd, a remote-exec provisioner which will execute that script.
Instead of relying only on depends_on, if you use a remote-exec provisioner it's best to add a sleep command. Sleep holds for a given amount of time and lets your instance start properly. You need to choose the right sleep duration depending on the size and speed of your instance, but I usually use 30 seconds.
I hope I understood your question correctly and that this helps.
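Putting the two provisioners and the sleep together might look like this (a sketch only; the script name, address resource, and variable names are assumptions carried over from the question):

```hcl
resource "null_resource" "coderunner" {
  connection {
    host        = google_compute_address.static.address
    type        = "ssh"
    user        = var.user
    private_key = file(var.privatekeypath)
  }

  # 1st provisioner: copy the IP-collecting script to the instance.
  provisioner "file" {
    source      = "autoo.sh"
    destination = "/tmp/autoo.sh"
  }

  # 2nd provisioner: wait for the instance to settle, then run the script.
  provisioner "remote-exec" {
    inline = [
      "sleep 30",
      "chmod +x /tmp/autoo.sh",
      "sh /tmp/autoo.sh",
    ]
  }
}
```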

terraform import: How to avoid errors on reimporting in automated scripts

Goal:
Reserve a static IP (permanently) in the GCP console, e.g. "ip-drupal-1".
In the Terraform submodule "./module_drupal", make use of that ip-drupal-1.
When terraform destroy is invoked, ip-drupal-1 must stay reserved in GCP. If the static IP gets destroyed, a new one will be generated and I will have to update DNS records.
The procedure below did not achieve that goal. Is there any sample code out there?
I added a terraform import -var-file="main.tfvars" google_compute_address.ip-drupal-1 ip-drupal-1,
so it imports that static IP each time I invoke the shell script.
How do I avoid this error: "to import to this address, you must first remove ..."
To specifically address this, maybe add a terraform state rm followed by the object ID right before the import.
See this for info about terraform state rm.
Depending on how you are handling your automation, that might work.
Here is what I ended up doing
In the infra-update.sh, I added 'terraform state rm' before importing.
infra-createorupdate.sh
terraform init
terraform state list
STATIC_IP_NAME="ip-base"
terraform state rm "google_compute_address.${STATIC_IP_NAME}"
terraform import -var-file="main.tfvars" google_compute_address.${STATIC_IP_NAME} ${STATIC_IP_NAME}
read -s -n 1 -p "Press any key to continue . . ."
terraform plan -var-file="main.tfvars" -out plan.out
#echo -ne '\007'
read -s -n 1 -p "Press any key to APPLY .. Press Ctrl C to abort"
echo ""
terraform apply plan.out
#echo -ne '\007'
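An alternative to unconditionally removing state is to check first whether the address is already tracked, and only import when it is not. A sketch (the maybe_import helper is hypothetical, not part of the script above):

```shell
# maybe_import ADDRESS ID -- run "terraform import" only when ADDRESS is
# not already present in the "terraform state list" output.
maybe_import() {
  addr="$1"; id="$2"
  if terraform state list | grep -qx "$addr"; then
    echo "already in state: $addr"
  else
    terraform import -var-file="main.tfvars" "$addr" "$id"
  fi
}
```

Called as maybe_import "google_compute_address.ip-base" "ip-base", this leaves an already-imported address untouched instead of removing and re-importing it.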
Then in my root main.tf I defined the address with prevent_destroy = true.
main.tf
resource "google_compute_address" "ip-base" { # see terraform import in deploy.sh
  name = "ip-base"
  lifecycle {
    prevent_destroy = true # DO NOT DELETE STATIC-IP
  }
}
# Call vm_micro
module "vm-base" {
  source           = "./_module-vm-micro"
  vm_instance_name = "base"
  custom_static_ip = google_compute_address.ip-base.address
  # from predefined
  # CPU/RAM
  vm_size = var.org_micro
  # where
  deploy_env = var.deploy_env
  zone       = var.zone
  # disk
  boot_image = var.disk_image_coscloud
  disk_type  = var.disk_hdd
  disk_size  = var.disk_20_gb
  # login
  login_key_file = var.ssh_pubkey_file
  login_user     = var.ssh_username
}
Another alternative:
Perhaps I can also save money by not using a static IP. Instead, in the boot startup script, I could make a Dynamic DNS API call to update the DNS record with the new dynamic IP. I'll have to experiment with that; I hope there is some example code.
https://support.google.com/domains/answer/6147083?hl=en#zippy=%2Cusing-the-api-to-update-your-dynamic-dns-record

How to combine "IP" and "name" in a list of instances with Terraform local-exec

I am trying to read the instance public IP and the instance name from Terraform and then write them into a file on the same line.
With the following command, I write this file:
provisioner "local-exec" {
  command = "echo \"${join("\n", aws_instance.nodeStream.*.public_ip)}\" >> ../ouput_file"
}
output_file:
34.14.219.13
64.2.201.14
59.12.31.15
What I want is to have the following output_file:
34.14.219.13 instance_name1
64.2.201.14 instance_name2
59.12.31.15 instance_name3
So I have tried the following to concatenate both lists:
provisioner "local-exec" {
  command = "echo \"${concat(sort(lookup(aws_instance.node1Stream.*.tags, "Name")), sort(aws_instance.node1Stream.*.public_ip))}\" >> ../../output_file"
}
The above throws:
Error: Invalid function argument: Invalid value for "inputMap" parameter: lookup() requires a map as the first argument.
Since your goal is to produce a string from a data structure, this seems like a good use for string templates:
locals {
  hosts_file_content = <<EOT
%{ for inst in aws_instance.node1Stream ~}
${inst.private_ip} ${inst.tags["Name"]}
%{ endfor ~}
EOT
}
With that local value defined, you can include it in the command argument of the provisioner like this:
provisioner "local-exec" {
  command = "echo '${local.hosts_file_content}' >> ../../output_file"
}
If just getting that data into a file is your end goal, and that wasn't just a contrived example for the sake of this question, I'd recommend using the local_file resource instead so that Terraform can manage that file like any other resource, including potentially updating it if the inputs change without the need for any special provisioner triggering:
resource "local_file" "hosts_file" {
  filename = "${path.root}/../../output_file"
  content  = <<EOT
%{ for inst in aws_instance.node1Stream ~}
${inst.private_ip} ${inst.tags["Name"]}
%{ endfor ~}
EOT
}
With that said, the caveat on the local_file documentation page applies both to this resource-based approach and the provisioner-based approach: Terraform is designed primarily for managing remote objects that can persist from one Terraform run to the next, not for objects that live only on the system where Terraform is currently running. Although these features do allow creating and modifying local files, it'll be up to you to make sure that the previous file is consistently available at the same location relative to the Terraform configuration next time you apply a change, or else Terraform will see the file gone and be forced to recreate it.

Terraform not showing output despite being declared

I am declaring the following output in a TF module in the output.tf file:
output "jenkins_username" {
  value       = "${local.jenkins_username}"
  description = "Jenkins admin username"
  #sensitive = true
}
output "jenkins_password" {
  value       = "${local.jenkins_password}"
  description = "Jenkins admin password"
  #sensitive = true
}
The corresponding locals have been declared in main.tf as follows:
locals {
  jenkins_username = "${var.jenkins_username == "" ? random_string.j_username.result : var.jenkins_username}"
  jenkins_password = "${var.jenkins_password == "" ? random_string.j_password.result : var.jenkins_password}"
}
However, after the apply has finished, I see no relevant output; what is more, nothing is displayed even when I call the explicit output command:
$ terraform output jenkins_password
The output variable requested could not be found in the state
file. If you recently added this to your configuration, be
sure to run `terraform apply`, since the state won't be updated
with new output variables until that command is run.
I was having the exact same problem. What worked for me was to comment out the output variables for the first deployment, then uncomment them once that succeeded.
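One more thing worth checking, since the question says the outputs are declared inside a TF module: module outputs are not shown at the root level unless the root configuration re-exports them. A sketch (the module name jenkins and its source path are assumptions):

```hcl
module "jenkins" {
  source = "./modules/jenkins"
}

# Root-level outputs must explicitly re-export module outputs for them
# to appear in `terraform output`.
output "jenkins_username" {
  value = module.jenkins.jenkins_username
}
```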
