Unable to provision with file multiple times - Linux

While trying to provision with the file provisioner multiple times, the second occurrence is not being considered. I'm not sure if I'm doing it correctly.
Please throw some light!
The below block works perfectly:
provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/aws_infra_automation"
  destination = "/home/ubuntu"
}
However, this one didn't work, and no error was thrown by Terraform itself!
provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/livedevops"
  destination = "/home/ubuntu"
}
The entire code is given below:
resource "tls_private_key" "bastion-key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "generated_key" {
key_name = var.bastion_key
public_key = tls_private_key.bastion-key.public_key_openssh
}
resource "aws_instance" "bastion_host_us-east-2a" {
ami = var.bastion_ami_id
instance_type = var.bastion_ec2_instance_type
disable_api_termination = false
subnet_id = aws_subnet.devops_mig_pub_sub_01.id
vpc_security_group_ids = [aws_security_group.sg-btag-allow.id, aws_security_group.sg-ssh-allow.id]
associate_public_ip_address = true
availability_zone = aws_subnet.devops_mig_pub_sub_01.availability_zone
key_name = aws_key_pair.generated_key.id
connection {
type = "ssh"
host = self.public_ip
user = "ubuntu"
port = 22
private_key = tls_private_key.bastion-key.private_key_pem
timeout = "60s"
}
#Copying files from local to remote
provisioner "file" {
source = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/aws_infra_automation"
destination = "/home/ubuntu"
}
provisioner "file" {
source = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/livedevops"
destination = "/home/ubuntu"
}
user_data = <<-EOF
#!/bin/bash
sudo apt update -y
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt update -y
sudo apt install -y ansible
/usr/bin/ansible --version > ansible-v.txt
echo "Installing the cloudwatch agent for Ubuntu Linux."
curl -O https://s3.amazonaws.com/amazoncloudwatch-agent/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
dpkg -i -E ./amazon-cloudwatch-agent.deb
EOF
tags = {
"Name" = "bastion_host"
}
}
output "private_key" {
value = tls_private_key.bastion-key.private_key_pem
sensitive = true
}
output "bastion_public_ip" {
value = aws_instance.bastion_host_us-east-2a.public_ip
}
output "bastion_private_ip" {
value = aws_instance.bastion_host_us-east-2a.private_ip
}
resource "aws_ebs_volume" "bastion_storage" {
availability_zone = var.bastion-ebs-availability-zone
size = 50
type = "gp2"
tags = {
"Name" = "bastion_ebs_volume"
}
}
resource "local_file" "bastion_private_key" {
content = tls_private_key.bastion-key.private_key_pem
filename = "bastion-key.pem"
file_permission = "0400"
}

I see ubuntu is the user used to SSH into the target machine. It's a bad idea to copy files directly into that user's home directory; in this case the file provisioner is simply replacing whatever is in the /home/ubuntu directory.
That directory also contains the SSH public key used for authentication, in ~/.ssh/authorized_keys. That's the reason it fails at the second file provisioner.
Create a tmp directory under /home/ubuntu, or use the /tmp or /var/tmp directories if they allow the ubuntu user to write.
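For example, a minimal sketch of the two provisioners pointed at a scratch directory instead of the home directory itself (the uploads directory name is just an example):
# Create a scratch directory first, then upload into it
provisioner "remote-exec" {
  inline = ["mkdir -p /home/ubuntu/uploads"]
}

provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/aws_infra_automation"
  destination = "/home/ubuntu/uploads"
}

provisioner "file" {
  source      = "/home/ubuntu/Desktop/aws_migration_using_terraform/tcs-btag-account_us-east-2/livedevops"
  destination = "/home/ubuntu/uploads"
}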

Related

Deploying a self-managed EKS cluster via Terraform

It's my first time doing this, and this is mostly a copy-pasted beginner example. I'm not sure what I'm missing.
self_managed_node_group_defaults = {
  disk_size = 50
}

self_managed_node_groups = {
  bottlerocket = {
    name          = "bottlerocket-self-mng"
    platform      = "bottlerocket"
    ami_id        = "xxx"
    instance_type = "t2.small"
    desired_size  = 2

    iam_role_additional_policies = ["arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"]

    pre_bootstrap_user_data = <<-EOT
      echo "foo"
      export FOO=bar
    EOT

    bootstrap_extra_args = "--kubelet-extra-args '--node-labels=node.kubernetes.io/lifecycle=spot'"

    post_bootstrap_user_data = <<-EOT
      cd /tmp
      sudo yum install -y https://s3.amazonaws.com/ec2-downloads-windows/SSMAgent/latest/linux_amd64/amazon-ssm-agent.rpm
      sudo systemctl enable amazon-ssm-agent
      sudo systemctl start amazon-ssm-agent
    EOT
  }
}
And the error it throws:
Error: Your query returned no results. Please change your search criteria and try again.

  with module.eks.module.self_managed_node_group["bottlerocket"].data.aws_ami.eks_default[0],
  on .terraform/modules/eks/modules/self-managed-node-group/main.tf line 5, in data "aws_ami" "eks_default":
   5: data "aws_ami" "eks_default" {

SSH isn't working in Windows with Terraform provisioner connection type

I tried creating an instance in AWS using Terraform and copying a set of files into the newly created instance. I used a "provisioner" for this, but the connection always says connection timed out.
In the example below I show it with an AWS .pem file, but I tried with both .ppk and .pem files; nothing works.
provider "aws" {
region = "ap-southeast-1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
resource "aws_instance" "firsttest" {
ami = "ami-061eb2b23f9f8839c"
instance_type = "t2.micro"
key_name = "deepak"
provisioner "file" {
source = "index.html"
destination = "/home/ubuntu/index.html"
connection {
type = "ssh"
user = "ubuntu"
private_key = file("D:/awskeyterraform/deepak.pem")
host = "${aws_instance.firsttest.public_ip}"
}
}
user_data = <<-EOF
#!/bin/bash
apt-get update -y
apt-get install -y nginx
systemctl enable nginx
service nginx restart
touch index.html
EOF
tags = {
name = "terraform-firsttest"
}
}
Expected: index.html should be copied. Actual: the connection timed out while connecting to the newly created instance.
In Windows, the SSH connection doesn't accept "*.pem". Instead, it accepts the PEM file after renaming it to "id_rsa".
provider "aws" {
region = "ap-southeast-1"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
resource "aws_instance" "firsttest" {
ami = "ami-061eb2b23f9f8839c"
instance_type = "t2.micro"
key_name = "deepak"
provisioner "file" {
source = "index.html"
destination = "/home/ubuntu/index.html"
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("D:/awskeyterraform/id_rsa")}"
host = "${aws_instance.firsttest.public_ip}"
}
}
user_data = <<-EOF
#!/bin/bash
apt-get update -y
apt-get install -y nginx
systemctl enable nginx
service nginx restart
touch index.html
EOF
tags = {
name = "terraform-firsttest"
}
}
Hope this solves the issue.

Terraform remote-exec provisioner fails with 'bash: Permission denied'

I tried to use remote-exec to execute several commands on the target VM, but it failed with 'bash: Permission denied'. Here is the code:
connection {
  host        = "${azurerm_network_interface.nic.private_ip_address}"
  type        = "ssh"
  user        = "${var.mp_username}"
  private_key = "${file(var.mp_vm_private_key)}"
}

provisioner "remote-exec" {
  inline = [
    "sudo wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh",
    "sudo chown ${var.mp_username}: onboard_agent.sh",
    "sudo chmod +x onboard_agent.sh",
    "./onboard_agent.sh -w ${azurerm_log_analytics_workspace.workspace.workspace_id} -s ${azurerm_log_analytics_workspace.workspace.primary_shared_key} -d opinsights.azure.us"
  ]
}
After checking the issue here: https://github.com/hashicorp/terraform/issues/5397, I saw that I need to wrap all the commands into a file. So I used a template file to put all the commands in it:
OMSAgent.sh
#!/bin/bash
sudo wget https://raw.githubusercontent.com/Microsoft/OMS-Agent-for-Linux/master/installer/scripts/onboard_agent.sh
sudo chown ${username}: onboard_agent.sh
sudo chmod +x onboard_agent.sh
./onboard_agent.sh -w ${workspaceId} -s ${workspaceKey} -d opinsights.azure.us
The code changes to:
data "template_file" "extension_data" {
template = "${file("templates/OMSAgent.sh")}"
vars = {
workspaceId = "${azurerm_log_analytics_workspace.workspace.workspace_id}"
workspaceKey = "${azurerm_log_analytics_workspace.workspace.primary_shared_key}"
username = "${var.mp_username}"
}
}
resource "null_resource" "remote-provisioner" {
connection {
host = "${azurerm_network_interface.nic.private_ip_address}"
type = "ssh"
user = "${var.mp_username}"
private_key = "${file(var.mp_vm_private_key)}"
script_path = "/home/${var.mp_username}/OMSAgent.sh"
}
provisioner "file" {
content = "${data.template_file.extension_data.rendered}"
destination = "/home/${var.mp_username}/OMSAgent.sh"
}
provisioner "remote-exec" {
inline = [
"chmod +x /home/${var.mp_username}/OMSAgent.sh",
"/home/${var.mp_username}/OMSAgent.sh"
]
}
}
But something seems to be wrong in the null_resource; the installation stopped and threw this:
null_resource.remote-provisioner (remote-exec): /home/user/OMSAgent.sh: 2: /home/user/OMSAgent.sh: Cannot fork
And the content of the shell script on the VM is this:
cat OMSAgent.sh
#!/bin/sh
chmod +x /home/user/OMSAgent.sh
/home/user/OMSAgent.sh
It seems I wrote the script the wrong way.
@joe huang Please make sure you use the username and password provided when you created the os_profile for your VM:
os_profile {
  computer_name  = "hostname"
  admin_username = "testadmin"
  admin_password = "Password1234!"
}
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html#example-usage-from-an-azure-platform-image-
Here is a document for installing the OMS agent:
https://support.microsoft.com/en-in/help/4131455/how-to-reinstall-operations-management-suite-oms-agent-for-linux
Hope this helps!
If your /tmp is mounted with noexec, the default location that Terraform uses to push its temporary script needs to change, perhaps to your user's home directory. In the connection block, add:
script_path = "~/terraform_provisioner_%RAND%.sh"
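For reference, a minimal sketch of the question's connection block with the relocated script_path (the exact path is an assumption; any location the SSH user can write to and execute from works, and %RAND% keeps the file name unique):
connection {
  host        = "${azurerm_network_interface.nic.private_ip_address}"
  type        = "ssh"
  user        = "${var.mp_username}"
  private_key = "${file(var.mp_vm_private_key)}"

  # Push Terraform's generated remote-exec script to its own random file name
  # instead of reusing the OMSAgent.sh path that the file provisioner uploads to.
  script_path = "/home/${var.mp_username}/terraform_provisioner_%RAND%.sh"
}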

Terraform's remote-exec on each host created

I am trying to set up a group of EC2 instances for an app using Terraform in AWS. After each server is created, I want to mount the eNVM instance storage on it using remote-exec: create 3 servers, then mount the eNVM on each of the 3 servers.
I attempted to use a null_resource, but I am getting errors about 'resource depends on non-existent resource' or 'interpolation' errors.
variable "count" {
  default = 3
}

module "app-data-node" {
  source           = "some_git_source"
  count            = "${var.count}"
  instance_size    = "instance_data"
  hostname_pattern = "app-data"
  dns_domain       = "${data.terraform_remote_state.network.dns_domain}"
  key_name         = "app-automation"
  description      = "Automation App Data Instance"
  package_proxy    = "${var.package_proxy}"
}

resource "null_resource" "mount_envm" {
  # Only run this provisioner for app nodes
  #count = "${var.count}"
  depends_on = [
    "null_resource.${module.app-data-node}"
  ]

  connection {
    host        = "${aws_instance.i.*.private_ip[count.index]}"
    user        = "root"
    private_key = "app-automation"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/nvme0n1",
      "sudo mkdir /data",
      "sudo mount /dev/nvme0n1 /data"
    ]
  }
}
Expected result: 3 EC2 instances, each with an eNVM mounted on it.
You can use a null_resource to run the provisioner:
resource "null_resource" "provisioner" {
count = "${var.count}"
triggers {
master_id = "${element(aws_instance.my_instances.*.id, count.index)}"
}
connection {
#host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
host = "${element(aws_instance.my_instances.*.private_ip, count.index)}"
type = "ssh"
user = "..."
private_key = "..."
}
# set hostname
provisioner "remote-exec" {
inline = [
"sudo mkfs -t ext4 /dev/nvme0n1",
"sudo mkdir /data",
"sudo mount /dev/nvme0n1 /data"
]
}
}
This should do it for all instances at once as well.
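If the instances come from a module, as in the question, the null_resource has to consume something the module actually exports. A minimal sketch, assuming a hypothetical private_ips output added to the some_git_source module:
# Inside the module: expose the instance IPs (output name is hypothetical)
output "private_ips" {
  value = "${aws_instance.i.*.private_ip}"
}

# In the root module: iterate over the exported IPs
resource "null_resource" "mount_envm" {
  count = "${var.count}"

  connection {
    host        = "${element(module.app-data-node.private_ips, count.index)}"
    user        = "root"
    private_key = "${file("app-automation.pem")}" # key material, not a key pair name
  }

  provisioner "remote-exec" {
    inline = [
      "sudo mkfs -t ext4 /dev/nvme0n1",
      "sudo mkdir /data",
      "sudo mount /dev/nvme0n1 /data"
    ]
  }
}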

terraform not working with remote-exec for tpl script

I have a simple AWS EC2 instance as below:
resource "aws_instance" "App01" {
##ami = "ami-846144f8"
ami = "${data.aws_ami.aws_linux.id}"
instance_type = "t1.micro"
subnet_id = "${aws_subnet.public_1a.id}"
associate_public_ip_address = true
vpc_security_group_ids = ["${aws_security_group.web_server.id}","${aws_security_group.allow_ssh.id}"]
key_name = "key"
provisioner "remote-exec"{
inline = ["${template_file.bootstrap.rendered}"]
}
tags {
Name = "App01"
}
}
data "aws_ami" "aws_linux" {
most_recent = true
filter {
name = "name"
values = ["amzn2-ami-*-x86_64-gp2"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
filter {
name = "owner-alias"
values = ["amazon"]
}
}
resource "template_file" "bootstrap" {
template = "${file("bootstrap.tpl")}"
vars {
app01ip = "${aws_instance.App01.private_ip}"
app02ip = "${aws_instance.App02.private_ip}"
DBandMQip = "${aws_instance.DBandMQ.private_ip}"
}
}
This is my tpl script:
#!/bin/bash -xe
# install necessary items like ansible and
sudo yum-config-manager --enable epel
sudo amazon-linux-extras install ansible2
echo "${app01ip} App01" > /etc/hosts
echo "${app02ip} App02" > /etc/hosts
echo "${DBandMQip} DBandMQ" > /etc/hosts
I keep getting a
Error: Error asking for user input: 1 error(s) occurred:
* Cycle: aws_instance.App01, template_file.bootstrap
I believe it's coming from the remote-exec portion of the resource, but I am unsure what's wrong because it looks fine to me. Does anyone have any idea what I am doing wrong?
