Unable to execute script using user_data - terraform

I'm trying to execute a script once when the EC2 instance boots up, so I have the following in instance.tf:
resource "aws_instance" "test" {
ami = "i-33434"
user_data = "${data.template_file.user-data.rendered}"
}
data "template_file" "user-data" {
template = "${file("templates/init.tpl")}"}
And I have an init.tpl file created under the templates folder with the content below:
#!/bin/bash
sudo mkdir /ecs
mkfs -t ext4 /dev/xvdt10
mkfs -t ext4 /dev/xvdt11
mkdir /ecs/folder1
mkdir /ecs/folder2
mount /dev/xvdt10 /ecs/folder1
mount /dev/xvdt11 /ecs/folder2
echo /dev/xvdt10 /ecs/folder1 ext4 defaults,nofail 0 2 >> /etc/fstab
echo /dev/xvdt11 /ecs/folder2 ext4 defaults,nofail 0 2 >> /etc/fstab
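If the script runs before the extra volumes are attached, the mkfs and mount commands will fail; a slightly more defensive version of init.tpl could wait for each device to appear first. This is only a sketch assuming the device names /dev/xvdt10 and /dev/xvdt11 from above (user data already runs as root, so sudo is not strictly needed):

#!/bin/bash
# Wait until both devices are visible before formatting them.
for dev in /dev/xvdt10 /dev/xvdt11; do
  while [ ! -b "$dev" ]; do sleep 5; done
done
mkfs -t ext4 /dev/xvdt10
mkfs -t ext4 /dev/xvdt11
mkdir -p /ecs/folder1 /ecs/folder2
mount /dev/xvdt10 /ecs/folder1
mount /dev/xvdt11 /ecs/folder2
echo "/dev/xvdt10 /ecs/folder1 ext4 defaults,nofail 0 2" >> /etc/fstab
echo "/dev/xvdt11 /ecs/folder2 ext4 defaults,nofail 0 2" >> /etc/fstab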

Related

Packer CentOS AMI: loop device not found

So I'm trying to make modifications to an ISO in a Packer instance, but I keep getting the following error message:
==> amazon-ebs.centos-efi: + sudo mount -t iso9660 -o loop 'temporary.iso' /tmp/tmp.M5xCLte5mi
==> amazon-ebs.centos-efi: mount: temporary.iso: failed to setup loop device: No such file or directory
And I cannot seem to understand why this is happening. Things I have tried:
I have tried mounting in a different directory.
Running modprobe loop
Running losetup
Creating a loop device: sudo mknod -m640 /dev/loop8 b 7 8
None of them worked. So I've come here for some guidance.
I'll provide relevant bits of my Packer template below:
source "amazon-ebs" "centos-efi" {
ami_name = "centos-efi-{{timestamp}}"
ssh_username = "centos"
instance_type = "t2.medium"
region = "${var.aws_region}"
source_ami = "ami-04f798ca92cc13f74"
skip_create_ami = true
tag {
key = "Name"
value = "CentOS EFI Build"
}
launch_block_device_mappings {
device_name = "/dev/sda1"
volume_size = 32
volume_type = "gp2"
delete_on_termination = true
}
}
Provisioner Code:
ISO_ORIG=$(mktemp -d)
ISO_CHANGE=$(mktemp -d)
ls -al /tmp
sleep 15
sudo mount -t iso9660 -o loop temporary.iso $ISO_ORIG
cd $ISO_ORIG
Any recommendations?
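For what it's worth, one thing worth ruling out before digging deeper: mount resolves temporary.iso relative to the provisioner's working directory, and the same "No such file or directory" wording can show up when either the ISO path or the loop driver is missing. A purely diagnostic sketch to run just before the mount:

# Diagnostics only: check the ISO path and loop device availability.
pwd
ls -l temporary.iso                        # is the ISO where the relative path says it is?
ls -l /dev/loop-control /dev/loop* || true # are loop devices present in the build instance?
lsmod | grep loop || true                  # is the loop module loaded (if built as a module)?
sudo losetup -f                            # which loop device would be used next?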

Check duplicate disk LABEL before mounting it to the system in bash script

Is there a way to check for a duplicate disk LABEL before mounting it to the system?
I need to make sure that if the user has two external drives with the same label, the script shows a warning and asks them to remove the duplicated disk.
My code is in the early stages:
if mountpoint -q "${JOB_MOUNT_DIR}"; then
  echo " ${JOB_MOUNT_HD_LABEL} is already mounted and ready for use"
else
  echo "The device ${JOB_MOUNT_HD_LABEL} is not mounted at ${JOB_MOUNT_DIR}"
  echo "Do you want to mount it?"
  echo -n "What is your choice? [y/n]: "
  read -r "opcao"
  if [ "$opcao" == "y" ]; then
    mkdir -p "${JOB_MOUNT_DIR}/${JOB_MOUNT_HD_LABEL}"
    mount -L "${JOB_MOUNT_HD_LABEL}" "${JOB_MOUNT_DIR}/${JOB_MOUNT_HD_LABEL}"
    exit 0
  else
    echo "The disk will not be mounted"
    exit 0
  fi
fi
exit 0
First it checks whether the disk is already mounted; if not, it asks whether to mount it. The problem then is knowing which of the two disks with the same LABEL should be mounted.
You can use:
lsblk -o name,label
NAME LABEL
mmcblk0
└─mmcblk0p1 eMMC
Or you can use:
blkid
/dev/nvme0n1p1: UUID="36D7-B890" TYPE="vfat" PARTUUID="8614534f-01"
/dev/nvme0n1p5: UUID="65885781-bd9b-4c62-afb0-4a82a0e5759e" TYPE="ext4" PARTUUID="8614534f-05"
/dev/mmcblk0p1: LABEL="eMMC" UUID="79ff33b4-2add-4f2f-844e-d7d242c18578" TYPE="ext4" PARTUUID="d4b36674-ab5f-4f15-bb83-313cce242fe4"
You may need to prefix the above commands with sudo.
If the above commands get your labels correctly, you can pipe the output roughly as follows to list duplicates:
lsblk -o label | sort | uniq -d
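Building on that, here is a minimal sketch of how the duplicate check could be wired into the script before the mount, using the same JOB_MOUNT_HD_LABEL variable as above:

# Abort with a warning if the label about to be mounted appears on more than one device.
dup_count=$(lsblk -rno LABEL | grep -Fxc "${JOB_MOUNT_HD_LABEL}")
if [ "$dup_count" -gt 1 ]; then
  echo "Warning: ${dup_count} disks share the label ${JOB_MOUNT_HD_LABEL}."
  echo "Please remove the duplicated disk and try again."
  exit 1
fi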

Replace the option noexec to exec in /etc/fstab file through a shell script

Only the noexec option on the /tmp line should change to exec; the noexec option on the /var/tmp line shouldn't change.
contents of /etc/fstab
UUID=f229a689-a31e-4f1a-a823-9a69ee6ec558 / xfs defaults 0 0
UUID=eeb1df48-c9b0-408f-a693-38e2f7f80895 /boot xfs defaults 1 2
UUID=b41e6ef9-c638-4084-8a7e-26ecd2964893 swap swap defaults 0 0
UUID=79aa80a1-fa97-4fe1-a92d-eadf79721204 /var xfs defaults 1 2
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid,noexec 1 2
UUID=decda446-34ac-45b6-826c-ae3f090ed717 /var/log xfs defaults 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
Using awk:
$ cat 1.awk
$2 == "/tmp" {
    n = split($4, a, ",")
    str = ""
    for (i = 1; i <= n; i++) {
        if (a[i] != "noexec") {
            if (length(str))
                str = str ","
            str = str a[i]
        }
    }
    $4 = str
    print
}
$2 != "/tmp" { print }
$ awk -f 1.awk fstab
UUID=f229a689-a31e-4f1a-a823-9a69ee6ec558 / xfs defaults 0 0
UUID=eeb1df48-c9b0-408f-a693-38e2f7f80895 /boot xfs defaults 1 2
UUID=b41e6ef9-c638-4084-8a7e-26ecd2964893 swap swap defaults 0 0
UUID=79aa80a1-fa97-4fe1-a92d-eadf79721204 /var xfs defaults 1 2
UUID=644be3d0-433c-4ed5-bf12-7f61d5b99860 /tmp xfs defaults,nodev,nosuid 1 2
UUID=decda446-34ac-45b6-826c-ae3f090ed717 /var/log xfs defaults 1 2
UUID=a74170bc-0309-4b3b-862e-722fb7a6882d /var/tmp xfs defaults,nodev,nosuid,noexec 1 2
Note, alignment of fields in the modified line can easily be improved using printf. I cannot tell if you were using tabs or spaces between the various fields.
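If the goal is to edit /etc/fstab in place from a shell script, a sed one-liner can do the same job; this is just a sketch that assumes the whitespace-separated layout shown above (with /tmp as the second field) and keeps a backup copy:

# Remove ",noexec" only on the line whose mount-point field is /tmp; a backup is left at /etc/fstab.bak.
sudo sed -i.bak '\%[[:space:]]/tmp[[:space:]]%s/,noexec//' /etc/fstab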

Unable to mount volume created by terraform

I am using the following Terraform template:
resource "aws_instance" "ec2" {
ami = "${var.ami_id}"
instance_type = "${var.flavor}"
key_name = "${var.key_name}"
availability_zone = "${var.availability_zone}"
security_groups= ["${var.security_group}"]
tags = {Name = "${var.instance_name}"}
}
resource "aws_volume_attachment" "ebs_volume" {
device_name = "/dev/sdg"
volume_id = "vol-006d716dad719545c"
instance_id = "${aws_instance.ec2.id}"
}
to launch an instance in AWS and attach a volume to that instance.
When I execute this, I see that the instance is created and the volume is attached to the instance as well.
ubuntu@ip-172-31-10-43:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 91M 1 loop /snap/core/6350
loop1 7:1 0 18M 1 loop /snap/amazon-ssm-agent/930
loop2 7:2 0 88.4M 1 loop /snap/core/6964
loop3 7:3 0 18M 1 loop /snap/amazon-ssm-agent/1335
xvda 202:0 0 50G 0 disk
└─xvda1 202:1 0 50G 0 part /
xvdg 202:96 0 20G 0 disk
But when I try to mount the volume, I'm getting this weird error:
ubuntu@ip-172-31-10-43:~$ sudo mkdir -p /goutham
ubuntu@ip-172-31-10-43:~$ sudo mount /dev/xvdg /goutha,
mount: /goutha,: mount point does not exist.
ubuntu@ip-172-31-10-43:~$ sudo mount /dev/xvdg /goutham
mount: /goutham: wrong fs type, bad option, bad superblock on /dev/xvdg, missing codepage or helper program, or other error.
Can anyone please help me figure out what mistake I am making in this exercise?
Thanks in advance.
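A quick way to confirm that the attached volume simply has no file system yet is to inspect it with file; this is only a diagnostic sketch, using the same check the answer below builds on:

sudo file -s /dev/xvdg
# Output like "/dev/xvdg: data" means the volume is empty and needs mkfs before it can be mounted.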
You can create a file system on the attached disk using user data in your Terraform script:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
https://www.terraform.io/docs/providers/aws/r/instance.html#user_data
Create a shell script, templates/mkfs.sh:
#!/bin/bash
# Wait until the attached device shows up.
while ! ls /dev/xvdg > /dev/null
do
  sleep 5
done
# If the device has no file system yet ("data"), create one.
if [ `file -s /dev/xvdg | cut -d ' ' -f 2` = 'data' ]
then
  mkfs.xfs /dev/xvdg
fi
Terraform script:
data "template_file" "mkfs" {
  template = "${file("${path.module}/templates/mkfs.sh")}"
}

resource "aws_instance" "ec2" {
  ...
  user_data = "${data.template_file.mkfs.rendered}"
  ...
}
It will run when the EC2 instance is created, wait until the disk is attached, and then create the file system.
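If you also want the script to mount the new file system and have the mount survive reboots, the same mkfs.sh could be extended roughly like this (a sketch assuming the /goutham mount point used later in this thread):

# After creating the file system, mount it and persist the mount in /etc/fstab.
mkdir -p /goutham
mount /dev/xvdg /goutham
echo "/dev/xvdg /goutham xfs defaults,nofail 0 2" >> /etc/fstab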
I figured it out. I think I missed creating the file system on the volume, as the volume I'm trying to attach is an empty one, so this helped me out:
$ sudo mkfs -t xfs /dev/xvdg
and
sudo mkdir -p /goutham
sudo mount /dev/xvdg /goutham
Thanks

AWS ECS volumes do not share any files

I have an EBS volume I have mounted to an AWS ECS cluster instance. This EBS volume is mounted under /data:
$ cat /etc/fstab
...
UUID=xxx /data ext4 defaults,nofail 0 2
$ ls -la /data
total 28
drwxr-xr-x 4 1000 1000 4096 May 14 06:11 .
dr-xr-xr-x 26 root root 4096 May 15 21:18 ..
drwxr-xr-x 4 root root 4096 May 14 06:11 .ethereum
drwx------ 2 1000 1000 16384 May 14 05:29 lost+found
Edit: Output of /proc/mounts
[ec2-user@xxx ~]$ cat /proc/mounts
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
/dev/xvda1 / ext4 rw,noatime,data=ordered 0 0
devtmpfs /dev devtmpfs rw,relatime,size=4078988k,nr_inodes=1019747,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/perf_event cgroup rw,relatime,perf_event 0 0
/dev/xvdf /data ext4 rw,relatime,data=ordered 0 0
Now, I would like to mount /data/.ethereum as a Docker volume to /geth/.ethereum in my ECS task definition:
{
  ...
  "containerDefinitions": [
    {
      ...
      "volumesFrom": [],
      "mountPoints": [
        {
          "containerPath": "/geth/.ethereum",
          "sourceVolume": "ethereum_datadir",
          "readOnly": null
        }
      ],
      ...
    }
  ],
  ...
  "volumes": [
    {
      "host": {
        "sourcePath": "/data/.ethereum"
      },
      "name": "ethereum_datadir"
    }
  ],
  ...
}
It appears that the volume is correctly mounted after running the task:
$ docker inspect -f '{{ json .Mounts }}' f5c36d9ea0d6 | python -m json.tool
[
{
"Destination": "/geth/.ethereum",
"Mode": "",
"Propagation": "rprivate",
"RW": true,
"Source": "/data/.ethereum"
}
]
However, if I create a file inside the container under the mount point, it does not show up on the host machine.
[ec2-user@xxx .ethereum]$ docker exec -it f5c36d9ea0d6 bash
root@f5c36d9ea0d6:/geth# cat "Hello World!" > /geth/.ethereum/hello_world.txt
cat: Hello World!: No such file or directory
root@f5c36d9ea0d6:/geth# echo "Hello World!" > /geth/.ethereum/hello_world.txt
root@f5c36d9ea0d6:/geth# cat /geth/.ethereum/hello_world.txt
Hello World!
root@f5c36d9ea0d6:/geth# exit
exit
[ec2-user@xxx ~]$ cat /data/.ethereum/hello_world.txt
cat: /data/.ethereum/hello_world.txt: No such file or directory
Somehow the file systems are not being shared. Any ideas?
Found the issue.
It seems that with Docker, any mount point on the host instance (e.g. for EBS volumes) has to be mounted before the Docker daemon starts; otherwise Docker writes the files into the instance's root file system without you even noticing.
I stopped Docker, unmounted the EBS volume, cleaned everything up, mounted the EBS volume again and started Docker afterwards. Now Docker seems to recognize the mount point and writes everything into my EBS volume as it should.
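For reference, the recovery sequence described above looks roughly like this on the instance; the exact service name depends on the AMI, and /data is the mount point from the question, so treat this as a sketch:

# Stop Docker so it lets go of the stale bind source, fix the mount, then restart it.
sudo service docker stop
sudo umount /data || true   # in case it was mounted after Docker had started
# clean up anything Docker wrote into the underlying directory, if needed
sudo mount -a               # re-mount /data from /etc/fstab
sudo service docker start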
