I've always been puzzled about why I cannot create files in the $HOME directory using user_data with an aws_instance resource. Even a simple "touch a.txt" in user_data will not create the file.
I have worked around this by creating files in other directories (e.g. /etc/some_file.txt) instead, but I am really curious what the reason behind this is, and whether there is a way to create files in $HOME with user_data.
Thank you.
----- 1st edit -----
Sample code:
resource "aws_instance" "ubuntu" {
ami = var.ubuntu_ami
instance_type = var.ubuntu_instance_type
subnet_id = aws_subnet.ubuntu_subnet.id
associate_public_ip_address = "true"
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.standard_sg.id]
user_data = <<-BOOTSTRAP
#!/bin/bash
touch /etc/1.txt # this file is created in /etc/1.txt
touch 2.txt # 2.txt is not created in $HOME/2.txt
BOOTSTRAP
tags = {
Name = "${var.project}_eks_master_${count.index + 1}"
}
}
I am not sure what the default working directory for user_data is, but I did a simple test and found the answer to your problem.
In an EC2 Instance, I tried this in my user_data
user_data = <<-EOF
  #!/bin/bash
  sudo bash -c "pwd > /var/www/html/path.html"
EOF
The result was this:
root@ip-10-0-10-10:~# cat /var/www/html/path.html
/
Did you check whether this file was created?
ls -l /2.txt
Feel free to reach out to me if you have any doubts.
I think I found the answer to my own question. The $HOME environment variable does not exist at the time the user_data script is run.
I tried 'echo $HOME >> /etc/a.txt' and got a blank line. And instead of creating a file with 'touch $HOME/1.txt', I tried 'touch /home/ubuntu/1.txt', and the file 1.txt was created.
So I can only conclude that $HOME is not set at the time user_data is run.
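For anyone hitting the same thing, here is a minimal sketch of a user_data script that works around it. It assumes the standard cloud-init behaviour of running the script as root and an Ubuntu AMI whose default login user is ubuntu; adjust the paths for other images:
#!/bin/bash
# user_data runs as root with an almost empty environment,
# so set HOME explicitly instead of relying on it being defined.
export HOME=/root

touch "$HOME/1.txt"              # lands in /root/1.txt

# Files meant for the default login user are safest with absolute paths.
touch /home/ubuntu/2.txt
chown ubuntu:ubuntu /home/ubuntu/2.txt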
----- Update 1 -----
Did some further testing to support the findings above. When I ran sudo bash -c 'echo $HOME > /etc/a.txt', the file /etc/a.txt contained /root. But when I ran echo $HOME > /etc/b.txt, the file /etc/b.txt contained only 0xA (a single linefeed character).
Did another test by running set > /etc/c.txt to see whether $HOME was defined; $HOME did not appear among the environment variables listed in /etc/c.txt. But once the instance was up and I ran set via an SSH session, $HOME existed and had the value /home/ubuntu.
I also wondered which user was running the script during initialization, so I tried who am i > /etc/d.txt, and /etc/d.txt was a 0-byte file. So I still don't know which user runs during EC2 instantiation.
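A likely explanation for the 0-byte file, plus a follow-up test I have not re-run on that exact instance (so treat the expected output as an assumption): who am i only reports the login tty session, and a user_data script has no tty, so it prints nothing; id and whoami do not need a tty:
#!/bin/bash
# 'who am i' needs a login tty, which user_data does not have, hence the empty file.
# 'id' and 'whoami' report the effective user regardless of tty.
id > /etc/d.txt          # expected: uid=0(root) gid=0(root) groups=0(root)
whoami >> /etc/d.txt     # expected: root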
Related
Like several other users who have posted to StackOverflow, I ran into problems with file provisioners, and the Terraform documentation says we should not rely on them.
What's the best way to work around file provisioners - specifically for local config files and scripts?
One solution, which works very well and does not require a direct connection to the instance, is to use the userdata as a hook to "install" the files from base64-encoded versions of them.
We can embed the files as base64 strings in the userdata initialization script. This works for both Windows and Linux instances in AWS, and it is also compatible with having a userdata script run on startup.
Solution Description:
1. During terraform plan, encode whatever local files you need as base64 strings using the Terraform function base64encode(file("path/to/file")).
2. (Optional) Save a marker file (_INIT_STARTED_) at the start of userdata execution; its creation timestamp records when userdata execution started.
3. Before running the actual userdata script, write the base64 strings back out to files. (The actual command differs between Windows and Linux; see the examples below.)
4. Run the userdata script itself (userdata_win.bat or userdata_lin.sh).
5. (Optional) Finally, save a second marker file (_INIT_COMPLETE_) whose creation timestamp records when the userdata script completed. (The absence of this file is also helpful for detecting failed or still-running scripts after logging into the instance.)
For AWS Linux instances:
data "template_file" "userdata_lin" {
template = <<EOF
#!/bin/bash
mkdir -p /home/ubuntu/setup-scripts
cd /home/ubuntu/setup-scripts
touch _INIT_STARTED_
echo ${base64encode(file("${path.module}/userdata_lin.sh"))} | base64 --decode > userdata.sh
echo ${base64encode(file("${path.module}/config.json"))} | base64 --decode > config.json
${file("${path.module}/userdata_lin.sh")}
sudo chmod 777 *
touch _INIT_COMPLETE_
EOF
}
# ...
resource "aws_instance" "my_linux_instance" {
# ...
user_data = data.template_file.userdata_lin.rendered
}
For AWS Windows instances:
data "template_file" "userdata_win" {
template = <<EOF
<script>
mkdir C:\Users\Administrator\setup-scripts
cd C:\Users\Administrator\setup-scripts
echo "" > _INIT_STARTED_
echo ${base64encode(file("${path.module}/userdata_win.bat"))} > tmp1.b64 && certutil -decode tmp1.b64 userdata.bat
echo ${base64encode(file("${path.module}/config.json"))} > tmp2.b64 && certutil -decode tmp2.b64 config.json
${file("${path.module}/userdata_win.bat")}
echo "" > _INIT_COMPLETE_
</script>
<persist>false</persist>
EOF
}
# ...
resource "aws_instance" "my_windows_instance" {
# ...
user_data = data.template_file.userdata_win.rendered
}
How can I downsize a virtual machine after provisioning it, from a Terraform script? Is there a way to update a resource without modifying the initial .tf file?
I have a solution you could try; the full command sequence is also consolidated in the sketch after these steps.
1. Copy your .tf file, for example cp vm.tf vm_back.tf, and move vm.tf to another directory.
2. Modify vm_size in vm_back.tf. For my .tf file, the following command changes the value:
   sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf
3. Update the VM size by executing terraform apply.
4. Remove vm_back.tf and move vm.tf back to the original directory.
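The same sequence as one sketch (the sizes are the ones from the steps above; /tmp as the holding directory is just a placeholder):
# 1. Back up the definition and move the original out of Terraform's way.
cp vm.tf vm_back.tf
mv vm.tf /tmp/

# 2. Change the size in the working copy.
sed -i 's/vm_size = "Standard_DS1_v2"/vm_size = "Standard_DS2_v2"/g' vm_back.tf

# 3. Apply the change to resize the VM.
terraform apply

# 4. Restore the original layout.
rm vm_back.tf
mv /tmp/vm.tf .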
How about passing in a command-line argument that is used in a conditional expression?
For example, declare a conditional value in your .tf file:
vm_size = "${var.vm_size == "small" ? var.small_vm : var.large_vm}"
And when you want to provision the small VM, you simply pass the vm_size variable on the command line:
$ terraform apply -var="vm_size=small"
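As a usage example building on this (variable names as in the snippet above), downsizing or upsizing after provisioning is then just a re-apply with a different value:
# initial provisioning with the small size
terraform apply -var="vm_size=small"

# later, switch sizes by re-applying with a different value;
# check the plan output to confirm the provider updates the size of the existing VM
terraform apply -var="vm_size=large"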
In my current project I have the problem of overlong command lines in the Slurm multi-prog configuration file (it has a limit of 256 characters).
To circumvent this, I'd like to define variables (paths, user data, filenames) in the batch file and use them in the multi-prog config file.
Aunt Google and the rather spartan documentation didn't give me any hints. I looked into using environment variables, but haven't found a way to set my own.
Any hints?
The command line in the multi-prog configuration file is exec'ed rather than parsed by Bash, so you need to invoke the Bash shell explicitly.
For instance:
$ export VAR=VALUE
I set a variable VAR in the environment with the value VALUE and use the following conf file to illustrate:
$ cat multi.conf
0 echo $VAR
1 bash -c 'echo $VAR'
Task 0 will simply be exec'ed while task 1 will first be parsed by Bash. The result:
$ srun -n2 -l --multi-prog multi.conf
0: $VAR
1: VALUE
Task 0 echoes the variable name, while task 1 echoes the variable's value. But beware that the bash -c '...' wrapper costs 10 additional characters of the limit.
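To connect this back to the original question, a minimal sketch (my_tool and the paths are placeholders of mine) would define the long values once in the batch script and dereference them with bash -c in the multi-prog file, keeping each conf line well under the limit, since srun passes the submission environment on to the tasks:
job.sh (the batch script):
#!/bin/bash
#SBATCH --ntasks=2
# Define the long pieces once here; they are exported to the tasks.
export DATADIR=/very/long/path/to/project/data
export OUTDIR=/very/long/path/to/project/results
srun --multi-prog multi.conf
multi.conf:
0 bash -c 'my_tool --in $DATADIR/part0 --out $OUTDIR/part0'
1 bash -c 'my_tool --in $DATADIR/part1 --out $OUTDIR/part1'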
I have a problem where my config files' contents are placed within my deployment script, because they get their settings from my setting.sh file. This causes my deployment script to be very large and bloated.
I was wondering if it would be possible in bash to do something like this:
setting.sh
USER="Tom"
log.conf
log=/$PLACEHOLDER_USER/full.log
deployment.sh
#!/bin/bash
# Pull in settings file
. ./settings.sh
# Link config to right location
ln -s /home/log.conf /home/logging/log.conf
# Write variables on top of placeholder variables in the file
for $PLACEHOLDER_* in /home/logging/log.conf
do
(Replace $PLACEHOLDER_<VARIABLE> with $VARIABLE)
done
I want this to work for any variable found in the config file that starts with $PLACEHOLDER_.
This process would allow me to move a generic config file from my repository and then add the proper variables from my setting file on top of the placeholder variables in the config.
I'm stuck on how I can get this to actually work using my deployment.sh.
This small script reads each variable line from settings.sh and, for each one, replaces the matching $PLACEHOLDER_xxx in the target file. Does this help you?
# Read each NAME=value line from settings.sh and substitute every
# $PLACEHOLDER_NAME occurrence in the target file with that value.
while IFS== read -r variable value
do
    value="${value%\"}"; value="${value#\"}"    # drop surrounding quotes, e.g. "Tom" -> Tom
    sed -i "s/\$PLACEHOLDER_$variable/$value/g" file
done < settings.sh
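For a concrete usage example with the files from the question (using /home/logging/log.conf as the target, which is my assumption about where the substitution should happen):
$ cat /home/logging/log.conf
log=/$PLACEHOLDER_USER/full.log
$ cat settings.sh
USER="Tom"
# run the loop above with the target file set to /home/logging/log.conf
$ cat /home/logging/log.conf
log=/Tom/full.log
One caveat: since log.conf is a symlink in the question's setup, GNU sed -i will replace the link with a regular file; run the substitution on the real file if the link must survive.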
#!/usr/bin/env bash
set -x

ln -s /home/log.conf /home/logging/log.conf

while read -r user
do
    # strip the leading USER=" and the trailing quote to get the bare value
    usertmp=$(echo "${user}" | sed 's#USER="##' | sed 's#"$##')
    user="${usertmp}"
    log="${user}"/full.log
done < setting.sh
I don't really understand the rest of what you're trying to do, I will confess, but this will hopefully give you the idea. Use read.
If I am writing a bash script and I choose to use a config file for parameters, can I still pass in parameters via the command line? I guess I'm asking: can I do both in the same command?
The watered down code:
#!/bin/bash

source builder.conf

function xmitBuildFile {
    for IP in "${SERVER_LIST[@]}"
    do
        echo $1@$IP
    done
}

xmitBuildFile
builder.conf:
SERVER_LIST=( 192.168.2.119 10.20.205.67 )
$bash> ./builder.sh myname
My expected output should be myname@192.168.2.119 and myname@10.20.205.67, but when I do echo $#, I get 0, even though I passed in 'myname' on the command line.
Assuming the "config file" is just a piece of shell sourced into the main script (usually containing definitions of some variables), like this:
. /etc/script.conf
of course you can use the positional parameters anywhere (before or after ". /etc/..."):
echo "$#"
test -n "$1" && ...
you can even define them in the script or in the very same config file:
test $# = 0 && set -- a b c
Yes, you can. Furthermore, it depends on the architecture of your script: you can overwrite parameters with values from the config and vice versa.
By the way, shflags may be pretty useful for writing such a script.
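To make the "config file plus command-line arguments" combination concrete, here is a minimal sketch along the lines of the question's script. The NAME variable and the rule that the command line overrides the config are hypothetical choices of mine, not part of the original; note also that positional parameters belong to the script, so they must be passed to the function explicitly:
#!/bin/bash
# builder.conf provides defaults, e.g. SERVER_LIST=( 192.168.2.119 10.20.205.67 )
source builder.conf

# A name given on the command line overrides any NAME set in builder.conf.
NAME="${1:-$NAME}"

function xmitBuildFile {
    local name="$1"
    for IP in "${SERVER_LIST[@]}"
    do
        echo "${name}@${IP}"
    done
}

# A function has its own $1/$#, so pass the script's argument down explicitly.
xmitBuildFile "$NAME"
Running ./builder.sh myname should then print myname@192.168.2.119 and myname@10.20.205.67.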