I'm currently developing an Azure Web App from a container. I want to change the initial docker run command that Azure executes, because my container needs some environment variables.
So, I tried different approaches, but nothing works. For example, if I set my variables inside the 'Startup File' field in the container settings, the content is appended to the original docker run, as explained here: Startup File in Web App for Containers
Something like this:
docker run -d -p 5920:80 --name test -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITE_SITE_NAME=test -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=test.net -e WEBSITE_INSTANCE_ID=0 test/myImage:latest -e DB_HOST=test.com:3306 -e DB_DATABASE=test -e DB_USERNAME=test -e DB_PASSWORD=test -e APP_URL=https://test.com
Obviously, this won't work.
I tried to get into the app using FTPS, but I can't find the .env file, and I cannot connect to the container via SSH because it keeps failing.
So, my question is: how can I override the initial docker run command that the Azure container is executing?
I added all my environment variables in the app settings and I can see them in Kudu, but I'm missing a step.
Thanks for your help
I resolved my problem by using Docker Compose in the container settings.
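For anyone else hitting this, here is a minimal sketch of the kind of compose file I mean; the image name and environment values are just the placeholders from the docker run above, not a real configuration:
# Sketch of a docker-compose.yml pasted into the Web App's container settings.
version: '3.3'
services:
  app:
    image: test/myImage:latest
    ports:
      - "80:80"
    environment:
      - DB_HOST=test.com:3306
      - DB_DATABASE=test
      - DB_USERNAME=test
      - DB_PASSWORD=test
      - APP_URL=https://test.com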
First: I have searched the forum and also gone through the documentation, but I still cannot get it right.
So, I have a docker command I want to run on a remote server, from my bash script. I want to pass an environment variable – on the local machine running the script – to the remote server. Furthermore, I need a response from the remote command.
Here is what I actually am trying to do and what I need: the script is a tiny wrapper around our Traefik/Docker/Elixir/Phoenix app setup to be able to connect easily to the running Elixir application, inside the Erlang observer. With the script, the steps would be:
ssh into the remote machine
docker ps to see all running containers, since in our blue/green deploy the active one changes name
docker exec into the correct container
execute a command inside the docker container to connect to the running Elixir application
The command I am using now is:
CONTAINER=$(ssh -q $USER@$IP 'sudo docker ps --format "{{.Names}}" | grep ""$APP_NAME"" | head -n 1')
The main problem is the part with the grep and the env var... It is empty and does not get replaced. That makes sense, since that variable does not exist on the remote machine; it exists on my local machine. I tried single quotes, $(), ... Either it just does not work, or the solutions I find online execute the command but then give me no way of getting the container name, which I need for the subsequent command:
ssh -o 'RequestTTY force' $USER@$IP "sudo docker exec -i -t $CONTAINER /bin/bash -c './bin/app remote'"
Thanks for your input!
First, are you sure you need to call sudo docker stop? Stopping the containers did not seem to be part of the workflow you mentioned. [edit: not applicable anymore]
Basically, you use a double double-quote, grep ""$APP_NAME"", but it seems this variable is not substituted (as the whole command 'sudo docker ps …' is single-quoted); according to your question, this variable is available locally, but not on the remote machine, so you may try writing:
CONTAINER=$(ssh -q $USER@$IP 'f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "'"$APP_NAME"'"')
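For completeness, here is a rough sketch of how the whole wrapper could tie the two steps together, assuming USER, IP and APP_NAME are set in the local environment and that sudo needs no password on the remote host:
#!/usr/bin/env bash
# Sketch: find the active container on the remote host, then open the remote console.
CONTAINER=$(ssh -q "$USER@$IP" 'f() { sudo docker ps --format "{{.Names}}" | grep "$1" | head -n 1; }; f "'"$APP_NAME"'"')

if [ -z "$CONTAINER" ]; then
  echo "No running container matching $APP_NAME found" >&2
  exit 1
fi

# Force a TTY so the interactive remote session works.
ssh -o 'RequestTTY force' "$USER@$IP" "sudo docker exec -i -t $CONTAINER /bin/bash -c './bin/app remote'"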
You can try this single command:
ssh -t $USER@$IP "docker exec -it \$(docker ps -a -q --filter name=/$APP_NAME) bash -c './bin/app remote'"
Note that the escaped \$(docker ps …) is expanded on the remote host, while $APP_NAME, sitting unescaped inside the local double quotes, is expanded locally before ssh runs.
You will need to redirect the command, with the local environment variable (APP_NAME), into the ssh command using <<<, like so:
CONTAINER=$(ssh -q $USER@$IP <<< 'sudo docker ps --format "{{.Names}}" | grep "$APP_NAME" | head -n 1 | xargs -I{} sudo docker stop {}')
I would like to execute commands on my RADIUS server (Amazon Linux) via Windows PowerShell.
These are the commands:
useradd -g radius-enabled username
yes -- -1 | sudo -u username google-authenticator -l testlabel -i testissuer -t -d -w 5 --no-rate-limit -f
The first command adds a user, and the second creates an MFA token for the created user.
This needs to be done via Windows PowerShell so that I can turn it into a PowerShell script later on.
First, I tried to manually SSH to the Linux server via the PowerShell module SSHSessions:
(I have installed the SSHSessions module and configured my sshd_config.)
New-SshSession -ComputerName ? -Username -Password
Enter-SshSession -ComputerName
[ServerIPAddress]: /home/username # :
It shows that I have SSHed to the server, and I am able to execute some commands such as "df -h",
but when I try to sudo, change directory, or execute other commands, it just hangs and I can't stop it.
What would be the proper/best way to execute this? I have tried searching for related topics, but nothing fit my situation. I decided to ask a question here to be more specific, and maybe gain reputation so I can upvote the other Stack Overflow answers that helped me.
Thanks!
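One way to approach this (a sketch, not tested against this exact setup) is to skip the interactive session entirely and run each command non-interactively over SSH. This assumes the built-in OpenSSH client (ssh.exe) is available on Windows, key-based authentication is set up, and the remote user can run these commands through sudo without a password prompt; the server address and user name below are placeholders:
# Sketch: run both commands non-interactively from PowerShell via ssh.exe.
$server  = "ec2-user@your-radius-server"
$newUser = "username"

# 1) Add the user to the radius-enabled group
ssh $server "sudo useradd -g radius-enabled $newUser"

# 2) Create the MFA token for the new user
ssh $server "yes -- -1 | sudo -u $newUser google-authenticator -l testlabel -i testissuer -t -d -w 5 --no-rate-limit -f"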
I have a private Azure Linux VM, meaning it can be accessed only from a jumpbox (access) VM. I need to deploy a script to this private VM. As this VM cannot even access any storage account/repo, I can't use the Custom Script Extension for script deployment. So I thought of deploying the script using az vm run-command invoke, by converting SomeScript.sh to a string and echoing it onto the virtual machine. The different pieces of my code are below:
SomeScript.sh
#!/bin/bash
#
# CommandToExecute: ./SomeScript.sh ${CUST_NO}
#
#some more code
The function that converts the .sh file to a string:
function getCommandToExecute()
{
  local scriptName=$1
  local commandToExecute
  local currentLocation=$(dirname "$0")
  local scriptFullPath="$currentLocation/Environment/VmScripts/$scriptName"
  mapfile < $scriptFullPath
  printf -v escapedContents "%q\n" "${MAPFILE[@]}"
  commandToExecute+="echo "$escapedContents" > /usr/myapps/$scriptName"
  echo "$commandToExecute"
}
vm run command:
az vm run-command invoke -g $resourceGroupName \
-n $vmName --command-id RunShellScript \
--scripts "#!/bin/bash\n ${commandToExecute}"
If I use the "#!/bin/bash\n ${commandToExecute}" part (with commandToExecute replaced by the string script) in the Run Command window in the Azure portal, the script works fine, but I can't make it work via run-command, due to this error:
\n[stdout]\n\n[stderr]\n/bin/sh: 1: /var/lib/waagent/run-command/download/133/script.sh: not found\n"
Any idea what is missing here, or whether there is a better alternative to handle this?
I think quoting the whole script and its deployment script for use with --scripts is a lot of work, and error-prone too. Luckily, there are easier alternatives for both quoting steps. The documentation of az vm run-command invoke --scripts states:
Use @{file} to load script from a file.
Therefore, you can do
deploymentScript() {
  echo '#! /usr/bin/env bash'
  printf 'tail -n+4 "${BASH_SOURCE[0]}" > %q\n' "$2"
  echo 'exit'
  cat "$1"
}
deploymentScript local.sh remote.sh > tmpDeploy.sh
az vm run-command invoke ... --scripts '@{tmpDeploy.sh}'
rm tmpDeploy.sh
Replace local.sh with the path of the local script you want to deploy and replace remote.sh with the remote path where the script should be deployed.
If you are lucky, then you might not even need tmpDeploy.sh. Try
az vm ... --scripts "@{<(deploymentScript local.sh remote.sh)}"
Some notes on the implementation:
The deployed script is an exact copy of your script file. Even embedded binary data is kept. The trick with tail $BASH_SOURCE is inspired by this answer.
The script can be arbitrarily long. Even huge scripts won't run into the error Argument list too long imposed by getconf ARG_MAX.
There was an issue with the escaped string, as I mentioned above. After correcting the code, it's all good now. My final code:
function getCommandToExecute()
{
  local scriptName=$1
  local commandToExecute
  local currentLocation=$(dirname "$0")
  local scriptFullPath="$currentLocation/Environment/VmScripts/$scriptName"
  local singleStringCommand=""

  mapfile -t < $scriptFullPath
  for line in "${MAPFILE[@]}"; do
    singleStringCommand+="$(printf '%s' "$line" | sed 's/[\"$]/\\&/g')"
    singleStringCommand+="\n"
  done

  commandToExecute+="echo "\"$singleStringCommand\"" > /usr/local/bin/$scriptName;"
  echo "$commandToExecute"
}
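For reference, here is a hedged sketch of how this function is then fed into the run-command call from the question (the exact final invocation isn't shown above, so treat this as an assumption):
# Sketch: build the command string and send it to the VM via RunShellScript.
# resourceGroupName and vmName are the same variables used earlier in the question.
commandToExecute=$(getCommandToExecute "SomeScript.sh")

az vm run-command invoke -g "$resourceGroupName" \
  -n "$vmName" --command-id RunShellScript \
  --scripts "$commandToExecute"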
I have searched a lot of topics about "user-data script is not working" over the past few days, but so far I haven't figured out what is happening in my case. Please help me figure out what happened, thanks a lot!
According to AWS User-data explanation:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts.
So I tried to pass my own user data when launching the instance; this is my user-data script:
#!/bin/bash
echo 'test' > /home/ec2-user/user-script-output.txt
But there is no file in this path: /home/ec2-user/user-script-output.txt
I checked /var/lib/cloud/instance/user-data.txt; the file exists and is the same as my user-data script.
I also checked the log in /var/log/cloud-init.log; there is no error message.
However, the user-data script does work if I launch a new instance with Amazon Linux (2014.09.01). I'm not sure what the difference is between my AMI (which is based on Amazon Linux) and plain Amazon Linux.
The only difference I saw is when I run this command:
sudo yum list installed | grep cloud-init
My AMI:
cloud-init.noarch 0.7.2-8.33.amzn1 @amzn-main
Amazon linux:
cloud-init.noarch 0.7.2-8.33.amzn1 installed
I'm not sure if this is the reason.
If you need more information, I'm glad to provide it. Please let me know what is happening with my own AMI and how to fix it.
many thanks
Update
I just found an answer in this post:
if I add #cloud-boothook at the top of the user-data file, it works!
#cloud-boothook
#!/bin/bash
echo 'test' > /home/ec2-user/user-script-output.txt
But still not sure why.
User data is run only at the first startup. As your image is a custom one, I suppose it has already been started once, and so user data has been deactivated.
For Windows, it can be re-enabled by checking a box in EC2 Service Properties. I'm currently looking at how to do that in an automated way at the end of the custom image creation.
For Linux, I suppose the mechanism is the same, and user data needs to be reactivated on your custom image.
The #cloud-boothook makes it work because it changes the script from the user-data mechanism to a cloud-boothook one that runs on each start.
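For Linux, a hedged sketch of that reactivation (assuming cloud-init is what runs the user data, as on Amazon Linux) is to clear cloud-init's per-instance state before creating the custom image, so the next boot is treated as a first boot:
# Sketch: reset cloud-init state so user_data runs again on the image's first boot.
# Paths are cloud-init defaults; adjust if your distribution differs.
sudo rm -rf /var/lib/cloud/instance /var/lib/cloud/instances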
EDIT:
Here is the code to reactivate user data at startup on Windows, using PowerShell:
$configFile = "C:\\Program Files\\Amazon\\Ec2ConfigService\\Settings\\Config.xml"
[xml] $xdoc = get-content $configFile
$xdoc.SelectNodes("//Plugin") |?{ $_.Name -eq "Ec2HandleUserData"} |%{ $_.State = "Enabled" }
$xdoc.SelectNodes("//Plugin") |?{ $_.Name -eq "Ec2SetComputerName"} |%{ $_.State = "Enabled" }
$xdoc.OuterXml | Out-File -Encoding UTF8 $configFile
$configFile = "C:\\Program Files\\Amazon\\Ec2ConfigService\\Settings\\BundleConfig.xml"
[xml] $xdoc = get-content $configFile
$xdoc.SelectNodes("//Property") |?{ $_.Name -eq "AutoSysprep"} |%{ $_.Value = "Yes" }
$xdoc.OuterXml | Out-File -Encoding UTF8 $configFile
(I know the question focuses on Linux, but it could help others.)
As I tested, there was some bootstrap data in the /var/lib/cloud directory.
After I cleared that directory, the user-data script worked normally.
rm -rf /var/lib/cloud/*
I have also faced the same issue on an Ubuntu 16.04 HVM AMI. I raised the issue with AWS support, but I still couldn't find the exact reason/bug behind it.
Still, I have something that might help you.
Before taking the AMI, remove the /var/lib/cloud directory (each time). Then, while creating the image, set it to no-reboot.
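With the AWS CLI, that could look like the sketch below (the instance ID and image name are placeholders):
# Sketch: create the AMI without rebooting the instance.
aws ec2 create-image --instance-id i-0123456789abcdef0 \
  --name "my-custom-ami" --no-reboot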
If these things still aren't working, you can test further by forcing the user data to run manually. Also tailf /var/log/cloud-init-output.log to watch the cloud-init status. It should end with something like modules:final for your user data to run; it should not get stuck on modules:config.
sudo rm -rf /var/lib/cloud/*
sudo cloud-init init
sudo cloud-init modules -m final
I'm not sure whether the above commands will work on CentOS or not; I have tested them on Ubuntu.
In my case, I also tried removing the /var/lib/cloud directory, but it still failed to execute the user data in our scenario. So I came up with a different solution: we created a script with the above commands and made that script run while the system boots.
I added the line below to /etc/rc.local to make that happen:
sudo bash /home/ubuntu/force-user-data.sh || exit 1
But here is the catch: it will execute the script on each boot, which will make your user data run on every single boot, just like #cloud-boothook. No worries, you can tweak it by having the script remove force-user-data.sh itself at the end. So your force-user-data.sh will look something like this:
#!/bin/bash
sudo rm -rf /var/lib/cloud/*
sudo cloud-init init
sudo cloud-init modules -m final
sudo rm -f /home/ubuntu/force-user-data.sh
exit 0
I would appreciate it if someone could shed some light on why it is unable to execute the user data.
This is the answer, as an example: ensure that the first line contains only #!/bin/bash.
#!/bin/bash
yum update -y
yum install -y httpd mod_ssl
service httpd start
chkconfig httpd on
I was having a lot of trouble with this. I'll provide a detailed walk-through.
My added wrinkle is that I'm using terraform to instantiate the hosts via a launch configuration and autoscaling group.
I could NOT get it to work by adding the script inline in lc.tf
user_data = <<DATA
"
#cloud-boothook
#!/bin/bash
echo 'some crap'\'
"
DATA
I could fetch it from user data,
wget http://169.254.169.254/latest/user-data
but noticed I was getting it with the quotes still in it.
This is how I got it to work: I moved to pulling it from a template instead since what you see is what you get.
user_data = "${data.template_file.bootscript.rendered}"
This means I also need to declare my template file like so:
data "template_file" "bootscript" {
template = "${file("bootscript.tpl")}"
}
But I was still getting an error in the cloud-init logs:
/var/log/cloud-init.log
[WARNING]: Unhandled non-multipart (text/x-not-multipart) userdata: 'Content-Type: text/cloud...'
Then I found this article about user-data formatting. That makes sense: if user data can come in multiple parts, maybe cloud-init needs the cloud-init directives in one place and the shell script in another.
So my bootscript.tpl looks like this:
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config.txt"
#cloud-config
cloud_final_modules:
- [scripts-user, always]
--//
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="userdata.txt"
#!/bin/bash
echo "some crap"
--//
#!/bin/bash
Do not leave any whitespace at the start of the line. Use the exact command. Otherwise it may run on an Amazon Linux AMI, but it will not run on RHEL.
On Ubuntu 16, removing /var/lib/cloud/* did not work. I removed only the instance entries from the folder /var/lib/cloud/, and then it ran fine for me.
I ran:
sudo rm /var/lib/cloud/instance
sudo rm -rf /var/lib/cloud/instances
Then I retried my user data script and it worked fine
I'm using CentOS, and the logic for user data there is simple:
In the file /etc/rc.local there is a call to an initial.sh script, but it checks for a flag first:
if [ -f /var/tmp/initial ]; then
  /var/tmp/initial.sh &
fi
initial.sh is the file that executes the user data, but at the end it deletes the flag. So, if you want your new AMI to execute user data again, just create the flag again before creating the image:
touch /var/tmp/initial
The only way I got it to work was to add #cloud-boothook before the #!/bin/bash.
This is a typical user-data script that installs the Apache web server on a newly created instance:
#cloud-boothook
#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
Without #cloud-boothook it does not work, but with it, it works. It seems that different users have different experiences; some are able to get it to work without it, and I'm not sure why.
Just add --// at the end of your user-data script, for example:
#!/bin/bash
#Upgrade ec2 instance
sudo yum update -y
#Start docker service
sudo service docker start
--//
User Data should execute fine without using #cloud-boothook (which is used to activate the User Data at the earliest possible time during the boot process).
I started a new Amazon Linux AMI and used your User Data, plus a bit extra:
#!/bin/bash
echo 'bar' > /tmp/bar
echo 'test' > /home/ec2-user/user-script-output.txt
echo 'foo' > /tmp/foo
This successfully created three files.
User Data scripts are executed as root, so it should have permission to create files in any location.
I notice that in your supplied code, one example refers to /home/ec2-user/user-script/output.txt (with a subdirectory) and one example refers to /home/ec2-user/user-script-output.txt (no subdirectory). The command would understandably fail if you attempt to create a file in a non-existent directory, but your "Update" example seems to show that it did actually work.
Also,
If we are using interactive commands like
sudo yum install java-1.8.0-devel
then we need to use flags like -y:
sudo yum install java-1.8.0-devel -y
You can find this in the EC2 documentation under Run commands at launch.
Ref: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-shell-scripts