I'm having trouble rebooting my EC2 instance from a cfn-init command. I have the following config key in my instance's CloudFormation::Init metadata.
dns-hostname:
  commands:
    dns-hostname:
      env: { publicDns: !Ref PublicDns }
      command: |
        old=$(hostname)
        sed "s|HOSTNAME=localhost.localdomain|HOSTNAME=$publicDns|" --in-place /etc/sysconfig/network
        echo HOSTNAME changed from \"$old\" to \"$publicDns\"
        reboot
      ignoreErrors: true
All the command is supposed to do is change the instance's hostname to the provided public DNS name. A reboot is required for this change to take effect, and since cfn-init doesn't know this, I have to include the actual call to reboot in the last line. Unfortunately, the build fails with the following log message (from /var/log/cfn-init.log):
2017-04-16 12:16:00,301 [DEBUG] Running command dns-hostname
2017-04-16 12:16:00,301 [DEBUG] Running test for command dns-hostname
2017-04-16 12:16:00,309 [DEBUG] Test command output: HOSTNAME will be changed to "bastion.example.com"
2017-04-16 12:16:00,309 [DEBUG] Test for command dns-hostname passed
2017-04-16 12:16:00,321 [ERROR] Command dns-hostname (old=$(hostname)
sed "s|HOSTNAME=localhost.localdomain|HOSTNAME=$publicDns|" --in-place /etc/sysconfig/network
echo HOSTNAME changed from \"$old\" to \"$publicDns\"
reboot
) failed
2017-04-16 12:16:00,321 [DEBUG] Command dns-hostname output: HOSTNAME changed from "ip-10-0-128-4" to "bastion.example.com"
/bin/sh: line 3: reboot: command not found
2017-04-16 12:16:00,321 [INFO] ignoreErrors set to true, continuing build
Clearly, the actual hostname change is not failing, just the call to reboot. I get the same error message if I try to use shutdown -r instead of reboot, and if I try to use an absolute path (/sbin/reboot), then it just hangs and stack creation times out. How are these very basic commands not found? Am I missing something simple here? Any help is appreciated!
EDIT: According to this post, when common commands are not available, it may be due to a broken PATH. And indeed, the CloudFormation::Init docs say that using the env property will overwrite the current environment, potentially including PATH. However, I added a line to my template to echo $PATH inside the command, and that yielded: "/usr/local/bin:/bin:/usr/bin". So my PATH still includes the path to the bash executable, and I am still confused...
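Looking at that PATH more closely: reboot lives in /sbin (or /usr/sbin), and neither directory is listed, which would explain "command not found" even though /bin (and thus the shell itself) is still reachable. If keeping env is important, one possible fix (an untested sketch, not something from my original template) would be to include PATH in the env map yourself:
env:
  publicDns: !Ref PublicDns
  PATH: /sbin:/usr/sbin:/bin:/usr/bin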
Well, it looks like the env property was the issue. Even though I thought that my PATH still had the necessary paths to find the bash executable and thereby run the reboot command, it wasn't until I removed the env property from my template that everything was able to build successfully. I still had some trouble getting the reboot command to behave as expected, as the command doesn't seem to run as soon as you call it. For instance, the following code will output numbers 1-10 before rebooting.
echo 1
echo 2
echo 3
echo 4
echo 5
reboot
echo 6
echo 7
echo 8
echo 9
echo 10
So the instance would apparently try to reboot while in the middle of running other commands from later CloudFormation::Init configs, causing cfn-init to fail. My solution was simply to run the configs whose commands blocks manually call reboot after all other configs. Long story short, here is the working template snippet:
other-config:
  ...
# This config comes after the other b/c it manually calls 'reboot'
dns-hostname:
  commands:
    dns-hostname:
      command: !Sub |
        publicDns=${PublicDns}
        old=$(hostname)
        sed "s|HOSTNAME=localhost.localdomain|HOSTNAME=$publicDns|" --in-place /etc/sysconfig/network
        echo HOSTNAME changed from \"$old\" to \"$publicDns\"
        reboot
      ignoreErrors: true
# Any other configs that call reboot can follow
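For reference, the relative ordering of the configs is what matters here, and that order is declared with a configSet; a minimal sketch using standard AWS::CloudFormation::Init syntax and the config names from above:
AWS::CloudFormation::Init:
  configSets:
    default:
      - other-config
      - dns-hostname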
I am new to Linux in general, so I may not be aware of certain things. I have tried a multitude of solutions, but I haven't succeeded in running my discord.js bot on startup. I have used rc.local with other scripts, like writing the date to a file on startup (date > <file-path>), and that works properly, but somehow running the bot on startup fails, and I can't read any errors either.
rc.local file:
#!/bin/sh -e
_IP=$(hostname -I) || true
if [ "$_IP" ]; then
  printf "My IP address is %s\n" "$_IP"
fi
/bin/sh /home/pi/superscript.sh
exit 0
superscript.sh file:
#!/bin/sh
date > /home/pi/boot.log
( cd /home/pi/Desktop/discord-bot; /usr/bin/node /home/pi/Desktop/discord-bot/index.js ) &
The bot works completely fine when I execute it manually (node ~/Desktop/discord-bot/index.js &), and it even works if I manually execute superscript.sh, so I can't really find the problem. Can anyone help me out, please? I would really appreciate it.
Thank you.
The easiest way is to use crontab to execute commands on startup:
# Edit the crontab
crontab -e
# add the following line to start script-file.sh on startup
@reboot script-file.sh
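Applied to the bot above, a minimal sketch of the entry (assuming node really is at /usr/bin/node, as superscript.sh suggests, and reusing its boot.log path; the redirection also captures the errors you couldn't see):
@reboot /usr/bin/node /home/pi/Desktop/discord-bot/index.js >> /home/pi/boot.log 2>&1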
reference: https://www.simplified.guide/linux/automatically-run-program-on-startup
According to the systemd-run documentation, the --setenv option can be used to "Run the service process with the specified environment variables set".
However, it seems like the environment variable is actually not available to the process:
# systemd-run -t --setenv=TEST=Success echo TEST:$TEST
Running as unit run-20705.service.
Press ^] three times within 1s to disconnect TTY.
TEST:
Am I misunderstanding the usage of the --setenv option? Running systemd version 219.
You need to prevent your local shell from expanding $TEST before the systemd-run command executes.
Also, echo is incapable of expanding environment variables on its own; a shell is needed inside the systemd unit to expand TEST.
So you need to run the following:
systemd-run -t --setenv=TEST=Success bash -c 'echo TEST:$TEST'
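To double-check that the variable really is present in the unit's environment without involving a shell at all, printenv reads the environment directly (a sketch with the same flags as above):
systemd-run -t --setenv=TEST=Success printenv TEST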
Excuse me if the subject is vague, but I have tried to describe my problem to the best of my abilities. I have a Raspberry Pi which I want to deploy to using Codeship. Rsyncing the files works perfectly, but when I try to restart my application using pm2, my problem occurs.
I have installed node and pm2 using the node version manager NVM.
ssh pi@server.com 'source /home/pi/.bashrc; cd project; pm2 restart app.js -x -- --prod'
bash: pm2: command not found
I have even added:
shopt -s expand_aliases at the bottom of my .bashrc, but it doesn't help.
How can I make it restart my application after I have done a deploy? Thanks in advance for your sage advice and better wisdom!
EDIT 1: My .bashrc http://pastie.org/10529200
My $PATH: /home/pi/.nvm/versions/node/v4.2.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
EDIT 2: I used the full path to pm2, /home/pi/.nvm/versions/node/v4.2.0/bin/pm2, and now I get the following error: /usr/bin/env: node: No such file or directory
It seems that even if I provide the full path, node isn't executed.
I think the problem is the assumption that the shell executing node has a full environment like an interactive ssh session does. Most likely this is not the case.
When an SSH session spawns a shell, it goes through a lot of gyrations to build an environment suitable to work with interactively: things like inheriting from the login process, reading /etc/profile, reading ~/.profile. But in the cases where you're executing bash directly, this isn't always guaranteed. In fact, the $PATH might be completely empty.
When /usr/bin/env node executes, it looks for node in your $PATH, which in a non-interactive shell could be anything, or empty.
Most systems have a default PATH=/bin:/usr/bin; typically /usr/local/bin is not included in the default environment.
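You can see the difference for yourself (a quick check, not part of the original setup; the single quotes keep your local shell from expanding $PATH before it is sent to the remote side):
# PATH in a full interactive login session
ssh pi@server.com
echo $PATH
# PATH as seen by a non-interactive remote command -- often much shorter
ssh pi@server.com 'echo $PATH'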
You could attempt to force a login with ssh using ssh … '/bin/bash -l -c "…"'.
You can also write a specialized script on the server that knows how the environment should be when executed outside of an interactive shell:
#!/bin/bash
# Example shell script; filename: /usr/local/bin/my_script.sh
export PATH=$PATH:/usr/local/bin
export NODE_PATH=/usr/local/share/node
export USER=myuser
export HOME=/home/myuser
source $HOME/.nvm/nvm.sh
cd /usr/bin/share/my_script
nvm use 0.12
/usr/bin/env node ./script_name.js
Then call it through ssh: ssh … '/usr/local/bin/my_script.sh'.
Beyond these ideas I don't see how to help further.
Like Sukima said, the likelihood is that this is due to an environment issue - SSH'ing into a server does not set up a full environment. You can, however, get around much of this by simply calling /etc/profile yourself at the start of your command using the . operator (which is the same as the "source" command):
ssh pi@server.com '. /etc/profile ; cd project; pm2 restart app.js -x -- --prod'
/etc/profile should itself be set up to call the .bashrc of the relevant user, which is why I have removed that part. I used to have to do this quite a lot for quick proof-of-concept scripts at a previous workplace. I don't know if it would be considered a nasty hack for a more permanent script, but it certainly works, and would require minimal modification to your existing script should that be an issue.
For me, I have to load nvm, as I installed node and yarn using nvm.
To load nvm during remote ssh execution, we call:
ssh user@host 'source ~/.nvm/nvm.sh; other_commands_here'
Try:
ssh pi@server.com 'bash -l -c "source /home/pi/.bashrc; cd project; pm2 restart app.js -x -- --prod"'
You need to load some environment values using source or the dot command (.). Here is an example:
ssh pi@server.com '. /home/pi/.nvm/nvm.sh; cd project; pm2 restart app.js -x -- --prod'
What worked for me was adding this to my .bash_profile:
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Source: https://stackoverflow.com/a/820533/1824444
I have a Linux shell script that works perfectly when run from the command line, but when scheduled to run via crontab it does not give the desired results.
The script is quite simple, it checks to see if mysql-proxy is running or not by checking if its pid is found using the pidof command. If found to be off, it attempts to start the proxy.
# Check if mysql proxy is off
# if found off, attempt to start it
if pidof mysql-proxy
then
    echo "Proxy running."
else
    echo "Proxy off ... attempting to restart"
    /usr/local/mysql-proxy/bin/mysql-proxy -P 172.20.10.196:3306 --daemon --proxy-backend-addresses=172.20.10.194 --proxy-backend-addresses=172.20.10.195
    if pidof mysql-proxy
    then
        echo "Proxy started"
    else
        echo "Proxy restart failed"
    fi
fi
echo "==============================================="
The script is saved in a file, check-sql-proxy.sh, and has permissions set to 777. When I run the script from the command line (sh check-sql-proxy.sh), it gives the desired output.
4066
Proxy running.
===============================================
The script is also scheduled to run every 5 minutes in crontab as
*/5 * * * * bash /root/auto-restart-mysql-proxy.sh > /dev/sql-proxy-restart-log.log
However, when I look at the sql-proxy-restart-log.log file, it contains the output:
Proxy off ... attempting to restart
Proxy restart failed
===============================================
It seems that the pidof command fails to return the pid of the running application, which sends the flow of the script into the else branch.
I am unable to figure out how to resolve this, since when I run the script manually, it works fine.
Can anyone help with what I am missing with regards to permissions or settings?
Thanks in advance.
Mudasser
Check that the shell is what you think it is (usually /bin/sh, not bash).
Also check the PATH environment variable. Usually, for cron jobs it is good practice to fully qualify all paths to binaries, e.g.
#!/bin/bash
# Check if mysql proxy is off
# if found off, attempt to start it
if /bin/pidof mysql-proxy
etc.
Try pidof /usr/local/mysql-proxy/bin/mysql-proxy (full path to executable)
In general, try to use the same command name that was used to start the instance of mysql-proxy.
The problem seems to be that the crontab environment is not the same as yours.
You have two simple and proper solutions:
In the first lines of your crontab:
PATH=/foo:/bar:/qux
SHELL=/bin/bash
or
source ~/.bashrc
in your scripts.
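Putting both together, a minimal sketch of the crontab (the PATH entries are guesses based on where the binaries in the question appear to live; adjust to your system):
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/bin:/usr/local/mysql-proxy/bin
*/5 * * * * bash /root/auto-restart-mysql-proxy.sh > /dev/sql-proxy-restart-log.log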
I have a webservice that runs on multiple different remote Red Hat machines. Whenever I want to update the service, I sync down the new webservice source code, written in Perl, from a version control depot (I use Perforce) and restart the service using that newly synced Perl code. I find it too tedious to log in to the remote machines one by one and run that series of commands to restart the service manually. So I wrote a bash script, update.sh, like the one below, in order to "do it one time in one place, update all machines". I run this shell script on my local machine. But it seems that it won't work. It only executes the first command, sudo -u webservice_username -i, as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The export P4USER=myname is for the Perforce client.)
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
How do I know only the first command is executed? Well, because after I input the password for ssh on my local machine, it shows:
Your environment has been modified. Please check /tmp/webservice.env.
And it just gets stuck there; it never returns.
As suggested by a commenter, I added -t to ssh:
#!/bin/sh
ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
This lets the local command line return. But it seems weird: it cannot cd to that dir; it says "cd: dir: No such file or directory", and it also says "p4: command not found". So it looks like the sudo -u command executed with no effect, and the export command either did not execute or executed with no effect.
A detailed local log file is like below:
Your environment has been modified. Please check /tmp/dir/.env.
bash: line 0: cd: dir: No such file or directory
bash: p4: command not found
bash: line 0: cd: bin: No such file or directory
bash: ./prog: No such file or directory
tail: cannot open `../logs/service.log' for reading: No such file or directory
tail: no files remaining
Instead of connecting via ssh and then immediately changing users, can you not use something like ssh -t webservice_username@remotehost1 to connect with the desired username to begin with? That would avoid needing to sudo altogether.
If that isn't a possibility, try wrapping up all of the commands that you want to run in a shell script and store it on the remote machine. If you can get your task working from a script, then your ssh call becomes much simpler and should encounter fewer problems:
ssh myname@remotehost1 '/path/to/script'
For easily updating this script, you can write a short script for your local machine that uploads the most recent version via scp and then uses ssh to invoke it.
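A minimal sketch of that local helper (the script name and remote path are hypothetical; adjust to taste):
#!/bin/sh
# upload the latest restart script, then invoke it remotely
scp restart_webservice.sh myname@remotehost1:/usr/local/bin/restart_webservice.sh
ssh myname@remotehost1 '/usr/local/bin/restart_webservice.sh'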
Note that when you run:
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
Your ssh session runs sudo -u webservice_username -i, waits for it to exit, and then runs the rest of the commands; it does not execute sudo and then run the following commands inside it. This has to do with the context in which you're running the series of commands. All the commands get executed in the shell of myname@remotehost1, and all sudo -u webservice_username -i does is start a shell for webservice_username; it doesn't actually run any of the other commands as that user.
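If you do want to keep it as a one-liner, run the whole command string through a single shell as the target user; bash -l -c gives that user's login environment, so p4 and prog stand a chance of being found. A hedged sketch reusing the question's placeholder paths:
ssh -t myname@remotehost1 "sudo -u webservice_username bash -lc 'export P4USER=myname; cd dir && p4 sync && cd bin && ./prog --domain=config_file restart'"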
Really, the best solution here is what bta said: write a script, rsync/scp it to the destination, and then run it using sudo.
The export command simply does not work with ssh like this. What you want to do instead is modify the remote ~/.bashrc; it will be sourced each time you log in via ssh.