systemd-run does not set environment variables when using --setenv

According to the systemd-run documentation, the --setenv option can be used to "Run the service process with the specified environment variables set".
However, it seems like the environment variable is actually not available to the process:
# systemd-run -t --setenv=TEST=Success echo TEST:$TEST
Running as unit run-20705.service.
Press ^] three times within 1s to disconnect TTY.
TEST:
Am I misunderstanding the usage of the --setenv option? Running systemd version 219.

You need to prevent your local bash from expanding $TEST before systemd-run is even invoked.
Also, echo does not expand environment variables itself; a shell is needed inside the systemd-spawned process to expand TEST.
So you need to run the following:
systemd-run -t --setenv=TEST=Success bash -c 'echo TEST:$TEST'
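If you only want to confirm that the variable reaches the unit's environment, a simpler check (a minimal sketch) is to run env inside the transient unit:
systemd-run -t --setenv=TEST=Success env
The output should include a TEST=Success line alongside the rest of the unit's environment.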


Running bash script over SSH [duplicate]

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and B. My script is on machine A which will run some of my code on a remote machine, machine B.
The local and remote computers can be either Windows or Unix based system.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (Note the quoted heredoc delimiter.)
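As a quick aside on why the quoted delimiter matters (a minimal sketch; user@host is a placeholder): with an unquoted delimiter the local shell expands variables before anything is sent, while the quoted form sends the body verbatim so expansion happens on the remote side.
# Unquoted delimiter: $HOME is expanded locally, so the remote host echoes the local value
ssh user@host <<ENDSSH
echo $HOME
ENDSSH
# Quoted delimiter: the body is sent as-is, so $HOME is expanded by the remote shell
ssh user@host <<'ENDSSH'
echo $HOME
ENDSSH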
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that a heredoc just sends stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works. ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case echo is our command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values and sending everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values, thanks to local command line evaluation, which defines environment variables on the remote server; it then executes the bash -s command using those variables. Voila.
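Putting it together, a complete sketch (the ARG1/ARG2 values here are made up):
# local values, expanded by the local shell on the ssh command line
ARG1=release-1.2 ARG2=/tmp/deploy
ssh user@host ARG1="$ARG1" ARG2="$ARG2" 'bash -s' <<'ENDSSH'
# runs on the remote host; ARG1 and ARG2 were defined just before bash -s
echo "deploying $ARG1 to $ARG2"
ENDSSH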
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in that user's ~/.ssh directory on hostB. That allows passwordless authentication (if it is accepted as an auth method in the ssh server's configuration).
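For completeness, a minimal sketch of setting up that key-based login (the key type and hostB are just examples):
ssh-keygen -t ed25519        # on hostA, if you don't already have a key pair
ssh-copy-id user@hostB       # appends your public key to ~/.ssh/authorized_keys on hostB
ssh user@hostB "ls -la"      # should now run without a password prompt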
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a Tcl extension known as Expect, which is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd /path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running anything on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the backquoted command will cause errors, because the backquotes get expanded by the local shell.
The command below solves this problem:
ssh user@host '
temp=`ls -a`
echo $temp
'
If the script is short and is meant to be embedded inside your script and you are running under bash shell and also bash shell is available on the remote side, you may use declare to transfer local context to remote. Define variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then inside a here document read by bash -s you can use declare -p to transfer the variable values and use declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and functions are properly transferred. You may just write the script locally; usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short script and is safe: it should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote linux machine using plink or ssh.
It will work if the script has multiple lines on linux.
However, if you are trying to run a batch script located on a local Linux/Windows machine against a remote Windows machine, and the script consists of multiple lines, then
plink root@MachineB -m local_script.bat
won't work. Only the first line of the script will be executed. This is probably a limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines using the "&&" separator, as suggested in https://stackoverflow.com/a/8055390/4752883, in your local_script.bat file:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by
@JasonR.Coombs: https://stackoverflow.com/a/2732991/4752883 with:
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a wrapper batch script that encapsulates the plink command, as pointed out here by @Martin: https://stackoverflow.com/a/32196999/4752883:
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This expect script SSHes into a target remote machine and runs some command there. Do not forget to install expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
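A combined invocation might look like this (a sketch; the hostnames are placeholders, and -n assumes key-based authentication is already configured):
runoverssh -n -s localscript.sh deploy host1 host2 host3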
If it's a single script, the solutions above are fine.
For anything larger I would set up Ansible to do the job. It works in the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands and supports multiple lines via the '\n' escape character.
For example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>#<host> 'bash -s'
See don't forget to add bash -s
There is another approach: you can copy your script to the host with the scp command and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.

different ssh behavior from crond

I've been pulling my hair out on this one for several hours now. I welcome any new ideas on where to look next.
The objective is to login to a custom application CLI over SSH and then drop down a debug shell on the far-end device using one of the custom CLI commands. On the client side I'm using CentOS minimal and running ssh as follows:
Working case:
[user@ashleys-xpvm ws]$ ssh -p8222 admin@192.168.56.20
admin@192.168.56.20's password:
Welcome to CLI
admin connected from 172.29.33.108 using ssh on scm2
TRAN39# debug-utils shell
device#scm2:~$
The ssh client session accesses the custom CLI using the application-specific port 8222. Once inside the CLI, we drop down to the bash shell using the 'debug-utils shell' command.
This sequence was scripted with Python/pexpect and that worked fine when the script was launched from the user's command line. The problem arose when the script was moved to the crontab to be run automatically by crond. In the latter case, the script fails in a peculiar way.
Following the recommendation from this post: How to simulate the environment cron executes a script with? I launched a new shell on the client machine with the same environment variables as what the cron job uses and I was able to manually reproduce the same problem that the automatic cron job was running into.
With the cron environment set, the far-end device now throws the following error at the point where we issue the command to drop into the device's bash shell:
sh-4.2$ ssh -p8222 admin@192.168.56.20
admin@192.168.56.20's password:
Welcome to CLI
admin connected from 172.29.33.108 using ssh on scm2
TRAN39# debug-utils shell
error: failed to decode arguments
TRAN39#
Once I had the problem reproduced, I setup two terminals, one with the working environment variables and the other with the failing environment variables. I ran ssh from both terminals with '-vvv' flag and compared the debug output between the two.
The two outputs were identical except for where they step through the environment variables to determine what to send to the SSH server (obviously), and the 'bits set' lines were slightly different. I looked at the environment variable lines and I could see that ssh is ignoring all of them except for LANG, which is identical in both the working case and the failing case.
I'm at a loss now for why the ssh server at the far-end device is behaving differently between these two client-side environment settings.
Here is the working environment:
[user@centos_vm ws]$ env
XDG_SESSION_ID=294
HOSTNAME=centos_vm
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.56.20 52795 22
SELINUX_USE_CURRENT_RANGE=
OLDPWD=/home/user
SSH_TTY=/dev/pts/4
USER=user
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
MAIL=/var/spool/mail/user
PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/user/.local/bin:/home/user/bin
PWD=/home/user/ws
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/user
LOGNAME=user
SSH_CONNECTION=192.168.56.20 52795 192.168.56.101 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/env
[user@centos_vm ws]$
...and here is the failing (i.e. cron) environment:
sh-4.2$ env
XDG_SESSION_ID=321
SHELL=/bin/sh
USER=user
PATH=/usr/bin:/bin
PWD=/home/user/ws
LANG=en_US.UTF-8
HOME=/home/user
SHLVL=2
LOGNAME=user
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/env
OLDPWD=/home/user
sh-4.2$
I'm running out of my depth on ssh debugging at this point so any guidance on where to look next is greatly appreciated.
Usually ssh without a command specified (ssh user@host) passes the value of TERM on the local host to the remote server. For example:
# TERM=foo ssh 127.0.0.1
bash-4.4# echo $TERM
foo
bash-4.4#
In a crontab, crond by default will not set the TERM variable, so after the ssh login TERM will be set to dumb (which is not fully functional). See this example:
# (unset TERM; ssh 127.0.0.1)
bash-4.4# echo $TERM
dumb
bash-4.4# clear
TERM environment variable not set.
bash-4.4#
In your case it sounds like the remote application requires a more functional TERM so explicitly setting it to TERM=xterm (which will be passed to the remote server) in crontab would fix it.
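Concretely, the crontab fix could look something like this (a sketch; the script path, schedule, and log location are placeholders for your setup):
TERM=xterm
*/5 * * * * /home/user/ws/cli_debug_script.py >> /home/user/ws/cli_debug.log 2>&1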
Note that ssh with a command (ssh user@host command...) will not allocate a pty on the remote server, so the local TERM will not be passed. To force creating a pty and passing the variable we must use ssh -t. See example:
# echo $TERM
dtterm
# ssh 127.0.0.1 'tty; echo $TERM'
not a tty
dumb
# ssh -t 127.0.0.1 'tty; echo $TERM'
/dev/pts/8
dtterm
#
Found Dumb terminals on Wikipedia:
Dumb terminals are those that can interpret a limited number of control codes (CR, LF, etc.) but do not have the ability to process special escape sequences that perform functions such as clearing a line, clearing the screen, or controlling cursor position. In this context dumb terminals are sometimes dubbed glass Teletypes, for they essentially have the same limited functionality as does a mechanical Teletype. This type of dumb terminal is still supported on modern Unix-like systems by setting the environment variable TERM to dumb. Smart or intelligent terminals are those that also have the ability to process escape sequences, in particular the VT52, VT100 or ANSI escape sequences.

Rebooting From CloudFormation::Init Command

I'm having trouble rebooting my EC2 instance from a cfn-init command. I have the following config key in my instance's CloudFormation::Init metadata.
dns-hostname:
  commands:
    dns-hostname:
      env: { publicDns: !Ref PublicDns }
      command: |
        old=$(hostname)
        sed "s|HOSTNAME=localhost.localdomain|HOSTNAME=$publicDns|" --in-place /etc/sysconfig/network
        echo HOSTNAME changed from \"$old\" to \"$publicDns\"
        reboot
      ignoreErrors: true
All the command is supposed to do is change the instance's hostname to the provided public DNS name. A reboot is required for this change to take effect, and since cfn-init doesn't know this, I have to include the actual call to reboot in the last line. Unfortunately, the build fails with the following log message (from /var/log/cfn-init.log):
2017-04-16 12:16:00,301 [DEBUG] Running command dns-hostname
2017-04-16 12:16:00,301 [DEBUG] Running test for command dns-hostname
2017-04-16 12:16:00,309 [DEBUG] Test command output: HOSTNAME will be changed to "bastion.example.com"
2017-04-16 12:16:00,309 [DEBUG] Test for command dns-hostname passed
2017-04-16 12:16:00,321 [ERROR] Command dns-hostname (old=$(hostname)
sed "s|HOSTNAME=localhost.localdomain|HOSTNAME=$publicDns|" --in-place /etc/sysconfig/network
echo HOSTNAME changed from \"$old\" to \"$publicDns\"
reboot
) failed
2017-04-16 12:16:00,321 [DEBUG] Command dns-hostname output: HOSTNAME changed from "ip-10-0-128-4" to "bastion.example.com"
/bin/sh: line 3: reboot: command not found
2017-04-16 12:16:00,321 [INFO] ignoreErrors set to true, continuing build
Clearly, the actual hostname change is not failing, just the call to reboot. I get the same error message if I try to use shutdown -r instead of reboot, and if I try to use an absolute path (/sbin/reboot), then it just hangs and stack creation times out. How are these very basic commands not found? Am I missing something simple here? Any help is appreciated!
EDIT: According to this post, when common commands are not available, it may be due to a screwed-up PATH. And indeed, the CloudFormation::Init docs say that using the env property will overwrite the current environment, potentially including PATH. However, I added a line to my template to echo $PATH inside the command, and that yielded: "/usr/local/bin:/bin:/usr/bin". So my PATH still includes the path to the bash executable, and I am still confused...
Well, it looks like the env property was the issue. Even though I thought that my PATH still had the necessary paths to find the bash executable and thereby run the reboot command, it wasn't until I removed the env property from my template that everything was able to build successfully. I still had some trouble getting the reboot command to behave as expected, as the command doesn't seem to run as soon as you call it. For instance, the following code will output numbers 1-10 before rebooting.
echo 1
echo 2
echo 3
echo 4
echo 5
reboot
echo 6
echo 7
echo 8
echo 9
echo 10
So the instance would apparently try to reboot while in the middle of running other commands from later CloudFormation::Init configs, causing cfn-init to fail. My solution to this was just to run the configs whose commands blocks manually call reboot after all other configs. Long story short, here is the working template snippet:
other-config:
  ...
# This config comes after the other b/c it manually calls 'reboot'
dns-hostname:
  commands:
    dns-hostname:
      command: !Sub |
        publicDns=${PublicDns}
        old=$(hostname)
        sed "s|HOSTNAME=localhost.localdomain|HOSTNAME=$publicDns|" --in-place /etc/sysconfig/network
        echo HOSTNAME changed from \"$old\" to \"$publicDns\"
        reboot
      ignoreErrors: true
# Any other configs that call reboot can follow
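If you want to make that ordering explicit rather than relying on config placement, a configSets entry along these lines should work (a sketch using the config names from the snippet above; cfn-init must then be invoked with -c default):
AWS::CloudFormation::Init:
  configSets:
    default:
      - other-config
      - dns-hostname   # the config that reboots runs last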

bash doesn't load node on remote ssh command

Excuse me if the subject is vague, but I tried to describe my problem to the best of my ability. I have my Raspberry Pi which I want to deploy to using Codeship. Rsyncing the files works perfectly, but when I try to restart my application using pm2 my problem occurs.
I have installed node and pm2 using the node version manager NVM.
ssh pi@server.com 'source /home/pi/.bashrc; cd project; pm2 restart app.js -x -- --prod'
bash: pm2: command not found
I have even added shopt -s expand_aliases at the bottom of my .bashrc, but it doesn't help.
How can I make it restart my application after I have done a deploy? Thanks in advance for your sage advice and better wisdom!
EDIT 1: My .bashrc http://pastie.org/10529200
My $PATH: /home/pi/.nvm/versions/node/v4.2.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
EDIT 2: I tried the full path to pm2, /home/pi/.nvm/versions/node/v4.2.0/bin/pm2, and now I get the following error: /usr/bin/env: node: No such file or directory
It seems that even if I provide the full path, node isn't executed.
I think the problem is the misinterpretation that the shell executing node has a full environment like an interactive ssh session does. Most likely this is not the case.
When a SSH session spawns a shell it goes through a lot of gyrations to build an environment suitable to work with interactively. Things like inheriting from the login process, reading /etc/profile, reading ~/.profile. But in the cases where you're executing bash directly this isn't always guaranteed. In fact the $PATH might be completely empty.
When /usr/bin/env node executes, it looks for node in your $PATH, which in a non-interactive shell could be anything or empty.
Most systems have a default PATH=/bin:/usr/bin; typically /usr/local/bin is not included in the default environment.
You could attempt to force a login with ssh using ssh … '/bin/bash -l -c "…"'.
You can also write a specialized script on the server that knows how the environment should be when executed outside of an interactive shell:
#!/bin/bash
# Example shell script; filename: /usr/local/bin/my_script.sh
export PATH=$PATH:/usr/local/bin
export NODE_PATH=/usr/local/share/node
export USER=myuser
export HOME=/home/myuser
source $HOME/.nvm/nvm.sh
cd /usr/bin/share/my_script
nvm use 0.12
/usr/bin/env node ./script_name.js
Then call it through ssh: ssh … '/usr/local/bin/my_script.sh'.
Beyond these ideas I don't see how to help further.
Like Sukima said, the likelihood is that this is due to an environment issue - SSH'ing into a server does not set up a full environment. You can, however, get around much of this by simply calling /etc/profile yourself at the start of your command using the . operator (which is the same as the "source" command):
ssh pi@server.com '. /etc/profile ; cd project; pm2 restart app.js -x -- --prod'
/etc/profile should itself be set up to call the .bashrc of the relevant user, which is why I have removed that part. I used to have to do this quite a lot for quick proof-of-concept scripts at a previous workplace. I don't know if it would be considered a nasty hack for a more permanent script, but it certainly works, and would require minimal modification to your existing script should that be an issue.
For me, I have to load nvm, as I installed node and yarn using nvm.
To load nvm during ssh remote execution, we call:
ssh :user@:host 'source ~/.nvm/nvm.sh; :other_commands_here'
Try:
ssh pi@server.com 'bash -l -c "source /home/pi/.bashrc; cd project; pm2 restart app.js -x -- --prod"'
You should load some environment values with "source" or the dot command ".". Here is an example:
ssh pi@server.com '. /home/pi/.nvm/nvm.sh; cd project; pm2 restart app.js -x -- --prod'
What worked for me was adding this to my .bash_profile:
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Source: https://stackoverflow.com/a/820533/1824444

Shell Script not running properly

I have a linux shell script that when run from command line works perfectly but when scheduled to run via crontab, it does not give desired results.
The script is quite simple, it checks to see if mysql-proxy is running or not by checking if its pid is found using the pidof command. If found to be off, it attempts to start the proxy.
# Check if mysql proxy is off
# if found off, attempt to start it
if pidof mysql-proxy
then
    echo "Proxy running."
else
    echo "Proxy off ... attempting to restart"
    /usr/local/mysql-proxy/bin/mysql-proxy -P 172.20.10.196:3306 --daemon --proxy-backend-addresses=172.20.10.194 --proxy-backend-addresses=172.20.10.195
    if pidof mysql-proxy
    then
        echo "Proxy started"
    else
        echo "Proxy restar failed"
    fi
fi
echo "==============================================="
The script is saved in a file check-sql-proxy.sh and has permissions set to 777. When I run the script from command line (sh check-sql-proxy.sh) it gives the desired output.
4066
Proxy running.
===============================================
The script is also scheduled to run every 5 minutes in crontab as
*/5 * * * * bash /root/auto-restart-mysql-proxy.sh > /dev/sql-proxy-restart-log.log
However, when I see the sql-proxy-restart-log.log file it contains the output:
Proxy off ... attempting to restart
Proxy restar failed
===============================================
It seems that the pidof command fails to return the pid of the running application, which sends the flow of the script into the else branch.
I am unable to figure out how to resolve this since when I run the script manually, it works fine.
Can anyone tell me what I am missing with regard to permissions or settings?
Thanks in advance.
Mudasser
Check that the shell is what you think it is (usually /bin/sh, not bash)
Also check the PATH environment variable. Usually, for cron jobs it is good practice to fully qualify all paths to binaries, e.g.
#!/bin/bash
# Check if mysql proxy is off
# if found off, attempt to start it
if /bin/pidof mysql-proxy
etc.
Try pidof /usr/local/mysql-proxy/bin/mysql-proxy (full path to executable)
In general, try to use the same command name as was used to start the instance of mysql-proxy.
The problem seems to be that the crontab environment is not the same as yours.
You have two simple and proper solutions:
In the first lines of your crontab:
PATH=/foo:/bar:/qux
SHELL=/bin/bash
or
source ~/.bashrc
in your scripts.
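For this particular script, the crontab could therefore look something like this (a sketch; adjust the PATH entries and the log location to your system, and note that pidof typically lives in /sbin):
SHELL=/bin/bash
PATH=/usr/local/mysql-proxy/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
*/5 * * * * bash /root/auto-restart-mysql-proxy.sh > /var/log/sql-proxy-restart-log.log 2>&1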
