I have a bash script, let's say test.sh. This script contains the following:
#!/bin/bash
echo "khaled"
ads2 svcd&
This script simply prints my name (just for test purposes) and executes the ads-service application in the background. When I run the script on my Ubuntu machine, it works correctly. As a test I checked which processes were running on the system.
As you can see, ads2 runs and has process ID 12319.
Now what I'm trying to do is to run the script on the ubuntu, however remotely from a windows pc.
Therefore i opened command-line on windows and executed the following command:
ssh nvidia@<ubuntu-ip-address> ~/test.sh
And I get the following output.
As you can see, the script runs and prints khaled, however on the Windows command line, and what I want is for the script to be executed on the Ubuntu machine. This would explain why the line ads2 svcd& does not do anything, neither on Windows (which makes sense, since ads2 is installed on Ubuntu) nor on Linux.
So how can I execute the script on Ubuntu?
Thanks in advance
Use the full path to start ads2. When using remote SSH your environment variables may be different than in a local shell.
#!/bin/bash
echo "khaled"
/home/nvidia/ads2 svcd&
Not sure where ads2 is located.
Try the following in a local shell on your Ubuntu machine to locate it:
command -v ads2
You may also need nohup to persist the process beyond the life of the SSH session.
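A minimal sketch combining both suggestions (assuming ads2 really lives in /home/nvidia; adjust the path to whatever command -v reports):
#!/bin/bash
echo "khaled"
# nohup detaches ads2 from the SSH session so it survives after you disconnect
nohup /home/nvidia/ads2 svcd >> /home/nvidia/ads2.log 2>&1 &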
If you have the script on the remote server and you want to run it, you would quote the remote command:
ssh user@server './test/file.sh'
The script's output would be sent to your local machine, as if you ran the command from your local machine.
My requirement is to save history of the commands into a file called history_yymmdd.txt by running the following command on a remote server.
history > history_20170523.txt
I tried with the following command, but it was creating a blank file on remote server(10.12.13.14).
ssh 10.12.13.14 "history > history_20170523.txt"
When I log in to the remote server and run the history command, the file is created successfully. But I need to run the command on 20 servers, so creating a script to run it remotely on each server is my objective here.
Thanks in advance.
ssh user@machine_name "cat ~/.bash_history > history_20170523.txt"
The 'history' command dumps the contents of .bash_history, so this may be useful to you.
A more elegant solution might be:
scp user#machine_name:~/.bash_history history_20170523.txt
You are doing it in the wrong way; there is also no user given for the remote machine. The correct way to do it is:
ssh -q -tt user@10.12.13.14 'history > history_20170523.txt'
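To cover all 20 servers, a minimal sketch that loops the same command over a hypothetical servers.txt file (one host or IP per line):
#!/bin/bash
# run the history dump on every host listed in servers.txt (hypothetical file)
while read -r host; do
  ssh -q -tt user@"$host" 'history > history_20170523.txt'
done < servers.txt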
I would like to make a shutdown script for my Raspberry Pi to shut down another Raspberry Pi over SSH.
The script works if it is run by itself, but in the shutdown routine the ssh command is not executed.
This is what I have done so far:
Made the script in /etc/init.d:
#!/bin/sh
# the first thing is to test if the shutdown script is working
echo "bla bla bla " | sudo tee -a /test.txt
ssh pi@10.0.0.98 sudo shutdown -h now
Made it executable
sudo chmod +x /etc/init.d/raspi.sh
Made a symlink to the rc0.d
sudo ln -s /etc/init.d/raspi.sh /etc/rc0.d/S01raspi.sh
Now I know so far that the shutdown script works outside of the shutdown routine when called by itself, and the shutdown symlink I made is also working partially, because I see the changes in the test.txt file every time I shut down.
Can anyone help me solve my problem?
Have you tried with single quotes?
The first link in Google has it
http://malcontentcomics.com/systemsboy/2006/07/send-remote-commands-via-ssh.html
What about the sudo? How do you solve entering the password?
https://superuser.com/questions/117870/ssh-execute-sudo-command
Please check this or other links on the web that have useful information.
I would have sent all this in a comment, but I can't yet because of reputation.
I have now got the script running by myself. I do not really know why it is now working, but I write it down below and maybe someone else can clarify it.
I don't think the first two changes to my system make a difference, but I write them down anyway. In the meantime, because I had not managed to get the script working, I made a button to shut down the system manually. I also made a script which backs up the MySQL database (which is on the Raspberry Pi I would like to switch off with the script) and copies the backup, via scp, to the Raspberry Pi that should automatically switch off the other one via the shutdown script. For the scp password an SSH key is generated.
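A minimal sketch of that key setup (assuming the pi user on the machine to be shut down, 10.0.0.99):
# generate a key pair; press Enter at the passphrase prompts to leave it empty
ssh-keygen -t rsa
# install the public key on the remote Pi so ssh/scp stop asking for a password
ssh-copy-id pi@10.0.0.99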
I have also changed my script to get a log-message out of the script.
#!/bin/sh
ssh -t -t pi#10.0.0.99 'sudo shutdown -h now' >> /home/osmc/shutdown.log 2>&1
To get it into the shutdown-routine I used:
sudo update-rc.d raspi-b stop 01 0
I hope somebody can tell me why my code worked on the first day but not on the following days until now.
I structured a command to suspend or shutdown a remote host over ssh. You may find this useful. This may be used to suspend / shutdown a remote computer without an interactive session and yet not keep a terminal busy. You will need to give permissions to the remote user to shutdown / suspend using sudo without a password. Additionally, the local and remote machines should be set up to SSH without an interactive login. The script is more useful for suspending the machine as a suspended machine will not disconnect the terminal.
local_user@hostname:~$ ssh remote_user@remote_host "screen -d -m sudo pm-suspend"
source: कार्यशाला (Kāryaśālā)
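For the passwordless sudo part, a minimal sketch of a sudoers rule (edit with visudo; the binary paths are assumptions and vary by distribution):
# /etc/sudoers.d/power -- allow remote_user to suspend or shut down without a password
remote_user ALL=(ALL) NOPASSWD: /usr/sbin/pm-suspend, /sbin/shutdown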
I have a webservice that runs on multiple different remote Red Hat machines. Whenever I want to update the service, I sync down the new webservice source code, written in Perl, from a version control depot (I use Perforce) and restart the service using that newly synced Perl code. I think it is too tedious to log in to the remote machines one by one and run that series of commands to restart the service manually. So I wrote a bash script, update.sh, like the one below, in order to "do it one time in one place and update all machines". I run this shell script on my local machine. But it seems that it won't work: it only executes the first command, sudo -u webservice_username -i, as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The export P4USER=myname is for the Perforce client.)
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
How do I know that only the first command is executed? Well, because after I input the password for the ssh on my local machine, it shows:
Your environment has been modified. Please check /tmp/webservice.env.
And it just gets stuck there. I mean, it never returns.
As suggested by a commenter, I added "-t" for ssh:
#!/bin/sh
ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
This lets the local command line return. But it seems weird: it cannot cd to that "dir"; it says "cd: dir: No such file or directory". It also says "p4: command not found". So it looks like the sudo -u command executed with no effect, and the export command either did not execute or executed with no effect.
A detailed local log file is like below:
Your environment has been modified. Please check /tmp/dir/.env.
bash: line 0: cd: dir: No such file or directory
bash: p4: command not found
bash: line 0: cd: bin: No such file or directory
bash: ./prog: No such file or directory
tail: cannot open `../logs/service.log' for reading: No such file or directory
tail: no files remaining
Instead of connecting via ssh and then immediately changing users, can you not use something like ssh -t webservice_username@remotehost1 to connect with the desired username to begin with? That would avoid needing sudo altogether.
If that isn't a possibility, try wrapping up all of the commands that you want to run in a shell script and store it on the remote machine. If you can get your task working from a script, then your ssh call becomes much simpler and should encounter fewer problems:
ssh myname@remotehost1 '/path/to/script'
For easily updating this script, you can write a short script for your local machine that uploads the most recent version via scp and then uses ssh to invoke it.
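A minimal sketch of that approach, with a hypothetical restart_service.sh assumed to live at /opt/scripts on the remote machine and to be readable by webservice_username:
#!/bin/sh
# restart_service.sh (hypothetical) -- runs as webservice_username on the remote host
export P4USER=myname
cd /home/webservice_username/dir || exit 1   # assumed working directory
p4 sync
cd bin || exit 1
./prog --domain=config_file restart
It would then be invoked from the local machine with something like:
ssh -t myname@remotehost1 'sudo -u webservice_username -i /opt/scripts/restart_service.sh'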
Note that when you run:
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
Your ssh session runs sudo -u webservice_username -i, waits for it to exit, and only then runs the rest of the commands; it does not execute sudo and then run the following commands inside it. This has to do with the context in which you're running the series of commands: all the commands get executed in the shell of myname@remotehost1, and all sudo -u webservice_username -i does is start a shell for webservice_username; it doesn't actually run any of the other commands.
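If you do want a single one-liner, one hedged possibility is to hand the whole command string to the shell that sudo starts, e.g.:
ssh -t myname@remotehost1 "sudo -u webservice_username -i sh -c 'export P4USER=myname; cd dir && p4 sync && cd bin && ./prog --domain=config_file restart'"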
Really, the best solution here is, like bta said, to write a script, rsync/scp it to the destination, and then run it using sudo.
The export command simply does not work with ssh like this; what you want to do is modify the remote ~/.bashrc, which is sourced each time you do an ssh login.
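For instance, a hedged one-liner that appends the variable to the remote ~/.bashrc (run once per machine):
ssh myname@remotehost1 "echo 'export P4USER=myname' >> ~/.bashrc"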
I have a node.js script which needs to start at boot and run under the www-data user. During development I always started the script with:
su www-data -c 'node /var/www/php-jobs/manager.js'
I saw exactly what happened, and manager.js now works great. Searching SO, I found I had to place this in my /etc/rc.local. Also, I learned to point the output to a log file, to append 2>&1 to "redirect stderr to stdout", and that it should be a daemon, so the last character is a &.
Finally, my /etc/rc.local looks like this:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
su www-data -c 'node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 &'
exit 0
If I run this myself (sudo /etc/rc.local): yes, it works! However, if I perform a reboot, no node process is running, /var/log/php-jobs.log does not exist, and thus manager.js does not work. What is happening?
In this example of an rc.local script, I use I/O redirection at the very first line of execution to send output to my own log file:
#!/bin/sh -e
#
# rc.local
#
# This script is executed at the end of each multiuser runlevel.
# Make sure that the script will "exit 0" on success or any other
# value on error.
#
# In order to enable or disable this script just change the execution
# bits.
#
# By default this script does nothing.
exec 1>/tmp/rc.local.log 2>&1 # send stdout and stderr from rc.local to a log file
set -x # tell sh to display commands before execution
/opt/stuff/somefancy.error.script.sh
exit 0
On some Linuxes (CentOS and RHEL, e.g.), /etc/rc.local is initially just a symbolic link to /etc/rc.d/rc.local. On those systems, if the symbolic link is broken and /etc/rc.local is a separate file, then changes to /etc/rc.local won't get seen at boot; the boot process will run the version in /etc/rc.d. (They'll work if one runs /etc/rc.local manually, but won't be run at boot.)
Sounds like on dimadima's system they are separate files, but /etc/rc.d/rc.local calls /etc/rc.local.
The symbolic link from /etc/rc.local to the 'real' one in /etc/rc.d can get lost if one moves rc.local to a backup directory and copies it back or creates it from scratch, not realizing the original one in /etc was just a symbolic link.
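A minimal sketch for checking and, if needed, restoring that link (paths as on CentOS/RHEL):
# see whether /etc/rc.local is still a symlink or has become a plain file
ls -l /etc/rc.local
# recreate the link (back up any separate /etc/rc.local first)
sudo ln -sf /etc/rc.d/rc.local /etc/rc.local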
I ended up with upstart, which works fine.
In Ubuntu I noticed there are two files. The real one is /etc/init.d/rc.local; it seems the other, /etc/rc.local, is bogus.
Once I modified the correct one (/etc/init.d/rc.local), it executed just as expected.
You might also have made it work by specifying the full path to node. Furthermore, when you want to run a shell command as a daemon, you should close stdin by adding 0<&- before the &.
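A minimal sketch of the combined fix, assuming node is installed at /usr/bin/node (check with command -v node):
su www-data -c '/usr/bin/node /var/www/php-jobs/manager.js >> /var/log/php-jobs.log 2>&1 0<&- &'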
I had the same problem (on CentOS 7) and I fixed it by giving execute permissions to /etc/rc.local:
chmod +x /etc/rc.local
If you are using Linux in the cloud, then usually you don't have a chance to touch the real hardware with your hands, so you don't see the configuration interface when booting for the first time and of course cannot configure it. As a result, the firstboot service will always be in the way of rc.local. The solution is to disable firstboot by doing:
sudo chkconfig firstboot off
If you are not sure why your rc.local does not run, you can always check the /etc/rc.d/rc file, because this file will always run and call the other subsystems (e.g., rc.local).
I got my script to work by editing /etc/rc.local then issuing the following 3 commands.
sudo mv /filename /etc/init.d/
sudo chmod +x /etc/init.d/filename
sudo update-rc.d filename defaults
Now the script works at boot.
I am using CentOS 7.
$ cd /etc/profile.d
$ vim yourstuffs.sh
Type the following into the yourstuffs.sh script.
# type whatever you want here to execute
export LD_LIBRARY_PATH=/usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH
Save and reboot the OS.
I have used rc.local in the past, but I have learned from my experience that the most reliable way to run your script at system boot time is to use the @reboot directive in crontab. For example:
@reboot path_to_the_start_up_script.sh
This is most probably caused by a missing or incomplete PATH environment variable.
If you provide full absolute paths to your executables (su and node) it will work.
It is my understanding that if you place your script in a certain runlevel, you should use ln -s to link the script to the level you want it to work in.
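For example, a hedged sketch for runlevel 3 (the script name and sequence number are assumptions):
sudo ln -s /etc/init.d/myscript.sh /etc/rc3.d/S99myscript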
First, make the script executable using
sudo chmod 755 /path/of/the/file.sh
Now add the script to rc.local:
sh /path/of/the/file.sh
placing it before the exit 0 line in rc.local.
Next, make rc.local executable with
sudo chmod 755 /etc/rc.local
Next, to initialize rc.local, use
sudo /etc/init.d/rc.local start
This will initiate rc.local.
Now reboot the system.
Done.
I found that because I was using a network-oriented command in my rc.local, it would sometimes fail. I fixed this by putting sleep 3 at the top of my script. I don't know why, but it seems that when the script runs, the network interfaces aren't properly configured yet or something, and this just allows some time for the DHCP server or whatever. I don't fully understand it, but I suppose you could give it a try.
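A minimal sketch of that workaround (the 3-second delay is a guess, and the command below is a hypothetical placeholder):
#!/bin/sh -e
sleep 3   # give the network/DHCP a moment to come up
/path/to/network-dependent-command
exit 0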
I had exactly the same issue: the script was running fine locally, but when I rebooted/powered on it was not.
I resolved the issue by changing the file path. Basically, you need to give the complete path in the script. While running locally the file can be accessed, but when running at reboot a relative local path will not be understood.
1. I do not recommend using root to run apps such as a node app. Well, you can do it, but you may catch more exceptions.
2. rc.local normally runs as the root user, so if your script should run as another user, such as www, you should make sure the PATH and other environment variables are OK.
3. I find an easy way to run a service as a user:
sudo -u www -i /the/path/of/your/script
Please refer to the sudo manual:
-i [command]
The -i (simulate initial login) option runs the shell specified by the password database entry of the target user as a login shell...
rc.local only runs at startup. If you shut down or reboot and want the script to execute, it needs to go into the /etc/rc0.d directory with a K99 prefix.