I have seen many ways to launch a script at startup, such as putting it in /etc/profile.d, rc.local, or creating an autostart file, but none of those launch the script in a visible window, if they launch it at all. I need it to run in a visible window in Ubuntu. I need to do this because I am using several emulators to stream to different services, and I don't want to have to start the script on each one manually.
I am using VirtualBox for the emulator. The .sh file is on a removable drive because it is an external file. I also need it to run as sudo.
Edit: I don't actually need it to run at startup; I just need the script to run. I can probably just sleep long enough for the graphical environment to load.
Edit 2: I created a service that launched a .sh file in /usr/bin/, which was supposed to open a gnome-terminal window running my script. The service ran, but it didn't create a visible window for some reason. I then tried to specify a display, which caused GNOME to freak out; D-Bus was not launching correctly. Another question stated that gnome-terminal would not work because of how it was designed and suggested using Konsole instead. Konsole also reported that it could not connect to a display, giving a QXcbConnection error, and Konsole does not have an option to specify a display. I don't know what else to try.
Edit 3: I did the thing in the comment, and the service works. However, it only works after I manually run the file in /usr/bin that the service runs, once after every restart. The important parts of the file:
#!/bin/bash
sleep 60
ufw disable
ssh nateguana@$(hostname) -X
xhost +
# launch GNOME (only works after the file has been run manually)
I have also tried exporting DISPLAY and changing users with su. I have not tried importing SSHD, as another question suggested, since I think that is only for non-local connections. I have also tried every possible arrangement of the commands. xhost errors out, stating that it is unable to open display "".
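For reference, the "exporting DISPLAY" attempt described above typically looks something like this (the display number, .Xauthority path, and script location are assumptions):
export DISPLAY=:0
export XAUTHORITY=/home/nateguana/.Xauthority   # the user's X cookie; path is an assumption
xhost +local:                                   # allow local clients onto the display
gnome-terminal -- /media/usb/stream.sh          # script path is hypothetical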
You can use gnome-terminal -e <command> to spawn a new bash terminal which runs the command.
You could use something like
gnome-terminal -e /path/to/bashfile
Bear in mind, this will close the terminal after the bash script finishes executing.
To avoid this, add $SHELL on a new line at the end of your bash script.
PS: the -e argument is deprecated and might be removed in later versions
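A minimal sketch of the same idea with the newer syntax (the script path is a placeholder):
# newer gnome-terminal releases drop -e; everything after "--" is the command to run
gnome-terminal -- /path/to/bashfile

# inside /path/to/bashfile, keep the window open once the script finishes:
#   ...your commands...
#   $SHELL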
This question already has answers here:
UBUNTU: XOpenDisplay(NULL) fails when program run in boot sequence via rc.local
I have to run a bash script on boot which opens a couple of terminals and runs some commands in each terminal.
test.sh (My bash file)
#!/bin/bash
sleep 10
gnome-terminal --tab-with-profile="Default" -e 'bash -c '\''export TAB=1; mkdir /home/naman/Desktop/test_folder'\'
I have created a testjob.conf inside /etc/init/ :
testjob.conf (Upstart config file)
description "A test job file for experimenting with Upstart"
author "Naman Kumar"
start on runlevel [2345]
exec echo Test Job ran at `date` >> /var/log/testjob.log
exec bash /home/naman/Desktop/test.sh
Now, the problem is that testjob.conf does not run the test.sh file on boot (or it runs it but does not create the test_folder folder). If I remove the last line from testjob.conf (exec bash /home/naman/Desktop/test.sh), everything works and cat /var/log/testjob.log gives the correct output, but with that last line present, cat /var/log/testjob.log does not show the latest output.
I have also tried updating /etc/rc.local with bash /home/naman/Desktop/test.sh, but that does not seem to run the test.sh script on boot either.
I am not sure whether the scripts run but fail to create the folder, or whether they never run at all on boot.
Note: I cannot use System -> Preferences -> Startup Applications because I don't have a monitor, so no desktop session runs. (I am running this on a single-board computer with no monitor.)
Does anyone know what the issue is here and why the test.sh script is not running properly on boot?
Thanks in advance.
Naman
Cong ma is pretty close on this. SysVinit is for system-level daemons; it has no concept of user sessions. That means it doesn't have a way to interact with your window system (or GNOME).
Further: without a monitor, what would you expect gnome-terminal to do? gnome-terminal would open a terminal on the monitor, which you can't see.
What you should look at is taking your commands (date, etc.) and putting them in /etc/rc.local, rather than trying to run them in a separate terminal or anything. Just literally run the commands there.
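For example, a minimal /etc/rc.local sketch along those lines, reusing the commands from the question (no terminal or display required):
#!/bin/sh -e
# log the boot time and create the folder directly, without gnome-terminal
echo "Test Job ran at $(date)" >> /var/log/testjob.log
mkdir -p /home/naman/Desktop/test_folder
exit 0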
I am trying to schedule the execution of a shell-script with the Linux tool "at".
The shell script (video.sh) looks like this:
#!/bin/sh
/usr/bin/vlc /home/x/video.mkv
The "at" command:
at -f /home/x/video.sh -t 201411052225
When the time arrives, nothing happens.
I can execute the shell script just fine from the console or by right-clicking -> Execute. VLC starts as it is supposed to. If I change the script to something simple like
#!/bin/sh
touch something.txt
it works just fine.
Any ideas, why "at" will not properly execute a script that starts a graphical program? How can I make it work?
You're trying to run an X command (a graphical program) at a scheduled time. This will be extremely difficult, and quite fragile, because the script won't have access to the X server.
At the very least, you will need to set DISPLAY to the right value, but even then, I suspect you will have issues with authorisation to use the X screen.
Try setting it to :0.0 and see if that works. But if you're logged out, or the screensaver's on, or any number of other things...
(Also, redirect vlc's stdout and stderr to a file so that you can see what went wrong.)
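Putting those two suggestions together, a sketch of video.sh might look like this (the display number and log path are assumptions; XAUTHORITY may also need to be set):
#!/bin/sh
export DISPLAY=:0.0                                    # point the script at the local X display
/usr/bin/vlc /home/x/video.mkv > /home/x/vlc.log 2>&1  # capture output for debugging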
Your best bet might be to try something like xuserrun.
I suspect that atd is not running. You have to start the atd daemon first (and set the DISPLAY variable, as chiastic-security said) ;)
You can test if atd is running with
pidof atd &>/dev/null && echo 'ATD started' || echo >&2 'ATD not started'
Your vlc command should be :
DISPLAY=:0 /usr/bin/vlc /home/x/video.mkv
(Default display)
After researching, it seems that it would be easier to add commands to an existing script than to create a startup script for each of my needs. I am trying to get a series of repetitive tasks done at system startup, like:
sudo mkdir -p ~/scripts
sudo mount -t vboxsf scripts ~/scripts
Instead of finding a methodology for each system (I read that startup scripts vary from system to system), I would like to know if there is a universal script to append this to (like I have done with environment variables in /etc/environment). Is there a universal file I can target to do these mounts?
Thanks, Yucca
Some distributions (Red Hat/CentOS) have /etc/rc.local exactly for this task. On openSUSE it is /etc/init.d/after.local.
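For example, appending the mounts from the question to /etc/rc.local might look like this (rc.local runs as root, so ~ would expand to /root; the username below is an assumption, replace it with the real home directory):
mkdir -p /home/yucca/scripts
mount -t vboxsf scripts /home/yucca/scripts
exit 0   # keep rc.local's trailing exit 0 as the last line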
Take a look at init.d:
http://www.ghacks.net/2009/04/04/get-to-know-linux-the-etcinitd-directory/
I feel silly asking this...
I am not an expert on shell scripting, but I am finally in enough of a sysadmin role that I want to do this correctly.
I have a production server that hosts a webapp. Here is my routine.
1 - ssh to server
2 - cd django_src/django_apps/team_proj
3 - svn update
4 - sudo /etc/init.d/apache2 restart
5 - logout
I want to create a shell script for steps 2,3,4.
I can do this, but it will be a very plain and simple bash script simply containing the actual commands I type at the command line.
My question: What is the best way to script this kind of repetitive procedure in bash (Linux, Ubuntu) for a remote server?
Thanks!
The best way is simply as you suggest. Some things you should do for your script would be:
Put set -e at the top of the script (after the shebang). This will cause your script to stop if any of the commands fail, so if it cannot cd to the directory, it will not run svn update or restart Apache. You can also do this explicitly by putting || exit 1 after each command, but if that's all you're doing, you may as well use set -e.
Use full paths in your script. Do not assume the directory that the script is run from. In this specific case, the cd command has a relative path. Use a full (absolute) path, or use an environment variable like $HOME.
You may want to set up sudo so that it can run the command without asking for a password. This makes your script non-interactive which means it can be run in the background and from cron jobs and such.
As time goes by, you may add features and take command line arguments to parameterise the script. But don't bother doing this up front. Just evolve your scripts as you need.
There is nothing wrong with a simple bash script simply containing the actual commands you type at the command line. Don't make it more complicated than necessary.
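Putting that advice together, a minimal sketch of the script might look like this (the script name and the use of $HOME for the project path are assumptions):
#!/bin/bash
# deploy.sh - update the working copy and restart Apache
set -e                                        # stop on the first failing command
cd "$HOME/django_src/django_apps/team_proj"   # absolute path built from $HOME
svn update
sudo /etc/init.d/apache2 restart              # needs a passwordless sudo rule to run unattended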
I'd set up a cron job to do that automatically.
Since you're using Python, check out Fabric - you can use it to automate these kinds of tasks. First install Fabric:
$ sudo easy_install fabric
then write your fabric script:
from __future__ import with_statement
from fabric.api import *
def svnupdate():
    with cd('django_src/django_apps/team_proj'):
        run('svn update')
        sudo('/etc/init.d/apache2 restart')
Save as fabfile.py, then run using the fab command:
$ fab -H hostname svnupdate
Tell me that's not cool! :-)
You can do this with the shell (bash, ksh, zsh + ssh + tools), or with programming languages such as Python, Perl, Ruby, PHP, Java, etc. - basically any language that supports the SSH protocol and operating system functions. The "best" one is the one that you are most comfortable with and have knowledge of. If you are doing sysadmin work, the shell is the closest thing you can use. Then, after you have written your script, you can use crontab (cron) or the at command to schedule your task; check their man pages for more information.
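For instance, a crontab entry along these lines would run such a script nightly (the script path, time, and log file are assumptions); add it with crontab -e:
# run the deploy script every night at 02:30 and keep its output for review
30 2 * * * /home/user/bin/deploy.sh >> /home/user/deploy.log 2>&1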
You can easily do the above using bash/Bourne etc.
However, I would take the time and effort to learn Perl (or some similarly powerful scripting language). Why?
the language constructs are much more powerful
there are no end of libraries to interface to the systems/features you want to script
because of the library support, you won't have to spawn off different commands to achieve what you want (possibly valuable on a loaded system)
you can decompose frequently-used scripts into your own libraries for later use
I chose Perl in particular because it's been designed (perhaps "designed" is too strong a word for Perl) for these sorts of tasks. However, you may want to check out Ruby/Python or other suggestions from SO contributors.
For the basic steps, look at camh's answer. If you plan to run the script via cron, then implement some simple logging, e.g. by appending the start time and exit code of each command to a text file that you can later analyze for failures of the script.
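A small sketch of that kind of logging, assuming a log file location of your choosing (the wrapper function name is made up for illustration):
LOG=/var/log/deploy.log
log_run() {
    echo "$(date '+%F %T') START: $*" >> "$LOG"
    "$@"
    echo "$(date '+%F %T') EXIT $?: $*" >> "$LOG"
}

log_run svn update
log_run sudo /etc/init.d/apache2 restart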
Expect -- scripting interactive applications
Expect is a tool for automating interactive applications such as telnet, ftp, passwd, fsck, rlogin, tip, etc.... Expect can make easy all sorts of tasks that are prohibitively difficult with anything else. You will find that Expect is an absolutely invaluable tool - using it, you will be able to automate tasks that you've never even thought of before - and you'll be able to do this automation quickly and easily.
http://expect.nist.gov
bonus: Your tax dollars at work!
I would probably do something like this...
project_update.sh
#!/bin/bash
#
# $1 - user@host
# $2 - project directory
[[ -z $1 || -z $2 ]] && { echo "usage: $(basename $0) user@host project_dir"; exit 1; }
declare host=$1 proj_dir=$2
ssh $host "cd $proj_dir;svn update;sudo /etc/init.d/apache2 restart" && echo "Success"
Just to add another tip: you should not give users access to an application in an unknown state. svn up might break during the update, users might see a page that's half new and half old, etc. If you're deploying the whole application at once, I'd suggest doing svn export to a new directory instead, and then either mv current old; mv new current, or even keeping current as a symlink to the directory you're using now. Still not perfect and not blocking every possible race condition, but it definitely takes less time than svn up on the live copy.
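A rough sketch of that export-and-swap approach (the repository URL and deployment paths are placeholders):
set -e
RELEASE="/srv/releases/team_proj-$(date +%Y%m%d%H%M%S)"
svn export https://svn.example.com/team_proj/trunk "$RELEASE"   # clean copy, no .svn metadata
ln -sfn "$RELEASE" /srv/current                                 # swap the "current" symlink to the new export
sudo /etc/init.d/apache2 restart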