I am doing the following in a shell script:
tar cvzf mytar.tgz *
It works fine when I run the shell script from a terminal. When the shell script runs from a cron job, it looks like it archived because the tgz file is there, but the file size is near zero and when I untar it there is nothing inside. However, when I run the shell script from a terminal, the tgz has a larger file size and I can untar it.
Does anyone know why it won't work via the cron job?
Try specifying the complete path to the files you want to archive:
tar cvzf mytar.tgz /path/to/your/files/*
Cron doesn't run the job from the directory your script lives in; the working directory will typically be your $HOME, so the * matches whatever happens to be there instead.
What's the working directory of the cronjob process? If there's nothing in it, then the command will archive all of the nothing.
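One quick way to answer that, assuming you can edit your crontab: add a temporary job that records its working directory (the output path below is just an example):

* * * * * pwd > /tmp/cron-cwd.txt 2>&1

Whatever ends up in /tmp/cron-cwd.txt is the directory your tar command actually ran from.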
First, no need to be verbose in a cron.
Second, it looks like you are using relative pathing there. Consider using absolute paths, even for the tar command itself.
Last, which user is running the cron? Is there a potential for a permissions issue or a quota issue?
The other answers so far give good advice. Cron has a lot of special rules with respect to what is allowed in the command. I have the most success when I make a simple shell script, put it in $HOME/cron, chmod 755 it, and put the full path to it in the crontab. Make sure to test the script, cd'ing as necessary. Be aware that cron not only won't necessarily run the command from your home directory, but it will also likely have a different PATH, and other environment settings will be missing.
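Putting that together, a minimal cron-safe version of the script might look like this (a sketch; /path/to/your/files and the /tmp output location are placeholders for your real paths):

#!/bin/sh
# cd into the data directory explicitly; bail out if it's missing
cd /path/to/your/files || exit 1
# call tar by full path (cron's PATH is usually just /usr/bin:/bin)
# and write the archive outside the directory being archived
/bin/tar czf /tmp/mytar.tgz .

Then the crontab entry refers to the script by full path, e.g. 0 2 * * * /home/you/cron/backup.sh.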
How do developers usually deal with different paths for executable files?
My program is currently in /usr/local/bin and I am wondering how to make it work whether it is in /usr/local/bin or in /usr/bin, while being able to access the config files from the matching etc folder (which depends on the executable path).
I can't just use relative paths because I need to make them relative to the path of the executable file, and even then it wouldn't be enough, because I would need to access /etc rather than /usr/local/etc.
Is there a common way to deal with this situation? Is it dealt with during the installation? Do I need to make a different version of my program for the local and for the global path?
In a shell script, you can detect the executable path of the script with
dirname "$(readlink -f "$0")"
and work with that.
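For the /usr/bin vs. /usr/local/bin case in the question, you can then map the install prefix onto the matching config directory. A sketch, assuming the standard FHS layout (myprog.conf is a placeholder name):

#!/bin/bash
# directory this script lives in, resolving symlinks
BIN_DIR=$(dirname "$(readlink -f "$0")")
# map the install prefix to the config dir:
# /usr/local/bin -> /usr/local/etc, /usr/bin -> /etc
case "$BIN_DIR" in
    /usr/local/bin) CONF_DIR=/usr/local/etc ;;
    /usr/bin)       CONF_DIR=/etc ;;
    *)              CONF_DIR="$BIN_DIR/../etc" ;;
esac
echo "Reading config from $CONF_DIR/myprog.conf"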
If you run your program as root, then it should be able to access the configuration files in /etc/ or any other place without a problem. You can grep them from the script or do whatever you need.
If your program is not run as root, then you should make sure that the configuration file being accessed in /etc/ gives the user the right to read it. See the chmod man page for more information.
Finally, your script should run fine from any of the locations you mentioned, although /usr/local/bin/ is generally where locally developed scripts should go. Just call your script by its full path and it will work, e.g. /usr/local/bin/script
Note: don't forget to make your script executable: chmod +x /usr/local/bin/script
I want to back up my database daily and automatically, so I made a shell script and put it in the cron.daily folder on Ubuntu 12.
The script is not complicated:
#!/bin/sh
# name the backup folder after today's date, e.g. 052313
DIR=`date +%m%d%y`
DEST=/db_backups/$DIR
mkdir "$DEST"
mongodump -d myapp -o "$DEST"
This script works well when I run it manually, like ./automongobackup.sh; it then makes a backup file in the proper location. So I expected that if I put it in cron.daily, the backup would be generated automatically. But I checked the backup folder today, the folder was empty, and I realized something was wrong.
Should I set another option? The permissions are 755. I attached some screenshots: the first one is ls -l in cron.daily and the second is the script. Did I miss anything?
Try renaming your script to 'automongobackup' rather than 'automongobackup.sh': run-parts, which executes the scripts in cron.daily, cron.hourly, etc., doesn't like full stops/periods in the filename.
Reference: https://askubuntu.com/questions/611336/why-putting-a-script-in-etc-cron-hourly-is-not-working
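You can verify this without waiting a day, since run-parts has a test mode that lists the scripts it would execute (assuming Debian/Ubuntu's run-parts):

run-parts --test /etc/cron.daily

If automongobackup.sh is missing from the output but automongobackup (after the rename) shows up, the filename was the problem.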
I have a bash script which I use to configure different parameters in text files on my wireless access media server.
The script is located in one directory, and because I do all of the configuration over PuTTY, I have to either use the full path of the file or change to the directory that contains it. I would like to avoid this.
Is it possible to save the bash script somewhere, or edit it, so that I can run it as a command, for example like the cp or ls commands?
The script needs to be executable, with:
chmod +x scriptname
(or similar).
Also, you want the script to be located in a directory that is in your PATH.
To see your PATH use:
echo $PATH
Your choices are: to move (or link) the file into one of those directories, or to add the directory it is in to your PATH.
You can add a directory to your PATH with:
PATH=$PATH:/name/of/my/directory
and if you do this in the file $HOME/.bashrc it will happen for each of your shells automatically.
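Putting the pieces together, assuming your script is at $HOME/scripts/myscript (illustrative names):

chmod +x ~/scripts/myscript
echo 'PATH=$PATH:$HOME/scripts' >> ~/.bashrc
source ~/.bashrc    # pick up the new PATH in the current shell
myscript            # now runs from any directory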
You can place a softlink to the script under /usr/local/bin (which should be in $PATH, as John said):
ln -s /path/to/script /usr/local/bin/scriptname
This should do the trick.
You can write a minimal wrapper in your home directory:
#!/bin/bash
exec /yourpath/yourfile.extension
And run your child script with this command ./NameOfYourScript
update: Unix hawks will probably say the first solution is a no-brainer because of the additional admin work it will load on you. Agreed, but given your requirements, my solution works :)
Otherwise, you can use an alias; you will have to amend your .bashrc
alias menu='bash /yourpath/menuScript.sh'
Another way is to run it with:
/bin/bash /path/to/script
Then the file doesn't need to be executable.
I want to add a small script to the Linux PATH so I don't have to run it from where it's physically placed on disk.
The script is quite simple; it gives apt-get access through a proxy. I made it like this:
#!/bin/bash
array=( $@ )
len=${#array[@]}
_args=${array[@]:1:$len}
sudo http_proxy="http://user:password@server:port" apt-get $_args
Then I saved this as apt-proxy.sh, set it to +x (chmod) and everything is working fine when I am in the directory where this file is placed.
My question is: how do I add this apt-proxy to the PATH so I can actually call it as if it were the real apt-get? [from anywhere]
Looking for command line only solutions, if you know how to do by GUI its nice, but not what I am looking for.
Try this:
Save the script as apt-proxy (without the .sh extension) in some directory, like ~/bin.
Add ~/bin to your PATH, typing export PATH=$PATH:~/bin
If you need it permanently, add that last line to your ~/.bashrc. If you're using zsh, add it to ~/.zshrc instead.
Then you can just run apt-proxy with your arguments and it will run anywhere.
Note that if you export the PATH variable in a specific window it won't update in other bash instances.
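To confirm the shell now resolves it, ask where the command comes from (the output shown is illustrative):

type apt-proxy
# apt-proxy is /home/you/bin/apt-proxy
apt-proxy update    # works from any directory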
You want to add that directory to the PATH variable, not the actual binary, e.g.
PATH=$MYDIR:$PATH
where MYDIR is defined as the directory containing your binary e.g.
PATH=/Users/username/bin:$PATH
You should put this in your startup script e.g. .bashrc such that it runs each time a shell process is invoked.
Note that order is important, and the PATH is evaluated such that if a script matching your name is found in an earlier entry in the path variable, then that's the one you'll execute. So you could name your script as apt-get and put it earlier in the path. I wouldn't do that since it's confusing. You may want to investigate shell aliases instead.
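If you'd rather investigate the alias route than edit your PATH at all, one line in ~/.bashrc also works (the path is illustrative):

alias apt-proxy='$HOME/bin/apt-proxy.sh'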
I note also that you say it works fine from your current directory. If by that you mean you have the current directory in your path (.) then that's a potential security risk. Someone could put some trojan variant of a common utility (e.g. ls) in a directory, then get you to cd to that directory and run it inadvertently.
As a final step, after following the solution proposed by @jlhonora (https://stackoverflow.com/a/20054809/6311511), change the permissions of the files in the folder "~/bin". You can use this:
chmod -R 755 ~/bin
Make an alias to the executable in the ~/.bash_profile file and then use it from anywhere. Or you can add the directory containing the executables you need to your PATH, and that will do the trick for you.
Adding to @jlhonora's answer:
your changes in ~/.bashrc or ~/.zshrc won't take effect until you do
source ~/.bashrc or source ~/.zshrc, or restart your terminal.
Maybe the title is a bit "stupid", but I do not know how to express my question or how to search for it, even though it is something very simple.
I have a set of scripts that produce a set of reports in the folder they are executed from. For example, I have the script "my_script.sh" in the folder /a/folder/, and a set of output is stored in this folder. Since I have a lot of experiments that I want to run over the whole week, I was thinking of creating a bash script that will call all the other scripts.
But then the output will be stored in the folder the global script is in.
For example:
/global/folder/global_script.sh
---> All the output is stored in this folder.
The global_script.sh may contain something like this:
/experiments/exp1/script1.sh >report1.txt
/experiments/exp1/script2.sh >report2.txt
/experiments/exp1/script2.sh >report3.txt
And I want the output of the bash scripts to be in their folder and not in the global folder.
Currently I am doing this manually navigating to the folder and executing the script.
(OK, I can change the code and use absolute paths! But is there a better way to do that?)
You could change the working directory before you execute each script, or redirect the output to the directory you want:
cd /experiments/exp1/
sh /experiments/exp1/script1.sh >report1.txt
or
sh /experiments/exp1/script1.sh > /experiments/exp1/report1.txt
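If you want the global script to handle many experiment folders, you can also run each script in a subshell so the cd doesn't leak between lines (a sketch, using the /experiments/exp1 layout from the question):

#!/bin/bash
# each ( ... ) runs in a subshell: the cd is local to that line
( cd /experiments/exp1 && ./script1.sh > report1.txt )
( cd /experiments/exp1 && ./script2.sh > report2.txt )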
What's wrong with simply changing directory?
cd /experiments/exp1
./script1.sh >report1.txt
./script2.sh >report2.txt
./script2.sh >report3.txt