Make a script to download a file and a cron job for it through SSH - Linux

I'm trying to create a script to download a file daily, with the older version overwritten.
I'm pretty sure I need a cron job and a shell script with a wget line in it, but that is as far as my knowledge goes. Also, I need to do all of this through SSH, unless there's another way I'm not aware of.
If I do it through SSH, what commands do I need at the various steps of the process? What will the cron and shell files look like? If there's a better way, please enlighten me!
Thanks!
Zeem

From your description, I'm picturing the following:
connect to the server via SSH
find the location of wget
which wget
(on my machine it's /usr/bin/wget)
add the following to your /etc/crontab (or cronjobs file) using a text editor, such as pico or vi:
@daily /usr/bin/wget -O /local/path/to/file.txt http://remote-host.name/path/to/file.txt
(If you add this to /etc/crontab, you'll probably need the additional user field; see the crontab documentation for that.)
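Putting it together, a minimal sketch of the shell script and the crontab entry (URL, paths, and the user field are placeholders):
#!/bin/bash
# download.sh - fetch the file, overwriting the previous copy.
# -q keeps the cron mail quiet; -O writes (and truncates) the target file.
/usr/bin/wget -q -O /local/path/to/file.txt http://remote-host.name/path/to/file.txt
and in /etc/crontab (note the extra user field):
@daily root /bin/bash /local/path/to/download.sh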
hope that helps.

Implement password-less ssh authentication between the hosts.
http://www.linuxproblem.org/art_9.html
So host A can create/implement an script or cronjob on host B using ssh.
To create a cronjob from a script, have your script create (for example) a text file at /etc/cron.d/CronJobName. It is important that the content of the file follows the cron format: http://en.wikipedia.org/wiki/Cron#Examples
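A sketch of that idea (hostname and job contents are placeholders; it assumes host A can already reach host B as root, per the key setup above):
# Run on host A: write a cron job file on host B over SSH.
# Files in /etc/cron.d need the extra user field after the schedule.
ssh root@hostB "cat > /etc/cron.d/daily-download" <<'EOF'
# m h dom mon dow user command
0 3 * * * root /usr/bin/wget -q -O /local/path/to/file.txt http://remote-host.name/path/to/file.txt
EOF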
(I hope I understood your question right.)

Thanks for your answers! Thankfully it was much simpler: I was able to add a cron job via cPanel, and the wget line went straight in there.

Related

no job control in this shell

Quick question regarding this error in Bash. I have some bash scripts that I run at work on some servers. We have been trying to implement Rundeck as a way to automate. When I call these scripts through Rundeck I get the error bash: no job control in this shell. Now I am new to Linux and shell scripting, but what I have figured is that this is due to the interactivity of the shell. The scripts I am calling will call other scripts (Perl and shell) as part of their operation. But since I am not logged on in a terminal, they can't do this and fail. That is the best understanding I have.
I have tried adding -l to the #!/bin/bash line in the script I send through Rundeck. I have also tried the ad-hoc command option in Rundeck and running the jobs individually. Still no luck. I thought perhaps it was a pathing issue, so I tried setting PATH=$PATH, but no change.
Basically I need to allow these scripts to spawn their subprocesses. So the question is: how do I modify how I call these scripts so they have full control? Is that possible? Sorry if this lacks info; I just don't know how to properly put it out there. Let me know what other info is needed. Thanks.
Edit: some code snippets showing where the message comes in:
system "pntadm -R $Net >/dev/null 2>&1";
system "pntadm -C $Net $BUILD{$Net}{netmask} $MYip";

Script runs from terminal, but not cron. What edits to this script do I need to make?

I have a script that zips up a database and the site files, then dumps the output into a backup folder on the server. The script runs fine from the command line, but it will not work through cron.
After much research, I am thinking that cron cannot run it in its current form because cron runs in a different environment.
Here is the script, saved as file_name.sh
#!/bin/bash
NOW=$(date +"%Y-%m-%d-%H%M")
FILE="website.com.$NOW.tar"
BACKUP_DIR="/backupfolder"
WWW_DIR="/var/www/website/"
DB_USER="dbuser"
DB_PASS="dbpw"
DB_NAME="dbname"
DB_FILE="website.com.$NOW.sql"
# tar strips the leading /, so the stored paths start without it.
WWW_TRANSFORM='s,^var/www/website,www,'
DB_TRANSFORM='s,^backupfolder,database,'
tar -cvf "$BACKUP_DIR/$FILE" --transform "$WWW_TRANSFORM" "$WWW_DIR"
mysqldump -u"$DB_USER" -p"$DB_PASS" "$DB_NAME" > "$BACKUP_DIR/$DB_FILE"
tar --append --file="$BACKUP_DIR/$FILE" --transform "$DB_TRANSFORM" "$BACKUP_DIR/$DB_FILE"
rm "$BACKUP_DIR/$DB_FILE"
gzip -9 "$BACKUP_DIR/$FILE"
I currently have the script stored in /usr/local/scripts/
Is there something wrong with the above code that does not allow it to run through cron?
Which crontab should it go in? crontab -e from terminal, or /etc/crontab? They are two different files.
Several things come to mind. First, one of the most common problems with cron jobs is that crond generally runs things with a very minimal PATH (usually just /usr/bin:/bin), so if the script uses any commands from another binaries directory, it'll fail. Where is mysqldump on your system (run which mysqldump if you aren't sure)? If this is the problem, adding PATH=/usr/local/bin:/usr/bin:/bin (or whatever's appropriate in your case) at the beginning of your script should fix it. Alternatively, you can set PATH in the crontab file (put the line before the entry that runs your script).
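A sketch of both options (the directory list is an assumption; adjust it to wherever which mysqldump points on your system):
# Option 1: near the top of file_name.sh, right after the shebang:
PATH=/usr/local/bin:/usr/bin:/bin

# Option 2: in the crontab, before the entry that runs the script:
PATH=/usr/local/bin:/usr/bin:/bin
1 1 * * * /usr/local/scripts/file_name.sh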
If that's not the problem, my next step would be to capture the script's output, with something like:
1 1 * * * /usr/local/scripts/file_name.sh >/tmp/file_name.log 2>&1
... and see if the output is informative. BTW, as @tripleee mentioned, the format of your cron entry is suitable for the files crontab -e edits, but not for /etc/crontab. The /etc version has an additional field specifying which user to run the job as, e.g.
1 1 * * * eric /usr/local/scripts/file_name.sh >/tmp/file_name.log 2>&1
Best practice is to always use crontab -e (the resulting files usually live under /var/spool/cron/); this works on every Unix and Linux platform I have ever worked on.
Other common issues with cron execution are missing environment variables. Any environment variables set in .bash_profile (or .profile if you use korn shell) will not necessarily be present in the cron environment. This can be overcome by including them in your script.
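For example (a sketch; it assumes the variables you need are set in the cron user's ~/.bash_profile):
# At the top of the script: pull in the login environment that cron omits.
if [ -f "$HOME/.bash_profile" ]; then
    . "$HOME/.bash_profile"
fi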
As Gordon said, paths are another suspect. You can always use the full path to your executables in your script (e.g. /bin/mysqldump). Some of the more cynical of us do this anyway, to make sure we are executing what we intended as opposed to some other file of the same name in the current path.
I can only guess at your specific problem. Since you fixed it by moving the scripts to /scripts, perhaps the permissions on the /usr/local/scripts directory did not allow execution by the cron user?
I have had to remove the extension (.sh) for cron to run in some instances.
So I fixed it. Not sure what the problem was, but this worked for me.
I originally had the scripts located in /usr/local/scripts/
I created a new directory here - /scripts/ - and moved the scripts there. The new crontab -e entry looked like this:
1 1 * * * bash /scripts/file_name.sh
Works perfectly. Again, I am not sure what the issue was before, but it works now.

Run script on two machines

I have a shell script that I need to automate with cron. At our office, there is a specific machine that I must log in to in order to use cron. My problem is, that the script I have written interacts with git, using git commands to pull code and switch branches. The machine where I am able to schedule cron jobs and the script is being run from does not have git on it. I have a separate machine that I log in to when I am using git. Is there an easy way for me to run my script from the cron system and run the git part from the git system?
UPDATE: I am still interested in whether this can be done, but my team has acquired a new machine that we will set up however we choose, meaning that it will have both cron and git. Thanks for any ideas!
As some people have mentioned above, ssh is the way to do this. This is a bash line that I use a lot in my job, for gathering data from other servers:
ssh -T $server -l username "/export/home/path/to/script.sh $1 $2" 1>traf1.txt 2>/dev/null
The above code sample will connect to the host in $server as user username and run script.sh, passing it the parameters $1 and $2. Instead of redirecting, you could also assign the command's output to a variable, just as you would with any other command in your script.
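Adapted to the question above (hostname, repository path, and branch are placeholders), the cron machine could drive the git machine like this:
# Cron entry on the cron machine; the git work runs on the git machine.
0 6 * * * ssh user@git-machine "cd /path/to/repo && git checkout some-branch && git pull" >/tmp/git-sync.log 2>&1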
PS: Please note that for the above to work, you will need to set up passwordless login between those machines. Otherwise your script will stop and ask for a password, which is most probably not the desired behavior.

Replicating SCP command in Shell script in linux?

Hi, I have been given the task of copying files from a given server to a local machine. I can do it manually using the command line, but I need to write a script to automate it. I don't have any clue how to do this in a shell script, in particular how to supply the password that we would otherwise have typed in manually. I went through other posts but did not get a precise answer.
Are there better ways than using the scp command?
Thanks in advance.
The preferred and more secure way to do this is to set up SSH key pairs.
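A sketch of the key-pair route (hostname and paths are placeholders):
# One-time setup: generate a key and install it on the remote server.
ssh-keygen -t rsa                   # accept the defaults; empty passphrase
ssh-copy-id username@remote-server  # enter the password one last time

# Afterwards, scp works without a password and can live in a script or cron job:
scp username@remote-server:/var/log/abc.txt /home/username/temp/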
That being said, if there's a specific need to supply passwords as part of your shell script, you can use pscp, which is part of putty-tools:
If you are on ubuntu, you can install it by:
sudo apt-get install putty-tools
(Or use equivalent package managers depending on your system)
Here's an example script of how to use pscp:
#!/bin/bash
password=hello_world
login=root
IP=127.0.0.1
src_dir=/var/log
src_file_name=abc.txt
dest_folder=/home/username/temp/
pscp -scp -pw "$password" "$login@$IP:$src_dir/$src_file_name" "$dest_folder"
This copies /var/log/abc.txt from the specified remote server to your local /home/username/temp/

Help debugging a cron job which has the correct script path and works when manually triggered

I'm struggling to debug a cron job which isn't working correctly. The cron job calls a shell script which should unrar a rar file; this works correctly when I run the script manually, but for some reason it's not working via cron. I am using the absolute file path and have verified that the path is correct. Has anyone got any ideas why this could be happening?
Well, you already said that you have used absolute paths, so the number one problem is dealt with.
Next to check are permissions. Which user is the cron job run as? Does it have all the permissions necessary?
Then, a little trick: when a shell script fails and it's not run in a terminal, I like to redirect its output to a file. Right at the start of the script, add:
exec &>/tmp/my.log
This will redirect STDOUT and STDERR to /tmp/my.log. Then it might also be a good idea to add the line:
set -x
This will make bash print which command it's about to execute, and at what nesting level.
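Put together, the top of the script would look like this (the log path is just an example):
#!/bin/bash
exec &>/tmp/my.log   # send all STDOUT and STDERR to a log file
set -x               # trace every command as it runs
# ... rest of the script ...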
Happy debugging!
The first thing to check when a cron job fails is whether the full environment is available to the script you are trying to execute. In other words, you need to realize that a job executed via cron runs as a detached process, meaning it is not associated with a login environment. Therefore, whenever you debug a cron job that works when executed manually, you need to be sure the same environment is available to the cron job as is available when you run it by hand. This includes PATH settings and any other environment variables the script may depend on.
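One quick way to compare the two environments (a sketch; the file paths are arbitrary): let cron dump its environment, then diff it against your login shell's.
# Temporary crontab entry: capture cron's environment once a minute.
* * * * * env | sort > /tmp/cron_env.txt

# From your login shell:
env | sort > /tmp/shell_env.txt
diff /tmp/cron_env.txt /tmp/shell_env.txt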
For me, the problem was a different shell interpreter in crontab.
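If that turns out to be the case, the interpreter can be pinned at the top of the crontab:
# Force bash instead of the default /bin/sh for all entries below:
SHELL=/bin/bash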
