Script to download a web page - Linux

I made a web server to show my page locally, because it is located in a place with a poor connection, so what I want to do is download the page content and replace the old one. I made this script to run in the background, but I'm not sure it will work 24/7 (the 2m is just for testing; I actually want it to wait 6-12 hrs). So, what do you think about this script? Is it insecure, or is it good enough for what I'm doing? Thanks.
#!/bin/bash
a=1
while [ $a -eq 1 ]
do
    echo "Starting..."
    sudo wget http://www.example.com/web.zip --output-document=/var/www/content.zip
    sudo unzip -o /var/www/content.zip -d /var/www/
    sleep 2m
done
exit
UPDATE: This is the code I use now:
(It's just a prototype, but I intend to stop using sudo.)
#!/bin/bash
a=1
echo "Start"
while [ $a -eq 1 ]
do
    echo "Searching flag.txt"
    if [ -e flag.txt ]; then
        echo "Flag found, and erasing it"
        sudo rm flag.txt
        if [ -e /var/www/content.zip ]; then
            echo "Erasing old content file"
            sudo rm /var/www/content.zip
        fi
        echo "Downloading new content"
        sudo wget ftp://user:password@xx.xx.xx.xx/content/newcontent.zip --output-document=/var/www/content.zip
        sudo unzip -o /var/www/content.zip -d /var/www/
        echo "Erasing flag.txt from ftp"
        sudo ftp -nv < erase.txt
        sleep 5s
    else
        echo "Downloading flag.txt"
        sudo wget ftp://user:password@xx.xx.xx.xx/content/flag.txt
        sleep 5s
    fi
    echo "Waiting..."
    sleep 20s
done
exit 0
erase.txt contains:
open xx.xx.xx.xx
user user password
cd content
delete flag.txt
bye

I would suggest setting up a cron job; it is much more reliable than a script with huge sleeps.
Brief instructions:
If you have write permissions for /var/www/, simply put the downloading in your personal crontab.
Run crontab -e, paste this content, save and exit from the editor:
17 4,16 * * * wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Or you can run the download from the system crontab.
Create the file /etc/cron.d/download-my-site and place this content into it:
17 4,16 * * * <USERNAME> wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Replace <USERNAME> with a login that has suitable permissions for /var/www.
Or you can put all the necessary commands into a single shell script like this:
#!/bin/sh
wget http://www.example.com/web.zip --output-document=/var/www/content.zip
unzip -o /var/www/content.zip -d /var/www/
and invoke it from crontab:
17 4,16 * * * /path/to/my/downloading/script.sh
This task will run twice a day: at 4:17 and 16:17. You can set another schedule if you'd like.
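For example, since you mentioned wanting to wait 6-12 hours between updates, an entry like this (the exact minute is arbitrary) would run the same commands every six hours:
0 */6 * * * wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/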
More on cron jobs, crontabs etc:
Add jobs into cron
CronHowto on Ubuntu
Cron (Wikipedia)

Simply unzipping the new version of your content over top of the old may not be the best solution. What if you remove a file from your site? The local copy will still have it. Also, with a zip-based solution, you're transferring EVERY file each time, not just the files that have changed.
I recommend you use rsync instead, to synchronize your site content.
If you set your local documentroot to something like /var/www/mysite/, an alternative script might then look something like this:
#!/usr/bin/env bash

logtag="`basename $0`[$$]"

logger -t "$logtag" "start"

# Build an array of options for rsync
#
declare -a ropts
ropts=("-a")
ropts+=(--no-perms --no-owner --no-group)
ropts+=(--omit-dir-times)
ropts+=(--exclude='._*')
ropts+=(--exclude='.DS_Store')

# Determine previous version
#
if [ -L /var/www/mysite ]; then
    linkdest="$(stat -c '%N' /var/www/mysite)"
    linkdest="${linkdest##*\`}"
    ropts+=(--link-dest="${linkdest%\'}")
fi

now="$(date '+%Y%m%d-%H:%M:%S')"

# Only refresh our copy if flag.txt exists
#
statuscode=$(curl --silent --output /dev/stderr --write-out "%{http_code}" http://www.example.com/flag.txt)
if [ ! "$statuscode" = 200 ]; then
    logger -t "$logtag" "no update required"
    exit 0
fi

if ! rsync "${ropts[@]}" user@remoteserver:/var/www/mysite/ /var/www/"$now"; then
    logger -t "$logtag" "rsync failed ($now)"
    exit 1
fi

# Everything is fine, so update the symbolic link and remove the flag.
#
ln -sfn /var/www/"$now" /var/www/mysite
ssh user@remoteserver rm -f /var/www/flag.txt

logger -t "$logtag" "done"
This script uses a few external tools that you may need to install if they're not already on your system:
rsync, which you've already read about,
curl, which could be replaced with wget ... but I prefer curl,
logger, which is probably already installed on your system along with syslog or rsyslog, or may be part of the util-linux package depending on your Linux distro.
rsync provides a lot of useful functionality. In particular:
it tries to copy only what has changed, so that you don't waste bandwidth on files that are the same,
the --link-dest option lets you refer to previous directories to create "links" to files that have not changed, so that you can keep multiple copies of your directory while storing only single copies of unchanged files (a standalone sketch follows this list).
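As a rough standalone illustration of --link-dest, with hypothetical dated directory names, any file that is identical to the previous snapshot becomes a hard link into it rather than a fresh copy:
# Second snapshot: anything unchanged since the 20240101 copy is hard-linked, not re-downloaded
rsync -a --link-dest=/var/www/20240101-04:17:00 \
    user@remoteserver:/var/www/mysite/ \
    /var/www/20240102-04:17:00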
In order to make this work (both the rsync part and the ssh part), you will need to set up SSH keys that allow you to connect without requiring a password. That's not hard, but if you don't know about it already, it's the topic of a different question ... or a simple search with your favourite search engine.
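A minimal sketch of that key setup, assuming the same user@remoteserver as above (run these as whichever user the cron job will run as, and press Enter for an empty passphrase so cron can use the key unattended):
# Generate a key pair (accept the default location, empty passphrase)
ssh-keygen -t ed25519
# Install the public key in the remote account's authorized_keys
ssh-copy-id user@remoteserver
# Confirm you can now log in without a password prompt
ssh user@remoteserver true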
You can run this from a crontab every 5 minutes:
*/5 * * * * /path/to/thisscript
If you want to run it more frequently, note that the "traffic" you will be using for every check that does not involve an update is an HTTP GET of the flag.txt file.

Related

linux - rclone and script

I want to make daily backups to my Dropbox using rclone. It works fine with cron, but I want it to work like this:
today I get a folder test on my Dropbox, tomorrow I want a folder test1, and the day after that a folder test2, instead of overwriting the test folder, so I can keep backups from the last 4 days instead of only yesterday's (sorry if my English isn't perfect).
script code (.sh):
#!/bin/sh
if [ -z "$STY" ]; then
    exec screen -dm -S backup -L -Logfile '/root/logs/log' /bin/bash "$0"
fi
rclone copy --update --verbose --transfers 30 --checkers 8 \
    --contimeout 60s --timeout 300s --retries 3 \
    --low-level-retries 10 --stats 1s \
    "/root/test/file" "dropbox:test"
exit
Ubuntu 18.10 64bit
Simply use rclone move (https://rclone.org/commands/rclone_move/) to rotate the folders before each new copy; a sketch follows this list:
If it exists: move dropbox:test3 to dropbox:test4
If it exists: move dropbox:test2 to dropbox:test3
If it exists: move dropbox:test to dropbox:test2
copy "/root/test/file" to "dropbox:test"
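A minimal sketch of that rotation as a shell script, assuming the folder names used above. If a source folder does not exist, rclone simply reports an error and the script moves on, so the "if it exists" checks are implicit; purging the oldest folder first keeps the rotation at four days instead of merging old content forever:
#!/bin/sh
# Drop the oldest backup so the rotation keeps only four days
rclone purge dropbox:test4 2>/dev/null
# Shift each remaining folder one step down the chain
rclone move dropbox:test3 dropbox:test4 2>/dev/null
rclone move dropbox:test2 dropbox:test3 2>/dev/null
rclone move dropbox:test  dropbox:test2 2>/dev/null
# Finally, copy today's backup into the now-empty "test" folder
rclone copy --update --verbose "/root/test/file" "dropbox:test"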

script not running via crontab

I have created a shell script which deletes the subfolders of the var/cache folder. Please check the script below.
#!/bin/sh
now=$(date +"%Y-%m-%d %T")
if rm -rf var/cache/* ; then
echo "$now: Deleted"
else
echo "$now: problem"
fi
When I run this shell file directly with the command sh hello.sh, it works fine.
But when I run this file using crontab, it creates an entry in the log file but doesn't delete the subfolders of var/cache/.
Please check my crontab as well.
*/1 * * * * /bin/sh /www/html/wp/hello.sh >> /www/html/var/log/redis.flush.cron.log 2>&1
Please suggest how can I run that file using crontab.
Try using an absolute path instead of var/cache. When you run it via cron, it will run a) as a specific user, and b) from the home directory of that user. One or both of these might be causing issues for you.
Instead of this:
if rm -rf var/cache/* ; then
Try something like this:
if rm -rf /full/path/to/var/cache/* ; then
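For instance, if the script lives in /www/html/wp and the cache you mean is relative to that directory (an assumption; adjust the path to wherever the cache really is), the script could spell the path out in full:
#!/bin/sh
now=$(date +"%Y-%m-%d %T")
# Use an absolute path: cron starts in the invoking user's home directory,
# so a relative var/cache/* resolves somewhere unexpected there.
if rm -rf /www/html/wp/var/cache/* ; then
    echo "$now: Deleted"
else
    echo "$now: problem"
fi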

crontab bash script not running

I updated the script with the absolute paths. Also, here is my current cron job entry.
I went and fixed the SSH key issue, so I know that works now, but I might still need to tell rsync which key to use.
The script runs fine when called manually by user. It looks like not even the rm commands are being executed by the cron job.
UPDATE
I updated my script, but basically it's the same as the one below. Below I have a new cron time and added error output.
I get nothing. It looks like the script doesn't even run.
crontab -e
35 0 * * * /bin/bash /x/y/z/s/script.sh 2>1 > /tmp/tc.log
#!/bin/bash
# Clean up
/bin/rm -rf /z/y/z/a/b/current/*
cd /z/y/z/a/to/
/bin/rm -rf ?s??/D????
cd /z/y/z/s/
# Find the latest file
FILE=`/usr/bin/ssh user@server /bin/ls -ht /x/y/z/t/a/ | /usr/bin/head -n 1`
# Copy over the latest archive and place it in the proper directory
/usr/bin/rsync -avz -e /usr/bin/ssh user@server:"/x/y/z/t/a/$FILE" /x/y/z/t/a/
# Unzip the zip file and place it in the proper directory
/usr/bin/unzip -o "/x/y/z/t/a/$FILE" -d /x/y/z/t/a/current/
# Run Dev's script
cd /x/y/z/t/
./old.py a/current/ t/ 5
Thanks for the help.
I figured it out: I'm used to working in CST and the server was on GMT time.
Thanks everybody for the help.
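For reference, a quick way to confirm what clock and time zone the server's cron will use before picking a schedule (timedatectl assumes a systemd-based distro):
date          # current local time and zone abbreviation
timedatectl   # time zone and NTP sync status on systemd systems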

Apt-get not working after setting proxy in ubuntu server

Hi, I'm using Ubuntu Server.
I have set the proxies using:
echo "export http_proxy='http://username:password@proxyIP:port/'" | tee -a ~/.bashrc
I've set up the http, https and ftp proxies. Wget works fine, but apt-get does not connect.
Please help!
I don't know about apt-get but I've never got yum to work with the environment variables. Instead, I've had to set the proxy via its config.
Here's a related post for the apt-get conf:
https://askubuntu.com/questions/89437/how-to-install-packages-with-apt-get-on-a-system-connected-via-proxy
Also... how awesome is it to have plain-text passwords sitting in a file that's world-readable by default! (Or in environment variables and bash history, for that matter.)
You need to update both .bashrc and apt.conf for this to work.
The following link explains this in detail:
http://codewithgeeks.blogspot.in/2013/11/configure-apt-get-to-work-from-behind.html
apt-get is set to ignore the system default proxy settings.
To set the proxy, you have to edit the /etc/apt/apt.conf file and add the following lines:
Acquire::http::proxy "http://username:password@proxyIP:port/";
Acquire::ftp::proxy "ftp://username:password@proxyIP:port/";
Acquire::https::proxy "https://username:password@proxyIP:port/";
You can also create scripts to set and unset this whenever you want.
Create a script aptProxyOn.sh:
#!/bin/sh
if [ $(id -u) -ne 0 ]; then
    echo "This script must be run as root";
    exit 1;
fi
if [ $# -eq 2 ]
then
    printf \
    "Acquire::http::proxy \"http://$1:$2/\";\n\
Acquire::ftp::proxy \"ftp://$1:$2/\";\n\
Acquire::https::proxy \"https://$1:$2/\";\n" > /etc/apt/apt.conf.d/01proxies;
    sudo cp /etc/apt/apt.conf.d/01proxies /etc/apt/apt.conf
else
    printf "Usage: $0 <proxy_ip> <proxy_port>\n";
fi
To remove the proxy, create a script named aptProxyOff.sh:
#!/bin/sh
if [ $(id -u) -ne 0 ]; then
    echo "This script must be run as root";
    exit 1;
fi
printf "" > /etc/apt/apt.conf.d/01proxies;
sudo cp /etc/apt/apt.conf.d/01proxies /etc/apt/apt.conf
Make both files executable: chmod +x aptProxyOn.sh aptProxyOff.sh
You have to run them in the following way.
Proxy On -
sudo ./aptProxyOn.sh username:password@proxyIP port
Proxy Off -
sudo ./aptProxyOff.sh
Tip:
If you have @ in your username or password, it will not work directly.
You have to use the URL encoding of @, which is %40. But you can't pass %40 on the command line as-is: because the script interpolates the argument into a printf format string, % is special there, so you have to write %%40.
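For example, with a hypothetical username user@example.com, password secret, and a proxy at proxy.example.com on port 8080, the call would look like this:
sudo ./aptProxyOn.sh 'user%%40example.com:secret@proxy.example.com' 8080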
This worked out best for me:
Switch into the /etc/apt directory.
Edit the following file (if it is not present, this command will create it):
gedit apt.conf.d/apt.conf
and add the following lines:
Acquire::http::proxy "http://[proxy-server]:[port]";
Acquire::https::proxy "https://[proxy-server]:[port]";

cron job not working properly giving error as "syntax error near unexpected token `)'"

I'm creating a cron job that takes a backup of my entire DB. For that I used the following code:
*/5 * * * * mysqldump -u mydbuser -p mypassword mydatabase | gzip > /home/myzone/public_html/test.com/newfolder/dbBackup/backup.sql.gz
But instead of getting a backup, I'm getting the error "syntax error near unexpected token `)'". My password includes a round bracket; is this happening because of that? Please help me.
Thanks in advance.
) is a special character for the shell (and crontab uses the shell to execute commands).
Add single quotes around your password, and attach it directly to -p (with a space after -p, mysqldump prompts for the password instead of reading it from the command line):
*/5 * * * * mysqldump -u mydbuser -p'mypassword' mydatabase | ...
Try removing the space between -p and mypassword:
-umydbuser -pmypassword
(You will still need to quote the password if it contains shell-special characters like parentheses.)
As I suggested in my comment, move this into an external script and include that script in cron.daily. I've given a basic skeleton for such a script below. This way you gain a couple of advantages: you can test the script, you can easily reuse it, and it's configurable. I don't know whether you do this for administration or personal use; my suggestion leans more towards "I do it for administration" :)...
#!/bin/bash
# Backup destination directory
DIR_BACKUP=/your/backup/directory
# Timestamp format for filenames
TIMESTAMP=`date +%Y%m%d-%H%M%S`
# Database name
DB_NAME=your_database_name
# Database user
DB_USER=your_database_user
# Database password
DB_PASSWD=your_database_password
# Database export file name
DB_EXPORT=your_database_export_filename.sql
# Backup file path
BKFILE=$DIR_BACKUP/your-backup-archive-name-$TIMESTAMP.tar
# Format for time recordings
TIME="%E"
###########################################
# Create the parent backup directory if it does not exist
if [ ! -e "$DIR_BACKUP" ]
then
    echo "=== Backup directory not found, creating it ==="
    mkdir "$DIR_BACKUP"
fi
# Create the backup tar file
echo "=== Creating the backup archive ==="
touch $BKFILE
# Export the database
echo "=== Exporting YOUR DATABASE NAME database ==="
time bash -c "mysqldump --user $DB_USER --password=$DB_PASSWD $DB_NAME > $DIR_BACKUP/$DB_EXPORT"
# Add the database export to the tar file, remove after adding
echo "=== Adding the database export to the archive ==="
time tar rvf $BKFILE $DIR_BACKUP/$DB_EXPORT --remove-files
# Compress the tar file
echo "=== Compressing the archive ==="
time gzip $BKFILE
# All done
DATE=`date`
echo "=== $DATE: Backup complete ==="
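To hook the script into cron.daily as suggested above (db-backup.sh is a hypothetical filename; on Debian/Ubuntu, run-parts skips files whose names contain a dot, so drop the .sh extension when installing):
sudo cp db-backup.sh /etc/cron.daily/db-backup
sudo chmod +x /etc/cron.daily/db-backup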
