rclone and script

I want to make daily backups to my Dropbox using rclone. It works fine with cron, but I want it to work like this:
Today I get a folder test on my Dropbox, tomorrow I want a folder test1, and the day after that a folder test2, instead of overwriting the test folder, so I can keep backups from the last 4 days instead of only yesterday's.
script code (.sh):
#!/bin/sh
if [ -z "$STY" ]; then
exec screen -dm -S backup -L -Logfile '/root/logs/log' /bin/bash "$0"
fi
rclone copy --update --verbose --transfers 30 --checkers 8 \
--contimeout 60s --timeout 300s --retries 3 \
--low-level-retries 10 --stats 1s \
"/root/test/file" "dropbox:test"
exit
Ubuntu 18.10 64bit

Simply use rclone move: https://rclone.org/commands/rclone_move/
If it exists: move dropbox:test3 to dropbox:test4
If it exists: move dropbox:test2 to dropbox:test3
If it exists: move dropbox:test to dropbox:test2
copy "/root/test/file" to "dropbox:test"
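A minimal sketch of that rotation, assuming the dropbox: remote and the folder names used in the answer above (untested; rclone lsd is only used as a rough existence check, and rclone move can leave an empty source folder behind on some backends unless you add --delete-empty-src-dirs):
#!/bin/sh
# Shift the existing backups up by one, oldest first, then upload today's copy.
for i in 3 2; do
    if rclone lsd "dropbox:test$i" >/dev/null 2>&1; then
        rclone move "dropbox:test$i" "dropbox:test$((i+1))"
    fi
done
if rclone lsd "dropbox:test" >/dev/null 2>&1; then
    rclone move "dropbox:test" "dropbox:test2"
fi
rclone copy --update --verbose "/root/test/file" "dropbox:test"
To cap the history at four days, you could rclone purge dropbox:test4 before the rotation.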

Related

How to make Ubuntu bash script wait on password input when using scp command

I want to run a script that deletes files on the computer and copies over another file from a connected host using the scp command.
Here is the script:
#!/bin/bash
echo "Moving Production Folder Over"
cd ~
sudo rm -r Production
scp -r host@192.168.123.456:/home/user1/Documents/Production/file1 /home/user2/Production
I would want to cd into the Production directory after it is copied over. How can I go about this? Thanks!

Why use sleep after chmod in a Dockerfile

The Azure documentation gives instructions for how to enable SSH in a custom container. They suggest adding these commands to my Dockerfile:
# Install OpenSSH and set the password for root to "Docker!". In this example, "apk add" is the install instruction for an Alpine Linux-based image.
RUN apk add openssh \
&& echo "root:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY sshd_config /etc/ssh/
# Copy and configure the ssh_setup file
RUN mkdir -p /tmp
COPY ssh_setup.sh /tmp
RUN chmod +x /tmp/ssh_setup.sh \
&& (sleep 1;/tmp/ssh_setup.sh 2>&1 > /dev/null)
# Open port 2222 for SSH access
EXPOSE 80 2222
Why is there a sleep 1 after the chmod +x command? I know it's not harmful, but I'd really like to understand why it's there.
The sleep 1 command is there to pause the script for 1 second before continuing. It is often used as a way to give the system time to complete a task or stabilize before the script continues.
In this case, the chmod +x command is used to make the ssh_setup.sh script executable. It is likely that the sleep 1 command is included to give the system time to complete this task before the script is run.
Keep in mind that this is just a suggestion and the use of the sleep 1 command may not be necessary in all cases. It is included as a way to potentially avoid any issues that may arise if the script continues before the system has had a chance to fully process the previous command.

crontab bash script not running

I updated the script with the absolute paths. Also here is my current cronjob entry.
I went and fixed the ssh key issue so I know it works now, but I might still need to tell rsync which key to use.
The script runs fine when called manually by user. It looks like not even the rm commands are being executed by the cron job.
UPDATE
I updated my script, but basically it's the same as the one below. Below I have a new cron time and added error output.
I get nothing. It looks like the script doesn't even run.
crontab -e
35 0 * * * /bin/bash /x/y/z/s/script.sh 2>1 > /tmp/tc.log
#!/bin/bash
# Clean up
/bin/rm -rf /z/y/z/a/b/current/*
cd /z/y/z/a/to/
/bin/rm -rf ?s??/D????
cd /z/y/z/s/
# Find the latest file
FILE=`/usr/bin/ssh user@server /bin/ls -ht /x/y/z/t/a/ | /usr/bin/head -n 1`
# Copy over the latest archive and place it in the proper directory
/usr/bin/rsync -avz -e /usr/bin/ssh user@server:"/x/y/z/t/a/$FILE" /x/y/z/t/a/
# Unzip the zip file and place it in the proper directory
/usr/bin/unzip -o /x/y/z/t/a/$FILE -d /x/y/z/t/a/current/
# Run Dev's script
cd /x/y/z/t/
./old.py a/current/ t/ 5
Thanks for the help.
I figured it out: I'm used to working in CST and the server was on GMT.
Thanks everybody for the help.
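Two side notes on the crontab entry above, for anyone hitting the same symptoms: in 35 0 * * * ... 2>1 > /tmp/tc.log the 2>1 redirects stderr to a file literally named 1 rather than into the log, and cron evaluates the schedule in the server's timezone, which turned out to be the actual problem here. A corrected entry (same paths as above) plus a quick timezone check would look roughly like:
35 0 * * * /bin/bash /x/y/z/s/script.sh > /tmp/tc.log 2>&1
date           # what the server thinks the current time is
timedatectl    # on systemd-based systems, shows the configured timezone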

A variety of rsync commands in a cron script to sync Ubuntu & Mac home directories

I have a ThinkPad running Linux (Ubuntu 14.04) which is on a wired network and a Mac running Yosemite on wireless, in a different subnet. They're both work machines. I also have a 1TB encrypted USB external Lenovo disk. I have created the following script, run from cron on the ThinkPad, to sync the hidden folders in /home/greg with the external drive (connected to the ThinkPad), provided it's mounted at the right directory. Then it should sync the remaining (non-hidden) content of /home/greg and perhaps selected customised parts of /etc. Once that's done, it should do something similar for the Mac, keeping the hidden files separate but doing a union of the content. The first rsync is meant to include only the hidden files (.*/) in /home/greg and the second rsync is meant to grab everything that's not hidden in that directory. The following is a work in progress.
#!/bin/bash
#source
LOCALHOME="/home/greg/"
#target disk
DRIVEBYIDPATH="/dev/disk/by-id"
DRIVEID="disk ID here"
DRIVE=$DRIVEBYIDPATH/$DRIVEID
#mounted target directories
DRIVEMOUNTPOINT="/media/Secure-BU-drive"
THINKPADHIDDENFILES="/TPdot"
MACHIDDENFILES="/MACdot"
BACKUPDIR="/homeBU"
#if test -a $DRIVE ;then echo "-a $?";fi
# Check to see if the drive is showing up in /dev/disk/by-id
function drivePresent {
    if test -e $DRIVE; then
        echo "$DRIVE IS PRESENT!"
        driveMounted
    else
        echo "$DRIVE is NOT PRESENT!"
    fi
}
# Check to see if drive is mounted where expected by rsync and if not mount it
function driveMounted {
    mountpoint -q $DRIVEMOUNTPOINT
    if [[ $? == 0 ]]; then
        syncLocal #make sure local has completed before remote starts
        #temp disabled syncRemote
    else
        echo "drive $DRIVEID is PRESENT but NOT MOUNTED. Mounting $DRIVE on $DRIVEMOUNTPOINT"
        mount $DRIVE $DRIVEMOUNTPOINT
        if [ $? == 0 ]; then
            driveMounted
            #could add a counter + while/if to limit the retries to say 5?
        fi # check mount worked, then re-test until true, at which point the test will follow the other path
    fi
}
# Archive THINKPAD to USB encrypted drive
function syncLocal {
    echo "drive $DRIVEID is PRESENT and MOUNTED on $DRIVEMOUNTPOINT - now do rsync"
    #rsync for all the Linux application specific files (hidden directories)
    rsync -ar --delete --update $LOCALHOME/.* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
    #rsync for all the content (non-hidden directories)
    rsync -ar --delete --exclude-from ./excludeFromRsync.txt $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR
    #rsync for Linux /etc dir (includes some custom scripts and cron jobs)
    #rsync
}
# Sync MAC to USB encrypted drive
function syncRemote { # Sync Mac to USB encrypted drive
    echo "drive $DRIVEID is PRESENT and MOUNTED on $DRIVEMOUNTPOINT - now do rsync"
    #rsync for all the Mac application specific files (hidden directories)
    rsync -h --progress --stats -r -tgo -p -l -D --update /home/greg/ /media/Secure-BU-drive/
    #rsync for all the content (non-hidden directories)
    rsync -av --delete --exclude-from ./excludeFromRsync.txt $LOCALHOME $DRIVEMOUNTPOINT/$BACKUPDIR
    #rsync for Mac /etc dir (includes some custom scripts and cron jobs)
    #rsync
}
#This is the program starting
drivePresent
The content of the exclude file mentioned in the second rsync in syncLocal is (nb syncRemote is disabled for the moment):
.cache/
/Downloads/
.macromedia/
.kde/cache-North/
.kde/socket-North/
.kde/tmp-North/
.recently-used
.local/share/trash
**/*tmp*/
**/*cache*/
**/*Cache*/
**~
/mnt/*/**
/media/*/**
**/lost+found*/
/var/**
/proc/**
/dev/**
/sys/**
**/*Trash*/
**/*trash*/
**/.gvfs/
/Documents/RTC*
.*
My problem is that the first local rsync, which is meant to capture ONLY the /home/greg/.* files, seems to have captured everything, or has possibly failed silently and allowed the second local rsync to run without excluding the /home/greg/.* files.
I know I've added a load of possibly irrelevant context but I thought it might help set my expectations for the related rsyncs. Sorry if I've gone overboard.
Thanks in advance
Greg
You have to be very careful with .*, as it will pull in . and .. as well. So that's your first rsync line:
rsync -ar --delete --update $LOCALHOME/.* $DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES
The shell expands .*, so rsync sees . and .., and it will go fully recursive on those two!
I wonder if this might help: --exclude . --exclude .. Well, I'm sure you know about using -vn to help you debug rsync issues.
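One way to keep the shell from expanding .* at all, reusing the variables from the script above, is to let rsync select the hidden entries itself with filter rules. A rough sketch, untested against this setup (run it with -vn first to see what it would transfer):
# Copy only the top-level dot files/dirs and everything inside them; the anchored
# include patterns match at the root of the transfer, then '*' excludes the rest.
rsync -ar --delete --update \
    --include='/.*' --include='/.*/**' --exclude='*' \
    "$LOCALHOME" "$DRIVEMOUNTPOINT/$BACKUPDIR/$THINKPADHIDDENFILES"
Since $LOCALHOME already ends in a slash, the source is the contents of /home/greg rather than the directory itself.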

Script to download a web page

I made a web server to show my page locally, because it is located in a place with a poor connection, so what I want to do is download the page content and replace the old one. I made this script to run in the background, but I am not very sure it will work 24/7 (the 2m is just to test it; I want it to wait 6-12 hrs). So, what do you think about this script? Is it insecure, or is it enough for what I am doing? Thanks.
#!/bin/bash
a=1;
while [ $a -eq 1 ]
do
echo "Starting..."
sudo wget http://www.example.com/web.zip --output-document=/var/www/content.zip
sudo unzip -o /var/www/content.zip -d /var/www/
sleep 2m
done
exit
UPDATE: This is the code I use now:
(It is just a prototype, but I intend to stop using sudo.)
#!/bin/bash
a=1;
echo "Start"
while [ $a -eq 1 ]
do
    echo "Searching flag.txt"
    if [ -e flag.txt ]; then
        echo "Flag found, and erasing it"
        sudo rm flag.txt
        if [ -e /var/www/content.zip ]; then
            echo "Erasing old content file"
            sudo rm /var/www/content.zip
        fi
        echo "Downloading new content"
        sudo wget ftp://user:password@xx.xx.xx.xx/content/newcontent.zip --output-document=/var/www/content.zip
        sudo unzip -o /var/www/content.zip -d /var/www/
        echo "Erasing flag.txt from ftp"
        sudo ftp -nv < erase.txt
        sleep 5s
    else
        echo "Downloading flag.txt"
        sudo wget ftp://user:password@xx.xx.xx.xx/content/flag.txt
        sleep 5s
    fi
    echo "Waiting..."
    sleep 20s
done
exit 0
erase.txt
open xx.xx.xx.xx
user user password
cd content
delete flag.txt
bye
I would suggest setting up a cron job; this is much more reliable than a script with huge sleeps.
Brief instructions:
If you have write permissions for /var/www/, simply put the downloading in your personal crontab.
Run crontab -e, paste this content, save and exit from the editor:
17 4,16 * * * wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Or you can run the downloading from system crontab.
Create the file /etc/cron.d/download-my-site and place this content into it:
17 4,16 * * * <USERNAME> wget http://www.example.com/web.zip --output-document=/var/www/content.zip && unzip -o /var/www/content.zip -d /var/www/
Replace <USERNAME> with a login that has suitable permissions for /var/www.
Or you can put all the necessary commands into a single shell script like this:
#!/bin/sh
wget http://www.example.com/web.zip --output-document=/var/www/content.zip
unzip -o /var/www/content.zip -d /var/www/
and invoke it from crontab:
17 4,16 * * * /path/to/my/downloading/script.sh
This task will run twice a day: at 4:17 and 16:17. You can set another schedule if you'd like.
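For reference, the five leading fields are minute, hour, day of month, month and day of week, so something closer to the 6-12 hour interval mentioned in the question could be (hypothetical script path as above):
0 */6 * * * /path/to/my/downloading/script.sh
which runs at 00:00, 06:00, 12:00 and 18:00.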
More on cron jobs, crontabs etc:
Add jobs into cron
CronHowto on Ubuntu
Cron (Wikipedia)
Simply unzipping the new version of your content overtop the old may not be the best solution. What if you remove a file from your site? The local copy will still have it. Also, with a zip-based solution, you're copying EVERY file each time you make a copy, not just the files that have changed.
I recommend you use rsync instead, to synchronize your site content.
If you set your local documentroot to something like /var/www/mysite/, an alternative script might then look something like this:
#!/usr/bin/env bash
logtag="`basename $0`[$$]"
logger -t "$logtag" "start"
# Build an array of options for rsync
#
declare -a ropts
ropts=("-a")
ropts+=(--no-perms --no-owner --no-group)
ropts+=(--omit-dir-times)
ropts+=(--exclude '._*')
ropts+=(--exclude '.DS_Store')
# Determine previous version
#
if [ -L /var/www/mysite ]; then
    linkdest="$(stat -c"%N" /var/www/mysite)"
    linkdest="${linkdest##*\`}"
    ropts+=(--link-dest="${linkdest%\'}")
fi
now="$(date '+%Y%m%d-%H:%M:%S')"
# Only refresh our copy if flag.txt exists
#
statuscode=$(curl --silent --output /dev/stderr --write-out "%{http_code}" "http://www.example.com/flag.txt")
if [ ! "$statuscode" = 200 ]; then
    logger -t "$logtag" "no update required"
    exit 0
fi
if ! rsync "${ropts[@]}" user@remoteserver:/var/www/mysite/ /var/www/"$now"; then
    logger -t "$logtag" "rsync failed ($now)"
    exit 1
fi
# Everything is fine, so update the symbolic link and remove the flag.
#
ln -sfn /var/www/"$now" /var/www/mysite
ssh user@remoteserver rm -f /var/www/flag.txt
logger -t "$logtag" "done"
This script uses a few external tools that you may need to install if they're not already on your system:
rsync, which you've already read about,
curl, which could be replaced with wget .. but I prefer curl
logger, which is probably installed in your system along with syslog or rsyslog, or may be part of the "unix-util" package depending on your Linux distro.
rsync provides a lot of useful functionality. In particular:
it tries to copy only what has changed, so that you don't waste bandwidth on files that are the same,
the --link-dest option lets you refer to previous directories to create "links" to files that have not changed, so that you can have multiple copies of your directory with only single copies of unchanged files.
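As a small standalone illustration of --link-dest (hypothetical /backups paths; the remote is the same user@remoteserver as in the script):
# /backups/2015-06-01 holds yesterday's snapshot (hypothetical paths).
# Unchanged files in the new snapshot are hard-linked to it, so each dated
# directory looks like a full copy but only changed files take extra space.
rsync -a --link-dest=/backups/2015-06-01 \
    user@remoteserver:/var/www/mysite/ /backups/2015-06-02/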
In order to make this go, both the rsync part and the ssh part, you will need to set up SSH keys that allow you to connect without requiring a password. That's not hard, but if you don't know about it already, it's the topic of a different question .. or a simple search with your favourite search engine.
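For completeness, one common way to set that up, run as the user the cron job will run as (user@remoteserver as in the script above):
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -N ""   # key pair with no passphrase
ssh-copy-id user@remoteserver                      # install the public key on the remote host
ssh user@remoteserver true                         # should now connect without a password prompt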
You can run this from a crontab every 5 minutes:
*/5 * * * * /path/to/thisscript
If you want to run it more frequently, note that the "traffic" you will be using for every check that does not involve an update is an HTTP GET of the flag.txt file.
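If you want to trim even that, a HEAD request skips the body entirely; a hedged variant of the check in the script above:
statuscode=$(curl --silent --head --output /dev/null --write-out "%{http_code}" "http://www.example.com/flag.txt")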
