Copying files from another directory with shellscript - linux

I have a question: I have a script that takes a file and copies it to several computers. My problem is that the script is in /opt/scripts but the files to copy are in /opt/file/copy. How can I do the copy without moving the files from their directory?
#Here I put a list or array of my server endings
IPS=('15' )
#Name of the file to copy
FILE="$1"
DIRECTORY1=/opt/bots
# if number of parameters less than or equal to 0
if [ $# -le 0 ]; then
echo "The tar name must be entered."
exit 1
fi
# I loop through the array or list with a for
for i in "${IPS[@]}"
do
xxxxxxxx
done

If you want to copy files from serverA (in /sourcedirectory) to serverB (in /targetdirectory), you can use scp, assuming ssh is set up.
On serverA, do:
cd /sourcedirectory
scp file useronserverB@serverB:/targetdirectory/
The file on serverA does not move, and a copy ends up in /targetdirectory on serverB.
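Applied to the script in the question, the loop could be filled in like this. It is a sketch: the user name "user", the 192.168.1.x prefix and the remote path are assumptions, and it only prints the scp commands so it can be dry-run first.

```shell
#!/bin/bash
# Sketch of the loop from the question: copy one file to several
# servers with scp. "user", the 192.168.1.x prefix and the remote
# path are assumptions -- replace them with your own values.
IPS=('15' '16' '17')          # last octet of each target server
SRC_DIR=/opt/file/copy        # absolute path, so the script's own location does not matter

copy_to_servers() {
    local file="$1"
    if [ -z "$file" ]; then
        echo "The tar name must be entered." >&2
        return 1
    fi
    local i
    for i in "${IPS[@]}"; do
        # Remove the leading "echo" to perform the real copy
        echo scp "${SRC_DIR}/${file}" "user@192.168.1.${i}:/opt/file/copy/"
    done
}
```

Because SRC_DIR is absolute, the script can stay in /opt/scripts while the files stay in /opt/file/copy; nothing needs to move.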

Related

Bash script deletes files older than N days using lftp - but does not remove recursive directories and files

I have finally got this script working, and it logs on to my remote FTP and removes files in a folder that are older than N days. I cannot, however, get it to remove directories recursively as well. What can be changed or added to make this script remove files in subfolders, as well as subfolders that are themselves older than N days? I have tried adding the -r flag in a few places, but it did not work. I think it needs to be added where the script builds the list of files to be removed. Any help would be greatly appreciated. Thank you in advance!
#!/bin/bash
# Simple script to delete files older than specific number of days from FTP.
# This script uses 'lftp', and 'date' with the '-d' option, which is not POSIX compatible.
# FTP credentials and path
FTP_HOST="xxxxxxxxxxxx"
FTP_USER="xxxxxx"
FTP_PASS="xxxxxxxxxxxxxxxxx"
FTP_PATH="/directadmin"
# Full path to lftp executable
LFTP=`which lftp`
# Query the days to store from the 1st passed argument, or strictly hardcode it; uncomment one to use
STORE_DAYS=${1:? "Usage ${0##*/} X, where X - count of daily archives to store"}
# STORE_DAYS=7
function removeOlderThanDays() {
# Make some temp files to store intermediate data
LIST=`mktemp`
DELLIST=`mktemp`
# Connect to ftp get file list and store it into temp file
${LFTP} << EOF
open ${FTP_USER}:${FTP_PASS}@${FTP_HOST}
cd ${FTP_PATH}
cache flush
cls -q -1 --date --time-style="+%Y%m%d" > ${LIST}
quit
EOF
# Print obtained list, uncomment for debug
# echo "File list"
# cat ${LIST}
# Delete list header, uncomment for debug
# echo "Delete list"
# Let's find date to compare
STORE_DATE=$(date -d "now - ${STORE_DAYS} days" '+%Y%m%d')
while read LINE; do
if [[ ${STORE_DATE} -ge ${LINE:0:8} && "${LINE}" != *\/ ]]; then
echo "rm -f \"${LINE:9}\"" >> ${DELLIST}
# Print files which are subject to deletion, uncomment for debug
#echo "${LINE:9}"
fi
done < ${LIST}
# More debug strings
# echo "Delete list complete"
# Print notify if list is empty and exit.
if [ ! -f ${DELLIST} ] || [ -z "$(cat ${DELLIST})" ]; then
echo "Delete list doesn't exist or empty, nothing to delete. Exiting"
exit 0;
fi
# Connect to ftp and delete files by previously formed list
${LFTP} << EOF
open ${FTP_USER}:${FTP_PASS}@${FTP_HOST}
cd ${FTP_PATH}
$(cat ${DELLIST})
quit
EOF
# Remove the temporary files
rm -f ${LIST} ${DELLIST}
}
# Call the function
removeOlderThanDays
I have addressed this sort of thing a few times.
How to connect to a ftp server via bash script?
Provide commands automatically to ftp in bash script
Bash FTP upload - events to log
Better to use scp and/or ssh when you can, especially if you can set up passwordless access with public keys. Otherwise, I recommend a more robust language like Python or Perl that lets you check the return codes of these steps individually and respond accordingly.

How to skip a file inside if statement in shell script and move to next file in the same directory?

I have a directory with 10 files. I need to read each file one by one. If there is any failure while processing a file, I need to capture that file name to send it by mail, then skip the current file and move on to the next file in the directory.
I tried like this:
for test in $test_path
do
if [ ! -s $test ];then
echo ' failed'
fi
#other codes are here that needs to run with the above input file
done
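One way to do the skip is `continue`, while collecting the failed names for the mail step. A sketch (the mail command itself and the per-file processing are omitted; the function name is made up for illustration):

```shell
#!/bin/bash
# Skip empty/missing files but remember their names for a mail report.
process_dir() {
    local dir="$1" failed="" f
    for f in "$dir"/*; do
        if [ ! -s "$f" ]; then
            failed="${failed}${f}"$'\n'   # capture the name for the mail
            continue                      # move on to the next file
        fi
        : # other code that processes "$f" goes here
    done
    printf '%s' "$failed"                 # one failed name per line
}
```

The collected list can then be fed to whatever mail command you use.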

How can I copy files from one directory to another (which is a subdirectory in the original)?

I'm new to Linux shell scripting and I'm struggling with a problem. An error pops up saying that the if conditional has too many arguments. What I have to do is basically described in the title, but the code I've written is not working; what's wrong with it? The original directory is called artists and the subdirectory the files need to be copied to is called artists_copy.
#!/bin/bash
count=0
elem=$(ls)
for file in $elem; do
let count+=1
done
for i in {$count}; do
if [ -e $elem[$i] ]; then
cp $elem[$i] artists_copy
echo "Copied file $elem[$i] to artists_copy"
fi
done
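For comparison, the counting and indexing are not needed at all; a plain glob loop with quoted expansions avoids the "too many arguments" error. A sketch, assuming artists_copy exists or can be created in the current directory:

```shell
#!/bin/bash
# Copy every regular file in the current directory into artists_copy/
mkdir -p artists_copy
for file in *; do
    if [ -f "$file" ]; then          # quoting "$file" prevents "too many arguments"
        cp "$file" artists_copy/
        echo "Copied file $file to artists_copy"
    fi
done
```

The `-f` test also keeps the artists_copy directory itself from being copied into itself.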

shell script to move data from other server's (or nodes) sub-dirs to current server(node) matching sub-dir

I have .parquet files for multiple dates (from 20190927 to 20200131) inside a /data/pg/export/schema.table_YYYYMMDD<random alphanumeric string> directory structure on seven different nodes. When the process ran, it created a sub-directory in the schema.table_YYYYMMDD<random alphanumeric string> format (such as schema.table_20190927) inside /data/pg/export for each date. However, it appended some random letters to the sub-directory name on the other hosts. So, for instance, I have folders and files in the following format:
on node#1 (10.245.122.100)
/data/pg/export/schema.table_20190927 contains:
----1.parquet
----2.parquet
----3.parquet
on node#2 (10.245.122.101)
/data/pg/export/schema.table_20190927S8rW4dQ2 contains:
----4.parquet
----5.parquet
----6.parquet
on node#3 (10.245.122.102)
/data/pg/export/schema.table_20190927P5SJ9aX4 contains:
----7.parquet
----8.parquet
----9.parquet
and so on for other nodes.
How can I bring the files from /data/pg/export/schema.table_20190927S8rW4dQ2 on node#2 (10.245.122.101) and /data/pg/export/schema.table_20190927P5SJ9aX4 on node#3 (10.245.122.102) (and similarly for the other hosts) to /data/pg/export/schema.table_20190927 on node#1 (10.245.122.100), so that the
final output looks like:
***on node#1 (10.245.122.100)***
/data/pg/export/schema.table_20190927 will have:
----1.parquet
----2.parquet
----3.parquet
----4.parquet
----5.parquet
----6.parquet
----7.parquet
----8.parquet
----9.parquet
Welcome to SO. Since it is your first question (well, the first I see), and I liked the challenge, here is a script that will do it. For your next question, you should provide your own code with the specific problem you are having, and not expect a complete script as an answer. See my comment for material to read on using SO.
The bash knowledge required to make this work is:
while loop
date calculation
variable value incrementation (so basic math)
I made some assumptions:
you have a single user on all nodes which can be used to do scp from node1
that user is hopefully setup to use ssh keys to login, otherwise you will type your password a lot of times!
you have connected at least 1 time to each node, and they are listed in your known_hosts file
on each node, there is 1 and only one directory with a specific date in the name.
all files are copied in each directory. You can modify the scp command to get only the .parquet files if you want.
Basic ideas in the code
loop on each node, so from 2 to 7
loop on dates, so from 20190927 to 20200131
copy files for each node, each date within the loops
this was tested on Linux Mint (== Ubuntu), so the date command is the GNU version, which allows for date calculation the way I did it.
Before use, modify the value of the user variable with your user name.
DISCLAIMER: I did not have multiple systems to test the scp command, so this command was added by memory.
The code:
#!/bin/bash
#
# This script runs on node1
# The node1 IP is 10.245.122.100
#
# This script assumes that you want to copy all files under
# /data/pg/export/schema.table_YYYYMMDD<random>
#
###############################################################################
# node1 variables
targetdirprefix="/data/pg/export/schema.table_"
user="YOURUSER"
# Other nodes variables
total_number_nodes=7 # includes node1
ip_prefix=10.245.122
ip_lastdigit_start=99 # node1 == 100, so start at 99
# loop on nodes ---------------------------------------------------------------
# start at node 2, node1 is the target node
nodecount=2
# Stop at maxnode+1; here the last node will be 7
(( whileexit = total_number_nodes + 1 ))
while [[ "$nodecount" -lt "$whileexit" ]]
do
# build the current node IP
(( currentnode_lastdigit = ip_lastdigit_start + nodecount ))
currentnode_ip="${ip_prefix}.${currentnode_lastdigit}"
# DEBUG
echo "nodecount=$nodecount, ip=$currentnode_ip"
# loop on dates ---------------------------------------
firstdate="20190927"
lastdate="20200131"
loopdate="$firstdate"
while [[ "$loopdate" -le "$lastdate" ]]
do
# DEBUG
echo "loopdate=$loopdate"
# go into the target directory (create it if required)
targetdir="${targetdirprefix}${loopdate}"
if [[ -d "$targetdir" ]]
then
cd "$targetdir"
else
mkdir -p "$targetdir"
if [[ "$?" -ne 0 ]]
then
echo "ERROR: could not create directory $targetdir, exiting."
exit 1
else
cd "$targetdir"
fi
fi
# copy the date's files into the target dir (i.e. locally, since we did a cd before)
# the source directory is the same as the targetdir, with extra chars at the end
# this script assumes there is only 1 directory with that particular date!
scp ${user}@${currentnode_ip}:${targetdir}*/* .
if [[ "$?" -ne 0 ]]
then
echo "WARNING: copy failed from node $nodecount, date $loopdate."
echo " The script will continue for other dates and nodes..."
fi
loopdate=$(date --date "$loopdate +1 days" +%Y%m%d)
done
(( nodecount += 1 ))
done
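The date-stepping idiom in the inner loop can be checked on its own; GNU date accepts the YYYYMMDD format directly:

```shell
#!/bin/bash
# Print every date from a start to an end date, one per line (GNU date).
firstdate="20190927"
lastdate="20191001"
loopdate="$firstdate"
while [ "$loopdate" -le "$lastdate" ]; do
    echo "$loopdate"
    loopdate=$(date --date "$loopdate +1 days" +%Y%m%d)
done
```

The numeric `-le` comparison works because YYYYMMDD strings sort the same way numerically and chronologically.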

variable part in a variable path in ksh script

I'm sorry if something similar has already been answered in the past, but I wasn't able to find it. I'm writing a script to perform some housekeeping tasks, and I got stuck on the step below. To give you some context, it's a script which reads a config file so that it can be used as a standard protocol in different environments.
The problem is with this code:
# Check if destination folder exist, if not create it.
if [ ! -d ${V_DestFolder} ]; then # Create folder
F_Log "${IF_ROOT} mkdir -p ${V_DestFolder}"
${IF_ROOT} mkdir -p ${V_DestFolder}
continue
fi
# If movement, check write permissions of destination folder.
V_CheckIfMovement=`echo $1|grep @`
if [ $? -eq 0 ]; then # File will be moved.
V_DestFolder=`echo $1|awk -F"@" {'print $2'}`
if [ ! -w ${V_DestFolder} ]; then # Destination folder IS NOT writable.
F_Log "Destination folder ${V_DestFolder} does not have WRITE permissions. Skipping."
continue
fi
fi
Basically I need to move (in this step) some files from one route to another.
It checks if the folder (name read from the config file) exists and creates it if not; after that, it checks whether the folder has write permissions and moves the files.
Here you can see the part of config file which is read in this step:
app/tom*/instances/*/logs|+9|/.*\.gz)$/|move@/app/archive/tom*/logs
I need to say that the files are moved properly when I change the tom* in the destination to anything without a *, such as "test" (as it should be).
What I need to know is how I can use a variable for "tom*" in the destination. The variable should contain the same tom* name as in the source, which I use as the name of the cell.
This is because I use different tomcat cells with the reference tom7 or tom8 plus 3 letters to describe each one, for example tom7dog or tom7cat.
You should give the shell a chance to evaluate.
V_DestFolder=`echo $1|awk -F"@" {'print $2'}`
for p in ${V_DestFolder}; do
if [ ! -w ${p} ]; then
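Completing the idea in that fragment: store the pattern in a variable and expand it unquoted, so the shell turns tom* into the matching directory names. A self-contained sketch (F_Log and the actual move are omitted; the function name is made up for illustration):

```shell
#!/bin/bash
# Expand a glob stored in a variable by leaving the expansion unquoted.
list_writable() {
    local pattern="$1" p
    for p in ${pattern}; do           # unquoted on purpose: the glob expands here
        if [ ! -w "${p}" ]; then
            echo "Destination folder ${p} does not have WRITE permissions. Skipping." >&2
            continue
        fi
        echo "${p}"                   # writable destination, ready for the move
    done
}
```

With the example config, ${V_DestFolder} holds /app/archive/tom*/logs, and the unquoted expansion yields one path per matching cell (tom7dog, tom7cat, ...), each of which can then be checked and used as the move target.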
