Linux script for copying files from multiple Windows machines

Having an issue trying to make a bash script that reads IP addresses and usernames from a file, mounts the corresponding Windows share for each one, and then copies any .ano files into a new folder named after the user.
At the moment it doesn't quite work: it makes hundreds of folders called *.ano if it cannot find the Windows share.
Help please
Text file:
192.168.0.2 user1
192.168.0.3 user2
bash script:
USER='/home/user/user.ip'
IPADDY=$(grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $USER)
USERNAME=$(awk '{ print $NF }' $USER)
for i in $IPADDY $USERNAME
do
mkdir /home/user/Documents/$USERNAME
mount -t smbfs //$IPADDY/$USERNAME /home/user/$USERNAME
rsync -va /home/user/$USERNAME/*.ano /home/user/Documents/$USERNAME/*.ano
done
Hi all, thanks for such a quick reply. I have changed the code as follows, but I still get multiple files. Have I done something wrong here?
USER='/home/user/user.ip'
IPADDY=$(grep -o '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' $USER)
USERNAME=$(awk '{ print $NF }' $USER)
while read IPADDY USERNAME; do
mkdir /home/user/Documents/$USERNAME
mount -t smbfs //$IPADDY/$USERNAME /home/user/$USERNAME
rsync -va /home/user/$USERNAME/*.ano /home/user/Documents/$USERNAME/
done < $USER

The problem is in the for command. In your script, i iterates over the contents of $IPADDY, then it iterates over the contents of $USERNAME. Meanwhile, $USERNAME inside the loop gets expanded to user1 user2, resulting in:
mkdir /home/user/Documents/user1 user2
The mount line becomes:
mount -t smbfs //192.168.0.2 192.168.0.3/user1 user2 /home/user/user1 user2
And so on.
Rather, loop over the file itself:
while read IPADDY USERNAME; do
#awesome things here based on $IPADDY and $USERNAME
done < $USER
You might want to add [[ -z $IPADDY ]] && continue to skip over any possible blank lines in the file.

One problem is that you use a wildcard (*) for the destination files. But those files don't exist yet, so /home/user/Documents/$USERNAME/*.ano cannot match, and rsync will create a file literally named *.ano.
Better do:
rsync -va /home/user/$USERNAME/*.ano /home/user/Documents/$USERNAME/
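Putting both answers together, a minimal sketch of the whole script might look like this (the mkdir -p calls, the final umount, and the USERFILE name are illustrative additions; the original USER variable shadows the standard $USER environment variable, and on current kernels the filesystem type is usually cifs rather than smbfs):
#!/bin/bash
USERFILE='/home/user/user.ip'
while read -r IPADDY USERNAME; do
    [[ -z $IPADDY ]] && continue                # skip blank lines
    mkdir -p "/home/user/$USERNAME"             # mount point must exist
    mkdir -p "/home/user/Documents/$USERNAME"   # destination folder
    mount -t smbfs "//$IPADDY/$USERNAME" "/home/user/$USERNAME"
    # wildcard on the source side only; the destination is a directory
    rsync -va "/home/user/$USERNAME/"*.ano "/home/user/Documents/$USERNAME/"
    umount "/home/user/$USERNAME"
done < "$USERFILE"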

Related

Bash script to process two files and loop through to run a mountpoint check

I have two files: one contains host-names and the other contains Linux mount-point information, both of which I'm producing from the file mount.txt.
What I'm really looking for is to log in to each host and check whether the mount-points listed in the /tmp/mounts file exist on that host; if one exists, just do ls -ld mount-point, else skip it.
Being a novice, I'm somehow not able to work out how to do the mount-point check.
#!/bin/bash
REMOTE_HOSTS="/tmp/hosts"
REMOTE_MOUNTS="/tmp/mounts"
awk -F":" '{print $1}' mount.txt | sort -u > $REMOTE_HOSTS
awk '{print $3}' mount.txt | sort -u > $REMOTE_MOUNTS
for hosts in $(cat $REMOTE_HOSTS);
do
echo "------ $hosts ----"
ssh -o StrictHostKeyChecking=no -i /home/data/.ans root@$hosts
done
Side-Note: /home/data/.ans is my RSA key for root login.
Hostname File:
/tmp/hosts
my-hosts01
my-hosts02
Mount-point File:
/tmp/mounts
/data/oracle01
/data/oracle02
/data/oracle03
Please advise on how I could do that; sorry if I could not make it more readable.
You have to distinguish between a mount-point, which is simply a directory, and a mounted element, which can be a storage device or something else.
Knowing that :
if you want to check the mount-point existence, you simply have to check the directory : Check if a directory exists in a shell script
if you want to check if an element is mounted on the mounted point : Check if directory mounted with bash
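Putting those two checks together over ssh, a rough sketch using the question's files (mountpoint(1) comes from util-linux and may not be present on every target; grepping /proc/mounts is an alternative):
#!/bin/bash
REMOTE_HOSTS="/tmp/hosts"
REMOTE_MOUNTS="/tmp/mounts"
while read -r host; do
    echo "------ $host ----"
    while read -r mp; do
        # -n stops ssh from swallowing the rest of the mounts file;
        # -d checks the directory exists, mountpoint -q that it is mounted
        ssh -n -o StrictHostKeyChecking=no -i /home/data/.ans "root@$host" \
            "[ -d '$mp' ] && mountpoint -q '$mp' && ls -ld '$mp'"
    done < "$REMOTE_MOUNTS"
done < "$REMOTE_HOSTS"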

How to download files from my list with wget and ftp

I need to download only defined files with wget and ftp.
For example:
1. I retrieve all files using:
echo ls -R | ftp ftp://user:password@host > ./list.txt
2. Then I will parse the result and get a list with absolute paths for each file:
/path/to-the/file-1
/path/to-the/file-2
etc.
3. And now I need to download all files from the result list using wget and ftp.
And I don't want to create a separate FTP session for each file download process.
Please give your advice. Thank you.
Update:
For recursive download I'm using this: wget -r ftp://user:password@host:/ -nH -P /download/path. It works great, but I need to pass it a file with a list of remote files to download via FTP in one FTP session.
Sorry, I missed the "single session" part when I commented. I think you need to have your script generate a second script to run a single FTP session.
So your script will not do any FTP itself; it will just write another script that does the transfers, like this:
ftp -n <SOMEADDRESS> <<EOS
quote USER <USERNAME>
quote PASS <PASSWORD>
bin
get file1 localname1
get file2 localname2
...
get fileN localnameN
quit
EOS
Then it will execute that script, by doing:
bash < thatScript
So your script will look like this:
#!/bin/bash
ScriptName=funkyFTPer
cat - <<END > $ScriptName
ftp -n 192.168.0.1 <<EOS
quote USER freddy
quote PASS frog
bin
END
# Your selection code goes here ***PHNQZ***
echo get file1 localname1 >> $ScriptName
echo get file2 localname2 >> $ScriptName
echo get fileN localnameN >> $ScriptName
echo quit >> $ScriptName
echo EOS >> $ScriptName
echo "Now run bash < $ScriptName"
Then delete the script as it contains your password. Or you can put the password in your .netrc file.
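For reference, a .netrc entry for the example host above would look like this (the file lives in your home directory, must be chmod 600, and you would run ftp without the -n flag so auto-login is used):
machine 192.168.0.1
login freddy
password frog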
As regards creating directories locally, you can do that in the first script using mkdir -p. The -p has the advantage that it creates all directories in between in one go and doesn't get upset if they already exist.
So, just looking at the area of code where it says ***PHNQZ*** above, let's say your code decides you need file freddy/frog/c.txt, you could do:
remotename="freddy/frog/c.txt"
localdir=${remotename%/*} # Get just directory part using "bash Parameter Substitution"
mkdir -p "$localdir" # make directory and all parts in between

Creating a directory through a shell script and retrieving the path

I am trying to create a directory through a shell script. With some commands I am able to create a directory with permissions 777. Now I want to fetch the path of the created directory so that I can move a file into it.
Below is the code though which I am trying.
It will store datetime
NOW=$(date +"%Y-%m-%d")
It will store hostname
HOST=$(hostname -s)
Create directory with permission
LOG_DIRECTORY=$(mkdir -m 777 DIP_${HOST}_${NOW}_50users)
To fetch path
path="$(dirname /home/e250842/${LOG_DIRECTORY})";
And display path
echo "$path"
But the problem is that LOG_DIRECTORY is not a path. So please suggest some command to fetch the path, like /home/e250842/CreatedDirectoryName/.
An example is also helpful.
Thanks in Advance.
The problem is that mkdir prints nothing to stdout, so the command substitution captures an empty string and LOG_DIRECTORY ends up empty. You can change your code
LOG_DIRECTORY=$(mkdir -m 777 DIP_${HOST}_${NOW}_50users)
path="$(dirname /home/e250842/${LOG_DIRECTORY})"
to
LOG_DIRECTORY="DIP_${HOST}_${NOW}_50users"
mkdir -m 777 "${LOG_DIRECTORY}"
path="/home/e250842/${LOG_DIRECTORY}"
Or, building the absolute path up front:
now=$(date +"%Y-%m-%d")
host=$(hostname -s)
path=$(pwd)/DIP_${host}_${now}_50users
mkdir -m 777 "$path"
echo "$path"   # display the path

Copying syslog file to a new directory in Linux

I currently have an assignment to write a bash script that backs up log files (syslog, dmesg and message) to a new directory. I wrote my script like this:
cd /var/log
sudo cp syslog Assignment
The file "Assignment" is in my home directory. When I used the "ls" command in my Assignment folder, I don't find a copy of syslog in there. Can someone tell me where did I go wrong? Thanks in advance.
I think you mean the Assignment folder, not the Assignment file. Anyway, if you cd to /var/log, then when you do a cp in /var/log it will treat Assignment as local to /var/log. If you do an ls in /var/log now, you will see a copy of syslog called Assignment there. To get syslog copied to the Assignment folder in your home directory, you need to specify the absolute path, not a relative one. Use the tilde, ~, to refer to the home directory. So your script should say:
cd /var/log
sudo cp syslog ~/Assignment/
You can try this:
#!/bin/sh
if [ -z "$1" ]; then
    echo "Usage:"
    echo "$0 <directory_where_to_save_logs>"
    exit 1    # "return" is only valid inside a function or a sourced script
fi
if [ ! -d "$1" ]; then
    echo "Creating directory $1"
    mkdir "$1"
fi
cp /var/log/syslog* "$1"
cp /var/log/dmesg* "$1"
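Assuming that script is saved as backup_logs.sh (a name chosen here for illustration), it would be run as:
chmod +x backup_logs.sh
sudo ./backup_logs.sh ~/Assignment    # sudo because some logs are not world-readable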
Thanks

FTP File upload - script STUCK

I have a basic bash script that I'm using to copy a file and upload it to my FTP:
cp -i /var/mobile/file.db /var
cd /var
HOST=MYFTPHOST
USER=USERNAME
PASS=PASSWORD
ftp -inv $HOST << EOF
user $USER $PASS
cd websitefolder
put sms.db
bye
EOF
rm -f file.db
When I run the script, it saves the file to my FTP perfectly. But I'm running the script from different computers, so I'd like the script to upload file.db to my FTP under a new sequential name each time, like this:
file1.db
file2.db
file3.db
file4.db
Your question is a little unclear, but if I understand correctly, you're trying to name the database files in sequential order without overwriting any old files. You'll have to get the list of files from the FTP server in order to find out what files have already been uploaded.
This code will get the list of files from the server that begin with "file" and end with ".db", count them, then change the name of your "file.db" to "fileXX.db", where "XX" is the next number in the naming sequence (i.e. file1.db, file2.db, file3.db, etc).
I'm not sure where "sms.db" came from. I've changed it to "file.db" in the script.
cp -i /var/mobile/file.db /var
cd /var
HOST=MYFTPHOST
USER=USERNAME
PASS=PASSWORD
# Fetch the remote listing in its own session first; the command
# substitution runs locally, so it cannot be placed inside the heredoc.
LIST=$(ftp -inv $HOST << EOF
user $USER $PASS
cd websitefolder
ls
bye
EOF
)
# Count the existing file*.db entries to find the next free number.
FILECOUNT=0
for FILE in $LIST
do
    case $FILE in
        file*.db) FILECOUNT=$((FILECOUNT + 1)) ;;
    esac
done
FILECOUNT=$((FILECOUNT + 1))
NEXTDB="file$FILECOUNT.db"
mv file.db $NEXTDB
ftp -inv $HOST << EOF
user $USER $PASS
cd websitefolder
put $NEXTDB
bye
EOF
