CentOS - Bash script to initiate a file transfer from a directory - linux

I am trying to create a bash script to initiate a file transfer to another machine via a tftp app. Currently I do this manually by running the command ./tftp "filename" tftp://ipaddress/filename.
What I would like to do is have a bash script that looks at a folder, e.g. (filetransfer), for any files and initiates that same command. Can someone please help, as I am new to bash scripting?
So far I have tried the script below.
When I run it, it says that the filename is bad:
#!/bin/bash
for filename in ./*
do
./tftp "$filename" tftp://ipaddress/"$filename"
done
I also tried this.
When running the version below, it transfers everything in the directory below it:
#!/bin/bash
cd /path/to/the/directory/*
for i in *
do
./tftp "$i" tftp://ipaddress/"$i"
done

In the code you posted, filename (respectively i) can also take the name of a subdirectory, since you are looping over all entries in the directory. If you want to restrict the transfer to plain files, do a
[[ -f $filename ]] && ./tftp "$filename" tftp://ipaddress/"$filename"
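Putting it together, a minimal sketch of the whole script might look like this (the paths and ipaddress are placeholders for your setup; using a bare * glob instead of ./* also avoids the leading ./ that likely caused the "bad filename" error in your first attempt):
#!/bin/bash
# Sketch only: adjust the paths and the target host for your setup.
TFTP=/path/to/tftp                 # the binary you normally run as ./tftp
WATCH_DIR=/path/to/filetransfer    # the folder to watch for files to send
cd "$WATCH_DIR" || exit 1
for filename in *
do
    # Skip anything that is not a regular file (subdirectories, etc.)
    [[ -f $filename ]] || continue
    "$TFTP" "$filename" "tftp://ipaddress/$filename"
done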

Related

sftp upload bash ignore files

I am trying to achieve a semi-automatic sftp upload/deployment. The key point is NOT to upload all files: I do not know which files to upload, but I know which files not to upload.
My bash script looks like:
#!/bin/bash
IP="123.123.123.123"
HOSTNAME="ftp.my-host.com"
PATH="subdirectory"
sftp username@$HOSTNAME:$PATH < "sftp-pattern"
In the sftp-pattern file I want to store my sftp commands, but I could not find any hints on how to ignore several patterns, like *.sql.
Ideally I'd ignore everything that is gitignored.
I do NOT have an ssh connection.
Since you are using a shell script, you could use a loop. Something like this should work.
#!/bin/bash
IP="123.123.123.123"
HOST="ftp.my-host.com"
DIR="/tmp/"
for f in `/bin/ls $DIR`
do
if echo "$f" | /usr/bin/grep '\.sql$' > /dev/null
then
echo SKIPPING $f
else
sftp username@$HOST:$DIR/$f
fi
done
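Another option, sketched below under the assumption that your sftp supports batch files (the -b option of OpenSSH sftp), is to generate the batch file on the fly and only emit put commands for the files you want to upload (HOST, username and the directories are placeholders):
#!/bin/bash
# Sketch only: builds an sftp batch file that skips *.sql files.
HOST="ftp.my-host.com"
REMOTE_DIR="subdirectory"
BATCH=$(mktemp)
for f in *
do
    [ -f "$f" ] || continue
    case "$f" in
        *.sql) echo "SKIPPING $f" ;;
        *)     echo "put $f $REMOTE_DIR/$f" >> "$BATCH" ;;
    esac
done
sftp -b "$BATCH" username@"$HOST"
rm -f "$BATCH"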
The answer should be git-ftp, as @Clijsters mentioned. It is more feature-rich and needs no fiddling around with loops and pipes.
dwright's solution works, though, if it is ONLY .sql files that you want to exclude.
Thanks!!

Change directory to path of parent/calling script in bash

I have dozens of scripts, all in different directories. (exported/expanded Talend jobs)
At this moment each job has 1 or 2 scripts, starting with the same lines, the most important one being:
CD ***path-to-script***
and several lines to set the Java path and start the job.
I want to create a script which will be run from all these scripts.
e.g.:
/scripts/talend.sh
And in all Talend scripts, the first line will run /scripts/talend.sh. Some examples of where these scripts are run from:
/talend-job1_0.1/talend-job1_0.1/talend-job1/talend-job1.sh
/talend-task2_0.1/talend-task2_0.1/talend-task2/talend-task2.sh
/talend-job3_0.1/talend-job3_0.1/talend-job3/talend-job3.sh
How can I determine where /scripts/talend.sh is started from, so I can cd to that path from within /scripts/talend.sh?
The Talend scripts are not run from within the directory itself, but from a cronjob, or a different users home directory.
EDIT:
The question was marked as duplicate, but Getting the source directory of a Bash script from within is not answering my question 100%.
Problem is:
- The basic script is being called from different scripts
- Those different scripts can be run from the command line, with or without a symbolic link.
- $0, $BASH_SOURCE and pwd each do part of the job, but no solution mentioned covers all the difficulties.
Example:
/scripts/talend.sh
In this script I want to configure the $PATH and $HOME_PATH of Java, and CD to the place where the Talend job is placed. (It's a package, so that script MUST be run from that location).
Paths to the jobs are, for example:
/u/talend/talendjob1/sub../../talendjob1.sh
/u/talend/talendjob2/sub../../talendjob2.sh
/u/talend/talendjob3/sub../../talendjob3.sh
Multiple jobs are run from a TMS application. This application cannot run these scripts by their full name (too long; a name can only be 6 characters), so in a different location I have symbolic links:
/u/tms/links/p00001 -> /u/talend/talendjob1/sub../../talendjob1.sh
/u/tms/links/p00002 -> /u/talend/talendjob1/sub../../talendjob2.sh
/u/tms/links/p00003 -> /u/talend/talendjob1/sub../../talendjob3.sh
/u/tms/links/p00004 -> /u/talend/talendjob1/sub../../talendjob4.sh
I think this gives you an overview of the complexity and why I want only one basic Talend script where I can keep all the common stuff. But I can only do that if I know the location of the calling Talend script, because that is where I have to be to start the Talend job.
These answers (beyond the first) are specific to Linux, but should be very robust there -- working with directory names containing spaces, literal newlines, wildcard characters, etc.
To change to your own source directory (a FAQ covered elsewhere):
cd "$(basename "$BASH_SOURCE")"
To change to your parent process's current directory:
cd "/proc/$PPID/cwd"
If you want to change to the directory passed as the first command-line argument to your parent process:
{ IFS= read -r -d '' _ && IFS= read -r -d '' argv1; } <"/proc/$PPID/cmdline"
cd "$argv1"
That said, personally, I'd just export the job directory as an environment variable in the parent process, and read that environment variable in the children. Much, much simpler, more portable, more accurate, and in line with best practice.
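A minimal sketch of that environment-variable approach (JOB_DIR is a hypothetical name; use whatever fits your naming scheme):
# In the parent (Talend job) script:
export JOB_DIR="/talend-job1_0.1/talend-job1_0.1/talend-job1"
/scripts/talend.sh

# In /scripts/talend.sh:
cd "${JOB_DIR:?JOB_DIR is not set}" || exit 1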
You can store the output of pwd in a variable and then cd to it when you want to go back.
This works for me:
In
/scripts/talend.sh
do
cd "${1%/*}"
${1%/*} will strip off everything after the last /, effectively providing a dirname for $1, which is the path to the script that calls this one.
and then call the script with the line:
/scripts/talend.sh "$0"
Calling the script with $0 passes the name of the current script as an argument to the child which as shown above can be used to cd to the correct directory.
When you source /scripts/talend.sh, the current directory is unchanged, so the sourced script sees the directory the job was started from:
The scripts
# cat /scripts/talend.sh
echo "Talend: $(pwd)"
# cat /talend-job1_0.1/talend-job1_0.1/talend-job1/talend-job1.sh
echo Job1
. /scripts/talend.sh
Executing job1:
# cd /talend-job1_0.1/talend-job1_0.1
# talend-job1/talend-job1.sh
Job1
Talend: /talend-job1_0.1/talend-job1_0.1
When you want to see the dir that the calling script is in, see get dir of script.
EDIT:
When you want to have the path of the calling script (talend-job1.sh) without having to cd to that dir first, you should get the dir of the script (see link above) and source talend.sh:
# cat /scripts/talend.sh
cd "$( dirname "${BASH_SOURCE[0]}" )"
echo "Talend: $(pwd)"
In talend.sh get the name of the calling script and then the directory:
parent_cmd=$(ps -o args= $PPID)
set -- $parent_cmd
parent_cmd=$(dirname $2)
Update: as pointed out by Charles Duffy in the comments below, this will cause havoc when used with paths containing whitespace or glob patterns.
If procfs is available you could read the contents of /proc/$PPID/cmdline, or, if portability is a concern, do better parsing of the args.
In /scripts/talend.sh:
cd "$(dirname "$0")"
Or:
cd "$(dirname "$BASH_SOURCE")"
Another one is:
cd "$(dirname "$_")"
#This must be the first line of your script after the shebang line
#Otherwise don't use it
Note: The most reliable of the above is $BASH_SOURCE
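If you also need the directory as an absolute path (for example to remember it before cd'ing somewhere else), a common sketch is:
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cd "$SCRIPT_DIR"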

How can I automatically rename, copy and delete files in linux for my ip camera webcam?

I have an IP camera that automatically FTPs images every few seconds to a directory on my Ubuntu Linux web server. I'd like to make a simple webcam page that references a static image and just refreshes every few seconds. The problem is that my IP camera's firmware automatically names every file with a date_time.jpg type filename, and does not have an option to overwrite the same file name over and over.
I'd like to have a script running on my Linux machine to automatically copy any new file that has been FTP'd into a directory to a different directory, renaming it in the process, and then delete the original.
Regards,
Glen
I made a quick script; you would need to uncomment the rm -f line to make it actually delete and move things :)
It currently prints the command it would have run, so you can test with higher confidence.
You also need to set the WORK_DIR and DEST_DIR variables near the top of the script.
#!/bin/bash
#########################
# configure vars
YYYYMMDD=`date +%Y%m%d`
WORK_DIR=/Users/neil/linuxfn
DEST_DIR=/Users/neil/linuxfn/dest_dir
##########################
LATEST=`ls -tr $WORK_DIR/$YYYYMMDD* 2>/dev/null | tail -1`
echo "rm -f $DEST_DIR/image.jpg ; mv $LATEST $DEST_DIR/image.jpg"
#rm -f $DEST_DIR/image.jpg ; mv $LATEST $DEST_DIR/image.jpg
This gives me the following output when I run it on my laptop:
mba1:linuxfn neil$ bash renamer.sh
rm -f /Users/neil/linuxfn/dest_dir/image.jpg ; mv /Users/neil/linuxfn/20150411-2229 /Users/neil/linuxfn/dest_dir/image.jpg
Inotify (http://en.wikipedia.org/wiki/Inotify) can be set up to do as you ask, but it would probably be better to use a simple web script (PHP, Python, Perl, etc.) to serve the latest file from the directory, instead.
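If you do go the inotify route, a minimal sketch using inotifywait from the inotify-tools package (the two directories are placeholders for your FTP upload folder and your web root) could look like this:
#!/bin/bash
# Sketch only: requires inotify-tools to be installed.
WORK_DIR=/path/to/ftp/upload/dir
DEST_DIR=/path/to/webroot/webcam
inotifywait -m -e close_write --format '%f' "$WORK_DIR" |
while read -r newfile
do
    # Each finished upload replaces the single static image the page references
    mv "$WORK_DIR/$newfile" "$DEST_DIR/image.jpg"
done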

How to SCP files which are being FTPed by another process & delete them on completion?

Files are being transferred to a directory on my machine by FTP protocol. I need to SCP these files to another machine & delete them on completion.
How can I detect whether the file transfer by FTP has finished and the file is safe to SCP?
There's no reliable way to detect completion of the transfer. Some clients send the ALLO command and pass the size of the file before actually uploading it, but this is not a universal rule, so you can't rely on it. All in all, it's possible that the client streams the data and there's no definite "end" of the file on its side.
If the client is under your control, you can make it upload files with extension A and after upload rename the files to extension B. And then you transfer only files with extension B.
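A sketch of what the receiving side of that pattern could look like (the .part/.done extensions, paths and host are hypothetical; the client is assumed to upload foo.part and rename it to foo.done once the upload has finished):
#!/bin/bash
INCOMING=/path/to/ftp/incoming
for f in "$INCOMING"/*.done
do
    [ -e "$f" ] || continue          # no completed uploads right now
    # Copy the finished file to the other machine, then remove it locally
    scp "$f" user@remotehost:/remote/path/ && rm -- "$f"
done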
You can do a script like this:
#!/bin/bash
EXPECTED_ARGS=1
E_BADARGS=65
#Arguments control
if [ $# -lt $EXPECTED_ARGS ]
then
echo "Usage: `basename $0` <folder_update_1> <folder_update_2> <folder_update_3> ..."
exit $E_BADARGS
fi
folders=( "$#" )
for folder in ${folders[#]}
do
#Send folder or file to new machine
time rsync --update -avrt -e ssh /local/path/of/$folder/ user@192.168.0.10:/remote/path/of/$folder/
#Delete local file or folder
rm -r /local/path/of/$folder/
done
It is configured to send folders. If you want to send individual files instead, make small changes to the script, such as:
time rsync --update -avrt -e ssh /local/path/of/$file user@192.168.0.10:/remote/path/of/$file
rm /local/path/of/$file
rsync is similar to scp. I prefer to use rsync, but you can change it.

Need to monitor directory change, and perform action

First of all: I'm not a programmer, nor a Linux guru; I just have to work with Linux, Oracle, and shell scripts.
My current task is to monitor a table in Oracle (tool: sqlplus), and if it contains a certain row, watch a Linux directory for a growing tmp file and log its attributes (e.g. ls -l) every 5 seconds.
The most important part is: this tmp file will be deleted if the above record is deleted from the oracle table, and I need the last contents of this tmp file.
I can't control the Oracle data, just got query rights.
The available tools are: bash, awk, sed, some old version of perl, ruby (not 1.9*), and python (2.5). I don't have install rights, so most outside libraries are not accessible. I know I can run some libraries from my $HOME, but I don't have an internet connection on that machine, so I can't download any libraries.
Inotify is not available (older kernel).
Any idea where to start/how to do it? Thanks in advance.
How about creating a hard link in another directory? Then, when the file "disappears" from the original location, the hard link will still give you access to the content.
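For example (paths are placeholders; both directories must be on the same filesystem for a hard link to work):
# Create a second directory entry pointing at the same data
ln /path/to/watched/file.tmp /path/to/backup/file.tmp
# Even after the original is deleted, the content is still readable here:
cat /path/to/backup/file.tmp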
This is ugly and naive... but...
#!/bin/bash
WASTHERE=0
MONITORING=/tmp/whatever.dat
LASTBACKUP=/tmp/mybackup.dat
LOGFILE=/tmp/mylog.log
# Just create an empty file to start with
touch "$LASTBACKUP"
while [ 1 ];
do
if [[ ! -e "$MONITORING" ]]; then
if [[ $WASTHERE -ne 0 ]]; then
echo "File is gone! Do something with $LASTBACKUP";
WASTHERE=0
fi
else
WASTHERE=1
ls -l "$MONITORING" >> $LOGFILE
cp "$MONITORING" "$LASTBACKUP"
fi
sleep 5
done
The unfortunate part about this is that if anything happens to the file being 'monitored' while the script is sleeping (content is written to it, for example) and the file is then deleted before the script wakes up, the newly written content will not be in the 'backup.'
