BASH: script doesn't recognize the files on the fly - linux

My script should do a simple trick: unpack files from a tar archive as 'root', and process them on the fly as a different user. Something like this:
./script # starts with 'root' user
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#!/bin/bash
TAR="some.tar"
FILES=$(find /home/user \( -name "xxx*.yyy" \) | sort -n | tail -n 3)
tar -xpf $TAR --overwrite --same-owner -P
sudo -u user bash <<- EOF # switch to 'user'
sh process_unpacked_$FILES.sh
EOF
It works, however it doesn't catch the freshly unpacked files on the fly (they do appear in the directory). I mean, if I execute the same script again, once the files are already unpacked and exist in the directory, the script goes through and finishes the trick.
The files in the tar belong to 'user'. How can I make the script work in a single execution?
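A likely cause (my assumption, based only on the snippet above) is that FILES is computed before tar has extracted anything, so it only ever sees files left over from a previous run. A minimal sketch that simply reorders the steps and keeps everything else from the original script:

#!/bin/bash
TAR="some.tar"
# extract first, so the new files exist before we look for them
tar -xpf "$TAR" --overwrite --same-owner -P
# now pick the three newest matching files
FILES=$(find /home/user -name "xxx*.yyy" | sort -n | tail -n 3)
sudo -u user bash <<- EOF # switch to 'user'
sh process_unpacked_$FILES.sh
EOF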

Related

Is it possible in this command to cd into the directory that's printed in the output

When I do ls | grep -e *-folder1, it prints my-folder1, which is the name of the matching folder in the current directory.
Is there a way I can add something like cd to change into this directory? This is more of an attempt to learn Bash and Linux commands than about accomplishing a specific task.
You could do
ls | grep -- -folder1 | while read -r dir
do
cd "$dir"
# do things in $dir
done
# do things in the original directory
but parsing the output of ls is not recommended. You could instead use globbing:
for dir in *-folder1*
do
cd "$dir"
# do things in $dir
cd .. # need to back out again
done
# do things in the original directory
If the purpose isn't to grep on all folders matching a certain pattern and to cd down into each one of them, but to simply cd into a directory ending with -folder1, then:
cd *-folder1
If you get zero or multiple hits, cd will show an error.
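If you want to guard against that case first, something like this could work (a small sketch, assuming bash; the matches array name is just illustrative):

shopt -s nullglob                # an unmatched pattern expands to nothing
matches=( *-folder1 )
if [ "${#matches[@]}" -eq 1 ]; then
    cd "${matches[0]}"
else
    echo "expected exactly one *-folder1 directory, found ${#matches[@]}" >&2
fi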

Linux: How to compress approximately 3TB sized folder

I need to back up a folder containing more than 2500 sub-folders, around 3TB in size, and then FTP it to a Windows-based FTP server.
As far as I know, the tar command is not able to do so.
So, is it possible to create a script to "tar" each 500 sub-folders?
If yes, please share your command.
BTW: one way I would do it is just using mput in ftp, without compression:
touch ftp_temp.sh
chmod +x ftp_temp.sh
vim ftp_temp.sh :
#!/bin/bash
HOST='10.20.30.40'
USER='ftpuser'
PASSWD='ftpuserpasswd'
ftp -n -v $HOST << EOT
ascii
user $USER $PASSWD
prompt
cd upload
mput
bye
EOT
The following pattern may give you an idea:
find . -maxdepth 1 -type d -print0 | xargs -0 -n 500 echo
find will list the names of all directories in the current directory, while xargs will pass up to 500 of them at a time to another command (in the example above, the echo command).
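To turn that pattern into actual archives, one option is to split the directory list into groups of 500 and feed each group to tar (a rough sketch assuming GNU tools and directory names without embedded newlines; the chunk_ and backup_chunk_*.tar.gz names are made up):

#!/bin/bash
# write the top-level sub-directory names into list files of 500 entries each
find . -maxdepth 1 -type d ! -name . -printf '%P\n' | sort | split -l 500 - chunk_
# create one compressed archive per list file
for list in chunk_*; do
    tar -czf "backup_${list}.tar.gz" -T "$list"
done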

Running a script in terminal using Linux

I'm trying to run this script in the terminal but it's not working and says permission denied. scriptEmail is the filename.
% find . -type d -exec ./scriptEmail {} \;
scriptEmail is written as follows:
# !/bin/bash
# Mail Script
find gang-l -type f -name "*" -exec sh -c ' file = "$0" java RemoveHeaders "$file" > processed/$file ' {} ';'
My read write permission
-rwxr-xr-x
As for permissions:
Check that your shebang is at the very top of your file, and that it starts exactly with #!; # ! will not work.
Check that your files are given execute permissions; chmod 750 scriptEmail will do.
Check that your file uses UNIX newlines -- with DOS newlines, your shebang may have a hidden carriage return making it point to an interpreter which doesn't actually exist (see the snippet after this list).
Check that your file is stored in a directory where executable scripts are allowed (not mounted with the noexec flag, or in an SELinux context disallowing execution).
If your mount point is noexec or your ability to create executable scripts is blocked by SELinux or similar, then use find . -type d -exec bash ./scriptEmail {} \; to explicitly specify an interpreter rather than attempting to execute your script.
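For the newline check above, a quick way to detect and strip DOS line endings (a sketch assuming GNU tools; dos2unix would also work if it is installed):

file scriptEmail               # reports "with CRLF line terminators" if DOS newlines are present
sed -i 's/\r$//' scriptEmail   # strip trailing carriage returns in place (GNU sed)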
Second: Since you're executing your script with find already -- and using that to recurse through directories -- you don't need a second find inside (which would have you potentially operating on processed/dirA/dirB/file as well as processed/dirB/file and processed/file -- with errors for all of these where the directory doesn't exist).
#!/bin/sh
cd "$1" || exit # if we can't cd to directory given in argument, exit.
mkdir -p processed || exit # if we can't create our output directory, exit.
for f in *; do                               # ...iterate through all directory contents...
    [ -f "$f" ] || continue                  # ...if they aren't files, skip them...
    java RemoveHeaders "$f" >processed/"$f"  # run the processing for one item
done
Try
sudo find . -type d -exec ./scriptEmail {} \;

Execute multiple commands on target files from find command

Let's say I have a bunch of *.tar.gz files located in a hierarchy of folders. What would be a good way to find those files, and then execute multiple commands on it.
I know if I just need to execute one command on the target file, I can use something like this:
$ find . -name "*.tar.gz" -exec tar xvzf {} \;
But what if I need to execute multiple commands on the target file? Must I write a bash script here, or is there any simpler way?
Samples of commands that need to be executed on an A.tar.gz file:
$ tar xvzf A.tar.gz # assume it untars to folder logs
$ mv logs logs_A
$ rm A.tar.gz
Here's what works for me (thanks to Etan Reisner's suggestions):
#!/bin/bash
# the target folder (to search for tar.gz files) is passed on the command line
find "$1" -name "*.tar.gz" -print0 | while IFS= read -r -d '' file; do # this reads each tar.gz file into the shell variable `file`
    echo "$file" # then we can do everything with the `file` variable
    tar xvzf "$file"
    # mv untar_folder "$file".suffix # untar_folder is the name of the folder after untarring
    rm "$file"
done
As suggested, the array approach below is unsafe if a file name contains spaces, and it also doesn't seem to work properly in this case.
Writing a shell script is probably easiest. Take a look at sh for loops. You could use the output of a find command in an array, and then loop over that array to perform a set of commands on each element.
For example,
arr=( $(find . -name "*.tar.gz" -print0) )
for i in "${arr[#]}"; do
# $i now holds each of the filenames output by find
tar xvzf $i
mv $i $i.suffix
rm $i
# etc., etc.
done
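Another option that avoids a separate script file is to run an inline shell from find itself (a sketch assuming the layout from the question: each archive unpacks to a folder called logs in the current directory):

find . -name "*.tar.gz" -exec sh -c '
    for f do
        tar xvzf "$f"                            # assume it untars to a folder named "logs"
        mv logs "logs_$(basename "$f" .tar.gz)"  # e.g. logs -> logs_A for A.tar.gz
        rm "$f"
    done
' sh {} +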

Using a while/for loop with the 'find' command to copy files and directories

I have the following problem: I want to make a script that backs up a certain directory completely to another directory. I may not use cp -r or any other recursive command, so I was thinking of using a while or for loop. The directory that needs to be backed up is given as a parameter. This is what I have so far:
OIFS="$IFS"
IFS=$'\n'
for file in `find $1`
do
cp $file $HOME/TestDirectory
done
IFS="$OIFS"
But when I execute it, this is what my terminal says: Script started, file is typescript
Try this:
find "$1" -type f -print0 | xargs -0 cp -t $HOME/TestDirectory
Don't run your script through script!
Add this (shebang) at top of file:
#!/bin/bash
find "$1" -type f -print0 | xargs -0 cp -t $HOME/TestDirectory
Change the permissions of your script to add the executable flag:
chmod +x myScript
Run your script locally with arguments:
./myScript rootDirectoryWhereSearchForFiles
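Note that both versions copy every file flat into TestDirectory, so files with the same name in different sub-directories overwrite each other. If the backup should mirror the directory layout, a loop like this could do it (a sketch assuming the script is called with a relative path and the same $HOME/TestDirectory target):

#!/bin/bash
src="$1"
dest="$HOME/TestDirectory"

# recreate the directory tree first
find "$src" -type d -print0 | while IFS= read -r -d '' dir; do
    mkdir -p "$dest/$dir"
done

# then copy each file into its mirrored location
find "$src" -type f -print0 | while IFS= read -r -d '' file; do
    cp "$file" "$dest/$file"
done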
