Need to run a command in the current directory only if the file is not executable - linux

Here is the problem:
Use a bash for loop which loops over files in your current directory that contain the strings "osl-guest" and ".tar.gz" (using the ‘ls’ command, see sample output below), and runs the command ‘tar -zxf’ on each file individually ONLY IF the file is not executable. For example, to run the ‘tar -zxf’ command on the file ‘file1’, the command would be: tar -zxf file1
Sample output of "ls -l":
-rw-r--r-- 1 lance lance 42866 Nov 1 2011 vmlinuz-2.6.35-gentoo-r9-osl-guest_i686.tar.gz
-rwxr-xr-x 1 lance lance 42866 Nov 1 2011 vmlinuz-3.4.5-gentoo-r3-osl-guest_i686.tar.gz
-rw-r--r-- 1 lance lance 42866 Nov 1 2011 vmlinuz-3.5.3-gentoo-r2-osl-guest_i686.tar.gz

You can perform the loop in the following way, without the need to call ls:
# For each file matching the pattern
for f in *osl-guest*.tar.gz; do
    # If the file is not executable
    if [[ ! -x "$f" ]]; then
        tar -zxf "$f"
    fi
done
The *osl-guest*.tar.gz simply uses shell expansion in order to get the list of files you want, rather than making a call to ls.
The if statement checks whether the file is executable: -x is the test for an executable file, and the ! negates the result, so the if block is entered only when the file is not executable.
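One caveat: if nothing matches the pattern, the loop body runs once with the literal string *osl-guest*.tar.gz as $f. A minimal hardening sketch (an addition, not part of the original answer):
shopt -s nullglob    # make an unmatched pattern expand to nothing
for f in *osl-guest*.tar.gz; do
    [[ -x "$f" ]] || tar -zxf "$f"
done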

Related

Linux - Sum total of files in different directories

How do I calculate the sum total size of multiple files located in different directories?
I have a text file containing the full path and name of the files.
I figure a simple script using while read line and du -h might do the trick...
Example of text file (new2.txt) containing list of files to sum:
/mount/st4000/media/A/amediafile.ext
/mount/st4000/media/B/amediafile.ext
/mount/st4000/media/C/amediafile.ext
/mount/st4000/media/D/amediafile.ext
/mount/st4000/media/E/amediafile.ext
/mount/st4000/media/F/amediafile.ext
/mount/st4000/media/G/amediafile.ext
/mount/st4000/media/H/amediafile.ext
/mount/st4000/media/I/amediafile.ext
/mount/st4000/media/J/amediafile.ext
/mount/st4000/media/K/amediafile.ext
Note: the folder structure is not necessarily consecutive as in A..K
Based on the suggestion from AndreaT, adapting it slightly, I tried
while read -r mediafile; do du -b "$mediafile" | cut -f1 >> subtotals.txt; done < new2.txt
subtotals.txt looks like
733402685
944869798
730564608
213768
13332480
366983168
6122559750
539944960
735039488
1755005744
733478912
To add up all the subtotals:
sum=0; while read num; do ((sum += num)); done < subtotals.txt; echo $sum
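If you prefer a single pass, awk can read and sum in one go (an alternative sketch, not from the original answer):
awk '{ sum += $1 } END { print sum }' subtotals.txt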
Assuming that the input file is like this:
/home/administrator/filesum/cliprdr.c
/home/administrator/filesum/cliprdr.h
/home/administrator/filesum/event.c
/home/administrator/filesum/event.h
/home/administrator/filesum/main.c
/home/administrator/filesum/main.h
/home/administrator/filesum/utils.c
/home/administrator/filesum/utils.h
and the result of the command ls -l is:
-rw-r--r-- 1 administrator administrator 13452 Oct 4 17:56 cliprdr.c
-rw-r--r-- 1 administrator administrator 1240 Oct 4 17:56 cliprdr.h
-rw-r--r-- 1 administrator administrator 8141 Oct 4 17:56 event.c
-rw-r--r-- 1 administrator administrator 2164 Oct 4 17:56 event.h
-rw-r--r-- 1 administrator administrator 32403 Oct 4 17:56 main.c
-rw-r--r-- 1 administrator administrator 1074 Oct 4 17:56 main.h
-rw-r--r-- 1 administrator administrator 5452 Oct 4 17:56 utils.c
-rw-r--r-- 1 administrator administrator 1017 Oct 4 17:56 utils.h
the simplest approach is to hand the whole list to a single du call. Note that du does not read file names from stdin, so pipe the list through xargs (GNU xargs shown, with newline delimiters to tolerate spaces in names) rather than through cat:
xargs -d '\n' du -cb < filelist.txt | tail -1 | cut -f1
with the following output (in bytes):
69370
Keep in mind that du prints actual disk usage rounded up to a multiple of (usually) 4 KB instead of the logical file size.
For small files this approximation may not be acceptable.
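You can see the difference with a tiny file (the file name here is hypothetical, not from the original answer):
printf 'x' > tiny.txt    # a 1-byte file
du tiny.txt              # disk usage in 1K blocks, typically 4
du -b tiny.txt           # apparent size in bytes: 1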
To sum one directory, you will have to use a while loop and export the result to the parent shell.
I used an echo and a subsequent eval:
eval 'let sum=0$(
    ls -l | tail -n +2 |
    while read perms link user uid size date day hour name; do
        echo -n "+$size"
    done
)'
It produces a line, directly evaluated, which looks like
let sum=0+205+1201+1201+1530+128+99
You just have to run this command once in each of the two folders.
The du command doesn't have a -b option on the unix systems I have available. And there are other ways to get file size.
Assuming you like the idea of a while loop in bash, the following might work:
#!/bin/bash
case "$(uname -s)" in
    Linux) stat_opt=(-c '%s') ;;
    *BSD|Darwin) stat_opt=(-f '%z') ;;
    *) printf 'ERROR: I don'\''t know how to run on %s\n' "$(uname -s)"; exit 1 ;;
esac
declare -i total=0
declare -i count=0
declare filename
while read -r filename; do
    [[ -f "$filename" ]] || continue
    (( total += $(stat "${stat_opt[@]}" "$filename") ))
    (( count++ ))
done
printf 'Total: %d bytes in %d files.\n' "$total" "$count"
This takes your list of files on stdin. You can run it on BSD unix or on Linux -- the options to the stat command (which is not a bash builtin) are the only platform-specific bit.
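For example, assuming the script is saved as sumfiles.sh (a hypothetical name) and the file list is in new2.txt:
bash sumfiles.sh < new2.txt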

Bash: Running one command after another using string variable

I understand that in bash, running one command after another is done using:
command1 && command2
or
command1; command2
or even
command1 & command2
I also understand that a command stored in a bash variable can be run by simply expanding the variable:
TestCommand="ls"
$TestCommand
Doing the above will list all the files in the directory and I have tested that it does.
But doing the same with multiple commands generates an error. Sample below:
TestCommand="ls && ls -l"
$TestCommand
ls: cannot access &&: No such file or directory
ls: cannot access ls: No such file or directory
My question is: why is this happening, and is there any workaround?
And before you bash me for doing something so stupid: the preceding is just to present the problem. I have a list of files in my directory and I am using sed to convert the list into a single executable string. Storing that string in a bash variable, I am trying to run it but failing.
When you put two commands in a single string variable, the string is executed as a single command. So when you use "$TestCommand" to execute two ls commands, only the first ls actually runs; && and the second ls are treated as arguments to that first ls.
Since your current working directory has no files named && or ls, it returns the errors:
ls: cannot access &&: No such file or directory
ls: cannot access ls: No such file or directory
So, basically, your command behaves like this:
ls file1 file2 -l
and it will give you output like this if file1 and file2 exist:
HuntM#~/scripts$ ls file1 file2 -l
-rw-r--r-- 1 girishp staff 0 Dec 8 12:44 file1
-rw-r--r-- 1 girishp staff 0 Dec 8 12:44 file2
Now for your solution:
You can create a function or another script to execute the two commands, as below:
caller.sh
#!/bin/bash
myLs=`./myls.sh`
echo "$myLs"
myls.sh
#!/bin/bash
ls && ls -l
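For completeness, the usual direct workaround (not part of the answer above) is to have a shell re-parse the string, for example with eval or bash -c; be aware that both execute whatever the variable contains, so only use them on trusted input:
TestCommand="ls && ls -l"
eval "$TestCommand"       # the current shell re-parses the string, so && works
bash -c "$TestCommand"    # same idea, in a child shell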

How do I find the latest date folder in a directory and then construct the command in a shell script?

I have a directory which will contain some folders named in date format (YYYYMMDD), as shown below -
david#machineX:/database/batch/snapshot$ ls -lt
drwxr-xr-x 2 app kyte 86016 Oct 25 05:19 20141023
drwxr-xr-x 2 app kyte 73728 Oct 18 00:21 20141016
drwxr-xr-x 2 app kyte 73728 Oct 9 22:23 20141009
drwxr-xr-x 2 app kyte 81920 Oct 4 03:11 20141002
Now I need to extract the latest date folder from the /database/batch/snapshot directory and then construct the command in my shell script like this -
./file_checker --directory /database/batch/snapshot/20141023/ --regex ".*.data" > shardfile_20141023.log
Below is my shell script -
#!/bin/bash
./file_checker --directory /database/batch/snapshot/20141023/ --regex ".*.data" > shardfile_20141023.log
# now I need to grep shardfile_20141023.log after above command is executed
How do I find the latest date folder and construct the above command in a shell script?
Here is one approach: just grep the entries whose names contain 8 digits:
ls -t1 | grep -P -e "\d{8}" | head -1
Or
ls -t1 | grep -E -e "[0-9]{8}" | head -1
You could try the following in your script:
pushd /database/batch/snapshot
LATESTDATE=$(ls -d * | sort -n | tail -1)
popd
./file_checker --directory /database/batch/snapshot/${LATESTDATE}/ --regex ".*.data" > shardfile_${LATESTDATE}.log
See BashFAQ#099 aka "How can I get the newest (or oldest) file from a directory?".
That being said, if you don't care for actual modification time and just want to find the most recent directory based on name you can use an array and globbing (note: the sort order with globbing is subject to LC_COLLATE):
$ find
.
./20141002
./20141009
./20141016
./20141023
$ foo=( * )
$ echo "${foo[${#foo[@]}-1]}"
20141023
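On bash 4.3 and newer you can also index from the end, which avoids the length arithmetic (a small addition, not in the original answer):
$ echo "${foo[-1]}"
20141023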

rsync prints "skipping non-regular file" for what appears to be a regular directory

I back up my files using rsync. Right after a sync, I ran it expecting to see nothing, but instead it looked like it was skipping directories. I've (obviously) changed names, but I believe I've still captured all the information I could. What's happening here?
$ ls -l /source/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ ls -l /destination/backup/myfiles
drwxr-xr-x 2 me me 4096 2010-10-03 14:00 foo
drwxr-xr-x 2 me me 4096 2011-08-03 23:49 bar
drwxr-xr-x 2 me me 4096 2011-08-18 18:58 baz
$ file /source/backup/myfiles/foo
/source/backup/myfiles/foo/: directory
Then I sync (expecting no changes):
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
And here's the weird part:
$ echo 'hi' > /source/backup/myfiles/foo/test
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
backup/myfiles/foo
backup/myfiles/foo/test
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
So it worked:
$ ls -l /source/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
$ ls -l /destination/backup/myfiles/foo
-rw-r--r-- 1 me me 3126091 2010-06-15 22:22 IMGP1856.JPG
-rw-r--r-- 1 me me 3473038 2010-06-15 22:30 P1010615.JPG
-rw-r--r-- 1 me me 3 2011-08-24 13:53 test
but still:
$ rsync -rtvp /source/backup /destination
sending incremental file list
backup/myfiles
skipping non-regular file "backup/myfiles/foo"
skipping non-regular file "backup/myfiles/bar"
Other notes:
My actual directories "foo" and "bar" do have spaces, but no other strange characters. Other directories have spaces and have no problem. I 'stat'-ed and saw no differences between the directories that don't rsync and the ones that do.
If you need more information, just ask.
Are you absolutely sure those individual files are not symbolic links?
Rsync has a few useful flags such as -l which will "copy symlinks as symlinks". Adding -l to your command:
rsync -rtvpl /source/backup /destination
I believe symlinks are skipped by default because they can be a security risk. Check the man page or --help for more info on this:
rsync --help | grep link
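This should print, among other lines, something like:
 -l, --links                 copy symlinks as symlinks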
To verify that these are symbolic links, or to proactively search for symbolic links, you can use file or find:
$ file /path/to/file
/path/to/file: symbolic link to `/path/file`
$ find /path -type l
/path/to/file
Are you absolutely sure that it's not a symbolic link directory?
Try:
file /source/backup/myfiles/foo
to make sure it's really a directory.
Also, it could very well be a loopback mount.
Try:
mount
and make sure that /source/backup/myfiles/foo is not listed.
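For example, to filter the output for the path in question (a convenience, not from the original answer):
mount | grep '/source/backup/myfiles/foo'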
You could try the following; the -a (archive) option implies -l, so symlinks are copied as symlinks, which will most probably work for you:
rsync -ravz /source/backup /destination
Note that simply re-running the original command will not help; without -l (or -a), rsync keeps skipping the symlinks:
rsync -rtvp /source/backup /destination
I personally always use this syntax in my scripts, and it works a treat to back up the entire system (skipping sys/*, proc/* and nfs4/*):
sudo rsync --delete --stats --exclude-from "$EXCLUDE" -rlptgoDv / "$TARGET"/ | tee -a "$LOG"
Here is my script run by root's cron daily:
#!/bin/bash
#
NFS="/nfs4"
HOSTNAME=$(hostname)
TIMESTAMP=$(date "+%Y%m%d_%H%M%S")
EXCLUDE="/home/gcclinux/Backups/root-rsync.excludes"
TARGET="${NFS}/${HOSTNAME}/SYS"
LOGDIR="${NFS}/${HOSTNAME}/SYS-LOG"
CMD=$(/usr/bin/stat -f -L -c %T "${NFS}")
## CHECK IF NFS IS MOUNTED...
if [[ ! $CMD == "nfs" ]]; then
    echo "NFS NOT MOUNTED"
    exit 1
fi
## CHECK IF LOG DIRECTORY EXISTS
if [ ! -d "$LOGDIR" ]; then
    /bin/mkdir -p "$LOGDIR"
fi
## CREATE LOG HEADER
LOG="$LOGDIR/rsync_result.$TIMESTAMP.txt"
echo "-------------------------------------------------------" | tee -a "$LOG"
date | tee -a "$LOG"
echo "" | tee -a "$LOG"
## START RUNNING BACKUP
/usr/bin/rsync --delete --stats --exclude-from "$EXCLUDE" -rlptgoDv / "$TARGET"/ | tee -a "$LOG"
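If you want the same daily schedule, a root crontab entry along these lines would do it (the path and time here are hypothetical):
0 2 * * * /root/bin/backup-rsync.sh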
In some cases, just copy the file to another location (such as your home directory) and try again.

bash script to rename all files in a directory?

I have a bunch of files that need to be renamed.
file1.txt needs to be renamed to file1_file1.txt
file2.avi needs to be renamed to file2_file2.avi
As you can see, I need the _ followed by the original file name.
There are a lot of these files.
So far all the answers given either:
Require some non-portable tool
Break horribly with filenames containing spaces or newlines
Are not recursive, i.e. do not descend into sub-directories
These two scripts solve all of those problems.
Bash 2.X/3.X
#!/bin/bash
while IFS= read -r -d $'\0' file; do
    dirname="${file%/*}/"
    basename="${file:${#dirname}}"
    echo mv "$file" "$dirname${basename%.*}_$basename"
done < <(find . -type f -print0)
Bash 4.X
#!/bin/bash
shopt -s globstar
for file in ./**; do
    if [[ -f "$file" ]]; then
        dirname="${file%/*}/"
        basename="${file:${#dirname}}"
        echo mv "$file" "$dirname${basename%.*}_$basename"
    fi
done
Be sure to remove the echo from whichever script you choose once you are satisfied with its output, and run it again.
Edit
Fixed problem in previous version that did not properly handle path names.
For your specific case, you want to use mmv as follows:
pax> ll
total 0
drwxr-xr-x+ 2 allachan None 0 Dec 24 09:47 .
drwxrwxrwx+ 5 allachan None 0 Dec 24 09:39 ..
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file1.txt
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file2.avi
pax> mmv '*.*' '#1_#1.#2'
pax> ll
total 0
drwxr-xr-x+ 2 allachan None 0 Dec 24 09:47 .
drwxrwxrwx+ 5 allachan None 0 Dec 24 09:39 ..
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file1_file1.txt
-rw-r--r-- 1 allachan None 0 Dec 24 09:39 file2_file2.avi
You need to be aware that the wildcard matching is not greedy. That means that the file a.b.txt will be turned into a_a.b.txt, not a.b_a.b.txt.
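If you want the greedy behaviour for a known extension, you can pin the extension in the pattern instead (a sketch assuming .txt files, not from the original answer):
mmv '*.txt' '#1_#1.txt'    # a.b.txt -> a.b_a.b.txt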
The mmv program came installed as part of my CygWin, but I had to
sudo apt-get install mmv
on my Ubuntu box to get it. If it's not in your standard distribution, whatever package manager you're using will hopefully have it available.
If, for some reason, you're not permitted to install it, you'll have to use one of the other bash for-loop-type solutions shown in the other answers. I prefer the terseness of mmv myself but you may not have the option.
for file in file*.*
do
    [ -f "$file" ] && echo mv "$file" "${file%%.*}_$file"
done
An idea for recursion:
recurse() {
    for file in "$1"/*; do
        if [ -d "$file" ]; then
            recurse "$file"
        else
            # check for relevant files here, then rename, e.g.:
            # echo mv "$file" "${file%%.*}_$file"
            :    # no-op placeholder; bash does not allow an empty else branch
        fi
    done
}
recurse /path/to/files
find . -type f | while read -r FN; do
    BFN=$(basename "$FN")
    NFN=${BFN%.*}_${BFN}
    echo "$BFN -> $NFN"
    mv "$FN" "$(dirname "$FN")/$NFN"    # keep the renamed file in its own directory
done
I like the Perl Cookbook's rename script for this. It may not be /bin/sh, but you can do regular-expression-style renames.
The /bin/sh method would be to use sed/cut/awk to alter each filename inside a for loop. If the directory is large you'd need to rely on xargs.
One should mention the mmv tool, which is especially made for this.
It's described here: http://tldp.org/LDP/GNU-Linux-Tools-Summary/html/mass-rename.html
...along with alternatives.
I use prename (Perl-based), which is included in various Linux distributions. It works with regular expressions, so to change, say, all img_x.jpg to IMAGE_x.jpg you'd do:
prename 's/img_/IMAGE_/' img*jpg
You can use the -n flag to preview changes without making any actual changes.
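For example, to preview the same substitution as above without touching any files:
prename -n 's/img_/IMAGE_/' img*jpg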
prename man entry
#!/bin/bash
# Don't do this like I did:
# files=`ls ${1}`
for file in *.*
do
    if [ -f "$file" ]; then
        newname="${file%%.*}_${file}"
        mv "$file" "$newname"
    fi
done
This one won't rename sub directories, only regular files.
