Bash script file size - linux

I have got a script like this:
#!/bin/sh
cd /home/gamesimport/
ls -t games*.xml | tail -n +2 | xargs rm
mv games*.xml games_ok.xml
It just deletes the old games*.xml files and renames the latest games.xml file, but I would also like to change the name if the games.xml file is larger than 1 MB. How would I do that?

FILESIZE=$(stat -c%s games_ok.xml)
MAX=1048576
if [ "$FILESIZE" -ge "$MAX" ]; then
    # do something else
fi
That should work.
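Folding that check into the original script, a minimal sketch (assuming GNU stat, and that after the cleanup exactly one games*.xml file is left; games_big.xml is just an illustrative name):
#!/bin/sh
cd /home/gamesimport/ || exit 1
# keep only the newest games*.xml
ls -t games*.xml | tail -n +2 | xargs rm
FILESIZE=$(stat -c%s games*.xml)
MAX=1048576
if [ "$FILESIZE" -ge "$MAX" ]; then
    mv games*.xml games_big.xml   # hypothetical name for the oversized case
else
    mv games*.xml games_ok.xml
fi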

Simply, use find:
find some/where -name games\*.xml -size +1M -exec mv {} {}.big \;
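Note that expanding {} inside a longer word ({}.big) works with GNU find but is unspecified by POSIX; a portable sketch passes the name to a small shell instead:
find some/where -name 'games*.xml' -size +1M -exec sh -c 'mv "$1" "$1.big"' _ {} \;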


Need guidance with a bash script to check log files in a certain directory for a certain string

I would like to preface this by saying I am a complete noob with scripting. So I have a situation where I need to manually look for a phone number that could live in one of hundreds of files.
The logs live in the following directory:
/actlogs/sbclogger_archive
The log files are in directories numbered 01-31 inside that directory, and all the files are zipped.
Inside those numbered directories are tons of files, but the only ones I want to search are "sipd.logthenthedate.gz" and "sipmsg.logthenthedate.gz".
So I need to look in all the files in the following directory:
"/actlogs/sbclogger_archive"
which has 31 directories labeled "01-31".
Then in each 01-31 there are hundreds of files; the only ones I want to look at are "sipd.logthenthedate.gz" and "sipmsg.logthenthedate.gz".
The script I am using is below; please let me know what I could do to make this work.
#!/bin/bash
read -p "Enter a phone number: " text
read -p "Enter directory of log file's, Hint it should be /actlogs/sbclogger_archive: " directory
#arr=( $(find $directory -type f -exec grep -l "$text" {} \; | sort -r) )
#find $directory -type f -exec grep -qe "$text" {} \; -exec bash -c '
file=$(find $directory -type f -name 'sipd.log*' -exec grep -qe "$text" {} \; -exec bash -c 'select f; do echo $f; break; done' find-sh {} +;)
if [ -z "$file" ]; then
echo "No matches found."
else
echo "select tool:"
tools=("nano" "less" "vim" "quit")
select tool in "${tools[@]}"
do
case $tool in
"quit")
break
;;
*)
$tool $file
break
;;
esac
done
fi
This would give you the list of files matching:
find \( -name 'sipd.log[0-9]*.gz' -o -name 'sipmsg.log[0-9]*.gz' \) \
-exec sh -c 'gunzip -c {}| grep -m1 -q 888333' \; -print
./18/sipd.log20200118.gz
./7/sipd.log20200107.gz
Note: -m1 tells grep to stop after the first match; since you only need the file name in this case, that's enough.
If you have zgrep, you can shorten it to:
find \( -name 'sipd.log[0-9]*.gz' -o -name 'sipmsg.log[0-9]*.gz' \) \
-exec zgrep -l '888333' {} \;
./18/sipd.log20200118.gz
./7/sipd.log20200107.gz
Also, some of the tools you are suggesting do not support gzip files (nano, and some variants of less, for example), in which case you might need to decompress the file and compress it again when done.
And you might want to consider a loop if you want to "quit"; feeding the whole file list to the tool doesn't make sense.
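A minimal sketch of that decompress/edit/recompress round-trip, using a file name from the listing above:
f=./18/sipd.log20200118.gz
gunzip "$f"          # leaves ./18/sipd.log20200118
nano "${f%.gz}"      # view or edit the uncompressed file
gzip "${f%.gz}"      # compress it again when done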
Note: AFAIK zgrep doesn't do recursive. From its man page:
DESCRIPTION
    zgrep invokes grep on compressed or gzipped files. These grep options will cause zgrep to terminate with an error code: (-[drRzZ]|--di*|--exc*|--inc*|--rec*|--nu*). All other options specified are passed directly to grep. If no file is specified, then the standard input is decompressed if necessary and fed to grep. Otherwise the given files are uncompressed if necessary and fed to grep.
So zgrep -rl "$text" "$directory" or zgrep -rl --include 'sipd.log*.gz' "$text" {01..31} won't work, unless you have a special zgrep.
As you must unzip before using your tool, I would divide the problem into two parts.
First, I would expand the paths you need (looking under <directory> for the phone <text>), and then iterate to apply the tool (because some tools like vim or nano cannot be piped).
Try something like this:
#!/bin/bash
#...
# text/directory input stuff
#...
tmpdir=$(mktemp -d)
trap 'rm -rf ${tmpdir}' EXIT
while IFS= read -r file; do
unzipped=${tmpdir}/$(basename "${file}" .gz)
gunzip -c "${file}" > "${unzipped}"
${tool} "${unzipped}"
done < <(zgrep -lw "${text}" "${directory}"/{01..31}/{sipd.logthenthedate.gz,sipmsg.logthenthedate.gz} 2>/dev/null)
Above is the inverted form proposed by Charles Duffy, following this Bash FAQ.
If you prefer to iterate an array, you could build in this way:
# shellcheck disable=SC2207
files=( $(zgrep -lw "${text}" "${directory}"/{01..31}/{sipd.logthenthedate.gz,sipmsg.logthenthedate.gz} 2>/dev/null) )
for file in "${files[@]}"; do
# etc.
since in our particular case the files to match have no spaces in their names, the shellcheck warning is not so important (hence hidden above).

Create duplicate file and rename it

I want duplicates of the files with a different name.
I am currently trying out these commands before putting them into my bash script.
$ set dir = /somewhere/states
$ find $dir -name "total.txt" -type f | xargs ls -1
/somewhere/states/florida/fixed.fl_Asite_ttl/somewhere/total.txt
/somewhere/states/hawaii/fixed.hi_Bsite_ttl/somewhere/total.txt
/somewhere/states/kentucky/fixed.ky_Asite_ttl/somewhere/total.txt
/somewhere/states/michigan/fixed.mi_Csite_ttl/somewhere/total.txt
/somewhere/states/texas/fixed.tx_Vsite_ttl/somewhere/total.txt
I know I can rename a file using something like this, but it isn't exactly what I want:
$ find $dir -name "total.txt" -exec sh -c 'cp {} `dirname {}`/`basename {} `why.xls' \;
/somewhere/states/florida/fixed.fl_Asite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/hawaii/fixed.hi_Bsite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/kentucky/fixed.ky_Asite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/michigan/fixed.mi_Csite_ttl/somewhere/total.txtwhy.xls
/somewhere/states/texas/fixed.tx_Vsite_ttl/somewhere/total.txtwhy.xls
May I know how to copy the files and have the new files in the same dir?
Below are examples.
I want to name the new files after everything behind "fixed." and before "/somewhere", changing the file extension as well:
/somewhere/states/florida/fixed.fl_Asite_ttl/somewhere/fl_Asite_ttl.xls
/somewhere/states/hawaii/fixed.hi_Bsite_ttl/somewhere/hi_Bsite_ttl.xls
/somewhere/states/kentucky/fixed.ky_Asite_ttl/somewhere/ky_Asite_ttl.xls
/somewhere/states/michigan/fixed.mi_Csite_ttl/somewhere/mi_Csite_ttl.xls
/somewhere/states/texas/fixed.tx_Vsite_ttl/somewhere/tx_Vsite_ttl.xls
Update:
/somewhere/states/florida_fixed_ttl/fixed.fl_Asite_ttl/somewhere/total.txt
Probably not the most elegant, but this should work:
find . -name total.txt | while IFS= read -r F ; do
    [[ $F =~ fixed\.[^/]* ]]
    N=$(echo "$BASH_REMATCH" | sed 's/fixed\.//')
    echo "cp $F $(dirname "$F")/$N.xls"
done
If you are happy with the output, just remove the last echo, i.e. change this:
echo "cp $F $(dirname "$F")/$N.xls"
to this:
cp "$F" "$(dirname "$F")/$N.xls"
Note: if the .txt and .xls contents will always remain the same, you can use ln instead of cp -- one file, two names.
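A pure-bash variant without sed is also possible, assuming the layout shown above (total.txt always sits in a somewhere/ directory directly below the fixed.* directory):
find "$dir" -name total.txt -type f | while IFS= read -r f; do
    parent=${f%/*}            # .../fixed.fl_Asite_ttl/somewhere
    tag=${parent%/*}          # .../fixed.fl_Asite_ttl
    tag=${tag##*/fixed.}      # fl_Asite_ttl
    cp "$f" "$parent/$tag.xls"
done
Because ##*/fixed. strips the longest prefix ending in /fixed., this also handles the update path, where the state directory itself contains _fixed_.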

In Shell, How to move the contents of a dir to another dir in same level

/tmp/test$ ls
dir1 dir2 dirM
/tmp/test$ mv * dirM/
mv: cannot move `dirM' to a subdirectory of itself, `dirM/dirM'
Is there a way to move everything (except dirM) to dirM using shell script?
You don't say what shell you are using. In POSIX-compliant sh, you can loop over the contents of the directory, skipping dirM:
for d in ./*/; do
    [ ! -d "$d" ] || [ "$d" = "./dirM/" ] && continue
    mv "$d" dirM
done
In bash, you can use extended patterns to match all but dirM:
shopt -s extglob nullglob
for d in !(dirM)/; do
    mv "$d" dirM
done
In either case, if you simply want to hide the error, use
mv * dirM 2> /dev/null
This isn't recommended, as it will discard any error that might occur.
It can be done like this:
find . -mindepth 1 -maxdepth 1 -type d -not -name 'dirM' -exec mv {} dirM \;
-mindepth 1 -maxdepth 1 -type d -not -name 'dirM'   # match only the top-level directories except dirM
-exec mv {} dirM \;   # move each of them into dirM
Since my old answer was voted down, here is another. Much better than all the others.
mv !(dirM) dirM
Doesn't parse ls.
Tested under version 4.1-1
!(pattern-list)
Matches anything except one of the given patterns
https://www.gnu.org/software/bash/manual/html_node/Pattern-Matching.html#Pattern-Matching
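Note that the !(pattern-list) form is only recognized when the extglob shell option is enabled, so you may need to turn it on first:
shopt -s extglob
mv !(dirM) dirM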
How about:
mv $(ls -d * | grep -v '^dirM$') dirM
or, with the older backquote syntax:
mv `ls -d * | grep -v '^dirM$'` dirM
Note that this parses ls, so it will break on names containing whitespace.

shell script or linux command to recursively find all js/css file under public folder

I need help with a shell script or Linux command to recursively find all js/css files under the public folder, then create filename.min.jsm in the same directory as each file found, and also write the filename.js inside that filename.min.jsm.
For example
public/test/a.js
public/b.js
public/test2/test3/c.js
output:
public/test/a.js
public/test/a.min.jsm -> a.js is written inside of this file
public/b.js
public/b.min.jsm -> b.js is written inside of this file
public/test2/test3/c.js
public/test2/test3/c.min.jsm -> c.js is written inside of this file
Here is a simple refactoring of @choroba's answer which inlines the shell script, so you don't need a separate file.
find \( -name '*.js' -o -name '*.css' \) -exec sh -c '
filename=$1
path=${filename%/*}          # directory part
basename=${filename##*/}     # file name without the directory
prefix=${basename%%.*}       # file name up to the first dot
echo "$basename" > "$path/$prefix".min.jsm
' _ {} \;
I added -o -name '*.css' and a set of parentheses for grouping the conditions, too. If you don't want to run this on CSS files, revert that change, or add a conditional to the embedded shell script snippet.
Create the following script:
#!/bin/bash
filename=$1
path=${filename%/*}
basename=${filename##*/}
prefix=${basename%%.*}
echo "$basename" > "$path/$prefix".min.jsm
Then, run
find -name '*.js' -exec /path/to/script.sh {} \;
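Remember to make the script executable first, or find will fail with a permission error:
chmod +x /path/to/script.sh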
It's not clear what you want to do with the css files, but that's left as an exercise for the reader.
You can try the following script:
#!/bin/bash
export MY_TMPDIR=$(mktemp -d)
trap 'rm -rf ${MY_TMPDIR}' EXIT
export fstyle_tmp="${MY_TMPDIR}/fstyle"
find . -regextype posix-extended -regex '.+\.(js|css)' > "${fstyle_tmp}"
while IFS= read -r line; do
line2=$(echo "${line}" | sed -e 's/\.js$//' -e 's/\.css$//')
cp "${line}" "${line2}.min.jsm"
done < "${fstyle_tmp}"
exit 0
#EOF
If you don't want to copy the CSS content, change:
line2=$(echo "${line}" | sed -e 's/\.js$//' -e 's/\.css$//')
cp "${line}" "${line2}.min.jsm"
to:
line2=$(echo "${line}" | sed -e 's/\.js$//' -e 's/\.css$//')
if [[ "${line}" =~ \.js$ ]]; then
cp "${line}" "${line2}.min.jsm"
else
touch "${line2}.min.jsm"
fi

Unzip files in Linux using bash cause errors when path contains spaces

File path like this:
path/path/path/File name 2.3.pdf.zip
What am I doing wrong:
# unzip files back to normal
# and remove zip files
for f in `find "$1" -type f -iname "*.zip"`; do
dir=`dirname "$f"`
unzip -o "$f" -d "$dir"
rm -f "$f"
done
Error message:
unzip: cannot find or open file, file.zip, or file.ZIP
Using UnZip 5.52
Red Hat Enterprise Linux Server release 5.10 (Tikanga)
I think your loop is splitting up the output of find on the spaces. You may want to read one line at a time, like this:
find "$1" -type f -iname "*.zip" | while IFS= read -r f
do
dir=`dirname "$f"`
unzip -o "$f" -d "$dir"
rm -f "$f"
done
Or, alternatively, you could set IFS (note the $'...' syntax: a plain '\n' would set IFS to the two literal characters \ and n):
IFS=$'\n'
for f in `find "$1" -type f -iname "*.zip"`; do
dir=`dirname "$f"`
unzip -o "$f" -d "$dir"
rm -f "$f"
done
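If file names might also contain newlines, a null-delimited loop is safer; a sketch using GNU find's -print0:
find "$1" -type f -iname "*.zip" -print0 | while IFS= read -r -d '' f
do
    dir=$(dirname "$f")
    unzip -o "$f" -d "$dir"
    rm -f "$f"
done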
Perhaps your problem is here
dir=`dirname $f`
Try instead
dir=`dirname "$f"`
Also, you have some unnecessary semicolons at the end of a few of your lines.
