How to get the directory name and file name of a bash script in bash? - linux

The following is already known; perhaps it helps:
Get the filename.extension including the full path:
Script: /path1/path2/path3/path4/path5/bashfile.sh
#!/bin/bash
echo "$0"
read -r
Output:
/path1/path2/path3/path4/path5/bashfile.sh
Get filename.extension:
Script: /path/path/path/path/path/bashfile.sh
#!/bin/bash
echo "${0##*/}"
read -r
Output:
bashfile.sh
Question:
How to get the directory name and file name of a bash script in bash?
Script: `/path1/path2/path3/path4/path5/bashfile.sh`
Wanted output:
/path5/bashfile.sh
Remark:
Perhaps it is possible, looking from the right side, to remove everything to the left of the pattern "/*/".

A bit shorter than the first working solution:
Script: /path1/path2/path3/path4/path5/bashfile.sh
#!/bin/bash
n=$(($(echo "$0" | tr -dc "/" | wc -m) + 1))    # number of "/"-separated fields
echo "/$(echo "$0" | cut -d"/" -f$((n-1)),$n)"  # keep the last two fields
read -r
Output:
/path5/bashfile.sh
Perhaps there is a shorter solution.

readlink -f "$0" | awk -F"/" '{print "/"$(NF-1)"/"$NF}'
# or
awk -F"/" '{print "/"$(NF-1)"/"$NF}' <(readlink -f "$0")
# or
awk -F"/" '{print "/"$(NF-1)"/"$NF}' <<<"$(readlink -f "$0")"
# or
sed -E 's/^(.*)(\/\w+\/\w+\.\w+$)/\2/' <(readlink -f "$0")
Output:
/path5/bashfile.sh
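(Note that the sed variant only matches when the last two path components consist of word characters, since \w is [A-Za-z0-9_]; paths containing dashes or spaces are printed unchanged.)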

#!/bin/bash
echo "/$(basename "$(dirname "$0")")/$(basename "$0")"
echo
echo
read -r
Output:
/Dirname/Filename.Extension
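
For completeness, a variant that stays in pure parameter expansion once the path is absolute; a minimal sketch, assuming GNU readlink for the -f option:
#!/bin/bash
full=$(readlink -f "$0")            # absolute path to the script
parent=${full%/*}                   # strip the file name
echo "/${parent##*/}/${full##*/}"   # last directory plus file name
read -r
Output:
/path5/bashfile.sh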

Related

Function to search for multiple patterns using grep

I want to make a bash script that uses grep to search for lines matching multiple patterns (case-insensitively). I want to be able to use it as follows:
myscript file.txt pattern1 pattern2 pattern3
and it should get translated to:
grep -i --color=always pattern1 file.txt | grep -i pattern2 | grep -i pattern3
I tried the following bash script, but it is not working:
#!/bin/bash
grep -i --color=always $2 $1 | grep -i $3 | grep -i $4 | grep -i $5 | grep -i $6 | grep -i $7
The error is:
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
I think you can use a recursive function:
search() {
    if [ $# -gt 0 ]; then
        local pat=$1
        shift
        grep -i "$pat" | search "$@"
    else
        cat
    fi
}
In your script you would call this function and pass the search patterns as arguments. Say $1 is the file and the rest of the arguments are the patterns; then you would do:
file=$1
shift
cat "$file" | search "$@"
When you have GNU awk, you can use
awk 'BEGIN {IGNORECASE=1} /pattern1/ && /pattern2/ && /pattern3/' file.txt
EDIT:
You can use this in a script like this:
inputfile="$1"
shift
awk -f <(echo "BEGIN {IGNORECASE=1}"; printf " /%s/ &&" "$@" | sed 's/&&$//') "${inputfile}"
As for the error output: if you omit one or more arguments at the end, so that $3 etc. are missing, some of the grep commands receive no pattern argument at all and print the usage message shown above.
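A loop-based script avoids that problem, because grep is only invoked for patterns that were actually supplied. A minimal sketch, following the question's calling convention (myscript file.txt pattern1 pattern2 ...):
#!/bin/bash
file=$1
shift
out=$(grep -i --color=always -- "$1" "$file")   # first pattern searches the file
shift
for pat in "$@"; do
    out=$(grep -i -- "$pat" <<<"$out")          # each further pattern filters the result
done
printf '%s\n' "$out"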

Concatenating xargs with the use of if-else in bash

I've got two test files, namely ttt.txt and ttt2.txt, the content of which is shown below:
#ttt.txt
(132) 123-2131
543-732-3123
238-3102-312
#ttt2.txt
1
2
3
I've already tried the following commands in bash and they work fine:
if grep -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" ttt.txt; then echo "found"; fi
# with output 'found'
if grep -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" ttt2.txt ; then echo "found"; fi
But when I combine the above command with xargs, it complains with the error -bash: syntax error near unexpected token `then'. Could anyone give me an explanation? Thanks in advance!
ll | awk '{print $9}' | grep ttt | xargs -I $ if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" $; then echo "found"; fi
$ is a special character in bash (it marks variables), so don't use it as your xargs marker; you'll only get confused.
The real problem here though is that you are passing if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" $ as the argument to xargs, and then the remainder of the line is being treated as a new command, because it breaks at the ;.
You can wrap the whole thing in a sub-invocation of bash, so that xargs sees the whole command:
$ ll | awk '{print $9}' | grep ttt | xargs -I xx bash -c 'if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" xx; then echo "found"; fi'
found
Finally, ll | awk '{print $9}' | grep ttt is a needlessly complicated way of listing the files that you're looking for. You actually don't need any of the code above; just do this:
$ if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" ttt*; then echo "found"; fi
found
Alternatively, if you want to process each file in turn (which you don't need here, but you might want when this gets more complicated):
for file in ttt*
do
if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" "$file"
then
echo "found"
fi
done
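As a side note, a slightly safer form of the xargs invocation above passes the file name to bash as a positional parameter instead of splicing it into the command string; a sketch kept close to the original:
ll | awk '{print $9}' | grep ttt | xargs -I xx bash -c 'if grep --quiet -oE "(\(\d{3}\)[ ]?\d{3}-\d{4})|(\d{3}-\d{3}-\d{4})" "$1"; then echo "found"; fi' _ xx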

dynamically run linux shell commands

I have a command that should be executed by a shell script.
Actually the exact command does not matter; the important parts are the subsequent command execution and the correct escaping of the critical sections.
The command that is usually executed in PuTTY looks something like this (maybe with some additional flags for ls):
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
but now I have a batch of such commands, so I would like to execute them in a loop, like:
for i in {0..100}
do
    str=str$i
    ${!str}
done
where str is:
str0="rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`"
str1="rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`"
and that gives me a lot of headaches, because the execution via ${!str} breaks the quoting and the inline command substitution between the `...` marks.
my_rm() { rm -r `ls /test/$1 | awk ... | grep ...`; }

for i in `whatever`; do
    my_rm "$i"
done
Getting this right is surprisingly tricky, but it can be done:
for i in $(seq 0 100)
do
    str=str$i
    eval "eval \"\$$str\""
done
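Here the outer eval expands \$$str to $str0 (or $str1, ...), so what actually runs is eval "$str0", and that inner eval executes the command text stored in the variable, including its backquoted command substitution.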
You can also do:
for i in {0..10}
do
    <whatevercommand>
done
It's actually simpler to place them in arrays and use glob patterns:
#!/bin/bash
shopt -s nullglob
DIRS=("/test/parse_first/" "/test/parse_second/")
for D in "${DIRS[@]}"; do
    for T in "$D"/*trash*; do
        rm -r -- "$T"
    done
done
And since rm accepts multiple arguments, you don't need the extra loop:
for D in "${DIRS[@]}"; do
    rm -r -- "$D"/*trash*
done
UPDATE:
#!/bin/bash
readarray -t COMMANDS <<'EOF'
rm -r `ls /test/parse_first/ | awk '{print $2}' | grep trash`
rm -r `ls /test/parse_second/ | awk '{print $2}' | grep trash`
EOF
for C in "${COMMANDS[@]}"; do
    eval "$C"
done
Or you could just read commands from another file:
readarray -t COMMANDS < somefile.txt
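As with any use of eval, make sure the command strings come from a trusted source, since every line is executed with full shell semantics.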

command not working as expected if run via /bin/sh -c

I have to concatenate a set of files. Directory structure is like this:
root/features/xxx/multiple_files... -> root/xxx/single_file
What I have written (and it works fine):
for dirname in $(ls -d root/features/*|awk -F/ '{print $NF}');do;mkdir root/${dirname};cat root/features/${dirname}/* > root/${dirname}/final.txt;done
But when I run the same thing via the sh shell:
/bin/sh -c "for dirname in $(ls -d root/features/*|awk -F/ '{print $NF}');do;mkdir root/${dirname};cat root/features/${dirname}/* > root/${dirname}/final.txt;done"
it gives me errors:
/bin/sh: -c: line 1: syntax error near unexpected token `201201000'
/bin/sh: -c: line 1: `201201000'
My process always prepends /bin/sh -c before running any command. Any suggestions as to what might be going wrong here? Any alternate ways? I have spent a really long time on this without making much headway!
EDIT:
`ls -d root/features/* | awk -F/ '{print $NF}'` returns:
201201
201201000
201201001
201201002
201201003
201201004
201201005
201201006
201201007
201202000
201205000
201206000
201207000
201207001
201207002
Always use sh -c 'cmd1 | cmd2' with single quotes.
Always use sh -eu -xv -c 'cmd1 | cmd2' to debug.
Always use bash -c 'cmd1 | cmd2' if your code is Bash-specific (cf. process substitution, ...).
Remove ; after do in for ... ; do; mkdir ....
Escape possible single quotes within single quotes like so: ' --> '\''.
(And sometimes just formatting your code clarifies a lot.)
Applied to your command, this should look somewhat like this:
# test version
/bin/sh -c '
for dirname in $(ls -d /* | awk -F/ '\''{print $NF}'\''); do
printf "%s\n" "mkdir root/${dirname}";
printf "%s\n" "cat root/features/${dirname}/* > root/${dirname}/final.txt";
echo
done
' | nl
# test version using 'printf' instead of 'ls'
bash -c '
printf "%s\000" /*/ | while IFS="" read -r -d "" file; do
dirname="$(basename "$file")"
printf "%s\n" "mkdir root/${dirname}";
printf "%s\n" "cat root/features/${dirname}/* > root/${dirname}/final.txt";
echo
done
' | nl
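(The read -r -d "" construct used above is Bash-specific, which is why the second test version invokes bash -c rather than plain sh -c.)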
I got this to run in the little test environment I set up on my box. It turns out it didn't like the double quotes. The issue I ran into was the quoting around the awk statement: if you wrap it in double quotes, it prints the whole thing. I used cut to get the desired result, but my guess is you'll have to change the -f argument to 3 instead of 2, I think.
/bin/sh -c 'for dirname in $(ls -d sh_test/* | awk -F/ '\''{print $NF}'\''); do mkdir sh_test_root/${dirname}; cat sh_test/${dirname}/* > sh_test_root/${dirname}/final.txt;done'
Edit: tested the edit proposed by nadu and it works fine. The above reflects that change.
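For completeness, a variant that avoids parsing ls output altogether and needs no nested quote escaping; a sketch, assuming the root/features layout from the question:
/bin/sh -c '
for d in root/features/*/; do
    name=$(basename "$d")
    mkdir -p "root/$name"
    cat "$d"* > "root/$name/final.txt"
done
'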

Shell file size in Linux

How can I get the size of a file into a variable?
ls -l | grep testing.txt | cut -f6 -d' '
gave the size, but how can I store it in a shell variable?
filesize=$(stat -c '%s' testing.txt)
You can do it this way with ls (check the man page for the meaning of -s)
var=$(ls -s1 testing.txt | awk '{print $1}')
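(Note that -s reports allocated blocks rather than bytes, so for sparse or very small files the result can differ from stat -c '%s'.)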
Or you can use stat with -c '%s'.
Or you can use find (GNU):
var=$(find testing.txt -printf "%s")
size() {
    file="$1"
    if [ -b "$file" ]; then
        /sbin/blockdev --getsize64 "$file"
    else
        wc -c < "$file"   # handles pseudo files like /proc/cpuinfo
        # stat --format %s "$file"
        # find "$file" -printf '%s\n'
        # du -b "$file" | cut -f1
    fi
}
fs=$(size testing.txt)
size=`ls -l | grep testing.txt | cut -f6 -d' '`
You can get the file size in bytes with the command wc, which is fairly common on Linux systems since it's part of GNU coreutils:
wc -c < file
In a Bash script you can read it into a variable like this:
FILESIZE=$(wc -c < file)
From man wc:
-c, --bytes
print the byte counts
a=`stat -c '%s' testing.txt`
echo "$a"
