Bash command rev to reverse delimiters - linux

I am working on a shell script that converts exported Microsoft in-addr.arpa.txt files to a more useful format so that I can use them in other products later for automation purposes. Now I am facing a problem which (I'm not a programmer) I cannot solve in a simple way.
Sample script
x=123.223.224
echo "$x" | rev
gives me
422.322.321
but I want to have the output as follows:
224.223.123
Is there an easy way to do it without rev or putting each group in a variable? Or is there a sample I can use? Or am I using the wrong tools for the job?

Using awk:
x='123.223.224'
awk 'BEGIN{FS=OFS="."} {for (i=NF; i>=2; i--) printf $i OFS; print $1}' <<< "$x"
224.223.123

Use awk for this!
If your text file always contains three octets, simply use . as the separator:
echo $x | awk -F. '{ print $3 "." $2 "." $1 }'
For more complex cases, use awk's internal split():
echo $x | awk '{
    n = split($0, a, ".");
    for (i = n; i > 1; i--) {
        printf "%s.", a[i];
    }
    print a[1];
}'
In this sample, split() splits every line (available in awk as $0) using the delimiter ., saves the resulting array into a, and returns the length of that array (which is saved to n). Note that unlike C, split() array indexes start at one.
Or python:
python -c "print('.'.join(reversed('$x'.split('.'))))"

Here is my script.
#!/bin/sh
value=$1
delim=$2
total_fields=$(echo "$value" | tr -cd "$delim" | wc -c)
total_fields=$((total_fields + 1))
i=1
reverse_value=""
while [ "$total_fields" -gt 0 ]; do
    cur_value=$(echo "$value" | cut -d"${delim}" -f"${total_fields}")
    if [ "$total_fields" -ne 1 ]; then
        cur_value="$cur_value${delim}"
    fi
    #echo "$cur_value"
    reverse_value="$reverse_value$cur_value"
    #echo "$i --> $reverse_value"
    total_fields=$((total_fields - 1))
done
echo "$reverse_value"
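For example, saved as reverse.sh (the filename here is just an illustration), the script takes the value and the delimiter as its two arguments:
./reverse.sh 123.223.224 .
224.223.123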

Using a few small tools.
tr '.' '\n' <<< "$x" | tac | paste -sd.
224.223.123
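For completeness, a pure-bash sketch (my own, not from the answers above) that reads the groups into an array and prints them in reverse order:
x=123.223.224
IFS=. read -ra parts <<< "$x"         # split on the dots into an array
out=""
for (( i=${#parts[@]}-1; i>=0; i-- )); do
    out+="${parts[i]}."               # append fields from last to first
done
echo "${out%.}"                       # drop the trailing dot: 224.223.123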

Related

Difficulty creating a .txt file from a loop in bash

I have this data:
cat >data1.txt <<'EOF'
2020-01-27-06-00;/dev/hd1;100;/
2020-01-27-12-00;/dev/hd1;100;/
2020-01-27-18-00;/dev/hd1;100;/
2020-01-27-06-00;/dev/hd2;200;/usr
2020-01-27-12-00;/dev/hd2;200;/usr
2020-01-27-18-00;/dev/hd2;200;/usr
EOF
cat >data2.txt <<'EOF'
2020-02-27-06-00;/dev/hd1;120;/
2020-02-27-12-00;/dev/hd1;120;/
2020-02-27-18-00;/dev/hd1;120;/
2020-02-27-06-00;/dev/hd2;230;/usr
2020-02-27-12-00;/dev/hd2;230;/usr
2020-02-27-18-00;/dev/hd2;230;/usr
EOF
cat >data3.txt <<'EOF'
2020-03-27-06-00;/dev/hd1;130;/
2020-03-27-12-00;/dev/hd1;130;/
2020-03-27-18-00;/dev/hd1;130;/
2020-03-27-06-00;/dev/hd2;240;/usr
2020-03-27-12-00;/dev/hd2;240;/usr
2020-03-27-18-00;/dev/hd2;240;/usr
EOF
I would like to create a .txt file for each filesystem (so hd1.txt, hd2.txt, hd3.txt and hd4.txt) and put in each .txt file the sum of the values for that FS from each dataX.txt. I have some difficulty explaining in English what I want, so here is an example of the desired result.
Expected content for the output file hd1.txt:
2020-01;/dev/hd1;300;/
2020-02;/dev/hd1;360;/
2020-03;/dev/hd1;390;/
Expected content for the file hd2.txt:
2020-01;/dev/hd2;600;/usr
2020-02;/dev/hd2;690;/usr
2020-03;/dev/hd2;720;/usr
The implementation I've currently tried:
for i in $(cat *.txt | awk -F';' '{print $2}' | cut -d '/' -f3| uniq)
do
cat *.txt | grep -w $i | awk -F';' -v date="$(cat *.txt | awk -F';' '{print $1}' | cut -d'-' -f-2 | uniq )" '{sum+=$3} END {print date";"$2";"sum}' >> $i
done
But it doesn't work...
Can you show me how to do that?
Because the format seems to be constant, you can delimit the input with multiple separators and parse it easily in awk:
awk -v FS='[-;/]' '
prev != $9 {
    if (length(output)) {
        print output >> fileoutput
    }
    prev = $9
    sum = 0
}
{
    sum += $9
    output = sprintf("%s-%s;/%s/%s;%d;/%s", $1, $2, $7, $8, sum, $11)
    fileoutput = $8 ".txt"
}
END {
    print output >> fileoutput
}
' *.txt
Tested on repl, it generates:
+ cat hd1.txt
2020-01;/dev/hd1;300;/
2020-02;/dev/hd1;360;/
2020-03;/dev/hd1;390;/
+ cat hd2.txt
2020-01;/dev/hd2;600;/usr
2020-02;/dev/hd2;690;/usr
2020-03;/dev/hd2;720;/usr
Alternatively, you could use -v FS=';' and split() to split the first and second columns to extract the year and month and the hdX number.
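A sketch of that alternative (my own illustration; it assumes the field layout of the data files above, and the output order of for (k in sum) is not guaranteed):
awk -v FS=';' '
{
    split($1, d, "-")                # d[1]=year, d[2]=month
    split($2, p, "/")                # p[3]="hd1", "hd2", ...
    key = d[1] "-" d[2] ";" $2       # one group per month and device
    sum[key] += $3
    path[key] = $4
    hd[key] = p[3]
}
END {
    for (k in sum)
        print k ";" sum[k] ";" path[k] >> (hd[k] ".txt")
}
' *.txt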
If you seek a bash solution, I suggest you invert the loops: first iterate over files, then over the identifiers in the second column.
for file in *.txt; do
    prev=
    output=
    while IFS=';' read -r date dev num path; do
        hd=$(basename "$dev")
        if [[ "$hd" != "${prev:-}" ]]; then
            if ((${#output})); then
                printf "%s\n" "$output" >> "$fileoutput"
            fi
            sum=0
            prev="$hd"
        fi
        sum=$((sum + num))
        output=$(
            printf "%s;%s;%d;%s" \
                "$(cut -d'-' -f1-2 <<<"$date")" \
                "$dev" "$sum" "$path"
        )
        fileoutput="${hd}.txt"
    done < "$file"
    printf "%s\n" "$output" >> "$fileoutput"
done
You could also translate the awk almost 1:1 to bash by using IFS='-;/' in the while read loop.

bash count sequential files

I'm pretty new to bash scripting, so some of the syntax may not be optimal. Please point it out if you see anything.
I have files in a directory named sequentially.
Example: prob01_01 prob01_03 prob01_07 prob02_01 prob02_03 ....
I am trying to have the script iterate through the current directory and count how many extensions each problem has, then print the pre-extension name followed by the count.
Sample output for above would be:
prob01 3
prob02 2
This is my code:
#!/bin/bash
temp=$(mktemp)
element=''
count=0
for i in *
do
    current=${i%_*}
    if [[ $current == $element ]]
    then
        let "count+=1"
    else
        echo $element $count >> temp
        element=$current
        count=1
    fi
done
echo 'heres the temp:'
cat temp
rm 'temp'
The Problem:
Current output:
prob01 3
Desired output:
prob01 3
prob02 2
The last count isn't appended because the loop never sees a different element after it.
My Guess on possible solutions:
Have the last append occur at the end of the for loop?
Your code has 2 problems.
The first problem is not the one you asked about: you create a temporary file and store its name in $temp, but then write to a file with the literal name temp instead of using "$temp".
The second problem is that you only write results when you see a new problem name, so the last one is never printed.
Fixing only these problems will result in
results() {
    if (( count == 0 )); then
        return
    fi
    echo $element $count >> "${temp}"
}

temp=$(mktemp)
element=''
count=0
for i in prob*
do
    current=${i%_*}
    if [[ $current == $element ]]
    then
        let "count+=1"  # Better is using ((count++))
    else
        results
        element=$current
        count=1
    fi
done
results
echo 'heres the temp:'
cat "${temp}"
rm "${temp}"
You can do this without the script with:
ls prob* | cut -d"_" -f1 | sort | uniq -c
When you want the output displayed as given, you need one more step.
ls prob* | cut -d"_" -f1 | sort | uniq -c | awk '{print $2 " " $1}'
You may use a printf + awk solution:
printf '%s\n' *_* | awk -F_ '{a[$1]++} END{for (i in a) print i, a[i]}'
prob01 3
prob02 2
We use printf to print each filename that has at least one _.
We use awk with an associative array, keyed on the first _-delimited field, to count the files in each group.
I would do it like this:
$ ls | awk -F_ '{print $1}' | sort | uniq -c | awk '{print $2 " " $1}'
prob01 3
prob02 2

Hex compare in bash scripting

I am facing an issue when reading the 3rd word (a hex string) of each line in a text file and comparing it with a hex number. Can someone please help me with it?
#!/bin/bash
A=$1
cat $A | while read a; do
    a1=$(echo \""$a"\" | awk '{ print $3 }')
    #echo $a > cut -d " " -f 3
    echo $a1
    (("$a1" == 0x10F7))
    echo $?
done
But when I use the following, the comparison happens correctly:
a1=0xADCAFE
(( "$a1" == 0x10F7 ))
echo $?
Then why is it showing an issue when I read it like below?
a1=$(echo \""$a"\" | awk '{ print $3 }')
or: a1=$(echo $a | awk '{ print $3 }')
echo $a1 prints the intended hex value, but the comparison does not happen.
Running Awk inside a while read loop is an antipattern. Just do the loop in Awk; it's good at that.
awk '$3 == 4343' "$1"
If you want to compare against a string whose value is "0x10F7" then it's
awk '$3 == "0x10F7"' "$1"
If you want to match either, case insensitively etc, a regex is a good way to do that.
awk '$3 ~ /^(0x10[Ff]7|4343)$/' "$1"
Notice how the $1 in double quotes is handled by the shell, and gets replaced by a (properly quoted!) copy of the script's first command-line argument before Awk runs, while the Awk script in single quotes has its own namespace, so $3 is an Awk variable which refers to the third field in the current input line.
Either way, avoid the useless use of cat and always always always quote variables which contain file names with double quotes.
That's literal double quotes. You seem to have tried both a dangerous bare $a and a doubly double-quoted "\"$a\"" where the simple "$a" would be what you actually want.
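For reference, with that quoting (and keeping the shell loop, even though the single awk invocation above is preferable), the read loop would look something like this sketch:
while read -r a; do
    a1=$(awk '{ print $3 }' <<< "$a")   # plain "$a" via a here-string, no extra \" quotes
    if (( a1 == 0x10F7 )); then         # the arithmetic context understands the 0x prefix
        echo "match: $a"
    fi
done < "$A"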
Thank you all for your responses; now my script is working fine. I was trying to match two files, and the script below does the job:
#!/bin/bash
A=$1
B=$2
dos2unix -f "$A"
dos2unix -f "$B"
rm search_match.txt search_data_match.txt search_nomatch.txt search_data_nomatch.txt
while read line; do
    search_word=$(echo $line | awk '{ print $1 }')
    grep "$search_word" $B >> temp_file.txt
    while read var; do
        file1_hex=$(echo $line | awk '{ print $2 }')
        file2_hex=$(echo $var | awk '{ print $3 }')
        (("$file1_hex" == "$file2_hex"))
        zero=$(echo $?)
        if [ "$zero" -eq 0 ]; then
            echo $line >> search_match.txt
            echo $var >> search_data_match.txt
        else
            echo $line >> search_nomatch.txt
            echo $var >> search_data_nomatch.txt
        fi
    done < "temp_file.txt"
    rm temp_file.txt
done < "$A"
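Since running awk inside a while read loop is the antipattern mentioned above, the same matching could be sketched as a single two-pass awk program. This is my own illustration, not the accepted answer; it assumes the search word from file A appears as the first field of the matching lines in file B, that each search word occurs at most once in file A, and that GNU awk is available for strtonum():
gawk '
NR == FNR {                        # first pass: file A
    want[$1] = strtonum($2)        # hex value from field 2, keyed by the search word
    lineA[$1] = $0
    next
}
($1 in want) {                     # second pass: file B, same search word in field 1
    if (strtonum($3) == want[$1]) {
        print lineA[$1] >> "search_match.txt"
        print $0        >> "search_data_match.txt"
    } else {
        print lineA[$1] >> "search_nomatch.txt"
        print $0        >> "search_data_nomatch.txt"
    }
}
' "$A" "$B"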

How to increment version number using shell script?

I have a version number with three two-digit fields (xx:xx:xx). Can anyone please tell me how to increment it using a shell script?
Min Value
00:00:00
Max Value
99:99:99
Sample IO
10:23:56 -> 10:23:57
62:54:99 -> 62:55:00
87:99:99 -> 88:00:00
As a one-liner using awk, assuming VERSION is a variable with the version in it:
echo $VERSION | awk 'BEGIN { FS=":" } { $3++; if ($3 > 99) { $3=0; $2++; if ($2 > 99) { $2=0; $1++ } } } { printf "%02d:%02d:%02d\n", $1, $2, $3 }'
Nothing fancy (other than Bash) needed:
$ ver=87:99:99
$ echo "$ver"
87:99:99
$ printf -v ver '%06d' $((10#${ver//:}+1))
$ ver=${ver%????}:${ver: -4:2}:${ver: -2:2}
$ echo "$ver"
88:00:00
We just use the parameter expansion ${ver//:} to remove the colons: we're then left with a usual decimal number, increment it and reformat it using printf; then use some more parameter expansions to group the digits.
This assumes that ver has already been thoroughly checked (with a regex or glob).
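Such a check could be a sketch along these lines (adjust the pattern to whatever format you actually accept):
if [[ ! $ver =~ ^[0-9]{2}:[0-9]{2}:[0-9]{2}$ ]]; then
    echo "invalid version string: $ver" >&2
    exit 1
fi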
It's easy; it just needs some small math tricks and the bc command. Here is how:
#!/bin/bash
# read VERSION from $1 into VER
IFS=':' read -r -a VER <<< "$1"
# increment by 1
INCR=$(echo "ibase=10; ${VER[0]}*100*100+${VER[1]}*100+${VER[2]}+1"|bc)
# prepend zeros
INCR=$(printf "%06d" ${INCR})
# output the result
echo ${INCR:0:2}:${INCR:2:2}:${INCR:4:2}
If you need overflow checking, you can do it with a trick similar to the INCR statement.
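For example, since 99:99:99 maps to 999999, the overflow check could be a sketch like the following, placed right after the bc line (aborting is one choice; wrapping would be another):
if [ "${INCR}" -gt 999999 ]; then
    echo "overflow: $1 cannot be incremented past 99:99:99" >&2
    exit 1
fi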
This basically works, but may or may not do string padding:
IN=43:99:99
F1=`echo $IN | cut -f1 '-d:'`
F2=`echo $IN | cut -f2 '-d:'`
F3=`echo $IN | cut -f3 '-d:'`
F3=$(( F3 + 1 ))
if [ "$F3" -gt 99 ] ; then F3=00 ; F2=$(( F2 + 1 )) ; fi
if [ "$F2" -gt 99 ] ; then F2=00 ; F1=$(( F1 + 1 )) ; fi
OUT="$F1:$F2:$F3"
echo $OUT
Try this one-liner:
awk '{gsub(/:/,"");$0++;gsub(/../,"&:");sub(/:$/,"")}7'
tests:
kent$ awk '{gsub(/:/,"");$0++;gsub(/../,"&:");sub(/:$/,"")}7' <<< "22:33:99"
22:34:00
kent$ awk '{gsub(/:/,"");$0++;gsub(/../,"&:");sub(/:$/,"")}7' <<< "22:99:99"
23:00:00
kent$ awk '{gsub(/:/,"");$0++;gsub(/../,"&:");sub(/:$/,"")}7' <<< "22:99:88"
22:99:89
Note, corner cases were not tested.

Parsing a CSV string in Shell Script and writing it to a File

I am not a Linux scripting expert and I have exhausted my knowledge on this matter. Here is my situation.
I have a list of states passed as a command-line argument to a shell script (e.g. "AL,AK,AS,AZ,AR,CA..."). The shell script needs to extract each of the state codes and write them to a file (states.txt), with each state on its own line. See below:
AL
AK
AS
AZ
AR
CA
..
..
How can this be achieved using a Linux shell script?
Thanks in advance.
Use tr:
echo "AL,AK,AS,AZ,AR,CA" | tr ',' '\n' > states.txt
echo "AL,AK,AS,AZ,AR,CA" | awk -F, '{for (i = 1; i <= NF; i++) print $i}';
Naive solution:
echo "AL,AK,AS,AZ,AR,CA" | sed 's/,/\n/g'
I think awk is the simplest solution, but you could try using cut in a loop.
Sample script (outputs to stdout, but you can just redirect it):
#!/bin/bash
# Check for input
if (( ${#1} == 0 )); then
echo No input data supplied
exit
fi
# Initialise first input
i=$1
# While $i still contains commas
while { echo $i| grep , > /dev/null; }; do
# Get first item of $i
j=`echo $i | cut -d ',' -f '1'`
# Shift off the first item of $i
i=`echo $i | cut --complement -d ',' -f '1'`
echo $j
done
# Display the last item
echo $i
Then you can just run it as ./script.sh "AL,AK,AS,AZ,AR,CA" > states.txt (assuming you save it as script.sh in the local directory and give it execute permission)
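Another pure-bash possibility (my own sketch, not one of the answers above): read the comma-separated argument into an array and print one element per line:
# "$1" is the comma-separated list passed to the script
IFS=',' read -ra codes <<< "$1"
printf '%s\n' "${codes[@]}" > states.txt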
