Why does a bash script add single quotes on variable expansion? - linux

I am trying to use a bash script to add a resolution through xrandr and I keep getting an error. Here is my script:
#!/bin/bash
out=`cvt 1500 800`
out=`echo $out | sed 's/\(.*\)MHz\(.*\)/\2/g'`
input=`echo $out | sed 's/Modeline//g'`
#echo $input
xrandr --newmode $input
input2=`echo $out | cut -d\" -f2`
#echo $input2
xrandr --addmode VNC-0 $input2
Running it with bash -x:
input=' "1504x800_60.00" 98.00 1504 1584 1736 1968 800 803 813 831 -hsync +vsync'
+ xrandr --newmode '"1504x800_60.00"' 98.00 1504 1584 1736 1968 800 803 813 831 -hsync +vsync
If you look at the last line, it for some reason adds a single quote ' at the start (before ") and after ". Why?

Single quotes are added by bash -x when printing debug output.
They don't affect your variable's actual value:
out=`cvt 1500 800`
echo $out
# 1504x800 59.92 Hz (CVT) hsync: 49.80 kHz; pclk: 98.00 MHz Modeline "1504x800_60.00" 98.00 1504 1584 1736 1968 800 803 813 831 -hsync +vsync
echo $input
"1504x800_60.00" 98.00 1504 1584 1736 1968 800 803 813 831 -hsync +vsync 98.00 1504 1584 1736 1968 800 803 813 831 -hsync +vsync
What actually happens is that quotes inside a variable's value aren't parsed when the variable is substituted.
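You can check this with a small stand-alone experiment (illustrative only; the value is shortened from the cvt output above):
v='"1504x800_60.00" 98.00'
printf '<%s>\n' $v    # word splitting happens, but the quote characters stay literal
# <"1504x800_60.00">
# <98.00>
So xrandr receives "1504x800_60.00", quote characters included, as its first argument, which is not the same as an unquoted mode name.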
The best way to do this sort of thing is to use an array instead of a plain text variable:
xrandr_opts=()                          # declare an empty array
input=`echo $out | sed 's/Modeline//g'`
read -a xrandr_opts <<< $input          # split $input into the array
xrandr --newmode "${xrandr_opts[@]}"
As for your specific case, the following change will do the trick:
#!/bin/bash
out=`cvt 1500 800`
out=`echo $out | sed 's/\(.*\)MHz\(.*\)/\2/g'`
input=`echo $out | sed 's/Modeline//g'`
#echo $input
#xrandr --verbose --newmode $input
xrandr_opts=()                              # declare an empty array
input=`echo $input | sed 's/\"//g'`         # strip the double quotes
read -a xrandr_opts <<< $input              # split $input into the array
opts_size=`echo ${#xrandr_opts[@]}`
xrandr --newmode `printf \'\"%s\"\' ${xrandr_opts[0]}` ${xrandr_opts[@]:1:$opts_size}
input2=`echo $out | cut -d\" -f2`
#echo $input2
xrandr --verbose --addmode VNC-0 $input2
It looks like xrandr --newmode won't accept double quotes. I can't say exactly what the reason is, but at least the script works :)
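For reference, here is a shorter variant of the same idea (a sketch only, not tested against a live X server) that strips the quotes up front and lets a single array carry all the arguments:
#!/bin/bash
# Parse the Modeline that cvt prints, drop the double quotes around the
# mode name, and hand everything to xrandr as separate words.
modeline=$(cvt 1500 800 | sed -n 's/^Modeline //p' | tr -d '"')
read -a mode <<< "$modeline"    # mode[0] is the name, the rest are the timings
xrandr --newmode "${mode[@]}"
xrandr --addmode VNC-0 "${mode[0]}"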

Related

ffmpeg messes up variables [duplicate]

I am trying to split audio files by their chapters. I have downloaded this as audio with yt-dlp with its chapters on. I have tried this very simple script to do the job:
#!/bin/sh
ffmpeg -loglevel 0 -i "$1" -f ffmetadata meta # take the metadata and output it to the file meta
cat meta | grep "END" | awk -F"=" '{print $2}' | awk -F"007000000" '{print $1}' > ends #
cat meta | grep "title=" | awk -F"=" '{print $2}' | cut -c4- > titles
from="0"
count=1
while IFS= read -r to; do
    title=$(head -$count titles | tail -1)
    ffmpeg -loglevel 0 -i "$1" -ss $from -to $to -c copy "$title".webm
    echo $from $to
    count=$(( $count+1 ))
    from=$to
done < ends
You see that I echo out $from and $to because I noticed they are just wrong. Why is this? When I comment out the ffmpeg command in the while loop, the variables $from and $to turn out to be correct, but when it is uncommented they just become some stupid numbers.
Commented output:
0 465
465 770
770 890
890 1208
1208 1554
1554 1793
1793 2249
2249 2681
2681 2952
2952 3493
3493 3797
3797 3998
3998 4246
4246 4585
4585 5235
5235 5375
5375 5796
5796 6368
6368 6696
6696 6961
Uncommented output:
0 465
465 70
70 890
890 08
08 1554
1554 3
3 2249
2249
2952
2952 3493
3493
3998
3998 4246
4246 5235
5235 796
796 6368
6368
I tried lots of other stuff thinking it might be the problem, but nothing changed anything. One thing I remember is that I tried having $from and $to in the form %H:%M:%S, which, again, gave the same result.
Thanks in advance.
Here is an untested refactoring; hopefully it can at least help steer you in another direction.
Avoid temporary files.
Avoid reading the second input file repeatedly inside the loop.
Refactor the complex Awk scripts into a single script.
To be on the safe side, add a redirection from /dev/null to prevent ffmpeg from eating the input data.
#!/bin/sh
from=0
ffmpeg -loglevel 0 -i "$1" -f ffmetadata - |
awk -F '=' '/END/    { s=$2; sub(/007000000.*/, "", s); end[++i] = s }
            /title=/ { t=$2; sub(/^.../, "", t); title[++j] = t }
            END      { for (n=1; n<=i; n++) { print end[n]; print title[n] } }' |
while IFS="" read -r end; do
    IFS="" read -r title
    ffmpeg -loglevel 0 -i "$1" -ss "$from" -to "$end" -c copy "$title".webm </dev/null
    from="$end"
done
The Awk script reads all the data into memory, and then prints one "end" marker followed by the corresponding title on the next line; I can't be sure what your ffmpeg -f ffmetadata command outputs, so I just blindly refactored what your scripts seemed to be doing. If the output is somewhat structured you can probably read one record at a time.
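If you would rather keep your original loop, the smallest change that should stop the numbers from getting mangled is to cut ffmpeg off from the loop's stdin, either with a redirection or with ffmpeg's -nostdin option; roughly (untested):
while IFS= read -r to; do
    title=$(head -$count titles | tail -1)
    # </dev/null keeps ffmpeg from swallowing the 'ends' data that the
    # while loop is reading; ffmpeg -nostdin has the same effect.
    ffmpeg -loglevel 0 -i "$1" -ss "$from" -to "$to" -c copy "$title".webm </dev/null
    count=$(( count + 1 ))
    from=$to
done < ends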

Reading from file bash Linux

I am having a hard time with the following bash script:
Basically, the script receives a directory and then searches all of the folders in that directory for files that end with .log. After that it should print to stdout all the lines from those files, sorted by the date they were written.
My script is this:
#!/bin/bash
find . -name ".*log" | cat *.log | sort --stable --reverse --key=2,3
When I run the script it does return the list, but the sort doesn't work properly. My guess is that this is because some of the files contain \n characters, which start a new line.
Is there a way to ignore the \n inside a record while still having each record printed on its own line?
Thank you!
xxd command output:
ise#ise-virtual-machine:~$ xxd /home/ise/Downloads/f1.log
00000000: 3230 3139 2d30 382d 3232 5431 333a 3333 2019-08-22T13:33
00000010: 3a34 342e 3132 3334 3536 3738 3920 4865 :44.123456789 He
00000020: 6c6c 6f0a 576f 726c 640a 0032 3032 302d llo.World..2020-
00000030: 3031 2d30 3154 3131 3a32 323a 3333 2e31 01-01T11:22:33.1
00000040: 3233 3435 3637 3839 206c 6174 650a 23456789 late.
ise#ise-virtual-machine:~$ xxd /home/ise/Downloads/f2.log
00000000: 3230 3139 2d30 392d 3434 5431 333a 3434 2019-09-44T13:44
00000010: 3a32 312e 3938 3736 3534 3332 3120 5369 :21.987654321 Si
00000020: 6d70 6c65 206c 696e 650a mple line.
ise#ise-virtual-machine:~$ xxd /home/ise/Downloads/f3.log
00000000: 3230 3139 2d30 382d 3232 5431 333a 3333 2019-08-22T13:33
00000010: 3a34 342e 3132 3334 3536 3738 3920 4865 :44.123456789 He
00000020: 6c6c 6f0a 576f 726c 6420 320a 0032 3032 llo.World 2..202
00000030: 302d 3031 2d30 3154 3131 3a32 323a 3333 0-01-01T11:22:33
00000040: 2e31 3233 3435 3637 3839 206c 6174 6520 .123456789 late
00000050: 320a 2.
Given that the entries in the log file are terminated with \0 (NUL), find, sed and sort can be combined:
find . -name '*.log' | xargs sed -z 's/\n//g' | sort -z --key=2,3 --reverse
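If the effect of sed -z is not obvious: it makes sed treat each NUL-terminated chunk as a single "line", so the s/\n//g only removes newlines embedded inside a record. A rough stand-alone demo (the two records below are mocked up from the f1.log dump above):
{ printf '2019-08-22T13:33:44.123456789 Hello\nWorld\n\0'
  printf '2020-01-01T11:22:33.123456789 late\n\0'
} | sed -z 's/\n//g' | sort -z | tr '\0' '\n'
# 2019-08-22T13:33:44.123456789 HelloWorld
# 2020-01-01T11:22:33.123456789 late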
Assuming each record in the file starts with the date, the --key=2,3 option is not necessary; please try:
find . -name "*.log" -exec cat '{}' \; | sort -z | xargs -I{} -0 echo "{}"
The final xargs .. echo .. command is necessary to print the null-terminated lines properly.
If you still require the --key option, please modify the code as you like; I'm not aware of what the lines look like as of now.
[UPDATE]
According to the information provided by the OP, I assume the format of the log files is:
Each record starts with the date in "yyyy-mm-ddTHH:MM:SS.nanosec" format, so a simple dictionary-order sort can be applied.
Each record ends with "\n\0", except for the last record of the file, which ends with just "\n".
Each record may contain newline character(s) in the middle, as part of the record, for line-folding purposes.
Then how about:
find . -name "*.log" -type f -exec cat "{}" \; -exec echo -ne "\0" \; | sort -z
echo -ne "\0" appends a null character to the last record of a file.
Otherwise the record will be merged to the next record of another file.
The -z option to sort treats the null character as a record separator.
No other option to sort will be required so far.
Result with the posted input by the OP:
2019-08-22T13:33:44.123456789 Hello
World
2019-08-22T13:33:44.123456789 Hello
World 2
2019-09-44T13:44:21.987654321 Simple line
2020-01-01T11:22:33.123456789 late
2020-01-01T11:22:33.123456789 late 2
It still keeps the null character "\0" at the end of each record.
If you want to trim it off, please add the tr -d "\0" command
at the end of the pipeline as:
find . -name "*.log" -type f -exec cat "{}" \; -exec echo -ne "\0" \; | sort -z | tr -d "\0"
Hope this helps.

how to combine a top command with a date column

Good day,
I need to add a column header "TIME" that displays the current time on each new line of output, every time the following code is executed:
top -b -n 1 -p 984 -o +PID -o +VIRT | sed -n '7,12p' | awk '{printf "%1s %-4s\n",$1,$5}'
Output I'm looking for:
TIME PID VIRT
12:00:00 984 1024
12:16:01 984 995
12:44:29 984 1008
(The values are only for display, not accurate.)
It should also run in an endless loop with a 10-second interval until the user stops it.
Everything is executed from PIDandVIRT.sh
(a Linux script).
Thank you for the help in advance
I would recommend using the ps command instead of top:
echo "TIME PID VSIZE"
while true ; do
echo "$(date +%H:%I:%S) $(ps -p 984 -o pid,vsize --no-headers)"
sleep 1
done
Set an awk variable to the result of the date command:
awk -v time=$(date '+%H:%M:%S') '{printf "%s %1s %-4s\n", time, $1, $5}'
To run it in a loop, use while:
while :; do
    top -b -n 1 -p 984 -o +PID -o +VIRT | sed -n '7,12p' | awk -v time=$(date '+%H:%M:%S') '{printf "%s %1s %-4s\n", time, $1, $5}'
    sleep 10
done
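Putting the two answers together into something close to what PIDandVIRT.sh seems to need (a sketch; adjust the sed range and the awk columns to match your top layout):
#!/bin/bash
# Print the header once, then sample PID 984 every 10 seconds until interrupted.
echo "TIME PID VIRT"
while :; do
    top -b -n 1 -p 984 -o +PID -o +VIRT | sed -n '7,12p' \
        | awk -v time="$(date '+%H:%M:%S')" '{printf "%s %1s %-4s\n", time, $1, $5}'
    sleep 10
done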

Bash Script - getting input from standard input or a file

I have a bash script that prints columns by name taken from the command line. It works well if I give the script the file as one of the arguments. It does not work well if I pipe input to the script and use /dev/stdin as the file. Does anyone know how I can modify the script to accept standard input from a pipe correctly? Here is my script.
#!/bin/bash
insep=" "
outsep=" "
while [[ ${#} > 0 ]]
do
    option="$1"
    if [ -f $option ] || [ $option = /dev/stdin ];
    then
        break;
    fi
    case $option in
        -s|--in_separator)
            insep="$2"
            shift # past argument
            shift # past argument
            ;;
        -o|--out_separator)
            outsep="$2"
            shift # past argument
            shift # past argument
            ;;
        *)
            echo "unknown option $option"
            exit 1;
            ;;
    esac
done
headers="${#:2}"
grep_headers=$(echo "${headers[#]}" | sed 's/ /|/g')
file=$1
columns=$(awk -F: 'NR==FNR{b[($2)]=tolower($1);next}{print $1,b[$1]}' \
<(head -1 $file | sed "s/$insep/\n/g" | egrep -iwn "$grep_headers" | awk '{s=tolower($0);print s}') \
<(awk -F: -v header="$headers" 'BEGIN {n=split(tolower(header),a," ");for(i=1;i<=n;i++) print a[i]}' $file ) \
| awk '{print "$"$2}' ORS='OFS' | sed "s/OFS\$//")
awk -v insep="$insep" -v outsep="$outsep" "BEGIN{FS=insep;OFS=outsep}{print $columns}" $file
exit;
Sample Input:
col_1 col_2 col_3 col_4 col_5 col_6 col_7 col_8 col_9 col_10
10000 10010 10020 10030 10040 10050 10060 10070 10080 10090
10001 10011 10021 10031 10041 10051 10061 10071 10081 10091
10002 10012 10022 10032 10042 10052 10062 10072 10082 10092
10003 10013 10023 10033 10043 10053 10063 10073 10083 10093
10004 10014 10024 10034 10044 10054 10064 10074 10084 10094
10005 10015 10025 10035 10045 10055 10065 10075 10085 10095
10006 10016 10026 10036 10046 10056 10066 10076 10086 10096
10007 10017 10027 10037 10047 10057 10067 10077 10087 10097
10008 10018 10028 10038 10048 10058 10068 10078 10088 10098
Running with file as an argument (works as expected):
> ./shell_scripts/print_columns.sh file1.txt col_1 col_4 col_6 col_2 | head
col_1 col_4 col_6 col_2
10000 10030 10050 10010
10001 10031 10051 10011
10002 10032 10052 10012
10003 10033 10053 10013
Piping from standard in (does not work as expected):
> head file1.txt | ./shell_scripts/print_columns.sh /dev/stdin col_1 col_4 col_6 col_2 | head
0185 10215 10195
10136 10166 10186 10146
10137 10167 10187 10147
10138 10168 10188 10148
10139 10169 10189 10149
An example:
script.sh:
#!/bin/bash
if [[ -f "$1" ]]; then
    file="$1"
    cat "$file"
    shift
else
    while read -r file; do echo "$file"; done
fi
echo "${#}"
Test with:
./script.sh file1.txt abc 123 456
and with UUOC:
cat file1.txt | ./script.sh abc 123 456
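As for why the original print_columns.sh misbehaves on a pipe (my reading of it, so treat this as an assumption): the script opens $file three times: head -1 "$file" to find the header, the awk that builds the column list, and the final awk that prints. A regular file can be reopened for each pass, but /dev/stdin fed from a pipe can only be consumed once, so the later passes only see whatever the first one left behind. One way to adapt the script is to buffer piped input into a temporary file right after file=$1, for example:
if [ "$file" = /dev/stdin ] || [ "$file" = "-" ]; then
    tmp=$(mktemp) || exit 1
    trap 'rm -f "$tmp"' EXIT
    cat "$file" > "$tmp"   # drain the pipe once
    file=$tmp              # every later read now hits the regular temp file
fi
After that change, both invocation styles should behave the same way.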

confused: swap, free and /proc/pid/smaps show different results

I just ran into a swap problem, so I tried to find out which process was using swap, with the script (getswap.sh) shown in this. It was php-fpm: about 200 subprocesses, each taking about 1 MB of swap space. So I killed php-fpm. Then I ran the script again, and the total swap used had decreased a lot. However, the figure in free -m only decreased by about 3 MB. What is the problem?
before killing php-fpm:
[root@eng /tmp]# bash getswap.sh | sort -n -k5 > out
[root@eng /tmp]# cat out | awk '{a+=$5;}END{print a;}'
202076
[root@eng /tmp]# free -m
             total       used       free     shared    buffers     cached
Mem:         64259      60566       3692          0        192      17098
-/+ buffers/cache:      43275      20983
Swap:         4095        155       3940
after killing php-fpm:
[root@eng /tmp]# bash getswap.sh | sort -n -k5 > out
[root@eng /tmp]# cat out | awk '{a+=$5;}END{print a;}'
108456
[root@eng /tmp]# free -m
             total       used       free     shared    buffers     cached
Mem:         64259      60402       3857          0        192      17043
-/+ buffers/cache:      43166      21092
Swap:         4095        152       3943
and the script:
#!/bin/bash
function getswap {
    SUM=0
    OVERALL=0
    for DIR in `find /proc/ -maxdepth 1 -type d | egrep "^/proc/[0-9]"` ; do
        PID=`echo $DIR | cut -d / -f 3`
        PROGNAME=`ps -p $PID -o comm --no-headers`
        for SWAP in `grep Swap $DIR/smaps 2>/dev/null | awk '{ print $2 }'`
        do
            let SUM=$SUM+$SWAP
        done
        echo "PID=$PID - Swap used: $SUM - ($PROGNAME )"
        let OVERALL=$OVERALL+$SUM
        SUM=0
    done
    echo "Overall swap used: $OVERALL"
}
getswap
Thanks in advance.

Resources