Writing to the same file with awk and truncate - linux

My system is Arch Linux and my window manager is DWM. I use dash as my shell interpreter.
I have written this shell script as an extension for my timer:
xev -root |
awk -F'[ )]+' '
    /^KeyPress/ { a[NR+2] }
    NR in a {
        if ($8 == "Return") {
            exit 0;
        } else if ($8 == "BackSpace") {
            system("truncate -s-1 timer.txt");
        } else if (length($8) == 1) {
            printf "%s", $8;
            fflush(stdout);
        }
        system("pkill -RTMIN+3 dwmblocks");
    }' | tee timer.txt
The timer itself sits in the dwmblocks status bar. I want to name my timers first and then let them start. But I don't think that's that important.
The purpose of this script: I want to type characters into the root window of DWM and have them appear in my status bar instantly. So xev produces the keypress information, then awk takes that information, extracts the exact key from everything xev outputs, and checks it. If the key is "Return", awk exits (job done). If the key is "BackSpace", awk calls truncate via system(). If it's a regular character key, awk prints it and tee writes it to timer.txt (I could use "> timer.txt" too, I think, but I want to see the output in my terminal for debugging).
After every relevant keypress (single character) I fflush stdout. After all of that I finally call pkill so that dwmblocks knows it should update (dwmblocks runs cat on the file).
Okay, "Return" and character input works fine. But there's a problem with "BackSpace". I've read about it a bit (I'd say I'm still a Unix newbie even though I've been using Linux for two years now) and I found out that writing to the same file from different processes is bad news. Still. Could it be done somehow? The fact is that truncate only writes to the file when awk, doesn't, so, maybe, it wouldn't be that big of a deal?
This exact script worked yesterday, but now it doesn't. At first I tried using sed instead of truncate; truncate seemed to let me delete characters from timer.txt, but now truncate doesn't seem to work anymore either. Well, it kind of works: I can input my characters and then I can delete them. BUT. After pressing BackSpace I cannot enter any more characters, and if I try to enter a character, BackSpace stops working too.
So yeah, I have several questions. First: what the hell is the problem? As I've said, it used to work and now it doesn't. Am I wandering into undefined behavior in this script?
Second: could this be done at all, meaning, could I somehow write to and delete from the same file? Maybe with some other tool, not awk?
Thanks in advance.

This probably isn't an answer, but it's too much to put in a comment. I don't know the details of most of the tools you mention, nor do I really understand what it is you're trying to do, but:
A shell is a tool to manipulate files and processes and to schedule calls to other tools. Awk is a tool to manipulate text. You're trying to use awk like a shell: you have it sequencing calls to truncate and pkill, and calling system() to spawn a subshell each time you want to execute either of them. What you should be doing, for example, is just:
shell { truncate }
but what you're actually doing is:
shell { awk { system { shell { truncate } } } }
Can you take that role away from awk and give it back to your shell? It should make your overall script simpler, conceptually at least, and probably more robust.
Maybe try something like this (untested):
#!/usr/bin/env bash

while IFS= read -r str; do
    case $str in
        Return    ) exit 0 ;;
        BackSpace ) truncate -s-1 timer.txt ;;
        ?         ) printf "%s" "$str" | tee -a timer.txt ;;
    esac
    pkill -RTMIN+3 dwmblocks
done < <(
    xev -root |
    awk -F'[ )]+' '/^KeyPress/{a[NR+2]} NR in a{print $8; fflush()}'
)
I moved the write to timer.txt inside the loop to make sure tee's not trying to write to it while you're truncating it - that may not be necessary.


Is it possible to make a list of disks in bash?

I'm a beginner and not a native English speaker, so please excuse my clumsiness.
I'm trying to make a Linux install script for personal use (and to learn more about Linux and bash scripting), but I'm struggling to find a way to create a disk selection menu.
I wish to make a list which would look like this:
NAME     SIZE     DEVICES
sda      256gib   intel-ssdx
sdb      1000gib  TLxxxxxxxx
nvme0n1  128gib   WDxxxxxxxx
So far I've tried echoing fdisk -l and lsblk into a text file and using cat to display it.
Code :
lsblk
DiskLayout=("Automatic Install" "Manual Install" "Check pending change" "Quit")
select DiskLayoutopt in "${DiskLayout[@]}"
do
    case $DiskLayoutopt in
        "Automatic Install")
            read -p "Select drive" Sdsk
            ;;
        "Manual Install")
            parted -a optimal
            ;;
        "Check pending change")
            echo ""
            ;;
        "Quit")
            exit 1
            ;;
        *) echo "invalid option $REPLY";;
    esac
done
The following code will get your menu:
#!/usr/bin/env bash

disk=()
size=()
name=()
while IFS= read -r -d $'\0' device; do
    device=${device/\/dev\//}
    disk+=($device)
    name+=("`cat "/sys/class/block/$device/device/model"`")
    size+=("`cat "/sys/class/block/$device/size"`")
done < <(find "/dev/" -regex '/dev/sd[a-z]\|/dev/vd[a-z]\|/dev/hd[a-z]' -print0)

for i in `seq 0 $((${#disk[@]}-1))`; do
    echo -e "${disk[$i]}\t${name[$i]}\t${size[$i]}"
done
This is some tough bash scripting... Hope you'll learn quickly.
Here's some help:
The first line is a shebang, which tells your system which interpreter is needed for that script. Indeed, this script only works with bash.
Try running it with bash myscript.sh on systems where it doesn't work (e.g. BSD).
variable=() declares an array.
Adding something to that array is done with variable+=("my value").
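For example, in an interactive bash session:

fruits=()
fruits+=("apple")
fruits+=("banana split")    # the quotes keep this as a single element
echo "${#fruits[@]}"        # number of elements: 2
echo "${fruits[1]}"         # second element: banana split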
The while loop reads the variable device from what it gets from the find command:
while read device; do
    something
done < <(find)
The find command uses a regular expression that matches anything like /dev/sdX where X goes from a to z, or anything like /dev/vdX, or anything like /dev/hdX (where X still goes from a to z).
The 'or' operator is a pipe |, which has to be escaped with a backslash, hence \|.
The devices read by the while loop look like '/dev/sda', so we need to strip '/dev/' out of them using the following:
device=${device/\/dev\//}
This is a bash substitution which works the following way:
variable="my foo function"
echo ${variable/foo/bar}
This outputs my bar function.
Indeed, we still need to escape / since it is the separator character for the substitution, so it becomes \/.
Getting the disk name via
"`cat "/sys/class/block/$device/device/model"`"
cat "/sys/class/block/sda/device/model" gives the disk model.
In order to get the result into a variable, we need to wrap the command in backticks (`), e.g.:
myvar=`cat /var/file`
Last but not least, the for loop part:
for i in `seq 0 $((${#disk[@]}-1))`; do
    echo -e "${disk[$i]}\t${name[$i]}\t${size[$i]}"
done
${#disk[@]} is the number of elements in the array disk.
Actually, ${#var} is the length of var which, for a plain string, is the number of characters. ${var[@]} means all elements of an array.
seq 0 X returns the sequence of numbers from 0 to X, which is used to construct the for loop.
Using echo -e translates escape sequences into literal characters. In our case, '\t' becomes a tab.
Finally, ${disk[$i]} is the disk array's value at index $i, where $i is an integer.
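Putting those pieces together on a throwaway array:

arr=("a" "b" "c")
echo "${#arr[@]}"                       # 3 elements
for i in `seq 0 $((${#arr[@]}-1))`; do
    echo -e "index $i\t->\t${arr[$i]}"  # tab-separated columns
done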
By the way, bash is quite limited for these tasks, but really fun to learn in the first place.
Harder tasks might be better accomplished in a higher-level scripting language like Python. Anyway, have fun learning bash; it's a life saver in a sysadmin's career.

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one; I looked the issue up on Google but can't manage to find an answer that's useful to me. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments a script requires, I created an "args.conf" file that must be in every module, and it kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines whether it's required or not, and the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line and ask the user for a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user for a value for every argument... But:
1) The read command seems not to execute. It just gets skipped, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module just launches (and crashes because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
    ...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
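In that form the loop would look something like this (same idea, with the redirection moved onto read itself):

while read line <&3; do
    # ... process "$line"; prompts inside the loop can still read from stdin ...
done 3< "modules/$name/args.conf"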
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
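A quick demonstration of the kind of breakage unquoted expansions cause:

var="two words"
printf '<%s>\n' $var       # split into two arguments: <two> <words>
printf '<%s>\n' "$var"     # one argument: <two words>

pattern="*.conf"
echo $pattern              # may expand to a list of matching filenames
echo "$pattern"            # always prints: *.conf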
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
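One common way to pin that down (a sketch; it assumes the script itself lives in the framework's root directory):

# Resolve the directory containing this script, as an absolute path
module_root="$(cd "$(dirname "$0")" && pwd)" || {
    echo "can't locate my own directory" >&2
    exit 1
}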
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}

if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
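For example, nesting alone is a good reason to switch:

inner=`echo \`hostname\``     # backticks only nest with escaping
inner=$(echo $(hostname))     # $() nests naturally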
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer")    # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
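To see why the quotes matter there:

args=("one" "two words" "three")
printf '<%s>\n' "${args[@]}"    # three arguments: <one> <two words> <three>
printf '<%s>\n' ${args[@]}      # four arguments, due to word splitting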
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.

Using AWK and setting results to bash variables/arrays?

I have a file that replicates the results of the show processlist command from MySQL.
The file looks like this:
*************************** 1. row ***************************
Id: 1
User: system user
Host:
db: NULL
Command: Connect
Time: 1030455
State: Waiting for master to send event
Info: NULL
*************************** 2. row ***************************
Id: 2
User: system user
Host:
db: NULL
Command: Connect
Time: 1004
State: Has read all relay log; waiting for the slave
I/O thread to update it
Info: NULL
And it keeps going on for a few more times in the same structure.
I want to use AWK to get only these parameters: Time, Id, Command and State, and store each of these parameters in a different variable or array so that I can later use / print them in my bash shell.
The problem is, I am pretty bad with AWK. I don't know how to both separate the parameters I want from the file and also set them as bash variables or arrays.
Many thanks in advance for the help!
EDIT: Here is my code so far:
echo "Enter age"
read age
cat data | awk 'BEGIN{ RS="row"
FS="\n"
OFS="\n"}
{ print $2,$7}
' | awk 'BEGIN{ RS="Id"}
{if ($4 > $age){print $2}}'
The file 'data' contains blocks like the ones I pasted above. The code should, if the 'age' entered is smaller than the Time parameter in the data file (which is $4 in my awk code), return the Id parameter, but it returns nothing.
If I remove the if statement and print $4 instead of $2, this is my output:
Enter age
1
1030455
1004
2144
2086
0
So I was thinking maybe that blank line is somehow messing up my AWK print? Is there a simple way to ignore that blank line while keeping my other data?
This is how you'd use awk to produce the values you want as a set of tab-separated fields on each line per "row" block from the input:
$ cat tst.awk
BEGIN {
    RS = "[*]+ [[:digit:]]+[.] row [*]+\n"
    FS = "\n"
    OFS = "\t"
}
NR>1 {
    sub(/\n$/,"")        # remove the trailing newline
    gsub(/\n\s+/," ")    # compress all multi-line fields into single lines
    gsub(OFS," ")        # ensure the only OFS in the output IS between fields
    delete n2v
    for (i=1; i<=NF; i++) {
        name  = gensub(/:.*/,"",1,$i)
        value = gensub(/^[^:]+:\s+/,"",1,$i)
        n2v[name] = value
    }
    if (n2v["Time"]+0 > age) {    # force a numeric comparison
        print n2v["Time"], n2v["Id"], n2v["Command"], n2v["State"]
    }
}
$ awk -v age=2000 -f tst.awk file
1030455 1       Connect Waiting for master to send event
If the target age is already stored in a shell variable just init the awk variable from the shell variable of the same name:
$ age="2000"
$ awk -v age="$age" -f tst.awk file
The above uses GNU awk for multi-char RS (which you already had), gensub(), \s, and delete array.
When you say "and store every one of these parameters into a different variable or array", that could mean one of several things, so I'll leave that part up to you, but you might be looking for something like:
arr=( $(awk '...') )
or
awk '...' |
while IFS=$'\t' read -r Time Id Command State
do
    <do something with those 4 vars>
done
but by far the most likely situation is that you don't want to use shell at all but instead just stay inside awk.
Remember - every time you write a loop in shell just to manipulate text you have the wrong approach. UNIX shell is an environment from which to call UNIX tools and the UNIX tool for general text manipulation is awk.
Until you edit your question to tell us more about your problem though, we can't guess what the right solution is from this point on.
At the first level you have your shell, which you use to run any other child process. It's impossible to modify the parent's environment from within a child process. When you run your bash script file (which has the +x permission), it's spawned as a new process (a child). It can set its own environment, but when it ends its life you'll get back to the original (the parent).
You can set some variables in bash and export them to its environment; they will be inherited by its children. However, it can't be done in the opposite direction (a parent can't inherit from its child).
If you wish to execute some commands from the script file in the current bash's context, you can source the script file: source ./your_script.sh or . ./your_script.sh will do that for you.
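A quick demonstration (child.sh here is a hypothetical one-line script):

$ echo 'FOO=hello' > child.sh
$ bash child.sh; echo "${FOO:-unset}"    # runs in a child process
unset
$ . ./child.sh; echo "${FOO:-unset}"     # sourced into the current shell
hello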
If you need to run awk to filter some data for you and keep the results in bash, be careful with the tempting:
awk ... | read foo
read is a shell builtin rather than an external process (check type read, help read, man bash to verify that by yourself), but in bash every element of a pipeline runs in a subshell, so foo is set only in that subshell and is gone once the pipeline finishes. In bash 4.2+ you can work around this with shopt -s lastpipe, or just use command substitution instead.
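A minimal sketch of the lastpipe variant (run as a script, since lastpipe only takes effect when job control is off):

#!/usr/bin/env bash
shopt -s lastpipe
awk 'BEGIN { print "filtered data" }' | read -r foo
echo "$foo"    # prints: filtered data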
or:
foo=`awk ....`
There are many other constructs you can use. Whatever bash script you write, please compare your code against the Bash Pitfalls webpage.

Bash run a function in background

I have a relatively simple question here. I need to run a function in the background in bash. Normally I would do it just like so:
FUNCTION &
but things are a bit more complicated than that. I have the following line that runs the main function for each record in a text database. I can't really edit this code all that much without vastly changing the rest of the entire project, but I'm still open to new ideas.
cat databases/$WAN | grep -v \# | while read LINE; do MAIN; done
I want to spawn a new terminal in the background for each record to do a sort of parallel processing, making things go much faster. MAIN takes a minute to process each record. This, however, does not work:
cat databases/$WAN | grep -v \# | while read LINE; do MAIN &; done
Any suggestions?
* UPDATE *
Thanks for all the responses. Let me see if I can answer some of those questions.
gniourf_gniourf - Yes, I know using cat like this is wrong. This was early, critical code, so I have not updated it yet. I now read into the while loop for most things I do. I will fix it eventually. You may be right about the syntax. When I break it up like so, things seem to work now:
cat databases/$WAN | grep -v \# | while read LINE
do
    MAIN & > /dev/null 2>&1
done
So that fixes the background problem. I wonder what was messed up in my single-line syntax. Thanks.
chepner - I don't believe LINE is a variable. I could be wrong, though; some things about bash still confuse me. Maybe it is a variable that the entire record from the database gets stored in prior to processing.
Bruce K - Waiting is exactly what I was trying to avoid. If I let it run in the same terminal one at a time, it will slowly process each record in order. If I push each record to a separate terminal for processing, all records will be processed simultaneously (at least in our eyes). The additional overhead is intentional, in order to speed up the loop through the database.
Radix - Yes you're right. I'll read up on that. Thanks for the link.
This worked for me:
$ function testt(){ echo "lineee is <$lineee>";}
$ grep 5432 /etc/services|while read lineee;do testt&done
lineee is <postgres 5432/udp # POSTGRES>
lineee is <postgres 5432/tcp # POSTGRES>
If, for some reason, your MAIN function is not seeing a LINE variable, you can try:
"export" the LINE variable beforehand:
$ export LINE
$ # do your thing
Or, pass the line read as an argument to the function:
$ function testt(){ LINE="$1"; echo "LINE is <$LINE>";}
$ grep 5432 /etc/services|while read LINE;do testt "$LINE"&done

How to do multiple if, else statements in this case in bash script?

Right now I am running a few programs with bash scripts (with Cygwin).
Basically, what I am doing is: after the program starts, a loop runs that checks that the program is still running.
I was doing:
while true
do
    if [ "$(ps -W | grep -w name | gawk '{print $8,$9}' | gawk -F \\ '{print $4}')" == 'program' ]; then
        sleep 1
    else
        "start program" (whatever is needed here)
    fi
done
But I started to realize that having such a script running multiple times just causes unnecessary use of system resources.
I tried doing an if, then, elif chain, but it never gets past the first if.
I need it to go: "alright, the first if is negative, try the next, go to the end, start over".
Here is a copy of my script; I forgot to say I am using Cygwin, but that really doesn't change anything, since it seems to still use normal bash scripting, just maybe different paths to start files: http://pastebin.com/s8ZdPQMn (and yes, the h. is not there; I just can't seem to edit the pastebin).
My overall plan is to check that the first SRPro is running, check the next, etc., only triggering if one is detected as not running.
EDIT: I solved it. Not exactly sure why, but in my original single-file-per-program version, having gawk print $4 at the end gave me what I wanted, yet for some reason, when doing it this way, it turned into $5. So changing $4 to $5 made the script work.
EDIT: One really strange issue, though: it will work for minutes on end, then all of a sudden get confused at times and start 7 copies of one program or something. It can also be random about which one it starts.
You might find the wait command (try help wait from a bash prompt) useful. It's unclear exactly what you want, but as an example, here's a basic respawn function:
$ respawn () {
>     while true
>     do
>         "${@}" &
>         wait ${!}
>         echo "respawning ..."
>     done
> }
$ respawn some_program arg1 arg2 etc
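Since the question mentions watching several programs at once, you could background one respawn per program (an untested sketch; prog1/prog2 stand in for your actual commands):

respawn prog1 arg1 &
respawn prog2 &
wait    # block here while both watchdogs keep running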
