Bash run a function in background - linux

Have a relatively simple question here. I need to run a function in the background in bash. Normally I would do it just like so:
FUNCTION &
but things are a bit more complicated than that. I have the following line that runs the main function for each record in a text database. I can't really edit this code all that much without vastly changing the rest of the project, but I'm still open to new ideas:
cat databases/$WAN | grep -v \# | while read LINE; do MAIN; done
I want to spawn a new terminal in the background for each record to do a sort of parallel processing, making things go much faster; MAIN takes a minute to process each record. This, however, does not work:
cat databases/$WAN | grep -v \# | while read LINE; do MAIN &; done
Any suggestions?
* UPDATE *
Thanks for all the responses. Let me see if I can answer some of those questions.
gniourf_gniourf - Yes I know using cat like this is wrong. This was early on, and critical code, so I have not updated it yet. I now read into the while loop for most things I do. I will fix it eventually. You may be right about syntax. When I break it up like so, things seem to work now:
cat databases/$WAN | grep -v \# | while read LINE
do
MAIN & > /dev/null 2>&1
done
So that fixes the background problem. I wonder what was messed up in my single line syntax. Thanks
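Most likely the one-liner was failing because & already terminates a command in bash, so the ; immediately after it is a syntax error; dropping the semicolon makes the single-line form valid too, for example:
cat databases/$WAN | grep -v \# | while read LINE; do MAIN & done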
chepner - I don't believe LINE is a variable. I could be wrong, though; some things about Bash still confuse me. Maybe it is, and it's the variable that the entire record from the database gets stored in prior to processing.
Bruce K - Waiting is exactly what I was trying to avoid. If I let it run in the same terminal one at a time, it will slowly process each record in order. If I push each record to a separate terminal for processing, all records will be processed simultaneously (at least in our eyes). The additional overhead is intentional, in order to speed up how quickly the loop through the database completes.
Radix - Yes you're right. I'll read up on that. Thanks for the link.

This worked for me:
$ function testt(){ echo "lineee is <$lineee>";}
$ grep 5432 /etc/services|while read lineee;do testt&done
lineee is <postgres 5432/udp # POSTGRES>
lineee is <postgres 5432/tcp # POSTGRES>
If, for some reason, your MAIN function is not seeing a LINE variable, you can try:
"export" the LINE variable beforehand:
$ export LINE
$ # do your thing
Or, pass the line read as an argument to the function:
$ function testt(){ LINE="$1"; echo "LINE is <$LINE>";}
$ grep 5432 /etc/services|while read LINE;do testt "$LINE"&done
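If the rest of the script needs to wait for all the backgrounded MAIN calls to finish, the wait has to happen inside the same subshell that spawned them (the while loop on the right-hand side of a pipe runs in a subshell); a minimal sketch along the lines of the original pipeline:
cat databases/$WAN | grep -v \# | { while read LINE; do MAIN & done; wait; }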


Writing to the same file with awk and truncate

My system is Arch Linux and my window manager is DWM. I use dash as my shell interpreter.
I have written this extension shell script for my timer.
xev -root |
awk -F'[ )]+' '/^KeyPress/ { a[NR+2] }
    NR in a {
        if ($8 == "Return") {
            exit 0;
        } else if ($8 == "BackSpace") {
            system("truncate -s-1 timer.txt");
        } else if (length($8) == 1) {
            printf "%s", $8;
            fflush(stdout);
        }
        system("pkill -RTMIN+3 dwmblocks");
    }' | tee timer.txt
The timer itself sits in the dwmblocks status bar. I want to name my timers first and then let the timer start. But I don't think that's that important.
The purpose of this script: I want to input characters into the root window of DWM and have them appear in my status bar instantly. So, xev produces the key-press information, then awk takes that information, finds the exact key (from all the information that xev outputs) and checks it. If the key is "Return", awk exits (job done). If the key is "BackSpace", awk calls truncate via system(). If it's a regular character key, then awk outputs it to timer.txt with tee (I could use "> timer.txt" too, I think, but I want to see the output in my terminal for debugging).
After every relevant keypress (single character) I fflush stdout. After all of that I finally call pkill so that dwmblocks knows it should update (dwmblocks runs cat on the file).
Okay, "Return" and character input work fine. But there's a problem with "BackSpace". I've read about it a bit (I'd say I'm still a Unix newbie even though I've been using Linux for two years now) and I found out that writing to the same file from different processes is bad news. Still, could it be done somehow? The fact is that truncate only writes to the file when awk doesn't, so maybe it wouldn't be that big of a deal?
This exact script worked earlier yesterday, but now it doesn't. At first I tried using sed instead of truncate; truncate seemed to let me delete characters from timer.txt, but now truncate doesn't seem to work anymore either. Well, it kinda works: I can input my characters and then I can delete them. BUT. After pressing BackSpace I cannot enter any more characters, and if I try to enter a character, BackSpace stops working too.
So yeah. I'd have several questions. First - what the hell is the problem? As I've said, it used to work and now it doesn't. Am I wandering into undefined behavior in this script?
Second - could this be done, meaning, could I somehow write to and delete from the same file? Maybe with some other tool, not awk?
Thanks in advance.
This probably isn't an answer but it's too much to go in a comment. I don't know the details of most of the tools you mention, nor do I really understand what it is you're trying to do but:
A shell is a tool to manipulate files and processes and schedule calls to other tools. Awk is a tool to manipulate text. You're trying to use awk like a shell - you have it sequencing calls to truncate and pkill and calling system to spawn a subshell each time you want to execute either of them. What you should be doing, for example, is just:
shell { truncate }
but what you're actually doing is:
shell { awk { system { shell { truncate } } } }
Can you take that role away from awk and give it back to your shell? It should make your overall script simpler, conceptually at least, and probably more robust.
Maybe try something like this (untested):
#!/usr/bin/env bash

while IFS= read -r str; do
    case $str in
        Return ) exit 0 ;;
        BackSpace ) truncate -s-1 timer.txt ;;
        ? ) printf "%s" "$str" | tee -a timer.txt ;;
    esac
    pkill -RTMIN+3 dwmblocks
done < <(
    xev -root |
    awk -F'[ )]+' '/^KeyPress/{a[NR+2]} NR in a{print $8; fflush()}'
)
I moved the write to timer.txt inside the loop to make sure tee's not trying to write to it while you're truncating it; that may not be necessary.

How do you append a string built with interpolation of vars and STDIN to a file?

Can someone fix this for me?
It should copy a version log file to a backup after moving to a repo directory.
Then it should automatically append a line, given as input, to the log file with some formatting.
That's it.
Assume existence of log file and test directory.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG |
VHENTRY="- **${LOGDATE}** | ${VHMSG}"
cat ${VHENTRY} >> versionlog.MD
shell output
virufac@box:~/Git/test$ ~/.logvh.sh
MSG > testing script
EOF
EOL]
EOL
e
E
CTRL+C to get out of being stuck reading lines of input
virufac@box:~/Git/test$ cat versionlog.MD
directly outputs the markdown
# Version Log
## version 0.0.1 established 01-22-2020
*Working Towards Working Mission 1 Demo in 0.1 *
- **01-22-2020** | discovered faker.Faker and deprecated old namelessgen
EOF
EOL]
EOL
e
E
I finally got it to save the damned input lines to the file instead of just echoing the command I wanted to enter on the screen and not executing it. But... why isn't it adding the lines built from the VHENTRY variable, and why doesn't it stop reading after one line sometimes, and this time not? You can see I was trying to do something to tell it to stop reading the input.
After realizing that a thing I had done in the script was by accident, I tried to fix it and saw that the | at the end of the read command was seemingly the only reason the script saved anything to the file in the first place.
I would have done this in python3 if I had known this script wouldn't be the simplest thing I had ever done. Now I just have to know how you do it, after all the time spent on it, so that I can remember never to think a shell script will save time again.
Use printf to write a string to a file. cat tries to read from a file named in the argument list. And when the argument is - it means to read from standard input until EOF. So your script is hanging because it's waiting for you to type all the input.
Don't put quotes around the path when it starts with ~, as the quotes make it a literal instead of expanding to the home directory.
Get rid of | at the end of the read line. read doesn't write anything to stdout, so there's nothing to pipe to the following command.
There isn't really any need for the VHENTRY variable, you can do that formatting in the printf argument.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG
printf -- '- **%s** | %s\n' "${LOGDATE}" "$VHMSG" >> versionlog.MD
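With those changes, a run like the earlier one should prompt once and append a single formatted entry; for illustration (the date is whatever date --utc produces that day):
virufac@box:~/Git/test$ ~/.logvh.sh
MSG > testing script
virufac@box:~/Git/test$ tail -n 1 versionlog.MD
- **01-22-2020** | testing script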

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one; I looked the issue up on Google, but I can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and it kinda looks like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems not to execute. It just gets skipped, and the argument has no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module launches (and crashes, because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
You're using relative file paths everywhere, and cd'ing in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
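One common way to compute such a root, assuming the framework's main script sits at the top of the tree (the name module_root here is just illustrative), is:
# absolute path of the directory containing this script
module_root="$(cd "$(dirname "$0")" && pwd)" || exit 1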
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
echo 'command1 failed!' >&2
exit 1
}
if command2; then
echo 'command2 succeeded!' >&2
else
echo 'command2 failed!' >&2
exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system)" || {
echo "Error creating temp directory" >&2
exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, storing them in a temp file, and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
arglist+=("$answer") # or ("#arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[#]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
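Putting several of these suggestions together, an untested sketch of what the loop might end up looking like (assuming bash, the same three-field args.conf layout, a $module_root computed as described earlier, and that info and succes are the framework's own helpers; treating $file as a path inside the module directory is a guess about your layout):
info "Reading module $name argument list..."
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    # this read still uses stdin, so it prompts the user, not the conf file
    read -p " $arg=" answer
    arglist+=("$answer")
done 3< "$module_root/modules/$name/args.conf"
info "Launching module $name..."
"$interpreter" "$module_root/modules/$name/$file" "${arglist[@]}"
succes "Module $name execution completed."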

Update Bash commands every 2 seconds (without re-running code everytime)

For my first bash project, I am developing a simple bash script that shows basic information about my system:
#!/bin/sh
UPTIME=$(w)
MHZ=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq)
TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)
#UPTIME shows the uptime of the device
#MHZ shows the overclocked specs
#TEMP shows the current CPU Temperature
echo "$UPTIME" #displays uptime
echo "$MHZ" #displays overclocked specs
echo "$TEMP" #displays CPU Temperature
MY QUESTION: How can I code this so that the uptime and CPU temperature refresh every 2 seconds without re-generating the code every time (I just want these two variables to update without having to enter the file path again and re-run the whole script)?
This code is already working fine on my system, but after it executes on the command line the information isn't updating, because it ran the commands once and is standing by for the next command instead of updating variables such as UPTIME in real time.
I hope someone understands what I am trying to achieve; sorry about my bad wording of this idea.
Thank you in advance...
I think this will help you: you can use the watch command to refresh that output every two seconds, without the loop.
watch ./filename.sh
It will re-run the command and show you its output every two seconds.
watch - execute a program periodically, showing output fullscreen
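The default interval is two seconds; it can also be set explicitly with -n if you ever want a different refresh rate, e.g.:
watch -n 2 ./filename.sh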
Not sure to really understand the main goal, but here's an answer to the basic question "How can I code this so that the uptime and CPU temperature refresh every two seconds ?" :
#!/bin/sh
while :; do
    UPTIME=$(w)
    MHZ=$(cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq)
    TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)
    #UPTIME shows the uptime of the device
    #MHZ shows the overclocked specs
    #TEMP shows the current CPU Temperature
    echo "$UPTIME" #displays uptime
    echo "$MHZ" #displays overclocked specs
    echo "$TEMP" #displays CPU Temperature
    sleep 2
done
I may suggest some modifications.
For such a simple job I recommend not using external utilities. So instead of $(cat file) you could use $(<file). This is a cheaper method, as bash does not have to launch cat.
On the other hand, if reading those devices returns only one line, you can use the bash built-in read, like: read ENV_VAR <single_line_file. It is even cheaper. If there are more lines and, for example, you want to read the 2nd line, you could use something like this: { read line_1; read line_2;} <file.
As I see it, w provides much more information, and I assume you need only the header line, which is exactly what uptime prints. The external uptime utility reads the /proc/uptime pseudo file, so to avoid calling externals you can read this pseudo file directly.
The looping part also uses the external sleep(1) utility. For this, the timeout feature of the read builtin can be used.
So in short the script would look like this:
while :; do
    # /proc/uptime has two fields, uptime and idle time
    read UPTIME IDLE </proc/uptime
    # Not having these pseudo files on my system, the whole line is read
    # Maybe some formatting is needed. For MHZ /proc/cpuinfo may be used
    read MHZ </sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
    read TEMP </sys/class/thermal/thermal_zone0/temp
    # Bash supports only integer arithmetic, so chomp off the float part
    UPTIME_SEC=${UPTIME%.*}
    UPTIME_HOURS=$((UPTIME_SEC/3600))
    echo "Uptime: $UPTIME_HOURS hours"
    echo $MHZ
    echo $TEMP
    # read waits on stdin, so pressing ENTER makes it return immediately
    read -t 2
done
This does not call any external utility and does not fork at all. So instead of executing 3 external utilities (using the expensive fork and execve system calls) every 2 seconds, this executes none. Far fewer system resources are used.
You could use while [ : ] and sleep 2.
You need the awesome power of loops! Something like this should be a good starting point:
while true ; do
    echo 'Uptime:'
    w 2>&1 | sed 's/^/ /'
    echo 'Clocking:'
    sed 's/^/ /' /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
    echo 'Temperature:'
    sed 's/^/ /' /sys/class/thermal/thermal_zone0/temp
    echo '=========='
    sleep 2
done
That should give you your three sections, with the data of each nicely indented.

How to do multiple if, else statements in this case in bash script?

Right now I am running a few programs with bash scripts (with Cygwin).
Basically what I am doing is: after the program starts, a loop is run that checks that the program is still running.
I was doing:
while true
do
if [ "$(ps -W | grep -w name | gawk '{print $8,$9}' | gawk -F \\ '{print $4}')" == 'program' ];then
sleep 1
else
"start program" (whatever is needed here)
fi
done
But I started to realize that having such a script running multiple times just causes unnecessary system resources to be used.
I tried doing an if then, elif, but it never goes past the first if.
I need it to go "alright the first if is negative, try the next, go to the end, start over".
Here is a copy of my script. I forgot to say I am using Cygwin, but that really doesn't change anything, because it seems to still use normal bash scripting, maybe just different paths to start files: http://pastebin.com/s8ZdPQMn (and yes, the h is not there; I just can't seem to edit the pastebin).
My overall plan is check that first SRPro is running, check the next, etc, only triggering if it's detected one is not running.
EDIT: I solved it. Not exactly sure why, but in my original single file per program, gawk printing $4 at the end gave me what I wanted, yet for some reason when doing it this way it turned into $5. So changing $4 to $5 made the script work.
EDIT: One really strange issue, though: it will work for minutes on end, then all of a sudden get confused and start 7 copies of one program or something. Also, it can be random which one it starts.
You might find the wait command (try help wait from a bash prompt) useful. It's unclear exactly what you want, but as an example, here's a basic respawn function:
$ respawn () {
>     while true
>     do
>         "${@}" &
>         wait ${!}
>         echo "respawning ..."
>     done
> }
$ respawn some_program arg1 arg2 etc
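If you want to keep several programs alive from one script, each respawn call can itself be put in the background and collected with a final wait; the program names below are just placeholders for whatever you're actually monitoring:
respawn /path/to/program1 arg1 &
respawn /path/to/program2 &
wait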
