Execute a command each time curl outputs a newline - linux

I'm trying to use curl to infinitely stream messages from a remote server.
I periodically receive notifications like this:
$ curl -s https://my-server.com/status
Backup in progress...
Backup completed.
Disk space is about to fill up.
I want notify-send to be executed each time curl outputs a new line.
I created the script below, but it did not work.
notify_pipe() {
read -r msg
notify-send "$msg"
}
# Does nothing, just a blinking cursor.
stdbuf -oL curl -s https://my-server.com/status | notify_pipe
Please help me understand why this is not working and how to make it work.
I'm a beginner, I would appreciate any help! (and please forgive my poor English)

You need to read the output lines in a loop. A single read consumes only one line, so your notify_pipe function handles the first message and then exits; a while loop keeps reading until curl closes the pipe:
stdbuf -oL curl -s https://my-server.com/status |
while IFS= read -r line ; do
notify-send Status "$line"
done
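If you prefer to keep a helper function like the notify_pipe from your question, the same fix applies inside it; a minimal sketch:
notify_pipe() {
while IFS= read -r msg; do
notify-send Status "$msg"
done
}
stdbuf -oL curl -s https://my-server.com/status | notify_pipe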


Wait until curl command has finished

I'm using curl to grab a list of subscribers. Once this has been downloaded the rest of my script will process the file.
How could I make the script wait until the file has been downloaded and error if it failed?
curl "http://mydomain/api/v1/subscribers" -u
'user:pass' | json_pp >>
new.json
Thanks
As noted in the comment, curl will not return until the request is completed (or has failed). I suspect you are looking for a way to identify errors from curl, which are currently getting lost. Consider the following:
If you just need the error status, you can use bash's pipefail option (set -o pipefail). This will allow you to check for a failure in curl:
set -o pipefail
if curl ... | json_pp >> new.json ; then
# All good
else
# Something wrong.
fi
Also, you might want to save the "raw" response before trying to pretty-print it, either in a temporary file or by using tee:
set -o pipefail
if curl ... | tee raw.json | json_pp >> new.json ; then
# All good
else
# Something wrong - look into raw.json
fi
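If you also want HTTP error responses (404, 500, ...) to count as failures, curl's -f/--fail option makes curl exit non-zero on them. A rough sketch combining it with the two ideas above, reusing the URL and credentials from the question as placeholders:
set -o pipefail
if curl -sS --fail "http://mydomain/api/v1/subscribers" -u 'user:pass' | tee raw.json | json_pp >> new.json ; then
# All good: raw.json has the raw response, new.json the pretty-printed copy
echo "subscribers downloaded"
else
# curl or json_pp failed - inspect raw.json (it may be empty or partial)
exit 1
fi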

bash -- execute command on file change; doubling issue + how to skip loop until command completes

I'm a bash noob, and I am trying to set up a sort of "hot reload" functionality for a project I'm working on using inotifywait. Ubuntu 20.04 if that matters.
Here is what I hoped would have worked:
inotifywait -m -r ../.. -e modify,create,delete |
while read line; do
custom_command
done
I'm having two problems:
Issue #1 is that custom_command takes some time to run, and so if I make more changes to the directory in the meantime, the events appear to "queue up" runs of custom_command, where really I just want to keep the most recent one and drop the others.
Issue #2 is that I'm getting some sort of "double output." So for example if I run bash auto-exec.sh and auto-exec.sh looks like this:
inotifywait -m -r . -q -e modify,create,delete
Then each time a change registers, I get this as output (not a mistake that it's doubled -- I get two identical lines each time there is a modification):
./ MODIFY auto-exec-testfile.txt
./ MODIFY auto-exec-testfile.txt
I should note I've tried making changes both with Visual Studio Code and gedit, with the same results.
If I modify the bash file like so:
inotifywait -m -r . -q -e modify,create,delete |
while read line; do
echo "$line"
echo "..."
done
I get the following output each time there is a change:
./ MODIFY auto-exec-testfile.txt
...
./ MODIFY auto-exec-testfile.txt
...
If I modify bash_test.sh to the following:
inotifywait -m -r . -q -e modify,create,delete |
while read line; do
echo "help me..."
done
Then I get the following each time a change is made:
help me...
help me...
What happened to the ./ MODIFY ... line?? Presumably there's something I don't understand about bash, stdout, or similar/related concepts here?
And finally, if I change the .sh file to the following:
inotifywait -m -r . -q -q -e modify,create,delete |
while read _; do
echo "help me..."
done
Then I get no output at all. This one I think I understand, because the -q -q means that inotifywait is in "super silent" mode, so there is no log and therefore nothing to trigger the while.
What I'd love to do is just trigger the code once when something changes, and drop all but the most recent execution. I'm not sure doing this using a while is entirely necessary, but I tried inotifywait -m -r . -q -q -e modify,create,delete | echo "help me..", and the script printed "help me..." once at startup, then exited on modification.
Assistance very much appreciated.
EDIT - 2021-Mar-23
I removed -m and create from the inotifywait line, and it appears to work as expected, except that it doesn't stay "up" in monitor mode. So this at least only gives me one entry from inotifywait:
inotifywait -r .. -q -e modify,delete |
while read line1; do
echo "${line1}"
done
Related:
inotifywait - pause monitoring while executing command
https://unix.stackexchange.com/questions/140679/using-inotify-to-monitor-a-directory-but-not-working-100
inotifywait not performing the while loop in bash script
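One way to get the behaviour you describe is to drop -m, so that inotifywait exits after each event, and to restart the command on every change, killing any run that is still in flight: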
while inotifywait -e close_write,delete .; do
pkill custom_command
custom_command&
done
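The doubled MODIFY lines usually come from the editor writing the file in more than one operation per save, so a single save produces two events. One possible variation of the loop above adds a short sleep to swallow those near-simultaneous duplicates (custom_command stands in for your rebuild step):
while inotifywait -r -q -e close_write,delete .; do
# Give the editor a moment to finish its save, so one change
# does not trigger two rebuilds.
sleep 0.2
# Drop any rebuild that is still running; only the latest change matters.
pkill -x custom_command
custom_command &
done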

Multiple scripts making rest calls interfering

So I am running into a problem with Unix scripts that use curl to make REST calls. I have one script that runs two other scripts inside of it.
cat example.sh
FILE="file1.txt"
RECIP="wilfred#blamagam.com"
rm -f $FILE
./script1.sh > $FILE
mail -s "subject" $RECIP < $FILE
RECIP="bob#blamagam.com"
rm -f $FILE
./script2.sh > $FILE
mail -s "subject" $RECIP < $FILE
exit 0
Each script makes REST calls to the same service. It is my understanding that script1.sh should completely finish before script2.sh is run, however that is not the case. In the logs for the REST service I see a call from the second script in the middle of the first one still executing. The second script then fails because of this (it does not get any data returned).
I am modifying this process; I am not the one who originally wrote it. I am not seeing any forked processes or background processes at all, and I have been banging my head against the wall.
I do know that script2.sh works. Whenever script1.sh takes under a minute, script2.sh works just fine, but more often than not script1.sh takes over a minute, causing the second script to fail.
This is run by a cron job, and the contents of the files are mailed out, so I can't just default to running them manually. Any suggestions for what to look into would be much appreciated!
EDIT: Here is a high-level pseudocode example
script1.sh
ITEMS=`/usr/bin/curl -m 10 -k -u userName:passWord -L https://server/rest-service/rest?where=clause=value;clause2=value2&sel=field 2>/dev/null | sed s/<\/\?Attribute[^>]*>/\n/g | grep -v '^<' | grep -v '^$' | sed 's/ //g'`
echo "\n Subject for these metrics"
echo "$ITEMS"
Both scripts have lots of entries like this. There are 2 or 3 for loops, but they are simple and I do not see any background processes being called. It's a large script, so I could only provide a snippet. Could the REST call into pipes be causing an issue?
Edit:
Just tested this on my system and it seems to work.
cat example.sh
FILE="file1.txt"
RECIP="wilfred#blamagam.com"
rm -f "$FILE"
(./script1.sh > "$FILE") &
procscript1=$!
wait "$procscript1"
mail -s "subject" "$RECIP" < "$FILE"
RECIP="bob#blamagam.com"
rm -f "$FILE"
(./script2.sh > "$FILE") &
procscript2=$!
wait "$procscript2"
mail -s "subject" "$RECIP" < "$FILE"
exit 0
Put the script executions in the background with the &.
Get the process id's for each script execution.
Use the wait command to block until the execution is done.

Is there a way to perform a "tail -f" from an url?

I currently use tail -f to monitor a log file: this way I get an autorefreshing console monitoring a web server.
Now, said webserver was moved to another host and I have no shell privileges for that.
Nevertheless I have a .txt network path, which in the end is a log file which is constantly updated.
So, I'd like to do something like tail -f, but on that url.
Would it be possible? In the end, "in Linux everything is a file", so..
You can do auto-refresh with the help of watch combined with wget.
It won't show history like tail -f does; rather, it updates the screen the way top does.
Example of a command that shows the contents of file.txt on the screen and updates the output every five seconds:
watch -n 5 wget -qO- http://fake.link/file.txt
Also, you can output n last lines, instead of the whole file:
watch -n 5 "wget -qO- http://fake.link/file.txt | tail"
In case you still need behaviour like tail -f (keeping the history), I think you need to write a script that downloads the log file periodically, compares it to the previously downloaded version, and then prints the new lines. It should be quite easy.
I wrote a simple bash script to fetch the URL content every 2 seconds, compare it with the local file output.txt, and append the diff to the same file. (I wanted to stream AWS Amplify logs in my Jenkins pipeline.)
while true; do comm -13 --output-delimiter="" <(cat output.txt) <(curl -s "$URL") >> output.txt; sleep 2; done
Don't forget to create the empty output.txt file first:
: > output.txt
View the stream:
tail -f output.txt
Original comment: https://stackoverflow.com/a/62347827/2073339
UPDATE:
I found a better solution using wget here:
while true; do wget -ca -o /dev/null -O output.txt "$URL"; sleep 2; done
https://superuser.com/a/514078/603774
I've made this small function and added it to the .*rc of my shell. This uses wget -c, so it does not re-download the whole page:
# Poll logs continuously over HTTP
logpoll() {
FILE=$(mktemp)
echo "———————— LOGPOLLING TO $FILE ————————"
tail -f $FILE &
tail_pid=$!
bg %1
stop=0
trap "stop=1" SIGINT SIGTERM
while [ $stop -ne 1 ]; do wget -co /dev/null -O $FILE "$1"; sleep 2; done
echo "——————————— LOGPOLL DONE ————————————"
kill $tail_pid
rm $FILE
trap - SIGINT SIGTERM
}
Explanation:
Create a temporary logfile using mktemp and save its path to $FILE
Make tail -f output the logfile continuously in the background
Make ctrl+c set stop to 1 instead of exiting the function
Loop until stop bit is set, i.e. until the user presses ctrl+c
wget given URL in a loop every two seconds:
-c - "continue getting partially downloaded file", so that wget continues instead of truncating the file and downloading again
-o /dev/null - wget's log messages shall be thrown into the void
-O $FILE - output the contents to the temp logfile we've created
Clean up after yourself: kill the tail -f, delete the temporary logfile, unset the signal handlers.
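Usage is then just the function name plus the URL (fake.link is a placeholder):
logpoll http://fake.link/file.txt
# Press Ctrl+C to stop polling; the function then cleans up after itself.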
The proposed solutions periodically download the full file.
To avoid that, I've created a package and published it on npm. It does a HEAD request (to get the size of the file) and then requests only the last bytes.
Check it out and let me know if you need any help.
https://www.npmjs.com/package/@imdt-os/url-tail

How do I pipe or redirect the output of curl -v?

For some reason the output always gets printed to the terminal, regardless of whether I redirect it via 2> or > or |. Is there a way to get around this? Why is this happening?
Add the -s (silent) option to remove the progress meter, then redirect stderr to stdout to get the verbose output on the same fd as the response body:
curl -vs google.com 2>&1 | less
Your URL probably has ampersands in it. I had this problem too, and I realized that my URL was full of ampersands (from CGI variables being passed), so the shell was sending parts of the command to the background in a weird way and the redirection never applied where I expected. If you put quotes around the URL, that will fix it.
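A quick illustration (the URL and its query parameters are made up):
# Broken: the shell treats each & as "run the preceding command in the background",
# so the pipe and redirection apply to the wrong part of the line.
curl -v http://example.com/cgi?foo=1&bar=2 2>&1 | less
# Fixed: quoting the URL passes the & characters to curl literally.
curl -v "http://example.com/cgi?foo=1&bar=2" 2>&1 | less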
The answer above didn't work for me; what eventually did was this syntax:
curl https://${URL} &> /dev/stdout | tee -a ${LOG}
tee puts the output on the screen, but also appends it to my log.
If you need the output in a file you can use a redirect:
curl https://vi.stackexchange.com/ -vs >curl-output.txt 2>&1
Be sure not to flip >curl-output.txt and 2>&1: redirections are applied left to right, so if 2>&1 comes first, stderr is duplicated onto the terminal before stdout is redirected to the file, and the verbose output never reaches the file.
Just my 2 cents.
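A small sketch of the two orderings:
# Works: stdout is sent to the file first, then stderr is pointed at the same place.
curl -vs https://vi.stackexchange.com/ > curl-output.txt 2>&1
# Does not do what you want: stderr is duplicated onto the terminal (where stdout
# currently points), and only then is stdout redirected to the file.
curl -vs https://vi.stackexchange.com/ 2>&1 > curl-output.txt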
The below command should do the trick, as answered earlier
curl -vs google.com 2>&1
However, if you need to send the output to a file,
curl -vs google.com > out.txt 2>&1
should work.
I found the same thing: curl by itself would print to STDOUT, but could not be piped into another program.
At first, I thought I had solved it by using xargs to echo the output first:
curl -s ... <url> | xargs -0 echo | ...
But then, as pointed out in the comments, it also works without the xargs part, so -s (silent mode) is the key to preventing extraneous progress output to STDOUT:
curl -s ... <url> | perl -ne 'print $1 if /<sometag>([^<]+)/'
The above example grabs the simple <sometag> content (containing no embedded tags) from the XML output of the curl statement.
The following worked for me:
Put your curl statement in a script named abc.sh
Now run:
sh abc.sh 1>stdout_output 2>stderr_output
You will get your curl's results in stdout_output and the progress info in stderr_output.
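For completeness, abc.sh could be as small as this (the URL is a placeholder):
#!/bin/sh
curl -v https://example.com/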
This simple example shows how to capture curl output, and use it in a bash script
test.sh
function main
{
\curl -vs 'http://google.com' 2>&1
# note: add -o /tmp/ignore.png if you want to ignore binary output, by saving it to a file.
}
# capture output of curl to a variable
OUT=$(main)
# search output for something using grep.
echo
echo "$OUT" | grep 302
echo
echo "$OUT" | grep title
Solution = curl -vs google.com 2>&1 | less
BUT, if you redirect the output to a file and the output still shows up on the screen, the URL response may contain a newline character (\n) that messes up your shell.
To avoid this, put everything in a variable:
result=$(curl -v . . . . )
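A minimal sketch of that variable-capture approach (placeholder URL), writing the captured output to a file afterwards:
result=$(curl -vs https://example.com/ 2>&1)
printf '%s\n' "$result" > curl-output.txt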
