Get output of script without alternate screen in bash - linux

I have a script that I can't edit and that I call regularly via ssh. The script switches to the alternate screen (tput smcup/rmcup) to display its progress, switches back at the end, and prints the results to the console.
I'm trying to automate launching that script and collecting the data, but I can't find a way to capture only the final output in a variable or a file.
For example, here's what I get as output in the console:
SFTP:
Server Result
S01 OK
S02 OK
But in the actual file:
^[[?1049h^[[22;0;0t^[[3J^[[H^[[2J^[[1;1H^[[3J^[[H^[[2J^[[1;1H
FTP:
Server Result
S01 Waiting
S02 Waiting
^[[?1049l^[[23;0;0t
FTP:
Server Result
S01 OK
S02 OK
I understand that it writes everything, including the escape sequences that update the screen, since cat "file.txt" replays that file perfectly. But is there a way to get a parsed/clean output without changing the source script?

For the output file you have provided (OUT.txt), I suggest you try the following:
tac OUT.txt |
awk '{ print $0 ; if( $1 == "FTP:" ){ exit } ; }' |
tac
tac is like cat, but prints the input lines in reverse order.
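Since the goal is to automate this over ssh, here is a hedged sketch of capturing only that final block into a variable; the hostname and script path are placeholders, and it assumes the last block of output starts with the "FTP:" line, as above:
result=$(ssh user@host '/path/to/script.sh' | tac | awk '{ print $0 ; if( $1 == "FTP:" ){ exit } ; }' | tac)
printf '%s\n' "$result"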

Related

How do you append a string built with interpolation of vars and STDIN to a file?

Can someone fix this for me?
It should copy a version log file to a backup after moving to a repo directory.
Then it automatically appends a line, given as input, to the log file with some formatting.
That's it.
Assume the log file and the test directory exist.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG |
VHENTRY="- **${LOGDATE}** | ${VHMSG}"
cat ${VHENTRY} >> versionlog.MD
Shell output:
virufac@box:~/Git/test$ ~/.logvh.sh
MSG > testing script
EOF
EOL]
EOL
e
E
CTRL + C to get out, because it was stuck reading lines of input
virufac@box:~/Git/test$ cat versionlog.MD
directly outputs the markdown
# Version Log
## version 0.0.1 established 01-22-2020
*Working Towards Working Mission 1 Demo in 0.1 *
- **01-22-2020** | discovered faker.Faker and deprecated old namelessgen
EOF
EOL]
EOL
e
E
I finally got it to save the damned input lines to the file instead of just echoing the command I wanted on the screen and not executing it. But why isn't it adding the lines built from the VHENTRY variable, and why does it sometimes stop reading after one line and sometimes not? You could see I was trying to do something to tell it to stop reading the input.
After realizing that something I had done in the script was an accident, I tried to fix it and saw that the | at the end of the read command was seemingly the only reason the script saved anything to the file in the first place.
I would have done this in Python 3 if I had known this script wouldn't be the simplest thing I had ever done. Now I just have to learn how to do it after all the time spent on it, so that I can remember never to think a shell script will save time again.
Use printf to write a string to a file. cat tries to read from the files named in its argument list, and when an argument is - it reads from standard input until EOF. So your script hangs because it's waiting for you to type all the input.
Don't put quotes around the path when it starts with ~, as the quotes make it a literal instead of expanding to the home directory.
Get rid of the | at the end of the read line. read doesn't write anything to stdout, so there's nothing to pipe to the following command; worse, the pipeline runs the read in a subshell, so the VHMSG it sets would be lost anyway.
There isn't really any need for the VHENTRY variable; you can do that formatting in the printf argument.
#!/bin/bash
cd ~/Git/test
cp versionlog.MD .versionlog.MD.old
LOGDATE="$(date --utc +%m-%d-%Y)"
read -p "MSG > " VHMSG
printf -- '- **%s** | %s\n' "${LOGDATE}" "$VHMSG" >> versionlog.MD
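With those fixes applied, a run might look roughly like this (the date shown is just illustrative):
virufac@box:~/Git/test$ ~/.logvh.sh
MSG > testing script
virufac@box:~/Git/test$ tail -n 1 versionlog.MD
- **01-22-2020** | testing script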

Redirect program output to a file but not at the end of the runtime

I want to redirect the output from a program to a file in the Linux console. I found the solution of using the > operator, but it doesn't work for me.
I need the output during the runtime of the program, not only at the end of the runtime, because the program streams its progress (in percent) to stdout. I have no possibility to install new tools on the Linux system.
The program performs an update. The expected output is (the percentage keeps moving while the program runs, from 0 to 100%):
# erase
# load file XXX
# 100%
# erase
# load file XXX
# 100%
Only the 100 percent information at the end of the program gets recorded, not the values while the program is running. For me it's important to get the percent information both while the program is running and at the end, because I want to visualize the percentage in a detached GUI.
Have you tried piping to tee? Here are some examples:
echo "text" | tee /home/yourusername/file
bash somescript.sh | tee /home/yourusername/somescript.log
How about:
$ nohup program > output.txt & tail -f output.txt
nohup keeps the program running even if you log out, and the & puts it in the background while its output is redirected to a file
tail -f streams the end of the file to the screen as it grows
This of course means that you can't interact with the program while it's running.
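If the percentages still only appear at the end, a common cause is that the program writes carriage-return progress updates or block-buffers its stdout once it is redirected. A hedged sketch, assuming stdbuf (part of coreutils) is already on the system and update_program stands in for the real command:
# line-buffer the output and turn carriage-return updates into newlines so they are recorded as they happen
stdbuf -oL ./update_program | stdbuf -oL tr '\r' '\n' | tee output.txt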

capture line and post it

There is a log file that I need to capture specific lines from, and then send a specific word out of each of them to a URL.
This line does the job of tailing that log file and finding that word:
tail -f /var/log/mail.log | awk '/status=bounced/ { sub(/^to=</,"",$7); sub(/>,$/,"",$7); print $7}'
Now I need the result of $7 to be sent to some URL, I'm assuming by using curl.
Assuming that this log file will only get bigger and that this script will need to run endlessly in the background, what's the best way of putting together a bash script that answers those needs?
Thanks!
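One hedged way to wire this together is to pipe the awk output into a read loop that posts each address with curl; the URL and query parameter are placeholders, and fflush() assumes GNU awk so matches are not held back by output buffering:
#!/bin/bash
tail -F /var/log/mail.log |
awk '/status=bounced/ { sub(/^to=</,"",$7); sub(/>,$/,"",$7); print $7; fflush() }' |
while IFS= read -r addr; do
    # hypothetical endpoint; replace with the real URL and parameter name
    curl -s -G "https://example.com/bounce" --data-urlencode "address=$addr"
done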

Using AWK and setting results to bash variables/arrays?

I have a file that replicates the results of the show processlist command from MySQL.
The file looks like this:
*************************** 1. row ***************************
Id: 1
User: system user
Host:
db: NULL
Command: Connect
Time: 1030455
State: Waiting for master to send event
Info: NULL
*************************** 2. row ***************************
Id: 2
User: system user
Host:
db: NULL
Command: Connect
Time: 1004
State: Has read all relay log; waiting for the slave
I/O thread to update it
Info: NULL
And it keeps going like this for a few more blocks with the same structure.
I want to use awk to get only these parameters: Time, Id, Command, and State, and store each of them in a different variable or array so that I can later use / print them in my bash shell.
The problem is that I am pretty bad with awk; I don't know how to separate the parameters I want from the file and also set them as bash variables or arrays.
Many thanks in advance for the help!
EDIT: Here is my code so far
echo "Enter age"
read age
cat data | awk 'BEGIN{ RS="row"
FS="\n"
OFS="\n"}
{ print $2,$7}
' | awk 'BEGIN{ RS="Id"}
{if ($4 > $age){print $2}}'
The file 'data' contains blocks like the ones I have pasted above. The code should, if the 'age' entered is smaller than the Time parameter in the data file (which is $4 in my awk code), return the Id parameter, but it returns nothing.
If I remove the if statement and print $4 instead of $2, this is my output:
Enter age
1
1030455
1004
2144
2086
0
So I was thinking maybe that blank line is somehow messing up my AWK print? Is there a simple way to ignore that blank line while keeping my other data?
This is how you'd use awk to produce the values you want as a set of tab-separated fields, one line per "row" block from the input:
$ cat tst.awk
BEGIN {
    RS="[*]+ [[:digit:]]+[.] row [*]+\n"
    FS="\n"
    OFS="\t"
}
NR>1 {
    sub(/\n$/,"")        # remove the trailing newline
    gsub(/\n\s+/," ")    # compress all multi-line fields into single lines
    gsub(OFS," ")        # ensure the only OFS in the output is between fields
    delete n2v
    for (i=1; i<=NF; i++) {
        name  = gensub(/:.*/,"","",$i)
        value = gensub(/^[^:]+:\s+/,"","",$i)
        n2v[name] = value
    }
    if (n2v["Time"]+0 > age) {   # force a numeric comparison
        print n2v["Time"], n2v["Id"], n2v["Command"], n2v["State"]
    }
}
$ awk -v age=2000 -f tst.awk file
1030455 1 Connect Waiting for master to send event
If the target age is already stored in a shell variable just init the awk variable from the shell variable of the same name:
$ age="2000"
$ awk -v age="$age" -f tst.awk file
The above uses GNU awk for multi-char RS (which you already had), gensub(), \s, and delete array.
When you say "and store every one of these parameters into a different variable or array" it could mean one of several things so I'll leave that part up to you but you might be looking for something like:
arr=( $(awk '...') )
or
awk '...' |
while IFS=$'\t' read -r Time Id Command State
do
    <do something with those 4 vars>
done
but by far the most likely situation is that you don't want to use shell at all but instead just stay inside awk.
Remember - every time you write a loop in shell just to manipulate text you have the wrong approach. UNIX shell is an environment from which to call UNIX tools and the UNIX tool for general text manipulation is awk.
Until you edit your question to tell us more about your problem though, we can't guess what the right solution is from this point on.
At the first level you have your shell, which you use to run any other child process. It's impossible to modify the parent's environment from within a child process. When you run your bash script file (which has the +x permission) it's spawned as a new (child) process. It can set its own environment, but when it exits you get back to the original (parent) environment.
You can set some variables in bash and export them to its environment; they'll be inherited by its children. However, it can't be done in the opposite direction (a parent can't inherit from its child).
If you wish to execute some commands from the script file in the current bash context, you can source the script file: source ./your_script.sh or . ./your_script.sh will do that for you.
If you need to run awk to filter some data for you and keep the results in bash you can do:
awk ... | read foo
read is a shell builtin rather than an external process (check type read, help, help read, and man bash to verify this yourself). Note, though, that in bash the last command of a pipeline runs in a subshell unless you enable shopt -s lastpipe, so foo set this way won't survive in the parent shell.
or:
foo=`awk ....`
There are many other constructs you can use. Whatever bash script you write, please compare your code against the Bash Pitfalls page.
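Putting the two answers together, here is a hedged sketch of reading the tab-separated rows that tst.awk prints into bash (mapfile needs bash 4+; file and age are the same as above):
# one array element per matching "row" block
mapfile -t rows < <(awk -v age=2000 -f tst.awk file)
for row in "${rows[@]}"; do
    IFS=$'\t' read -r Time Id Command State <<< "$row"
    printf 'Id=%s Time=%s Command=%s State=%s\n' "$Id" "$Time" "$Command" "$State"
done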

egrep command with piped variable in ssh throwing No Such File or Directory error

Ok, here I am again, struggling with ssh. I'm trying to retrieve some data from a remote log file based on tokens, and I'm trying to pass multiple tokens to an egrep command via ssh:
IFS=$'\n'
commentsArray=($(ssh $sourceUser@$sourceHost "$(egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))
echo ${commentsArray[0]}
echo ${commentsArray[1]}
commax=${#commentsArray[@]}
echo $commax
where $v is something like below, but its length is dynamic, meaning it can contain many file names separated by pipes:
UserComments/propagateBundle-2013-10-22--07:05:37.jar|UserComments/propagateBundle-2013-10-22--07:03:57.jar
The output which I get is:
oracle@172.18.12.42's password:
bash: UserComments/propagateBundle-2013-10-22--07:03:57.jar/New: No such file or directory
bash: line 1: UserComments/propagateBundle-2013-10-22--07:05:37.jar/nouserinput: No such file or directory
0
One thing worth noting is that my log file data has spaces in it. So, in the code piece I've given, the actual comments which I want to extract start after the jar file name, like: UserComments/propagateBundle-2013-10-22--07:03:57.jar/
The actual comment is 'New Life Starts here', but the logs show that we actually get it only up to 'New' and then it breaks at the space. I tried setting IFS but to no avail. Probably I need to set it on the remote side, but I don't know how I should do that.
Any help?
Your command is trying to run the egrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log on the local machine, and pass the result of that as the command to run via SSH.
I suspect that you meant for that command to be run on the remote machine. Remove the inner $() to get that to happen (and fix the quoting):
commentsArray=($(ssh $sourceUser@$sourceHost "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'"))
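Since the matched log lines contain spaces, a hedged alternative is to read them into the array one line at a time instead of relying on word splitting (mapfile needs bash 4+; the variable names are the ones from the question):
mapfile -t commentsArray < <(ssh "$sourceUser@$sourceHost" "egrep '$v' '/$INSTALL_DIR/$PROP_BUNDLE.log'")
echo "${commentsArray[0]}"
echo "${commentsArray[1]}"
echo "${#commentsArray[@]}"    # number of matching lines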
You should use fgrep to avoid special regex interpretation of your input:
commentsArray=($(ssh $sourceUser@$sourceHost "$(fgrep "$v" /$INSTALL_DIR/$PROP_BUNDLE.log)"))

Resources