How to turn command output into a live status update? - linux

I am curious whether there's a tool or an easy way to continually execute a command at given intervals and reprint its output in the same place on the screen.
The example that prompted me to think about it is 'dropbox-cli status'. Manually executed:
$> dropbox-cli status
Syncing (6,762 files remaining)
Indexing 3,481 files...
$> dropbox-cli status
Syncing (5,162 files remaining)
Indexing 2,681 files...
I am looking for:
$> tracker --interval=1s "dropbox-cli status"
Syncing (6,743 files remaining)
Indexing 3,483 files
The imaginary command 'tracker' would block, and the two output lines would be reprinted in place every second rather than appended as a growing log.

You can use watch
watch -n1 dropbox-cli status
Specify the interval in seconds with the -n option.
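For example, to refresh every second and highlight what changed between updates (the -d flag is described in the watch man page excerpt further down this page):
watch -n 1 -d 'dropbox-cli status'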

Related

Linux: How to cat stdout of a process until the end, but stop and return a non-zero code if a certain string appears?

We have a tool that runs tests, but it does not return an error code when they fail.
The tool runs the tests by logging in over SSH to a custom console (not bash) and issuing a command; all tests run within that single invocation.
The logging of the tests goes to a file.
The output of the tool is roughly:
test1 [ok]
test2 Some message based on the failure
...
To stop the build, we need to look for certain strings in the output.
The output appears as the tests run.
I could capture the whole output into a file and fail at the end. But it would save quite some time to fail once the first test fails.
Therefore, I would like something like tee that would also kill the execution if it finds the failure string. Or, at least, something that prints the output as it comes and returns non-zero if the string is found.
Is this doable with the standard Linux toolkit?
The only solution I can think of is:
Start your build process and redirect its output to an output file.
Start another script that monitors this file: a loop that runs every X seconds searching the file for your, let's say, forbidden words. As soon as they appear, kill the build process (you may need a way to identify the build process, such as a PID file) and clear the file.
You can even put these two processes in a single shell script and make them both start and stop when needed.
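A minimal sketch of that two-process idea (run_tests, build.log, the FAILED marker, and the 2-second poll are all placeholders for your tool, log file, forbidden string, and interval):
#!/usr/bin/env bash
# Sketch only: stream the build log live and abort on the first failure marker.
run_tests > build.log 2>&1 &      # start the build, capture its output
build_pid=$!

tail -n +1 -f build.log &         # stream the log to the terminal as it grows
tail_pid=$!

# Poll the log for the forbidden string and kill the build if it appears.
while kill -0 "$build_pid" 2>/dev/null; do
    if grep -q 'FAILED' build.log; then
        kill "$build_pid" "$tail_pid"
        echo "Failure string detected, build aborted." >&2
        exit 1
    fi
    sleep 2
done

kill "$tail_pid" 2>/dev/null
The tail -f keeps the output appearing as it comes, and the grep poll provides the early non-zero exit.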

How to stream log file content when the file names are constantly changing, in Perl?

I have a series of applications on Linux systems whose logs I need to constantly 'stream' out, or even just 'tail', but the challenge is that the filenames are constantly rolling and changing.
They are all date-encoded (the dates being in different formats), and each then has a different increment scheme.
Most of them start at one and increase; one has no extension on its first file and adds an extension on later files; another increments a number until it hits 99, then increments an alpha character, resets the numeric part to 01, and counts up again, since it rolls so quickly.
I only have OS-level shell scripting, OS command-line utilities, and Perl available to handle this, so that another application can pick up and read these logs.
A new file is created only at the moment the application starts writing to it, and groups of different logs (some I am reading, some I am not) are written to the same directory, so I cannot just pick up anything that hits the directory.
If I simply 'tail -n 1000000 -f |' them today, this works fine for the reader application I am using until the file changes. I cannot set up file lists or ranges within the reader application, but I can pre-process the logs so they appear as a continuous stream to the reader rather than the reader invoking commands to read them directly. A simple Perl log reader like this also works fine for a static filename but not for dynamic ones. It is critical that I do not re-process any log lines and only capture new lines being written to the logs.
I admit I am not by any means a Perl guru, and the best clue I've been able to find so far is the use of Perl's glob function, but the examples I've found basically reprocess all of the files on each run and then seem to stop.
Example file names I am dealing with across the multiple apps I am trying to handle:
appA_YYMMDD.log
appA_YYMMDD_0001.log
appA_YYMMDD_0002.log
WS01APPB_YYMMDD.log
WS02APPB_YYMMDD.log
WS03AppB_YYMMDD.log
APPCMMDD_A01.log
APPCMMDD_B01.log
YYYYMMDD_001_APPD.log
As the listing below shows, the files do not share the same inode, and simply monitoring the directory for changes is not practical because a lot of other things are written there. On the dev system more than 50 logs are being written to the directory, amounting to thousands of files, and I am only trying to retrieve 5. I am seeing if multitail can be made available to try that suggestion, but it is not currently installed, and installing any additional RPMs in this environment is generally a multi-month battle.
ls -i
24792 APPA_180901.log
24805 APPA__180902.log
17011 APPA__180903.log
17072 APPA__180904.log
24644 APPA__180905.log
17081 APPA__180906.log
17115 APPA__180907.log
So the root of what I am trying to do is simply get a continuous stream regardless of whether the file name changes, without running the extract command repeatedly and without big breaks in the data feed while some script figures out that the file being logged to has changed. I don't need to parse the contents (my other app does that). Is there an easy way of handling this changing file name?
How about monitoring the log directory for changes with Linux inotify, e.g. Linux::inotify2? Then you could detect when new log files are created, stop reading from the old log file and start reading from the new log file.
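The answer names the Perl module Linux::inotify2; as a shell-level illustration of the same idea, here is a sketch using inotify-tools' inotifywait instead (which would have to be installed; logdir and the appA_*.log pattern are placeholders):
#!/usr/bin/env bash
# Sketch only: watch the log directory for newly created files matching one
# app's pattern and move the tail to each new file as it appears.
logdir=/path/to/logs          # placeholder
tail_pid=''

inotifywait -m -e create --format '%f' "$logdir" |
while IFS= read -r newfile; do
    case $newfile in
        appA_*.log)
            [ -n "$tail_pid" ] && kill "$tail_pid" 2>/dev/null
            tail -n 0 -f "$logdir/$newfile" &   # start at the end: no reprocessing
            tail_pid=$!
            ;;
    esac
done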
Try tailswitch. I created this script to tail log files that are rotated daily and have YYYY-MM-DD in their names. To use this script, you just say:
% tailswitch '*.log'
The quoting prevents the shell from expanding the glob pattern. The script re-evaluates the glob from time to time and switches to a newer file based on its name.
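If tailswitch itself cannot be dropped onto the box, a rough shell sketch of the same idea, assuming the newest log always sorts last by name (appA_*.log is only an illustration), could look like:
#!/usr/bin/env bash
# Sketch only: poll the glob, and whenever a newer-named file shows up,
# move the tail over to it.
shopt -s nullglob
current='' tail_pid=''

while true; do
    files=( appA_*.log )                       # glob expansion comes back name-sorted
    if [ "${#files[@]}" -gt 0 ]; then
        newest=${files[${#files[@]}-1]}        # last element = newest by name
        if [ "$newest" != "$current" ]; then
            [ -n "$tail_pid" ] && kill "$tail_pid" 2>/dev/null
            current=$newest
            tail -n 0 -f "$current" &          # start at the end: no reprocessing
            tail_pid=$!
        fi
    fi
    sleep 5
done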

How can I run two bash scripts simultaneously and without repetition of the same action?

I'm trying to write a script that automatically runs a data analysis program. The data analysis takes a file, analyzes it, and puts all the outputs into a folder. The program can be run on two terminals simultaneously (each analyzing a different subject file).
I wrote a script that can do all the inputs automatically. However, I can only get one instance of my script to run correctly: if I run the script in two terminals simultaneously, it will analyze the same subject twice (useless).
Currently, my script looks like:
for name in `ls [file_directory]`
do
[Data analysis commands]
done
If you run this in two terminals, both will start from the top of the directory containing all the data files. This is the problem, so I tried to add checks for duplicates, but they weren't very effective.
I tried a name comparison with the if command (didn't work because all the output files except one had unique names, so it would check the first output folder at the top of the directory and say the name was different, even though an output folder further down had the same name). It looked something like:
for name in `ls <file_directory>`
do
    for output in `ls <output directory>`
    do
        if [ "$name" == "$output" ]
        then
            echo "This file has already been analyzed."
        else
            <Data analysis commands>
        fi
    done
done
I thought this was the right method, but apparently not: I would need to check against all the names before making a decision, rather than one by one as that loop does.
Then I tried moving completed data files with the mv command (didn't work because "name" in the for statement had already stored all the file names, so it went down the list regardless of what was in the folder at that moment). I remember reading something about how shell scripts do not do things in "real time", so it makes sense that this didn't work.
My thought was to find some modification to that if statement so it does all the name checks before a decision is made (how?).
Also, are there any other commands I might be missing that I could try?
One pattern I often use is the split command.
ls <file_directory> > file_list
split -d -l 10 file_list file_list_part
This will create files like file_list_part00 through file_list_partnn.
You can then feed these list files to your script.
for file_part in file_list_part*
do
    for file_name in `cat "$file_part"`
    do
        data_analysis_command "$file_name"
    done
done
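To actually run two analyses at once without repetition, a hypothetical wrapper (here called analyse_list.sh, with data_analysis_command as the placeholder from above) could process only the list it is handed:
#!/usr/bin/env bash
# analyse_list.sh (hypothetical): process only the list file given as $1,
# so each terminal gets a different file_list_part* and no subject is
# analyzed twice.
list=$1
while IFS= read -r file_name; do
    data_analysis_command "$file_name"
done < "$list"
Then run ./analyse_list.sh file_list_part00 in one terminal and ./analyse_list.sh file_list_part01 in the other.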
Never use "ls" in a "for" (http://mywiki.wooledge.org/ParsingLs)
I think you should use a fifo (see mkfifo)
As a follow-on from the comments, you can install GNU Parallel with homebrew:
brew install parallel
Then your command becomes:
parallel analyse ::: *.dat
and it will process all your files in parallel using as many CPU cores as you have in your Mac. You can also add in:
parallel --dry-run analyse ::: *.dat
to get it to show you the commands it would run without actually running anything.
You can also add in --eta (Estimated Time of Arrival) for an estimate of when the jobs will be done, and -j 8 if you want to run, say, 8 jobs at a time. Of course, if you specifically want the 2 jobs at a time you asked for, use -j 2.
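For instance, combining the options mentioned above (analyse and *.dat are the same placeholders used in this answer):
parallel --eta -j 2 analyse ::: *.dat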
You can also have GNU Parallel simply distribute jobs and data to any other machines you may have available via ssh access.

Head on svn log doesn't always stop

Consider this:
svn log -r HEAD:1 --search $pattern | head -4
Sometimes this command finds the necessary number of lines (e.g. 4) and stops. But sometimes it just keeps searching (i.e. hangs) even after having found the necessary number of lines.
I don't know what it depends on (whether it keeps searching or stops). I would like to know the reason, and how to modify my command so it always stops right after finding the necessary number of lines (I don't want svn log to search the entire history, as that might take forever).
Plain svn log will always keep walking the revision history from HEAD down to 0 looking for matches to your query unless you kill the process (assuming you don't use the --limit switch or specify some subtree like /branches/myfeature). head exits after printing its four lines, but svn only dies from the resulting broken pipe the next time it tries to write; while it is still searching without printing anything, it keeps running. Adjust your script to kill the process once it has shown the required number of log messages.
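A rough sketch of that suggestion (the temporary file, the count of 4, and $pattern are placeholders; this is not a built-in svn feature):
#!/usr/bin/env bash
# Sketch only: run svn log in the background, watch its output grow,
# and kill it as soon as enough lines have appeared.
out=$(mktemp)
svn log -r HEAD:1 --search "$pattern" > "$out" &
svn_pid=$!

while kill -0 "$svn_pid" 2>/dev/null; do
    if [ "$(wc -l < "$out")" -ge 4 ]; then
        kill "$svn_pid"
        break
    fi
    sleep 1
done

head -4 "$out"
rm -f "$out"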

Monitor console output

I have a program, let's call it 'foo'.
Foo works fine for a random amount of time during which it announces its progress on the console.
But after some time it stops giving any output. At this point I have to manually close the program (Ctrl+C) and start it again.
I would like to know if there is a way to monitor the console output of a program and, in case there is no output for a certain duration, take some action.
The platform is Linux.
I've found this on the Internet about a command called watch.
Name
watch - execute a program periodically, showing output fullscreen
Synopsis
watch [-dhvt] [-n <seconds>] [--differences[=cumulative]] [--help] [--interval=<seconds>] [--no-title] [--version] <command>
Description
watch runs command repeatedly, displaying its output (the first screenful). This allows you to watch the program output change over time. By default, the program is run every 2 seconds; use -n or --interval to specify a different interval.
The -d or --differences flag will highlight the differences between successive updates. The --cumulative option makes highlighting "sticky", presenting a running display of all positions that have ever changed. [...]
watch will run until interrupted.
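watch re-runs a command at intervals, which is handy for polling; if the goal is instead to notice that a long-running foo has gone silent and then restart it, one possible sketch, not taken from the man page above, uses bash's read -t timeout (foo and the 30-second window are placeholders, and a single instance of foo is assumed):
#!/usr/bin/env bash
# Sketch only: start foo writing into a named pipe, read the pipe with a
# timeout, and restart foo whenever nothing has arrived for 30 seconds.
pipe=$(mktemp -u)
mkfifo "$pipe"
trap 'rm -f "$pipe"' EXIT

while true; do
    foo > "$pipe" 2>&1 &
    foo_pid=$!
    while IFS= read -r -t 30 line; do
        printf '%s\n' "$line"            # pass the output through as usual
    done < "$pipe"
    # read -t returned non-zero: either 30 silent seconds or foo exited.
    echo "foo went silent, restarting..." >&2
    kill "$foo_pid" 2>/dev/null
    wait "$foo_pid" 2>/dev/null
done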
