Monitoring the System Log File via Bash Script - linux

I am currently using the following to read the system log:
journalctl -u <service name> | tail -n1
However, I need to monitor the system log live from a bash script, reacting to changes as they come in, instead of just looping over the log file.
What is the best way to do this? I did some research into where the journalctl command reads from, and it seems that the system logs themselves are unreadable (or at least they were when I attempted to read them with cat).
Any suggestions would be much appreciated!

The journalctl tool has a -f flag which makes it follow the journal, printing new entries as soon as they are appended. Use it like this:
$ journalctl -u <service name> -f
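If you need to react to each new entry from within a bash script, one minimal sketch (the service name is a placeholder, and the echo stands in for whatever action you actually want to take) is to pipe the followed journal into a read loop:

journalctl -f -n 0 -u <service name> | while read -r line; do
    # this runs once for every new journal entry as it arrives
    echo "new entry: $line"
done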

Related

Showing progress of scp during cloud-init process

When I spin up a new server I use a cloud-init script to automate the process. I follow the progress of this process with:
$ tail -f /var/log/cloud-init-output.log
When it comes to fetching a backup file using scp and piping it through tar -zx, I tried the following in the cloud-init script:
$ scp myuser@123.45.6.789:path/to/file.tar.gz /dev/stdout | tar -zx
Whilst this command works, the handy progress indications that scp outputs do not appear in the tail -f output... i.e. I do not see progress like:
file.tar.gz 57% 52MB 25.2MB/s 00:02
I have also tried bash process substitution like so:
$ scp myuser@123.45.6.789:path/to/file.tar.gz >(tar -zx)
And still the progress indications do not appear in the tail -f output.
How do I preserve the progress indications in the tail -f output? It would be really handy to see this progress, particularly when fetching the larger backup files.
Note that when I run both of the above in a bash script directly (with set -x at the top), progress does show for the 'bash process substitution' variant but not the 'piping' variant... but when tailing the cloud-init logs, progress shows for neither variant.
It looks like cloud-init just sends both stdout and stderr from all cloud-init stages to /var/log/cloud-init-output.log (see here).
So we can re-create the cloud-init process (for the purposes of this question) with the following call:
$ scp myuser@123.45.6.789:path/to/file.tar.gz >(tar -zx) >output.log 2>&1
and we then follow this log file separately with:
$ tail -f output.log
Strange thing is... no progress status ever appears in this output.log file. And yet progress status is definitely being sent... the only change in the following is to direct to /dev/stdout rather than output.log:
$ scp myuser@123.45.6.789:path/to/file.tar.gz >(tar -zx) >/dev/stdout 2>&1
file.tar.gz 57% 52MB 25.2MB/s 00:02
So why do we see progress status on /dev/stdout but not in output.log when we direct stdout to output.log?
Like some other tools, scp checks whether its output is going to a TTY (using isatty) and disables the progress meter when it is not (there are similar cases elsewhere, e.g. ls --color=auto emits color codes only when its output goes to a terminal).
You can trick it into thinking it is writing to a TTY by running it under script (as shown in this answer), or under any other tool (e.g. expect) that runs a program with its output connected to a PTY.
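As a sketch only (it assumes the util-linux script command and bash are available in the cloud-init environment; the host, user and paths are the ones from the question), wrapping the transfer in script gives scp a pseudo-terminal, so the progress meter ends up in whatever captures script's stdout, i.e. the cloud-init log:

$ script -q -c "bash -c 'scp myuser@123.45.6.789:path/to/file.tar.gz >(tar -zx)'" /dev/null

Here /dev/null is the typescript file (we do not need a second copy of the session), -q suppresses script's start/done messages, and the inner bash -c is needed because script runs the command through /bin/sh (or $SHELL), which may not support process substitution.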

How to see the output of a Linux service?

A script is started automatically as a service at system boot, for example: myscript.service
Question: how can I watch the output of my script in real time, just as if I had started the script from a terminal and could see its output there?
service myscript status only prints a few lines and doesn't update the output.
journalctl -f -u myscript
This assumes your service is managed by systemd, so its output goes to the journal.
Alternatively, redirect the logs to a file and follow it with tail -f, for example:
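A minimal sketch of the redirect-to-file route (the log path and the exec line are placeholders, assuming you can edit the script that the service starts):

# inside the script started by myscript.service, send all further output to a log file
exec >> /var/log/myscript.log 2>&1

# then, from any terminal, follow the output live
tail -f /var/log/myscript.log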

Can we save the execution log when we run a command using PuTTY/Plink

I am using Plink to run a command on a remote machine. In order to fully automate the process I need to save the execution log somewhere. I am using a bat file:
C:\Ptty\plink.exe root@<IP> -pw <password> -m C:\Ptty\LaunchFile.txt
The file C:\Ptty\LaunchFile.txt contains the command that I want to run:
./Launch.sh jobName=<job name> restart.mode=false
Is there a way to save the execution log so that I can monitor it later... ?
Plink is a console application; actually, that's probably its only purpose. As such, its output can be redirected to a file just like that of any other command-line tool.
The following example redirects both standard output and error output to a file output.log:
plink.exe -m script.txt username@example.com > output.log 2>&1
See also Redirect Windows cmd stdout and stderr to a single file.
This is one of the ways I log everything when I use putty.exe on Windows.

Cron / wget jobs intermittently not running - not getting into access log

I've a number of accounts running cron-started php jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to be running just fine.
Lately, though, on a number of accounts it has started playing up - but only on this one server. Once or twice a day the php file is not run.
The access log is missing the relevant entry, while the cron log shows that the job was run.
We've added a bit to the command to log things out (-o /tmp/logfile) but it shows nothing.
I'm at a loss, really. I'm looking for ideas what can be wrong, or how to sidestep this issue as it has started taking up way too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!
Try this command
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output. With -d you turn on debug output (maybe -v for verbose output is already enough). With -a you append logging messages to /tmp/logfile instead of always creating a new file.
You can also use curl:
curl http://some.site.com/cron.php
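For completeness, a hedged sketch of what the crontab entries could look like with logging enabled (the URL and log path are taken from the question; the schedule and the curl flags are just an example):

# wget variant: page body discarded, wget's debug log appended to /tmp/logfile
0 * * * * wget -d -a /tmp/logfile -O - http://some.site.com/cron.php > /dev/null

# curl variant: body discarded, any errors appended to the same log
0 * * * * curl -sS -o /dev/null http://some.site.com/cron.php >> /tmp/logfile 2>&1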

Capture nethogs output in log file

I want to check the network bandwidth used by my process.
For this I found that the nethogs tool is useful. Using this tool I can see which process is eating up network bandwidth and how that process behaves.
But how do I capture the data from nethogs for my process and store it in a log file?
You can run nethogs in the background in tracemode and write its output to a file like this:
sudo nethogs -t eth1 &> /var/tmp/nethogs.log &
Download and build the nethogs-parser as described here.
Then after you have accumulated enough data you can run the parser to see the results:
./hogs -type=pretty /var/tmp/nethogs.log
Make sure to kill the running nethogs process when you are done collecting data.
More info here on automating the task.
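Putting the pieces together, a minimal sketch (the interface, duration, and process name are placeholders; it assumes passwordless sudo or running as root) to collect a window of data and then stop nethogs:

#!/usr/bin/env bash
# capture nethogs tracemode output for a fixed window, then stop it
LOG=/var/tmp/nethogs.log
sudo nethogs -t eth1 > "$LOG" 2>&1 &
NETHOGS_PID=$!
sleep 300                     # collect five minutes of samples
sudo kill "$NETHOGS_PID"
# pull out the lines for the process of interest, e.g.:
grep myprocess "$LOG"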
I don't know when these options were implemented, but you can use nethogs -t or nethogs -b; the PID and user are strangely appended to the end of the program's command string, but that is easy enough to parse.
I think you need to use the latest CVS version, 0.8.1-SNAPSHOT.
You can use this command to capture output:
nethogs -d 5 | sed 's/[^[:print:][:cntrl:]]//g' > output.txt
The right nethogs command is:
nethogs -d 1 eth0 > output.txt
You need to specify the network interface; otherwise the default interface eth0 will be used. Sometimes nethogs might not show the proper output because of the network interface, so it is always better to provide the interface explicitly and generate some traffic during the experiment. You can write the output to a file by appending > output.txt.
The -d argument specifies how frequently the output is refreshed. Here I used 1, which means the output is refreshed once per second.
Hope this might be useful.
