How to cut the log file? - node.js

I'm using pm2 to write a log file, and it is very big (about 1.2 GB and still growing).
How can I split a big log file into multiple smaller log files?
Does pm2 have any built-in support for rotating the log file automatically?

In general, you do not have to worry about whether pm2 can rotate log files, because on a Linux-based system you can do that with the logrotate utility.
More details can be found at the following:
https://www.digitalocean.com/community/tutorials/how-to-manage-log-files-with-logrotate-on-ubuntu-12-10
http://www.z-car.com/blog/programming/how-to-rotate-logs-using-pm2-process-manager-for-node-js
https://github.com/Unitech/pm2/issues/114
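As a rough sketch, a logrotate configuration for pm2's logs might look like the following. The log path, size limit, and retention count are assumptions (pm2 normally writes its logs under ~/.pm2/logs, so adjust the glob to your setup); copytruncate matters here because pm2 keeps the log file open:

# /etc/logrotate.d/pm2 -- a minimal sketch; paths and limits are assumptions
/home/youruser/.pm2/logs/*.log {
    # rotate once a file exceeds 100 MB, keep 10 gzipped generations
    size 100M
    rotate 10
    compress
    missingok
    notifempty
    # truncate in place so pm2 can keep its open file handle
    copytruncate
}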

As an example, this truncates the log in place, keeping only its last 1 KB:
var fs = require('fs')

var file = fs.readFileSync('logfile.log')
if (file.length > 1024) { // 1 KB
  // overwrite the log with only its last 1024 bytes
  fs.writeFileSync('logfile.log', file.slice(-1024))
}
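Note that fs.readFileSync loads the whole file into memory, which is painful for a 1.2 GB log. If you just want to cut the existing file into smaller pieces once, the standard split utility can do it from the shell (a sketch; the file name and chunk size are placeholders):

split -b 100M logfile.log logfile.part.

This writes logfile.part.aa, logfile.part.ab, and so on, each at most 100 MB.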

Related

Adapt command to create a CSV file from storage content, including date (time) and file size

According to the thread
Linux: fast creating of formatted output file (csv) from find command
there is a suggested bash command involving awk (which I don't understand):
find /mnt/sda2/ | awk 'BEGIN{FS=OFS="/"}!/.cache/ {$2=$3=""; new=sprintf("%s",$0);gsub(/^\/\/\//,"",new); printf "05;%s;/%s\n",$NF,new }' > $p1"Seagate-4TB-S2-BTRFS-1TB-Dateien-Verzeichnisse.csv"
With this command, I am able to create a CSV file containing "05;file name;full path and file name" for the directories and files of my device mounted on /mnt/sda2. Thanks again to tink.
How must I adapt the above command to also get the date (and time) and the file size?
Thank you in advance,
-Linuxfluesterer
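One way to get both, assuming GNU find: its -printf action can emit the modification time and the file size directly, which makes the awk post-processing unnecessary. In the sketch below, %f is the bare file name, %P is the path with the starting point stripped (so /%P mimics the original output shape), %TY-%Tm-%Td %TH:%TM is the modification date and time, and %s is the size in bytes; the field order and the .cache filter are assumptions based on the original command:

cd /mnt/sda2 && find . ! -path '*/.cache/*' \
    -printf '05;%f;/%P;%TY-%Tm-%Td %TH:%TM;%s\n' \
    > "$p1"Seagate-4TB-S2-BTRFS-1TB-Dateien-Verzeichnisse.csv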

How do I use Nagios to monitor a log file that generates a random ID

This is the log file that I want to monitor:
/test/James-2018-11-16_15215125111115-16.15.41.111-appserver0.log
I want Nagios to read this log file so I can monitor it for a specific string.
The issue is with 15215125111115: this is a random ID that gets generated.
Here is my script, where Nagios checks for the log file path.
Variables:
HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
..
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
--tag='failorder' --logfile=/test/james-${DATE}_-${HOSTNAMEIP}-appserver0.log
....
I am getting the following output in Nagios:
could not find logfile /test/James-2018-11-16_-16.15.41.111-appserver0.log
The number 15215125111115 is always generated randomly, and I don't know how to get Nagios to match it. Is there a way to add a variable for this or something? I tried adding an asterisk "*", but that didn't work.
Any ideas would be much appreciated.
Try check_logfiles' rotating::uniform mode:
--tag failorder --type rotating::uniform --logfile /test/dummy \
--rotation "james-${DATE}_\d+-${HOSTNAMEIP}-appserver0.log"
If you add a -v you can see what happens inside. The type rotating::uniform tells check_logfiles that the rotation scheme makes no difference between the current log and the rotated archives as far as the filename is concerned. (You frequently find something like xyz..log.) What check_logfiles does is look into the directory where the logfiles are supposed to be. From /test/dummy it only uses the directory part. It then takes all the files inside /test and compares their names with the --rotation argument. The files that match are sorted by modification time, so check_logfiles knows which of the files in question was updated most recently, and the newest one is considered to be the current logfile. Inside this file, check_logfiles searches for the criticalpattern.
Gerhard
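Combined with the variables from the question, a complete call might look like the sketch below; --criticalpattern and its placeholder value stand in for whatever string you actually alert on:

HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
    --tag failorder \
    --type rotating::uniform \
    --logfile /test/dummy \
    --rotation "james-${DATE}_\d+-${HOSTNAMEIP}-appserver0.log" \
    --criticalpattern 'YOUR_ERROR_STRING')

The \d+ in the --rotation pattern is what absorbs the randomly generated numeric ID.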

How to filter a huge log file so it contains just the useful messages

I have a huge log file. Every time I open the file, the system becomes unresponsive. I only need to check the log messages that contain certain strings.
Is there a simple way to do this?
For example, say your huge log file is called testlogfile.txt and you only need to check the log messages that contain "TRACE". Try this command in a Linux terminal, in the directory where the huge log is:
$ grep TRACE testlogfile.txt > newlogfile.txt
You can then open newlogfile.txt, which contains only the lines with "TRACE".
If you would like to exclude the lines with "TRACE" instead, try the -v option:
$ grep -v TRACE testlogfile.txt > newlogfile.txt
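If you need lines that contain any of several strings, you can give grep an alternation with -E (the extra pattern names here are just examples):

$ grep -E 'TRACE|ERROR|WARN' testlogfile.txt > newlogfile.txt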

Regarding lz4mt compression and a Linux buffering issue

I am using lz4mt, the multi-threaded version of lz4. In my workflow I send thousands of large files (about 620 MB each) from a client to a server. When a file arrives on the server, my rule triggers, compresses the file with lz4mt, and then removes the uncompressed file. The problem is that sometimes, when I remove the uncompressed file, the compressed file does not yet have the right size, because lz4mt returns before its output has been fully written to disk.
So, is there any way lz4mt can remove the uncompressed file itself after compressing, as bzip2 does?
Input: bzip2 uncompressed_file
Output: compressed file only
whereas
Input: lz4mt uncompressed_file
Output: (uncompressed + compressed) file
The sync command in the script below also does not seem to work properly.
The script that executes when my rule triggers is:
script.sh
/bin/lz4mt uncompressed_file output_file
/bin/sync
/bin/rm uncompressed_file
Please tell me how to solve the above issue.
Thanks a lot.
Author here. You could try the following methods:
Concatenate the commands with && or ;.
Add the lz4mt command line options -q (suppress prompt) and -f (force overwrite), as shown in the sketch below.
Try it with the original lz4.
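For example, a revised script.sh applying the first two suggestions might look like this sketch; the && operators ensure that sync and rm only run if the previous step succeeded:

#!/bin/sh
/bin/lz4mt -q -f uncompressed_file output_file && \
/bin/sync && \
/bin/rm uncompressed_file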

Linux - export output from apachetop to a file

Is it possible to export the output from apachetop to a file? Something like "apachetop > file", but because apachetop runs "forever", this command also runs forever. I just need to capture the program's current output and handle it in my GTK# application.
Every answer will be much appreciated.
Matej.
This might work:
{ apachetop > file 2>&1 & sleep 1; kill $! ; }
but no guarantees :)
Another way on Linux is to find the /dev/vcsN device that is in use while the program is running and read from that file directly. It contains a copy of the screen data for a given VT; I'm not sure whether there is a comparable device for a pty.
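For instance, if apachetop were running on the first virtual console, a sketch of that approach (root privileges usually required, and only for real VTs, not terminal emulators) would be:

sudo cat /dev/vcs1 > screen-snapshot.txt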
Well, indirectly apachetop is using the access.log file to get its data.
Look at
/var/log/apache2/access.log
You'll simply have to parse that file to get the info you're looking for!
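As a small sketch of that parsing, assuming the default Apache common/combined log format in which the request path is the seventh whitespace-separated field, this lists the most requested URLs:

awk '{ print $7 }' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head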
