Tail command to display matched pattern - Linux

I have a log file which logs network activity. I want to view the log file, but only the lines that match a pattern of my choice. The log file is in this format: Nov 7 12:00:00 ......... How can I view the content for a specific date or time? For example, if I want to see only 3:00 to 5:00 on Nov 7, how can I use the tail command to do that?

There are multiple ways to do what you want. One of them, using grep, is:
grep '^Nov 7 0[3-5]:' network.log | less
Two caveats: 0[3-5] matches hours 03 through 05, i.e. everything up to 5:59:59, and many syslogs pad single-digit days with an extra space (Nov  7 rather than Nov 7), in which case the pattern needs two spaces.
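If you want the window to stop at exactly 5:00 rather than 5:59, an awk filter can compare the time field directly (a sketch, assuming the file is named network.log as above; awk's default field splitting also copes with the extra space syslog puts before single-digit days):

# keep only Nov 7 lines whose HH:MM:SS field falls in [03:00:00, 05:00:00]
awk '$1 == "Nov" && $2 == 7 && $3 >= "03:00:00" && $3 <= "05:00:00"' network.log | less

The comparison works lexicographically because HH:MM:SS is fixed-width.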

Related

How to push complex legacy logs into Logstash?

I'd like to use ELK to analyze and visualize our GxP logs, created by our stone-old LIMS system. At least the system runs on SLES, but the whole logging structure is something of a mess. I'll try to give you an impression:
Main_Dir
  Log Dir
    (large number of subdirs with a lot of files, some of which may be of interest later)
  Archive Dir
    [some dirs which I'm not interested in]
    gpYYMM            <-- subdirs created automatically each month: YY = year, MM = month
      gpDD.log        <-- log file created automatically each day
    [more dirs which I'm not interested in]
Important: each medical examination that I need to track is completely logged in the gpDD.log file for the date of the order entry. The duration of a complete examination varies between minutes (if no material is available), several hours or days (e.g. 48 h for a Covid-19 examination), or even several weeks for a microbiological sample. Example: all information about a Covid-19 sample that reached us on December 30th is logged in ../gp2012/gp30.log, even if the examination was performed on January 4th and the validation/creation of the report was finished on January 5th.
Could you please give me some guidance on the right beat to use (I guess either logbeat or filebeat) and on how to implement the log transfer?
Logstash file input:

input {
  file {
    path => "/Main Dir/Archive Dir/gp*/gp*.log"
  }
}

Filebeat input:

- type: log
  paths:
    - /Main Dir/Archive Dir/gp*/gp*.log
In both cases the path is possible. However, if you need further processing of the lines, I would suggest using at least Logstash as a passthrough, with the beats input if you do not want to install Logstash on the source machine itself (which is understandable).
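If you go that route, a minimal Filebeat-to-Logstash passthrough could look like the following sketch (5044 is just the conventional beats port; logstash-host and the Elasticsearch address are placeholders to adapt to your environment). Filebeat ships the lines:

output.logstash:
  hosts: ["logstash-host:5044"]

and Logstash receives them via the beats input, applies any further processing, and forwards them:

input {
  beats {
    port => 5044        # must match the port Filebeat ships to
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}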

NLog log-rotation/archiving inconsistent behavior

I have a project that uses NLog to create and maintain log files, including log rotation/archiving of old log files. However, I've seen that NLog's archiving settings are not always respected, especially the ArchiveEvery configuration option. Based on this Stack Overflow answer, I assume the library checks the file's last write time to decide whether to archive the current file and start a new one, but only does so when a new log message is passed to the library.
In my project I have the library configured to archive every minute. This should be fine, as my project logs messages every few seconds, so I expect to see an archived file every minute as the messages keep coming. However, I see inconsistent behavior, sometimes with multiple minutes between subsequent archived files. For example, I currently have the following files on my disk:
Filename | Last write time
----------------------+------------------
Log.01-06-2017.2.csv | 1-6-2017 10:42
Log.01-06-2017.3.csv | 1-6-2017 10:44
Log.01-06-2017.4.csv | 1-6-2017 10:46
Log.01-06-2017.5.csv | 1-6-2017 10:47
Log.01-06-2017.6.csv | 1-6-2017 10:48
Log.01-06-2017.7.csv | 1-6-2017 10:52
Log.01-06-2017.8.csv | 1-6-2017 11:01
Log.01-06-2017.9.csv | 1-6-2017 11:04
Log.01-06-2017.20.csv | 1-6-2017 11:43
Log.01-06-2017.csv | 1-6-2017 11:46
As you can see, the archived files are not created every minute. As for my NLog config at the moment:
fileTarget.ArchiveNumbering = ArchiveNumberingMode.DateAndSequence;
fileTarget.ArchiveEvery = FileArchivePeriod.Minute;
fileTarget.KeepFileOpen = true;
fileTarget.AutoFlush = true;
fileTarget.ArchiveDateFormat = "dd-MM-yyyy";
fileTarget.ArchiveOldFileOnStartup = true;
I am struggling to get this to work "properly". I put this in quotes as I don't have much experience with NLog and don't really know how the library is supposed to behave. I had hoped to find more information on the NLog wiki over at GitHub, but I couldn't find what I needed there.
Edit
fileTarget.FileName is composed of a base folder (storage.Folder.FullName = "C:\ProgramData\"), a subfolder (LogFolder = "AuditLog"), and the file name (LogFileName = "Log.csv"):
fileTarget.FileName = Path.Combine(storage.Folder.FullName, Path.Combine(LogFolder, LogFileName));
fileTarget.ArchiveFileName is not set, so I assume it takes its default value.
Could it be that specifying the complete path for FileName is screwing things up? If so, is there a different way to specify the folder to put the log files in?
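One thing that might help narrow this down (a sketch, not a verified fix; the "archive" subfolder and the file name pattern are illustrative, and {#} is NLog's placeholder for the archive date/sequence) is to set ArchiveFileName explicitly instead of relying on the value derived from FileName:

// illustrative only: pin the archive location and naming explicitly,
// reusing the base folder and subfolder names from the question
fileTarget.ArchiveFileName = Path.Combine(
    storage.Folder.FullName, LogFolder, "archive", "Log.{#}.csv");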

In Bash, how would you only read lines in a log past a certain timestamp?

So right now I'm trying to do the following:
grep "string" logfile.txt
which is going fine, but there's a lot of "string" in logfile.txt; I really only want to see the last hour's worth. In pseudo-code I want to do...
grep "string" logfile.txt OnlyShowThoseInTheLastHour
Is there any way to easily accomplish this in bash? In the logfile the lines look like this:
13:27:50 string morestuff morestuff morestuff
Edit: sorry, I forgot to mention it, but seeing logs from similar hours on past days is not an issue, as these logs are refreshed/archived daily.
This should do it:
awk 'BEGIN { tm = strftime("%H:%M:%S", systime()-3600) } /string/ && $1 >= tm' logfile.txt
Replace string with the pattern you're interested in. Note that strftime() and systime() are GNU awk (gawk) extensions.
It works by first building a string holding the time from one hour ago in HH:MM:SS format, and then selecting the lines that match string and whose first field (the timestamp) is lexicographically greater than or equal to the string just built.
Note that this approach has its limitations: for example, if you run it at 00:30, log entries from 23:30 through 23:59 will not match. In general, running the command any time between 00:00 and 00:59 may omit entries from 23:00 through 23:59. However, this shouldn't be an issue for you, since you mentioned (in the comments) that the logs are archived and start fresh every midnight.
Also, leap seconds are not dealt with, but that is probably irrelevant unless you need 100% precise results; and again, since the logs start afresh at midnight, in your specific case this is not a problem at all.
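If you'd rather compute the cut-off in the shell, the same selection can be written with date(1) (a sketch assuming GNU date; pattern and file name as above):

tm=$(date -d '1 hour ago' +%H:%M:%S)   # HH:MM:SS one hour ago
awk -v tm="$tm" '/string/ && $1 >= tm' logfile.txt

A side benefit of this variant is that it no longer needs strftime(), so it also works with non-GNU awk.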

Bash Script Efficient For Loop (Different Source File)

First of all, I'm a beginner in bash scripting. I usually code in Java, but this particular task requires me to write some bash scripts on Linux. FYI, I've already made a working script, but I think it's not efficient enough given the large files I'm dealing with.
The problem is simple: I have two logs that I need to compare, making some corrections to one of them. I'll call them log A and log B. The two logs use different formats; here is an example:
01/04/2015 06:48:59|9691427842139609|601113090635|PR|30.00|8.3|1.7|   <- log A
17978712 2015-04-01 06:48:44 601113090635 SUCCESS DealerERecharge.    <- log B
17978714 2015-04-01 06:48:49 601113090635 SUCCESS DealerERecharge.    <- log B
As you can see, there is a gap in the timestamps. The actual log B line that matches log A is the one with ID 17978714, because its time is the closest. The highest time gap I've seen is one minute. I can't use simple range logic, because if more than one line in log B falls within the one-minute range, then all of those lines will show up in my regenerated log.
The script I made contains a for loop which iterates over the timestamps of log A until it hits something in log B (the first hit is the closest one).
Inside the for loop I have this line of code, which makes the loop slow:
LINEOUTPUT=$(less $File2 | grep "Validation 1" | grep "Validation 2" | grep "Timestamp From Log A")
I've read some examples using sed, but the problem is that I have two more validations to consider before matching on the timestamp.
The validations work as filters to narrow down the exact match between log A and log B.
Additional info: I tried benchmarking the script with a simple loop. One thing I've noticed is that even though I only use one pipeline in that line, each loop iteration is still slow.
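One easy win (a sketch; "Validation 1", "Validation 2" and $File2 are the placeholders from the question, while the temp file name and $timestamp are illustrative) is to apply the two static validations once, before the loop, and to drop the needless less, so each iteration only greps a much smaller pre-filtered file:

# run once, outside the loop: these two filters never change
grep "Validation 1" "$File2" | grep "Validation 2" > /tmp/logB.filtered

# inside the loop: search only the pre-filtered file
LINEOUTPUT=$(grep "$timestamp" /tmp/logB.filtered)

Here "$timestamp" stands for the log A timestamp being looked up in each iteration.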

Extract Date from text file

We have to identify and extract the date from the given samples:
Oct 4 07:44:45 cli[1290]: PAPI_Send: To: 7f000001:8372 Type:0x4 Timed out.
Oct 4 08:16:01 webui[1278]: USER:admin#192.168.100.205 COMMAND:<wlan ssid-profile "MFI-SSID" > -- command executed successfully
Oct 4 08:16:01 webui[1278]: USER:admin#192.168.100.205 COMMAND:<wlan ssid-profile "MFI-SSID" opmode opensystem > -- command executed successfully
The main problem here is that the date format varies: it may be "Oct 4 2004" or "Oct/04/2004", etc.
Parsing is the best way to handle such problems, so learn about parsing techniques and then apply them to your project. See also: Appropriate design pattern for an event log parser?
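For syslog-style lines like the samples above, a simple extraction can be done with grep (a sketch; the regex only covers the "Oct 4 07:44:45" form, so each additional date format would need its own alternative):

# print only the leading "Mon D HH:MM:SS" timestamp of each line
grep -oE '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2}' logfile.txt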
