How to filter remote syslog messages on Red Hat?

I'm using a unified log on a server running Red Hat 6, receiving log messages directed from other servers and managing them with rsyslog. Until now, /etc/rsyslog.conf has had this rule:
if $fromhost-ip startswith '172.20.' then /var/log/mylog.log
But I don't want to log messages that contain "kernel" and "dnat", so I want to filter those messages out by enhancing the rule.
How can I do that?

This looks like a question better suited for Unix & Linux. Having duly noted that this is not the right place, I'll go ahead and break the rules by answering it anyway.
Depending a bit on the version of Red Hat you're using, you can use rsyslogd's conditional filters or RainerScript in various ways to express a combination of several logical rules. On Red Hat 6 you could say something like this to accomplish what you want using a conditional filter:
if ( $fromhost-ip startswith '172.20.' and \
$syslogfacility-text != 'kern' ) then /var/log/mylog.log
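If you also need to drop the messages whose text contains "dnat" (and not just everything from the kern facility), a sketch in the same conditional-filter style would be the following; the substring test on $msg is an assumption about what you actually want to match:
if ( $fromhost-ip startswith '172.20.' and \
     $syslogfacility-text != 'kern' and \
     not ($msg contains 'dnat') ) then /var/log/mylog.log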
You can find more examples in the rsyslog v5 manual.

Related

I have a pcap with two MPLS headers. The match criteria for every field in both MPLS headers are similar. How do I differentiate? [closed]

I have a pcap with two MPLS headers. I observe that the match criteria for every field in both MPLS headers are similar. How do I differentiate between the two MPLS headers? Is it possible to achieve this via Wireshark or tshark? If it is possible via tshark, please share the Linux command.
For example, I am trying to filter using:
mpls.exp==7 && mpls.bottom == 0
but with the above match filter criteria, even those packets where mpls.exp==7 (in header 1) and mpls.bottom==0 (in header 2) are matched. Attaching a pcap snip for your reference.
(Screenshot: the match criteria matching exp from header 1 and bottom of stack from header 1.)
(Screenshot: the match criteria matching exp from header 2 and bottom of stack from header 1.)
TIA.
I tried to filter this using tshark on Linux. Still not able to get the desired result:
Expected result: only the first 8 packets should be matched
Observed result: 16 packets are matched
tshark command:
tshark -r capture2_11-17-2022_11-15-15.pcap -T fields -E header=y -e mpls.exp -e mpls.bottom mpls.bottom==0 and mpls.exp==7
(Screenshot: tshark output table.)
2nd EDIT: I thought of an alternative solution, which I'll now describe here. (Note that I would have provided this alternative solution, which involves programming in the form of a Lua script, as a separate answer, but it seems folks were a little trigger-happy in closing this question, so I have no choice but to supply it here. If the question is reopened, which I've voted to do, I can make this a separate answer.)
What you can do is create an MPLS Lua postdissector that adds new mpls_post.exp and mpls_post.bottom fields to an MPLS postdissector tree. You can then use those new fields in your filter to accomplish your goal. As an example, consider the following Lua postdissector:
-- MPLS postdissector: re-exposes the first label's EXP and bottom-of-stack
-- bits as new fields, so a display filter can match on the first header only.
local mpls_post = Proto("MPLSPost", "MPLS Postdissector")
local pf = {
    expbits = ProtoField.uint8("mpls_post.exp", "MPLS Experimental Bits", base.DEC),
    bottom  = ProtoField.uint8("mpls_post.bottom", "MPLS Bottom of Label Stack", base.DEC)
}
mpls_post.fields = pf

-- Extractors for the built-in MPLS fields (one FieldInfo per label in the packet).
local mpls_exp = Field.new("mpls.exp")
local mpls_bottom = Field.new("mpls.bottom")

function mpls_post.dissector(tvbuf, pinfo, tree)
    local mpls_exp_ex = {mpls_exp()}
    local mpls_bottom_ex = {mpls_bottom()}
    -- Nothing to do if the packet carries no MPLS labels.
    if mpls_exp_ex[1] == nil or mpls_bottom_ex[1] == nil then
        return
    end
    -- Add only the first label's values to the postdissector tree.
    local mpls_post_tree = tree:add(mpls_post)
    mpls_post_tree:add(pf.expbits, mpls_exp_ex[1].range, mpls_exp_ex[1].value)
    mpls_post_tree:add(pf.bottom, mpls_bottom_ex[1].range, mpls_bottom_ex[1].value)
end

register_postdissector(mpls_post)
If you save this to a file, e.g. mpls_post.lua, and place that file in your Wireshark Personal Lua Plugins directory, which you can find from "Help -> About Wireshark -> Folders" or from tshark -G folders, then [re]start Wireshark, you will be able to apply a filter such as the following:
mpls_post.exp==7 && mpls_post.bottom == 0
You can also use tshark to do the same, e.g.:
tshark -r capture2_11-17-2022_11-15-15.pcap -Y "mpls_post.exp==7 && mpls_post.bottom==0" -T fields -E header=y -e mpls_post.exp -e mpls_post.bottom
(NOTE: The tshark command, as written, will simply print out what you already know, namely 7 and 0, so presumably you want to print more than just that, but this is the idea.)
I think this is probably the best that can be done for now until the Wireshark MPLS dissector is modified so that layer operators work as expected for this protocol, but there are no guarantees that any changes to the MPLS dissector will ever be made in this regard.
EDIT: I'm sorry to say that the answer I provided doesn't actually work for MPLS. It doesn't work because the MPLS dissector is only called once: it loops through all labels as long as bottom of stack isn't true, but it doesn't call itself recursively, which is what would be needed here for the second label to be considered another layer. The layer syntax does work for other protocols, such as IP (in the case of tunneled traffic or ICMP error packets) and others, so it's a good thing to keep in mind, but unfortunately it won't be of much use for MPLS, at least not in the Wireshark MPLS dissector's current state. I suppose I'll leave the answer up [for now] in case the dissector is ever changed to allow the layer syntax to work as one might intuitively expect. And unfortunately, I can't think of an alternative solution to this problem at this time.
With Wireshark >= version 4.0, you can use the newly introduced syntax for matching fields from specific layers. So, rather than specifying mpls.exp==7 && mpls.bottom == 0 as the filter, which matches fields from any layer, use the following syntax instead, which will only match against fields from the first layer:
mpls.exp#1 == 7 && mpls.bottom#1 == 0
Refer to the Wireshark 4.0.0 Release Notes for more details about this new syntax as well as for other display filter changes, and/or to the wireshark-filter man page.
NOTE: You can also achieve this with tshark, although you can't [yet] selectively choose which field is displayed. For example:
tshark -r capture2_11-17-2022_11-15-15.pcap -Y "mpls.exp#1 == 7 && mpls.bottom#1 == 0" -T fields -E header=y -e mpls.exp -e mpls.bottom
To be clear, you can't [yet] specify -e mpls.exp#1 and -e mpls.bottom#1.

Automatic detection of types of logs in logstash

I am new to logstash, elasticsearch and kibana (ELK).
I know that I can create filters that parse specific logs and extract some fields from them. It looks like I have to configure a specific filter for each type of log. As I have around 20 different services, each writing around a hundred different types of logs, this looks too difficult to me.
By "type of log" I mean logs that follow a specific template, with parameters that change.
This is an example of some logs:
Log1: User Peter has logged in
Log2: User John has logged in
Log3: Message "hello" sent by Peter
Log4: Message "bye" sent by John
I want ELK to discover automatically that here we have two types of log:
Type1: User %1 has logged in
Type2: Message "%1" sent by %2
Is that possible? Is there any example of doing that? I don't want to write the template for each type of log manually; I want it to be discovered automatically.
Then it should also extract the parameters. This is what I would like to see in the index:
Log1: Type1, params: Peter
Log2: Type1, params: John
Log3: Type2, params: hello, Peter
Log4: Type2, params: bye, John
After that I would like ELK to scan my index again and discover that param %1 of Type1 is usually param %2 in Type2 (the user name). It should also discover that Log1 and Log3 are related (same user).
The last thing it should do is find unusual sequences of actions (logins without the corresponding logout, for example).
Is any of this possible without having to manually configure all types of logs? If not, can you point me to some example of this multipass indexing even if it involves manual configuration?
Logstash has no discovery like this, you'll have to do the language parsing yourself. It's tedious and repetitive, but it gets the job done. You have a few options here, depending on your ability to influence other areas:
If the format of those logs is changeable, consider pushing for an authentication-logging standard. That way you only need one pattern.
Consider a modular approach to generating your filter pipeline: Log1 patterns go in one module, Log2 in another. It makes maintenance easier (see the sketch below).
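As an illustration of that modular approach, a minimal grok filter for the two message templates from the question might look like the sketch below (the field names user and text are just suggestions):
filter {
  grok {
    # One pattern per known message template; the first one that matches wins.
    match => {
      "message" => [
        "User %{USERNAME:user} has logged in",
        'Message "%{DATA:text}" sent by %{USERNAME:user}'
      ]
    }
  }
}
Each service, or each family of templates, can then live in its own config file, which keeps the pipeline maintainable as the pattern count grows.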
You have my sympathy with this problem. I've had to integrate Logstash with the authentication-logging of many systems by now, and each one describes what they're doing somewhat differently, all based on the whim of the developer who wrote it (which may have happened 25 years ago in some cases).
For the products we develop, I can at least influence how the logging looks. Moving away from a natural-language grok format to something else, such as kv or even json, goes a long way towards simplifying the parsing problem for me. The trick is convincing people that, since we only look at the logs through Kibana anyway, why do we need:
User %{user} logged into application %{app} in zone %{zone}
When we can have
user="%{user}" app="%{app}" zone=%{zone}
Or even:
{ "user": %{user}, "app": %{app}, "zone": %{zone} }
Since that's what it'll be when Logstash is done with it anyway.
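To make that concrete: once the application emits the key=value form, the Logstash side can shrink to something like this sketch using the stock kv filter:
filter {
  kv {
    # One generic key=value splitter instead of one grok pattern per message template.
  }
}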

Is there a printk-style log parser?

systemd's journald supports kernel-style logging: a service can write messages to stderr starting with "<6>" and they'll be parsed as info, "<4>" as warning, and so on.
But while developing the service, it's launched outside of systemd. Are there any ready-to-use utilities to convert these numbers into readable colored strings? (It would be nice if that doesn't complicate the gdb workflow.)
I don't want to roll my own.
There is no tool to convert the output, but a simple sed run would do the magic.
As you said, journald strips the <x> token from the beginning of your log message and converts it to a log level. What I would do is check for some environment variable in the code. For example:
if (getenv("COLOR_OUTPUT") != NULL)   /* launched by hand: print a readable label */
    printf ("[ WARNING ] - Oh, snap\n");
else                                  /* running under systemd: let journald parse the prefix */
    printf ("<4> Oh, snap\n");

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is like this:
use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
use something like a module (one file per program), e.g. type1.module (regexp, log structure (some variables), sql (tables and functions))
fork or thread processes (I don't know which is better on Linux now) for different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables),
or is there some better way? I think some regexps would be the same, like date/time/ip/url. What to do with that? Or what have I missed?
Example: MTA exim4 mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua)
[127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32
CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test"
from <exim@exim.mydomain.org.ua> for test@domain.ua
The relevant fields above are already parsed and will be put into the sqldb.incoming table. Right now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for the Apache server (just timestamp, user IP and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one on our project. See MT::FileMgr and MT::FileMgr::* here.
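A rough sketch of what steps 2 and 3 could look like; every class name, pattern and path here is hypothetical, it only shows the shape of the design:
#!/usr/bin/perl
use strict;
use warnings;

# --- Step 2: a base parser class plus one subclass per log format ---
package LogParser::Base;
sub new {
    my ($class, %args) = @_;
    return bless { path => $args{path} }, $class;
}
sub parse_line { die "subclass must implement parse_line()" }

package LogParser::Apache;
our @ISA = ('LogParser::Base');
sub parse_line {
    my ($self, $line) = @_;
    # Very rough Apache access-log pattern: client IP, timestamp, request path.
    return unless $line =~ /^(\S+) \S+ \S+ \[([^\]]+)\] "\S+ (\S+)/;
    return { ip => $1, timestamp => $2, path => $3 };
}

# --- Step 3: a driver that reads the config file and dispatches ---
package main;
my %handlers = ( apache => 'LogParser::Apache' );

open my $cfg, '<', 'parsers.conf' or die "parsers.conf: $!";
while (my $row = <$cfg>) {
    chomp $row;
    next if $row =~ /^\s*(#|$)/;                     # skip comments and blank lines
    my ($app, $logpath, $format) = split /:/, $row;  # appname:logpath:logformatname
    my $class = $handlers{$format} or next;
    my $parser = $class->new(path => $logpath);
    # ...tail $logpath here and hand each line to $parser->parse_line($_),
    # then write the resulting hashref to the SQL database.
}
close $cfg;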
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
while (1) {
    ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"}." (".localtime(time).") ".$_->read;
        }
    }
}
(from the Perl File::Tail documentation)
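To make that snippet self-contained, the setup it assumes would be roughly the following; the paths and the timeout are only examples:
use File::Tail;

# One File::Tail object per log file you want to follow.
my @files = map { File::Tail->new(name => $_, maxinterval => 5) }
            ('/var/log/exim4/mainlog', '/var/log/apache2/access.log');
my $timeout = 1;   # seconds File::Tail::select() waits before reporting a timeout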

TCL (thermal control language) [printer protocol] references

I'm working on support for TCL (thermal control protocol; stupid name, it's a printer protocol from FutureLogic), but I cannot find resources about this protocol: what it is, how it works, nothing. On their site I only found this mention: http://www.futurelogic-inc.com/trademarks.aspx
Has anyone worked with it? Does anyone know where I can find the data sheet?
The protocol is documented on their website: http://www.futurelogic-inc.com/support/downloads/
If you are targeting the PSA66ST model, it supports a number of protocols: TCL, which is quite nice for delivering templated tickets, and line printing using the Epson ESC/P protocol.
This is all explained in the protocol document.
Oops, these links are incorrect and only correspond to marketing brochures. You will need to contact FutureLogic for the protocol documents, and probably also sign an NDA. Anyway, the information may guide you some more.
From what I can gather, it seems the FutureLogic thermal printers do not support general printing, but only printing using predefined templates stored in the printer's firmware. The basic command structure is a caret ^ followed by a one or two character command code, with arguments delimited using a pipe |, and the command ended with another caret ^. I've been able to reverse-engineer a few commands:
^S^ - Printer status
^Se^ - Extended printer status
^C|x|^ - Clear. Known arguments:
a - all
j - jam
^P|x|y0|...|yn|^ - Print fields y0 through yn using template x.
Data areas are defined in the firmware using a similar command format, command ^D|x|y0|...|yn|^, and templates are defined from data areas using command ^T|z|x0|...|xn|^.
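Purely to illustrate that syntax, a hypothetical exchange could look like the lines below; the field names, numbers and values are invented, only the ^...^ command structure comes from the list above:
^D|1|AMOUNT|DATE|^           define a set of data areas (hypothetical fields)
^T|9|1|^                     define template 9 from data area set 1
^P|9|$12.50|2024-01-01|^     print template 9, filling its two fields
^S^                          query printer status afterwards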
