Fail2ban custom regular expression - linux

I'm trying to add a custom regular-expression rule in order to block the log line below.
Mar 17 18:46:52 s21409974 named[1577]: client #0x7g246c107030 1.1.1.1#8523 (.): query (cache) './ANY/IN' denied
I tried online tools like this one (https://www.regextester.com), but the fail2ban-regex test command reports the line as missed.
Any suggestions about the rule, or about how to troubleshoot this better?
Thanks in advance.

Why are you trying to write a custom regex? This message already matches fail2ban's stock named-refused filter:
$ msg="Mar 17 18:46:52 s21409974 named[1577]: client #0x7g246c107030 1.1.1.1#8523 (.): query (cache) './ANY/IN' denied"
$ fail2ban-regex "$msg" named-refused
Running tests
=============
Use failregex filter file : named-refused
Use single line : Mar 17 18:46:52 s21409974 named[1577]: client #0x7...
Results
=======
Prefregex: 1 total
| ^(?:\s*\S+ (?:(?:\[\d+\])?:\s+\(?named(?:\(\S+\))?\)?:?|\(?named(?:\(\S+\))?\)?:?(?:\[\d+\])?:)\s+)?(?: error:)?\s*client(?: #\S*)? (?:\[?(?:(?:::f{4,6}:)?(?P<ip4>(?:\d{1,3}\.){3}\d{1,3})|(?P<ip6>(?:[0-9a-fA-F]{1,4}::?|::){1,7}(?:[0-9a-fA-F]{1,4}|(?<=:):)))\]?|(?P<dns>[\w\-.^_]*\w))#\S+(?: \([\S.]+\))?: (?P<content>.+)\s(?:denied|\(NOTAUTH\))\s*$
`-
Failregex: 1 total
|- #) [# of hits] regular expression
| 1) [1] ^(?:view (?:internal|external): )?query(?: \(cache\))?
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [1] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 1 lines, 0 ignored, 1 matched, 0 missed
[processed in 0.01 sec]
But if you need it, here you go (regex interpolated from fail2ban's pref- & failregex):
^\s*\S+\s+named\[\d+\]: client(?: #\S*)? <ADDR>#\S+(?: \([\S.]+\))?: (?:view (?:internal|external): )?query(?: \(cache\))? '[^']+' denied
Replace <ADDR> with <HOST> if your fail2ban version is older than 0.10.
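If you want to sanity-check the interpolated expression outside of fail2ban, you can do it with Python's re module (fail2ban itself is written in Python). Two caveats for this standalone sketch: fail2ban strips the timestamp before the failregex runs, so the test string below omits it, and <ADDR> is a fail2ban template, replaced here with a plain IPv4 pattern purely for illustration:
import re

# Log line with the timestamp already stripped (fail2ban's date template
# consumes "Mar 17 18:46:52" before the failregex is applied).
msg = ("s21409974 named[1577]: client #0x7g246c107030 "
       "1.1.1.1#8523 (.): query (cache) './ANY/IN' denied")

# <ADDR> replaced with a simple IPv4 pattern for this standalone test.
failregex = (r"^\s*\S+\s+named\[\d+\]: client(?: #\S*)? "
             r"(?:\d{1,3}\.){3}\d{1,3}#\S+(?: \([\S.]+\))?: "
             r"(?:view (?:internal|external): )?query(?: \(cache\))? '[^']+' denied")

print(bool(re.search(failregex, msg)))  # expect: True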

SED style Multi address in Python?

I have an app that parses multiple Cisco show tech files. These files contain the output of multiple router commands in a structured way; let me show you a snippet of a show tech output:
`show clock`
20:20:50.771 UTC Wed Sep 07 2022
Time source is NTP
`show callhome`
callhome disabled
Callhome Information:
<SNIPPET>
`show module`
Mod  Ports  Module-Type                          Model           Status
---  -----  -----------------------------------  --------------  ---------
1    52     16x10G + 32x10/25G + 4x100G Module   N9K-X96136YC-R  ok
2    52     16x10G + 32x10/25G + 4x100G Module   N9K-X96136YC-R  ok
3    52     16x10G + 32x10/25G + 4x100G Module   N9K-X96136YC-R  ok
4    52     16x10G + 32x10/25G + 4x100G Module   N9K-X96136YC-R  ok
21   0      Fabric Module                        N9K-C9504-FM-R  ok
22   0      Fabric Module                        N9K-C9504-FM-R  ok
23   0      Fabric Module                        N9K-C9504-FM-R  ok
<SNIPPET>
My app currently uses both sed and Python scripts to parse these files. I use sed to scan the show tech file looking for a specific command's output; once I find it, I stop sed. This way I don't need to read the whole file (these files can get very big). This is a snippet of my sed script:
sed -E -n '/`show running-config`|`show running`|`show running config`/{
p
:loop
n
p
/`show/q
b loop
}' $1/$file
As you can see, I am using a multi-address range in sed. My question specifically is: how can I achieve something similar in Python? I have tried multiple combinations of the DOTALL and MULTILINE flags, but I can't get the result I'm expecting. For example, I can get a match for the command I'm looking for, but the Python regex won't stop until the end of the file after the first match.
I am looking for something like this:
sed -n '/`show clock`/,/`show/p'
I would like the regex match to stop parsing the file and print the results immediately after seeing `show again. I hope that makes sense; thank you all for reading and for your help.
You can use nested loops.
import re

def process_file(filename):
    with open(filename) as f:
        for line in f:
            if re.search(r'`show running-config`|`show running`|`show running config`', line):
                print(line)
                for line1 in f:
                    print(line1)
                    if re.search(r'`show', line1):
                        return
The inner for loop will start from the next line after the one processed by the outer loop.
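For example, with the show tech output saved to a file (show_tech.txt is a hypothetical name):
process_file('show_tech.txt')  # prints the matched section, then stops reading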
You can also do it with a single loop using a flag variable.
import re

def process_file(filename):
    in_show = False
    with open(filename) as f:
        for line in f:
            if in_show:
                print(line)
                if re.search(r'`show', line):
                    return
            elif re.search(r'`show running-config`|`show running`|`show running config`', line):
                in_show = True
                print(line)
Checking in_show first ensures the opening `show running-config` line only sets the flag and doesn't immediately trigger the stop condition (it contains `show too).
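For completeness, the same range extraction can also be written with itertools, which keeps the stop-reading-early property (a sketch, assuming the same backtick-quoted command markers as above):
import itertools
import re

def extract_section(filename, start_re=r'`show clock`', stop_re=r'`show'):
    # Mimics sed -n '/start/,/stop/p': print from the start marker
    # up to and including the next line matching the stop marker.
    with open(filename) as f:
        # Skip everything before the start marker.
        section = itertools.dropwhile(lambda l: not re.search(start_re, l), f)
        try:
            print(next(section), end='')  # the start line itself
        except StopIteration:
            return  # start marker not found
        for line in section:
            print(line, end='')
            if re.search(stop_re, line):
                return  # stop without reading the rest of the file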

Behave: Writing a Scenario Outline with dynamic examples

Gherkin / Behave Examples
Gherkin syntax supports data-driven test automation using examples:
Feature: Scenario Outline (tutorial04)

  Scenario Outline: Use Blender with <thing>
    Given I put "<thing>" in a blender
    When I switch the blender on
    Then it should transform into "<other thing>"

    Examples: Amphibians
      | thing         | other thing |
      | Red Tree Frog | mush        |
      | apples        | apple juice |

    Examples: Consumer Electronics
      | thing        | other thing |
      | iPhone       | toxic waste |
      | Galaxy Nexus | toxic waste |
The test suite would run four times, once for each example row.
My problem
How can I test using confidential data in the Examples section? For example, I would like to test an internal API with user IDs or SSNs without hard-coding the data in the feature file.
Is there a way to load the Examples dynamically from an external source?
Update: I opened a GitHub issue on the behave project.
I've come up with another solution (behave-1.2.6):
I managed to dynamically create examples for a Scenario Outline by using before_feature.
Given a feature file (x.feature):
Feature: Verify squared numbers

  Scenario Outline: Verify square for <number>
    Then the <number> squared is <result>

    Examples: Static
      | number | result |
      | 1      | 1      |
      | 2      | 4      |
      | 3      | 9      |
      | 4      | 16     |

  # Use the tag to mark this outline
  @dynamic
  Scenario Outline: Verify square for <number>
    Then the <number> squared is <result>

    Examples: Dynamic
      | number | result |
      | .      | .      |
And the steps file (steps/x.py):
from behave import step

@step('the {number:d} squared is {result:d}')
def step_impl(context, number, result):
    assert number * number == result
The trick is to use before_feature in environment.py: at that point behave has already parsed the examples tables into the scenario outlines, but hasn't generated the scenarios from the outlines yet.
import behave
import copy

def before_feature(context, feature):
    outlines = (s for s in feature.scenarios
                if type(s) == behave.model.ScenarioOutline
                and 'dynamic' in s.tags)
    for s in outlines:
        for e in s.examples:
            orig = copy.deepcopy(e.table.rows[0])
            e.table.rows = []
            for num in range(1, 5):
                n = copy.deepcopy(orig)
                # This relies on knowing that the table has two columns.
                n.cells = ['{}'.format(num), '{}'.format(num * num)]
                e.table.rows.append(n)
This will only operate on Scenario Outlines that are tagged with @dynamic.
The result is:
$ behave -k --no-capture
Feature: Verify squared numbers # features/x.feature:1
Scenario Outline: Verify square for 1 -- #1.1 Static # features/x.feature:8
Then the 1 squared is 1 # features/steps/x.py:3
Scenario Outline: Verify square for 2 -- #1.2 Static # features/x.feature:9
Then the 2 squared is 4 # features/steps/x.py:3
Scenario Outline: Verify square for 3 -- #1.3 Static # features/x.feature:10
Then the 3 squared is 9 # features/steps/x.py:3
Scenario Outline: Verify square for 4 -- #1.4 Static # features/x.feature:11
Then the 4 squared is 16 # features/steps/x.py:3
@dynamic
Scenario Outline: Verify square for 1 -- #1.1 Dynamic # features/x.feature:19
Then the 1 squared is 1 # features/steps/x.py:3
@dynamic
Scenario Outline: Verify square for 2 -- #1.2 Dynamic # features/x.feature:19
Then the 2 squared is 4 # features/steps/x.py:3
@dynamic
Scenario Outline: Verify square for 3 -- #1.3 Dynamic # features/x.feature:19
Then the 3 squared is 9 # features/steps/x.py:3
@dynamic
Scenario Outline: Verify square for 4 -- #1.4 Dynamic # features/x.feature:19
Then the 4 squared is 16 # features/steps/x.py:3
1 feature passed, 0 failed, 0 skipped
8 scenarios passed, 0 failed, 0 skipped
8 steps passed, 0 failed, 0 skipped, 0 undefined
Took 0m0.005s
This relies on having an Examples table with the same shape as the final table; in my example, with two columns. I also don't fuss with creating new behave.model.Row objects; I just copy the one from the table and update it. For extra ugliness, if you're using a file, you can put the file name in the Examples table.
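Following that idea, the rows could instead be filled from an external file inside before_feature. A minimal sketch, assuming a hypothetical examples.csv whose columns match the table:
import copy
import csv

def fill_rows_from_csv(example, path='examples.csv'):
    # Reuse the parsed placeholder row as a template, exactly as above.
    template = copy.deepcopy(example.table.rows[0])
    example.table.rows = []
    with open(path, newline='') as f:
        for cells in csv.reader(f):
            row = copy.deepcopy(template)
            row.cells = list(cells)  # must match the table's column count
            example.table.rows.append(row)
Called from before_feature for each tagged outline's Examples, this replaces the placeholder row with one row per CSV line.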
I got here looking for something else, but since I've been in a similar situation with Cucumber before, maybe someone else will end up at this question looking for a possible solution. My approach to this problem is to use BDD variables that I can handle at runtime in my step definitions. In my Python code I can check the value of the Gherkin variable and map it to whatever is needed.
For this example:
  Scenario Outline: Use Blender with <thing>
    Given I put "<thing>" in a blender
    When I switch the blender on
    Then it should transform into "<other thing>"

    Examples: Amphibians
      | thing         | other thing            |
      | Red Tree Frog | mush                   |
      | iPhone        | data.iPhone.secret_key |  # can use .yaml syntax here as well
This would translate to step definition code like this:
@then('it should transform into "{other_thing}"')
def step_then_should_transform_into(context, other_thing):
    if other_thing == BddVariablesEnum.SECRET_KEY:
        # Map the BDD variable to the real secret at runtime.
        basic_actions.load_secrets(context, other_thing)
So all you have to do is have a well-defined DSL layer.
Regarding the issue of using SSNs in testing, I'd just use fake SSNs and not worry that I'm leaking people's private information.
Ok, but what about the larger issue? You want to use a scenario outline with examples that you cannot put in your feature file. Whenever I've run into this problem, what I did was give a description of the data I need and let the step implementation either create the actual data set used for testing or fetch it from an existing test database.
  Scenario Outline: Accessing the admin interface
    Given a user who <status> an admin has logged in
    Then the user <ability> see the admin interface

    Examples: Users
      | status | ability |
      | is     | can     |
      | is not | cannot  |
There's no need to show any details about the user in the feature file. The step implementation is responsible for either creating or fetching the appropriate type of user depending on the value of status.
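A minimal sketch of such a step implementation, assuming hypothetical create_user and user-session helpers:
from behave import given, then

@given('a user who {status} an admin has logged in')
def step_login_user(context, status):
    # Create or fetch a throwaway test user; admin rights are derived
    # from the wording in the feature file, not from hard-coded data.
    context.user = create_user(is_admin=(status == 'is'))  # hypothetical helper
    context.user.log_in()

@then('the user {ability} see the admin interface')
def step_check_admin(context, ability):
    expected = (ability == 'can')
    assert context.user.can_see_admin_interface() == expected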

Force lshosts command to return megabytes for "maxmem" and "maxswp" parameters

When I type "lshosts" I am given:
HOST_NAME  type    model     cpuf  ncpus  maxmem  maxswp  server  RESOURCES
server1    X86_64  Intel_EM  60.0  12     191.9G  159.7G  Yes     ()
server2    X86_64  Intel_EM  60.0  12     191.9G  191.2G  Yes     ()
server3    X86_64  Intel_EM  60.0  12     191.9G  191.2G  Yes     ()
I want lshosts to report maxmem and maxswp in megabytes, not gigabytes. I am sending Xilinx ISE jobs to my LSF cluster, but the software expects integer megabyte values for maxmem and maxswp; from debugging, it appears the software grabs these parameters using the lshosts command.
I have already checked in my lsf.conf file that:
LSF_UNIT_FOR_LIMITS=MB
I have tried searching the IBM Knowledge Base, but to no avail.
Do you use a specific command to specify maxmem and maxswp units within the lsf.conf, lsf.shared, or other config files?
Or does LSF just return the most practical unit?
Any way to override this?
LSF_UNIT_FOR_LIMITS should work, provided you have completely drained the cluster of all running, pending, and finished jobs. According to the docs, MB is the default, so I'm surprised.
That said, you can use something like this to transform the results:
$ cat to_mb.awk
function to_mb(s) {
    # The last character selects the unit: K -> 1, M -> 2, G -> 3.
    e = index("KMG", substr(s, length(s)))
    # The numeric part, without the trailing unit letter.
    m = substr(s, 1, length(s) - 1)
    # Scale to megabytes: K = 10^-3, M = 10^0, G = 10^3.
    return m * 10^((e - 2) * 3)
}
{ print $1 " " to_mb($6) " " to_mb($7) }
$ lshosts | tail -n +2 | awk -f to_mb.awk
server1 191900 159700
server2 191900 191200
server3 191900 191200
The to_mb function should also handle 'K' or 'M' units, should those pop up.
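If you'd rather do the conversion in Python, the same idea might look like this (a sketch; it assumes every value ends in K, M, or G):
def to_mb(value):
    # 'K' -> 10^-3 MB, 'M' -> 10^0 MB, 'G' -> 10^3 MB
    exponent = 3 * {'K': -1, 'M': 0, 'G': 1}[value[-1]]
    return float(value[:-1]) * 10 ** exponent

print(to_mb('191.9G'))  # roughly 191900.0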
If LSF_UNIT_FOR_LIMITS is defined in lsf.conf, lshosts will always print the output as a floating point number, and in some versions of LSF the parameter is defined as 'KB' in lsf.conf upon installation.
Try searching for any definitions of the parameter in lsf.conf and commenting them all out so that the parameter is left undefined. I think in that case it defaults to printing the value as an integer in megabytes.
(Don't ask me why it works this way)

Search multiline error log for error code and then some of its parameters on Linux

What command would give me the output I need for each instance of an error code in a very large log file? The file has records marked by a begin and an end line with a character count, such as:
SR 120
1414760452 0 1 Fri Oct 31 13:00:52 2014 2218714 4
GROVEMR2 scn
../SrxParamIF.m 284
New Exam Started
EN 120
The 5th field is the error code, 2218714 in the previous example.
I thought of just grepping for the error code and printing the following lines with -A, then picking what I needed from that rather than parsing the entire file. That seems easy, but my grep/awk/sed usage isn't at that level.
Only when error 2274021 is encountered, as in the following example, would I like output as shown.
I want output such as: egrep 'Coil:|Connector:|Channels faulted:|First channel:' ERRORLOG | less
Part of the input file of interest:
Mon Nov 24 13:43:37 2014 2274021 1
AWHMRGE3T NSP
SCP:RfHubCanHWO::RfBias 4101
^MException Class: Unknown Severity: Unknown
Function: RF: RF Bias
PSD: VIBRANT Coil: Breast SMI Scan: 1106/14
Coil Fault - Short Circuit
A multicoil bias fault was detected.
.
Connector: Port 1 (P1)
Channels faulted: 0x200
First channel: 10 of 32, counting from 1
Fault value: -2499 mV, Channel: 10->
Desired output:
Coil: Breast SMI
Connector: Port 1 (P1)
Channels faulted: 0x200
First channel: 10 of 32, counting from 1
Thanks in advance for any pointers!
Try the following (adapt as convenient):
#!/usr/bin/perl
use strict;

$/ = "\nEN ";                   # records are separated by "\nEN "
my $error = 2274021;            # the error code of interest

while (<>) {                    # for each record
    next unless /\b$error\b/;   # skip records without the error
    for my $line (split(/\n/, $_)) {
        print "$line\n" if $line =~ /Coil:|Connector:|Channels faulted:|First channel:/;
    }
    print "====\n";
}
Is this what you need?
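If you'd rather stay in Python, here's a rough equivalent of the same record-splitting idea (it assumes every record ends with a line starting with 'EN ', as in the sample):
import re

ERROR = '2274021'
WANTED = re.compile(r'Coil:|Connector:|Channels faulted:|First channel:')

def scan(filename):
    record = []
    with open(filename) as f:
        for line in f:
            record.append(line)
            if line.startswith('EN '):  # end of a record
                if re.search(r'\b' + ERROR + r'\b', ''.join(record)):
                    for item in record:
                        if WANTED.search(item):
                            print(item, end='')
                    print('====')
                record = []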

List of SYNTAX for logstash's grok

The syntax for a grok pattern is %{SYNTAX:SEMANTIC}. How do I generate a list of all available SYNTAX keywords? I know that I can use the grok debugger to discover patterns from text, but is there a list I can scan through?
They are in Git and included somewhere in the distribution, but it's probably easiest to view them online:
https://github.com/elasticsearch/logstash/blob/v1.4.0/patterns/grok-patterns
The grok patterns files are now in the logstash-patterns-core repository.
Assuming you have a clone of it in the logstash-patterns-core directory on your filesystem, you can issue a command like this one to list all SYNTAX keywords:
$ find ./logstash-patterns-core/patterns -type f -exec awk '{print $1}' {} \; | grep "^[^#\ ]" | sort
As of commit 6655856, the output of the command (aka the list of SYNTAX keywords) looks like this (remember though that this list is not static):
BACULA_CAPACITY
BACULA_DEVICE
BACULA_DEVICEPATH
BACULA_HOST
BACULA_JOB
BACULA_LOG_ALL_RECORDS_PRUNED
BACULA_LOG_BEGIN_PRUNE_FILES
BACULA_LOG_BEGIN_PRUNE_JOBS
BACULA_LOG_CANCELLING
BACULA_LOG_CLIENT_RBJ
BACULA_LOG_DIFF_FS
BACULA_LOG_DUPLICATE
BACULA_LOG_ENDPRUNE
BACULA_LOG_END_VOLUME
BACULA_LOG_FATAL_CONN
BACULA_LOG_JOB
BACULA_LOG_JOBEND
BACULA_LOGLINE
BACULA_LOG_MARKCANCEL
BACULA_LOG_MAX_CAPACITY
BACULA_LOG_MAXSTART
BACULA_LOG_NEW_LABEL
BACULA_LOG_NEW_MOUNT
BACULA_LOG_NEW_VOLUME
BACULA_LOG_NO_AUTH
BACULA_LOG_NO_CONNECT
BACULA_LOG_NOJOBS
BACULA_LOG_NOJOBSTAT
BACULA_LOG_NOOPEN
BACULA_LOG_NOOPENDIR
BACULA_LOG_NOPRIOR
BACULA_LOG_NOPRUNE_FILES
BACULA_LOG_NOPRUNE_JOBS
BACULA_LOG_NOSTAT
BACULA_LOG_NOSUIT
BACULA_LOG_PRUNED_FILES
BACULA_LOG_PRUNED_JOBS
BACULA_LOG_READYAPPEND
BACULA_LOG_STARTJOB
BACULA_LOG_STARTRESTORE
BACULA_LOG_USEDEVICE
BACULA_LOG_VOLUME_PREVWRITTEN
BACULA_LOG_VSS
BACULA_LOG_WROTE_LABEL
BACULA_TIMESTAMP
BACULA_VERSION
BACULA_VOLUME
BASE10NUM
BASE16FLOAT
BASE16NUM
BIND9
BIND9_TIMESTAMP
BRO_CONN
BRO_DNS
BRO_FILES
BRO_HTTP
CATALINA_DATESTAMP
CATALINALOG
CISCO_ACTION
CISCO_DIRECTION
CISCOFW104001
CISCOFW104002
CISCOFW104003
CISCOFW104004
CISCOFW105003
CISCOFW105004
CISCOFW105005
CISCOFW105008
CISCOFW105009
CISCOFW106001
CISCOFW106006_106007_106010
CISCOFW106014
CISCOFW106015
CISCOFW106021
CISCOFW106023
CISCOFW106100
CISCOFW106100_2_3
CISCOFW110002
CISCOFW302010
CISCOFW302013_302014_302015_302016
CISCOFW302020_302021
CISCOFW304001
CISCOFW305011
CISCOFW313001_313004_313008
CISCOFW313005
CISCOFW321001
CISCOFW402117
CISCOFW402119
CISCOFW419001
CISCOFW419002
CISCOFW500004
CISCOFW602303_602304
CISCOFW710001_710002_710003_710005_710006
CISCOFW713172
CISCOFW733100
CISCO_INTERVAL
CISCOMAC
CISCO_REASON
CISCOTAG
CISCO_TAGGED_SYSLOG
CISCOTIMESTAMP
CISCO_XLATE_TYPE
CLOUDFRONT_ACCESS_LOG
COMBINEDAPACHELOG
COMMONAPACHELOG
COMMONMAC
CRON_ACTION
CRONLOG
DATA
DATE
DATE_EU
DATESTAMP
DATESTAMP_EVENTLOG
DATESTAMP_OTHER
DATESTAMP_RFC2822
DATESTAMP_RFC822
DATE_US
DAY
ELB_ACCESS_LOG
ELB_REQUEST_LINE
ELB_URI
ELB_URIPATHPARAM
EMAILADDRESS
EMAILLOCALPART
EXIM_DATE
EXIM_EXCLUDE_TERMS
EXIM_FLAGS
EXIM_HEADER_ID
EXIM_INTERFACE
EXIM_MSGID
EXIM_MSG_SIZE
EXIM_PID
EXIM_PROTOCOL
EXIM_QT
EXIM_REMOTE_HOST
EXIM_SUBJECT
GREEDYDATA
HAPROXYCAPTUREDREQUESTHEADERS
HAPROXYCAPTUREDRESPONSEHEADERS
HAPROXYDATE
HAPROXYHTTP
HAPROXYHTTPBASE
HAPROXYTCP
HAPROXYTIME
HOSTNAME
HOSTPORT
HOUR
HTTPD20_ERRORLOG
HTTPD24_ERRORLOG
HTTPDATE
HTTPD_COMBINEDLOG
HTTPD_COMMONLOG
HTTPDERROR_DATE
HTTPD_ERRORLOG
HTTPDUSER
INT
IP
IPORHOST
IPV4
IPV6
ISO8601_SECOND
ISO8601_TIMEZONE
JAVACLASS
JAVACLASS
JAVAFILE
JAVAFILE
JAVALOGMESSAGE
JAVAMETHOD
JAVASTACKTRACEPART
JAVATHREAD
LOGLEVEL
MAC
MAVEN_VERSION
MCOLLECTIVE
MCOLLECTIVEAUDIT
MCOLLECTIVEAUDIT
MINUTE
MONGO3_COMPONENT
MONGO3_LOG
MONGO3_SEVERITY
MONGO_LOG
MONGO_QUERY
MONGO_SLOWQUERY
MONGO_WORDDASH
MONTH
MONTHDAY
MONTHNUM
MONTHNUM2
NAGIOS_CURRENT_HOST_STATE
NAGIOS_CURRENT_SERVICE_STATE
NAGIOS_EC_DISABLE_HOST_CHECK
NAGIOS_EC_DISABLE_HOST_NOTIFICATIONS
NAGIOS_EC_DISABLE_HOST_SVC_NOTIFICATIONS
NAGIOS_EC_DISABLE_SVC_CHECK
NAGIOS_EC_DISABLE_SVC_NOTIFICATIONS
NAGIOS_EC_ENABLE_HOST_CHECK
NAGIOS_EC_ENABLE_HOST_NOTIFICATIONS
NAGIOS_EC_ENABLE_HOST_SVC_NOTIFICATIONS
NAGIOS_EC_ENABLE_SVC_CHECK
NAGIOS_EC_ENABLE_SVC_NOTIFICATIONS
NAGIOS_EC_LINE_DISABLE_HOST_CHECK
NAGIOS_EC_LINE_DISABLE_HOST_NOTIFICATIONS
NAGIOS_EC_LINE_DISABLE_HOST_SVC_NOTIFICATIONS
NAGIOS_EC_LINE_DISABLE_SVC_CHECK
NAGIOS_EC_LINE_DISABLE_SVC_NOTIFICATIONS
NAGIOS_EC_LINE_ENABLE_HOST_CHECK
NAGIOS_EC_LINE_ENABLE_HOST_NOTIFICATIONS
NAGIOS_EC_LINE_ENABLE_HOST_SVC_NOTIFICATIONS
NAGIOS_EC_LINE_ENABLE_SVC_CHECK
NAGIOS_EC_LINE_ENABLE_SVC_NOTIFICATIONS
NAGIOS_EC_LINE_PROCESS_HOST_CHECK_RESULT
NAGIOS_EC_LINE_PROCESS_SERVICE_CHECK_RESULT
NAGIOS_EC_LINE_SCHEDULE_HOST_DOWNTIME
NAGIOS_EC_PROCESS_HOST_CHECK_RESULT
NAGIOS_EC_PROCESS_SERVICE_CHECK_RESULT
NAGIOS_EC_SCHEDULE_HOST_DOWNTIME
NAGIOS_EC_SCHEDULE_SERVICE_DOWNTIME
NAGIOS_HOST_ALERT
NAGIOS_HOST_DOWNTIME_ALERT
NAGIOS_HOST_EVENT_HANDLER
NAGIOS_HOST_FLAPPING_ALERT
NAGIOS_HOST_NOTIFICATION
NAGIOSLOGLINE
NAGIOS_PASSIVE_HOST_CHECK
NAGIOS_PASSIVE_SERVICE_CHECK
NAGIOS_SERVICE_ALERT
NAGIOS_SERVICE_DOWNTIME_ALERT
NAGIOS_SERVICE_EVENT_HANDLER
NAGIOS_SERVICE_FLAPPING_ALERT
NAGIOS_SERVICE_NOTIFICATION
NAGIOSTIME
NAGIOS_TIMEPERIOD_TRANSITION
NAGIOS_TYPE_CURRENT_HOST_STATE
NAGIOS_TYPE_CURRENT_SERVICE_STATE
NAGIOS_TYPE_EXTERNAL_COMMAND
NAGIOS_TYPE_HOST_ALERT
NAGIOS_TYPE_HOST_DOWNTIME_ALERT
NAGIOS_TYPE_HOST_EVENT_HANDLER
NAGIOS_TYPE_HOST_FLAPPING_ALERT
NAGIOS_TYPE_HOST_NOTIFICATION
NAGIOS_TYPE_PASSIVE_HOST_CHECK
NAGIOS_TYPE_PASSIVE_SERVICE_CHECK
NAGIOS_TYPE_SERVICE_ALERT
NAGIOS_TYPE_SERVICE_DOWNTIME_ALERT
NAGIOS_TYPE_SERVICE_EVENT_HANDLER
NAGIOS_TYPE_SERVICE_FLAPPING_ALERT
NAGIOS_TYPE_SERVICE_NOTIFICATION
NAGIOS_TYPE_TIMEPERIOD_TRANSITION
NAGIOS_WARNING
NETSCREENSESSIONLOG
NONNEGINT
NOTSPACE
NUMBER
PATH
POSINT
POSTGRESQL
PROG
QS
QUOTEDSTRING
RAILS3
RAILS3FOOT
RAILS3HEAD
RAILS3PROFILE
RCONTROLLER
REDISLOG
REDISMONLOG
REDISTIMESTAMP
RPROCESSING
RT_FLOW1
RT_FLOW2
RT_FLOW3
RT_FLOW_EVENT
RUBY_LOGGER
RUBY_LOGLEVEL
RUUID
S3_ACCESS_LOG
S3_REQUEST_LINE
SECOND
SFW2
SHOREWALL
SPACE
SQUID3
SYSLOG5424BASE
SYSLOG5424LINE
SYSLOG5424PRI
SYSLOG5424PRINTASCII
SYSLOG5424SD
SYSLOGBASE
SYSLOGBASE2
SYSLOGFACILITY
SYSLOGHOST
SYSLOGLINE
SYSLOGPAMSESSION
SYSLOGPROG
SYSLOGTIMESTAMP
TIME
TIMESTAMP_ISO8601
TOMCAT_DATESTAMP
TOMCATLOG
TTY
TZ
UNIXPATH
URI
URIHOST
URIPARAM
URIPATH
URIPATHPARAM
URIPROTO
URN
USER
USERNAME
UUID
WINDOWSMAC
WINPATH
WORD
YEAR
If you have installed Logstash as a package, they can be found at /opt/logstash/patterns/grok-patterns.
You can locate the pattern directories using find:
# find / -name patterns
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/lib/logstash/patterns
Just browse to the directory:
# cd /opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.5/patterns
And here you have the whole list of pattern files:
aws     exim           haproxy  linux-syslog          mongodb     rails
bacula  firewalls      java     mcollective           nagios      redis
bro     grok-patterns  junos    mcollective-patterns  postgresql  ruby
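As a cross-platform alternative to the find/awk pipeline above, a short Python script can produce the same keyword list (a sketch, assuming a clone in ./logstash-patterns-core):
import os

def list_grok_syntax(patterns_dir='./logstash-patterns-core/patterns'):
    names = set()
    for root, _dirs, files in os.walk(patterns_dir):
        for fname in files:
            with open(os.path.join(root, fname)) as f:
                for line in f:
                    # Each pattern definition is "NAME <regexp>";
                    # skip blank lines and comments.
                    if line.strip() and not line.startswith('#'):
                        names.add(line.split()[0])
    return sorted(names)

print('\n'.join(list_grok_syntax()))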
