Is it possible to change the GridView pagination button count globally in Yii2? - pagination

This works perfectly, but I want to make this pager setting global:
GridView::widget([
    'dataProvider' => $dataProvider,
    'pager' => [
        'maxButtonCount' => 5,
        'options' => [
            'tag' => 'ul',
            'class' => 'pagination pagination-sm',
        ],
    ],
    'columns' => .....
]);
This is my code in the components section, but it is not working:
'components' => [
    .....
    'pager' => [
        'class' => yii\widgets\LinkPager::class,
        'maxButtonCount' => 5,
        'options' => [
            'tag' => 'ul',
            'class' => 'pagination pagination-sm',
        ],
    ],
],

I found a solution for handling the pagination button count globally: configure yii\widgets\LinkPager through the dependency injection container rather than as an application component (the grid's pager is a widget, so the components section has no effect on it):
$config = [
    ....
    'container' => [
        'definitions' => [
            'yii\widgets\LinkPager' => [
                'maxButtonCount' => 6,
                'options' => [
                    'tag' => 'ul',
                    'class' => 'pagination', //pagination-sm
                ],
            ],
        ],
    ],
    'components' => [
        ...
    ],
];
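With that container definition in place, every GridView pager is built through Yii's DI container (Widget::widget() instantiates widgets via Yii::createObject()), so individual views no longer need a per-widget pager config. A minimal sketch; the column names are placeholders:

use yii\grid\GridView;

// The pager settings (maxButtonCount, options) now come from the container definition,
// so no 'pager' key is needed here.
echo GridView::widget([
    'dataProvider' => $dataProvider,
    'columns' => [
        'id',          // placeholder columns for illustration
        'created_at',
    ],
]);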

Related

Creating a custom GROK pattern

Currently, I'm trying to create a grok pattern for this log:
2020-03-11 05:54:26,174 JMXINSTRUMENTS-Threading [{"timestamp":"1583906066","label":"Threading","ObjectName":"java.lang:type\u003dThreading","attributes":[{"name":"CurrentThreadUserTime","value":18600000000},{"name":"ThreadCount","value":152},{"name":"TotalStartedThreadCount","value":1138},{"name":"CurrentThreadCpuTime","value":20804323112},{"name":"PeakThreadCount","value":164},{"name":"DaemonThreadCount","value":136}]}]
At the moment I can match correctly up to JMXINSTRUMENTS-Threading by using this pattern:
%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message}
But I cannot seem to match any of the values after this. Does anybody have an idea what pattern I should use?
It worked for me after defining a different source and target in the JSON filter. Thanks for the help!
filter {
    if "atlassian-jira-perf" in [tags] {
        grok {
            match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message_raw}" }
            tag_on_failure => ["no_match"]
            add_tag => ["bananas"]
        }
        if "no_match" not in [tags] {
            json {
                source => "log_message_raw"
                target => "parsed"
            }
        }
        mutate {
            remove_field => ["message"]
        }
    }
}
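For reference, with the sample line above, the decoded JSON then ends up under the parsed field, roughly like this (rubydebug-style, abbreviated; exact formatting will differ):

"parsed" => [
    [0] {
         "timestamp" => "1583906066",
             "label" => "Threading",
        "ObjectName" => "java.lang:type=Threading",
        "attributes" => [
            [0] { "name" => "CurrentThreadUserTime", "value" => 18600000000 },
            [1] { "name" => "ThreadCount", "value" => 152 },
            ...
        ]
    }
]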
I tried your pattern in https://grokdebug.herokuapp.com/ (the official debugger for Logstash), and it does match everything after "JMXINSTRUMENTS-Threading" into one big field called log_message, like this:
{
  "timestamp":        [[ "2020-03-11 05:54:26,174" ]],
  "YEAR":             [[ "2020" ]],
  "MONTHNUM":         [[ "03" ]],
  "MONTHDAY":         [[ "11" ]],
  "HOUR":             [[ "05", null ]],
  "MINUTE":           [[ "54", null ]],
  "SECOND":           [[ "26,174" ]],
  "ISO8601_TIMEZONE": [[ null ]],
  "instrument":       [[ "JMXINSTRUMENTS-Threading" ]],
  "log_message":      [[ "[{"timestamp":"1583906066","label":"Threading","ObjectName":"java.lang:type\\u003dThreading","attributes":[{"name":"CurrentThreadUserTime","value":18600000000},{"name":"ThreadCount","value":152},{"name":"TotalStartedThreadCount","value":1138},{"name":"CurrentThreadCpuTime","value":20804323112},{"name":"PeakThreadCount","value":164},{"name":"DaemonThreadCount","value":136}]}]" ]]
}
If you wish to extract all the fields contained in log_message, you should use a json filter in your Logstash pipeline's filter section, right below your grok filter.
For example:
grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message}" }
    tag_on_failure => ["no_match"]
}
if "no_match" not in [tags] {
    json {
        source => "log_message"
    }
}
That way your JSON will be split into key/value pairs and parsed.
EDIT:
You could try to use a kv filter instead of json; here are the docs: https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message}" }
    tag_on_failure => ["no_match"]
}
if "no_match" not in [tags] {
    kv {
        source => "log_message"
        value_split => ":"
        include_brackets => true   # remove brackets
        remove_char_key => "\""
        remove_char_value => "\""
        field_split => ","
    }
}

Logstash is sending a log twice. Repeating logs Issue

I am parsing a log file on my server and sending only INFO, WARN, and ERROR level logs to my API, but the problem is that I receive each log twice. In the output section I map the parsed log values onto my JSON fields and send that JSON to my API, but I receive each mapped JSON twice.
I analyzed my Logstash log file, and each log entry appears only once in the file.
{
    "log_EventMessage" => "Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read timed ",
    "message" => "TID: [-1234] [] [2017-08-11 12:03:11,545] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read time",
    "type" => "carbon",
    "TimeStamp" => "2017-08-11T12:03:11.545",
    "tags" => [
        [0] "grokked",
        [1] "loglevelinfo",
        [2] "_grokparsefailure"
    ],
    "log_EventTitle" => "org.apache.axis2.transport.http.HTTPSender",
    "path" => "/home/waqas/Documents/repository/logs/carbon.log",
    "@timestamp" => 2017-08-11T07:03:13.668Z,
    "@version" => "1",
    "host" => "ubuntu",
    "log_SourceSystemId" => "-1234",
    "EventId" => "b81a054e-babb-426c-b0a0-268494d14a0e",
    "log_EventType" => "INFO"
}
The following is my configuration.
I need help; I cannot figure out why this is happening.
input {
file {
path => "LOG_FILE_PATH"
type => "carbon"
start_position => "end"
codec => multiline {
pattern => "(^\s*at .+)|^(?!TID).*$"
negate => false
what => "previous"
auto_flush_interval => 1
}
}
}
filter {
#***********************************************************
# Grok Pattern to parse Single Line Log Entries
#**********************************************************
if [type] == "carbon" {
grok {
match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_EventType}%{SPACE}{%{JAVACLASS:log_EventTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_EventMessage}" ]
add_tag => [ "grokked" ]
}
mutate {
gsub => [
"TimeStamp", "\s", "T",
"TimeStamp", ",", "."
]
}
if "grokked" in [tags] {
grok {
match => ["log_EventType", "INFO"]
add_tag => [ "loglevelinfo" ]
}
grok {
match => ["log_EventType", "ERROR"]
add_tag => [ "loglevelerror" ]
}
grok {
match => ["log_EventType", "WARN"]
add_tag => [ "loglevelwarn" ]
}
}
#*****************************************************
# Grok Pattern in Case of Failure
#*****************************************************
if !( "_grokparsefailure" in [tags] ) {
grok{
match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
add_tag => [ "grokked" ]
}
date {
match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
target => "TimeStamp"
timezone => "UTC"
}
}
}
#*******************************************************************
# Grok Pattern to handle MultiLines Exceptions and StackTraces
#*******************************************************************
if ( "multiline" in [tags] ) {
grok {
match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
add_tag => [ "multiline" ]
tag_on_failure => [ "multiline" ]
}
date {
match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
target => "TimeStamp"
}
}
}
filter {
uuid {
target => "EventId"
}
}
output {
if [type] == "carbon" {
if "loglevelerror" in [tags] {
stdout{codec => rubydebug}
#*******************************************************************
# Sending Error Messages to API
#*******************************************************************
http {
url => "https://localhost:8000/logs"
headers => {
"Accept" => "application/json"
}
connect_timeout => 60
socket_timeout => 60
http_method => "post"
format => "json"
mapping => ["EventId","%{EventId}","EventSeverity","High","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
}
}
}
if [type] == "carbon" {
if "loglevelinfo" in [tags] {
stdout{codec => rubydebug}
#*******************************************************************
# Sending Info Messages to API
#*******************************************************************
http {
url => "https://localhost:8000/logs"
headers => {
"Accept" => "application/json"
}
connect_timeout => 60
socket_timeout => 60
http_method => "post"
format => "json"
mapping => ["EventId","%{EventId}","EventSeverity","Low","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
}
}
}
if [type] == "carbon" {
if "loglevelwarn" in [tags] {
stdout{codec => rubydebug}
#*******************************************************************
# Sending Warn Messages to API
http {
url => "https://localhost:8000/logs"
headers => {
"Accept" => "application/json"
}
connect_timeout => 60
socket_timeout => 60
http_method => "post"
format => "json"
mapping => ["EventId","%{EventId}","EventSeverity","Medium","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
}
}
}
}

Logstash: TestResult comes out as an array

The results generated by running the config below show the TestResult section as an array. I am trying to get rid of that array so the data can be sent to Elasticsearch correctly.
I have the following XML file:
<tem:SubmitTestResult xmlns:tem="http://www.example.com" xmlns:acs="http://www.example.com" xmlns:acs1="http://www.example.com">
    <tem:LabId>123</tem:LabId>
    <tem:userId>123</tem:userId>
    <tem:TestResult>
        <acs:CreatedBy>123</acs:CreatedBy>
        <acs:CreatedDate>123</acs:CreatedDate>
        <acs:LastUpdatedBy>123</acs:LastUpdatedBy>
        <acs:LastUpdatedDate>123</acs:LastUpdatedDate>
        <acs1:Capacity95FHigh>123</acs1:Capacity95FHigh>
        <acs1:Capacity95FHigh_AHRI>123</acs1:Capacity95FHigh_AHRI>
        <acs1:CondensateDisposal_AHRI>123</acs1:CondensateDisposal_AHRI>
        <acs1:DegradationCoeffCool>123</acs1:DegradationCoeffCool>
    </tem:TestResult>
</tem:SubmitTestResult>
And I am using this config:
input {
file {
path => "/var/log/logstash/test3.xml"
}
}
filter {
multiline {
pattern => "<tem:SubmitTestResult>"
negate => "true"
what => "previous"
}
if "multiline" in [tags] {
mutate {
gsub => ["message", "\n", ""]
}
mutate {
replace => ["message", '<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>%{message}']
}
xml {
source => "message"
target => "SubmitTestResult"
}
mutate {
remove_field => ["message", "#version", "host", "#timestamp", "path", "tags", "type"]
remove_field => ["[SubmitTestResult][xmlns:tem]","[SubmitTestResult][xmlns:acs]","[SubmitTestResult][xmlns:acs1]"]
}
mutate {
replace => [ "[SubmitTestResult][LabId]", "%{[SubmitTestResult][LabId]}" ]
replace => [ "[SubmitTestResult][userId]", "%{[SubmitTestResult][userId]}" ]
}
mutate {
replace => [ "[SubmitTestResult][TestResult][0][CreatedBy]", "%{[SubmitTestResult][TestResult][0][CreatedBy]}" ]
replace => [ "[SubmitTestResult][TestResult][0][CreatedDate]", "%{[SubmitTestResult][TestResult][0][CreatedDate]}" ]
replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedBy]", "%{[SubmitTestResult][TestResult][0][LastUpdatedBy]}" ]
replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedDate]", "%{[SubmitTestResult][TestResult][0][LastUpdatedDate]}" ]
replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh]}" ]
replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]}" ]
replace => [ "[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]", "%{[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]}" ]
replace => [ "[SubmitTestResult][TestResult][0][DegradationCoeffCool]", "%{[SubmitTestResult][TestResult][0][DegradationCoeffCool]}" ]
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
The result is:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => [
[0] {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
]
}
As you can see, TestResult has the "[0]" array index in there. Is there some config change I can make so that it doesn't come out as an array? I want to send this to Elasticsearch with the data structured correctly.
I figured this out. After the last mutate block, I added one more mutate block. All I had to do was rename the field and that did the trick.
mutate {
    rename => { "[SubmitTestResult][TestResult][0]" => "[SubmitTestResult][TestResult]" }
}
The result now looks proper:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
}
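A possible alternative worth noting: depending on your logstash-filter-xml version, the xml filter's force_array option (which defaults to true) can keep single child elements from being wrapped in arrays in the first place, so the rename step would not be needed. A sketch, assuming the option is available in your version:

xml {
    source => "message"
    target => "SubmitTestResult"
    force_array => false   # keep single child elements as objects, not single-element arrays
}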

DocuSign checkboxes: require at least 1 to be checked

I have a group of checkboxes. At least 1 of these 4 has to be checked. Is there any way to make that happen?
$checkboxTabs[] = array(
    "anchorYOffset" => "-2",
    "anchorXOffset" => "-5",
    "anchorString" => "{bk1}",
    "selected" => false,
    "tabLabel" => "bk1"
);
$checkboxTabs[] = array(
    "anchorYOffset" => "-2",
    "anchorXOffset" => "-5",
    "anchorString" => "{bk2}",
    "selected" => false,
    "tabLabel" => "bk2"
);
$checkboxTabs[] = array(
    "anchorYOffset" => "-2",
    "anchorXOffset" => "-5",
    "anchorString" => "{bk3}",
    "selected" => false,
    "tabLabel" => "bk3"
);
$checkboxTabs[] = array(
    "anchorYOffset" => "-2",
    "anchorXOffset" => "-5",
    "anchorString" => "{bk4}",
    "selected" => false,
    "tabLabel" => "bk4"
);
There is no support for mandatory checkboxes. The alternative available to you is radio buttons.
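If radio buttons fit your form, a radio group lets the signer pick exactly one of the options. A rough sketch in the same array style as above (the group name is a placeholder, and the field names should be checked against the DocuSign API version you are using):

// Sketch: one radio group in place of the four independent checkboxes.
// A signer can select only one radio within the group.
$radioGroupTabs[] = array(
    "groupName" => "bkGroup",   // placeholder group name
    "radios" => array(
        array("anchorString" => "{bk1}", "anchorYOffset" => "-2", "anchorXOffset" => "-5", "value" => "bk1", "selected" => false),
        array("anchorString" => "{bk2}", "anchorYOffset" => "-2", "anchorXOffset" => "-5", "value" => "bk2", "selected" => false),
        array("anchorString" => "{bk3}", "anchorYOffset" => "-2", "anchorXOffset" => "-5", "value" => "bk3", "selected" => false),
        array("anchorString" => "{bk4}", "anchorYOffset" => "-2", "anchorXOffset" => "-5", "value" => "bk4", "selected" => false)
    )
);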

grokparsefailure with multiple if [type] - logstash config

OK - I've been racking my brain over this config file for days with little success (I'm very new to Logstash/the ELK stack). The problem I'm having is that when I place two Logstash configs in the same directory, I get a grok error on the second config. Meaning, 001 will work and 002 will produce the error. If I run Logstash with only one config (it doesn't matter which one), everything runs great. When combined, one works and the other fails. I have combined the two conf files into a single conf file, but the same issue persists. Below is the combined version of the config and a sample of the syslogs. Any assistance would be greatly appreciated!
input {
file {
path => ["/var/log/pantraffic.log"]
#start_position => "beginning"
type => "pantraffic"
}
file {
path => ["/var/log/panthreat.log"]
#start_position => "beginning"
type => "panthreat"
}
}
filter {
if [type] == "pantraffic" {
grok {
#patterns_dir => "/opt/logstash/patterns"
match => [ "message_traffic", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}"]
}
syslog_pri { }
}
csv {
source => "traffic_message"
columns => [ "PaloAltoDomain","ReceiveTime","SerialNum","Type","Threat- ContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","Bytes","BytesSent","BytesReceived","Packets","StartTime","ElapsedTimeInSec","Category","Padding","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","pkts_sent","pkts_received","sessionEndReason" ]
}
date {
#timezone => "America/Chicago"
match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
}
mutate {
convert => [ "Bytes", "integer" ]
convert => [ "BytesReceived", "integer" ]
convert => [ "BytesSent", "integer" ]
convert => [ "ElapsedTimeInSec", "integer" ]
convert => [ "geoip.area_code", "integer" ]
convert => [ "geoip.dma_code", "integer" ]
convert => [ "geoip.latitude", "float" ]
convert => [ "geoip.longitude", "float" ]
convert => [ "NATDestinationPort", "integer" ]
convert => [ "NATSourcePort", "integer" ]
convert => [ "Packets", "integer" ]
convert => [ "pkts_received", "integer" ]
convert => [ "pkts_sent", "integer" ]
convert => [ "seqno", "integer" ]
gsub => [ "Rule", " ", "_",
"Application", "( |-)", "_" ]
remove_field => [ "message_traffic", "traffic_message" ]
}
if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6- 9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "SourceAddress"
target => "SourceGeo"
}
if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
mutate {
replace => [ "SourceGeo.location", "" ]
}
}
}
if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "DestinationAddress"
target => "DestinationGeo"
}
if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
mutate {
replace => [ "DestinationAddress.location", "" ]
}
}
}
if [SourceAddress] and [DestinationAddress] {
fingerprint {
concatenate_sources => true
method => "SHA1"
key => "logstash"
source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
}
}
###########################################################################
if [type] == "panthreat" {
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:threat_message}"]
}
syslog_pri { }
}
csv {
source => "threat_message"
columns => [ "Domain","ReceiveTime","Serial","Type","ThreatContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","URL","ThreatContentName","Category","Severity","Direction","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","contenttype","pcap_id","filedigest","cloud","url_idx","user_agent","filetype","xff","referer","sender","subject","recipient","reportid" ]
}
date {
#timezone => "America/Chicago"
match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
}
mutate {
#convert => [ "Bytes", "integer" ]
#convert => [ "BytesReceived", "integer" ]
#convert => [ "BytesSent", "integer" ]
#convert => [ "ElapsedTimeInSec", "integer" ]
convert => [ "geoip.area_code", "integer" ]
convert => [ "geoip.dma_code", "integer" ]
convert => [ "geoip.latitude", "float" ]
convert => [ "geoip.longitude", "float" ]
convert => [ "NATDestinationPort", "integer" ]
convert => [ "NATSourcePort", "integer" ]
#convert => [ "Packets", "integer" ]
#convert => [ "pkts_received", "integer" ]
#convert => [ "pkts_sent", "integer" ]
#convert => [ "seqno", "integer" ]
gsub => [ "Rule", " ", "_",
"Application", "( |-)", "_" ]
remove_field => [ "message", "threat_message" ]
}
if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6- 9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "SourceAddress"
target => "SourceGeo"
}
if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
mutate {
replace => [ "SourceGeo.location", "" ]
}
}
}
if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "DestinationAddress"
target => "DestinationGeo"
}
if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
mutate {
replace => [ "DestinationAddress.location", "" ]
}
}
}
if [SourceAddress] and [DestinationAddress] {
fingerprint {
concatenate_sources => true
method => "SHA1"
key => "logstash"
source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
}
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
Log Samples:
panthreat log:
015-11-13T04:53:28-06:00 PA-200 1,2015/11/13 04:53:28,0011122223333,THREAT,vulnerability,1,2015/11/13 04:53:28,73.222.111.1,4.4.4.4,0.0.0.0,0.0.0.0,rule1,,,dns,vsys1,trust,untrust,ethernet1/2,ethernet1/1,Default_Forwarder,2015/11/13 04:53:28,3602,1,34830,53,0,0,0x0,udp,drop-all-packets,"",Test(41000),0,any,high,client-to-server,37,0x0,US,US,0,,0,,,0,,,,,,,
pantraffic log:
2015-11-13T07:34:22-06:00 PA-200 1,2015/11/13 07:34:21,001112223334,TRAFFIC,end,1,2015/11/13 07:34:21,73.22.111.1,4.3.2.1,0.0.0.0,0.0.0.0,rule1,,,facebook-base,vsys1,trust,untrust,ethernet1/2,ethernet1/1,Default_Forwarder,2015/11/13 07:34:21,6385,1,63121,443,0,0,0x53,tcp,allow,6063,2285,3778,29,2015/11/13 07:34:05,2,social-networking,0,15951,0x0,US,IE,0,17,12,tcp-fin
I think you messed up your closing brackets. Check this block (your first if), for instance:
if [type] == "pantraffic" {
grok {
#patterns_dir => "/opt/logstash/patterns"
match => [ "message_traffic", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}"]
}
syslog_pri { }
}
The last closing bracket is likely misplaced here: you don't want to close your if block at this point, but just before you start your "panthreat" block further down. The "panthreat" if block has the same problem.
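Roughly, the filter section should end up shaped like the sketch below, with each type's csv, date, mutate, and geoip/fingerprint blocks kept inside their own if (a skeleton only; the "..." bodies stand for your existing settings):

filter {
    if [type] == "pantraffic" {
        grok { ... }
        syslog_pri { }
        csv { ... }
        date { ... }
        mutate { ... }
        # geoip / fingerprint conditionals for pantraffic ...
    }   # <- close the "pantraffic" branch here, after all of its filters

    if [type] == "panthreat" {
        grok { ... }
        syslog_pri { }
        csv { ... }
        date { ... }
        mutate { ... }
        # geoip / fingerprint conditionals for panthreat ...
    }
}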
