Logstash configuration error and I think this is a dumb question - logstash

I am trying to get logstash to work (well, I have gotten it to work, but I want to try growing my skill set), and this is my config file setup...
input {
  file {
    path => "C:/temp/Machine Learning/Dash.txt"
    start_position => "beginning"
    sincedb_path => "/tmp/since.txt"
  }
}
filter {
  json {
    source => "message"
    target => "message"
  }
}
output {
  file { path => "/tmp/OutPut.txt" }
}
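As a side note, leaving target unset would make the json filter expand the parsed keys at the top level of the event instead of nesting them under [message]; a minimal sketch of that variation:
filter {
  json {
    source => "message"
    # with no target set, parsed keys land at the event root,
    # so a payload like {"foo": 1} becomes a top-level [foo] field
  }
}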
What I want to do is parse out the message field and look at its constituent pieces, but this config doesn't work. I get this when I run it in debug...
Missing a required setting for the json filter plugin:
filter {
json {
source => # SETTING MISSING
...
}
}
[2019-12-19T10:32:44,655][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Something is wrong with your configuration.", :backtrace=>["c:/Logstash/logstash/logstash-core/lib/logstash/config/mixin.rb:86:in `config_init'", "c:/Logstash/logstash/logstash-core/lib/logstash/filters/base.rb:126:in `initialize'", "org/logstash/plugins/PluginFactoryExt.java:70:in `filter_delegator'", "org/logstash/plugins/PluginFactoryExt.java:244:in `plugin'", "org/logstash/plugins/PluginFactoryExt.java:181:in `plugin'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline.rb:71:in `plugin'", "(eval):64:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline.rb:49:in `initialize'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline.rb:90:in `initialize'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:42:in `block in execute'", "c:/Logstash/logstash/logstash-core/lib/logstash/agent.rb:92:in `block in exclusive'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "c:/Logstash/logstash/logstash-core/lib/logstash/agent.rb:92:in `exclusive'", "c:/Logstash/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38:in `execute'", "c:/Logstash/logstash/logstash-core/lib/logstash/agent.rb:317:in `block in converge_state'"]}
And I am not sure what to do about that, as it looks like I have set up the filter correctly according to this documentation:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html#plugins-filters-json-target
I am on Windows 10, which I think is important info.

Found the problem. Like the professional that I am, I made a backup config file to revert to in case something went awry; great idea, honestly. Then, like the idiot I am, I started making updates and changes to the backup file, which was not the actual config file I was testing against.
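One way to catch this kind of mix-up early is to point Logstash at the exact file being edited and validate it without starting the pipeline (the path below is a placeholder for your real config):
bin/logstash -f C:/path/to/pipeline.conf --config.test_and_exit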

Related

Creating a custom grok pattern in Logstash

I'm trying to add a custom pattern to Logstash in order to capture data from this kind of log line:
[2017-11-27 12:08:22] production.INFO: {"upload duration":0.16923}
I followed the instructions in the Logstash guide for grok and created a directory called patterns with a file in it called extra that contains:
POSTFIX_UPLOAD_DURATION upload duration
and added the path to the config file:
grok {
  patterns_dir => ["./patterns"]
  match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{POSTFIX_UPLOAD_DURATION: upload_duration} %{DATA:log_env}\.%{LOGLEVEL:severity}: %{GREEDYDATA:log_message}" }
}
However, I'm getting this error message:
Pipeline aborted due to error {:exception=>#<Grok::PatternError: pattern %{POSTFIX_UPLOAD_DURATION: upload_duration} not defined>
Also, some log lines don't contain the 'upload duration' field; will this break the pipeline?
You can use relative directories, as long as they are relative to the current working directory of the process when it starts, not relative to the conf file or to Logstash itself.
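To take the working directory out of the equation entirely, an absolute path can be used; a minimal sketch, assuming the patterns live in /etc/logstash/patterns (substitute your actual directory):
grok {
  # an absolute patterns_dir does not depend on where the process was started
  patterns_dir => ["/etc/logstash/patterns"]
  match => { "message" => "\[%{TIMESTAMP_ISO8601:timestamp}\] %{GREEDYDATA:log_message}" }
}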
I found out that there is a better and more efficient way to capture data, using the json plugin.
I added "log_payload:" to my logs and inserted the data I need to capture into a JSON object.
Then I used this pipeline to capture it:
if ("log_payload:" in [log_message]) {
grok{
match => {"log_message" => 'log_payload:%{DATA:json_object}}%{GREEDYDATA}'}
}
mutate{
update => ["json_object", "%{[json_object]}}"]
}
json {
source => "json_object"
}
}
mutate {
remove_field => ["log_message", "json_object"]
}
}
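This also answers the earlier concern about lines missing the field: events without "log_payload:" simply skip the grok/mutate/json steps inside the conditional rather than breaking the pipeline, and even an unguarded grok would only tag non-matching events with _grokparsefailure.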

Logstash Filter not working when something has a period in the name

So I need to write a filter that changes all the periods in field names to underscores. I am using mutate, and I can do some things but not others. For reference, here is my current output in Kibana.
See those fields that say "packet.event-id" and so forth? I need to rename all of those. Here is my filter that I wrote and I do not know why it doesn't work
filter {
  json {
    source => "message"
  }
  mutate {
    add_field => { "pooooo" => "AW CMON" }
    rename => { "offset" => "my_offset" }
    rename => { "packet.event-id" => "my_packet_event_id" }
  }
}
The problem is that I CAN add a field, and the renaming of "offset" WORKS. But when I try to do the packet one, nothing changes. I feel like this should be simple, and I am very confused as to why only the one with a period in it doesn't work.
I have refreshed the index in Kibana, and still nothing changes. Anyone have a solution?
When they show up in dotted notation in Kibana, it's because there is structure in the document you originally loaded in JSON format.
To access the document structure using logstash, you need to use [packet][event-id] in your rename filter instead of packet.event-id.
For example:
filter {
  mutate {
    rename => {
      "[packet][event-id]" => "my_packet_event_id"
    }
  }
}
You can do the JSON parsing directly in Filebeat by adding a few lines of config to your filebeat.yml.
filebeat.prospectors:
- paths:
    - /var/log/snort/snort.alert
  json.keys_under_root: true
  json.add_error_key: true
  json.message_key: log
You shouldn't need to rename the fields. If you do need to access a field in Logstash you can reference the field as [packet][length] for example. See Logstash field references for documentation on the syntax.
And by the way, there is a de_dot filter for replacing dots in field names, but it shouldn't be needed in this case.
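For completeness, a minimal sketch of de_dot in case it is useful elsewhere (by default the plugin replaces dots in field names with underscores; without a fields list it scans every field, which the docs warn is expensive):
filter {
  de_dot {
    # e.g. packet.event-id becomes packet_event-id
  }
}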

How can logstash identify and parse the newly created logfiles?

I'm new to ELK and currently I'm facing the following issue.
I want logstash to parse some server logfiles. Every day, a new logfile is created which has the following naming format: file160629.log (where 160629 = current date)
Here's my config input:
input {
  file {
    path => "C:\LogFiles\u_ex%d.log"
    start_position => beginning
  }
}
But it seems that it doesn't recognize the new logfiles.
Can someone tell me what I am doing wrong?
Thank you in advance.
For all the log files inside the LogFiles folder you can use:
input {
  file {
    path => "C:/LogFiles/*.log"
  }
}
It will tail files by default.
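If only the date-stamped files are wanted, the glob can be narrowed to the naming scheme from the question, with start_position added so each new file is read from the top; a sketch (note the file input expects forward slashes, even on Windows):
input {
  file {
    # matches file160629.log, file160630.log, etc.
    path => "C:/LogFiles/file*.log"
    start_position => "beginning"
  }
}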

Using glob on logstash server machine?

We have a separate server for logstash and logs are on a remote machine.
We ship these same logs from the remote machine to the logstash server using the lumberjack plugin for Logstash.
I tried this:
Client config (where logs are present):
input {
  file {
    path => "/home/Desktop/Logstash-Input/**/*_log"
  }
}
output {
  lumberjack {
    hosts => ["xx.xx.xx.xx"]
    port => 4545
    ssl_certificate => "./logstash.pub"
  }
}
I want to extract fields from my file input's path variable, so that different parsing patterns can be applied depending on the field values.
E.g. something like this:
grok {
  match => ["path", "/home/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
}
Here server and logtype are directory names, which I want as fields so that I can apply different parsing patterns, like:
filter {
  if [server] == "Server2" and [logtype] == "CronLog" {
    grok { ... }
  }
  if [server] == "Server3" and [logtype] == "CronLog" {
    grok { ... }
  }
}
How can I apply the above in my logstash-server config, given that the file input is on the client machine whose path I want to extract fields from?
Lumberjack successfully ships logs to the server.
I tried applying the grok on the client:
grok {
  match => ["path", "/home/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
}
I checked on the client console and it adds fields like server and logtype to the logs, but on the logstash-server console the fields are not added.
How can I achieve the above?
Two options:
Set the fields when they are originally shipped. The full logstash and logstash-forwarder (aka lumberjack) allow you to do this.
grok the information from the file path, which my documents have in a field called "file". Check your documents to find the actual field name; see the sketch after this list.
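A minimal server-side sketch of option 2, assuming the shipped events carry the original path in a field named "file" (verify the actual field name in your documents):
filter {
  grok {
    # pull server/logtype/logdate/logfilename out of the shipped path
    match => ["file", "/home/Desktop/Logstash-Input/(?<server>[^/]+)/(?<logtype>[^/]+)/(?<logdate>[\d]+.[\d]+.[\d]+)/(?<logfilename>.*)_log"]
  }
}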

Logstash conditional to check if tag exists?

Is there any way in logstash to use a conditional to check if a specific tag exists?
For example,
grok {
  match => [
    "message", "Some expression to match|%{GREEDYDATA:NOMATCHES}"
  ]
}
If NOMATCHES exists, do something.
How do I verify whether the NOMATCHES tag exists or not?
Thanks.
Just so we're clear: the config snippet you provided is setting a field, not a tag.
Logstash events can be thought of as a dictionary of fields. A field named tags is referenced by many plugins via add_tag and remove_tag operations.
You can check if a tag is set:
if "foo" in [tags] {
...
}
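As a concrete case: grok tags events it fails to match with _grokparsefailure, so a check like if "_grokparsefailure" in [tags] { ... } is a common way to route unmatched events.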
But you seem to want to check if a field contains anything:
if [NOMATCHES] =~ /.+/ {
  ...
}
The above will check that NOMATCHES exists and isn't empty.
Reference: configuration file overview.
The following test for existence also works [tested in Logstash 1.4.2], although it may not validate non-empty:
if [NOMATCHES] {
  ...
}
