Logstash file input not reparsing file

I have the following problem: I need Logstash to reparse files it has already parsed.
The scenario that doesn't work but should:
1. I upload a file to the watched folder.
2. Logstash processes it, saves it to Elasticsearch, and removes it (file_completed_action => "log_and_delete"). Great.
3. I upload the same file again, same name, same content.
4. Logstash doesn't do anything, but I want it to process the file again.
Here is my file input config:
file {
  mode => "read"
  exclude => "*.tif"
  path => ["/home/xmls/*.xml"]
  file_completed_action => "log_and_delete"
  file_completed_log_path => "/var/log/logstash/completed.log"
  sincedb_path => "/dev/null"
  start_position => "beginning"
  # pattern ".*" with what => "previous" folds the whole file into one event
  codec => multiline {
    pattern => ".*"
    what => "previous"
    max_lines => 100000
    max_bytes => "200 MiB"
  }
  type => "my-custom-type-1"
}
sincedb_path is set to /dev/null, so it should not remember processed files. I also tried setting ignore_older to 0, which didn't help.
I also tried changing the queue settings in logstash.yml to persistent, but that didn't work either.
I'm using Logstash 7.5 with logstash-input-file 4.1.11, running on a Linux machine.
When I restart Logstash, the unprocessed files do get processed and cleaned up, but I need this to work without restarting.
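No answer was recorded for this question, but one detail is worth noting: sincedb_path => "/dev/null" only stops the sincedb from being persisted to disk; while Logstash is running, the plugin still tracks files it has read in memory, which would explain why the files are picked up again only after a restart. A possible workaround (a sketch, not a confirmed fix; the sincedb path and the 0.01-day expiry here are assumed values) is to keep a real sincedb file and let its entries expire quickly with sincedb_clean_after:

file {
  mode => "read"
  path => ["/home/xmls/*.xml"]
  file_completed_action => "log_and_delete"
  file_completed_log_path => "/var/log/logstash/completed.log"
  # a real sincedb (not /dev/null) lets the plugin expire old entries
  sincedb_path => "/var/lib/logstash/xml.sincedb"
  # value is in days; 0.01 is roughly 15 minutes (assumed, tune as needed)
  sincedb_clean_after => 0.01
  start_position => "beginning"
}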

Related

How to get log filename in codec plugin inside of file input plugin in Logstash

Below is the code I want to ask about.
input {
  file {
    path => "directory/*.log"
    start_position => "beginning"
    codec => my_own_codec_plugin {
      ....
    }
    sincedb_path => "/dev/null"
  }
}
I have some log files in the same directory, and I can reach them using * in the path. I have created "my_own_codec_plugin" for the file input plugin.
I want to pass the log filename to "my_own_codec_plugin".
I mean, when the path reaches logfile1.log, it should send the name to the codec plugin; then when it reaches logfile2.log, it should send that filename to the codec plugin again.
How can I do this? Thanks for answering.
In your custom codec you're receiving the event, and the event should have a path field with the actual path of the file that you can use.
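If you just need the filename on the event rather than inside the codec itself, a ruby filter can copy it into its own field (a sketch: path is the field the file input sets on each event, and source_file is a made-up field name):

filter {
  ruby {
    # "path" is set by the file input; keep just the basename
    code => "event.set('source_file', File.basename(event.get('path').to_s))"
  }
}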

Logstash - Issue parsing json_lines format

Probably a n00b issue trying to get the json_lines codec to read data from a file.
Here's what my config file looks like:
input {
  file {
    path => ['C:/dev/logstash-5.1.2/data/sample.log']
    start_position => "beginning"
    sincedb_path => 'C:/dev/logstash-5.1.2/data/.sincedb'
    codec => "json_lines"
  }
}
output {
  file {
    path => ['C:/dev/logstash-5.1.2/data/sample-output.log']
    flush_interval => 0
  }
}
Here's what my super simple input file looks like:
{"id":1,"name":"A green door","price":12.50,"tags":["home","green"]}
{"id":2,"name":"A red door","price":12.50,"tags":["home","red"]}
When I switch the codec to plain, the file gets read and the output gets written as expected. But no matter what I do, I can't get the json_lines codec to read and write this data.
I'm pretty new to Logstash, so this might just be something simple that I'm not able to wrap my head around. Any help would be most appreciated!
Cheers!
The json_lines documentation has this warning:
NOTE: Do not use this codec if your source input is line-oriented JSON, for example, redis or file inputs. Rather, use the json codec. More info: This codec is expecting to receive a stream (string) of newline terminated lines. The file input will produce a line string without a newline. Therefore this codec cannot work with line oriented inputs.
Use the json codec instead.
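Per that note, a working version of the input above just swaps the codec (paths unchanged from the question):

input {
  file {
    path => ['C:/dev/logstash-5.1.2/data/sample.log']
    start_position => "beginning"
    sincedb_path => 'C:/dev/logstash-5.1.2/data/.sincedb'
    # json, not json_lines: the file input already emits one line per event
    codec => "json"
  }
}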

Puppet code deleted a file instead of replacing it

I am having issues with a Puppet module that should replace the /etc/ssh/sshd_config file based on the Red Hat version. The issue is that after applying the code, Puppet deleted the file instead of replacing it.
Can someone please suggest what is wrong with my code?
Here is my Puppet manifest file:
class os_vul_ssh {
  case $::operatingsystemmajrelease {
    '6': { $sshconfigfile = 'sshd_config.rhel6' }
    '7': { $sshconfigfile = 'sshd_config.rhel7' }
  }
  package { "openssh-server":
    ensure => installed,
  }
  service { 'sshd':
    ensure  => "running",
    enable  => true,
    require => Package["openssh-server"],
  }
  file { "/etc/ssh/sshd_config":
    owner   => root,
    group   => root,
    mode    => '0644',
    source  => "puppet:///modules/os_vul/${::sshconfigfile}",
    require => Package["openssh-server"],
    notify  => Service["sshd"],
  }
}
file { "/etc/ssh/sshd_config":
ensure => file, <----- this is missing
owner => root,
group => root,
mode => '0644',
source => "puppet:///modules/os_vul/${::sshconfigfile}",
require => Package["openssh-server"],
notify => Service["sshd"],
}
Might be more going on here, but this is the first issue that jumps out at me.
By the way, you can clean up your code with this:
file { "/etc/ssh/sshd_config":
ensure => file,
owner => root,
group => root,
mode => '0644',
source => "puppet:///modules/os_vul/sshd_config.rhel${::operatingsystemmajrelease}",
require => Package["openssh-server"],
notify => Service["sshd"],
}
and if you are using Facter 3, then consider changing your fact to:
$facts['operatingsystemmajrelease']
Also note that sshconfigfile is a local variable and should be referenced in your file resource as the local variable $sshconfigfile, not the global $::sshconfigfile.
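Put together, a version that keeps the case statement would reference the variable locally (a sketch reusing the resource from the answer above):

file { "/etc/ssh/sshd_config":
  ensure  => file,
  owner   => root,
  group   => root,
  mode    => '0644',
  # local variable set by the case statement, so no $:: prefix
  source  => "puppet:///modules/os_vul/${sshconfigfile}",
  require => Package["openssh-server"],
  notify  => Service["sshd"],
}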

Index not creating from couchdb with Logstash

I am using this config file to create an index and import data from CouchDB:
input {
  couchdb_changes {
    db => "roles"
    host => "localhost"
    port => 5984
  }
}
output {
  elasticsearch {
    document_id => "%{[@metadata][_id]}"
    document_type => "%{[@metadata][type]}"
    host => "localhost"
    index => "roles_index"
    protocol => "http"
    port => 9200
  }
}
I was able to run Logstash with this config file and import data once. I closed the command prompt to shut down Logstash, then reran cmd prompt and Logstash with the config file again, but now I cannot see any index created. Is there anything that I might be doing wrong here? I am using Ctrl+C to kill Logstash in the cmd prompt. Will appreciate any help.
Thanks.
In case someone comes here looking for the answer to the same thing: I set sequence_path => "my_couchdb_seq" in the couchdb_changes { } section of my config file and it worked. Each time I want to run Logstash to create the index, the value in this file should be reset to 0. See https://discuss.elastic.co/t/index-not-creating-from-couchdb-with-logstash/27848/9 for details.
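Applied to the input above, that looks like the following; the file named by sequence_path stores the last processed CouchDB sequence number, which is why resetting its contents to 0 makes the plugin re-read the changes feed:

input {
  couchdb_changes {
    db => "roles"
    host => "localhost"
    port => 5984
    # persists the last processed sequence; reset the file's contents to 0
    # to reprocess the whole changes feed
    sequence_path => "my_couchdb_seq"
  }
}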

Logstash log FTP input

Hi there,
My log files are stored on a remote server where the directory is only accessible via browser.
Each day, if there are new log files uploaded to the server, they are stored like this:
ftp://serverip.com/logs/2014/10/08/log.txt
ftp://serverip.com/logs/2014/10/08/log2.txt
ftp://serverip.com/logs/2014/10/08/log.xml
ftp://serverip.com/logs/2014/10/08/log.xlx
The timestamp would be the time each file was uploaded to the server (I can use curl to see its timestamp).
input {
  exec {
    codec => plain { }
    command => "curl ftp://serverip.com/logs/2014/10/08/" # this lists the dir
    interval => 3000
  }
}
output {
  stdout { codec => rubydebug }
  #elasticsearch { embedded => true }
}
The problem is how I can combine/link the timestamp with the event file in the directories, because there is no timestamp inside the log files themselves.
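No answer was recorded here, but one possible direction (a sketch, assuming the curl output is a Unix-style FTP listing whose lines end with a month, day, time, and filename; older entries that show a year instead of a time are not handled) is to grok each listing line and feed the extracted time into the date filter:

filter {
  # pull "Mon DD HH:MM filename" off the end of each listing line
  grok {
    match => { "message" => "(?<listed_at>%{MONTH} +%{MONTHDAY} +%{HOUR}:%{MINUTE}) +%{NOTSPACE:filename}" }
  }
  # use the listing time as the event's @timestamp (year is assumed current)
  date {
    match => [ "listed_at", "MMM dd HH:mm", "MMM d HH:mm" ]
  }
}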
