Logstash can't write output file

I want to write an output file with Logstash, but Logstash can't write the file. The file stays empty, and I can see the logs on the Kibana dashboard.
My output.conf file:
output {
  file {
    path => "/home/freed/example.txt"
    codec => line { format => "custom format: %{message}" }
  }
}
Can anyone help?

I suspect you have a permission problem: the logstash user can't access the file.
Check your log: /var/log/logstash/logstash-plain.log
In your example, the logstash user must have access to /home/freed and must own the file example.txt.
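For example, a quick way to check and fix this from the shell (a sketch, assuming Logstash runs as the default logstash user created by a package install):
# see which user the Logstash process runs as (usually "logstash")
ps aux | grep logstash
# let that user traverse the directory and own the output file
sudo chmod o+rx /home/freed
sudo touch /home/freed/example.txt
sudo chown logstash:logstash /home/freed/example.txt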

Related

How to automatically stop a Logstash process instance after it has read the file

Below is my code. With it, Logstash reads the file until the end, then stops reading, but the process stays alive. I want the process to stop when it finishes reading. How can I do this?
file {
  path => "directory/*.log"
  start_position => "beginning"
  mode => "read"
}
Thanks for answering
Try using the stdin input plugin instead of the file input, and pass the file as input on the command line when starting Logstash.
e.g.
bin/logstash -f readFileFromStdin.conf < /path_to_file/test.log
For multiple files you could pipe them in:
cat /path_to_file/*.log | bin/logstash -f readFileFromStdin.conf
or
cat /path_to_file/*.log > /tmp/myLogs
bin/logstash -f readFileFromStdin.conf < /tmp/myLogs
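A minimal readFileFromStdin.conf could look like this (a sketch; swap stdout for whatever output you actually need). Because the stdin input ends when the piped input ends, Logstash exits on its own once the whole file has been read:
input {
  stdin { }
}
output {
  stdout { codec => rubydebug }
}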

input file start_position => "beginning" doesn't work even after deleting .sincedb files

Version: ElasticSearch-5.2.1/Logstash-5.2.1/Kibana-5.2.1
OS: Windows 2008
I've just started working on the ELK Stack and am facing some problems loading data.
I've got the following config:
input {
  file {
    path => "D:\server.log"
    start_position => beginning
  }
}
filter {
  grok {
    match => ["message", "\[%{TIMESTAMP_ISO8601:timestamp}\] %{GREEDYDATA:log_message}"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
  }
}
I've deleted the .sincedb files
Yet when I look at the log data in Kibana, I can only see entries from the time I first started parsing, even though my log file holds 2-3 months' worth of data.
In your file input you're missing ignore_older, which by default will stop you from re-reading old files, and I believe you're also missing the sincedb_path property. Have a look at this answer by Steve Shipway for a better explanation of these two properties in the file input.
So your input could look something like this:
input {
  file {
    path => "D:\server.log"
    start_position => "beginning" <-- you've missed out the quotes here
    ignore_older => 0
    sincedb_path => "/dev/null"
  }
}
Note that setting sincedb_path to /dev/null will make the files be read from the beginning every time, which isn't a good solution at all (on Windows, the equivalent is "NUL" rather than "/dev/null"). But deleting the .sincedb file should work, I reckon. If you really want to pick up lines from where you left off, you need the .sincedb file to hold the last position that was recorded. You could have a look at this for a detailed illustration.
Hope this helps!
In my case, when you run systemctl restart logstash, even if you have deleted the sincedb file, Logstash saves a new sincedb file before the process closes.
If you really want to read a file from the beginning, you should:
stop the logstash service: sudo systemctl stop logstash
delete the sincedb file from the /var/lib/logstash/plugins/inputs/file or /usr/share/logstash/data/plugins/inputs/file directory
start the logstash service: sudo systemctl start logstash
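Putting that together (a sketch; the sincedb location depends on how Logstash was installed, so check both of the default paths above):
sudo systemctl stop logstash
sudo rm /var/lib/logstash/plugins/inputs/file/.sincedb_*
sudo systemctl start logstash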

Logstash not running

I have a Logstash instance, version 2.3.1, which isn't running when started with the command
sudo service logstash start
Whenever I run this command, it returns logstash started, yet a few moments later when I check the status, I find that Logstash isn't running. However, when I start Logstash from /opt to get output on the terminal, it runs without any error.
Note that the logstash.err and logstash.stdout files are empty and a logstash.log file is nowhere to be found. I've also set LS_GROUP to adm in the init.d script, which fixed the same issue on another instance, but even that doesn't seem to work now. Any help would be appreciated!
Logstash can show this behavior on an Ubuntu system. To get around it, change the Logstash user's group in /etc/init.d/logstash to adm, which stands for admin, and you're good to go.
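For example (a sketch; the exact variable name and its location in the init script can differ between Logstash versions):
# in /etc/init.d/logstash, change the group the service runs as:
LS_GROUP=adm
# then restart the service
sudo service logstash restart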
This is normal behaviour of Logstash.
Can you first test whether your Logstash instance is working correctly?
Windows:
Go to the bin folder of your Logstash install
and type logstash
Linux:
Enter this command in the prompt (bin folder of your logstash instance)
/opt/logstash/bin/logstash
Both:
If you get No command given ..., your Logstash instance has the correct setup.
You can always run your Logstash instance with this command
logstash -e 'input { stdin { } } output { stdout {} }'
After this you can enter some text values and they will output to your console.
If this all works you can be sure that your Logstash instance is running correctly.
You may ask yourself why this is. It's because Logstash waits to start until it gets a config to run with or some other option.
If you want to start Logstash automatically on boot, you need to use this command:
sudo update-rc.d logstash defaults 96 9
Actually, you should read the Logstash guide. In the "Getting Started" section, the official documentation has the correct way for you to start a Logstash job.
First, you should write a config file such as "std.conf" that looks like this:
input {
  stdin {
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Then start your Logstash:
bin/logstash -f conf/std.conf
If you want this job to run in the background (for example, to get some log files into Elasticsearch), you may also need to add "&" at the end of the command, like this:
bin/logstash -f conf/getlog.conf &
With this file (std.conf) and this command, your Logstash will start up, and if you type any word in your terminal, it will be printed back out, like this:
{
       "message" => "hello",
      "@version" => "1",
    "@timestamp" => "2016-08-06T19:47:36.543Z",
          "host" => "bag"
}
Now you have got Logstash operating normally. If you need more information, go to the official documentation of Logstash.
Try this and keep going; it's easy!

logstash file input glob not working

I'm starting to explore Logstash and this is probably a newbie question, but as far as I have studied, this should be working and it isn't.
I have a very simple configuration that just reads log files and dumps them to stdout. It works for a single file and for a list (array) of files, but if I use a glob that matches the same files, nothing happens.
I've tested the glob with a short ruby script and it lists the correct files.
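(A one-liner like this is enough for such a check; this is a sketch rather than the exact script used:)
ruby -e 'puts Dir.glob("/home/lpacheco/*.log")'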
Here is my configuration:
input {
  file {
    path => "/home/lpacheco/*.log"
    start_position => "beginning"
  }
}
output {
  stdout {}
}
If I run this with --verbose I get:
{:timestamp=>"2015-09-23T11:26:47.008000-0300", :message=>"Registering file input", :path=>["/home/lpacheco/*.log"], :level=>:info}
{:timestamp=>"2015-09-23T11:26:47.068000-0300", :message=>"No sincedb_path set, generating one based on the file path", :sincedb_path=>"/home/.sincedb_6da9e0c63851aa9d5840ba19efd196cb", :path=>["/home/lpacheco/*.log"], :level=>:info}
{:timestamp=>"2015-09-23T11:26:47.089000-0300", :message=>"Pipeline started", :level=>:info}
Nothing else happens.
I'm using:
logstash 1.5.4
OpenJDK Runtime Environment (IcedTea 2.5.6) (7u79-2.5.6-0ubuntu1.14.04.1)
ruby 1.9.3p484 (2013-11-22 revision 43786) [i686-linux]
You are apparently confronted with a sincedb issue. Logstash saves the last read position of a logfile in a file called sincedb. The sincedb is keyed by the inode of the log file, so renaming files or using globs doesn't have any effect.
Try this input for testing:
input {
  file {
    path => "/home/lpacheco/*.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
From the latest docs:
Path of the sincedb database file (keeps track of the current position of monitored log files) that will be written to disk. The default will write sincedb files to some path matching $HOME/.sincedb* NOTE: it must be a file path and not a directory path.
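While testing, you can also peek at the generated sincedb file: each line records an inode, major/minor device numbers, and the last read position. A sketch of such a check (the second line is illustrative, not real output):
cat /home/.sincedb_6da9e0c63851aa9d5840ba19efd196cb
262394 0 51713 1024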
For more information, take a look at related questions like this.

Why is Logstash not excluding its own log file?

According to the Logstash docs, this should work; but Logstash keeps causing a recursion by logging its own stdout log to itself...
What is incorrect about my exclude config?
input {
  file {
    path => "/var/log/**/*"
    exclude => ["**/*.gz", "logstash/*"]
  }
}
output {
  tcp {
    host => "1.2.3.4"
    port => 1234
    mode => client
    codec => json_lines
  }
  stdout { codec => rubydebug }
}
I see results with the path set to /var/log/logstash/logstash.stdout when it should be ignoring them.
(I've tested this by completely deleting the logs in the /var/log/logstash dir and restarting)
I've tried these in the array for exclusion:
logstash/*
**/logstash/*
/var/log/logstash/* #This is incorrect according to docs
Exclusion patterns for Logstash's file input are, as documented, matched against the bare filename of encountered files, so the three patterns in the question won't ever match anything. To exclude Logstash's log files and gzipped files, use logstash.* and *.gz as exclusion patterns.
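Applied to the configuration in the question, the input would become something like this (a sketch):
input {
  file {
    path => "/var/log/**/*"
    # exclude patterns match against the bare filename, not the full path
    exclude => ["*.gz", "logstash.*"]
  }
}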
