sourceloader - No configuration found in the configured sources - logstash

When I run the command sudo bin/logstash -f logstash.conf, I get this error:
[ERROR] 2018-04-05 10:22:32.872 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] sourceloader - No configuration found in the configured sources.
I don't know what to do anymore. Reinstalling Logstash did not help, and changing the Logstash config did not help either.
Can anyone tell me how to fix this problem?
Here is my current config:
input {
  file {
    type => "rails logs"
    path => "/home/user/apps/demo/log/logstash_development.log"
    codec => json {
      charset => "UTF-8"
    }
  }
}
output {
  # Print each event to stdout.
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    # Setting 'embedded' will run a real elasticsearch server inside logstash.
    # This option below saves you from having to run a separate process just
    # for ElasticSearch, so you can get started quicker!
    embedded => true
  }
}

The logstash.conf file is inside the config folder, not in bin, so you need to point -f at it explicitly.
Use a command like W:\logstash\bin>logstash -f .\..\config\logstash.conf.

Related

Logstash throws unexpected error: <ArgumentError: Setting "" hasn't been registered>

I followed the instructions at https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jmx.html to set up Cassandra JMX metric monitoring.
My logstash.yml is as follows:
input {
  jmx {
    path => "/home/foo/elastic/logstash"
    polling_frequency => 15
    type => "jmx"
    nb_thread => 4
  }
}
output {
  stdout { codec => rubydebug }
}
Under /home/foo/elastic/logstash, I defined a jmx.conf file with the following info:
//Required, JMX listening host/ip
"host" : "192.168.1.139",
//Required, JMX listening port
"port" : 7199,
//Optional, the username to connect to JMX
"username" : "foo",
//Optional, the password to connect to JMX
"password": "foo",
//Optional, use this alias as a prefix in the metric name. If not set use <host>_<port>
"alias" : "cassandra",
Then I run Logstash with the command:
sudo bin/logstash -f /etc/logstash/logstash.yml --path.settings /etc/logstash --debug
I get the following error:
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[FATAL] 2017-10-28 22:57:23.812 [main] runner - An unexpected error occurred! {:error=>#, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/settings.rb:32:in `get_setting'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:64:in `set_value'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:83:in `merge'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:83:in `merge'", "/usr/share/logstash/logstash-core/lib/logstash/settings.rb:135:in `validate_all'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:243:in `execute'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:67:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:204:in `run'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/clamp-0.6.5/lib/clamp/command.rb:132:in `run'", "/usr/share/logstash/lib/bootstrap/environment.rb:71:in `(root)'"]}
I figured it out myself.
There are two types of settings for Logstash: one for Logstash itself and one for the pipeline. Typically the Logstash settings live in /etc/logstash, and the pipeline configuration lives in /etc/logstash/conf.d. I used a pipeline configuration as the Logstash settings file, which caused all the errors.
The Problem
The error message is not very clear, but the logstash.yml file should contain data in YAML format, and here it doesn't (your input { jmx { path => ... is clearly not YAML).
Why does it matter ?
Because Logstash has two types of configuration files:
pipeline configuration files, with the .conf extension, which define the Logstash processing pipeline, and
settings files, which specify options that control Logstash startup and execution (logstash.yml, pipelines.yml, jvm.options, etc.). This means:
If you want to configure a data input, filter, or output, you should put your configuration in a logstash.conf file (or any file with a .conf extension).
If you want to configure Logstash itself and do not want to pass options or flags on the command line, you can set them in logstash.yml.
How to fix it?
You can ignore (or delete) the file /etc/logstash/logstash.yml if you do not have any options to set for Logstash startup behaviour.
Set the content of the file /etc/logstash/pipelines.yml to the following
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
Put your Logstash data pipeline configuration in the file /etc/logstash/conf.d/logstash.conf as follows:
input {
  jmx {
    path => "/home/foo/elastic/logstash"
    polling_frequency => 15
    type => "jmx"
    nb_thread => 4
  }
}
output {
  stdout { codec => rubydebug }
}
Et voilĂ  :=)
I believe you made a mistake in the config file - it should be a JSON object, but I don't see the opening { at the beginning of your config - see the example config file in the article you refer to.
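For reference, a minimal sketch of what the jmx.conf file could look like with the braces in place. The host/port/username/password/alias values are taken from the question; the "queries" entry is an assumption illustrating the plugin's documented format, with java.lang:type=Memory as a hypothetical metric to poll:

```json
{
  // Required, JMX listening host/ip
  "host" : "192.168.1.139",
  // Required, JMX listening port
  "port" : 7199,
  // Optional, the username to connect to JMX
  "username" : "foo",
  // Optional, the password to connect to JMX
  "password" : "foo",
  // Optional, use this alias as a prefix in the metric name; defaults to <host>_<port>
  "alias" : "cassandra",
  // Required, the list of JMX metrics to retrieve
  "queries" : [
    {
      "object_name" : "java.lang:type=Memory",
      "object_alias" : "Memory"
    }
  ]
}
```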
I tried the following steps and it worked for me:
The input and output sections you specified in the logstash.yml file should be in a .conf file, e.g. jmx.conf.
Comment out everything in logstash.yml so that the default settings are loaded by Logstash.
It resolved the issue.

Logstash HTTP input plugin configuration error: Expected one of #

I am attempting to configure the Logstash HTTP input plugin following the official documentation. I have the following config saved in 10-syslog.conf:
input {
  port => 8080
  user => elkadmin
  password => "xxxx"
  ssl => off
}
output {
  elasticsearch {
    host => "127.0.0.1"
    codec => "json"
    index => "logstash-%{+YYYY.MM.dd}"
    protocol => "http"
  }
  stdout { codec => rubydebug }
}
Logstash does start successfully when using the following command:
sudo service logstash restart
I ran the configuration check on the input plugin's configuration file using:
/opt/logstash/bin/logstash -configtest -f 10-syslog.conf
The config check came back with the following error:
Error: Expected one of #, { at line 2, column 9 (byte 18) after input {
Looking at the Logstash log, I can see that it may be caused by permissions:
{:timestamp=>"2016-10-16T20:41:30.900000+0000", :message=>"The error reported is: \n Permission denied - /etc/logstash/conf.d/10-syslog.conf"}
I am very much unsure how to proceed here and any help and/or guidance would be more than appreciated.
You're simply missing the name of the http input plugin in your input section (the typo comes from the blog article you linked to):
input {
  http {     <--- add this
    port => 8080
    user => elkadmin
    password => "xxxx"
    ssl => off
  }
}
Also note that in your elasticsearch output plugin, host should read hosts, and protocol is not supported anymore.
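Putting those two fixes together, the output section could be adjusted along these lines (the hosts array form follows current plugin conventions; port 9200 is an assumption based on the Elasticsearch default, so check it against your setup):

```
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    codec => "json"
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
```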
Your link is old (from 2015); you should preferably use the latest documentation at https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html

Logstash reparses file if it is just renamed

I am parsing files using Logstash and storing them in MongoDB. I do not want Logstash to reparse a file if the file is just renamed. How can I achieve this?
I included the sincedb_path field and my config looks like this:
input {
  file {
    path => "/file.log"
    sincedb_path => "/logstash"
  }
}
output {
  mongodb {
    collection => "collect"
    database => "db"
    uri => "mongodb://localhost"
  }
}
This gives the following error:
A plugin had an unrecoverable error. Will restart this plugin.
Error: Permission denied - /logstash.13562.1005.292789 or /logstash {:level=>:error}
Errno::EACCES: Permission denied - /logstash.13562.1005.545871 or /logstash
rename at org/jruby/RubyFile.java:987
atomic_write at /logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.4/lib/filewatch/helper.rb:39
_sincedb_write at /logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.4/lib/filewatch/tail.rb:236
sincedb_write at /logstash/vendor/bundle/jruby/1.9/gems/filewatch-0.6.4/lib/filewatch/tail.rb:206
teardown at /logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-1.0.0/lib/logstash/inputs/file.rb:157
inputworker at /logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:203
synchronize at org/jruby/ext/thread/Mutex.java:149
inputworker at /logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:203
start_input at /logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:171
How can I solve this?
The default behavior of Logstash is to track files by inode in the sincedb, which handles renamed files properly. If it is not working for you, chances are your sincedb is set to a directory/file that Logstash can't write to (in your case, /logstash). You can explicitly set where your sincedb lives: http://logstash.net/docs/1.4.2/inputs/file#sincedb_path
file {
  sincedb_path => '/some/writable/directory'
}

Logstash: since_db not getting created

I was playing with the sincedb option and it appears that the sincedb file isn't getting created. Below is my Logstash file configuration. I have verified that I can create the file manually, so there is no permission issue. I would appreciate it if anyone could throw more light on this.
input {
  file {
    path => "/home/tom/fileData/*.log"
    type => "log"
    sincedb_path => "/home/tom/sincedb"
    start_position => beginning
  }
}
Can the user running Logstash write to the sincedb_path location? If not, that is what needs to be fixed. Other than that, your Logstash configuration should work fine.
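One quick way to confirm is a shell check, run as the same user that runs Logstash (the path is taken from the question's config; adjust it to yours):

```shell
# Try to create/touch the sincedb file; report whether it is writable.
SINCEDB=/home/tom/sincedb
if touch "$SINCEDB" 2>/dev/null; then
  echo "writable"
else
  echo "not writable"
fi
```

If it prints "not writable", fix the ownership or permissions of the parent directory, or point sincedb_path somewhere the Logstash user can write.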

Use regex as input file path in Logstash

I would like to parse a directory of log files with Logstash.
When the logs are named like this:
server-20140604.log
server-20140603.log
server-20140602.log
there is no problem; I am using globs like this:
input {
  file {
    path => ["D:/*.log"]
  }
}
But my logs are named like this:
server.log
server.log.1
server.log.2
client.log
client.log.1
client.log.2
So I would like to know how to tell Logstash to parse all the files in the folder whose names start with "server". I really need to do it like that, because I have other files in the folder (i.e. client logs) that I don't want to parse but also cannot remove from the folder.
With this configuration you can parse all the log files that start with the prefix server:
input {
  file {
    path => ["D:/server*"]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
I think the problem you may have met is the start_position config. It determines where Logstash starts to read the logs. Please refer to here. Remember this option only modifies first-contact situations, where a file is new and has not been seen before. If a file has already been seen before, this option has no effect.
When you stop Logstash, it saves a .sincedb* file in your home directory. Next time you start it, Logstash will read each file according to the .sincedb*. If you do not write new logs to server.log, Logstash will never parse the old logs.
What you can try is to delete all the .sincedb* files before you start Logstash, and add start_position to your config. In your comment you said that if you overwrite server.log, Logstash can parse the file from the beginning; that is because Logstash detects it as a new file and the .sincedb* does not save any information about it, so Logstash parses it. Try to find your .sincedb* files and delete them.
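A minimal sketch of that cleanup step. The location is the default for older Logstash versions (the home directory of the user running Logstash); stop Logstash first and adjust the path if your sincedb_path is set elsewhere:

```shell
# Remove all sincedb files so previously seen files are treated as new
# on the next Logstash start.
rm -f ~/.sincedb*
```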