Cannot run Logstash 8.1.0 with the command - logstash

I have a problem running a defined configuration file in Logstash through its command.
Here is my conf file:
input {
  file {
    type => "userslog"
    path => "C:\Users\aaa\Downloads\logstash-8.1.0\users-ms.log"
  }
  file {
    type => "albumslog"
    path => "C:\Users\aaa\Downloads\logstash-8.1.0\albums-ms.log"
  }
}
output {
  if [type] == "userslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "userslog-%{+YYYY.MM.dd}"
    }
  } else if [type] == "albumslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "albumslog-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}
Here is the result:
C:\Users\aaa\Desktop\logstash-8.1.0\bin>logstash logstash-simple.conf
"Using bundled JDK: ."
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
[FATAL] 2022-03-17 12:03:32.267 [main] Logstash - Logstash stopped processing because of an error: (NameError) missing class name (`org.apache.http.ımpl.client.StandardHttpRequestRetryHandler')
org.jruby.exceptions.NameError: (NameError) missing class name (`org.apache.http.ımpl.client.StandardHttpRequestRetryHandler')
Changing 'Java::OrgApacheHttpImplClient::StandardHttpRequestRetryHandler' to 'Java::OrgApacheHttp.impl.client::StandardHttpRequestRetryHandler' didn't work either.
How can I fix it?
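For what it's worth, the mangled package name (org.apache.http.ımpl.client, with a Turkish dotless ı where "impl" should be) suggests the JVM is lowercasing class names under a Turkish system locale. A hedged workaround sketch, assuming the locale really is the cause: force an English locale via LS_JAVA_OPTS (the variable Logstash itself recommends for passing JVM parameters) before starting:

:: Assumption: the dotless-ı class name comes from the Turkish system locale.
:: These standard JVM flags force English locale handling for this run.
set LS_JAVA_OPTS=-Duser.language=en -Duser.country=US
logstash -f logstash-simple.conf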

Related

Logstash 7.10.2 failed to start on Windows

I tried starting Logstash with the command below:
logstash-7.10.2\logstash -f logstash.conf
logstash.conf
input {
  file {
    path => "D://server.log"
    start_position => "beginning"
    type => "logs"
  }
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:logtime} \[%{NOTSPACE:thread}\] \[%{LOGLEVEL:loglevel}\] %{GREEDYDATA:line}" }
  }
}
output {
  if "ERROR" in [loglevel] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash"
    }
  }
}
The command prompt displayed the text below and Logstash did not start.
Using JAVA_HOME defined java: C:\Program Files\Java\jdk1.8.0_221;
WARNING, using JAVA_HOME while Logstash distribution comes with a bundled JDK
warning: ignoring JAVA_OPTS=-Xms64m -Xmx128m -XX:NewSize=64m -XX:MaxNewSize=64m -XX:PermSize=64m -XX:MaxPermSize=64m; pass JVM parameters via LS_JAVA_OPTS
No error logs were created.
Have you tried starting Logstash in debug mode?
--log.level debug
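For example, combined with the start command from the question:

logstash-7.10.2\logstash -f logstash.conf --log.level debug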
The pipeline looks okay. Can you try adding the output below to see whether the pattern and the log data match? Just to rule out any grok parse failures.
output {
  stdout { codec => rubydebug }
  if "ERROR" in [loglevel] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logstash"
    }
  }
}

Logstash doesn't read from the configured input file

I am trying to configure my Logstash to read from a specified log file. When I configure it to read from stdin, it works as expected: my input results in a message from Logstash and shows up in my Kibana UI.
$ cat /tmp/logstash-stdin.conf
input {
  stdin {}
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
$ ./logstash -f /tmp/logstash-stdin.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hellloooo
{
    "@version" => "1",
    "host" => "myhost.com",
    "@timestamp" => 2017-11-17T16:05:41.595Z,
    "message" => "hellloooo"
}
However, when I run Logstash with a file input I get no indication that the file is loaded into Logstash, and it does not show in Kibana.
$ cat /tmp/logstash-simple.conf
input {
  file {
    path => "/tmp/test_log.txt"
    type => "syslog"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
$ ./logstash -f /tmp/logstash-simple.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Any suggestions on how I can troubleshoot why my Logstash is not ingesting the configured file?
By default the file input plugin starts reading at the end of the file, so only lines added after Logstash starts will be processed. To read all existing lines upon startup, add the option start_position => "beginning" to the configuration, as explained in the documentation.
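A minimal sketch of the adjusted input from the question (the sincedb_path line is an optional extra for testing, not part of the answer above; it makes Logstash forget its read position between restarts):

input {
  file {
    path => "/tmp/test_log.txt"
    type => "syslog"
    # Read existing lines from the top instead of tailing new ones.
    start_position => "beginning"
    # Optional, for testing only: do not persist the read position.
    sincedb_path => "/dev/null"
  }
}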

Logstash hangs with error sized_queue_timeout

We have a Logstash pipeline in which numerous logstash-forwarders forward logs to a single Logstash instance. We have often observed that Logstash hangs with the error below:
[2016-07-22 03:01:12.619] WARN -- Concurrent::Condition: [DEPRECATED] Will be replaced with Synchronization::Object in v1.0.
called on: /opt/logstash-1.5.3/vendor/bundle/jruby/1.9/gems/logstash-input-lumberjack-1.0.2/lib/logstash/sized_queue_timeout.rb:16:in `initialize'
Exception in thread ">output" java.lang.UnsupportedOperationException
at java.lang.Thread.stop(Thread.java:869)
at org.jruby.RubyThread.exceptionRaised(RubyThread.java:1221)
at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:112)
at java.lang.Thread.run(Thread.java:745)
Our Logstash config looks like this:
input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
  }
}
filter {
  if [env] != "prod" and [env] != "common" {
    drop {}
  }
  if [message] =~ /^\s*$/ {
    drop {}
  }
}
output {
  if "_jsonparsefailure" in [tags] {
    file {
      path => "/var/log/shop/parse_error/%{env}/%{app}/%{app}_%{host}_%{+YYYY-MM-dd}.log"
    }
  } else {
    kafka {
      broker_list => ["kafka:9092"]
      topic_id => "logstash_logs2"
    }
  }
}
After restarting Logstash it starts working again. Can someone let me know why this problem occurs, and how we can get around it without restarting Logstash every time?
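One pattern that fits this trace (an assumption, not confirmed): the kafka output stalls, the pipeline backs up, and the lumberjack input's internal sized queue times out. A hedged mitigation sketch, assuming your logstash-input-lumberjack version supports the congestion_threshold circuit-breaker option (verify before relying on it); the real fix is unblocking the downstream output:

input {
  lumberjack {
    port => 6782
    codec => json {}
    ssl_certificate => "/opt/logstash-1.5.3/cert/logstash-forwarder.crt"
    ssl_key => "/opt/logstash-1.5.3/cert/logstash-forwarder.key"
    type => "lumberjack"
    # Assumption: congestion_threshold (in seconds) is the circuit-breaker
    # timeout in this plugin version; raising it buys time when the
    # output stalls briefly.
    congestion_threshold => 30
  }
}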

Logstash in EC2 can't send log data to AWS Elasticsearch service

In EC2 I have configured Logstash as below:
input {
  # beats {
  #   port => 5044
  # }
  file {
    type => "adjustlog"
    path => "/etc/logstash/conf.d/sample.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "adjustlog" {
    grok {
      match => {
        "message" => [
          "%{TIMESTAMP_ISO8601:timestamp},(%{USERNAME:userId})?,%{USERNAME:setlkey},%{USERNAME:uniqueId},%{NUMBER:providerId},%{USERNAME:itemCode},%{USERNAME:voucherCode},%{USERNAME:samsCode},(%{USERNAME:serviceType})?"
        ]
      }
    }
  } else {
    drop {}
  }
}
output {
  elasticsearch {
    hosts => ["search-*.es.amazonaws.com:80"]
    index => "test"
  }
  stdout { codec => rubydebug }
}
but Logstash can't create an index in AWS Elasticsearch or send the log data. (However, curl and wget commands work fine; I can create an index using curl.)
The error logs are:
Attempted to send a bulk request to Elasticsearch configured at '["http://search-*.es.amazonaws.com/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"search*.es.amazonaws.com:80 failed to respond", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/http/manticore.rb:84:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/base.rb:257:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/http/manticore.rb:67:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/client.rb:128:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.17/lib/elasticsearch/api/actions/bulk.rb:88:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:172:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.7.0-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:301:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:301:in `output_batch'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:232:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.3-java/lib/logstash/pipeline.rb:201:in `start_workers'"], :client_config=>{:hosts=>["http://search*.es.amazonaws.com/"], :ssl=>nil, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"http", :user=>nil, :password=>nil, :port=>80}}, :level=>:error}
What should I check to debug this?
I found this when trying to fix a similar issue. AWS has changed how it implements Elasticsearch node discovery. It will work fine until Logstash tries to discover more hosts, at which point it breaks. Restarting Logstash temporarily, but inconsistently, fixes the issue. curl and wget work fine too.
:message=>"Cannot get new connection from pool.", :class=>"Elasticsearch::Transport::Transport::Error", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:193:in `perform_request'",
Elasticsearch would work for a bit but then stop ingesting data.
Old config, which failed:
output {
  elasticsearch {
    hosts => ["https://search-*.us-east-1.es.amazonaws.com"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Logstash tries to get a list of hosts from Elasticsearch, but AWS's implementation has changed the format of the data returned. For more details on the specifics, see:
https://forums.aws.amazon.com/thread.jspa?threadID=222600
https://discuss.elastic.co/t/elasitcsearch-ruby-raises-cannot-get-new-connection-from-pool-error/36252/11
The working config (identical except that sniffing => true is removed, so Logstash sticks to the configured endpoint instead of trying to discover nodes):
output {
  elasticsearch {
    hosts => ["https://search-*.us-east-1.es.amazonaws.com"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Unable to push data from file to Elasticsearch

I am trying to read JSON file data and visualize it in Kibana. The following is my stack:
read JSON file --> Logstash --> Elasticsearch --> Kibana (UI)
I tried the following simple configuration and it works fine all the way through to Kibana.
input { stdin { } }
output {
  elasticsearch { host => localhost }
}
When I tried to read the data from a file and push it to Elasticsearch, I was not able to see the output.
input {
  stdin {
    type => "stdin-type"
  }
  file {
    type => "jsonlog"
    # Wildcards work, here :)
    path => [ "/Users/path/logstash-1.5.0/sample.json" ]
    codec => json
  }
}
output {
  stdout { }
  elasticsearch { embedded => true }
}
Output: it says Logstash started, but I could not see the results in Elasticsearch or on stdout.
Jun 10, 2015 4:32:10 PM org.elasticsearch.node.internal.InternalNode start
INFO: [logstash-MacBook-Pro.local-12298-9782] started
Logstash startup completed
Software versions:
Logstash -> 1.5.0
Elasticsearch -> 1.5.2
Thanks in advance!
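As with the earlier question "Logstash doesn't read from the configured input file", a likely cause (an assumption, not confirmed by the poster) is that the file input starts tailing at the end of the file, so an unchanged sample.json never produces events. A minimal sketch of the adjusted input:

input {
  file {
    type => "jsonlog"
    path => [ "/Users/path/logstash-1.5.0/sample.json" ]
    codec => json
    # Read the file from the top instead of tailing from the end.
    start_position => "beginning"
    # For testing only: forget the read position between restarts.
    sincedb_path => "/dev/null"
  }
}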
