Logstash is not fetching data from a log file

Logstash is configured with an Elasticsearch output, which should store the data Logstash reads. The configuration looks correct, but nothing is being fetched from the log file.
input {
  file {
    path => "C:\Users\vishadub\Documents\elkstackTools\logs\error_log.log"
    type => "error_logs"
    start_position => beginning
    sincedb_path => "C:\Users\vishadub\Documents\elkstackTools\sincedb-access"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "error_log"
  }
}
This is written in my config file. The output is below:
C:\Users\vishadub\Documents\elkstackTools\logstash-6.4.2\bin>logstash -f logstash.conf
Sending Logstash logs to C:/Users/vishadub/Documents/elkstackTools/logstash-6.4.2/logs which is now configured via log4j2.properties
[2018-10-30T11:35:39,167][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-30T11:35:39,667][INFO ][logstash.runner] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-10-30T11:35:41,645][INFO ][logstash.pipeline] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-10-30T11:35:42,020][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-10-30T11:35:42,036][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-10-30T11:35:42,208][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-10-30T11:35:42,286][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-30T11:35:42,301][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-10-30T11:35:42,348][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-10-30T11:35:42,380][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-30T11:35:42,426][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-30T11:35:42,861][INFO ][logstash.pipeline] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x45c02cea run>"}
[2018-10-30T11:35:42,908][INFO ][logstash.agent] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-30T11:35:42,940][INFO ][filewatch.observingtail] START, creating Discoverer, Watch with file and sincedb collections
[2018-10-30T11:35:43,221][INFO ][logstash.agent] Successfully started Logstash API endpoint {:port=>9600}
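A common cause on Windows is the path syntax: the file input documentation says to use forward slashes even on Windows, since backslashes are not treated as path separators by the glob matcher, and start_position is normally given as a quoted string. A sketch of the same input with those two changes (paths unchanged otherwise):

```
input {
  file {
    path => "C:/Users/vishadub/Documents/elkstackTools/logs/error_log.log"  # forward slashes on Windows
    type => "error_logs"
    start_position => "beginning"   # quoted string
    sincedb_path => "C:/Users/vishadub/Documents/elkstackTools/sincedb-access"
  }
}
```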

Related

logstash attempting to connect to elasticsearch instead of loki as output

My Logstash runs on Kubernetes. Logstash attempts to connect to Elasticsearch instead of Loki as output even though I set
XPACK_MONITORING_ENABLED: false in the env.
logstash.yml: |
  http.host: "0.0.0.0"
  log.level: debug
  xpack.monitoring.enabled: false
  path.config: /usr/share/logstash/pipeline
logstash.conf: |
  input {
    file {
      path => "/var/log/containers/*.log"
    }
  }
  filter {
    kubernetes {
      source => "path"
      target => "loki"
    }
  }
  output {
    stdout { codec => rubydebug }
    loki {
      url => "http://loki-loki-distributed-distributor.loki-benchmark.svc.cluster.local:3100/loki/api/v1/push"
    }
  }
Console output:
[2022-12-13T06:31:40,411][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:31:50,775][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:10,414][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:20,910][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:32:40,412][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:32:50,998][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2022-12-13T06:33:10,410][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-12-13T06:33:21,328][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
Kubernetes config:
image: "grafana/logstash-output-loki:1.0.1"
imagePullPolicy: "IfNotPresent"
command:
  - '/bin/sh'
  - '-c'
  - 'logstash-plugin install --no-verify logstash-filter-kubernetes && logstash -f /usr/share/logstash/pipeline/logstash.conf'
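The licensereader errors come from X-Pack monitoring still being active, which suggests the logstash.yml above may never reach the container: the stock image reads its settings from /usr/share/logstash/config/logstash.yml, so the ConfigMap entry has to be mounted at that exact path. A sketch of the mount, assuming the ConfigMap is named logstash-config (a hypothetical name, not from the original):

```yaml
# Pod spec fragment: mount the logstash.yml key over the image's default settings file
volumeMounts:
  - name: config
    mountPath: /usr/share/logstash/config/logstash.yml
    subPath: logstash.yml
volumes:
  - name: config
    configMap:
      name: logstash-config   # hypothetical ConfigMap name
```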

logstash in docker error "Fail to execute action"

I'm new to the Elastic Stack and I'm trying to set it up with RabbitMQ using this guide (but in .NET):
https://piotrminkowski.com/2017/02/03/how-to-ship-logs-with-logstash-elasticsearch-and-rabbitmq/
When I start up Logstash I get these errors:
[2020-11-14T09:51:50,997][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [0-9], [ \\t\\r\\n], \"#\", \"}\" at line 2, column 16 (byte 35) after input { rabbitmq {\nhost => 192.168", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:184:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:69:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:47:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:365:in `block in converge_state'"]}
[2020-11-14T09:51:51,296][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-11-14T09:51:56,179][INFO ][logstash.runner ] Logstash shut down.
[2020-11-14T09:51:56,209][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
I don't know what is wrong, but I can see that the host is "192.168", which probably isn't right; my IP is 192.168.0.29.
I'm thankful for any help.
The host option for a rabbitmq input takes a string. A string should be surrounded by double (or single) quotes.
The configuration compiler is quite forgiving, and in many places will accept a "bareword" in place of a string, so it would accept localhost, but you cannot have punctuation in a "bareword", so example.com would result in an error. Likewise, once it sees the periods in the IP address it throws an exception.
Try
host => "192.168.0.29"
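Applied to the input from the error message, the block would look like this (port and queue are illustrative assumptions, not from the original; adjust to your RabbitMQ setup):

```
input {
  rabbitmq {
    host => "192.168.0.29"   # quoted, so the periods parse as part of a string
    port => 5672             # RabbitMQ default port; assumption
    queue => "logstash"      # hypothetical queue name
  }
}
```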

Not getting the desired output in logstash

I am not able to get any output on the command prompt screen
E:\kibana\logstash-7.1.1\logstash-7.1.1>bin\logstash -f E:\kibana\logstash-7.1.1\logstash-7.1.1\config\pipeline.conf --config.reload.automatic
Java HotSpot(TM) 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.runtime.encoding.EncodingService (file:/E:/kibana/logstash-7.1.1/logstash-7.1.1/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.Console.cs
WARNING: Please consider reporting this to the maintainers of org.jruby.runtime.encoding.EncodingService
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to E:/kibana/logstash-7.1.1/logstash-7.1.1/logs which is now configured via log4j2.properties
[2019-06-14T12:33:19,407][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-14T12:33:19,427][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.1"}
[2019-06-14T12:33:22,210][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x6177c4b4 run>"}
[2019-06-14T12:33:23,035][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"E:/kibana/logstash-7.1.1/logstash-7.1.1/data/plugins/inputs/file/.sincedb_039f8a57349afd1e3fb106bf0e1c330b", :path=>["/E/kibana/logstash-7.1.1/logstash-7.1.1/data/event-data/apache_access.log"]}
[2019-06-14T12:33:23,119][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
[2019-06-14T12:33:23,189][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-06-14T12:33:23,198][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2019-06-14T12:33:23,479][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
I am only getting this and not the output. What could be going wrong?
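One clue is in the startup log itself: the resolved path is /E/kibana/... rather than E:/kibana/..., so the configured pattern likely never matches the real file. Also, start_position defaults to "end", so a log that is not actively being appended to produces nothing. A sketch of a file input addressing both points (the path is inferred from the log above and is an assumption):

```
input {
  file {
    path => "E:/kibana/logstash-7.1.1/logstash-7.1.1/data/event-data/apache_access.log"  # drive letter with colon, forward slashes
    start_position => "beginning"  # default is "end"
    sincedb_path => "NUL"          # Windows: don't persist the read position while testing
  }
}
output {
  stdout { codec => rubydebug }
}
```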

Logstash - GELF output error

I've installed Graylog v2.1.1 as a virtual appliance inside VirtualBox on a Windows 7 PC.
I'm trying to read a simple log file and forward it to Graylog by using logstash v5.0.0 with the logstash-output-gelf-3.1.1 plugin, as described here: https://stackoverflow.com/a/31054064/4863804.
I've set up the following logstash.conf output:
input {
  file {...}
}
output {
  gelf {
    host => "199.99.99.179"
    port => 12203
  }
}
But after running logstash -f logstash.conf I get the following error:
[2016-10-28T14:52:17,756][INFO ][logstash.pipeline ] Pipeline main started
[2016-10-28T14:52:17,817][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-10-28T14:52:18,594][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<NameError: no method 'debug' for arguments (org.jruby.RubyArray,org.jruby.RubyHash) on Java::OrgApacheLoggingLog4jCore::Logger
available overloads:
(org.apache.logging.log4j.Marker,java.lang.String,java.lang.Object[])
(org.apache.logging.log4j.Marker,java.lang.String,org.apache.logging.log4j.util.Supplier[])
(java.lang.String,org.apache.logging.log4j.util.Supplier[])
(java.lang.String,java.lang.Object[])>, :backtrace=>["C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/logging/logger.rb:41:in `debug'", "C:/SDKs/logstash-5.0.0/vendor/bundle/jruby/1.9/gems/logstash-output-gelf-3.1.1/lib/logstash/outputs/gelf.rb
:190:in `receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "C:/S
DKs/logstash-5.0.0/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/output_delegator.rb:42:in `multi_receive'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logst
ash/pipeline.rb:297:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:296:in `output_batch'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:252:in `worker_loo
p'", "C:/SDKs/logstash-5.0.0/logstash-core/lib/logstash/pipeline.rb:225:in `start_workers'"]}
Update:
It seems to be caused by a version mismatch between Logstash and logstash-output-gelf, as the same configuration works fine with logstash-2.4.0.
Perhaps the output plugin needs to be updated for 5.0.0.
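If it is indeed a plugin/core mismatch, updating the plugin (once a release compatible with 5.0.0 exists) would be the usual route, via the plugin manager that ships with Logstash:

```
bin/logstash-plugin update logstash-output-gelf
```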

puppet-acl module on Windows throws transactionstore.yaml corrupt error

I'm trying out the puppet-acl module on Windows Server 2016, Preview 5, and I'm getting the weirdest error on the second Puppet run. If I remove the transactionstore.yaml file and re-run the Puppet agent, the behavior is repeatable. I'm running Puppet 4 with the latest agent version.
This is my code block:
acl { "c:/temp":
  permissions => [
    { identity => 'Administrator', rights => ['full'] },
    { identity => 'Users', rights => ['read','execute'] }
  ],
}
This is the output from the Puppet run:
PS C:\ProgramData\PuppetLabs\puppet\cache\state> puppet agent -t
Info: Using configured environment 'local'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Info: Caching catalog for opslowebf02n02.local
Error: Transaction store file C:/ProgramData/PuppetLabs/puppet/cache/state/transactionstore.yaml is corrupt (wrong number of arguments (0 for 1..2)); replacing
Error: Transaction state file C:/ProgramData/PuppetLabs/puppet/cache/state/transactionstore.yaml is valid YAML but not returning a hash. Check the file for corruption, or remove it before continuing.
Info: Applying configuration version '1471436916'
Notice: /Stage[main]/platform_base_system::Role::Windows/Exec[check-powershell-exection-policy]/returns: executed successfully
Notice: /Stage[main]/configs_iis::Profile::Default/Exec[check-iis-global-anonymous-authentication]/returns: executed successfully
Notice: Applied catalog in 7.42 seconds
In the transactionstore.yaml file, this is the error section:
Acl[c:/temp]:
  parameters:
    permissions:
      system_value:
      - !ruby/hash:Puppet::Type::Acl::Ace {}
      - !ruby/hash:Puppet::Type::Acl::Ace {}
    inherit_parent_permissions:
      system_value: :true
This has been resolved by downgrading the Puppet agent to 4.5.3; the behavior must have changed in 4.6.0.
With 4.5.3 I still see the error in the log file, but the Puppet run does not fail.
I'll try to talk to the people at Puppet about this.
This is being tracked as https://tickets.puppetlabs.com/browse/PUP-6629. It's almost coincidental that you created https://tickets.puppetlabs.com/browse/PUP-6630 right afterwards.
