How to restart multiple conf files in Logstash

I have 16 conf files, all scheduled to run every day at 09:05 am. Today these files could not run at the intended time. After I fixed the problem I tried to restart Logstash, but the conf files are not generating their indices.
Example dash_KPI_1.conf file:
input {
  jdbc {
    jdbc_driver_library => "/var/OJDBC-Full/ojdbc6.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:#a/b"
    jdbc_user => "KIBANA"
    jdbc_password => "pass"
    statement => "
      SELECT /*+ PARALLEL(16) */
      * from dual"
    # jdbc_paging_enabled => "true"
    # jdbc_page_size => "50000"
    type => "dash_kpi_1"
    schedule => "05 09 * * *"
  }
}
output {
  if [type] == "dash_kpi_1" {
    # stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200","http://XX.XX.XX.XXX:9200"]
      index => "dash_kpi_1-%{+YYYY.ww}"
      user => "elastic"
      password => "pass2"
    }
  }
}
How I start and stop Logstash:
systemctl stop logstash.service
systemctl start logstash.service -r
What I have tried:
/usr/share/logstash/bin/logstash -f dash_KPI_1.conf
How can I restart these 16 conf files and make them generate their indices as originally intended?

I see you are creating the index weekly. If you want to create it daily, you need to change the index pattern to "dash_kpi_1-%{+YYYY.MM.dd}".
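For example, a minimal sketch of the output block with the daily pattern (everything else copied from the question above; the host list is shortened here):

output {
  if [type] == "dash_kpi_1" {
    elasticsearch {
      hosts => ["http://XX.XX.XX.XXX:9200"]
      # one index per day instead of one per ISO week
      index => "dash_kpi_1-%{+YYYY.MM.dd}"
      user => "elastic"
      password => "pass2"
    }
  }
}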

Related

Logstash jdbc input does not retry as configured

I am using Logstash to transfer data from PostgreSQL to MySQL.
There are 10 conf files in /etc/logstash/conf.d, and I run Logstash as a service with the command systemctl start logstash.
I have set up the retry settings connection_retry_attempts and connection_retry_attempts_wait_time (retry 10 times, waiting 2 minutes between attempts) as shown below:
input {
  jdbc {
    jdbc_driver_library => "/etc/logstash/lib/postgresql-42.2.14.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_paging_enabled => true
    jdbc_page_size => "50000"
    connection_retry_attempts => 10
    connection_retry_attempts_wait_time => 120
    jdbc_connection_string => "jdbc:postgresql://xxx/xxx"
    jdbc_user => "user"
    jdbc_password => "xxx"
    jdbc_default_timezone => "UTC"
    statement => "select ... from ... where ... and login_time >= :sql_last_value "
    tracking_column => "login_time"
    tracking_column_type => "timestamp"
    use_column_value => true
    last_run_metadata_path => "/home/data/logs/..."
    schedule => "* * * * *"
    type => "xxx"
  }
}
However, the setup did not work as expected. A connection error occurred: 8 of the 10 table transfers stopped and only 2 kept working. I restarted the Logstash service and that fixed it, but according to the log there was not even a retry attempt after the connection failure.
How can I make the jdbc input retry the connection automatically? Thanks in advance for your help.
[2022-06-07T18:08:03,763][ERROR][logstash.inputs.jdbc ][main] Unable to connect to database. Trying again {:error_message=>"Java::OrgPostgresqlUtil::PSQLException: The connection attempt failed."}
[2022-06-07T18:10:03,914][INFO ][logstash.inputs.jdbc ][main] (0.015812s) SELECT CAST(current_setting('server_version_num') AS integer) AS v
[2022-06-07T18:10:03,940][INFO ][logstash.inputs.jdbc ][main] (0.024598s) SELECT count(*) AS "count" FROM (select * from xxx where added_time >= '2022-06-06 11:47:00' and added_time >= '2022-06-07 10:06:48.221000+0000' ) AS "t1" LIMIT 1
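Not a direct fix for the missing retries, but something that may limit the blast radius, assuming Logstash 6.x or newer: when all 10 conf files are loaded from /etc/logstash/conf.d they run as a single pipeline, so one misbehaving jdbc input can take the others down with it. Declaring one pipeline per file in /etc/logstash/pipelines.yml isolates them. A minimal sketch (pipeline ids and file names are illustrative):

# /etc/logstash/pipelines.yml (ids and paths are illustrative)
- pipeline.id: table_one
  path.config: "/etc/logstash/conf.d/table_one.conf"
- pipeline.id: table_two
  path.config: "/etc/logstash/conf.d/table_two.conf"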

MSSQL JDBC driver library path is not recognized when using ~/ while running Logstash manually

Currently trying to populate the employee index with the below settings:
CONF
input {
  jdbc {
    jdbc_driver_library => "~/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://SERVER;user=USER;password=PASSWORD"
    jdbc_user => "DB_USER"
    jdbc_password => "DB_PASSWORD"
    jdbc_validate_connection => true
    jdbc_validation_timeout => -1
    statement => "SELECT * FROM [dbo].Employee ORDER BY ID"
    type => "employee"
  }
}
filter {
}
output {
}
NOTE: the filter and output sections of the conf file are purposely blank
LINUX COMMAND
sudo /usr/share/logstash/bin/logstash -f /home/ubuntu/Employee-pipeline.conf --path.settings /etc/logstash/ --path.data /var/lib/logstash_new
RESULT
It looks like Logstash does not know about, or does not have access to, ~/sqljdbc...*.jar.
I also confirmed that mssql-jdbc-6.2.1.jre8.jar exists.
However, when I changed the path to /home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar, it ran successfully.
So ~/ is the same as /home/ubuntu.
This started to occur after upgrading our Elastic Stack from v5.5 to v5.6. Also, note that this does not happen if we run the same conf file with the Logstash service.
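For what it's worth, the result above is consistent with how tilde expansion works: it is performed by the shell, not by the Logstash config parser, so inside a quoted config string ~ is just a literal character (and under sudo the home directory may differ anyway). The safe form is the absolute path that already worked:

# use an absolute path; "~" is not expanded inside a quoted config string
jdbc_driver_library => "/home/ubuntu/sqljdbc_6.2/enu/mssql-jdbc-6.2.1.jre8.jar"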

Logstash - Data from Kafka to ES

Using Logstash 5.0.0, taking a Kafka source as the input and producing the output to Elasticsearch (Elasticsearch version 5.0.0).
Logstash conf:
input {
  kafka {
    bootstrap_servers => "XXX.XXX.XX.XXX:9092","XXX.XXX.XX.XXX:9092","XXX.XXX.XX.XXX:9092"
    topics => ["a-data","f-data","n-data"]
    group_id => "sound"
    auto_offset_reset => "earliest"
    consumer_threads => 2
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => [ "XXX.XXX.XX.XXX:9200" ]
  }
}
When I run the below configuration, I get the following error.
$ ./logstash -f sound.conf
Sending Logstash logs to /logstash-5.0.0/logs which is now configured via log4j2.properties.
[2017-01-17T10:53:29,273][ERROR][logstash.agent ] fetched an invalid config {:config=>"input{\nkafka{\nbootstrap_servers => \"XX.XXX.XXX.XX:9092\",\"XXX.XXX.XX.XXX:9092\",\"XXX.XXX.XX.XXX:9092\"\ntopics => [\"a-data\",\"f-data\",\"n-data\"]\ngroup_id => \"sound\"\nauto_offset_reset => \"earliest\"\nconsumer_threads => 2\n}\n}\nfilter{\njson{\nsource => \"message\"\n}\n}\noutput {\nelasticsearch {\nhosts => [ \"XX.XX.XXX.XX:9200\" ]\n}\n}\n\n", :reason=>"Expected one of #, {, } at line 3, column 40 (byte 54) after input{\nkafka{\nbootstrap_servers => \"XX.XX.XXX.XX:9092\""}
Can anyone help me with this configuration?
The parse error points at line 3, column 40, which is right after the first quoted string in bootstrap_servers. You've written the value as several comma-separated quoted strings, which is not valid config syntax:
bootstrap_servers => "XXX.XXX.XX.XXX:9092","XXX.XXX.XX.XXX:9092","XXX.XXX.XX.XXX:9092" <-- try changing this line
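If I read the logstash-input-kafka docs correctly, bootstrap_servers takes a single string of comma-separated host:port pairs, not several quoted strings or an array. A hedged sketch of the corrected input block:

input {
  kafka {
    # assumption: one comma-separated string, per the kafka input plugin docs
    bootstrap_servers => "XXX.XXX.XX.XXX:9092,XXX.XXX.XX.XXX:9092,XXX.XXX.XX.XXX:9092"
    topics => ["a-data","f-data","n-data"]
    group_id => "sound"
    auto_offset_reset => "earliest"
    consumer_threads => 2
  }
}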

logstash-input-jdbc: “Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}”

I am trying to access a MySQL service with Logstash. I installed logstash-input-jdbc (/opt/logstash/bin/logstash-plugin install logstash-input-jdbc) and created /etc/logstash/conf.d/sample.conf:
input {
  lumberjack {
    ...
  }
  jdbc {
    type => "jdbc_hfc"
    jdbc_connection_string => "jdbc:mysql://127.0.0.1:3306/test"
    jdbc_user => "root"
    jdbc_password => ""
    jdbc_validate_connection => true
    jdbc_driver_libary => "mysql-connector-java-5.1.40-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM hfc"
    schedule => "00 07 * * *"
  }
  file {
    ...
  }
}
output {
  if [type] == "jdbc_hfc" {
    elasticsearch {
      protocl => http
      hosts => ["localhost:9200"]
      index => "logstash-jdbc-hfc-%{+YYYY.MM.dd}"
    }
  }
}
When I execute the configtest (/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/sample.conf), I get the following error:
Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}
The given configuration is invalid. Reason: Something is wrong with your configuration. {:level=>:fatal}
When I comment out the jdbc_driver_libary line, the configtest returns:
Configuration OK
But when I execute the sample.conf file, Logstash returns the following error:
Pipeline aborted due to error {:exception=>"LogStash::ConfigurationError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/plugin_mixins/jdbc.rb:159:in `prepare_jdbc_connection'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-jdbc-4.1.1/lib/logstash/inputs/jdbc.rb:187:in `register'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:330:in `start_inputs'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:329:in `start_inputs'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:180:in `start_workers'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/pipeline.rb:136:in `run'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.4.0-java/lib/logstash/agent.rb:491:in `start_pipeline'"], :level=>:error}
Where is my mistake? What can I do to resolve this problem?
Thanks a lot and best regards.
PS: If you need more information, please ask.
The first error says it all:
Unknown setting 'jdbc_driver_libary' for jdbc {:level=>:error}
So you just have a typo in your configuration:
jdbc_driver_libary => "mysql-connector-java-5.1.40-bin.jar"
should read
jdbc_driver_library => "mysql-connector-java-5.1.40-bin.jar"
               ^
               |
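Once that is fixed, the next configtest run will likely flag a second typo further down, in your elasticsearch output (and note that, depending on your logstash-output-elasticsearch version, the protocol option may have been removed entirely):
protocl => http
should read
protocol => http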

How to see the requests sent by LogStash to the elasticsearch output in Fiddler?

I have LS_JAVA_OPTS = -DproxySet=true -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888
And yet, I see no traffic to my elasticsearch node from logstash in Fiddler.
I know my elasticsearch is up and running. When I curl it, Fiddler clearly shows the requests, so it is something about jruby that does not route requests through Fiddler.
I am not calling jruby directly. Rather I use the bin\logstash.bat script.
Appendix
My conf file:
input {
  file {
    path => 'c:/log/bje-Error.log'
    sincedb_path => "NUL"
    codec => plain {
      charset => "ISO-8859-1"
    }
    codec => multiline {
      pattern => "^%{TIMESTAMP_ISO8601} "
      negate => true
      what => previous
    }
    start_position => beginning
    ignore_older => 0
  }
}
}
filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} \[%{BASE10NUM:thread:int}] %{WORD:machine}:%{WORD:service} \[%{BASE10NUM:localId:int}?:%{UUID:logId}?:(?<jobKind>[^:]+)?:%{BASE10NUM:jobDefinitionId:int}? %{WORD:namespace}?:%{WORD:job}?:(?<customCtx>[^\]]*)\] %{LOGLEVEL:level} %{NOTSPACE:logger} - (?<text>(?m:.*))" }
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    document_type => 'logs_bje'
    hosts => ["ncesearch01"]
  }
}
Testing in powershell:
PS E:\logstash-2.3.2\bin> (ConvertFrom-Json((Invoke-WebRequest "http://ncesearch01:9200/logstash-*/_count").Content)).count
24666
PS E:\logstash-2.3.2\bin> .\logstash.bat -f C:\dayforce\DayforceDEV\elk\logstach.conf
LS_JAVA_OPTS was set to [-DproxySet=true -Dhttp.proxyHost=127.0.0.1 -Dhttp.proxyPort=8888]. This will be appended to the JAVA_OPTS [ -XX:HeapDumpPath="$LS_HOME/heapdump.hprof"]
io/console not supported; tty will not be manipulated
Settings: Default pipeline workers: 12
Pipeline main started
{
"message" => "2016-05-02 16:00:05.7079 [111] CANWS212:MyBJE [2251:e2737eeb-40d6-4b0e-9608-75ee3de894d3:ScheduledInstance:16 DFUnitTest:BillingDataCollectionJob:] ERROR
SharpTop.Engine.BackgroundJobs.Billing.BillingDataCollectionJob - The client database version is not defined in DFDatabaseIdentification \r",
"#version" => "1",
"#timestamp" => "2016-05-03T03:40:50.531Z",
"path" => "c:/log/bje-Error.log",
"host" => "CANWS212",
"timestamp" => "2016-05-02 16:00:05.7079",
"thread" => 111,
"machine" => "CANWS212",
"service" => "MyBJE",
"localId" => 2251,
"logId" => "e2737eeb-40d6-4b0e-9608-75ee3de894d3",
"jobKind" => "ScheduledInstance",
"jobDefinitionId" => 16,
"namespace" => "DFUnitTest",
"job" => "BillingDataCollectionJob",
"level" => "ERROR",
"logger" => "SharpTop.Engine.BackgroundJobs.Billing.BillingDataCollectionJob",
"text" => "The client database version is not defined in DFDatabaseIdentification \r"
}
^CTerminate batch job (Y/N)? SIGINT received. Shutting down the agent. {:level=>:warn}
stopping pipeline {:id=>"main"}
Pipeline main has been shutdown
The signal HUP is in use by the JVM and will not work correctly on this platform
^CPS E:\logstash-2.3.2\bin> (ConvertFrom-Json((Invoke-WebRequest "http://ncesearch01:9200/logstash-*/_count").Content)).count
24667
PS E:\logstash-2.3.2\bin>
As you can see, http://ncesearch01:9200/logstash-*/_count returns an incremented count, so running Logstash did send a request to Elasticsearch. However, it bypassed Fiddler, despite the LS_JAVA_OPTS.
I can see some possible reasons for this, although I have not tried them myself, so this answer may be more of a discussion, sorry.
1. You may need a Linux OS instead of Windows. I am not sure whether this has been dealt with in the latest Logstash version; you may be interested in this issue: Make JAVA_OPTS and LS_JAVA_OPTS work consistently on Windows.
2. Most likely, the Logstash Elasticsearch output plugin sends messages over HTTP (this is the default after Logstash 2.0); perhaps you are using an older version? More info about the Elasticsearch output plugin: logstash-output-plugin-elasticsearch.
If anyone has any other ideas, please share.
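If the JVM proxy flags keep being ignored on Windows, a possible workaround, assuming your logstash-output-elasticsearch version supports it, is the plugin's own proxy option, which routes the output's HTTP traffic through the given address without relying on JVM-wide settings:

output {
  elasticsearch {
    document_type => 'logs_bje'
    hosts => ["ncesearch01"]
    # assumption: the proxy option is available in this plugin version
    proxy => "http://127.0.0.1:8888"
  }
}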
