I am new to the ELK stack and stuck with data ingestion into Elasticsearch using Logstash.
These are the steps I followed:
Installed the ELK stack successfully.
Then installed the logstash-input-mongodb plugin.
After that, configured Logstash with the file below:
input {
  mongodb {
    uri => 'mongodb://localhost:27017/dbName'
    placeholder_db_dir => '/opt/logstash-mongodb/'
    placeholder_db_name => 'logstash_sqlite.db'
    collection => 'notifications'
    batch_size => 1
  }
}
output {
  elasticsearch {
    action => "index"
    index => "notifications_data"
    hosts => ["localhost:9200"]
  }
  stdout { codec => json }
}
Saved the above file as mongo-connector.conf and then ran it with:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/mongo-connector.conf
After this, the logs on the terminal were:
D, [2020-11-07T14:01:45.739178 #29918] DEBUG -- : MONGODB | localhost:27017 req:480 conn:1:1 sconn:78 | dbName.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
D, [2020-11-07T14:01:45.741919 #29918] DEBUG -- : MONGODB | localhost:27017 req:480 | dbName.listCollections | SUCCEEDED | 0.002s
D, [2020-11-07T14:01:50.756430 #29918] DEBUG -- : MONGODB | localhost:27017 req:481 conn:1:1 sconn:78 | dbName.find | STARTED | {"find"=>"notifications", "filter"=>{"_id"=>{"$gt"=>BSON::ObjectId('5fa012440d0e947dd8dfd2f9')}}, "limit"=>1, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
D, [2020-11-07T14:01:50.758080 #29918] DEBUG -- : MONGODB | localhost:27017 req:481 | dbName.find | SUCCEEDED | 0.001s
D, [2020-11-07T14:01:50.780259 #29918] DEBUG -- : MONGODB | localhost:27017 req:482 conn:1:1 sconn:78 | dbName.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
D, [2020-11-07T14:01:50.782687 #29918] DEBUG -- : MONGODB | localhost:27017 req:482 | dbName.listCollections | SUCCEEDED | 0.002s
D, [2020-11-07T14:01:53.986862 #29918] DEBUG -- : MONGODB | Server description for localhost:27017 changed from 'standalone' to 'standalone' [awaited].
D, [2020-11-07T14:01:53.987784 #29918] DEBUG -- : MONGODB | There was a change in the members of the 'Single' topology.
D, [2020-11-07T14:01:54.311966 #29918] DEBUG -- : MONGODB | Server description for localhost:27017 changed from 'standalone' to 'standalone'.
D, [2020-11-07T14:01:54.312747 #29918] DEBUG -- : MONGODB | There was a change in the members of the 'Single' topology.
D, [2020-11-07T14:01:55.799418 #29918] DEBUG -- : MONGODB | localhost:27017 req:483 conn:1:1 sconn:78 | dbName.find | STARTED | {"find"=>"notifications", "filter"=>{"_id"=>{"$gt"=>BSON::ObjectId('5fa012440d0e947dd8dfd2f9')}}, "limit"=>1, "$db"=>"dbName", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0x4508feda2dce4ec6...>}}
Below is the Logstash log file:
[2020-11-07T16:32:33,678][WARN ][logstash.inputs.mongodb ][main][6a52e3ca90ba4ebc63108d49c11fcede25b196c679f313b40b02a8e17606c977] MongoDB Input threw an exception, restarting {:exception=>#<Sequel::DatabaseError: Java::JavaSql::SQLException: attempt to write a readonly database>}
The index gets created in Elasticsearch, but no documents get inserted into it.
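The "attempt to write a readonly database" exception comes from the SQLite placeholder database that logstash-input-mongodb keeps under placeholder_db_dir, so a likely culprit is that the Logstash service user cannot write to /opt/logstash-mongodb/. A hedged sketch of what to check, assuming Logstash runs as the logstash user (adjust the user/group to your installation):

# make the placeholder directory writable by the Logstash process
sudo mkdir -p /opt/logstash-mongodb
sudo chown -R logstash:logstash /opt/logstash-mongodb

# afterwards, verify whether documents actually reach the index
curl -s 'http://localhost:9200/notifications_data/_count?pretty'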
This issue has to do with the fact that the file exists on the backend container but not the postgres container. How could I transfer the file between containers automatically?
I am currently trying to execute the following script:
COPY climates(
station_id,
date,
element,
data_value,
m_flag,
q_flag,
s_flag,
obs_time
)
FROM '/usr/api/2017.csv'
DELIMITER ','
CSV HEADER;
within a Docker container running a Sequelize backend that connects to a postgres:14.1-alpine container.
The following error is returned:
db_1 | 2022-08-30 04:23:58.358 UTC [29] ERROR: could not open file "/usr/api/2017.csv" for reading: No such file or directory
db_1 | 2022-08-30 04:23:58.358 UTC [29] HINT: COPY FROM instructs the PostgreSQL server process to read a file. You may want a client-side facility such as psql's \copy.
db_1 | 2022-08-30 04:23:58.358 UTC [29] STATEMENT: COPY climates(
db_1 | station_id,
db_1 | date,
db_1 | element,
db_1 | data_value,
db_1 | m_flag,
db_1 | q_flag,
db_1 | s_flag,
db_1 | obs_time
db_1 | )
db_1 | FROM '/usr/api/2017.csv'
db_1 | DELIMITER ','
db_1 | CSV HEADER;
ebapi | Unable to connect to the database: MigrationError: Migration 20220829_02_populate_table.js (up) failed: Original error: could not open file "/usr/api/2017.csv" for reading: No such file or directory
ebapi | at /usr/api/node_modules/umzug/lib/umzug.js:151:27
ebapi | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ebapi | at async Umzug.runCommand (/usr/api/node_modules/umzug/lib/umzug.js:107:20)
ebapi | ... 2 lines matching cause stack trace ...
ebapi | at async start (/usr/api/index.js:14:3) {
ebapi | cause: Error
ebapi | at Query.run (/usr/api/node_modules/sequelize/lib/dialects/postgres/query.js:50:25)
ebapi | at /usr/api/node_modules/sequelize/lib/sequelize.js:311:28
ebapi | at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
ebapi | at async Object.up (/usr/api/migrations/20220829_02_populate_table.js:10:5)
ebapi | at async /usr/api/node_modules/umzug/lib/umzug.js:148:21
ebapi | at async Umzug.runCommand (/usr/api/node_modules/umzug/lib/umzug.js:107:20)
ebapi | at async runMigrations (/usr/api/util/db.js:52:22)
ebapi | at async connectToDatabase (/usr/api/util/db.js:32:5)
ebapi | at async start (/usr/api/index.js:14:3) {
ebapi | name: 'SequelizeDatabaseError',
...
Here is my docker-compose.yml:
# set up a postgres database
version: "3.8"
services:
  db:
    image: postgres:14.1-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    ports:
      - "5432:5432"
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
  api:
    container_name: ebapi
    build:
      context: ./energybot
    depends_on:
      - db
    ports:
      - 3001:3001
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/postgres
      DB_HOST: db
      DB_PORT: 5432
      DB_USER: postgres
      DB_PASSWORD: postgres
      DB_NAME: postgres
    links:
      - db
    volumes:
      - "./energybot:/usr/api"
volumes:
  db:
    driver: local
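The PostgreSQL error is expected here: COPY ... FROM runs inside the db container's server process, and the CSV only exists in the api container (it is mounted at /usr/api via ./energybot:/usr/api). One hedged way to fix it, assuming the file lives at ./energybot/2017.csv on the host, is to bind-mount the same file into the db service so the server can read it at the path the migration uses:

  db:
    image: postgres:14.1-alpine
    volumes:
      - db:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/create_tables.sql
      # make the CSV visible to the server-side COPY at the expected path
      - ./energybot/2017.csv:/usr/api/2017.csv:ro

Alternatively, as the HINT in the error suggests, a client-side copy (psql's \copy, or COPY ... FROM STDIN streamed through the driver) reads the file on the client side, so no extra mount would be needed.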
I am trying to index sample CSV-based data into Open Distro Elasticsearch but failing to create the index. Could you please let me know what I am missing here?
CSV file to index:
[admin@fedser32 logstashoss-docker]$ cat /tmp/student.csv
"aaa","bbb",27,"Day Street"
"xxx","yyy",33,"Web Street"
"sss","mmm",29,"Adam Street"
logstash.conf
[admin@fedser32 logstashoss-docker]$ cat logstash.conf
input {
  file {
    path => "/tmp/student.csv"
    start_position => "beginning"
  }
}
filter {
  csv {
    columns => ["firstname", "lastname", "age", "address"]
  }
}
output {
  elasticsearch {
    hosts => ["https://fedser32.stack.com:9200"]
    index => "sampledata"
    ssl => true
    ssl_certificate_verification => false
    user => "admin"
    password => "admin#1234"
  }
}
My Open Distro cluster is listening on port 9200, as shown below.
[admin@fedser32 logstashoss-docker]$ curl -X GET -u admin:admin#1234 -k https://fedser32.stack.com:9200
{
"name" : "odfe-node1",
"cluster_name" : "odfe-cluster",
"cluster_uuid" : "5GOEtg12S6qM5eaBkmzUXg",
"version" : {
"number" : "7.10.0",
"build_flavor" : "oss",
"build_type" : "tar",
"build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
"build_date" : "2020-11-09T21:30:33.964949Z",
"build_snapshot" : false,
"lucene_version" : "8.7.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
As per the logs, it does indicate that it is able to find the CSV file, as shown below.
logstash_1 | [2022-03-03T12:11:44,716][INFO ][logstash.outputs.elasticsearch][main] Index Lifecycle Management is set to 'auto', but will be disabled - Index Lifecycle management is not installed on your Elasticsearch cluster
logstash_1 | [2022-03-03T12:11:44,716][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"#timestamp"=>{"type"=>"date"}, "#version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2022-03-03T12:11:44,725][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x5c537d14 run>"}
logstash_1 | [2022-03-03T12:11:45,439][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.71}
logstash_1 | [2022-03-03T12:11:45,676][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_20d37e3ca625c7debb90eb1c70f994d6", :path=>["/tmp/student.csv"]}
logstash_1 | [2022-03-03T12:11:45,697][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2022-03-03T12:11:45,738][INFO ][filewatch.observingtail ][main][2f140d63e9cab8ddc711daddee17a77865645a8de3d2be55737aa0da8790511c] START, creating Discoverer, Watch with file and sincedb collections
logstash_1 | [2022-03-03T12:11:45,761][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2022-03-03T12:11:45,921][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
Could you check the access rights on the /tmp/student.csv file? It must be readable by the logstash user.
Check with this command:
# ls -l /tmp
Another thing: if you have already indexed the file path, you have to clean up the sincedb, as sketched below.
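Once the file input has read a path, its position is remembered in the sincedb, so the same file will not be re-indexed. A rough sketch of the cleanup, assuming the container paths from the log output above (the sincedb file name is auto-generated, so list the directory first):

# inside the Logstash container: find and remove the generated sincedb entry, then restart Logstash
ls -l /usr/share/logstash/data/plugins/inputs/file/
rm /usr/share/logstash/data/plugins/inputs/file/.sincedb_*

While testing, setting sincedb_path => "/dev/null" in the file input also forces the file to be re-read on every start.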
The thing I was missing is that I had to volume-mount my CSV file into the Logstash container, as shown below, after which I was able to index my CSV data.
[admin@fedser opensearch-logstash-docker]$ cat docker-compose.yml
version: '2.1'
services:
  logstash:
    image: opensearchproject/logstash-oss-with-opensearch-output-plugin:7.16.2
    ports:
      - "5044:5044"
    volumes:
      - $PWD/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - $PWD/student.csv:/tmp/student.csv
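To confirm that the rows actually made it in, a quick query against the index (same credentials and -k flag as the curl above) should show the student documents if everything worked:

curl -k -u admin:admin#1234 "https://fedser32.stack.com:9200/sampledata/_search?pretty"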
I have the following code to save to a locally running Mongo instance:
MongoCredential credential = MongoCredential.createCredential("myuser", "mydatabase", "mypassword".toCharArray());
MongoClient mongo = MongoClients.create(MongoClientSettings.builder()
        .applyToClusterSettings(builder -> builder.hosts(Arrays.asList(new ServerAddress("localhost", 27017))))
        .credential(credential)
        .build());
MongoDatabase database = mongo.getDatabase("mydatabase");
MongoCollection<Document> collection = database.getCollection("mycollection");
collection.insertOne(document);
I have created a user for the username/password used in the code above with the db.createUser() command in the mongo.exe shell, and these are the same credentials I provided while installing MongoDB.
db.createUser({
  user: "myuser",
  pwd: "mypassword",
  roles: [{ role: "userAdminAnyDatabase", db: "admin" }]
})
But the code fails with:
Exception in thread "main" com.mongodb.MongoSecurityException: Exception authenticating MongoCredential{mechanism=SCRAM-SHA-1, userName='myuser', source='mydatabase', password=<hidden>, mechanismProperties={}}
What am I missing here?
Where, i.e. in which database, did you create the user? Typically users are created in the admin database. When you connect to MongoDB, you should always specify the authentication database and the database you would like to use.
The defaults are a bit confusing and not really consistent; in particular, different drivers/tools seem to behave differently. See this table to get an overview:
+-------------------------------------------------------------------------------------+
|Connection parameters | Authentication | Current |
| | database | database |
+-------------------------------------------------------------------------------------+
|mongo -u user -p pwd --authenticationDatabase admin myDB | admin | myDB |
|mongo -u user -p pwd myDB | myDB | myDB |
|mongo -u user -p pwd --authenticationDatabase admin | admin | test |
|mongo -u user -p pwd --host localhost:27017 | admin | test |
|mongo -u user -p pwd | admin | test |
|mongo -u user -p pwd localhost:27017 | test | test |
|mongosh -u user -p pwd localhost:27017 | admin | test | -> Different on mongosh and legacy mongo shell
+-------------------------------------------------------------------------------------+
If you prefer to use a connection string in URI format, it corresponds to the table below. There it is more consistent and well documented.
+-------------------------------------------------------------------------------------+
|Connection string | Authentication | Current |
| | database | database |
+-------------------------------------------------------------------------------------+
|"mongodb://user:pwd#hostname/myDB?authSource=admin" | admin | myDB |
|"mongodb://user:pwd#hostname/myDB" | myDB | myDB |
|"mongodb://user:pwd#hostname?authSource=admin" | admin | test |
|"mongodb://user:pwd#hostname" | admin | test |
+-------------------------------------------------------------------------------------+
I guess you created the user in the admin database, but since you don't specify the authentication database while connecting, the driver defaults it to mydatabase, where authentication fails because the user does not exist in the mydatabase database.
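In code, that means passing the authentication database (not the data database) as the source argument of createCredential, or using a connection string with authSource. A sketch along those lines, using the same driver classes and imports as the snippet above and assuming the user was created in admin exactly as in the db.createUser call:

// Authenticate against the admin database, where the user was created,
// then keep working with mydatabase as before.
MongoCredential credential = MongoCredential.createCredential("myuser", "admin", "mypassword".toCharArray());
MongoClient mongo = MongoClients.create(MongoClientSettings.builder()
        .applyToClusterSettings(builder -> builder.hosts(Arrays.asList(new ServerAddress("localhost", 27017))))
        .credential(credential)
        .build());

// Equivalent connection-string form:
// MongoClient mongo = MongoClients.create("mongodb://myuser:mypassword@localhost:27017/mydatabase?authSource=admin");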
I'm a beginner with ELK and am trying to load data from MySQL into Elasticsearch (as a next step I want to query it via the Java REST client), so I used logstash-6.2.4 and elasticsearch-6.2.4 and followed an example here.
When I run bin/logstash -f /path/to/my.conf, I get the error:
[2018-04-22T10:15:08,713][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Jdbc jdbc_connection_string=>\"jdbc:mysql://localhost:3306/testdb\", jdbc_user=>\"root\", jdbc_password=><password>, jdbc_driver_library=>\"/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar\", jdbc_driver_class=>\"com.mysql.jdbc.Driver\", statement=>\"SELECT * FROM testtable\", id=>\"7ff303d15d8fc2537248f48fae5f3925bca7649bbafc30d2cd52394ea9961797\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_f8d44c47-8421-4bb9-a6b9-0b34e0aceb13\", enable_metric=>true, charset=>\"UTF-8\">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>\"info\", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, last_run_metadata_path=>\"/Users/chu/.logstash_jdbc_last_run\", use_column_value=>false, tracking_column_type=>\"numeric\", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>", :error=>"can't dup Fixnum", :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,256][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<TypeError: can't dup Fixnum>, :backtrace=>["org/jruby/RubyKernel.java:1882:in `dup'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date/format.rb:838:in `_parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:1830:in `parse'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:87:in `set_value'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:36:in `initialize'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:29:in `build_last_value_tracker'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:216:in `register'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:342:in `register_plugin'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `register_plugins'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:500:in `start_inputs'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:394:in `start_workers'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:290:in `run'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:250:in `block in start'"], :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,314][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
Here is the testdbinit.conf (UTF-8 encoding):
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    "hosts" => "localhost:9200"
    "index" => "testdemo"
    document_id => "%{personid}"
    "document_type" => "person"
  }
}
Here is the table (database: testdb, table: testtable):
mysql> select * from testtable;
+----------+----------+-----------+-----------+-------+
| PersonID | LastName | FirstName | City | flag |
+----------+----------+-----------+-----------+-------+
| 1003 | McWell | Sharon | Cape Town | exist |
| 1002 | Baron | Richard | Cape Town | exist |
| 1001 | Kallis | Jaques | Cape Town | exist |
| 1004 | Zhaosi | Nicholas | Iron Hill | exist |
+----------+----------+-----------+-----------+-------+
I tried to google the issue but still have no clue. I think maybe some type conversion error (TypeError: can't dup Fixnum) causes this, but what exactly is this "dup Fixnum", and how do I solve it?
One more thing that also confused me: I ran the same code yesterday and successfully loaded the data into Elasticsearch, and I could also search it via localhost:9200, but the next morning, when I tried the same thing (on the same computer), I ran into these issues. I have been tossing this around for a whole day; please help me with some hints.
I also asked the same question on the Logstash community forum, and with their help I think I found the solution to my issue:
The exception trace exception=>#<TypeError: can't dup Fixnum> means there is a type conversion error. sql_last_value is initialized as 0 for numeric values or 1970-01-01 for datetime values. I think the sql_last_value stored in my last_run_metadata_path was neither a numeric nor a datetime value, so I added clean_run => true to the conf file and ran Logstash again, and no more errors occurred. With clean_run => true added, the wrong value of sql_last_value was reset to 0 (or 1970-01-01), the thread went on, and the data was indexed successfully, as sketched below.
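For reference, a sketch of the adjusted input section: the same settings as testdbinit.conf above, with only the clean_run flag added (deleting the file referenced by last_run_metadata_path, /Users/chu/.logstash_jdbc_last_run in the error log, should have a similar effect):

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
    # Ignore the previously persisted (and corrupted) sql_last_value and start clean;
    # remove this option again after a successful run so normal tracking resumes.
    clean_run => true
  }
}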
Logstash 5.2.1
The configuration below is OK, and the partial updates are working. I just misunderstood the results and how the time zone is used by Logstash.
jdbc_default_timezone
Timezone conversion. SQL does not allow for timezone data in timestamp fields. This plugin will automatically convert your SQL timestamp fields to Logstash timestamps, in relative UTC time in ISO8601 format.
Using this setting will manually assign a specified timezone offset, instead of using the timezone setting of the local machine. You must use a canonical timezone, Europe/Rome, for example.
I want to index some data from PostgreSQL into Elasticsearch with the help of Logstash. The partial updates should be working.
But in my case, Logstash puts the wrong time zone in ~/.logstash_jdbc_last_run.
$cat ~/.logstash_jdbc_last_run
--- 2017-03-08 09:29:00.259000000 Z
My PC/Server time:
$date
mer 8 mar 2017, 10.29.31, CET
$cat /etc/timezone
Europe/Rome
My Logstash configuration:
input {
  jdbc {
    # Postgres jdbc connection string to our database, mydb
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/postgres"
    # The user we wish to execute our statement as
    jdbc_user => "logstash"
    jdbc_password => "logstashpass"
    # The path to our downloaded jdbc driver
    jdbc_driver_library => "/home/trex/Development/ship_to_elasticsearch/software/postgresql-42.0.0.jar"
    # The name of the driver class for Postgresql
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_default_timezone => "Europe/Rome"
    # our query
    statement => "SELECT * FROM contacts WHERE timestamp > :sql_last_value"
    # every 1 min
    schedule => "*/1 * * * *"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "database.%{+yyyy.MM.dd.HH}"
  }
}
Without jdbc_default_timezone the time zone is wrong too.
My PostgreSQL data:
postgres=# select * from "contacts";
 uid | timestamp | email | first_name | last_name
-----+----------------------------+-------------------------+------------+------------
1 | 2017-03-07 18:09:25.358684 | jim@example.com | Jim | Smith
2 | 2017-03-07 18:09:25.3756 | | John | Smith
3 | 2017-03-07 18:09:25.384053 | carol@example.com | Carol | Smith
4 | 2017-03-07 18:09:25.869833 | sam@example.com | Sam |
5 | 2017-03-08 10:04:26.39423 | trex@example.com | T | Rex
The DB data is imported like this:
INSERT INTO contacts(timestamp, email, first_name, last_name) VALUES(current_timestamp, 'sam@example.com', 'Sam', null);
Why does Logstash put the wrong time zone in ~/.logstash_jdbc_last_run? And how to fix it?
2017-03-08 09:29:00.259000000 Z means the UTC time zone; it's correct.
It is defaulting to UTC time. If you would like to store it in a different timezone, you can convert the timestamp by adding a filter like so:
filter {
  mutate {
    add_field => {
      # Create a new field with string value of the UTC event date
      "timestamp_extract" => "%{@timestamp}"
    }
  }
  date {
    # Parse UTC string value and convert it to my timezone into a new field
    match => [ "timestamp_extract", "yyyy-MM-dd HH:mm:ss Z" ]
    timezone => "Europe/Rome"
    locale => "en"
    remove_field => [ "timestamp_extract" ]
    target => "timestamp_europe"
  }
}
This converts the time zone by first extracting the timestamp into a timestamp_extract field and then converting it into the Europe/Rome time zone. The converted timestamp is put into the new timestamp_europe field.
Hope it's clearer now.