Logstash JDBC - how to process json field? - logstash

I have PostgreSQL which stores some data as JSON fields, e.g.:
{"adults":2,"children":{"total":0,"ages":[]}}
I'm using the logstash-input-jdbc plugin to process the data.
How do I parse the JSON from JDBC? From the logs I see that the field arrives as a PGobject:
"travelers_json" => #<Java::OrgPostgresqlUtil::PGobject:0x278826b2>
which has value and type properties.
I've tried using the json filter, but I don't know how to access the value property to feed to it.
What I've tried:
source => "[travelers_json][value]"
source => "travelers_json.value"
source => "%{travelers_json.value}"
I must be missing something very obvious here?

OK, so the simplest way was to convert the JSON to text in PostgreSQL:
SELECT travelers_json::TEXT from xxx
but I would still like to know how to access that PGobject.
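For reference, here is a minimal sketch of that workaround end to end (the connection settings, driver path and table name are placeholders): the column is cast to TEXT in the statement so it arrives as a plain string, and the json filter then parses it into a structured field.
input {
  jdbc {
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"   # placeholder
    jdbc_user => "postgres"                                             # placeholder
    jdbc_driver_library => "/path/to/postgresql.jar"                    # placeholder path to the JDBC driver
    jdbc_driver_class => "org.postgresql.Driver"
    statement => "SELECT travelers_json::TEXT AS travelers_json FROM xxx"
  }
}
filter {
  # travelers_json is now a plain string, so the json filter can parse it
  json {
    source => "travelers_json"
    target => "travelers"
  }
}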

Related

How to UPDATE JSON column completely in MySql 5.7?

I am new to MySQL 5.7. I have a table with a JSON column.
data:{"Game": "Cricket","Player":"Dhoni"}
This is just a sample JSON object, but I have plenty of key-value pairs in my column.
I want to replace my JSON completely with a new JSON object that I am receiving as a response from some API,
e.g.
{"Game": "Hockey","Player":"Kohli","":"",..........}
Please suggest a method for the same. Thanks.
To update the JSON completely in MySQL version >= 5.7, simply use the UPDATE query and it will work.
For example:
UPDATE json_table SET json = '{"Game": "Hockey","Player":"Kohli"}' (WHERE CLAUSE)
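A minimal sketch, assuming a hypothetical primary-key column id to fill in the WHERE clause; the whole JSON value of the matching row is simply overwritten:
UPDATE json_table
SET json = '{"Game": "Hockey","Player":"Kohli"}'
WHERE id = 1;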

Logstash kv filter

I have a file with the following format:
10302\t<document>.....</document>
12303\t<document>.....</document>
10054\t<document>.....</document>
10034\t<document>.....</document>
As you can see, there are two values separated by a tab character. I need to:
index the first token (e.g. 10302, 12303...) as the ID
extract (and then index) some information from the second token (the XML document). In other words, the second token would be used with the xml filter to extract some information.
Is it possible to do that by separating the two values using the kv filter? Ideally I should end up, for each line, with a document like this:
id:10302
msg:<document>....</document>
I could use a grok filter, but I'd like to avoid any regex, as the field detection is very easy and can be accomplished with simple key-value logic. However, using plain kv detection I end up with the following:
"10302": <document>.....</document>
"12303": <document>.....</document>
"10054": <document>.....</document>
"10034": <document>.....</document>
and this is not what I need.
As far as I know, it is not possible to use kv for the job you want to do, since there is no possible key for the id (10302, 10303, 10304...). There is no possible key because there is nothing before the id.
This grok configuration would work, assuming each id + document is on the same line:
grok {
  match => { "message" => "^%{INT:ID}\t%{GREEDYDATA:msg}" }
}
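Since the second token is meant for the xml filter, a minimal sketch of the full filter chain could look like this (store_xml and the XPath expression are assumptions; adjust the XPath to the elements you actually need):
filter {
  grok {
    match => { "message" => "^%{INT:ID}\t%{GREEDYDATA:msg}" }
  }
  xml {
    source => "msg"
    store_xml => false
    # hypothetical XPath: copy the document's title element into its own field
    xpath => { "/document/title/text()" => "doc_title" }
  }
}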

conversion fields kibana and logstash

I am trying to convert the field "tmp_reponse" to an integer in my logstash config file as follows:
mutate {
  convert => {"TMP_REPONSE" => "integer"}
}
but Kibana shows me that it is still a string. I do not understand how I can perform the conversion so that I can use my field "tmp_response" as a metric field in Kibana.
Thank you for your help. I would also appreciate it if someone could explain how to work with metrics in Kibana and use fields as metric fields.
mutate{} will change the type of the field in logstash. If you added a stdout{} output stanza, you would see that it's an integer at that point.
How elasticsearch treats it is another problem entirely. Elasticsearch usually sets the type of a field based on the first input received, so if you sent documents in before you added the mutate to your logstash config, they would have been strings and the elasticsearch index will always consider that field to be a string.
The type may also have been defined in an elasticsearch template or mapping.
The good news is that your mutate will probably set the type when a new index is created. If you're using daily indexes (the default in logstash), you can just wait a day. Or you can delete the index (losing any data so far) and let a new one be created. Or you could rebuild the index.
Good luck.
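As a quick check, here is a minimal sketch of the stdout approach mentioned above (filter placement is an assumption): with the rubydebug codec, a converted field prints without surrounding quotes, which confirms it left logstash as an integer.
filter {
  mutate {
    convert => { "TMP_REPONSE" => "integer" }
  }
}
output {
  # prints each event to the console so the field type can be inspected
  stdout { codec => rubydebug }
}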

Reading Cassandra Map in Node.js

I have a table created using a map in Cassandra. Now I am trying to read the table from Node.js, and it returns an object for the map. Can I get the item count of the map and loop through it to get the items in the map?
table script
create table workingteam (teamid bigint primary key, status map)
You did not post a lot of details. First you will need to study the object Cassandra sends you. A good way to start would be to convert it to JSON format and dump it to the output through a log.
console.log("Cassandra sent: %j", object);
I'm guessing that in this object you will find attributes like connection parameters, host, client, etc., but also something iterable that will contain the keys and values.
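For what it's worth, here is a minimal sketch using the DataStax cassandra-driver (contact point, data center and keyspace names are assumptions): the driver returns a map column as a plain JavaScript object, so Object.keys() gives the item count and Object.entries() lets you loop over the items.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],        // assumption
  localDataCenter: 'datacenter1',      // assumption; required by newer driver versions
  keyspace: 'mykeyspace'               // assumption
});

client.execute('SELECT teamid, status FROM workingteam')
  .then(result => {
    result.rows.forEach(row => {
      const status = row.status || {};               // the map column as an object
      console.log('item count:', Object.keys(status).length);
      for (const [key, value] of Object.entries(status)) {
        console.log(key, '=>', value);
      }
    });
  })
  .finally(() => client.shutdown());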

logstash & kibana setting the @message property

The @message property seems to be the core property when using logstash & kibana. My json logger sends the data with the message as
{"msg":"some one did something"}
If I change it so it's
{"@message":"someone did something"}
the logstash server picks it up as "@fields.@message".
I am a bit confused about how I can set this property so it renders correctly.
I suspect that the input is reading events as json and not json_event. The difference is that json will add any fields under the @fields namespace. json_event will expect the full logstash event serialized as json.
The functionality you have is probably what you want. You typically don't want to be sending the full json_event if you don't have to. You can overwrite the @message field in logstash with the mutate filter.
mutate {
  type => 'json_logger'
  replace => ["@message", "%{msg}"]
  remove => "msg"
}
