I have set up an ELK stack on one server and Filebeat on two other servers to send data directly to Logstash.
The setup is working fine and I get the log results I need, but in the fields section of the Kibana UI (left side) I see a "host.hostname" field that contains the two servers' FQDNs, i.e. "ip-113-331-116-35.us-east-1.compute.internal" and "ip-122-231-123-35.us-east-1.compute.internal".
I want to alias or rename those values to Production-1 and Production-2 respectively in the Kibana UI.
How can I change those values without breaking anything?
If you need any code snippet, let me know.
You can use the translate filter in the filter block of your Logstash pipeline to rename the values:
filter {
  translate {
    field       => "[host][hostname]"
    destination => "[host][hostname]"
    # override is required because the destination field already exists
    override    => true
    dictionary  => {
      "ip-113-331-116-35.us-east-1.compute.internal" => "Production-1"
      "ip-122-231-123-35.us-east-1.compute.internal" => "Production-2"
    }
  }
}
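Note that the translate filter only rewrites events as they pass through the pipeline; documents already indexed in Elasticsearch will keep the original FQDN values.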
Since host.hostname is an ECS field, I would not suggest renaming this particular field.
In my opinion you have two choices:
1.) Create a pipeline in Logstash
You can set up a simple pipeline in Logstash where you use the mutate filter plugin with an add_field operation. This will create a new field on your event with the value of host.hostname. Here's a quick example:
filter {
  if [host][hostname] {
    mutate {
      add_field => { "your_cool_field_name" => "%{[host][hostname]}" }
    }
  }
}
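Depending on your Kibana version, you may also need to refresh the index pattern before the new field shows up in the field list on the left.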
2.) Set up a custom mapping/index template
You can define field aliases within your custom mappings. I recommend reading this article about field aliases.
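As a minimal sketch (assuming Elasticsearch 7.x, a filebeat-* index pattern, and an alias name I made up), a field alias in an index template would look like this. Keep in mind that an alias only exposes the field under another name for queries and display; it does not change the stored values:
PUT _template/hostname_alias_example
{
  "index_patterns": ["filebeat-*"],
  "mappings": {
    "properties": {
      "server_name": {
        "type": "alias",
        "path": "host.hostname"
      }
    }
  }
}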
I am working with Cosmos DB and I want to write a SQL query that returns a different name for a key in a document object.
To elaborate, imagine you have the following document in one container, with a "makeName" key inside the "make" object:
{
  "vehicleDetailId": "38CBEAF7-5858-4EED-8978-E220D2BA745E",
  "type": "Vehicle",
  "vehicleDetail": {
    "make": {
      "Id": "B57ADAAD-C16E-44F9-A05B-AAB3BF7068B9",
      "makeName": "BMW"
    }
  }
}
I want to write a query that displays a "vehicleMake" key in place of "makeName".
How do I give an alias to a nested object property?
The output should look like this:
{
  "vehicleDetailId": "38CBEAF7-5858-4EED-8978-E220D2BA745E",
  "type": "Vehicle",
  "vehicleDetail": {
    "make": {
      "Id": "B57ADAAD-C16E-44F9-A05B-AAB3BF7068B9",
      "vehicleMake": "BMW"
    }
  }
}
I have no idea how to write a Cosmos DB query that produces the above result.
Aliases for properties are similar to the way you'd create a column alias in SQL Server, with the as keyword. In your example, it would be:
SELECT c.vehicleDetail.make.makeName as vehicleMake
FROM c
This would return:
[
  {
    "vehicleMake": "BMW"
  }
]
Try this:
SELECT c.vehicleDetailId,
       c.type,
       { "make": { "Id": c.vehicleDetail.make.Id, "vehicleMake": c.vehicleDetail.make.makeName } } as vehicleDetail
FROM c
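This returns the document in exactly the shape requested in the question:
[
  {
    "vehicleDetailId": "38CBEAF7-5858-4EED-8978-E220D2BA745E",
    "type": "Vehicle",
    "vehicleDetail": {
      "make": {
        "Id": "B57ADAAD-C16E-44F9-A05B-AAB3BF7068B9",
        "vehicleMake": "BMW"
      }
    }
  }
]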
It uses the aliasing described in the documentation linked below. All of the aliasing examples I could find in the documentation or blog posts only show a single level of JSON output, but it turns out that you can nest an object (make) within an object (vehicleDetail) to get the behavior you want.
https://learn.microsoft.com/en-us/azure/cosmos-db/sql-query-aliasing
I have a CSV file which gets updated every hour. I need to upload it to Kibana, and I want to schedule Logstash so that Kibana gets updated every hour. I have searched many forums and found information about JDBC input scheduling, but nothing for CSV input.
You have to write your own Logstash pipeline configuration, based on how the input is read and where the data is output. Kibana is a visualization tool; data is generally ingested into Elasticsearch and then viewed in a Kibana dashboard. The pipeline configuration is read by Logstash once it starts up. A sample pipeline config, which reads CSV data from a Kafka topic and pushes it to ES, is given below.
input {
  kafka {
    id                => "test_id"
    group_id          => "test-consumer-group"
    topics            => ["test_metrics"]
    bootstrap_servers => "kafka:9092"
    consumer_threads  => 1
    codec             => line
  }
}
filter {
  csv {
    separator => ","
    columns   => ["timestamp", "field1", "field2"]
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # index names must not start with an underscore
    index => "my_metrics"
  }
}
Please refer to the link below to import data from CSV into Elasticsearch via Logstash and sincedb:
https://qbox.io/blog/import-csv-elasticsearch-logstash-sincedb
Refer to the link below for more CSV filter plugin configuration options:
https://www.elastic.co/guide/en/logstash/current/plugins-filters-csv.html#plugins-filters-csv-columns
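For the hourly-updated CSV file in the question, a minimal file-input pipeline might look like the following (a sketch: the path, column names, and index name are assumptions; the sincedb mechanism from the first link remembers how far the file has been read, so rows appended every hour get picked up automatically):
input {
  file {
    path           => "/path/to/data.csv"               # assumed location of the hourly-updated CSV
    start_position => "beginning"
    sincedb_path   => "/var/lib/logstash/sincedb_csv"   # tracks the read offset between restarts
  }
}
filter {
  csv {
    separator => ","
    columns   => ["timestamp", "field1", "field2"]      # assumed column names
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "my_metrics"
  }
}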
Hope it helps!
I want to override the item listing template file core/themes/classy/templates/dataset/item-list.html.twig for listing the fields field_slider_images and field_blog_tags, each of which holds multiple values.
I have selected "Unordered List" in the view.
Please check the attached image.
I have created the following files:
item-list--field-blog-tags.html.twig
item-list--field-slider-images.html.twig
But these are not used when the fields are rendered.
Only when I create item-list.html.twig is the template picked up.
However, the two fields have different data to style, and inside item-list.html.twig I am not able to get the name of the field whose data is currently being rendered.
I had a brief look at this, and it doesn't seem that 'item-list' has any template suggestions, which is quite unfortunate.
In this situation there are two options:
The first option is to create your own suggestion, which would accomplish exactly what you need.
You'll have to do something like this:
// Add a new variable to the theme with the suggestion name.
function hook_theme_registry_alter(&$theme_registry) {
  $theme_registry['item_list']['variables']['suggestion'] = '';
}

// Send a value to the newly added variable, to be used when building the suggestion.
function hook_ENTITY_TYPE_view(array &$build, $entity, $display, $view_mode) {
  // Add a condition here if the field exists or whatever; do the same for the other field.
  $build['field_slider_images']['#suggestion'] = 'field_slider_images';
}

// Use the newly added variable to build the suggestion.
function hook_theme_suggestions_THEME_HOOK(array $variables) { // THEME_HOOK = item_list
  $suggestions = array();
  if (isset($variables['suggestion'])) {
    $suggestions[] = 'item_list__' . $variables['suggestion'];
  }
  return $suggestions;
}
Now you should be able to use item-list--field-slider-images.html.twig.
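A stripped-down sketch of what that template could contain (the markup and class name are assumptions, based on the variables core's item-list.html.twig receives):
{# item-list--field-slider-images.html.twig #}
{% if items %}
  <ul{{ attributes.addClass('slider-images') }}>
    {% for item in items %}
      <li{{ item.attributes }}>{{ item.value }}</li>
    {% endfor %}
  </ul>
{% endif %}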
The second option is to do what others in core did: set a new #theme on the field:
function hook_ENTITY_TYPE_view(array &$build, $entity, $display, $view_mode) {
  // Add a condition here if the field exists or whatever; do the same for the other field.
  // The more specific suggestion must come first: the first hook in the
  // array that has an implementation wins.
  $build['field_slider_images']['#theme'] = array(
    'item_list__field_slider_images',
    'item_list',
  );
}
I want to retrieve a value from the snmptrap input.
The following log was generated while creating a loop:
{
"message" => "##enterprise=[1.3.6.1.4.1.9.9.187],#timestamp=##value=2612151602>, #varbind_list=[##name= [1.3.6.1.4.1.9.9.187.1.2.5.1.17.32.1.14.16.255.255.17.0.0.0.0.0.0.0.0.2], #value=\"\x00\x00\">, ##name=[1.3.6.1.4.1.9.9.187.1.2.5.1.3.32.1.14.16.255.255.17.0.0.0.0.0.0.0.0.2], #value=##value=1>>, ##name=[1.3.6.1.4.1.9.9.187.1.2.5.1.28.32.1.14.16.255.255.17.0.0.0.0.0.0.0.0.2], #value=\"\">, ##name=[1.3.6.1.4.1.9.9.187.1.2.5.1.29.32.1.14.16.255.255.17.0.0.0.0.0.0.0.0.2], #value=##value=3>>], #specific_trap=7, #source_ip=\"1.2.3.4\", #agent_addr=##value=\"\xC0\xA8\v\e\">, #generic_trap=6>"
}
I want to retrieve the #source_ip value from the message, so I tried to use:
mutate {
  add_field => { "source_ip" => ["#source_ip"] }
}
to get the #source_ip into a new field, but I still can't get the value.
If anyone knows how to deal with this, please help. Thanks.
The "#source_ip" information is not a field in what you've shown, but rather part of the [message] field. I would guess that the snmptrap{} input is not entirely happy with the message.
Given the example you have, you could run the message through the grok{} filter to pull out the "#source_ip" information.
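A minimal sketch of such a grok filter; the pattern is an assumption based on the sample message above (where the backslash-escaped quotes are just inspect-output escaping):
filter {
  grok {
    # pull the quoted IP that follows '#source_ip=' out of the raw message
    match => { "message" => '#source_ip="%{IP:source_ip}"' }
  }
}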
I stopped using the snmptrap{} input due to other processing issues. I now run snmptrapd and have it write a json log file that is then read by a simple file{} input in logstash.
I have a webservice which serves GET requests of the following pattern:
/v1/stores?name=<>&lat=23&lng=232....
There are a number of query parameters which the request can accept. Is it possible to get URL-specific information into Kibana through Logstash? What I really want is the average number of requests for each pattern, along with their max, min, and avg response times.
You would want something like this as part of your logstash.conf:
filter {
  grok {
    # Assumption: the request URI lives in a field called "request".
    # This captures everything after the '?' into a "param" field.
    match => { "request" => "%{URIPATH:uri_path}(?:\?%{GREEDYDATA:param})?" }
  }
  kv {
    source      => "param"
    field_split => "&"
  }
  # You might also need to urldecode {} the parameters.
}
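Once the individual parameters are split into their own fields, you can build the max/min/avg aggregations per request pattern in a Kibana visualization, assuming your logs also carry a response-time field to aggregate on.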