Missing Index Patterns - logstash

I'm missing some index patterns in Kibana and I've been trying to figure out why. I have installed Logstash, Elasticsearch and Kibana and started the services. How do I get logstash, apache-access etc. to show in this section? Only filebeat shows.
I've used the curl command against the localhost and port to list the indices, and only kibana and filebeat are shown there; apache-access and logstash are nowhere to be seen.
Can anyone guide me in the right direction to resolving this so that I can see 'logstash' and 'apache-access' under the patterns section?

Data is saved inside indices in the Elasticsearch cluster; in Kibana you define index patterns to show one or more of those indices at the same time.
In the left menu of your screenshot you'll find an item called "Index Management". All indices are listed there, so that is where you'll see the names of the indices that actually exist in your Elasticsearch cluster.
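The same list is available from the command line; a quick way to check (assuming Elasticsearch is on the default host and port) is:
curl 'localhost:9200/_cat/indices?v'
If no index whose name starts with logstash or apache-access appears in that output, the data is never reaching Elasticsearch under those names, and no Kibana index pattern will be able to find it.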
An index pattern in Kibana is just a (wildcarded) pattern to allow you to see the data.
On the top right of your screenshot there is a "+ Create Index Pattern" button; clicking it lets you define a new pattern that will live next to the existing one (filebeat-*).
Once you've defined a second one, you can choose which pattern is the default when you open Kibana, and the Discover page will show a dropdown for switching the active index pattern for your discovery at that time.
So in short, press the "Create Index Pattern" button twice: once entering logstash* as the pattern and once entering apache-access* as the pattern.
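If you prefer the command line, recent Kibana versions (this is an assumption about your version) expose a saved objects API that creates the same pattern the button does, for example:
curl -X POST 'localhost:5601/api/saved_objects/index-pattern' -H 'kbn-xsrf: true' -H 'Content-Type: application/json' -d '{"attributes": {"title": "logstash*"}}'
Repeat with apache-access* for the second pattern. Either way, the pattern is only useful if matching indices actually exist, so check Index Management (or the curl command above) first.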

Related

How do I group logs in Kibana/Logstash?

We have an ELK setup and Logstash is receiving all the logs from the Filebeat installed on the server. So when I open Kibana and it asks for an index, I put just a * for the index value, go to the Discover tab to check the logs, and it shows each line of the log in a separate expandable section.
I want to be able to group the logs based on the timestamp first and then on a common ID that is generated in our logs per request to identify it from the rest. An example of the logs we get:
DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
INFO [2018-11-23 11:27:33,152][298b364850d8] Some information
INFO [2018-11-24 11:31:20,407][b66a88287eeb] Some information
DEBUG [2018-11-23 11:31:20,407][b66a88287eeb] Some information
I would like to see all logs for request ID 298b364850d8 in the same dropdown, given they are continuous logs. Then it can break into a second dropdown, again grouped by request ID b66a88287eeb, in timestamp order.
Is this even possible or am I expecting too much from the tool?
OR if there is a better strategy to grouping of logs I'm more than happy to listen to suggestions.
I have been told by a friend that I could configure Logstash to group logs based on some regex and such, but I just don't know where and how to configure it to do the grouping.
I am completely new to the whole ELK stack, so bear with my questions, which might be quite elementary in nature.
Your question is truly a little vague and broad as you say. However, I will try to help :)
Check the index name that you define in the Logstash output. That is the index that needs to be defined in Kibana - not * (see the sketch below).
Create an index pattern to connect Kibana to Elasticsearch. This will pick up the fields of the logs and will allow you to filter as you want.
I recommend using a GUI tool (like Cerebro) to better understand what is going on in your Elasticsearch cluster. It will also give you a better idea of the indices you have there.
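If you do want Logstash itself to pull the request ID out of each line, as your friend suggested, a grok filter is the usual tool. The following is only a minimal sketch written against the example lines in the question; the file path, the field names (request_id, logtime) and the index name are assumptions, and it should be merged into your existing pipeline rather than run as a second, separate output:

cat > /etc/logstash/conf.d/10-request-id.conf <<'EOF'
filter {
  # e.g. DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
  grok {
    match => { "message" => "%{LOGLEVEL:loglevel} \[%{TIMESTAMP_ISO8601:logtime}\]\[%{DATA:request_id}\] %{GREEDYDATA:logmessage}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # this explicit index name is what you would then use for the Kibana index pattern
    index => "myapp-logs-%{+YYYY.MM.dd}"
  }
}
EOF

Once request_id is a field of its own, Kibana can filter and sort on it instead of you scrolling through every expandable line.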
Good Luck
You can also use a timestamp filter and a search query, as in the sample image below, to narrow the results to what you want.
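For example, with the hypothetical request_id field from the sketch above, a Discover search such as request_id:"298b364850d8" combined with a time-range filter shows only that request's lines, ordered by timestamp.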

No data is appearing in SSMS even though my job is running without errors

Problem: No data is appearing in SSMS (Sql Server Management Studio)
I don't see any errors appearing and my job diagram successfully shows a process from input to output.
I'm trying to use the continuous export feature of Azure Application Insights, Stream Analytics, and SQL Database.
Here is my query:
SELECT
    A.context.data.eventTime as eventTime,
    A.context.device.type as deviceType,
    A.context.[user].anonId as userId,
    A.context.device.roleInstance as machineName
INTO DevUserlgnsOutput -- Output Name
FROM devUserlgnsStreamInput A -- Input Name
I tested the query with sample data using the output box below the query, and it returned what I expected, so I don't think the query itself is the issue.
I also know that the custom events I'm trying to display the attributes of have occurred since I began the job. My job is also still running and has not stopped since its creation.
In addition, I would like to point out that the monitoring graph on the stream analytics page detects 0 inputs, 0 outputs, and 0 runtime errors.
Thank you in advance for the help!
Below are some pictures that might help:
Stream Analytics Output Details
The Empty SSMS after I clicked "display top 1000 rows," which should be filled with data
No input events, output events, or runtime errors for the stream analytics job
I've had this issue twice with 2 separate application insights, containers, jobs, etc. Both times I solved this by editing the path pattern of my input(s) to my job.
To navigate to the necessary blade to make the following changes:
1) Click on your stream analytics job
2) Click "inputs" under the "job topology" section of the blade
3) Click your input (if multiple inputs, do this to 1 at a time)
4) Use the blade that pops up on the right side of the screen
The 4 potential solutions I've come across are (A-D below):
A. Making sure the path pattern you enter is plain text with no hidden characters (sometimes copying it from the container on Azure made it not plain text).
Steps:
1) Cut the path pattern you have already in the input blade
2) Paste it into Notepad and re-copy it
3) Re-paste it into the path pattern slot of your input
B. Append your path pattern with /{date}/{time}
Simply type this at the end of your path pattern in the blade's textbox (see the combined example after D).
C. Remove the container name and the "/" that immediately follows it from the beginning of your path pattern (see picture below)
Edit path pattern
Should be self-explanatory after seeing the pic.
D. Changing the date format of your input to YYYY-MM-DD in the drop-down box.
Should also be self-explanatory (look at the above picture if not).
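To make B, C and D concrete, here is a purely illustrative path pattern; the application name, instrumentation key and telemetry folder are made up, and yours will come from your own continuous export container:
myappinsights_1234abcd/Event/{date}/{time}
i.e. no leading container name or "/", {date}/{time} appended at the end, and the date format set to YYYY-MM-DD in the drop-down.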
Hope this helps!!

Working with large number of fields in kibana

Is there a way to filter through the entries in the "Fields" dropdown in Kibana under the Visualize tab?
My data has over 1000 fields, so it's not convenient having to scroll through a really long dropdown menu (like the one below) just to pick a field that's buried in there somewhere.
Is there a way to make it searchable like how it is in the discover page for indexes and for fields - as seen below:
I am open to other suggestions as well - if there is a different way to achieve the same result - i.e., to pick fields to visualize when there are a lot of fields to pick from.
I am using Kibana 5.4.1 on Windows
Go to https://github.com/elastic/kibana and clone the repository; a searchable field list is available in version 6.

How can I crawl but not index web pages in OpenSearchServer?

I'm using OpenSearchServer to provide search functionality on a web site. I want to crawl all pages on the site for links to follow but I want to exclude some pages from the index. I can't work out how to do this.
Specifically the website includes a shop that has its own product search and I am keeping this search for products and categories. The product pages have URLs like http://www.thesite/p/123 so I don't want to include any page like this in the search results. However some product pages reference background info pages and I want these to be included in the search index.
The problem I have is that the filter (described below) has no effect on the results - it doesn't filter out the /p/ and /c/ results. If I change the filter by unticking the negative box, I get no results, so it seems to be either the contents of the field or the filter criteria that is causing the problem.
I've tried adding a negative filter to the default query called search in the Query > Filter tab on the index with url:"http://www.thesite/p/*"
but it seems that wildcards are not supported for query filters although they are supported for Crawler > Exclusion list filters.
I've tried adding a new field called urlField in Schema > Fields and populating it using an analyzer configured using the Whitespace Tokenizer and a regular expression (http://www.thesite/(c|p)/). When I use the Test button it seems to generate two tokens for my test URL http://www.thesite/p/123:
http://www.thesite/p/
p
I'd hoped to be able to use the first one in a Query > Filter to exclude all the shop results and optionally be able to use the p (for product) or c (for category) if I need to search the product pages sometime in the future.
The urlShop field in the schema is set up as follows:
Indexed: yes
Stored: no (because I don't need the field back, just want to be able to filter on it)
TermVector: No
Analyzer: urlShop
Copy of: url
I've added urlFilter:"http://www.thesite/p/" to Query > Filters with the negative box ticked.
This seems to have no effect on the results when I use the default renderer.
To see whether it affects the returned results at all, I unticked the negative box in the query filter, and then I get no results in the default renderer. This leads me to believe that the urlShop field is not being populated, but I'm not sure how to check this directly.
I would like to know whether there is an easier way to do this but if my approach makes sense in the context of OpenSearchServer please can you help me identify what's wrong?
The website is running under IIS and OpenSearchServer will be configured on the same server running in Tomcat.
Finally figured this out...
Go to query and hit edit for your configured query. Then go to the filters tab. Add a query filter like this:
urlExact:"http://myurltoexclude*"
Check the "negative" box. Click add.
Now make sure to click "save" in the tiny little button on the right-hand side. This is the part I missed. The URLs are still in the DB and still get crawled, but at least they aren't returned in results.
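Applied to the URLs in the question, that would presumably mean two such negative filters, urlExact:"http://www.thesite/p/*" and urlExact:"http://www.thesite/c/*", each with the "negative" box checked and the query saved afterwards.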

kibana - display average times

I am using Kibana 3 to display my nginx logs, which include the request_time. I would like a graph displaying the average request times over the last x seconds in Kibana, but am not sure how to do this. Is this easily done, or do I need to push it out to Graphite?
You're going to want to find the histogram settings panel. There should be a gear icon labelled "configure" or some such. Once there, find the panel's "mode" setting:
Pick "mean" mode, then select the field you'd like to show. Note you must select a field, and that field must be numeric, or the histogram will throw an error.
You can try it quickly at the live demo; bytes is a good field to use.
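Under the hood this is just an average per time bucket, so if you ever need the same numbers outside Kibana you can ask Elasticsearch directly. A rough sketch (the index name and field names are assumptions based on the question):

curl -s 'localhost:9200/nginx-*/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "size": 0,
  "aggs": {
    "per_interval": {
      "date_histogram": { "field": "@timestamp", "interval": "30s" },
      "aggs": { "avg_request_time": { "avg": { "field": "request_time" } } }
    }
  }
}'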
