How can I find the files generated by Logstash? - logstash

I'm a beginner with the ELK stack. I configured Logstash, but when I search with Elasticsearch I get no results, even though I should: my parse works very well on grokdebug.
I do my search as follows:
"http://localhost:9200/logstash-2016.03.14/_search?q=*"
I wanted to know whether I can see the files Logstash generates, and whether it actually produced any results.
Note that I already tried searching Elasticsearch with a JSON file I indexed directly, and that works.
The problem is on the Logstash side.
Thanks

Logstash does not generate any files (apart from its configuration).
To debug your Logstash instance, you can:
Use the --verbose and/or --debug flags
Use -l "file.log" to write the logs to file.log (the default is stdout)
Use the stdout output plugin and inspect the results (see the sketch below)
Also, did you use the elasticsearch output plugin?
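As a minimal sketch, an output section that prints every parsed event to the console while still sending it to Elasticsearch could look like this (the hosts value is an assumption based on the URL you queried):

output {
  # Print each event to the console so you can check that parsing worked
  stdout { codec => rubydebug }
  # Required for the documents to reach Elasticsearch at all
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}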

Related

Parsing a Jenkins build log with a Logstash grok pattern and loading it into Elasticsearch

I am very new to Logstash and ELK in general. I need to write a grok pattern for a Jenkins build console log. My requirements are below:
"Started by user User_SMS" => from this line I have to extract the username "User_SMS"; the line starts with the text "Started by user".
Similarly, from the line "git checkout -f 07999b25163b658686558d9a1d05dd99c30c6059 # timeout=10" I have to extract the hexadecimal checkout id 07999b25163b658686558d9a1d05dd99c30c6059 when the line starts with "git checkout -f".
From the line "Finished: SUCCESS" I have to find the build status. The line starts with "Finished:" and I have to capture the value "SUCCESS"; it would be "FAILURE" in some other builds.
Please help me parse the log using grok.
The index in Elasticsearch should have the fields above: user_name, checkout_id, build_status, etc.
I am unable to create the grok pattern to parse this Jenkins log. Please guide me with this.
(Screenshot: Jenkins build log)
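A rough sketch of a grok filter for the three lines described above; the field names user_name, checkout_id and build_status come from the question, while the patterns themselves are untested assumptions about the log format:

filter {
  grok {
    # Try each pattern in turn; only the one matching the current line applies
    match => {
      "message" => [
        "^Started by user %{GREEDYDATA:user_name}",
        "^git checkout -f %{BASE16NUM:checkout_id}",
        "^Finished: %{WORD:build_status}"
      ]
    }
  }
}

Lines that match none of the patterns will simply be tagged _grokparsefailure, which you can drop or ignore downstream.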

How to provide a default value for analytics to the jhipster command?

I am trying to build the JHipster application automatically; kindly help me give a default value to the analytics question while building from the JDL file.
jhipster jdl file.jdl --no-insight
The above command will build a monolith application; after some steps it asks a question about analytics. How do you provide a default value for analytics in the jhipster command or the JDL file?
(Screenshot: insight question)
(Screenshot: analytics question)
I see the options below, but I don't see anything for analytics:
-V, --version output the version number
--blueprints <value> A comma separated list of one or more generator blueprints to use for the sub generators, e.g. --blueprints kotlin,vuejs
--force-insight Force insight
--no-insight Disable insight
--force Override every file (default: false)
--dry-run Print conflicts (default: false)
--whitespace Whitespace changes will not trigger conflicts (default: false)
--bail Fail on first conflict (default: false)
--skip-regenerate Don't regenerate identical files (default: false)
--skip-yo-resolve Ignore .yo-resolve files (default: false)
-d, --debug enable debugger
-h, --help display help for command
Set an environment variable with export NG_CLI_ANALYTICS=ci before running jhipster jdl app.jdl --no-insight.
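Putting that together, a non-interactive run would look something like this (NG_CLI_ANALYTICS=ci is the value suggested above; the JDL file name is whatever yours is):

export NG_CLI_ANALYTICS=ci
jhipster jdl file.jdl --no-insight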

How can I change the encoding to UTF-8 for Linux redirection?

Currently I use a VM in GCP.
I want to get Stackdriver logs into a text file (my logs include Korean/Unicode characters).
When I read the logs using the gcloud command, I can read them fine (the Korean is also readable).
But when I write them to a file using the redirect operator (>), all the Korean characters are converted to '???'.
My command is here:
gcloud beta logging read --project=[my project] '[filters]' > log
How can I read the Korean characters via the file?
Thanks :)
This can be fixed by setting PYTHONIOENCODING to utf-8.
You can do this by prefixing the command: PYTHONIOENCODING=UTF-8 gcloud ...
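Applied to the command from the question, that would be (same placeholders as above):

PYTHONIOENCODING=UTF-8 gcloud beta logging read --project=[my project] '[filters]' > log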

Output debug package logs inside a file

I'm using the debug npm module to log stuff; is there a way to log to a file programmatically?
Right now I'm doing DEBUG=* node myApp.js > abc.log. How can I log to abc.log by simply running DEBUG=* node myApp.js, while also outputting to stderr?
I didn't find any package that does this.
The package doesn't seem to provide a built-in feature for this, but it gives you a hook to customise how logs are emitted.
There is an example in the README here.
Note: the example is a bit confusing, because it shows you how to replace writing to stderr with ... writing to stdout using the console!
So what you should do at application startup is:
Open a stream that writes to a file. There's a tutorial here if you need help with this.
Override log.log() as explained in the docs to write to your file instead of using console.log().
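A minimal sketch of that idea, assuming the output should go to abc.log and still be echoed to stderr (the file name and the formatting are assumptions, not part of the debug API):

const fs = require('fs');
const util = require('util');
const debug = require('debug');

// Append to the same file the question currently redirects to (assumption)
const logFile = fs.createWriteStream('abc.log', { flags: 'a' });

// Override the default output function used by every debug instance:
// write the formatted line to the file and echo it to stderr as before.
debug.log = (...args) => {
  const line = util.format(...args) + '\n';
  logFile.write(line);
  process.stderr.write(line);
};

const log = debug('myApp');
log('application started'); // printed only when DEBUG=myApp (or DEBUG=*) is set

If colour escape codes end up in the file, running with DEBUG_COLORS=0 keeps the output plain.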

How to rerun Logstash jdbc input plugin?

I am using Logstash 2.3 on Ubuntu 14.04, not as a service (I just extracted the tar.gz). I successfully ran the Logstash jdbc input plugin and fetched some data from my SQL Server. Now I want to re-run the same thing; I forgot to set record_last_run to false during the test run.
When I try to re-run it, Logstash just sits there and doesn't fetch the data again. How can I get it to read the data again?
I tried to locate .logstash_jdbc_last_run, with no luck, in /home, /root and even /tmp. When I echo $USER_HOME it shows an empty line.
You can try setting clean_run to true; a minimal config sketch follows the documentation excerpt below.
https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html#plugins-inputs-jdbc-clean_run
clean_run
Value type is boolean
Default value is false
Whether the previous run state should be preserved
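As a sketch, assuming a typical jdbc input for SQL Server (the connection details are placeholders, not taken from the question):

input {
  jdbc {
    jdbc_driver_library => "/path/to/sqljdbc.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://..."
    jdbc_user => "..."
    statement => "SELECT * FROM ..."
    # Ignore any persisted sql_last_value so the statement runs from scratch again
    clean_run => true
  }
}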

Resources