I am trying to install Logstash as a Windows service. Everything works when I run it manually from CMD like so:
C:\Elastic\Logstash\bin\logstash -f c:\Elastic\Logstash\config\logstash-sample.conf
I can see that file changes are picked up and posted to the console (the .conf file outputs to the console).
However, when I install Logstash as a Windows service:
sc create Logstash binpath="\"C:\Elastic\Logstash\bin\logstash\" -f \"c:\Elastic\Logstash\config\logstash-sample.conf\""
It creates the Windows service, but the service fails when I start it:
Logstash log:
[2019-04-15T14:40:29,605][ERROR][org.logstash.Logstash ]
java.lang.IllegalStateException: Logstash stopped processing because
of an error: (SystemExit) exit
When I install Logstash with NSSM as below, the service runs, but Logstash does not actually process anything:
nssm.exe install logstash "C:\Elastic\Logstash\bin\logstash.bat" "agent -f C:\Elastic\Logstash\config\logstash-sample.conf"
Found the solution:
The problem I was having was due to the "agent" keyword, which is no longer a valid subcommand in recent Logstash versions. In CMD I ran this:
nssm edit logstash
In the NSSM edit window that opened, I removed "agent" from the Arguments field, leaving only -f and the path to the .conf file.
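The same change can be made without the GUI; a rough equivalent using nssm set with the paths from the question (adjust to your install) would be:
nssm.exe set logstash AppParameters "-f C:\Elastic\Logstash\config\logstash-sample.conf"
nssm.exe restart logstash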
I'm trying to start graphhopper using pm2. graphhopper is a Java application, and the way I launch it in the terminal is by going to its folder and entering the following command:
java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
The application works fine when run from the command line, but I haven't succeeded in running it as a service with pm2. The config file I'm using is this one (started with pm2 start config.json):
{
  "apps": [
    {
      "name": "graphhopper",
      "cwd": ".",
      "script": "/usr/bin/java",
      "args": [
        "-jar",
        "/home/myyser/graphhopper/map-matching/matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar",
        "server",
        "config.yml"
      ],
      "log_date_format": "YYYY-MM-DD HH:mm Z",
      "exec_interpreter": "",
      "exec_mode": "fork"
    }
  ]
}
I'm 100% sure that what I'm getting wrong here is the way I'm passing the "server" and "config.yml" parameters. Looking at pm2 logs graphhopper, I can see that those parameters are not being recognized at all. I've tried tweaking the way they are passed, but I haven't managed to figure out the right solution. I know how to start a Java application using pm2 when it takes no parameters, but how can I do it with a Java application that has parameters, as in the case of graphhopper?
As stated in the comments, this issue can be solved by creating a bash script and running it with pm2 instead of running the Java application directly. The bash script used was graphhopper.sh, as follows:
#!/bin/bash
java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
And to start it as a service with pm2:
pm2 start graphhopper.sh --name=graphhopper
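Note that the script uses paths relative to the graphhopper checkout, so it only works if pm2 is started from that directory. A minimal sketch that pins the working directory inside the script (the checkout path is taken from the question's JSON config and may differ on your machine):
#!/bin/bash
# cd into the graphhopper checkout so the relative jar path and config.yml resolve
cd /home/myyser/graphhopper/map-matching
exec java -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml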
You can also run a fat jar directly in pm2 - use two dashes to separate the command args:
pm2 start java -- -jar matching-web/target/graphhopper-map-matching-web-1.0-SNAPSHOT.jar server config.yml
My Java app worked fine - just use a single args string, not an array.
apps:
  - name: 'admin'
    script: '/opt/homebrew/opt/openjdk@11/bin/java'
    args: '-jar ./wweevvAdmin/target/wweevvAdmin-1.0-SNAPSHOT.jar'
    instances: '1'
    autorestart: true
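Assuming the snippet above is saved as, say, ecosystem.yml (the file name is just an example), it can be started the usual way:
pm2 start ecosystem.yml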
I'm new to ELK and I'm running into issues while running Logstash. I ran Logstash as described in the link below:
https://www.elastic.co/guide/en/logstash/current/advanced-pipeline.html
But when I run Filebeat and Logstash, Logstash reports that it is running successfully on port 9600, while Filebeat gives this:
INFO No non-zero metrics in the last 30s
Logstash is not getting any input from Filebeat. Please help.
The filebeat.yml is:
filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/file/logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5043"]
and I ran this command
sudo ./filebeat -e -c filebeat.yml -d "publish"
The Logstash config file is:
input {
  beats {
    port => "5043"
  }
}
output {
  stdout { codec => rubydebug }
}
Then I ran these commands:
1) bin/logstash -f first-pipeline.conf --config.test_and_exit - this gave warnings
2) bin/logstash -f first-pipeline.conf --config.reload.automatic - this started Logstash on port 9600
I couldn't proceed after this, since Filebeat keeps giving the INFO message:
INFO No non-zero metrics in the last 30s
And the ELK version used is 5.1.2
The registry file stores the state and location information that Filebeat uses to track where it last read from.
So you can try resetting or deleting the registry file:
cd /var/lib/filebeat
sudo mv registry registry.bak
sudo service filebeat restart
I also faced this issue and solved it with the commands above.
Filebeat keeps a per-file read offset and expects new lines to be appended over time (like a log file), so a file it has already processed once will not be re-read.
Also check the tail_files option: when it is enabled, Filebeat starts reading new files at the end instead of the beginning, so leave it disabled if you want the existing contents processed.
The documentation's notes about re-processing a file are also worth reading, as that can come into play during testing.
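For reference, a prospector sketch matching the question's filebeat.yml with tail_files spelled out explicitly (false is the default):
filebeat.prospectors:
- input_type: log
  paths:
    - /path/to/file/logstash-tutorial.log
  # false (default): read the file from the beginning
  # true: start at the end and only pick up newly appended lines
  tail_files: false
output.logstash:
  hosts: ["localhost:5043"]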
I am trying to install the latest version of Logstash, i.e. 5.1.1, on Windows (Windows 7 Professional).
I unzipped the Logstash archive to the path C:\Program Files\logstash-5.1.1. Now I try to test the Logstash installation with this command: logstash -e 'input { stdin { } } output { stdout {} }'
But the following error is shown when this command is run:
The following is mentioned on their site in the installation guide section:
Do not install Logstash into a directory path that contains colon (:) characters
Is this the reason I am getting this error?
If yes, it seems to me that there is no way to avoid a directory path with a colon in a Windows environment (the drive letter always contains one). How do I get around this problem?
If not, what might be the reason, and how do I fix it?
I had posted the same question on the Logstash forum.
I got confirmation that Windows 7 is not supported.
I have installed Logstash on Windows 10 and am able to run this:
logstash -e 'input { stdin { } } output { stdout {} }'
Try the following; it may solve your issue, and let me know if it does:
Add an environment variable named LS_SETTINGS_DIR and point it at the Logstash config directory; in my case it is C:\Users\mrizwan\Downloads\ELK\logstash-5.1.1\config
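One way to set it from a command prompt, using the path from this answer, would be something like:
setx LS_SETTINGS_DIR "C:\Users\mrizwan\Downloads\ELK\logstash-5.1.1\config"
Open a new cmd window afterwards so the variable is picked up before running Logstash again.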
I don't know if you've solved this problem yet.
I experienced the same problem.
The cause was a space in the Logstash installation path, which leads to a URL-decoding error.
Find out where the space is in your Logstash installation path and move Logstash to a path without spaces.
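For example, from a command prompt (the target path C:\logstash-5.1.1 is just an illustration):
move "C:\Program Files\logstash-5.1.1" C:\logstash-5.1.1
cd C:\logstash-5.1.1\bin
logstash -e "input { stdin { } } output { stdout {} }"
Note the double quotes around the -e string; cmd.exe handles double quotes more reliably than single quotes.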
Since the Logstash process on Windows gets closed as soon as the remote connection to the machine is lost or the user logs off, how can I keep the Logstash process running continuously in the background on Windows?
Use the Non-Sucking Service Manager (NSSM) to install Logstash as a Windows service. Services can be started at boot and run without an active user login.
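A sketch of what that looks like from an admin prompt, assuming the same layout as in the first question above (adjust paths to your install):
nssm.exe install logstash "C:\Elastic\Logstash\bin\logstash.bat" "-f C:\Elastic\Logstash\config\logstash-sample.conf"
nssm.exe start logstash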
You can use the Windows Service Wrapper (https://github.com/kohsuke/winsw).
Simply put the winsw.exe into the Logstash bin directory, rename it to logstash.exe, and do the same for the configuration XML file, so that you have a logstash.exe and a logstash.xml file in the bin directory of Logstash.
Now you need to adjust the xml file a bit. Mine looks like this (tweak it to your needs):
<configuration>
  <id>logstash</id>
  <name>Logstash</name>
  <description>Logstash from elastic.co</description>
  <executable>D:\logstash\bin\logstash.bat</executable>
  <arguments></arguments>
  <serviceaccount>
    <domain>local</domain>
    <user>waffel</user>
    <password>XXX</password>
  </serviceaccount>
  <onfailure action="restart" delay="10 sec"/>
  <onfailure action="restart" delay="20 sec"/>
  <onfailure action="none" />
  <resetfailure>1 hour</resetfailure>
  <priority>Normal</priority>
  <stoptimeout>15 sec</stoptimeout>
  <stopparentprocessfirst>false</stopparentprocessfirst>
  <startmode>Automatic</startmode>
  <waithint>15 sec</waithint>
  <sleeptime>1 sec</sleeptime>
  <logpath>D:\logstash\logs</logpath>
  <log mode="append"/>
  <env name="JAVA_HOME" value="D:\jdk8x64" />
</configuration>
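If the service should load a specific pipeline, the <arguments> element can carry the usual command-line flags, for example (the config path here is an assumption):
<arguments>-f D:\logstash\config\logstash.conf</arguments>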
Then you can simply run from cmd (as admin):
logstash.exe test
Or to install the service
logstash.exe install
and then you can run from cmd (or service management)
logstash.exe start
logstash.exe stop
You may watch the log files for potential errors. For me it works fine with Logstash 5.5.1.
After updating StrongLoop to v2.10, slc stopped writing logs.
I also couldn't make the app start in production mode.
/etc/init/app.conf
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
env NODE_ENV=production
script
  exec slc run /home/ubuntu/app/ \
    -l /home/ubuntu/app/app.log \
    -p /var/run/app.pid
end script
Can anybody check my upstart config or provide another working copy?
Were you writing the pid to a file so that you can use it to send SIGUSR2 to the process to trigger log re-opening from logrotate?
Assuming you are using Upstart 1.4+ (Ubuntu 12.04 or newer), you would be better off letting slc run log to its stdout and letting Upstart take care of writing it to a file, so that log rotation is done for you:
#!upstart
description "StrongLoop app"
start on startup
stop on shutdown
# assuming this is /etc/init/app.conf,
# stdout+stderr logged to: /var/log/upstart/app.log
console log
env NODE_ENV=production
exec /usr/local/bin/slc run --cluster=CPUs /home/ubuntu/app
The log rotation for "free" is nice, but the biggest benefit of this approach is that Upstart can log errors that slc run reports even if they are a crash while trying to set up its internal logging, which makes debugging a lot easier.
Aside from what it means to your actual application, the only effect NODE_ENV has on slc run is to set the default number of cluster workers to the number of detected CPU cores, which literally translates to --cluster=CPUs.
Another problem I find is the node/npm path prefix not being in the $PATH as used by Upstart, so I normally put the full paths for executables in my Upstart jobs.
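Alternatively, you can pin the PATH inside the job instead of hard-coding full executable paths; a rough sketch, assuming node and slc are installed under /usr/local:
# near the top of /etc/init/app.conf
env PATH=/usr/local/bin:/usr/bin:/bin
exec slc run --cluster=CPUs /home/ubuntu/app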
Service Installer
You could also try using strong-service-install, which is a module used by slc pm-install to install strong-pm as an OS service:
$ npm install -g strong-service-install
$ sudo sl-svc-install --name app --user ubuntu --cwd /home/ubuntu/app -- slc run --cluster=CPUs .
Note the spaces around the -- before slc run