I am trying to run logstash on my Debian machine. The config file is simple for testing purposes:
input {
  stdin {}
}
output {
  file {
    path => "/tmp/test_logstash"
  }
}
When I run the command sudo /etc/init.d/logstash start I get the output logstash started.
Now I type some sample input on the command line, such as ls -lah, which should be written to /tmp/test_logstash as configured in the config file.
But nothing is written, and when I check the status of logstash I get the output logstash is not running.
All log files in /var/log/logstash are empty files.
When I run /opt/logstash/bin/logstash -f /etc/logstash/conf.d everything works fine, but I need to run it as a service in the background.
I am new to using logstash and maybe it's something very easy to solve but I couldn't find any solution yet.
It would be great if someone has a solution for this.
EDIT:
The background is that I want to install and start logstash from an Ansible playbook. With /opt/logstash/bin/logstash -f /path/to/config the playbook hangs, as it waits for the command to finish (which will never happen, because logstash has to be quit with Ctrl+D). Maybe there is an easier solution for that.
EDIT 2:
The owner of /opt/logstash directory is the user logstash with group logstash. The init.d startup script for logstash is simply:
#!/bin/bash
/opt/logstash/bin/logstash -f /etc/logstash/conf.d
Thanks in advance.
Just use this command:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d &
This will start the process in background.
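Note that a process started with a bare & still belongs to the login session, so it can be killed by the SIGHUP sent when the shell exits. A more robust sketch combines nohup with a PID file; here `sleep 30` stands in for the actual /opt/logstash/bin/logstash -f /etc/logstash/conf.d command so the pattern can be tried safely:

```shell
# `sleep 30` stands in for the real logstash command.
# nohup makes the process immune to the SIGHUP sent on logout;
# the saved PID lets an init script implement stop/status later.
nohup sleep 30 > /tmp/logstash.out 2>&1 &
echo $! > /tmp/logstash.pid

# status check: signal 0 tests for existence without killing anything
kill -0 "$(cat /tmp/logstash.pid)" && echo "running"
```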
Check the Logstash log file for errors:
cat /var/log/logstash/logstash-plain.log
In my case, I ran this:
chmod -R 777 /var/lib/logstash
BACKGROUND
I would like to explain the scenario properly here.
I am running Jenkins 2.73.3 on my cloud server with Ubuntu 16.04.
Currently, there are 3 users in the server:
root
develop-user (which I created for many purposes such as testing and deployment)
jenkins (which was created by Jenkins itself, of course; I also added this jenkins user to the sudoers group)
PROBLEM
I have a bash script that I am calling from a build step in Jenkins. Within this bash script, there is a nohup command that calls a separate deployScript in the background, such as:
#!/bin/bash
nohup deployScript.sh > $WORKSPACE/app.log 2>&1 & echo $! > save_pid.txt
After the build step is completed, I see that a PID is generated inside save_pid.txt, but app.log is surprisingly empty. I can't kill any process with this generated PID, so apparently no process was created in the first place. Also, deployScript.sh does not seem to have any effect at all. It's just not working. This happens every time I run the build in Jenkins. I can assure you that there is nothing wrong with deployScript.sh.
I have tried running this bash script with the develop-user manually without Jenkins and it works perfectly. Contents are written to the log file and also I can use the generated pid to kill the process. I have also tested this in my local environment and it works.
QUESTION
I have been looking at this for days. What might be the root cause here? Where can I look to see some logs or other info? How can a PID be generated while the log file stays empty? Is it a permission issue with the jenkins user? Please help.
You can use the line below inside the Execute shell build step in Jenkins to run the command in the background without the process being killed.
BUILD_ID=dontKillMe <command> &
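For context, Jenkins' ProcessTreeKiller identifies a build's processes by the BUILD_ID value in their environment; overriding it for one command exempts that command from cleanup. A minimal illustration of how the variable is overridden for just the one child process:

```shell
# The child sees the overridden BUILD_ID, so Jenkins'
# ProcessTreeKiller no longer matches it to the build.
BUILD_ID=dontKillMe sh -c 'echo "BUILD_ID is $BUILD_ID"'
# prints: BUILD_ID is dontKillMe
```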
So, it turned out to be a permission issue, and the script also wasn't executable, as pointed out in the comments above.
So, now the bash script looks like below:
#!/bin/bash
sudo chmod a+x deployScript.sh
sudo nohup deployScript.sh > $WORKSPACE/app.log 2>&1 & echo $! > save_pid.txt
This works.
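For anyone debugging a similar setup, the pattern can be reproduced without Jenkins using a trivial stand-in for deployScript.sh (all paths below are throwaway examples):

```shell
# Stand-in for deployScript.sh: writes one line, then exits.
cat > /tmp/fake_deploy.sh <<'EOF'
#!/bin/sh
echo "deploy started"
EOF
chmod +x /tmp/fake_deploy.sh

# Same pattern as the build step: note the explicit path -- a bare
# `deployScript.sh` is only found if its directory is on $PATH,
# which is one reason the original command could silently do nothing.
nohup /tmp/fake_deploy.sh > /tmp/app.log 2>&1 &
echo $! > /tmp/save_pid.txt

wait    # let the background job finish before reading the log
# the log should now contain "deploy started"
```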
I am trying to run a shell script at system login. To test, I used an example script with two lines:
#!/bin/bash
echo "Hello World"
I Followed all the instructions mentioned on this website http://www.cyberciti.biz/tips/linux-how-to-run-a-command-when-boots-up.html
I even did some extra steps such as editing /etc/rc.local, but still, when I log in, I do not see the Hello World output in the terminal.
Can anyone please explain what is wrong here, or maybe I am missing something?
It looks to me like Ubuntu 16.04 is a systemd system,
which means you should create a systemd service to run whatever you'd like at startup.
Look here https://wiki.ubuntu.com/SystemdForUpstartUsers#Example_Systemd_service
After you have made your service, use systemctl to enable it at boot.
sudo systemctl enable mycustom.service
To start the service.
sudo systemctl start mycustom.service
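For reference, a minimal unit file could look like the sketch below, saved as /etc/systemd/system/mycustom.service (the service name matches the commands above; the description and script path are placeholders you'd replace):

```ini
[Unit]
Description=Run my startup script
After=network.target

[Service]
Type=simple
ExecStart=/path/to/your/script.sh

[Install]
WantedBy=multi-user.target
```

After creating or editing the file, run sudo systemctl daemon-reload so systemd picks up the change.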
You can also schedule a cron job to run once after a reboot. The crontab line needs to be "@reboot root /path/script.sh" (the user field is only valid in the system crontab, e.g. /etc/crontab; you can specify whatever user you want to run the script). To make the result easier to find, you might make the second line of the script something like
echo "Hello World" > /root/hello.txt
Add the following line to /etc/profile, using the echo command as follows:
echo "Hello World"
Then execute the following command
source /etc/profile
I'm new to cron and started using Whenever gem to perform scheduled tasks. However, sometimes they stall and I have to restart the app for it to pick up again. So, I wanted to inspect the logs to see if there are any exceptions that might cause it.
According to this wiki, I put a line in my schedule.rb setting the output location:
set :output, "/var/log/cron.log"
But this file is always empty.
I tried doing it manually from the terminal with /bin/bash -l -c 'echo "hello" >> /var/log/cron.log 2>&1' and it saved hello to the log.
Any thoughts? Thank you.
I've been scratching my head over this for hours, and I'm getting kind of frustrated. I'm new to logstash, so I might be doing something wrong, but after a few hours of working on this, I can't figure out what. I configured both the agent and the server using the chef-logstash cookbook.
I have two systems set up, an agent and a server. The agent reads files, filters them, then ships them off to the Redis instance on the server. The server grabs incoming entries from Redis and indexes them in Elasticsearch (using embedded).
Here's my problem: with a simple config like the one below, I can enter input and everything ships off to the server just fine.
input { stdin { } }
output {
  redis {
    host => "192.168.33.11"
    data_type => "list"
    key => "logstash"
    codec => json
  }
  stdout { codec => rubydebug }
}
Everything gets picked up properly by the logstash running on my server (in Vagrant), the entries get indexed, and I can see them in Kibana.
The agent is another story. My agent starts with three config files: input_file_nginx.conf, output_stdout.conf, and output_redis.conf. I found that the logs weren't reaching the Redis on my server, so I tried to narrow it down. It was when I looked at the logs on my agent that I got really confused. As far as I could tell, nothing was being read. Either that, or my output_stdout.conf is messed up.
Here's my input_file_nginx.conf
input {
  file {
    path => "/home/silkstart/logs/*.log"
    type => "nginx"
  }
}
For reference, the two files in there are nginx.silkstart.80.access.log and nginx.silkstart.80.error.log, which both have 644 permissions, so they should be readable.
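One other thing worth checking with the file input: by default it tails files from the end and remembers its position in a sincedb, so lines that already existed before logstash started are not read. A variant of the input for testing could look like the sketch below (start_position and sincedb_path are real options of the file input; pointing sincedb_path at /dev/null is a common trick to forget positions between runs):

```
input {
  file {
    path => "/home/silkstart/logs/*.log"
    type => "nginx"
    # read each file from the top instead of tailing the end
    start_position => "beginning"
    # for testing only: do not persist read positions between runs
    sincedb_path => "/dev/null"
  }
}
```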
And my output_stdout.conf
output {
  stdout {
    codec => rubydebug
  }
}
These were all generated using logstash_config from some ERB templates.
My instance came almost verbatim from the agent.rb example
logstash_service name do
  action [:enable]
  method "runit"
end
Here's the resulting config
#!/bin/sh
cd //opt/logstash/agent
exec 2>&1
# Need to set LOGSTASH_HOME and HOME so sincedb will work
LOGSTASH_HOME="/opt/logstash/agent"
GC_OPTS=""
JAVA_OPTS="-server -Xms198M -Xmx596M -Djava.io.tmpdir=$LOGSTASH_HOME/tmp/ "
LOGSTASH_OPTS="agent -f $LOGSTASH_HOME/etc/conf.d"
LOGSTASH_OPTS="$LOGSTASH_OPTS --pluginpath $LOGSTASH_HOME/lib"
LOGSTASH_OPTS="$LOGSTASH_OPTS -vv"
LOGSTASH_OPTS="$LOGSTASH_OPTS -l $LOGSTASH_HOME/log/logstash.log"
export LOGSTASH_OPTS="$LOGSTASH_OPTS -w 1"
HOME=$LOGSTASH_HOME exec chpst -u logstash:logstash $LOGSTASH_HOME/bin/logstash $LOGSTASH_OPTS
This is fairly similar to my server config, which works
#!/bin/sh
ulimit -Hn 65550
ulimit -Sn 65550
cd //opt/logstash/server
exec 2>&1
# Need to set LOGSTASH_HOME and HOME so sincedb will work
LOGSTASH_HOME="/opt/logstash/server"
GC_OPTS=""
JAVA_OPTS="-server -Xms1024M -Xmx218M -Djava.io.tmpdir=$LOGSTASH_HOME/tmp/ "
LOGSTASH_OPTS="agent -f $LOGSTASH_HOME/etc/conf.d"
LOGSTASH_OPTS="$LOGSTASH_OPTS --pluginpath $LOGSTASH_HOME/lib"
LOGSTASH_OPTS="$LOGSTASH_OPTS -l $LOGSTASH_HOME/log/logstash.log"
export LOGSTASH_OPTS="$LOGSTASH_OPTS -w 1"
HOME=$LOGSTASH_HOME exec chpst -u logstash:logstash $LOGSTASH_HOME/bin/logstash $LOGSTASH_OPTS
The only difference I can see here is
ulimit -Hn 65550
ulimit -Sn 65550
but I don't see why that should stop the agent from working. Those lines increase the number of file descriptors, but the default of 4096 should be plenty.
When I make some requests to the server to make sure the log has new entries, and then check the runit logs, they only point me to /opt/logstash/agent/log/logstash.log, whose contents I have pasted at https://gist.github.com/jrstarke/384f192abdd93c0acf2a.
To really throw a wrench in things, if I sudo su logstash and run bin/logstash -f etc/conf.d from the command line, everything works as expected.
Any help would be greatly appreciated.
I managed to figure this out. For anyone else facing a similar issue: check the permissions on the files you're trying to access.
If you can only read those files through group permissions, you're likely facing the same issue I did.
Look closely at this line
exec chpst -u logstash:logstash
What this tells us is that we want to run the program as user logstash, with the group permissions of logstash. In my case, the group I wanted to use was a supplementary group. The docs for chpst note that
If group consists of a colon-separated list of group names, chpst sets the group ids of all listed groups.
So if I wanted to run the program as user1 with both group1 and group2, that command would become
exec chpst -u user1:group1:group2
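To see which supplementary groups a user actually has, and therefore which ones need to be listed after -u, the id command is a quick check:

```shell
# Show all group names for the current user: the first is the primary
# group, the rest are supplementary groups, which chpst drops unless
# they are listed explicitly after -u. On the real box you would pass
# the username, e.g. `id -Gn logstash`.
id -Gn
```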
I hope this helps anyone else that is running into the same issue I did.
I'm having a problem with keeping a JBoss server running. Here's the command I'm using to start it:
sudo /JBOSS_HOME/bin/run.sh conf -b servername.domainname.tld
JBoss starts okay after about 4 minutes or so, and when I run ps, it shows up as a process. However, if I happen to log out of SSH and run ps again, it has stopped. Is there a way to start the server so it doesn't automatically stop when a user logs out of SSH?
I think the problem here is that the process is still attached to your SSH session: when you log out, it loses its terminal and receives SIGHUP.
Redirect the output to a file and start the process in the background with nohup, like the following.
sudo nohup /JBOSS_HOME/bin/run.sh conf -b servername.domainname.tld > log_file 2>&1 &
This may help.