logstash file input failing to read file - linux

I've been scratching my head over this for hours, and I'm getting kind of frustrated. I'm new to logstash, so I might be doing something wrong, but after a few hours working on this, I can't figure out what. I configured both agent and server using the chef-logstash cookbook.
I have two systems that I've set up, an agent and a server. The agent reads files, filters them, then ships them off to the redis instance on the server. The server grabs incoming entries from redis and indexes them in elasticsearch (using embedded).
Here's my problem: with a simple config like the one below I can type input on stdin and everything ships off to the server just fine.
input { stdin { } }
output {
  redis {
    host => "192.168.33.11"
    data_type => "list"
    key => "logstash"
    codec => json
  }
  stdout { codec => rubydebug }
}
Everything gets picked up properly by the Logstash instance running on my server (in Vagrant), the entries get indexed, and I can see them in Kibana.
The agent is another story. On my agent, I started with three config files: input_file_nginx.conf, output_stdout.conf, and output_redis.conf. I found that the logs weren't getting to the redis instance on my server, so I tried to narrow it down. When I looked at the logs on my agent I got really confused: as far as I could tell, nothing was being read. Either that, or my output_stdout.conf is messed up.
Here's my input_file_nginx.conf
input {
  file {
    path => "/home/silkstart/logs/*.log"
    type => "nginx"
  }
}
For reference, the two files in there are nginx.silkstart.80.access.log and nginx.silkstart.80.error.log, which both have 644 permissions, so should be readable.
And my output_stdout.conf
output {
  stdout {
    codec => rubydebug
  }
}
These were all generated using logstash_config from some ERB templates.
My instance came almost verbatim from the agent.rb example
logstash_service name do
  action [:enable]
  method "runit"
end
Here's the resulting config
#!/bin/sh
cd //opt/logstash/agent
exec 2>&1
# Need to set LOGSTASH_HOME and HOME so sincedb will work
LOGSTASH_HOME="/opt/logstash/agent"
GC_OPTS=""
JAVA_OPTS="-server -Xms198M -Xmx596M -Djava.io.tmpdir=$LOGSTASH_HOME/tmp/ "
LOGSTASH_OPTS="agent -f $LOGSTASH_HOME/etc/conf.d"
LOGSTASH_OPTS="$LOGSTASH_OPTS --pluginpath $LOGSTASH_HOME/lib"
LOGSTASH_OPTS="$LOGSTASH_OPTS -vv"
LOGSTASH_OPTS="$LOGSTASH_OPTS -l $LOGSTASH_HOME/log/logstash.log"
export LOGSTASH_OPTS="$LOGSTASH_OPTS -w 1"
HOME=$LOGSTASH_HOME exec chpst -u logstash:logstash $LOGSTASH_HOME/bin/logstash $LOGSTASH_OPTS
This is fairly similar to my server config, which works
#!/bin/sh
ulimit -Hn 65550
ulimit -Sn 65550
cd //opt/logstash/server
exec 2>&1
# Need to set LOGSTASH_HOME and HOME so sincedb will work
LOGSTASH_HOME="/opt/logstash/server"
GC_OPTS=""
JAVA_OPTS="-server -Xms1024M -Xmx218M -Djava.io.tmpdir=$LOGSTASH_HOME/tmp/ "
LOGSTASH_OPTS="agent -f $LOGSTASH_HOME/etc/conf.d"
LOGSTASH_OPTS="$LOGSTASH_OPTS --pluginpath $LOGSTASH_HOME/lib"
LOGSTASH_OPTS="$LOGSTASH_OPTS -l $LOGSTASH_HOME/log/logstash.log"
export LOGSTASH_OPTS="$LOGSTASH_OPTS -w 1"
HOME=$LOGSTASH_HOME exec chpst -u logstash:logstash $LOGSTASH_HOME/bin/logstash $LOGSTASH_OPTS
The only difference I can see here is
ulimit -Hn 65550
ulimit -Sn 65550
but I don't see why that would stop the agent from working. Those lines raise the number of file descriptors, but the default of 4096 should be plenty.
When I make some requests to the server to make sure the log has new entries and then check the runit logs, they only point me to /opt/logstash/agent/log/logstash.log, whose contents I have pasted at https://gist.github.com/jrstarke/384f192abdd93c0acf2a.
To really throw a wrench in things, if I sudo su logstash and run bin/logstash -f etc/conf.d from the command line, everything works as expected.
Any help would be greatly appreciated.

I managed to figure this out. For anyone else that's facing a similar issue, you will want to check your permissions on the files you're trying to access.
If you're accessing files that you have access to through group permissions, you're likely facing the same issue I did.
Look closely at this line
exec chpst -u logstash:logstash
What this tells us is that we want to run the program as user logstash, with the group permissions of group logstash. In my case, the group that I wanted to use was an additional group. The docs for chpst note that
If group consists of a colon-separated list of group names, chpst sets the group ids of all listed groups.
So if I wanted to run the program as user1 with both group1 and group2, that command would become
exec chpst -u user1:group1:group2
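Applied to the runit run script above, the exec line would become something like the following, where the extra group (adm here) is only a placeholder for whatever group actually owns your log files:
# 'adm' is just an example supplementary group; use whichever group owns your logs
HOME=$LOGSTASH_HOME exec chpst -u logstash:logstash:adm $LOGSTASH_HOME/bin/logstash $LOGSTASH_OPTS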
I hope this helps anyone else that is running into the same issue I did.

How to control host from docker container?
For example, how do you execute a bash script that has been copied to the host?
This answer is just a more detailed version of Bradford Medeiros's solution, which for me as well turned out to be the best answer, so credit goes to him.
In his answer, he explains WHAT to do (named pipes) but not exactly HOW to do it.
I have to admit I didn't know what named pipes were when I read his solution. So I struggled to implement it (while it's actually very simple), but I did succeed.
So the point of my answer is just detailing the commands you need to run in order to get it working, but again, credit goes to him.
PART 1 - Testing the named pipe concept without docker
On the main host, choose the folder where you want to put your named pipe file, for instance /path/to/pipe/, and a pipe name, for instance mypipe, and then run:
mkfifo /path/to/pipe/mypipe
The pipe is created.
Type
ls -l /path/to/pipe/mypipe
And check that the access rights start with "p", such as
prw-r--r-- 1 root root 0 mypipe
Now run:
tail -f /path/to/pipe/mypipe
The terminal is now waiting for data to be sent into this pipe
Now open another terminal window.
And then run:
echo "hello world" > /path/to/pipe/mypipe
Check the first terminal (the one with tail -f), it should display "hello world"
PART 2 - Run commands through the pipe
On the main host, instead of running tail -f, which just outputs whatever is sent as input, run this command, which will execute whatever is sent as commands:
eval "$(cat /path/to/pipe/mypipe)"
Then, from the other terminal, try running:
echo "ls -l" > /path/to/pipe/mypipe
Go back to the first terminal and you should see the result of the ls -l command.
PART 3 - Make it listen forever
You may have noticed that in the previous part, right after ls -l output is displayed, it stops listening for commands.
Instead of eval "$(cat /path/to/pipe/mypipe)", run:
while true; do eval "$(cat /path/to/pipe/mypipe)"; done
(you can nohup that)
Now you can send an unlimited number of commands one after the other; they will all be executed, not just the first one.
PART 4 - Make it work even when reboot happens
The only caveat is that if the host has to reboot, the "while" loop will stop working.
To handle reboots, here is what I've done:
Put the while true; do eval "$(cat /path/to/pipe/mypipe)"; done loop in a file called execpipe.sh with a #!/bin/bash header (see the sketch just below)
Don't forget to chmod +x it
Add it to crontab by running
crontab -e
And then adding
@reboot /path/to/execpipe.sh
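For clarity, execpipe.sh is nothing more than the loop from PART 3 wrapped in a script, for example:
#!/bin/bash
# keep evaluating whatever gets written into the pipe
while true; do eval "$(cat /path/to/pipe/mypipe)"; done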
At this point, test it: reboot your server, and when it's back up, echo some commands into the pipe and check if they are executed.
Of course, you aren't able to see the output of commands, so ls -l won't help, but touch somefile will help.
Another option is to modify the script to put the output in a file, such as:
while true; do eval "$(cat /path/to/pipe/mypipe)" &> /somepath/output.txt; done
Now you can run ls -l and the output (both stdout and stderr using &> in bash) should be in output.txt.
PART 5 - Make it work with docker
If you are using both docker compose and dockerfile like I do, here is what I've done:
Let's assume you want to mount the mypipe's parent folder as /hostpipe in your container
Add this:
VOLUME /hostpipe
in your dockerfile in order to create a mount point
Then add this:
volumes:
- /path/to/pipe:/hostpipe
in your docker compose file in order to mount /path/to/pipe as /hostpipe
Restart your docker containers.
PART 6 - Testing
Exec into your docker container:
docker exec -it <container> bash
Go into the mount folder and check you can see the pipe:
cd /hostpipe && ls -l
Now try running a command from within the container:
echo "touch this_file_was_created_on_main_host_from_a_container.txt" > /hostpipe/mypipe
And it should work!
WARNING: If you have an OSX (Mac OS) host and a Linux container, it won't work (explanation here https://stackoverflow.com/a/43474708/10018801 and issue here https://github.com/docker/for-mac/issues/483 ), because the pipe implementation is not the same: what you write into the pipe from Linux can be read only by Linux, and what you write into the pipe from Mac OS can be read only by Mac OS (this sentence might not be very accurate, but just be aware that a cross-platform issue exists).
For instance, when I run my docker setup in DEV from my Mac OS computer, the named pipe as explained above does not work. But in staging and production, I have Linux host and Linux containers, and it works perfectly.
PART 7 - Example from Node.JS container
Here is how I send a command from my Node.JS container to the main host and retrieve the output:
const fs = require("fs") // needed for the write stream and the file checks below

const pipePath = "/hostpipe/mypipe"
const outputPath = "/hostpipe/output.txt"
const commandToRun = "pwd && ls -l"

console.log("delete previous output")
if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath)

console.log("writing to pipe...")
const wstream = fs.createWriteStream(pipePath)
wstream.write(commandToRun)
wstream.close()

console.log("waiting for output.txt...") // there are better ways to do that than setInterval
let timeout = 10000 // stop waiting after 10 seconds (something might be wrong)
const timeoutStart = Date.now()
const myLoop = setInterval(function () {
    if (Date.now() - timeoutStart > timeout) {
        clearInterval(myLoop);
        console.log("timed out")
    } else {
        // if output.txt exists, read it
        if (fs.existsSync(outputPath)) {
            clearInterval(myLoop);
            const data = fs.readFileSync(outputPath).toString()
            if (fs.existsSync(outputPath)) fs.unlinkSync(outputPath) // delete the output file
            console.log(data) // log the output of the command
        }
    }
}, 300);
Use a named pipe.
On the host OS, create a script to loop and read commands, and then you call eval on that.
Have the docker container write to that named pipe.
To be able to access the pipe, you need to mount it via a volume.
This is similar to the SSH mechanism (or a similar socket-based method), but restricts you properly to the host device, which is probably better. Plus you don't have to be passing around authentication information.
My only warning is to be cautious about why you are doing this. It's totally something to do if you want to create a method to self-upgrade with user input or whatever, but you probably don't want to call a command to get some config data, as the proper way would be to pass that in as args/volume into docker. Also, be cautious about the fact that you are evaling, so just give the permission model a thought.
Some of the other answers, such as running a script under a volume, won't work generically, since they won't have access to the full system resources, but they might be more appropriate depending on your usage.
The solution I use is to connect to the host over SSH and execute the command like this:
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
UPDATE
As this answer keeps getting upvotes, I would like to remind (and highly recommend) that the account being used to invoke the script should be an account with no permissions at all, only allowed to execute that script via sudo (which can be done from the sudoers file).
UPDATE: Named Pipes
The solution I suggested above was just the one I used while I was relatively new to Docker. Now, in 2021, take a look at the answers that talk about named pipes. That seems to be a better solution.
However, nobody there mentioned anything about security. The script that evaluates the commands sent through the pipe (the script that calls eval) must not actually eval the whole pipe output; it should handle specific cases and call the required commands according to the text sent, otherwise any command at all could be sent through the pipe and executed on the host.
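As a rough illustration of what I mean, the listener could whitelist a handful of actions instead of eval'ing the raw pipe output. This is only a sketch; the action names and the commands behind them are made up:
#!/bin/bash
# only whitelisted actions are executed; everything else is ignored
while true; do
    read -r action arg < /path/to/pipe/mypipe
    case "$action" in
        restart-web) systemctl restart nginx ;;
        make-file)   touch "/tmp/$(basename "$arg")" ;;
        *)           echo "ignored: $action" >&2 ;;
    esac
done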
That REALLY depends on what you need that bash script to do!
For example, if the bash script just echoes some output, you could just do
docker run --rm -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
Another possibility is that you want the bash script to install some software (say, a script to install docker-compose). You could do something like
docker run --rm -v /usr/bin:/usr/bin --privileged -v $(pwd)/mybashscript.sh:/mybashscript.sh ubuntu bash /mybashscript.sh
But at this point you're really getting into having to know intimately what the script is doing to allow the specific permissions it needs on your host from inside the container.
My laziness led me to find the easiest solution that wasn't published as an answer here.
It is based on the great article by luc juggery.
All you need to do in order to gain a full shell to your linux host from within your docker container is:
docker run --privileged --pid=host -it alpine:3.8 \
nsenter -t 1 -m -u -n -i sh
Explanation:
--privileged: grants additional permissions to the container; it allows the container to gain access to the devices of the host (/dev)
--pid=host: allows the container to use the process tree of the Docker host (the VM in which the Docker daemon is running)
nsenter utility: allows running a process in existing namespaces (the building blocks that provide isolation to containers)
nsenter (-t 1 -m -u -n -i sh) runs the process sh in the same isolation context as the process with PID 1.
The whole command will then provide an interactive sh shell in the VM.
This setup has major security implications and should be used with caution (if at all).
Write a simple python server listening on a port (say 8080), bind the port to the container with -p 8080:8080, make an HTTP request to localhost:8080 to ask the python server to run shell scripts with popen, then run curl or write code to make the HTTP request: curl -d '{"foo":"bar"}' localhost:8080
#!/usr/bin/python
from BaseHTTPServer import BaseHTTPRequestHandler,HTTPServer
import subprocess
import json

PORT_NUMBER = 8080

# This class will handle any incoming request from
# the browser
class myHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        content_len = int(self.headers.getheader('content-length'))
        post_body = self.rfile.read(content_len)
        self.send_response(200)
        self.end_headers()
        data = json.loads(post_body)
        # Use the post data
        cmd = "your shell cmd"
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
        p_status = p.wait()
        (output, err) = p.communicate()
        print "Command output : ", output
        print "Command exit status/return code : ", p_status
        self.wfile.write(cmd + "\n")
        return

try:
    # Create a web server and define the handler to manage the
    # incoming request
    server = HTTPServer(('', PORT_NUMBER), myHandler)
    print 'Started httpserver on port ' , PORT_NUMBER
    # Wait forever for incoming http requests
    server.serve_forever()
except KeyboardInterrupt:
    print '^C received, shutting down the web server'
    server.socket.close()
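To exercise it from the host side (assuming the container publishes the port with -p 8080:8080 as described above):
curl -d '{"foo":"bar"}' localhost:8080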
If you are not worried about security and you're simply looking to start a docker container on the host from within another docker container like the OP, you can share the docker server running on the host with the docker container by sharing its listening socket.
Please see https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface and see if your personal risk tolerance allows this for this particular application.
You can do this by adding the following volume args to your start command
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
or by sharing /var/run/docker.sock within your docker compose file like this:
version: '3'
services:
  ci:
    command: ...
    image: ...
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
When you run the docker start command within your docker container, the docker server running on your host will see the request and provision the sibling container.
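For example, from inside a container that has the docker CLI installed and the socket mounted as above, a command like this would make the host daemon start a sibling container (the image and echoed message are just an illustration):
# run from inside the container; the host's docker daemon does the actual work
docker run --rm alpine echo "hello from a sibling container on the host"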
credit: http://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/
As Marcus reminds us, docker is basically process isolation. Starting with docker 1.8, you can copy files both ways between the host and the container; see the documentation for docker cp:
https://docs.docker.com/reference/commandline/cp/
Once a file is copied, you can run it locally
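For example (the container name and paths here are placeholders):
# copy a script out of the running container, then execute it on the host
docker cp mycontainer:/tmp/myscript.sh /tmp/myscript.sh
bash /tmp/myscript.sh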
docker run --detach-keys="ctrl-p" -it -v /:/mnt/rootdir --name testing busybox
# chroot /mnt/rootdir
#
I have a simple approach.
Step 1: Mount /var/run/docker.sock:/var/run/docker.sock (So you will be able to execute docker commands inside your container)
Step 2: Execute the command below inside your container. The key part here is --network host, as this will execute from the host's context
docker run -i --rm --network host -v /opt/test.sh:/test.sh alpine:3.7 sh /test.sh
test.sh should contain some commands (ifconfig, netstat, etc.), whatever you need.
Now you will be able to get host context output.
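For instance, a trivial test.sh could look like this (the commands are just examples):
#!/bin/sh
# commands whose output you want from the host's network context
ifconfig
netstat -tln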
You can use the pipe concept, but use a file on the host and fswatch to accomplish the goal of executing a script on the host machine from a docker container. Like so (use at your own risk):
#! /bin/bash
touch .command_pipe
chmod +x .command_pipe
# Use fswatch to execute a command on the host machine and log result
fswatch -o --event Updated .command_pipe | \
xargs -n1 -I "{}" .command_pipe >> .command_pipe_log &
docker run -it --rm \
--name alpine \
-w /home/test \
-v $PWD/.command_pipe:/dev/command_pipe \
alpine:3.7 sh
rm -rf .command_pipe
kill %1
In this example, inside the container send commands to /dev/command_pipe, like so:
/home/test # echo 'docker network create test2.network.com' > /dev/command_pipe
On the host, you can check if the network was created:
$ docker network ls | grep test2
8e029ec83afe test2.network.com bridge local
In my scenario I just SSH into the host (via the host IP) from within the container, and then I can do anything I want on the host machine.
I found the answers using named pipes awesome, but I was wondering if there is a way to get the output of the executed command.
The solution is to create two named pipes:
mkfifo /path/to/pipe/exec_in
mkfifo /path/to/pipe/exec_out
Then, the solution using a loop, as suggested by #Vincent, would become:
# on the host
while true; do eval "$(cat exec_in)" > exec_out; done
And then on the docker container, we can execute the command and get the output using:
# on the container
echo "ls -l" > /path/to/pipe/exec_in
cat /path/to/pipe/exec_out
If anyone is interested: my need was to use a failover IP on the host from the container, so I created this simple ruby method:
def fifo_exec(cmd)
  exec_in = '/path/to/pipe/exec_in'
  exec_out = '/path/to/pipe/exec_out'
  %x[ echo #{cmd} > #{exec_in} ]
  %x[ cat #{exec_out} ]
end
# example
fifo_exec "curl https://ip4.seeip.org"
Depending on the situation, this could be a helpful resource.
This uses a job queue (Celery) that can be run on the host; commands/data can be passed to it through Redis (or RabbitMQ). In the linked example, this is occurring in a django application (which is commonly dockerized).
https://www.codingforentrepreneurs.com/blog/celery-redis-django/
To expand on user2915097's response:
The idea of isolation is to be able to restrict what an application/process/container (whatever your angle at this is) can do to the host system very clearly. Hence, being able to copy and execute a file would really break the whole concept.
Yes. But it's sometimes necessary.
No. That's not the case, or Docker is not the right thing to use. What you should do is declare a clear interface for what you want to do (e.g. updating a host config), and write a minimal client/server to do exactly that and nothing more. Generally, however, this doesn't seem to be very desirable. In many cases, you should simply rethink your approach and eradicate that need. Docker came into existence when basically everything was a service that was reachable using some protocol. I can't think of any proper use case of a Docker container getting the rights to execute arbitrary stuff on the host.

How to log the live output of a running process

I want to run a game server inside my Ubuntu machine. I want to run it in the background and write the live output of that process inside a log file. I tried using nohup and running the game server using "&" at the end but I couldn't make it work the way I wanted.
Then I started reading about named pipes and actually gave it a go. I made a simple script that in theory should work. But, of course I am missing something.
First, I made a pipe using the mkfifo command.
mkfifo testpipe
Then I created a small script:
#!/bin/bash
./mta-server64 > pipe &
pid=$!
echo $pid // so I know the pid of the process
cat < pipe > log.txt &
(Note: I wrote this code from memory.)
The code works only when there is an error and the process stops. It actually records the game console error. But when the game server is running I get no output in the log file.
I want to read the output (stdout and stderr, if I am not mistaken) of a process running in the background and record it inside a log file.
I also thought about using screen as it logs everything inside a file but I would prefer not using it if there is a better solution.
EDIT:
First of all: thank you for the interest you had in helping me. In the same way, I have to apologize for only giving scarce details about what I intend to do with this small project and for my limited understanding of stdout and stderr.
Let's start from the beginning.
I want to run a game server named Multi Theft Auto (https://multitheftauto.com/). This is GTA San Andreas but multiplayer.
I can easily run this game server on my Ubuntu server by calling the executable ./mta-server64. After calling it, the game server console appears:
[|] MTA: San Andreas :: 0/32 players :: 196 resources :: 125 fps (25)
MTA:BLUE Server for MTA:SA
==================================================================
= Multi Theft Auto: San Andreas v1.5.6 [64 bit]
==================================================================
= Server name : Default MTA Server
= Server IP address: auto
= Server port : 22884
=
= Log file : /root/mta/mods/deathmatch/logs/server.log
= Maximum players : 32
= HTTP port : 22564
= Voice Chat : Disabled
= Bandwidth saving : Medium
==================================================================
[09:49:07] Resource 'mapmanager' requests some acl rights. Use the command 'aclrequest list mapmanager'
[09:49:07] Resources: 196 loaded, 0 failed
[09:49:07] Starting resources...
[09:49:07] Server minclientversion is now 1.5.6-9.16588.0
[09:49:07] INFO: MAPMANAGER: Some important ACL permissions are missing. To ensure the correct functioning of Mapmanager, please write: aclrequest allow mapmanager all
[09:49:07] Gamemode 'play' started.
[09:49:07] Authorized serial account protection is enabled for the ACL group(s): `Admin` See http://mtasa.com/authserial
[09:49:07] WARNING: <owner_email_address> not set
[09:49:07] Server started and is ready to accept connections!
[09:49:07] To stop the server, type 'shutdown' or press Ctrl-C
[09:49:07] Type 'help' for a list of commands.
[09:49:07] Querying MTA master server... success! (Auto detected IP:xxx.xxx.xxx.xxx)
I am using the following script to run the process in the background and (try to) get the live output from it:
#!/bin/bash
newport=$(shuf -i 22003-22900 -n 1)
newip=$(shuf -i 22003-22900 -n 1)
rm -rf ~/server/*
cp -r /home/user*/ftp/server/mtaserver/serverfiles/* ~/server
sed -i "s/<httpport>[0-9][0-9][0-9][0-9][0-9]<\/httpport>/<httpport>$newport<\/httpport>/g" ~/server/mods/deathmatch/mtaserver.conf
sed -i "s/<serverport>[0-9][0-9][0-9][0-9][0-9]<\/serverport>/<serverport>$newip<\/serverport>/g" ~/server/mods/deathmatch/mtaserver.conf
~/server/mta-server64 2>&1 | tee -a outfile &
mta_pid=$!
echo $mta_pid
sleep 6
pkill $mta_pid
(Note: Because of some technical problems I had to add the first few lines of script which automatically replace the game files with new ones and also replace the existing ports with random ones.)
This script starts the server and tries to log the output of the process. The process is automatically killed after few seconds so there is only one instance of the game server at any given time.
THE ISSUE:
This script only logs the output if there is an error. I still cannot get the live output of the process while it is running. Maybe this is an issue with the game server, but I truly believe there should be a way to make it work the way I intend.
I believe you want to use the tee command to split the pipe output to a log file.
I suggest you read this article and these answers 1 2.
Usually this is enough: nohup somecommand > somecommand.log 2>&1 &, then tail -F somecommand.log to follow the logs.
After 2 days I finally figured out a way to make it work (the way I intended it to work, without taking into consideration any major security/performance risks).
Reading the comments made me realize I was attacking the wrong point. The stdout of the game server is buffered, making it impossible to log it into a log file using the methods I tried when I posted my question (at least, this is what I came to understand).
I did some research on how to run the application without having the stdout buffered: https://serverfault.com/questions/294218/is-there-a-way-to-redirect-output-to-a-file-without-buffering-on-unix-linux
My code now:
stdbuf -o0 ~/server/mta-server64 >> pipe &
cat < pipe | tee -a outfile &
After creating the named pipe, the first command runs the game server with its stdout redirected (unbuffered) into that pipe, and the second command reads the pipe and appends the output to the log file.
The stdbuf -o0 command disables stdout buffering (as noted in the link above).
This works for me and I cannot guarantee it will work for anybody else. I am still not aware if disabling the buffering is a safe approach to my issue but for now it is what I need.
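Putting the pieces together, the whole sequence ends up looking like this (same pipe and file names as in the snippet above, with the mkfifo step included for completeness):
mkfifo pipe
stdbuf -o0 ~/server/mta-server64 >> pipe &
cat < pipe | tee -a outfile &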

Logstash not working when run as service

I am trying to run logstash on my Debian machine. The config file is simple for testing purposes:
input {
  stdin {}
}
output {
  file {
    path => "/tmp/test_logstash"
  }
}
When I run the command sudo /etc/init.d/logstash start I get the output logstash started.
Now I type some sample input in my command line such as ls -lah, which should be written to /tmp/test_logstash as configured in the config file.
But nothing is written, and when I ask for the status of logstash I get the output logstash is not running.
All log files in /var/log/logstash are empty files.
When I run /opt/logstash/bin/logstash -f /etc/logstash/conf.d everything works fine, but I need to run it as a service in the background.
I am new to using logstash and maybe it's something very easy to solve but I couldn't find any solution yet.
It would be great if someone has a solution for this.
EDIT:
The background is that I want to install and start logstash in an ansible playbook. With /opt/logstash/bin/logstash -f /path/to/config the playbook hangs, as it is waiting for the command to finish (which will not happen, because you would then have to quit logstash with Ctrl+D). Maybe there is an easier solution for that.
EDIT 2:
The owner of the /opt/logstash directory is the user logstash with group logstash. The init.d startup script for logstash is simply:
#!/bin/bash
/opt/logstash/bin/logstash -f /etc/logstash/conf.d
Thanks in advance.
Just use this command:
/opt/logstash/bin/logstash -f /etc/logstash/conf.d &
This will start the process in background.
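If the shell that launches it goes away immediately (as it does in an ansible task), it may be safer to detach it completely with nohup; for example (the log path here is just a suggestion):
nohup /opt/logstash/bin/logstash -f /etc/logstash/conf.d > /tmp/logstash-stdout.log 2>&1 &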
cat /var/log/logstash/logstash-plain.log
In my case, I ran this:
chmod -R 777 /var/lib/logstash

Cron / wget jobs intermittently not running - not getting into access log

I've a number of accounts running cron-started php jobs hourly.
The generic structure of the command is this:
wget -q -O - http://some.site.com/cron.php
Now, this used to be running just fine.
Lately, though, on a number of accounts it has started playing up - but only on this one server. Once or twice a day the php file is not run.
The access log is missing the relevant entry, while the cron log shows that the job was run.
We've added a bit to the command to log things out (-o /tmp/logfile) but it shows nothing.
I'm at a loss, really. I'm looking for ideas what can be wrong, or how to sidestep this issue as it has started taking up way too much of my time.
Has anyone seen anything remotely like this?
Thanks in advance!
Try this command
wget -d -a /tmp/logfile -O - http://some.site.com/cron.php
With -q you turn off wget's output. With -d you turn on debug output (maybe -v for verbose output is already enough). With -a you append logging messages to /tmp/logfile instead of always creating a new file.
You can also use curl:
curl http://some.site.com/cron.php
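If you go the curl route, you may want to keep it quiet but still log failures somewhere, for example:
curl -sS -o /dev/null http://some.site.com/cron.php >> /tmp/logfile 2>&1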

DSH (dancer's shell) hangs when getting tail -f logs

I'm using dancer's shell dsh (http://www.netfort.gr.jp/~dancer/software/dsh.html.en) to send a tail -f command to 6 machines. I was hoping to use this to view a merged log from a service which resides in the same directory on each of these machines. The machines are all running RHEL 4. (Not my choice.)
What actually happens, is that I retrieve 4-20 lines from each log and then it just hangs.
Here are my options:
dsh -c -M -r ssh -g services -- /usr/bin/tail -f /var/myservice/my.log
"services" refers to a group of 6 servers.
I've tried several different ssh options in the dsh.conf file, including -n, -t, and -f, but it doesn't seem to make a difference. Also, screen is not installed on the target servers.
What's wrong with my command? How can I make it act like a proper tail -f?
It turns out chepner's comment is right. Those logs just don't create much output. I tried the identical command with a set of more active web applications and it works fine.
I know that command as "distributed shell", but no matter.
I'm suspicious the double-dash in the middle of your command string is asking for it to accept stdin input, which indeed would make it appear to hang. Try it without the "--"
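That is, something like this:
dsh -c -M -r ssh -g services /usr/bin/tail -f /var/myservice/my.log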
