I'm trying to log all commands that users run on our servers and then ship them into one centralized logging server.
For that, I created an rsyslog logger that writes everything into one file (/var/log/commands.log) using the history command, and I use Filebeat to ship the logs to the log server. My problem is that a user can unset HISTFILE, which would stop the logging. I'm not too worried about someone doing echo "" > .bash_history, because I'm shipping the logs to another server immediately.
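For reference, the kind of per-server setup I mean looks roughly like this; the local6 facility, the file names and the PROMPT_COMMAND hook are illustrative, not necessarily my exact config:

# /etc/profile.d/command-log.sh (illustrative): push each command line to syslog
export PROMPT_COMMAND='RETRN_VAL=$?; logger -p local6.debug "$(whoami) [$$]: $(history 1 | sed "s/^[ ]*[0-9]\+[ ]*//") [$RETRN_VAL]"'

# /etc/rsyslog.d/commands.conf (illustrative): route that facility to one file
local6.*    /var/log/commands.log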
It doesn't need to be completely fool-proof, since everything can be outsmarted, but I would still like to improve it. Is it possible to create an audit watch for changes to HISTFILE? Or some sort of listener that, whenever a user unsets HISTFILE, immediately sets it back and alerts me? Should I create a daemon that re-sets HISTFILE every 5 seconds?
Huge thanks ahead!
Just as a side note, I know it's possible to log commands with auditd, but it also recorded the commands the system itself runs, which cluttered everything.
Solved.
I created the file /etc/profile.d/histfile.sh and put this in it:
HISTFILE="${HOME:-~}/.bash_history"
readonly HISTFILE
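For anyone copying this, here is a slightly fuller sketch of the same file; the extra lines (pinning the related history variables and flushing each command to the file as soon as it runs, so the shipper picks it up immediately) are my own additions, not part of the original fix:

# /etc/profile.d/histfile.sh -- extended sketch; test on a non-production box first
HISTFILE="$HOME/.bash_history"
HISTSIZE=10000
HISTFILESIZE=10000
shopt -s histappend
# Append each command to HISTFILE as soon as it runs, not only at logout.
PROMPT_COMMAND='history -a'
# Make the settings read-only so an interactive user cannot simply unset them.
readonly HISTFILE HISTSIZE HISTFILESIZE PROMPT_COMMAND

Making PROMPT_COMMAND read-only can clash with other profile scripts that try to modify it, and none of this stops a determined user from starting a different shell, so treat it as a deterrent rather than a guarantee.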
I am using rc.local to start my node script on start with:
node . > "/log_file_$(date +"%H:%M:%S_%m_%d_%Y").txt"
It works fine, but as the log grows in size I need to start a new log file on the server every 12/24 hours, without restarting the server.
Is there any simple way to change the node app output destination?
I would prefer not to use any library for this, because I need to capture all the messages, including errors and warnings, not only console.log output.
Thanks for your help.
There are a number of options; I'll offer two:
1. Stackdriver
Stream your logs to Stackdriver, which is part of Google Cloud, and don't store them on your server at all. In your Node.js application, you can set up Winston and use the Winston transport for Stackdriver. You can then analyze and query your logs there, and you don't need to worry about storage running out.
2. logrotate
If you want to deal with this manually, you can configure logrotate. It will gzip older logs so that they consume less disk space. This is a sort of older, "pre-cloud" way of doing things.
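A minimal sketch of such a logrotate rule, assuming the logs live somewhere like /var/log/node-app/ (a placeholder path); copytruncate matters here because the Node process keeps its log file descriptor open:

# /etc/logrotate.d/node-app (hypothetical)
/var/log/node-app/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}

If you go this route, redirect with >> (append) rather than > so that, after logrotate truncates the file, new writes land at the start of the file instead of at a stale offset.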
I'm trying to start Redis with a dump.rdb file on Linux, but I get a core error. However, when I start it on Windows with the same file, it runs perfectly. Also, if I try to start it on this Linux machine with a smaller file, it does start.
Could it be a memory problem?
Thanks!
Amparo
I had a similar problem before, and the reason was that the redis user did not have write permission on the directory holding dump.rdb. bgsave is the default way Redis backs data up to disk, so I ran config set stop-writes-on-bgsave-error no in redis-cli and the problem was fixed. What's more, you can also change the directory that dump.rdb lives in via redis.conf.
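A rough sketch of those checks, assuming a packaged install where the data directory is /var/lib/redis (verify the dir setting in your own redis.conf):

# Ask Redis which directory it writes dump.rdb to.
redis-cli config get dir

# Make sure the redis user can write there (the path is an assumption).
sudo chown -R redis:redis /var/lib/redis

# Stopgap: keep accepting writes even if a background save (bgsave) fails.
redis-cli config set stop-writes-on-bgsave-error no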
I have a server where Nimbus/Supervisor/Zookeeper are continuously running. I want to get an e-mail notification whenever any of them is not running, or if the server is down for any reason. What script should I write? I know the mail and cron part; I just need a hint on the Nimbus-checking part. A very lame way that I used is:
`ps -ef | grep Nimbus`
And I checked what it returns. But I believe that won't work when the server itself is down. I didn't check, because it is a running server and I don't want to mess with it. So, do I have to use some other application?
One possible way might be to use wait, though it only works for processes that are children of the calling shell, so it may or may not be possible for your app. A possible use might look like:
wait "$pid"
When wait returns, the process with that PID has exited.
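Alternatively, a rough cron-friendly sketch using pgrep and mail (the process patterns and address are assumptions, and since a script on the same box cannot report that box being down, you would run something like this from a separate monitoring host to cover that case):

#!/bin/bash
# check_storm.sh -- run from cron every few minutes (illustrative sketch)
MAILTO="admin@example.com"   # placeholder address

# nimbus/supervisor are the Storm daemons; QuorumPeerMain is ZooKeeper's main class.
for proc in nimbus supervisor QuorumPeerMain; do
    if ! pgrep -f "$proc" > /dev/null; then
        echo "$proc does not appear to be running on $(hostname)" \
            | mail -s "[ALERT] $proc down on $(hostname)" "$MAILTO"
    fi
done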
I have been trying this for days but still struggling.
The objective of the script is to perform real-time log monitoring on multiple servers (29 of them) and correlate login-failure records between servers. Each server's log is compressed at 23:59:59 every day, and a new log starts at midnight.
My idea was to run tail -f | grep "failed password" | tee centralized_log on every server, started by a loop over all the server names, running in the background, and have the login-failure records written to a centralized log. But it doesn't work, and it creates a lot of daemons that become zombies as soon as I terminate the script.
I am also considering running tail at an interval of a few minutes instead, but as the logs grow larger the processing time will increase. How do I keep a pointer to where the previous tail stopped?
So could you please suggest a better, working way to monitor and correlate multiple logs? Additional installations are not encouraged unless totally necessary.
If your logs are going through syslog, and you're using rsyslogd, then you can configure the syslog on each machine to forward the specific messages you're interested in to one (or two) centralized log servers, using a property match like:
:msg, contains, "failed password"
See the rsyslog documentation for more details about how to set up reliable syslog forwarding.
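For illustration (host name, port, and file paths are placeholders), the client side could be a drop-in like this, with a matching rule on the central server:

# /etc/rsyslog.d/10-failed-password.conf on each of the 29 servers
# @@ forwards over TCP (a single @ would be UDP). Note that sshd typically logs
# "Failed password" with a capital F, and "contains" is case-sensitive, so match
# the casing your logs actually use.
:msg, contains, "Failed password" @@loghost.example.com:514

# On the central server: accept TCP syslog and collect the matches in one file.
$ModLoad imtcp
$InputTCPServerRun 514
:msg, contains, "Failed password" /var/log/central/failed-password.log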
I have a nodejs application that I run like this, over SSH:
$ tmux
$ node server.js
This starts my node application in a tmux session.
Obviously, I don't have the SSH session open all the time.
What I've been finding is that occasionally my application can get into a state where it won't serve up any pages. This might be related to the application itself, or perhaps just to a poorly disconnected SSH session.
Either way, simply logging into SSH, running:
$ tmux attach
And giving focus to the pane makes everything responsive again.
I thought the entire point of node.js was that everything is non-blocking - so what's going on here?
When a pane is in copy mode, tmux does not read from its tty. If some program running “in” the tty continues to generate output, then the OS’s tty buffer will eventually fill and cause the writing process/thread to block. I do not know the internals of Node.js, but it may not expect writes to stdout/stderr to block: the console functions do not seem to take callbacks, so they may actually be blocking.
So, Node.js could very well end up blocked if the pane in which it was running was left in copy mode when your SSH connection was dropped.
If you need to ensure non-blocking logging, then you might want to redirect (or tee) your stdout and stderr to a file and use something like less to view the prior logs (avoiding tmux’s copy mode, since it might cause blocking).
Maybe something like this:
# Redirect stdout/stderr to a file, running Node.js in the background.
# Start a "less +F" on the log so that we immediately have a "tail" running.
node app.js >>app.log 2>&1 & less +F app.log
Or
# This pane will act as a 'tail -f', but do not use copy-mode here.
# Instead, run e.g. 'less app.log' in another pane to review prior logs.
node app.js 2>&1 | tee -a app.log
Or, if you are using a logging library, it might have something that you can use to automatically write to files.