Auditd - auditctl rule to monitor a dir only (not all subdirs, files, etc.) [closed] - linux

I am trying to use auditd to monitor changes to a directory.
The problem is that when I set up a rule, it monitors not only the directory I specified but also all of its subdirectories and files, making the monitoring useless due to endless verbosity.
Here is the rule I set up:
auditctl -w /home/raven/public_html -p war -k raven-pubhtmlwatch
When I search the logs using
ausearch -k raven-pubhtmlwatch
I get thousands of lines of logs listing everything under public_html/.
How can I limit the rule to changes on the directory specified only?
Thank you very much.

A watch is really a syscall rule in disguise. If you place a watch on a
directory, auditctl will turn it into:
-a exit,always -F dir=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch
The -F dir field is recursive. However, if you just want to watch the directory
entries, you can change that to -F path.
-a exit,always -F path=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch
This is not recursive and just watches the inode that the directory occupies.
I had to add the rule manually in:
/etc/audit/audit.rules
then restart auditd using
/etc/init.d/auditd restart
Now the rules are added and it works great!
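Putting it all together (auditctl -l is the standard way to verify that the rule is loaded):
# append the non-recursive rule, restart auditd, then verify
echo '-a exit,always -F path=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch' >> /etc/audit/audit.rules
/etc/init.d/auditd restart
auditctl -l | grep raven-pubhtmlwatch
ausearch -k raven-pubhtmlwatch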
All credit goes to Steve @ Red Hat, who answered my question on the audit mailing list:
https://www.redhat.com/archives/linux-audit/2013-September/msg00057.html

Related

Error applying chroot to group (groupmod: group 'www' does not exist) [closed]

So I am trying to chroot all the users in group www to the directory /var/www. But every time I try to do that, it comes back saying the group doesn't exist (even though the group does exist).
[root@server var]# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
[root@server var]# groupadd -r www
[root@server var]# groupmod -R /var/www www
groupmod: group 'www' does not exist
[root@server var]# ls -la
drwxrwxrwx. 5 root www 46 Jul 12 06:44 www
As you can see, the error message is less than helpful. I have looked around on Stack Overflow but haven't come across an answer to this specific question yet.
Can anyone shed some light on what I am doing wrong?
That is not what groupmod -R does. The -R flag means the groupmod program will chroot into the given directory and then perform all of its work inside it. It's intended for when you have one system mounted inside another, such as when you've booted from a live USB drive to repair a broken system.
Once groupmod has run chroot, it looks in the /var/www/etc/group file to figure out what group ID www corresponds to, which of course fails because if your system is at all sanely set up you don't have a /var/www/etc/group file.
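For illustration, this is the situation -R is actually designed for (the device and mount point here are hypothetical):
# A broken system's root filesystem, mounted from a live environment
mount /dev/sdb1 /mnt/rescue
# Operates on /mnt/rescue/etc/group, not the live system's /etc/group
groupmod -R /mnt/rescue -n web www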
I do not know how to make sure all processes run by a specific user are confined to a chroot, and I don't think that's the right way to achieve your goal. If a program is chrooted into /var/www, it doesn't have access to any of the utilities it might expect, like the web server executable. Instead, I would look at the documentation of your web server and see if it supports this directly, or see if you can get a custom mount namespace using systemd.
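If you explore the systemd route, a minimal sketch might look like this (the unit name and ExecStart path are hypothetical; RootDirectory= and BindReadOnlyPaths= are real systemd directives, but exactly what you must bind in depends on the service):
# /etc/systemd/system/www-app.service
[Service]
User=www
RootDirectory=/var/www
# /var/www won't contain an interpreter or libraries, so bind them in read-only
BindReadOnlyPaths=/usr /lib /lib64
# Resolved inside the new root, i.e. /var/www/app/server on the host
ExecStart=/app/server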

Getting error from Supervisor: supervisorctl ERROR (no such process) [closed]

I've seen this question asked before, but none of the solutions have worked for me.
I'm having problems using Supervisor on my Raspberry Pi B+. Every time I try to start my process, I get an error saying:
pi@raspberrypi ~ $ sudo supervisorctl start server
server: ERROR (no such process)
I have my config file set up at /etc/supervisord.conf
[program:server]
directory=/home/pi/ledticker
command=/usr/bin/python NetworkServer.py
autostart=false
autorestart=true
stopsignal=QUIT
[supervisord]
logfile=/var/log/supervisor/supervisord.log ; (main log file;default $CWD/supervisord.log)
logfile_maxbytes=50MB ; (max main logfile bytes b4 rotation;default 50MB)
logfile_backups=10 ; (num of main logfile rotation backups;default 10)
loglevel=info ; (log level;default info; others: debug,warn,trace)
pidfile=/tmp/supervisord.pid ; (supervisord pidfile;default supervisord.pid)
nodaemon=false ; (start in foreground if true;default false)
minfds=1024 ; (min. avail startup file descriptors;default 1024)
minprocs=200 ; (min. avail process descriptors;default 200)
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
[unix_http_server]
file=/tmp/supervisor.sock ; (the path to the socket file)
I have tried doing the reread, update, reload commands but they haven't worked. Any ideas?
You should try to reload supervisord:
# supervisorctl reload
[y/N] ? y
In many cases, this error is resolved by that reload.
On my Fedora 22 system, I modified the lines below in /etc/supervisord.conf:
[include]
files = supervisord.d/*.ini
to
[include]
files = supervisord.d/*.conf
and then reloaded.
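After editing the [include] pattern, you can pick up new or changed program files without restarting everything; reread and update are standard supervisorctl subcommands:
sudo supervisorctl reread
sudo supervisorctl update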
I had faced the same problem before. It was resolved by the following steps.
First, edit your supervisord.conf file and add the lines below:
[unix_http_server]
file=/tmp/supervisor.sock
chmod=0777
Start the supervisord service first using the following command:
$ sudo /usr/bin/supervisord -c /etc/supervisord.conf
You can verify it is running using: ps -ef | grep python
After supervisord starts, try to start your program using the following command:
$ sudo /usr/bin/supervisorctl -c /etc/supervisord.conf start all
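You can then confirm the process state with the built-in status subcommand:
$ sudo /usr/bin/supervisorctl -c /etc/supervisord.conf status server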
In the case of a multi-instance process configuration, the full process name might look like server:server_0 (depending on your process_name template). Try:
sudo supervisorctl restart server:*
Otherwise you'll get the same (no such process) error.
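For context, names like server:server_0 come from a multi-instance configuration along these lines (a sketch; numprocs and process_name are standard Supervisor options, and the values here are illustrative):
[program:server]
command=/usr/bin/python NetworkServer.py
numprocs=2
process_name=%(program_name)s_%(process_num)d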
In some versions of Supervisor the [include] section does not work, so you need to add the programs to the main configuration file, /etc/supervisord.conf.

Linux unable to create core dump from application [closed]

I have two servers running a vendor application. On one server, if the app crashes it creates a core dump, but on the second it does not.
The servers were supposed to be set up identically, and I am trying to figure out why the application doesn't create a core dump on the second one. I've checked all the typical settings and have been doing research with no luck.
The strange part is that if I run kill -s SIGSEGV $$ as my app user, it generates a core dump in the same directory where the app is supposed to create its core dump. The vendor and our Linux group are both unsure at the moment, which is why I'm looking here for help.
$ cat /proc/sys/kernel/core_pattern
core
$ cat /proc/sys/kernel/core_uses_pid
1
$ ulimit -c
unlimited
$ cat /etc/security/limits.conf | grep core
* soft core unlimited
* hard core unlimited
$ cat /etc/profile | grep ulimit
ulimit -c unlimited > /dev/null 2>&1
$ cat /proc/sys/fs/suid_dumpable
0
$ cat /etc/sysconfig/init | grep CORE
DAEMON_COREFILE_LIMIT='unlimited'
There could be several other reasons why the coredump is not created. Check the list of possible reasons in core(5): http://linux.die.net/man/5/core
Check dmesg output.
Check the specific process corefile size limit in /proc/PID/limits.
Check if the process user can create a file of typical core dump size in the /proc/PID/cwd directory.
Specify an absolute file path in /proc/sys/kernel/core_pattern, pointing to a known writable location (see the sketch after this list).
Create a short program adhering to the coredump-accepting protocol, save it somewhere, and specify it in /proc/sys/kernel/core_pattern, according to core(5). Core dumps piped to programs are not subject to limits.
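A minimal sketch of the absolute-path suggestion above (/var/cores is an arbitrary writable location; %e and %p are standard core_pattern specifiers for the executable name and PID):
# Run as root
mkdir -p /var/cores
chmod 1777 /var/cores
echo '/var/cores/core.%e.%p' > /proc/sys/kernel/core_pattern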

Lost httpd.conf file located apache [closed]

How can I find where my httpd.conf file is located?
I am running an Ubuntu Linux server from the Amazon Web Services EC2 (Elastic Compute Cloud) and I can't find my Apache config.
Get the path of the running Apache binary:
$ ps -ef | grep apache
apache 12846 14590 0 Oct20 ? 00:00:00 /usr/sbin/apache2
Run that binary with the -V argument:
$ /usr/sbin/apache2 -V | grep SERVER_CONFIG_FILE
-D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"
Reference:
http://commanigy.com/blog/2011/6/8/finding-apache-configuration-file-httpd-conf-location
See http://wiki.apache.org/httpd/DistrosDefaultLayout for a discussion of where you might find Apache httpd configuration files on various platforms, since this can vary from release to release and platform to platform. The most common answer, however, is either /etc/apache/conf or /etc/httpd/conf.
Generically, you can determine the answer by running the command:
httpd -V
(That's a capital V.) Or, on systems where httpd is renamed, perhaps apache2ctl -V.
This will return various details about how httpd is built and configured, including the default location of the main configuration file.
One of the lines of output should look like:
-D SERVER_CONFIG_FILE="conf/httpd.conf"
which, combined with the line:
-D HTTPD_ROOT="/etc/httpd"
will give you the full path to the default location of the configuration file (here, /etc/httpd/conf/httpd.conf).
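To extract both values in one step, something like this works (a sketch; it assumes the binary is named httpd and is on your PATH):
httpd -V | awk -F'"' '/HTTPD_ROOT|SERVER_CONFIG_FILE/ { print $2 }'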

Real estate Linux backup solution [closed]

I did a lot of research, but I couldn't find exactly what I want. Does anyone have any knowledge regarding what a real estate company's backup strategy should look like? I mean, there are different backup types, such as full, incremental, and differential backups.
Which solution(s) should a real estate company use to back up its resources, and how frequently (daily, weekly, etc.)?
Assume that they have Linux servers.
Many thanks.
This belongs on Server Fault; however, you need to provide more details.
You should run incremental daily backups and a weekly full backup.
For MySQL databases, check: http://dev.mysql.com/doc/refman/5.1/en/backup-methods.html
For other files you can use rsync with hard-linked snapshots (via --link-dest, as in the script below).
Check this TLDP howto and this LJ article.
Consider using encryption on the backup drive: either full-disk encryption using dm-crypt, or, if you use tar/cpio, pipe it through openssl, e.g.:
tar -czf - path1 path2 | openssl enc -aes-128-cbc -salt > backup.$(date --iso).tgz.aes
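To restore, reverse the pipeline, matching the cipher options used when the archive was created (the filename placeholder stands for whatever date the command above produced):
openssl enc -d -aes-128-cbc < backup.YYYY-MM-DD.tgz.aes | tar -xzf -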
Example daily rsync backup script:
#!/bin/bash
# Keeps six rotating snapshots (backup.0 .. backup.5); unchanged files are
# hard-linked against the previous snapshot via --link-dest to save space.
BACKUP_DIR=/mnt/backups/
BACKUP_PATHS="/var /home"
cd ${BACKUP_DIR} || exit 1
# Drop the oldest snapshot and its log.
rm -rf backup.5 backup.5.log.bz2 &>/dev/null
# Shift snapshot $1 to $1+1, along with its compressed log.
recycle() {
    i=$1; y=$(($i+1))
    b=${2-backup}
    mv "${b}.$i" "${b}.$y" &>/dev/null
    mv "${b}.$i.log.bz2" "${b}.$y.log.bz2" &>/dev/null
}
recycle 4
recycle 3
recycle 2
recycle 1
recycle 0
OPTS="--numeric-ids --delete --delete-after --delete-excluded"
# Low-priority rsync into backup.0, logging (compressed) to backup.0.log.bz2.
nice -n20 ionice -c2 -n2 rsync -axlHh -v --link-dest=../backup.1 ${OPTS} ${BACKUP_PATHS} backup.0/ --exclude-from=/root/.rsync-exclude 2>&1 | bzip2 -9 > backup.0.log.bz2
cd /root &>/dev/null
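To schedule it, a cron entry along these lines works (the script path is hypothetical; adjust it to wherever you save the script):
# /etc/crontab: run the backup nightly at 02:30
30 2 * * * root /usr/local/sbin/daily-backup.sh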
