Lost httpd.conf file location in Apache [closed] - linux

How can I find where my httpd.conf file is located?
I am running an Ubuntu Linux server from the Amazon Web Services EC2 (Elastic Compute Cloud) and I can't find my Apache config.

Get the path of the running Apache binary:
$ ps -ef | grep apache
apache 12846 14590 0 Oct20 ? 00:00:00 /usr/sbin/apache2
Append the -V argument to that path:
$ /usr/sbin/apache2 -V | grep SERVER_CONFIG_FILE
-D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"
Reference:
http://commanigy.com/blog/2011/6/8/finding-apache-configuration-file-httpd-conf-location
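If the config-file path printed is relative rather than absolute, grab HTTPD_ROOT in the same pass. A hedged variation on the command above (stderr silenced in case the binary complains about unset environment variables when run outside its init script; output shown is typical for Ubuntu's apache2):
$ /usr/sbin/apache2 -V 2>/dev/null | grep -E 'HTTPD_ROOT|SERVER_CONFIG_FILE'
-D HTTPD_ROOT="/etc/apache2"
-D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"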

See http://wiki.apache.org/httpd/DistrosDefaultLayout for a discussion of where you might find Apache httpd configuration files on various platforms, since this can vary from release to release and from platform to platform. The most common locations, however, are /etc/apache/conf and /etc/httpd/conf.
Generically, you can determine the answer by running the command:
httpd -V
(That's a capital V.) Or, on systems where httpd is renamed, perhaps apache2ctl -V.
This will return various details about how httpd is built and configured, including the default location of the main configuration file.
One of the lines of output should look like:
-D SERVER_CONFIG_FILE="conf/httpd.conf"
which, combined with the line:
-D HTTPD_ROOT="/etc/httpd"
will give you the full path to the default location of the configuration file.
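A hedged shell sketch that stitches those two values together (it assumes the binary is named httpd; substitute apache2 where it has been renamed):
$ ROOT=$(httpd -V | sed -n 's/.*HTTPD_ROOT="\(.*\)"/\1/p')
$ CONF=$(httpd -V | sed -n 's/.*SERVER_CONFIG_FILE="\(.*\)"/\1/p')
$ case "$CONF" in /*) echo "$CONF" ;; *) echo "$ROOT/$CONF" ;; esac
/etc/httpd/conf/httpd.conf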

Related

Error applying chroot to group (groupmod: group 'www' does not exist) [closed]

So I am trying to chroot all the users who are in group www to the directory /var/www. But every time I try to do that, it comes back saying the group doesn't exist (even though the group does exist).
[root@server var]# cat /etc/fedora-release
Fedora release 26 (Twenty Six)
[root@server var]# groupadd -r www
[root@server var]# groupmod -R /var/www www
groupmod: group 'www' does not exist
[root@server var]# ls -la
drwxrwxrwx. 5 root www 46 Jul 12 06:44 www
As you can see, the error message is less than helpful. I have looked around on Stack Overflow but haven't come across an answer to this specific question yet.
Can anyone shed some light on what I am doing wrong?
That is not what groupmod -R does. The -R flag means that the groupmod program itself will chroot into the given directory and then do all of its work there. It's intended for when you have one system mounted inside another, such as when you've booted from a live USB drive to make changes to a broken system.
Once groupmod has run chroot, it looks in /var/www/etc/group to figure out what group ID www corresponds to, which of course fails, because if your system is at all sanely set up you don't have a /var/www/etc/group file.
I do not know how to make sure all processes by a specific user run in a chroot, and I don’t think that’s the right way to achieve your goal. If a program is chrooted into /var/www, it doesn’t have access to any of the utilities it might expect, like the web server executable. Instead, I would look at the documentation of your web server and see if it supports this directly, or see if you can get a custom mount namespace using systemd.
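A hedged sketch of what -R is actually meant for (the /mnt/sysimage mount point is hypothetical):
$ mount /dev/sda1 /mnt/sysimage              # another system's root filesystem
$ ls /mnt/sysimage/etc/group                 # -R needs this file inside the target tree
$ groupmod -R /mnt/sysimage -n www-data www  # renames 'www' on the mounted system, not the running one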

Auditd - auditctl rule to monitor a dir only (not all subdirs, files, etc.) [closed]

I am trying to use auditd to monitor changes to a directory.
The problem is that when I set up a rule, it monitors the directory I specified but also all of its subdirectories and files, making the monitoring useless due to endless verbosity.
Here is the rule I set up:
auditctl -w /home/raven/public_html -p war -k raven-pubhtmlwatch
When I search the logs using
ausearch -k raven-pubhtmlwatch
I get thousands of lines of logs that list everything under public_html/
How can I limit the rule to changes on the directory specified only?
Thank you very much.
A watch is really a syscall rule in disguise. If you place a watch on a directory, auditctl will turn it into:
-a exit,always -F dir=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch
The -F dir field is recursive. However, if you just want to watch the directory entries, you can change that to -F path.
-a exit,always -F path=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch
This is not recursive and just watches the inode that the directory occupies.
I had to add the rule manually in:
/etc/audit/audit.rules
and then restart auditd using:
/etc/init.d/auditd restart
Now the rules are added and it works great!
All credit goes to Steve @ redhat, who answered my question on the audit mailing list:
https://www.redhat.com/archives/linux-audit/2013-September/msg00057.html
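Putting it together, a hedged sketch of the whole non-recursive workflow (rule and key taken from the question; the init-script path matches the one used above):
$ echo '-a exit,always -F path=/home/raven/public_html -F perm=war -F key=raven-pubhtmlwatch' >> /etc/audit/audit.rules
$ /etc/init.d/auditd restart
$ auditctl -l                      # confirm the rule is loaded
$ ausearch -k raven-pubhtmlwatch   # hits now cover only the directory inode itself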

Find out how many SSH connections currently exist [closed]

I'm using a simple shell script on my Linux server which checks if an rsync job is running or if any client is accessing some directories on the server via Samba. If so, nothing happens; but if there are no jobs and Samba isn't in use, then the server goes into hibernation.
Is there any simple command which I can use to check if an SSH connection to the server exists? I want to add this to my shell script so that the server doesn't hibernate if such a connection exists.
Scan the process list for sshd: entries. Established connections look something like this: sshd: <username>…
ps -A x | grep [s]shd
should work for you.
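A hedged sketch of how that check could slot into the hibernation script (the hibernate command is an assumption; substitute whatever your server uses):
# skip hibernation while any per-connection sshd process exists
if ps -A x -o args | grep -q '^sshd: '; then
    echo "active SSH connection; staying awake"
else
    pm-hibernate   # assumption: pm-utils is installed; systemd machines would use systemctl hibernate
fi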
Use the who command.
It gives output like:
username pts/1 2013-06-19 19:51 (ip)
You could parse that to see how many non-local users are logged in and get their usernames (there are more options; see man who for more info).
This gives a count of how many non-localhost users there are:
who | grep -v localhost | wc -l
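And a hedged sketch for pulling out the usernames as well (field layout as in the sample line above):
$ who | grep -v localhost | awk '{print $1}' | sort -u   # unique non-local usernames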

How to find out which process is using localhost:80? [closed]

I'm using Linux Mint, Xfce edition. localhost:80 is in use by some program, but I don't know which one. When I open Firefox and visit localhost:80, it says:
It works!
This is the default web page for this server.
The web server software is running but no content has been added, yet.
I've tried to use lsof -i @localhost:80, but it returns nothing.
Running netstat -anpt | grep :80 as the root user should list the process using port 80.
With your web browser closed, it can help you identify the process.
Try this:
# fuser -n tcp 80
From the manpage:
-n SPACE, --namespace SPACE
    Select a different name space. The name spaces file (file names, the default), udp (local UDP ports), and tcp (local TCP ports) are supported. For ports, either the port number or the symbolic name can be specified. If there is no ambiguity, the shortcut notation name/space (e.g. 80/tcp) can be used.
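For completeness, two hedged alternatives (run them as root so other users' processes are visible):
$ ss -ltnp 'sport = :80'        # socket statistics from iproute2
$ lsof -iTCP:80 -sTCP:LISTEN    # lists the process listening on TCP port 80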

Linux unable to create core dump from application [closed]

I have two servers running a vendor application. On one server, if the app crashes, it creates a core dump, but on the second it does not.
The servers were supposed to be set up the same, but I am trying to figure out why the application doesn't create a core dump on the second one. I've checked all the typical settings and have been doing research, with no luck.
The strange part is that if I run kill -s SIGSEGV $$ as my app user, it generates a core dump in the same directory the app is supposed to create its core dump in. The vendor and the Linux group are both unsure at the moment, which is why I'm looking here for help.
$ cat /proc/sys/kernel/core_pattern
core
$ cat /proc/sys/kernel/core_uses_pid
1
$ ulimit -c
unlimited
$ cat /etc/security/limits.conf | grep core
* soft core unlimited
* hard core unlimited
$ cat /etc/profile | grep ulimit
ulimit -c unlimited > /dev/null 2>&1
$ cat /proc/sys/fs/suid_dumpable
0
$ cat /etc/sysconfig/init | grep CORE
DAEMON_COREFILE_LIMIT='unlimited'
There could be several other reasons why the core dump is not created. Check the list of possible reasons in core(5): http://linux.die.net/man/5/core
Check dmesg output.
Check the specific process corefile size limit in /proc/PID/limits.
Check if the process user can create a file of typical coredump size in /proc/PID/cwd directory.
Specify an absolute file path in /proc/sys/kernel/core_pattern, pointing to a known writable location.
Create a short program adhering to the core-dump-accepting protocol, save it somewhere, and specify it in /proc/sys/kernel/core_pattern, according to core(5). Core dumps piped to programs are not subject to limits.
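A minimal hedged sketch of such a pipe handler (the /usr/local/bin/corecatcher name and path are hypothetical):
$ cat /usr/local/bin/corecatcher
#!/bin/sh
# invoked by the kernel with %p and %e as arguments; the core image arrives on stdin
cat > /var/tmp/core.$2.$1
$ chmod +x /usr/local/bin/corecatcher
$ echo '|/usr/local/bin/corecatcher %p %e' > /proc/sys/kernel/core_pattern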
