Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details - linux

I am unable to restart my Apache server to successfully install the SSL certificates.
I get the following error:
Job for httpd.service failed because the control process exited with error code. See "systemctl status httpd.service" and "journalctl -xe" for details.
I have worked through several articles, and the root cause seems to be the following:
Mar 29 13:05:09 localhost.localdomain httpd[1234546]: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
Mar 29 13:05:09 localhost.localdomain httpd[1234546]: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
I was able to diagnose the issue and got the output below (also attached). I am unable to proceed further. Can you please help?
Server - AlmaLinux 8
Host - IONOS
Server version: Apache/2.4.37 (AlmaLinux)
-- Unit session-62994.scope has finished starting up.
-- The unit session-62994.scope has successfully entered the 'dead' state.
Mar 31 06:07:10 localhost.localdomain dhclient[1326]: XMT: Solicit on ens192, interval 110600ms.
Mar 31 06:07:10 localhost.localdomain dhclient[1326]: RCV: Advertise message on ens192 from fe80::250:56ff:fe8c:84c6.
Mar 31 06:07:10 localhost.localdomain dhclient[1326]: RCV: Advertise message on ens192 from fe80::250:56ff:fe9a:f13a.
Mar 31 06:07:30 localhost.localdomain sshd[1297516]: Invalid user sui from 167.99.68.65 port 48488
Mar 31 06:07:30 localhost.localdomain sshd[1297516]: pam_unix(sshd:auth): check pass; user unknown
Mar 31 06:07:30 localhost.localdomain sshd[1297516]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=167.99.68.65
Mar 31 06:07:32 localhost.localdomain sshd[1297516]: Failed password for invalid user sui from 167.99.68.65 port 48488 ssh2
Mar 31 06:07:34 localhost.localdomain sshd[1297516]: Received disconnect from 167.99.68.65 port 48488:11: Bye Bye [preauth]
Mar 31 06:07:34 localhost.localdomain sshd[1297516]: Disconnected from invalid user sui 167.99.68.65 port 48488 [preauth]
Mar 31 06:07:44 localhost.localdomain unix_chkpwd[1297520]: password check failed for user (root)
Mar 31 06:07:44 localhost.localdomain sshd[1297518]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=61.177.173.27 user=root
Mar 31 06:07:46 localhost.localdomain sshd[1297518]: Failed password for root from 61.177.173.27 port 58626 ssh2
Mar 31 06:07:46 localhost.localdomain unix_chkpwd[1297521]: password check failed for user (root)
[root@localhost ~]# ss --listening --tcp --numeric --processes
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 128 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=1087,fd=10),("nginx",pid=1086,fd=10),("nginx",pid=1084,fd=10))
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=1335,fd=5))
LISTEN 0 128 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=1087,fd=11),("nginx",pid=1086,fd=11),("nginx",pid=1084,fd=11))
LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=1335,fd=7))
LISTEN 0 80 *:3306 *:* users:(("mysqld",pid=1098,fd=19))
What I have tried so far (a fix based on the ss output is sketched below):
apachectl configtest - result: Syntax OK
setenforce 0 - to temporarily rule out SELinux
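The ss output above shows that nginx, not httpd, is holding ports 80 and 443, which is exactly why make_sock cannot bind. A minimal sketch of the two usual ways out, assuming nginx is not actually needed on this host (or can be moved aside):
# Option 1: free the ports by stopping the conflicting service
systemctl stop nginx
systemctl disable nginx    # keep it from grabbing the ports again at boot
systemctl restart httpd
# Option 2: keep nginx on 80/443 and move Apache instead, e.g. change the
# Listen directives (typically /etc/httpd/conf/httpd.conf and
# /etc/httpd/conf.d/ssl.conf on AlmaLinux) to unused ports such as
# 8080/8443, then restart httpd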

Related

How to send haproxy info log to rsyslog via unix sock?

Hi, I'm trying to configure haproxy/rsyslog so that ONLY the haproxy info log is sent to rsyslog via a unix socket.
Here is my config:
haproxy config
frontend MY_FRONT_END
log 127.0.0.1 /var/log/haproxy/dev/log info
bind *:12080
default_backend HTTP_BACKEND
rsyslog config
$ModLoad imuxsock
$InputUnixListenSocketCreatePath on
$InputUnixListenSocketHostName localhost
$AddUnixListenSocket /var/log/haproxy/dev/log
*.info /var/log/haproxy/access.log
However, what I see in the log is not just the haproxy log; it also contains entries unrelated to haproxy (e.g. the first three log lines below):
Dec 28 20:28:12 localhost sudo: testaccount : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/sh -c ip addr show
Dec 28 20:28:12 localhost sudo: testaccount : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/sh -c ip route
Dec 28 20:28:13 localhost sudo: testaccount : TTY=pts/1 ; PWD=/var/log/haproxy ; USER=root ; COMMAND=/sbin/service haproxy restart
Dec 28 20:28:13 localhost polkitd[59350]: Registered Authentication Agent for unix-process:32995:43061437 (system bus name :1.28346 [/usr/bin/pkttyagent --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_CA.UTF-8)
Dec 28 20:28:13 localhost systemd: Stopping HAProxy Load Balancer...
Dec 28 20:28:13 localhost haproxy: [WARNING] 362/202813 (30706) : Exiting Master process...
Dec 28 20:28:13 localhost haproxy: [NOTICE] 362/202813 (30706) : haproxy version is 2.2.6
Dec 28 20:28:13 localhost haproxy: [NOTICE] 362/202813 (30706) : path to executable is /usr/local/sbin/haproxy
Dec 28 20:28:13 localhost haproxy: [ALERT] 362/202813 (30706) : Current worker #1 (30708) exited with code 143 (Terminated)
Dec 28 20:28:13 localhost haproxy: [WARNING] 362/202813 (30706) : All workers exited. Exiting... (0)
Dec 28 20:28:13 localhost systemd: Starting HAProxy Load Balancer...
Dec 28 20:28:13 localhost haproxy[33016]: Proxy MY_FRONT_END started.
Dec 28 20:28:13 localhost haproxy[33016]: Proxy HTTP_BACKEND started.
Dec 28 20:28:13 localhost haproxy: [NOTICE] 362/202813 (33016) : New worker #1 (33018) forked
Dec 28 20:28:13 localhost systemd: Started HAProxy Load Balancer.
Dec 28 20:28:13 localhost polkitd[59350]: Unregistered Authentication Agent for unix-process:32995:43061437 (system bus name :1.28346, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_CA.UTF-8) (disconnected from bus)
Dec 28 20:28:13 localhost sudo: testaccount : TTY=pts/1 ; PWD=/var/log/haproxy ; USER=root ; COMMAND=/sbin/service rsyslog restart
How do I configure this so that only the haproxy info log is sent to rsyslog through the unix socket?
The correct answer is probably to use a ruleset that scopes just the imuxsock input (sketched below), but I don't know how to do that in legacy syntax.
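In the newer RainerScript syntax, such a ruleset would look roughly like the following. Treat it as a sketch rather than a drop-in answer: it reuses the socket path from the question and assumes an rsyslog version whose imuxsock input supports per-input ruleset binding (older releases do not).
module(load="imuxsock")
# bind this one socket to a dedicated ruleset, so the messages it receives
# never reach the default rules (and messages from other inputs never reach it)
input(type="imuxsock"
      Socket="/var/log/haproxy/dev/log"
      CreatePath="on"
      Ruleset="haproxy")
ruleset(name="haproxy") {
    # info and above only, mirroring the severity filter discussed below
    if $syslogseverity <= 6 then action(type="omfile" file="/var/log/haproxy/access.log")
}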
A simpler, less optimal solution is to filter on the programname of the log item. Also matching severity levels 0 to 6 (emerg to info) gives:
if $programname=="haproxy" and $syslogseverity<=6 then /var/log/haproxy/access.log
I'm not sure, but you could alternatively try moving your configuration earlier in the file, before the standard logging rules; your haproxy logs would then still appear in the standard logs too, unless you use something like
*.info /var/log/haproxy/access.log
*.* stop
where stop ends further processing of each message that reaches that point.

Apache2 fails to start (PuTTY, AWS, Windows)

I am trying to start a Django project on an AWS EC2 Linux server using PuTTY on Windows; however, Apache2 fails with an error saying the address is already in use, as shown below:
apache2.service - The Apache HTTP Server
Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2020-07-25 19:51:59 UTC; 2min 7s ago
Docs: https://httpd.apache.org/docs/2.4/
Process: 15022 ExecStart=/usr/sbin/apachectl start (code=exited, status=1/FAILURE)
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'Serv
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: (98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: (98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: no listening sockets available, shutting down
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: AH00015: Unable to open logs
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: Action 'start' failed.
Jul 25 19:51:59 ip-172-31-4-25 apachectl[15022]: The Apache error log may have more information.
Jul 25 19:51:59 ip-172-31-4-25 systemd[1]: apache2.service: Control process exited, code=exited, status=1/FAILURE
Jul 25 19:51:59 ip-172-31-4-25 systemd[1]: apache2.service: Failed with result 'exit-code'.
Jul 25 19:51:59 ip-172-31-4-25 systemd[1]: Failed to start The Apache HTTP Server.
I already tried checking the service status with the following command, but the failure stays the same:
systemctl status apache2.service
I also checked which service is listening on port 80 with the following command; the output was:
bitnami@ip-172-31-4-25:~$ sudo netstat -ntlp | grep 80
tcp6 0 0 :::80 :::* LISTEN 15122/httpd
I would appreciate any recommendation on it.
Check Skype; if it's running, close it first and then start the Apache service.
Skype also uses port 80.
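Skype is unlikely on a headless EC2 instance, though. The netstat output above already shows an httpd process (PID 15122) bound to port 80, which on a Bitnami image (note the bitnami@ prompt) is normally the bundled Apache. A sketch, assuming the Bitnami default control-script path, which may differ per image:
sudo /opt/bitnami/ctlscript.sh status        # see what the Bitnami stack is running
sudo /opt/bitnami/ctlscript.sh stop apache   # stop the bundled Apache
# or, generically, kill whatever owns TCP port 80 before starting apache2:
sudo fuser -k 80/tcp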

Interpreting the auth.log on a Linux system: what qualifies as one login attempt?

Using Python 3.5, I am composing a bit of code to analyze /var/log/auth.log and discern a few happenings from it. I am on Ubuntu 17.04 with the default settings for output to /var/log/auth.log.
I am attempting to quantify a failed login event. However, when I inspect the log file, it seems that a failed login event is logged multiple times. Is it safe to infer that all the lines below correspond to one failed login attempt as the call passes through the different layers of the system, or is each line below a separate failed login attempt?
Lines that I am inclined to attribute to one failed login attempt:
Jun 21 20:05:33 node1 sshd[24969]: Failed password for invalid user root from 221.194.47.252 port 43974 ssh2
Jun 21 20:05:38 node1 sshd[24969]: message repeated 2 times: [ Failed password for invalid user root from 221.194.47.252 port 43974 ssh2]
Jun 21 20:05:38 node1 sshd[24969]: Received disconnect from 221.194.47.252 port 43974:11: [preauth]
Jun 21 20:05:38 node1 sshd[24969]: Disconnected from 221.194.47.252 port 43974 [preauth]
Jun 21 20:05:38 node1 sshd[24969]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Jun 21 20:05:41 node1 sshd[24971]: User root from 221.194.47.252 not allowed because none of user's groups are listed in AllowGroups
Jun 21 20:05:41 node1 sshd[24971]: input_userauth_request: invalid user root [preauth]
Jun 21 20:05:42 node1 sshd[24971]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
More context:
Jun 21 20:05:33 node1 sshd[24969]: Failed password for invalid user root from 221.194.47.252 port 43974 ssh2
Jun 21 20:05:38 node1 sshd[24969]: message repeated 2 times: [ Failed password for invalid user root from 221.194.47.252 port 43974 ssh2]
Jun 21 20:05:38 node1 sshd[24969]: Received disconnect from 221.194.47.252 port 43974:11: [preauth]
Jun 21 20:05:38 node1 sshd[24969]: Disconnected from 221.194.47.252 port 43974 [preauth]
Jun 21 20:05:38 node1 sshd[24969]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Jun 21 20:05:41 node1 sshd[24971]: User root from 221.194.47.252 not allowed because none of user's groups are listed in AllowGroups
Jun 21 20:05:41 node1 sshd[24971]: input_userauth_request: invalid user root [preauth]
Jun 21 20:05:42 node1 sshd[24971]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Jun 21 20:05:44 node1 sshd[24971]: Failed password for invalid user root from 221.194.47.252 port 42071 ssh2
Jun 21 20:05:48 node1 sshd[24971]: message repeated 2 times: [ Failed password for invalid user root from 221.194.47.252 port 42071 ssh2]
Jun 21 20:05:49 node1 sshd[24971]: Received disconnect from 221.194.47.252 port 42071:11: [preauth]
Jun 21 20:05:49 node1 sshd[24971]: Disconnected from 221.194.47.252 port 42071 [preauth]
Jun 21 20:05:49 node1 sshd[24971]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Jun 21 20:05:51 node1 sshd[24976]: User root from 221.194.47.252 not allowed because none of user's groups are listed in AllowGroups
Jun 21 20:05:51 node1 sshd[24976]: input_userauth_request: invalid user root [preauth]
Jun 21 20:05:51 node1 sshd[24976]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Jun 21 20:05:54 node1 sshd[24976]: Failed password for invalid user root from 221.194.47.252 port 58648 ssh2
Jun 21 20:05:58 node1 sshd[24976]: message repeated 2 times: [ Failed password for invalid user root from 221.194.47.252 port 58648 ssh2]
Jun 21 20:05:59 node1 sshd[24976]: Received disconnect from 221.194.47.252 port 58648:11: [preauth]
Jun 21 20:05:59 node1 sshd[24976]: Disconnected from 221.194.47.252 port 58648 [preauth]
Jun 21 20:05:59 node1 sshd[24976]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Jun 21 20:06:02 node1 sshd[24980]: User root from 221.194.47.252 not allowed because none of user's groups are listed in AllowGroups
Jun 21 20:06:02 node1 sshd[24980]: input_userauth_request: invalid user root [preauth]
Jun 21 20:06:02 node1 sshd[24980]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=221.194.47.252 user=root
Should I go by the PID of the sshd process to determine one failed login attempt? I can't go by the port, since multiple failed login attempts can occur over one connection (and thus one port), and I am trying to be as granular as possible in counting failed login attempts for later analysis.
Any other ideas? My next step is to grep the sshd source or PAM to see what I can find.
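Reading the log above, each "Failed password" line appears to be one attempt, each "message repeated N times" line stands for N further attempts on the same connection, and the PAM/disconnect lines re-report those same failures per connection, so counting them again would double-count. Under that reading (worth verifying against your own logs), a rough counting sketch over the default Ubuntu log path:
awk '
  # one attempt per explicit "Failed password" line
  /sshd\[[0-9]+\]: Failed password for/ { attempts++ }
  # "message repeated N times: [ Failed password ..." stands for N more
  /sshd\[[0-9]+\]: message repeated [0-9]+ times: \[ Failed password/ {
      for (i = 1; i <= NF; i++) if ($i == "repeated") attempts += $(i + 1)
  }
  END { print attempts, "failed password attempts" }
' /var/log/auth.log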

Error starting the postgresql service in Linux through the command line

I was starting the postgresql service with
systemctl start postgresql.service
It raised the following error:
Job for postgresql.service failed. See "systemctl status postgresql.service" and "journalctl -xn" for details.
How can I start the service from the command line on Linux?
Output of journalctl -xn:
osboxes:/home/osboxes # journalctl -xn
-- Logs begin at Wed 2015-04-08 10:08:38 BST, end at Tue 2016-03-22 14:15:07 GMT. --
Mar 22 14:09:03 osboxes wickedd[824]: eno16777760: Notified neighbours about IP address 192.168
Mar 22 14:09:03 osboxes wickedd[824]: route ipv4 0.0.0.0/0 via 192.168.182.2 dev eno16777760 ty
Mar 22 14:09:04 osboxes wickedd[824]: Skipping hostname update, none available
Mar 22 14:15:01 osboxes cron[9120]: pam_unix(crond:session): session opened for user root by (u
Mar 22 14:15:01 osboxes systemd[9121]: pam_unix(systemd-user:session): session opened for user
Mar 22 14:15:01 osboxes CRON[9120]: pam_unix(crond:session): session closed for user root
Mar 22 14:15:01 osboxes systemd[9122]: pam_unix(systemd-user:session): session closed for user
Mar 22 14:15:07 osboxes postgresql[9160]: Initializing PostgreSQL 9.3.11 at location ~postgres/
Mar 22 14:15:07 osboxes postgresql[9160]: ..failed
Mar 22 14:15:07 osboxes postgresql[9160]: You can find a log of the initialisation in ~postgres
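The journal shows that it is the first-time cluster initialisation (initdb) that fails, not the server start itself, and the last (truncated) line names the log that explains why. A sketch for following up; since the exact log path is cut off above, the first step is simply to look for it in the postgres home directory:
sudo -u postgres ls -lt ~postgres | head    # find the initialisation log named in the truncated message
sudo journalctl -u postgresql.service --no-pager | tail -n 50    # full, untruncated unit log
# typical initdb failures are a non-empty or wrongly-owned data directory,
# or an invalid locale; the initialisation log will name the actual cause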

Bug: Varnish 4 install on CentOS 7 (systemctl)

Installed Varnish from yum, but I get an immediate error when starting it via systemctl.
Jul 28 14:11:54 localhost.localdomain varnishd[6546]: .init_func = VGC_function_vcl_init,
Jul 28 14:11:54 localhost.localdomain varnishd[6546]: .fini_func = VGC_function_vcl_fini,
Jul 28 14:11:54 localhost.localdomain varnishd[6546]: };
Jul 28 14:11:54 localhost.localdomain varnishd[6557]: Assert error in main(), mgt/mgt_main.c line 686:
Jul 28 14:11:54 localhost.localdomain varnishd[6557]: Condition((daemon(1,0)) == 0) not true.
Jul 28 14:11:54 localhost.localdomain varnishd[6557]: errno = 19 (No such device)
Jul 28 14:11:54 localhost.localdomain systemd[1]: Failed to read PID from file /var/run/varnish.pid: Invalid argument
Jul 28 14:11:54 localhost.localdomain systemd[1]: varnish.service never wrote its PID file. Failing.
Jul 28 14:11:54 localhost.localdomain systemd[1]: Failed to start Varnish a high-perfomance HTTP accelerator.
Jul 28 14:11:54 localhost.localdomain systemd[1]: Unit varnish.service entered failed state.
SELinux is disabled; the package was installed as root. This is a fresh install.
Looks like you need to reboot. ;)
The message:
Failed to read PID from file /var/run/varnish.pid: Invalid argument
is non-critical. It is just systemd trying to read the pidfile too early. You can poll the status with:
systemctl status varnish
If its "Main PID" entry matches the contents of /var/run/varnish.pid (and when varnishd is started via systemd, it always does), you can ignore the message. This is fixed in later versions of systemd.
