We have an application that runs on RHEL 6, both 32-bit and 64-bit, and has used PostgreSQL 8.4 from the beginning. Now we want to support this application on RHEL 7 (64-bit). RHEL 7 ships PostgreSQL 9.2 in its default yum repositories; that version installs fine and its services run properly. But after installing PostgreSQL 8.4 on RHEL 7, the service never seems to start. Please find the logs below:
[root@linpubn218 postgres]# service postgresql status
postgresql.service - SYSV: PostgreSQL database server.
Loaded: loaded (/etc/rc.d/init.d/postgresql)
Active: failed (Result: resources) since Mon 2016-07-25 12:40:28 IST; 2h 0min ago
Docs: man:systemd-sysv-generator(8)
Jul 25 12:40:26 linpubn218.gl.avaya.com systemd[1]: Starting SYSV: PostgreSQL database server....
Jul 25 12:40:28 linpubn218.gl.avaya.com postgresql[26957]: Starting postgresql service: [ OK ]
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: PID file /var/run/postmaster-8.4.pid not readable (yet?) after start.
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: Failed to start SYSV: PostgreSQL database server..
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service entered failed state.
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: postgresql.service failed.
Jul 25 14:33:45 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service cannot be reloaded because it is inactive.
Jul 25 14:33:45 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service cannot be reloaded because it is inactive.
After looking at the logs with journalctl -xe:
[root@linpubn218 postgres]# journalctl -xe
Jul 25 14:39:21 linpubn218.gl.avaya.com yum[29260]: Installed: postgresql84-libs-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:39:45 linpubn218.gl.avaya.com yum[29275]: Installed: postgresql84-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:40:01 linpubn218.gl.avaya.com useradd[29316]: failed adding user 'postgres', exit code: 9
Jul 25 14:40:02 linpubn218.gl.avaya.com CROND[29320]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 25 14:40:02 linpubn218.gl.avaya.com systemd[1]: Reloading.
Jul 25 14:40:03 linpubn218.gl.avaya.com systemd[1]: Configuration file /usr/lib/systemd/system/auditd.service is marked world-inaccessible. This has no effect as config
Jul 25 14:40:03 linpubn218.gl.avaya.com yum[29309]: Installed: postgresql84-server-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:42:05 linpubn218.gl.avaya.com polkitd[819]: Registered Authentication Agent for unix-process:29459:43987285 (system bus name :1.292 [/usr/bin/pkttyagent --not
Jul 25 14:42:05 linpubn218.gl.avaya.com systemd[1]: Starting SYSV: PostgreSQL database server....
Jul 25 14:42:06 linpubn218.gl.avaya.com runuser[29473]: pam_unix(runuser-l:session): session closed for user postgres
Jul 25 14:42:08 linpubn218.gl.avaya.com postgresql[29464]: Starting postgresql service: [ OK ]
Jul 25 14:42:08 linpubn218.gl.avaya.com systemd[1]: PID file /var/run/postmaster-8.4.pid not readable (yet?) after start.
Jul 25 14:42:08 linpubn218.gl.avaya.com systemd[1]: Failed to start SYSV: PostgreSQL database server..
Can PostgreSQL 8.4 be installed on RHEL 7, which is a systemd-based OS? If so, what should I do to remove the above error?
I noticed that in /etc/init.d/postgresql-8.4 there is a declared variable:
pidfile="/var/run/postmaster-${PGMAJORVERSION}.${PGPORT}.pid"
But the PIDFile that systemd uses is not the same:
# systemctl show postgresql-8.4.service -p PIDFile
PIDFile=/var/run/postmaster-8.4.pid
So, to fix the problem, edit /etc/init.d/postgresql-8.4 and replace
pidfile="/var/run/postmaster-${PGMAJORVERSION}.${PGPORT}.pid"
with
pidfile="/var/run/postmaster-${PGMAJORVERSION}.pid"
then reload systemd and start the service:
# systemctl daemon-reload
# /etc/init.d/postgresql-8.4 start
Starting postgresql-8.4 (via systemctl): [ OK ]
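Alternatively, instead of editing the init script, you can point systemd at the PID file the script actually writes by using a drop-in override. This is only a sketch: the unit name postgresql-8.4.service is taken from the output above, and port 5432 is an assumption about the default PGPORT.
# /etc/systemd/system/postgresql-8.4.service.d/pidfile.conf
[Service]
PIDFile=/var/run/postmaster-8.4.5432.pid
# then reload systemd and restart the service
systemctl daemon-reload
systemctl restart postgresql-8.4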
Generally, permission problems cause this type of error. Switch to the postgres user:
su - postgres
After that, restrict the data directory to the postgres user:
chmod -R 700 <data_directory>
You should check SELinux as well.
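For example (a sketch assuming the PGDG default data directory /var/lib/pgsql/8.4/data; adjust the path to your layout):
# as root: make sure postgres owns the data directory and only postgres can access it
chown -R postgres:postgres /var/lib/pgsql/8.4/data
chmod 700 /var/lib/pgsql/8.4/data
# check for SELinux AVC denials and restore the default file context if needed
ausearch -m avc -ts recent | grep postgres
restorecon -Rv /var/lib/pgsql/8.4/data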
Related
I installed Postgres 12.3 from source with the following steps (according to this):
./configure --with-openssl --with-systemd
make
sudo make install
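(For context, a source build like this also needs a one-time cluster initialisation before pg_ctl can start it; a sketch assuming the default /usr/local/pgsql prefix and an existing postgres user:)
sudo -u postgres /usr/local/pgsql/bin/initdb -D /path/to/pgdata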
If I start it with pg_ctl as the postgres user, everything works fine:
pg_ctl -D $PGDATA -l /path/to/logfile start
Then I try to create a systemd service, as described here.
Steps:
Create file /etc/systemd/system/postgresql.service with content:
[Unit]
Description=PostgreSQL database server
Documentation=man:postgres(1)
[Service]
Type=notify
User=postgres
ExecStart=/usr/local/pgsql/bin/postgres -D /path/to/pgdata
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
TimeoutSec=0
[Install]
WantedBy=multi-user.target
sudo systemctl enable postgresql.service
Then I reboot my machine.
After the restart, Postgres is unavailable. Some logs:
sudo systemctl status postgresql.service
postgresql.service - PostgreSQL database server
Loaded: loaded (/etc/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Fri 2020-06-05 03:23:32 MSK; 37s ago
Docs: man:postgres(1)
Process: 724 ExecStart=/usr/local/pgsql/bin/postgres -D /path/to/pgdata (code=exited, status=1/FAILURE)
Main PID: 724 (code=exited, status=1/FAILURE)
Jun 05 03:23:31 ctsvc systemd[1]: Starting PostgreSQL database server...
Jun 05 03:23:32 ctsvc systemd[1]: postgresql.service: Main process exited, code=exited, status=1/FAILURE
Jun 05 03:23:32 ctsvc systemd[1]: Failed to start PostgreSQL database server.
Jun 05 03:23:32 ctsvc systemd[1]: postgresql.service: Unit entered failed state.
Jun 05 03:23:32 ctsvc systemd[1]: postgresql.service: Failed with result 'exit-code'.
journalctl -xe | grep postgres
-- Subject: Unit postgresql.service has begun start-up
-- Unit postgresql.service has begun starting up.
Jun 05 03:23:32 ctsvc postgres[724]: 2020-06-05 03:23:32.209 MSK [724] LOG: starting PostgreSQL 12.3 on armv7l-unknown-linux-gnueabihf, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 32-bit
Jun 05 03:23:32 ctsvc postgres[724]: 2020-06-05 03:23:32.211 MSK [724] LOG: could not bind IPv4 address "172.17.17.42": Cannot assign requested address
Jun 05 03:23:32 ctsvc postgres[724]: 2020-06-05 03:23:32.211 MSK [724] HINT: Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
Jun 05 03:23:32 ctsvc postgres[724]: 2020-06-05 03:23:32.211 MSK [724] WARNING: could not create listen socket for "172.17.17.42"
Jun 05 03:23:32 ctsvc postgres[724]: 2020-06-05 03:23:32.211 MSK [724] FATAL: could not create any TCP/IP sockets
Jun 05 03:23:32 ctsvc postgres[724]: 2020-06-05 03:23:32.212 MSK [724] LOG: database system is shut down
Jun 05 03:23:32 ctsvc systemd[1]: postgresql.service: Main process exited, code=exited, status=1/FAILURE
-- Subject: Unit postgresql.service has failed
-- Unit postgresql.service has failed.
Jun 05 03:23:32 ctsvc systemd[1]: postgresql.service: Unit entered failed state.
Jun 05 03:23:32 ctsvc systemd[1]: postgresql.service: Failed with result 'exit-code'.
Jun 05 03:24:09 ctsvc sudo[1602]: user1 : TTY=pts/0 ; PWD=/home/user1 ; USER=root ; COMMAND=/bin/systemctl status postgresql.service
netstat -tnl | grep "5432" - shows nothing.
After that I can manually start the service:
sudo systemctl status postgresql.service
● postgresql.service - PostgreSQL database server
Loaded: loaded (/etc/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2020-06-05 03:30:57 MSK; 8s ago
Docs: man:postgres(1)
Main PID: 1681 (postgres)
Tasks: 8 (limit: 4915)
CGroup: /system.slice/postgresql.service
├─1681 /usr/local/pgsql/bin/postgres -D /path/to/pgdata
├─1683 postgres: checkpointer
├─1684 postgres: background writer
├─1685 postgres: walwriter
├─1686 postgres: autovacuum launcher
├─1687 postgres: stats collector
├─1688 postgres: logical replication launcher
└─1693 postgres: postgres postgres 172.17.17.40(53600) idle
Jun 05 03:30:56 ctsvc systemd[1]: Starting PostgreSQL database server...
Jun 05 03:30:57 ctsvc postgres[1681]: 2020-06-05 03:30:57.006 MSK [1681] LOG: starting PostgreSQL 12.3 on armv7l-unknown-linux-gnueabihf, compiled b
Jun 05 03:30:57 ctsvc postgres[1681]: 2020-06-05 03:30:57.007 MSK [1681] LOG: listening on IPv4 address "172.17.17.42", port 5432
Jun 05 03:30:57 ctsvc postgres[1681]: 2020-06-05 03:30:57.032 MSK [1681] LOG: listening on Unix socket "/tmp/.s.PGSQL.5432"
Jun 05 03:30:57 ctsvc postgres[1681]: 2020-06-05 03:30:57.424 MSK [1682] LOG: database system was shut down at 2020-06-05 02:59:03 MSK
Jun 05 03:30:57 ctsvc postgres[1681]: 2020-06-05 03:30:57.725 MSK [1681] LOG: database system is ready to accept connections
Jun 05 03:30:57 ctsvc systemd[1]: Started PostgreSQL database server.
netstat -tnl | grep '5432'
tcp 0 0 172.17.17.42:5432 0.0.0.0:* LISTEN
In my postgresql.conf I have the following:
# - Connection Settings -
listen_addresses = '172.17.17.42'
port = 5432
max_connections = 100
If it helps: Postgres runs on Cubietruck with Armbian.
uname -a
Linux ctsvc 4.19.62-sunxi #5.92 SMP Wed Jul 31 22:07:23 CEST 2019 armv7l GNU/Linux
No other process on my system tries to bind this port at boot time. As far as I understand, the service itself and PostgreSQL are fine. However, something strange happens during startup, and I can't figure out how to find the reason for this behavior.
Thanks in advance.
Finally my file /etc/systemd/system/postgresql.service looks like this:
[Unit]
Description=PostgreSQL database server
Documentation=man:postgres(1)
Wants=network-online.target
After=network.target network-online.target
[Service]
Type=notify
User=postgres
ExecStart=/usr/local/pgsql/bin/postgres -D /path/to/pgdata
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
TimeoutSec=0
[Install]
WantedBy=multi-user.target
Thanks to Laurenz Albe's comment, I added the following to the [Unit] section:
Wants=network-online.target
After=network.target network-online.target
to make sure the network is fully operational before PostgreSQL starts. After this change, PostgreSQL starts correctly after a reboot.
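Note that network-online.target only actually waits for the network if the wait-online service for your network manager is enabled (an assumption about this Armbian setup; pick the one matching your stack):
# for systemd-networkd
sudo systemctl enable systemd-networkd-wait-online.service
# or, for NetworkManager
sudo systemctl enable NetworkManager-wait-online.service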
I've updated Webmin, but now it refuses to restart:
● webmin.service - LSB: web-based administration interface for Unix systems
Loaded: loaded (/etc/init.d/webmin; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-07-29 09:30:29 CEST; 12s ago
Docs: man:systemd-sysv-generator(8)
Process: 1485 ExecStart=/etc/init.d/webmin start (code=exited, status=2)
Jul 29 09:30:26 vps513135 systemd[1]: Starting LSB: web-based administration interface for Unix systems...
Jul 29 09:30:27 vps513135 perl[1486]: pam_unix(webmin:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=root
Jul 29 09:30:29 vps513135 systemd[1]: webmin.service: Control process exited, code=exited status=2
Jul 29 09:30:29 vps513135 systemd[1]: Failed to start LSB: web-based administration interface for Unix systems.
Jul 29 09:30:29 vps513135 systemd[1]: webmin.service: Unit entered failed state.
Jul 29 09:30:29 vps513135 systemd[1]: webmin.service: Failed with result 'exit-code'.
Can someone explain to me what pam_unix(webmin:auth): authentication failure means?
Some more info:
root@vps513135:~# uname -a
Linux vps513135 4.9.0-7-amd64 #1 SMP Debian 4.9.110-1 (2018-07-05) x86_64 GNU/Linux
Thank you :)
SOLUTION
I tried to start it like this:
root@vps513135:~# /etc/webmin/start
Starting Webmin server in /usr/share/webmin
Failed to open SSL key /home/sowdowdow/domains/sow.sowdowdow.fr/ssl.key at /usr/share/webmin/miniserv.pl line 4414.
The output is a bit clearer, and I finally found a solution here.
Comment out the lines related to the broken server in /etc/webmin/miniserv.conf:
#ipcert_sow.sowdowdow.fr,*.sow.sowdowdow.fr=/home/sowdowdow/domains/sow.sowdowdow.fr/ssl.cert
#ipkey_sow.sowdowdow.fr,*.sow.sowdowdow.fr=/home/sowdowdow/domains/sow.sowdowdow.fr/ssl.key
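After commenting those lines out, restart Webmin so miniserv picks up the change (a sketch; either form should work on this Debian-based VPS):
sudo /etc/init.d/webmin restart
# or
sudo systemctl restart webmin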
Webmin doesn't support the systemctl command directly. Instead, please use the following commands to start the Webmin service:
/etc/rc.d/init.d/webmin stop
systemctl start webmin
I tried these commands and was able to start the Webmin service on my server.
I have a CoreOS beta (1185.2.0) installed.
I have the following systemd service file to start calico-node:
[Unit]
Description=Calico per-host agent
Requires=network-online.target
After=network-online.target
[Service]
Slice=machine.slice
PermissionsStartOnly=true
Environment=ETCD_CA_CERT_FILE=/etc/ssl/etcd/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/etcd/etcd1.pem
Environment=ETCD_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem
Environment=CALICO_DISABLE_FILE_LOGGING=true
Environment=HOSTNAME=10.79.218.2
Environment=IP=10.79.218.2
Environment=FELIX_FELIXHOSTNAME=10.79.218.2
Environment=CALICO_NETWORKING=true
Environment=NO_DEFAULT_POOLS=true
Environment=ETCD_ENDPOINTS=https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379
ExecStartPre=/bin/mkdir /var/run/calico
ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/run/calico --volume=modules,kind=host,source=/lib/modules,readOnly=false --mount=volume=modules,target=/lib/modules --volume=dns,kind=host,source=/etc/resolv.conf,readOnly=true --volume=etcd-tls-certs,kind=host,source=/etc/ssl/etcd,readOnly=true --mount=volume=dns,target=/etc/resolv.conf --mount=volume=etcd-tls-certs,target=/etc/ssl/etcd --mount=volume=var-run-calico,target=/var/run/calico --trust-keys-from-https quay.io/calico/node:v0.22.0
KillMode=mixed
Restart=always
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
Welp... the systemd service fails with:
● calico-node.service - Calico per-host agent
Loaded: loaded (/etc/systemd/system/calico-node.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit-hit) since Tue 2016-10-25 04:51:15 UTC; 9min ago
Process: 1970 ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/
Process: 4307 ExecStartPre=/bin/mkdir /var/run/calico (code=exited, status=1/FAILURE)
Main PID: 1970 (code=exited, status=1/FAILURE)
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Failed to start Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Unit entered failed state.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Failed with result 'exit-code'.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Service hold-off time over, scheduling restart.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Stopped Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Start request repeated too quickly.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Failed to start Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Unit entered failed state.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Failed with result 'start-limit-hit'.
I tried setting the environment variables in a terminal and running the rkt command manually, and I got this error message:
image: using image from file /usr/lib/rkt/stage1-images/stage1-fly.aci
run: open /usr/lib/rkt/stage1-images/stage1-fly.aci.asc: no such file or directory
I think that error may be related to the following configuration file at /etc/rkt/paths.d/paths.json:
{
"rktKind": "paths",
"rktVersion": "v1",
"stage1-images": "/usr/lib/rkt/stage1-images"
}
I need the paths configuration file later on for Kubernetes.
Any ideas? The .asc file really doesn't exist there.
/usr/lib is a symbolic link to /usr/lib64. With that file in place, rkt was configured to search for container image signatures (.asc files) under /usr/lib64 and not /usr/lib.
It seems that by default this configuration is already set properly, so simply removing the file /etc/rkt/paths.d/paths.json resolves the issue.
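For example (assuming nothing else on the host depends on that override yet):
sudo rm /etc/rkt/paths.d/paths.json
sudo systemctl restart calico-node.service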
Full answer at https://github.com/coreos/rkt/issues/3320
I'm having a dilemma here. I was required to (attempt to) upgrade MongoDB on my CentOS 7 server from 2.6.x to 3.0+. I tried following the basic guide from MongoDB (replacing the binaries directly), and this worked perfectly well... locally. On the server, my MongoDB service is totally flipping out and I have no idea why. And on top of that, the Mongo shell is somehow still at 2.6.
systemctl status mongo* reveals this catastrophe:
root@staging:~# systemctl status mongo*
● mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod)
Active: failed (Result: exit-code) since 一 2016-01-25 16:57:13 CST; 18h ago
Docs: man:systemd-sysv-generator(8)
1月 25 16:57:13 staging systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
1月 25 16:57:13 staging runuser[5310]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
1月 25 16:57:13 staging runuser[5310]: pam_unix(runuser:session): session closed for user mongod
1月 25 16:57:13 staging mongod[5301]: Starting mongod: [FAILED]
1月 25 16:57:13 staging systemd[1]: mongod.service: control process exited, code=exited status=1
1月 25 16:57:13 staging systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
1月 25 16:57:13 staging systemd[1]: Unit mongod.service entered failed state.
1月 25 16:57:13 staging systemd[1]: mongod.service failed.
1月 26 11:03:04 staging systemd[1]: Stopped SYSV: Mongo is a scalable, document-oriented database..
1月 26 11:04:52 staging systemd[1]: Stopped SYSV: Mongo is a scalable, document-oriented database..
● mongos.service
Loaded: not-found (Reason: No such file or directory)
Active: failed (Result: exit-code) since 一 2016-01-25 15:46:20 CST; 20h ago
1月 25 15:46:20 staging systemd[1]: Starting High-performance, schema-free document-oriented database...
1月 25 15:46:20 staging mongos[2712]: /usr/bin/mongos: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such f... directory
1月 25 15:46:20 staging systemd[1]: mongos.service: control process exited, code=exited status=127
1月 25 15:46:20 staging systemd[1]: Failed to start High-performance, schema-free document-oriented database.
1月 25 15:46:20 staging systemd[1]: Unit mongos.service entered failed state.
1月 25 15:46:20 staging systemd[1]: mongos.service failed.
1月 25 16:04:23 staging systemd[1]: Stopped High-performance, schema-free document-oriented database.
1月 26 11:18:04 staging systemd[1]: Stopped mongos.service.
Hint: Some lines were ellipsized, use -l to show in full.
Any assistance at all would be greatly appreciated!
Thanks again, as always.
This was ultimately solved by yum remove mongo* followed by manually removing ANYTHING referring to Mongo in any way (found using locate mongo*), then adding an up-to-date MongoDB repository and installing v3.2.1 via yum (contrary to the more common suggestion from MongoDB to simply replace the binaries directly).
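A sketch of those steps (the repository definition below is an assumption based on MongoDB's standard 3.2 yum repo for RHEL/CentOS 7; verify the URL against the current MongoDB documentation):
# /etc/yum.repos.d/mongodb-org-3.2.repo
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc

# remove every old package, then install and start the new version
sudo yum remove 'mongo*'
sudo yum install mongodb-org
sudo systemctl start mongod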
I installed Packstack on a fresh installation of Fedora 21 with all updates. When I ran
packstack --allinone I received this error:
ERROR : Error appeared during Puppet run: 192.168.1.*_keystone.pp Error:
Could not start Service[keystone]: Execution of '/sbin/service openstack-keystone
start' returned 1: Redirecting to /bin/systemctl start openstack-keystone.service
You will find full trace in log /var/tmp/packstack/20141223-022613-whLvTs/manifests
/192.168.1.*_keystone.pp.log
And this is the log:
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]:
Dependency Service[keystone] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder@services]:
Skipping because of failed dependencies
Notice: Finished catalog run in 13.02 seconds
With systemctl status openstack-keystone.service I get this:
openstack-keystone.service - OpenStack Identity Service (code-named Keystone)
Loaded: loaded (/usr/lib/systemd/system/openstack-keystone.service; disabled)
Active: failed (Result: start-limit) since Tue 2014-12-23 19:47:36 EET; 1min 59s ago
Process: 22526 ExecStart=/usr/bin/keystone-all (code=exited, status=1/FAILURE)
Main PID: 22526 (code=exited, status=1/FAILURE)
Dec 23 19:47:35 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:35 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:35 localhost.localdomain systemd[1]: openstack-keystone.servic...
Dec 23 19:47:36 localhost.localdomain systemd[1]: start request repeated to...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:36 localhost.localdomain systemd[1]: openstack-keystone.servic...
This can happen due to an SELinux AVC denial caused by a missing policy.
You can try putting SELinux into permissive mode:
# setenforce 0
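Permissive mode is mainly a diagnostic step. If an AVC denial is confirmed, you can build a local policy module instead of leaving SELinux permissive (a sketch using the standard audit/SELinux tools, assuming policycoreutils-python is installed):
# confirm the denial
ausearch -m avc -ts recent | grep keystone
# generate and load a local policy module from those denials
ausearch -m avc -ts recent | grep keystone | audit2allow -M keystone_local
semodule -i keystone_local.pp
# re-enable enforcing mode afterwards
setenforce 1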
A similar bug