Failed global initialization: FileRenameFailed: Could not rename preexisting log file - linux

I get this error while trying to launch the MongoDB daemon:
CONTROL [main] Failed global initialization: FileRenameFailed: Could
not rename preexisting log file
"/var/lib/mongodb/log/mongod.log" to
"/var/lib/mongodb/log/mongod.log.2021-12-02T14-32-24"; run
with --logappend or manually remove file: Permission denied
My config:
storage:
dbPath: "/var/lib/mondodb/data"
systemLog:
destination: file
path: "/var/lib/mongodb/log/mongod.log"
The mongodb user owns /var/lib/mongodb and its subdirectories, so permissions should be fine.
Contents of the mongodb directory:
drwxr-xr-x 2 mongodb mongodb 4096 Dec 2 15:42 config
drwxr-xr-x 2 mongodb mongodb 4096 Dec 2 15:41 data
drwxr-xr-x 2 mongodb mongodb 4096 Dec 2 15:42 log
The service itself won't run either:
> sudo service mongod status
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2021-12-06 17:09:38 GMT; 1s ago
Docs: https://docs.mongodb.org/manual
Process: 24234 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, status=100)
Main PID: 24234 (code=exited, status=100)
Dec 06 17:09:37 GEL-R90VQK84 systemd[1]: Started MongoDB Database Server.
Dec 06 17:09:38 GEL-R90VQK84 systemd[1]: mongod.service: Main process exited, code=exited, status=100/n/a
Dec 06 17:09:38 GEL-R90VQK84 systemd[1]: mongod.service: Failed with result 'exit-code'.

Are you running the daemon as root?
Check the ownership of the file
/var/lib/mongodb/log/mongod.log
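If mongod was previously started as a different user (root, for example), files it created may no longer be writable by the mongodb user. A minimal check-and-fix sketch, using the paths from the question and the config path shown in the systemd unit above:

sudo ls -l /var/lib/mongodb/log/mongod.log    # who owns the current log file?
sudo chown -R mongodb:mongodb /var/lib/mongodb/log
# or keep appending instead of rotating, as the error message suggests,
# by adding "logAppend: true" under systemLog in /etc/mongod.conf
sudo service mongod start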

Related

What happened with the rocketchat service?

When I run the commands
sudo systemctl enable --now rocketchat
sudo systemctl status --now rocketchat
● rocketchat.service - The Rocket.Chat server
Loaded: loaded (/lib/systemd/system/rocketchat.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-12-18 16:48:30 CST; 9s ago
Process: 30676 ExecStart=/root/.nvm/versions/node/v19.3.0/bin/node /opt/Rocket.Chat/main.js (code=exited, status=203/EXEC)
Main PID: 30676 (code=exited, status=203/EXEC)
Dec 18 16:48:30 iZbp10iedgidvzsp3ltox5Z systemd[1]: Started The Rocket.Chat server.
Dec 18 16:48:30 iZbp10iedgidvzsp3ltox5Z systemd[30676]: rocketchat.service: Failed to execute command: Permission denied
Dec 18 16:48:30 iZbp10iedgidvzsp3ltox5Z systemd[30676]: rocketchat.service: Failed at step EXEC spawning /root/.nvm/versions/node/v19.3.0/bin/node: Permission denied
Dec 18 16:48:30 iZbp10iedgidvzsp3ltox5Z systemd[1]: rocketchat.service: Main process exited, code=exited, status=203/EXEC
Dec 18 16:48:30 iZbp10iedgidvzsp3ltox5Z systemd[1]: rocketchat.service: Failed with result 'exit-code'.
I don't know which command is being denied.
-rwxrwxrwx 1 admin admin 92571400 Dec 14 19:22 /root/.nvm/versions/node/v19.3.0/bin/node*
Can you tell me how to solve this problem?
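Status 203/EXEC usually means systemd could not execute the binary at all; here it is likely a path-traversal problem rather than the binary's own mode bits, since /root is normally mode 700 and a non-root service user cannot reach anything beneath it. A hedged diagnostic sketch: namei prints the permissions of every component along the path, so a blocking directory shows up immediately:

namei -l /root/.nvm/versions/node/v19.3.0/bin/node

If /root is the blocker, the usual fixes are moving the Node installation outside /root or running the unit as root (User=root in the service file).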

NGINX error after 'sudo systemctl status nginx' - Failed with result 'exit-code'

After trying to add new domains to my Ubuntu 20.04 cloud server with nginx and pm2, I created a server block in
'/etc/nginx/sites-available/mydomain.ar'
and created the same file in
'/etc/nginx/sites-enabled/mydomain.ar'
The next step was to link the two files with
ln -s /etc/nginx/sites-available/cloud.ktsoftware.ar /etc/nginx/sites-enabled/cloud.ktsoftware.ar
but I got an error that the file already existed:
ln: failed to create symbolic link '/etc/nginx/sites-enabled/mydomain.ar': File exists
So I ran the command again, forcing the link:
sudo ln -sf /etc/nginx/sites-available/cloud.ktsoftware.ar /etc/nginx/sites-enabled/cloud.ktsoftware.ar
Everything appeared OK, with no error after that. Then I ran
sudo systemctl status nginx
and got this error:
● nginx.service - A high performance web server and a reverse proxy server
Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2022-07-10 13:47:17 -03; 17min ago
Docs: man:nginx(8)
Process: 1287489 ExecStartPre=/usr/sbin/nginx -t -q -g daemon on; master_process on; (code=exited, status=1/FAILURE)
Below the first error paragraph:
Jul 10 13:47:17 vps-2421400-x systemd[1]: nginx.service: Succeeded.
Jul 10 13:47:17 vps-2421400-x systemd[1]: Stopped A high performance web server and a reverse proxy server.
Jul 10 13:47:17 vps-2421400-x systemd[1]: Starting A high performance web server and a reverse proxy server...
Jul 10 13:47:17 vps-2421400-x nginx[1287489]: nginx: [emerg] open() "/etc/nginx/sites-enabled/mydomain.conf" failed (2: No such file or directory) in /etc/n>
Jul 10 13:47:17 vps-2421400-x nginx[1287489]: nginx: configuration file /etc/nginx/nginx.conf test failed
Jul 10 13:47:17 vps-2421400-x systemd[1]: nginx.service: Control process exited, code=exited, status=1/FAILURE
Jul 10 13:47:17 vps-2421400-x systemd[1]: nginx.service: Failed with result 'exit-code'.
Jul 10 13:47:17 vps-2421400-x systemd[1]: Failed to start A high performance web server and a reverse proxy server.
and I think this crashed everything.
What is the best way to link the domains server blocks?
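The journal complains about /etc/nginx/sites-enabled/mydomain.conf, a file that does not exist, which suggests a dangling symlink or a stale include left behind by the earlier attempts. A sketch of the conventional workflow, using the placeholder domain from the question: keep the real file only in sites-available, remove whatever ended up in sites-enabled, recreate the symlink, and test before restarting.

sudo rm /etc/nginx/sites-enabled/mydomain.ar
sudo ln -s /etc/nginx/sites-available/mydomain.ar /etc/nginx/sites-enabled/
sudo nginx -t                     # validate the configuration first
sudo systemctl restart nginx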

restore from pg_basebackup

I made daily backups of a postgresql DB using the command
/usr/bin/pg_basebackup -D $outdir -Ft -x -z -w -R -v
Now I want to restore this DB on another server. I used the description on https://www.postgresql.org/docs/9.5/static/continuous-archiving.html#BACKUP-PITR-RECOVERY.
The recovery.conf file included in the backup has the following contents:
standby_mode = 'on'
primary_conninfo = 'user=postgres port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
The next step (step 8) in the documentation says to start PostgreSQL. This results in a failure due to a timeout:
3783 postgres: startup process waiting for 0000000100000024000000B
On the original server I don't have this file. Is it possible to restore only the state of the pg_basebackup without using any WAL files? What should then be in the recovery.conf file?
Following the suggestion by @JosMac, I moved recovery.conf away, with this result:
shaun2:/var/lib/pgsql/data # service postgresql start
Job for postgresql.service failed because the control process exited with error code. See "systemctl status postgresql.service" and "journalctl -xe" for details.
shaun2:/var/lib/pgsql/data # service postgresql status
● postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-06-18 12:02:53 CEST; 12s ago
Process: 1340 ExecStop=/usr/lib/postgresql-init stop (code=exited, status=0/SUCCESS)
Process: 9355 ExecStart=/usr/lib/postgresql-init start (code=exited, status=1/FAILURE)
Main PID: 1060 (code=exited, status=0/SUCCESS)
Jun 18 12:02:52 shaun2 postgres[9369]: [3-1] 2018-06-18 12:02:52 CEST LOG: invalid checkpoint record
Jun 18 12:02:52 shaun2 postgres[9369]: [4-1] 2018-06-18 12:02:52 CEST FATAL: could not locate required checkpoint record
Jun 18 12:02:52 shaun2 postgres[9369]: [4-2] 2018-06-18 12:02:52 CEST HINT: If you are not restoring from a backup, try removing the file "/var/lib/pgsql/data/backup_label".
Jun 18 12:02:52 shaun2 postgres[9367]: [2-1] 2018-06-18 12:02:52 CEST LOG: startup process (PID 9369) exited with exit code 1
Jun 18 12:02:52 shaun2 postgres[9367]: [3-1] 2018-06-18 12:02:52 CEST LOG: aborting startup due to startup process failure
Jun 18 12:02:53 shaun2 postgresql-init[9355]: pg_ctl: could not start server
Jun 18 12:02:53 shaun2 systemd[1]: postgresql.service: Control process exited, code=exited status=1
Jun 18 12:02:53 shaun2 systemd[1]: Failed to start PostgreSQL database server.
Jun 18 12:02:53 shaun2 systemd[1]: postgresql.service: Unit entered failed state.
Jun 18 12:02:53 shaun2 systemd[1]: postgresql.service: Failed with result 'exit-code'.
I suppose that PostgreSQL is still looking for the missing WAL file because of the contents of backup_label:
shaun2:/var/lib/pgsql/data # cat backup_label
START WAL LOCATION: 24/B0000028 (file 0000000100000024000000B0)
CHECKPOINT LOCATION: 24/B0000028
BACKUP METHOD: streamed
BACKUP FROM: master
START TIME: 2018-06-14 02:55:08 CEST
LABEL: pg_basebackup base backup
Result after moving backup_label away:
shaun2:/var/lib/pgsql/data # service postgresql status
● postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; enabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Mon 2018-06-18 12:17:54 CEST; 4s ago
Process: 1340 ExecStop=/usr/lib/postgresql-init stop (code=exited, status=0/SUCCESS)
Process: 10401 ExecStart=/usr/lib/postgresql-init start (code=exited, status=1/FAILURE)
Main PID: 1060 (code=exited, status=0/SUCCESS)
Jun 18 12:17:53 shaun2 postgres[10414]: [4-1] 2018-06-18 12:17:53 CEST LOG: invalid secondary checkpoint record
Jun 18 12:17:53 shaun2 postgres[10414]: [5-1] 2018-06-18 12:17:53 CEST PANIC: could not locate a valid checkpoint record
Jun 18 12:17:54 shaun2 postgres[10412]: [2-1] 2018-06-18 12:17:54 CEST LOG: startup process (PID 10414) was terminated by signal 6: Aborted
We use pg_basebackup for backups and have done several restorations, so in general it works very well.
But I would recommend using the parameter -X stream instead of -x (which means -X fetch). With this parameter, pg_basebackup catches and stores the WAL segments created during the backup together with the data files. These WAL logs are stored in a separate pg_xlog.tar or pg_wal.tar file (depending on the PostgreSQL version).
A full description of the restoration procedure can be found here: pg_basebackup / pg-barman – restore tar backup
The -R option generates a recovery.conf file that is useful if the backup will be used for replica servers, because it sets standby_mode and fills in the primary_conninfo needed to pull data from the primary.
So if you just want to make and restore backups, I wouldn't use -R. In case it helps, I used these options: -v -P -x -F tar -z.
To restore the backup, unpack it into the proper directory (e.g. /var/lib/postgresql/$VERSION/main), create an empty recovery.conf file there (or clear the one you have, but better not to use -R), and start the server.
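A minimal sketch of the backup-and-restore cycle with -X stream, as recommended above. The paths and PostgreSQL version are assumptions; note that -X stream only works together with tar format on PostgreSQL 10 and later, where it produces a separate pg_wal.tar:

# take the base backup, streaming WAL alongside the data files
pg_basebackup -D /backup/base -F tar -z -X stream -v -w

# on the target server: stop PostgreSQL, unpack into an empty data
# directory, then start the server
systemctl stop postgresql
tar -xzf /backup/base/base.tar.gz -C /var/lib/postgresql/10/main
tar -xzf /backup/base/pg_wal.tar.gz -C /var/lib/postgresql/10/main/pg_wal
systemctl start postgresql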

postgresql service not running on RHEL 7

We have an application that runs on RHEL6/32-bit and RHEL6/64-bit. This application has used postgresql 8.4 from the beginning. Now we want to support it on RHEL7/64-bit. RHEL7 ships postgresql 9.2 by default in its yum repositories; that version installs fine and its services run properly. But after installing postgresql 8.4 on RHEL7, the services never seem to run. Please find the logs below:
[root@linpubn218 postgres]# service postgresql status
postgresql.service - SYSV: PostgreSQL database server.
Loaded: loaded (/etc/rc.d/init.d/postgresql)
Active: failed (Result: resources) since Mon 2016-07-25 12:40:28 IST; 2h 0min ago
Docs: man:systemd-sysv-generator(8)
Jul 25 12:40:26 linpubn218.gl.avaya.com systemd[1]: Starting SYSV: PostgreSQL database server....
Jul 25 12:40:28 linpubn218.gl.avaya.com postgresql[26957]: Starting postgresql service: [ OK ]
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: PID file /var/run/postmaster-8.4.pid not readable (yet?) after start.
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: Failed to start SYSV: PostgreSQL database server..
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service entered failed state.
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: postgresql.service failed.
Jul 25 14:33:45 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service cannot be reloaded because it is inactive.
Jul 25 14:33:45 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service cannot be reloaded because it is inactive.
After looking at the logs in journalctl -xe
[root@linpubn218 postgres]# journalctl -xe
Jul 25 14:39:21 linpubn218.gl.avaya.com yum[29260]: Installed: postgresql84-libs-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:39:45 linpubn218.gl.avaya.com yum[29275]: Installed: postgresql84-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:40:01 linpubn218.gl.avaya.com useradd[29316]: failed adding user 'postgres', exit code: 9
Jul 25 14:40:02 linpubn218.gl.avaya.com CROND[29320]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 25 14:40:02 linpubn218.gl.avaya.com systemd[1]: Reloading.
Jul 25 14:40:03 linpubn218.gl.avaya.com systemd[1]: Configuration file /usr/lib/systemd/system/auditd.service is marked world-inaccessible. This has no effect as config
Jul 25 14:40:03 linpubn218.gl.avaya.com yum[29309]: Installed: postgresql84-server-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:42:05 linpubn218.gl.avaya.com polkitd[819]: Registered Authentication Agent for unix-process:29459:43987285 (system bus name :1.292 [/usr/bin/pkttyagent --not
Jul 25 14:42:05 linpubn218.gl.avaya.com systemd[1]: Starting SYSV: PostgreSQL database server....
Jul 25 14:42:06 linpubn218.gl.avaya.com runuser[29473]: pam_unix(runuser-l:session): session closed for user postgres
Jul 25 14:42:08 linpubn218.gl.avaya.com postgresql[29464]: Starting postgresql service: [ OK ]
Jul 25 14:42:08 linpubn218.gl.avaya.com systemd[1]: PID file /var/run/postmaster-8.4.pid not readable (yet?) after start.
Jul 25 14:42:08 linpubn218.gl.avaya.com systemd[1]: Failed to start SYSV: PostgreSQL database server..
Can postgresql 8.4 be installed on RHEL7, which is a systemd-based OS? If yes, what should I do to fix the above error?
I noticed that in /etc/init.d/postgresql-8.4 there is a declared variable:
pidfile="/var/run/postmaster-${PGMAJORVERSION}.${PGPORT}.pid"
But the PIDFile in the systemd unit is not the same:
# systemctl show postgresql-8.4.service -p PIDFile
PIDFile=/var/run/postmaster-8.4.pid
So, to fix the problem, edit /etc/init.d/postgresql-8.4 and replace
pidfile="/var/run/postmaster-${PGMAJORVERSION}.${PGPORT}.pid"
with
pidfile="/var/run/postmaster-${PGMAJORVERSION}.pid"
then reload systemd and start the service:
# systemctl daemon-reload
# /etc/init.d/postgresql-8.4 start
Starting postgresql-8.4 (via systemctl): [ OK ]
Generally, permissions cause this type of error. Switch to the postgres user:
su - postgres
After that, tighten the permissions on the data directory:
chmod -R 700 <data_directory>
And you should check SELinux as well.
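On RHEL-family systems it is worth ruling out SELinux explicitly. A hedged sketch (the data directory path is the distribution default; adjust to where your cluster actually lives):

getenforce                       # "Enforcing" means SELinux can block the service
ausearch -m avc -ts recent       # look for denials mentioning postgres
restorecon -R /var/lib/pgsql     # restore default file contexts on the data directory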

MongoDB installation hell

I'm having a dilemma here. I was required to (attempt to) upgrade MongoDB on my CentOS 7 server from 2.6.x to 3.0+. I tried following the basic guide from MongoDB (replacing the binaries directly), and this worked perfectly well... locally. On the server, my MongoDB service is totally flipping out and I have no idea why. And on top of that, the Mongo shell is somehow still at 2.6.
systemctl status mongo* reveals this catastrophe:
root@staging:~# systemctl status mongo*
● mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod)
Active: failed (Result: exit-code) since Mon 2016-01-25 16:57:13 CST; 18h ago
Docs: man:systemd-sysv-generator(8)
Jan 25 16:57:13 staging systemd[1]: Starting SYSV: Mongo is a scalable, document-oriented database....
Jan 25 16:57:13 staging runuser[5310]: pam_unix(runuser:session): session opened for user mongod by (uid=0)
Jan 25 16:57:13 staging runuser[5310]: pam_unix(runuser:session): session closed for user mongod
Jan 25 16:57:13 staging mongod[5301]: Starting mongod: [FAILED]
Jan 25 16:57:13 staging systemd[1]: mongod.service: control process exited, code=exited status=1
Jan 25 16:57:13 staging systemd[1]: Failed to start SYSV: Mongo is a scalable, document-oriented database..
Jan 25 16:57:13 staging systemd[1]: Unit mongod.service entered failed state.
Jan 25 16:57:13 staging systemd[1]: mongod.service failed.
Jan 26 11:03:04 staging systemd[1]: Stopped SYSV: Mongo is a scalable, document-oriented database..
Jan 26 11:04:52 staging systemd[1]: Stopped SYSV: Mongo is a scalable, document-oriented database..
● mongos.service
Loaded: not-found (Reason: No such file or directory)
Active: failed (Result: exit-code) since Mon 2016-01-25 15:46:20 CST; 20h ago
Jan 25 15:46:20 staging systemd[1]: Starting High-performance, schema-free document-oriented database...
Jan 25 15:46:20 staging mongos[2712]: /usr/bin/mongos: error while loading shared libraries: libssl.so.1.0.0: cannot open shared object file: No such f... directory
Jan 25 15:46:20 staging systemd[1]: mongos.service: control process exited, code=exited status=127
Jan 25 15:46:20 staging systemd[1]: Failed to start High-performance, schema-free document-oriented database.
Jan 25 15:46:20 staging systemd[1]: Unit mongos.service entered failed state.
Jan 25 15:46:20 staging systemd[1]: mongos.service failed.
Jan 25 16:04:23 staging systemd[1]: Stopped High-performance, schema-free document-oriented database.
Jan 26 11:18:04 staging systemd[1]: Stopped mongos.service.
Hint: Some lines were ellipsized, use -l to show in full.
Any assistance at all would be greatly appreciated!
Thanks again, as always.
This was ultimately solved by yum remove mongo*, followed by manually removing anything referring to Mongo in any way (found using locate mongo*), then adding an up-to-date MongoDB repo and installing v3.2.1 via yum (contrary to MongoDB's more common suggestion to simply replace the binaries directly).
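A sketch of that clean-reinstall sequence; the repo file below reflects MongoDB's published yum repository layout for 3.2 on RHEL/CentOS 7, but treat it as an assumption and check the installation docs for your exact version:

yum remove 'mongo*'
updatedb && locate mongo          # hunt down leftovers and remove them by hand
cat > /etc/yum.repos.d/mongodb-org-3.2.repo <<'EOF'
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/7/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc
EOF
yum install -y mongodb-org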
