I am trying to set up a PostgreSQL 9.3 server on CentOS 7 (installed via yum) inside a custom directory, which in my case is an encrypted partition (/custom_container/database) that is mounted on startup. For some reason PostgreSQL does not behave as the manual says it should, and fails on service startup.
Note: it does not accept the PGDATA environment variable I set, and when running
su - postgres -c '/usr/pgsql-9.3/bin/initdb'
(given that the PGDATA directory is owned by postgres:postgres) the cluster gets initialized inside the default directory /var/lib/pgsql/9.3/data/
The only way to change that is using
su - postgres -c '/usr/pgsql-9.3/bin/initdb --pgdata=$PGDATA'
which initializes the directory inside the custom container I am using. This is something I could not figure out, as the docs say the PGDATA variable is used by default.
Problem: When running
service postgresql-9.3 start
I get an error with the log
postgresql-9.3.service - PostgreSQL 9.3 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.3.service; disabled)
Active: failed (Result: exit-code) since Mon 2014-11-10 15:24:15 CET; 1s ago
Process: 2785 ExecStartPre=/usr/pgsql-9.3/bin/postgresql93-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE)
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Starting PostgreSQL 9.3 database server...
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: "/var/lib/pgsql/9.3/data/" is missing or empty.
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: Use "/usr/pgsql-9.3/bin/postgresql93-setup initdb" to initialize t...ster.
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: See %{_pkgdocdir}/README.rpm-dist for more information.
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: postgresql-9.3.service: control process exited, code=exited status=1
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Failed to start PostgreSQL 9.3 database server.
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Unit postgresql-9.3.service entered failed state.
This means that PostgreSQL, even though the cluster is initialized in the new $PGDATA directory (/custom_container/database), still looks for the cluster in /var/lib/pgsql/9.3/data/.
Has anyone experienced this PostgreSQL behavior before? Could it be that I missed certain configuration options, or that the problem comes from the PostgreSQL installation?
Thank you in advance!
It appears the real problem was setting the environment variables, which I got working in the following thread:
Centos 7 environment variables for Postgres service
The issue is the PGDATA variable, which must be set inside a custom /etc/systemd/system/postgresql-9.3.service created from the contents of /usr/lib/systemd/system/postgresql-9.3.service, which uses the default PGDATA.
You need to create a custom postgresql.service file in /etc/systemd/system/, which overrides the default PGDATA environment variable. Your custom service file can .include the default postgresql service file, so you only need to add what you want to change. That way, upgrades can still modify/improve the default service file, while your change is preserved.
This is how I just did it in CentOS 7:
cat <<END >/etc/systemd/system/postgresql.service
.include /lib/systemd/system/postgresql.service
[Service]
# Set this to the PGDATA path you want (systemd does not allow trailing comments on the same line)
Environment=PGDATA=/mnt/postgres/data
END
systemctl daemon-reload
systemctl restart postgresql.service
Verify:
ps -ax | grep [p]ostgres
Update:
Rather than manually creating the file and adding the .include line, you can also use the systemd built-in way:
systemctl edit postgresql.service
This will open your default editor and save your changes to /etc/systemd/system/postgresql.service.d/override.conf
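The override file only needs the variable you want to change; for example (a minimal sketch, using the asker's custom path):
[Service]
Environment=PGDATA=/custom_container/database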
Try this:
## Login with postgres user
su - postgres
export PGDATA=/your_path/data
pg_ctl -D $PGDATA start &
I think the most "CentOS 7 way" to do it is to copy the service file:
sudo cp /usr/lib/systemd/system/postgresql-9.6.service /etc/systemd/system/postgresql-9.6.service
Then edit the file /etc/systemd/system/postgresql-9.6.service:
# Location of database directory
Environment=PGDATA=/mnt/volume/var/lib/pgsql/9.6/data/
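Since you changed a unit file, reload systemd so it picks up the edit:
sudo systemctl daemon-reload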
Then start it with sudo systemctl start postgresql-9.6 and verify:
# sudo ps -ax | grep postmaster
32100 ? Ss 0:00 /usr/pgsql-9.6/bin/postmaster -D /mnt/volume/var/lib/pgsql/9.6/data/
Try editing the file /etc/init.d/postgresql-9.3:
PGDATA=/your/custom/path
Related
After installing MongoDB on my Ubuntu machine, I tried to run mongo, but it said:
MongoDB shell version v4.4.1
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
So I enabled and started the mongod service, then ran the command
sudo systemctl status mongod
and it said:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-09-17 00:23:08 +06; 8min ago
Docs: https://docs.mongodb.org/manual
Process: 45414 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, sta>
Main PID: 45414 (code=exited, status=1/FAILURE)
Sep 17 00:23:08 john systemd[1]: Started MongoDB Database Server.
Sep 17 00:23:08 john mongod[45414]: about to fork child process, waiting until server is>
Sep 17 00:23:08 john mongod[45427]: forked process: 45428
Sep 17 00:23:08 john mongod[45414]: ERROR: child process failed, exited with error numbe>
Sep 17 00:23:08 john mongod[45414]: To see additional information in this output, start >
Sep 17 00:23:08 john systemd[1]: mongod.service: Main process exited, code=exited, statu>
Sep 17 00:23:08 john systemd[1]: mongod.service: Failed with result 'exit-code'.
And I can't run the MongoDB shell. What should I do?
I came across this issue yesterday, and I was able to resolve it by:
removing the mongod.lock file.
running the config fork command.
Remove the .lock file:
sudo rm /usr/local/var/mongodb/mongod.lock
Run:
mongod --config /usr/local/etc/mongod.conf --fork
and use the mongo command again.
mongod needs to be running before you can run mongo without that error.
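On a packaged Ubuntu install managed by systemd, the equivalent of the fork command above would presumably be:
sudo systemctl start mongod
mongo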
P.S. here is the answer for others who stumble upon the original question from the title.
I also got that same error. I think this may have happened because of some update on our PC (like a .NET framework update or something).
I uninstalled and reinstalled MongoDB, and it's working again.
You have to go to /etc and modify mongod.conf, because:
"By default, MongoDB launches with bindIp set to 127.0.0.1", which binds to the localhost network interface. This means that mongod can only accept connections from clients that are running on the same machine.
So you could sudo nano mongod.conf and change 127.0.0.1 to 0.0.0.0.
You must then restart mongo.
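For reference, the relevant section of /etc/mongod.conf (YAML) would look roughly like this; note that 0.0.0.0 listens on all interfaces, so only do this behind a firewall or with authentication enabled:
net:
  port: 27017
  bindIp: 0.0.0.0
Then restart: sudo systemctl restart mongod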
Create a folder data in root C: directory.
Create another folder db inside data folder.
Now run mongod in cmd in the path
C:\Program Files\MongoDB\Server\5.0\bin>mongod
Don't close this command prompt.
Open another cmd in the same path
C:\Program Files\MongoDB\Server\5.0\bin>mongo
Run the mongo command.
Now it will connect.
I want to make a systemd unit for pgagent.
I found only an init.d script on this page http://technobytz.com/automatic-sql-database-backup-postgres.html, but I don't know how to exec start-stop-daemon in systemd.
I have written that unit:
[Unit]
Description=pgagent
After=network.target postgresql.service
[Service]
ExecStart=start-stop-daemon -b --start --quiet --exec pgagent --name pgagent --startas pgagent -- hostaddr=localhost port=5432 dbname=postgres user=postgres
ExecStop=start-stop-daemon --stop --quiet -n pgagent
[Install]
WantedBy=multi-user.target
But I get errors like:
[/etc/systemd/system/pgagent.service:14] Executable path is not absolute, ignoring: start-stop-daemon --stop --quiet -n pgagent
What is wrong with that unit?
systemd expects the ExecStart and ExecStop commands to include the full path to the executable.
start-stop-daemon is not necessary for services under systemd management. You will want to have the unit execute the underlying pgagent command directly.
Look at https://unix.stackexchange.com/questions/220362/systemd-postgresql-start-script for an example.
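A unit that runs pgagent directly might look like this (a sketch; it assumes pgagent is installed at /usr/bin/pgagent and supports the -f flag to stay in the foreground — adjust the path and connection string to your setup):
[Unit]
Description=pgagent
After=network.target postgresql.service

[Service]
# Foreground mode lets systemd supervise the process directly
ExecStart=/usr/bin/pgagent -f hostaddr=localhost port=5432 dbname=postgres user=postgres

[Install]
WantedBy=multi-user.target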
If you installed pgagent with yum or apt-get, it should have created the systemd file for you. For example, on RHEL 7 (essentially CentOS 7), you can install PostgreSQL 12 followed by pgagent
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install postgresql12
sudo yum install postgresql12-server
sudo yum install pgagent_12.x86_64
This installs PostgreSQL to /var/lib/pgsql/12 and pgagent_12 to /usr/bin/pgagent_12
In addition, it creates a systemd file at /usr/lib/systemd/system/pgagent_12.service
View the status of the service with systemctl status pgagent_12
Configure it to auto-start, then start it, with:
sudo systemctl enable pgagent_12
sudo systemctl start pgagent_12
Most likely the authentication will fail, since the default .service file has
ExecStart=/usr/bin/pgagent_12 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT}
Confirm with sudo tail /var/log/pgagent_12.log which will show
Sat Oct 12 19:35:47 2019 WARNING: Couldn't create the primary connection [Attempt #1]
Sat Oct 12 19:35:52 2019 WARNING: Couldn't create the primary connection [Attempt #2]
Sat Oct 12 19:35:57 2019 WARNING: Couldn't create the primary connection [Attempt #3]
Sat Oct 12 19:36:02 2019 WARNING: Couldn't create the primary connection [Attempt #4]
To fix things, we need to create a .pgpass file that is accessible when the service starts. First, stop the service
sudo systemctl stop pgagent_12
Examining the service file with less /usr/lib/systemd/system/pgagent_12.service shows it has
User=pgagent
Group=pgagent
Furthermore, /etc/pgagent/pgagent_12.conf has
DBNAME=postgres
DBUSER=postgres
DBHOST=127.0.0.1
DBPORT=5432
LOGFILE=/var/log/pgagent_12.log
Examine the /etc/passwd file to look for the pgagent user and its home directory: grep "pgagent" /etc/passwd
pgagent:x:980:977:pgAgent Job Schedule:/home/pgagent:/bin/false
Thus, we need to create a .pgpass file at /home/pgagent/.pgpass to define the postgres user's password
sudo su -
mkdir /home/pgagent
chown pgagent:pgagent /home/pgagent
chmod 0700 /home/pgagent
echo "127.0.0.1:5432:postgres:postgres:PasswordGoesHere" > /home/pgagent/.pgpass
chown pgagent:pgagent /home/pgagent/.pgpass
chmod 0600 /home/pgagent/.pgpass
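Before restarting the service, you can sanity-check the credentials as the pgagent user (assuming psql is installed; sudo's -H sets HOME so the .pgpass file is found):
sudo -H -u pgagent psql -h 127.0.0.1 -p 5432 -U postgres -c 'SELECT 1'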
The directory and file permissions are important. If you're having problems, you can enable debug logging by editing the service file at /usr/lib/systemd/system/pgagent_12.service and adding -l 2 to the ExecStart command:
ExecStart=/usr/bin/pgagent_12 -l 2 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT}
After changing a .service file, things must be reloaded with sudo systemctl daemon-reload (systemd will inform you of this requirement if you forget it).
Keep starting/stopping the service and checking /var/log/pgagent_12.log. Eventually, it will start properly and sudo systemctl status pgagent_12 will show
● pgagent_12.service - PgAgent for PostgreSQL 12
Loaded: loaded (/usr/lib/systemd/system/pgagent_12.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-10-12 20:18:18 PDT; 13s ago
Process: 6159 ExecStart=/usr/bin/pgagent_12 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT} (code=exited, status=0/SUCCESS)
Main PID: 6160 (pgagent_12)
Tasks: 1
Memory: 1.1M
CGroup: /system.slice/pgagent_12.service
└─6160 /usr/bin/pgagent_12 -s /var/log/pgagent_12.log hostaddr=127.0.0.1 dbname=postgres user=postgres port=5432
Oct 12 20:18:18 prismweb3 systemd[1]: Starting PgAgent for PostgreSQL 12...
Oct 12 20:18:18 prismweb3 systemd[1]: Started PgAgent for PostgreSQL 12.
I want to run a script at system startup on a Debian 9 box. My script works when run standalone, but fails under systemd.
My script just copies a backup file from a remote server to the local machine:
#!/bin/sh
set -e
/usr/bin/sshpass -p "PASSWORD" /usr/bin/scp -p USER@10.0.0.2:ORIGINPATH/backupserver.zip DESTINATIONPATH/backupserver/
Just for privacy I replaced password, user, and paths above.
I wrote the following systemd service unit:
[Unit]
Description=backup script
[Service]
Type=oneshot
ExecStart=PATH/backup.sh
[Install]
WantedBy=default.target
Then I set permissions for the script:
chmod 744 PATH/backup.sh
And installed the service:
chmod 664 /etc/systemd/system/backup.service
systemctl daemon-reload
systemctl enable backup.service
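You can also exercise the unit immediately, without rebooting:
systemctl start backup.service
systemctl status backup.service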
When I reboot the script fails:
● backup.service - backup script
Loaded: loaded (/etc/systemd/system/backup.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2017-05-13 13:39:54 -03; 47min ago
Main PID: 591 (code=exited, status=1/FAILURE)
Result of journalctl -xe:
mai 16 23:34:27 rodrigo-acer systemd[1]: backup.service: Main process exited, code=exited, status=6/NOTCONFIGURED
mai 16 23:34:27 rodrigo-acer systemd[1]: Failed to start backup script.
mai 16 23:34:27 rodrigo-acer systemd[1]: backup.service: Unit entered failed state.
mai 16 23:34:27 rodrigo-acer systemd[1]: backup.service: Failed with result 'exit-code'.
What could be wrong?
Solved, guys. There were two problems:
1 - I had to change the service unit file so the service runs only after the network is up. The [Unit] section was changed to:
[Unit]
Description = World server backup
Wants = network-online.target
After = network.target network-online.target
2 - The root user did not have the remote host in its known hosts list, unlike the ordinary user I used to test the script.
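For the second problem, one fix (a sketch, assuming the host from the script) is to pre-populate root's known_hosts so scp never prompts when run by systemd:
mkdir -p /root/.ssh
ssh-keyscan 10.0.0.2 >> /root/.ssh/known_hosts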
If you see Failed with result 'exit-code', you could try this on your last line:
# REQUIRED FOR SYSTEMD: 0 means clean no error
exit 0
You may also need to add:
Type=forking
to the systemd entry similar to: https://serverfault.com/questions/751030/systemd-ignores-return-code-while-starting-service
If your service or script does not fork, add a & at the end to run it in the background, and exit with 0 quickly. Otherwise it will behave like a startup that times out and takes forever / seems like a frozen service.
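Combining those suggestions, the service section might look like this (a sketch; PATH is the placeholder from the question, and backup.sh would background its work and exit 0):
[Service]
Type=forking
ExecStart=PATH/backup.sh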
On my Ubuntu machine, when I run the command curl -X GET 'http://localhost:9200' to test the connection, it shows the following message:
curl: (7) Failed to connect to localhost port 9200: Connection refused
When I check the server status with sudo systemctl status elasticsearch, it shows the following message:
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2016-11-20 16:32:30 BDT; 44s ago
Docs: http://www.elastic.co
Process: 8653 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet -Edefault.path.logs=${LOG_DIR} -Edefa
Process: 8649 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 8653 (code=exited, status=1/FAILURE)
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,579 main ERROR Null object returned for RollingFile in Appenders.
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,579 main ERROR Null object returned for RollingFile in Appenders.
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,580 main ERROR Unable to locate appender "rolling" for logger config "root"
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,580 main ERROR Unable to locate appender "index_indexing_slowlog_rolling" for logge
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,581 main ERROR Unable to locate appender "index_search_slowlog_rolling" for logger
Nov 20 16:32:29 bahar elasticsearch[8653]: 2016-11-20 16:32:25,581 main ERROR Unable to locate appender "deprecation_rolling" for logger config "o
Nov 20 16:32:29 bahar elasticsearch[8653]: [2016-11-20T16:32:25,592][WARN ][o.e.c.l.LogConfigurator ] ignoring unsupported logging configuration
Nov 20 16:32:30 bahar systemd[1]: elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
Nov 20 16:32:30 bahar systemd[1]: elasticsearch.service: Unit entered failed state.
Nov 20 16:32:30 bahar systemd[1]: elasticsearch.service: Failed with result 'exit-code'.
This error comes from the path and log settings in elasticsearch.yml (/etc/elasticsearch/elasticsearch.yml).
Check those path entries (comment out the incorrect ones) and the error will go away.
That means Elasticsearch is not running, and from what I see, there is a problem starting it. Check your Elasticsearch configuration.
To check if Elasticsearch is running, run the following command:
$ ps aux | grep elasticsearch
If Elasticsearch has not started, check your Java environment, then download a new Elasticsearch and install it again:
1. Check if Java is correctly installed:
$ java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
If your Java version is lower than 1.7, switch to a newer one.
2. Download the Elasticsearch install package and unzip it:
$ tar -zxvf elasticsearch-2.3.3.tar.gz
3. Run Elasticsearch:
$ cd elasticsearch-2.3.3
$ ./bin/elasticsearch
Usually it's a write-permission issue for the log directory (/var/log/elasticsearch by default). Use ls -l to check the permissions, and change the mode to 777 for the log directory and files if necessary.
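For example (a sketch assuming the default location; giving ownership to the elasticsearch user is a safer alternative to mode 777):
ls -l /var/log/elasticsearch
sudo chmod -R 777 /var/log/elasticsearch
# or, more restrictive:
sudo chown -R elasticsearch:elasticsearch /var/log/elasticsearch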
Long story short: a system reboot might get it OK.
It has been a while since the question was asked. Anyway, I ran into a similar problem recently.
The elasticsearch service on one of my nodes died, with errors similar to those posted in the question when restarting the service. It said the log folder it wanted to write to was on a read-only file system. But those files and directories were indeed owned by the user elasticsearch (version 5.5, deployed on CentOS 6.5), so there should not have been a read-only problem.
I checked and didn't find a clue. So, I just rebooted the system. After rebooting, everything was all right without any further tuning: the elasticsearch service started on boot as configured, it found the cluster and all the other nodes, and the cluster health status turned green after a little while.
I guess the root cause might be some hardware failure in my case. All data and logs managed by the elasticsearch cluster are stored on a 2TB SSD drive mounted on each node. And our hardware team had just managed to recover from an external storage failure recently. All the nodes restarted during that recovery. Chances are some lingering issues from that caused the problem.
Just installed a clean version of MongoDB on Fedora 17 64-bit, but the Mongo service won't run.
I followed these instructions during installation
Running
service mongod start
results in
Starting mongod (via systemctl): Job failed. See system journal and 'systemctl status' for details. [FAILED]
So I ran
systemctl status mongod.service
which gives me
mongod.service - SYSV: Mongo is a scalable, document-oriented database.
Loaded: loaded (/etc/rc.d/init.d/mongod)
Active: failed (Result: exit-code) since Mon, 18 Jun 2012 13:15:56 +0200; 58s ago
Process: 13584 ExecStart=/etc/rc.d/init.d/mongod start (code=exited, status=1/FAILURE)
CGroup: name=systemd:/system/mongod.service
The Mongo log at /var/log/mongo/mongod.log is empty.
Thanks
How to install mongodb and mongodb-server on Fedora Linux (verified on F16 & F17). All commands are intended to be run in a su session.
1) make sure you have no mongodb installation lying around
# yum erase mongodb
# yum erase mongo-10gen (if it is installed)
2) install from fedora yum repository
# yum --disablerepo=* --enablerepo=fedora,updates install mongodb mongodb-server
3) start mongod (mongodb daemon)
# systemctl start mongod.service
4) verify mongod is running
# systemctl status mongod.service
# tail /var/log/mongodb/mongodb.log
# nmap -p27017 localhost
or running client
# mongo
MongoDB shell version: 2.0.2
connecting to: test
> db.test.save( { a: 1 } )
> db.test.find()
{ "_id" : ObjectId("4fdf28f09d16204d66082fa3"), "a" : 1 }
5) customize configuration
# vim /etc/mongodb.conf
# systemctl restart mongod.service
6) make mongodb service automatically start at boot
# systemctl enable mongod.service
Update for Fedora 18
When started for the first time by systemd on a slow or loaded machine, the mongod service might time out before finishing its initialization, with systemd flagging the service as failed.
Symptoms:
# journalctl -xn
-- Unit mongod.service has begun starting up.
10:38:43 local mongod[24558]: forked process: 24560
10:38:43 local mongod[24558]: all output going to: /var/log/mongodb/mongodb.log
10:40:13 local systemd[1]: mongod.service operation timed out. Terminating.
10:40:13 local systemd[1]: Failed to start High-performance, schema-free document-oriented database.
-- Subject: Unit mongod.service has failed
Very easy cure: restart the service:
# systemctl restart mongod.service
This should finish the initialization successfully and leave the daemon in the running state.
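If the timeout keeps recurring, a drop-in raising the start timeout is another option (a sketch; the value is arbitrary):
# /etc/systemd/system/mongod.service.d/timeout.conf
[Service]
TimeoutStartSec=300
Run systemctl daemon-reload afterwards.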
I just had the same issue, and in my case it was caused by installing MongoDB following instructions from some websites using a non-official repo. If you have the same issue and the answers above do not solve your problem, try uninstalling the "mongodb-org" package and reinstalling it following the instructions in the official documentation: https://docs.mongodb.com/manual/tutorial/install-mongodb-on-red-hat/
Reminder: to uninstall a package on Fedora:
sudo dnf remove <package-name>