I tried to create a Kubernetes cluster (v1.2.3) on Azure with CoreOS, following the documentation at http://kubernetes.io/docs/getting-started-guides/coreos/azure/.
I cloned the repo (git clone https://github.com/kubernetes/kubernetes) and made one minor change in docs/getting-started-guides/coreos/azure/cloud_config_templates/kubernetes-cluster-main-nodes-template.yml: I changed the kube version from v1.1.2 to v1.2.3.
I then created the cluster by running ./create-kubernetes-cluster.js. The cluster was created successfully, but on the master node the API server does not start.
The log shows: Cloud provider could not be initialized: unknown cloud provider "vagrant". I cannot figure out why this is happening.
This is the log of kube-apiserver.service:
-- Logs begin at Sat 2016-07-23 12:41:36 UTC, end at Sat 2016-07-23 12:44:19 UTC. --
Jul 23 12:43:06 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:06 anudemon-master-00 kube-apiserver[1964]: I0723 12:43:06.299966 1964 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:06 anudemon-master-00 kube-apiserver[1964]: F0723 12:43:06.300057 1964 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:06 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:16 anudemon-master-00 kube-apiserver[2015]: I0723 12:43:16.428476 2015 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:16 anudemon-master-00 kube-apiserver[2015]: F0723 12:43:16.428534 2015 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:16 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:16 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:26 anudemon-master-00 kube-apiserver[2024]: I0723 12:43:26.756551 2024 server.go:188] Will report 172.16.0.4 as public IP address.
Jul 23 12:43:26 anudemon-master-00 kube-apiserver[2024]: F0723 12:43:26.756654 2024 server.go:211] Cloud provider could not be initialized: unknown cloud provider "vagrant"
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Main process exited, code=exited, status=255/n/a
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Unit entered failed state.
Jul 23 12:43:26 anudemon-master-00 systemd[1]: kube-apiserver.service: Failed with result 'exit-code'.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: kube-apiserver.service: Service hold-off time over, scheduling restart.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: Stopped Kubernetes API Server.
Jul 23 12:43:36 anudemon-master-00 systemd[1]: Started Kubernetes API Server.
Jul 23 12:43:36 anudemon-master-00 kube-apiserver[2039]: I0723 12:43:36.872849 2039 server.go:188] Will report 172.16.0.4 as public IP address.
Have you had a look at kubernetes-anywhere (https://github.com/kubernetes/kubernetes-anywhere)? A lot of work has gone into it, and it now probably has all the right bits to deploy your cluster with the Azure-specific cloud provider integrations.
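If you want to stay with the template-based setup from the guide, it may also be worth confirming where the --cloud-provider flag comes from, since kube-apiserver 1.2.x rejects "vagrant" as a provider. This is only a hedged diagnostic sketch; the unit name and paths are assumptions based on the log above:

# On the master node: show the unit systemd actually runs and look for the flag.
systemctl cat kube-apiserver.service
grep -rn -- "--cloud-provider" /etc/systemd/system /run/systemd/system 2>/dev/null

# In the checked-out repo: search the template you edited for the same flag,
# remove (or correct) it there, and re-run ./create-kubernetes-cluster.js.
grep -rn -- "cloud.provider" \
  docs/getting-started-guides/coreos/azure/cloud_config_templates/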
I have the following service file, plus "enable" and "start" scripts, located in /home/pi/poolboy/service:
[Unit]
Description=Pool Boy
After=network-online.target
[Service]
ExecStart=/home/pi/poolboy/start
WorkingDirectory=/home/pi/poolboy
StandardOutput=inherit
StandardError=inherit
Restart=always
[Install]
WantedBy=multi-user.target
I have an "enable" script to install it:
#!/bin/bash
sudo cp /home/pi/poolboy/service/poolboy.service /lib/systemd/system/
sudo systemctl enable poolboy.service
I have a "start" script to start the service:
#!/bin/bash
sudo systemctl start poolboy.service
The actual start script that runs the application (and is called by the service) is located in /home/pi/poolboy:
#!/bin/bash
cd "$(dirname "$0")"
VENV=venv
echo 'checking for ' $VENV
if [ ! -d $VENV ]
then
echo $VENV ' does not exist... initially creating it'
python3 -m venv $VENV
echo 'activating the virtual environment'
source venv/bin/activate
echo 'installing libraries from requirements.txt'
pip3 install -r requirements.txt
else
source $VENV/bin/activate
fi
echo 'starting...'
sudo $VENV/bin/python3 poolboy.py --standalone
After running the enable script I run the start script. I get the following output in my /var/log/syslog:
Jun 19 22:10:43 poolboy systemd[1]: Started Pool Boy.
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Main process exited, code=exited, status=200/CHDIR
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Failed with result 'exit-code'.
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Service RestartSec=100ms expired, scheduling restart.
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Scheduled restart job, restart counter is at 1.
Jun 19 22:10:43 poolboy systemd[1]: Stopped Pool Boy.
Jun 19 22:10:43 poolboy systemd[1]: Started Pool Boy.
Jun 19 22:10:43 poolboy systemd[4980]: poolBoy.service: Changing to the requested working directory failed: No such file or directory
Jun 19 22:10:43 poolboy systemd[4980]: poolBoy.service: Failed at step CHDIR spawning /usr/bin/python3: No such file or directory
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Main process exited, code=exited, status=200/CHDIR
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Failed with result 'exit-code'.
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Service RestartSec=100ms expired, scheduling restart.
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Scheduled restart job, restart counter is at 2.
Jun 19 22:10:43 poolboy systemd[1]: Stopped Pool Boy.
Jun 19 22:10:43 poolboy systemd[1]: Started Pool Boy.
Jun 19 22:10:43 poolboy systemd[4981]: poolBoy.service: Changing to the requested working directory failed: No such file or directory
Jun 19 22:10:43 poolboy systemd[4981]: poolBoy.service: Failed at step CHDIR spawning /usr/bin/python3: No such file or directory
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Main process exited, code=exited, status=200/CHDIR
Jun 19 22:10:43 poolboy systemd[1]: poolBoy.service: Failed with result 'exit-code'.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Service RestartSec=100ms expired, scheduling restart.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Scheduled restart job, restart counter is at 3.
Jun 19 22:10:44 poolboy systemd[1]: Stopped Pool Boy.
Jun 19 22:10:44 poolboy systemd[1]: Started Pool Boy.
Jun 19 22:10:44 poolboy systemd[4982]: poolBoy.service: Changing to the requested working directory failed: No such file or directory
Jun 19 22:10:44 poolboy systemd[4982]: poolBoy.service: Failed at step CHDIR spawning /usr/bin/python3: No such file or directory
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Main process exited, code=exited, status=200/CHDIR
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Failed with result 'exit-code'.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Service RestartSec=100ms expired, scheduling restart.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Scheduled restart job, restart counter is at 4.
Jun 19 22:10:44 poolboy systemd[1]: Stopped Pool Boy.
Jun 19 22:10:44 poolboy systemd[4984]: poolBoy.service: Changing to the requested working directory failed: No such file or directory
Jun 19 22:10:44 poolboy systemd[1]: Started Pool Boy.
Jun 19 22:10:44 poolboy systemd[4984]: poolBoy.service: Failed at step CHDIR spawning /usr/bin/python3: No such file or directory
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Main process exited, code=exited, status=200/CHDIR
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Failed with result 'exit-code'.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Service RestartSec=100ms expired, scheduling restart.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Scheduled restart job, restart counter is at 5.
Jun 19 22:10:44 poolboy systemd[1]: Stopped Pool Boy.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Start request repeated too quickly.
Jun 19 22:10:44 poolboy systemd[1]: poolBoy.service: Failed with result 'exit-code'.
Jun 19 22:10:44 poolboy systemd[1]: Failed to start Pool Boy.
After replacing poolBoy with poolboy, it looks better, but the application is still not starting:
Jun 20 11:09:28 poolboy systemd[1]: Started Pool Boy.
Jun 20 11:09:28 poolboy start[1373]: checking for venv
Jun 20 11:09:28 poolboy start[1373]: starting...
Jun 20 11:09:28 poolboy systemd[1]: poolboy.service: Succeeded.
Jun 20 11:09:58 poolboy systemd[1]: poolboy.service: Service RestartSec=30s expired, scheduling restart.
Jun 20 11:09:58 poolboy systemd[1]: poolboy.service: Scheduled restart job, restart counter is at 12.
Jun 20 11:09:58 poolboy systemd[1]: Stopped Pool Boy.
Jun 20 11:09:58 poolboy systemd[1]: Started Pool Boy.
Jun 20 11:09:58 poolboy start[1447]: checking for venv
Jun 20 11:09:58 poolboy start[1447]: starting...
Jun 20 11:09:58 poolboy systemd[1]: poolboy.service: Succeeded.
Jun 20 11:10:28 poolboy systemd[1]: poolboy.service: Service RestartSec=30s expired, scheduling restart.
Jun 20 11:10:28 poolboy systemd[1]: poolboy.service: Scheduled restart job, restart counter is at 13.
Jun 20 11:10:28 poolboy systemd[1]: Stopped Pool Boy.
Jun 20 11:10:28 poolboy systemd[1]: Started Pool Boy.
Jun 20 11:10:28 poolboy start[1521]: checking for venv
Jun 20 11:10:28 poolboy start[1521]: starting...
Jun 20 11:10:28 poolboy systemd[1]: poolboy.service: Succeeded.
Jun 20 11:10:58 poolboy systemd[1]: poolboy.service: Service RestartSec=30s expired, scheduling restart.
Jun 20 11:10:58 poolboy systemd[1]: poolboy.service: Scheduled restart job, restart counter is at 14.
Jun 20 11:10:58 poolboy systemd[1]: Stopped Pool Boy.
Jun 20 11:10:58 poolboy systemd[1]: Started Pool Boy.
Jun 20 11:10:59 poolboy start[1595]: checking for venv
Jun 20 11:10:59 poolboy start[1595]: starting...
Jun 20 11:10:59 poolboy systemd[1]: poolboy.service: Succeeded.
Jun 20 11:11:01 poolboy kernel: [ 458.537483]
Jun 20 11:11:01 poolboy kernel: [ 458.537513] WARN::dwc_otg_hcd_urb_dequeue:639: Timed out waiting for FSM NP transfer to complete on 5
Jun 20 11:11:13 poolboy kernel: [ 470.441646]
Jun 20 11:11:13 poolboy kernel: [ 470.441682] WARN::dwc_otg_hcd_urb_dequeue:639: Timed out waiting for FSM NP transfer to complete on 5
Jun 20 11:11:29 poolboy systemd[1]: poolboy.service: Service RestartSec=30s expired, scheduling restart.
Jun 20 11:11:29 poolboy systemd[1]: poolboy.service: Scheduled restart job, restart counter is at 15.
Jun 20 11:11:29 poolboy systemd[1]: Stopped Pool Boy.
Jun 20 11:11:29 poolboy systemd[1]: Started Pool Boy.
Jun 20 11:11:29 poolboy start[1670]: checking for venv
Jun 20 11:11:29 poolboy start[1670]: starting...
Jun 20 11:11:29 poolboy systemd[1]: poolboy.service: Succeeded.
Any suggestions?
Duh... my /home/pi/poolboy/start script had an & at the end of the last line :-/
It now works with the files above.
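For anyone hitting the same symptom: with a simple unit like the one above, systemd treats the start script itself as the main process. If the last line backgrounds Python with &, the script exits immediately, systemd logs "Succeeded", and Restart= schedules the next start, which is exactly the 30-second loop in the log. A minimal sketch of the corrected tail of the start script (only the last lines are shown; the venv handling above is unchanged):

#!/bin/bash
# Tail of /home/pi/poolboy/start: keep the application in the foreground.
cd "$(dirname "$0")"
source venv/bin/activate

# Wrong: the & backgrounds Python, the script exits, and systemd assumes
# the service has finished.
#   venv/bin/python3 poolboy.py --standalone &

# Right: run it in the foreground; exec hands the process over to Python so
# systemd tracks the real PID.
exec venv/bin/python3 poolboy.py --standalone

The sudo in the original last line is likely redundant as well (the unit has no User= setting, so the script already runs as root), but it is not what caused the restarts.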
I am using a droplet for an application and I am trying to set up my backend server. I am getting the errors below, yet everything seems to be running; my frontend just can't reach it. The server was working at one point but stopped working after the branch was changed to master.
Any help would be appreciated.
This looks correct as well:
Mar 30 14:41:31 ids-bots node[2822]: at Server.emit (node:events:527:28)
Mar 30 14:41:31 ids-bots node[2822]: at parserOnIncoming (node:_http_server:951:12)
Mar 30 14:41:31 ids-bots node[2822]: at HTTPParser.parserOnHeadersComplete (node:_http_common:128:17)
Mar 30 22:41:20 ids-bots systemd[1]: Stopping Jem...
Mar 30 22:41:21 ids-bots systemd[1]: jem.service: Main process exited, code=dumped, status=3/QUIT
Mar 30 22:41:21 ids-bots systemd[1]: jem.service: Failed with result 'core-dump'.
Mar 30 22:41:21 ids-bots systemd[1]: Stopped Jem.
Mar 30 22:41:21 ids-bots systemd[1]: Started Jem.
Mar 30 22:41:22 ids-bots node[6418]: Jem API listening on port 3001
Mar 30 22:41:22 ids-bots node[6418]: Connected database to mongodb://127.0.0.1:27017/jem
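The journal shows the API listening on port 3001 and connected to MongoDB, so the process itself seems to come up. A hedged next step is to check whether that port is actually reachable from wherever the frontend runs; the bind address and the firewall are the usual suspects after a branch change. Port 3001 is taken from the log above, everything else is an assumption:

# Is anything listening on 3001, and on which address (127.0.0.1 vs 0.0.0.0)?
sudo ss -ltnp | grep 3001

# Does it answer locally on the droplet?
curl -i http://127.0.0.1:3001/

# Is the port open to the outside?
sudo ufw status verbose   # or: sudo iptables -L -n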
Unable to start the Apache2 server after I modified the dir.conf file, even after changing it back
I modified /etc/apache2/mods-enabled/dir.conf and moved index.php to the first place, before index.html, so the DirectoryIndex line changed from the default "DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm" to "DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm". Changing it back to the original order does not help either; apache2 still fails to start with the same error. I tried restarting as well as stopping and starting, but none of that works.
**Here are the details of systemctl status apache2.service:**
● apache2.service - LSB: Apache2 web server
Loaded: loaded (/etc/init.d/apache2; bad; vendor preset: enabled)
Drop-In: /lib/systemd/system/apache2.service.d
└─apache2-systemd.conf
Active: failed (Result: exit-code) since Tue 2020-11-24 12:28:31 IST; 5min ago
Docs: man:systemd-sysv-generator(8)
Process: 20300 ExecStop=/etc/init.d/apache2 stop (code=exited, status=0/SUCCESS)
Process: 25755 ExecStart=/etc/init.d/apache2 start (code=exited, status=1/FAILURE)
Nov 24 12:28:31 localhost apache2[25755]: *
Nov 24 12:28:31 localhost apache2[25755]: * The apache2 configtest failed.
Nov 24 12:28:31 localhost apache2[25755]: Output of config test was:
Nov 24 12:28:31 localhost apache2[25755]: AH00534: apache2: Configuration error: More than one MPM loaded.
Nov 24 12:28:31 localhost apache2[25755]: Action 'configtest' failed.
Nov 24 12:28:31 localhost apache2[25755]: The Apache error log may have more information.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Control process exited, code=exited status=1
Nov 24 12:28:31 localhost systemd[1]: Failed to start LSB: Apache2 web server.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Unit entered failed state.
Nov 24 12:28:31 localhost systemd[1]: apache2.service: Failed with result 'exit-code'.
**Here are the details of journalctl -xe:**
Nov 24 12:44:42 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:42 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:43 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:43 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:44 localhost systemd[1]: Failed to start MySQL Community Server.
-- Subject: Unit mysql.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has failed.
--
-- The result is failed.
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Unit entered failed state.
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Failed with result 'exit-code'.
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Service hold-off time over, scheduling restart.
Nov 24 12:44:44 localhost systemd[1]: Stopped MySQL Community Server.
-- Subject: Unit mysql.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has finished shutting down.
Nov 24 12:44:44 localhost systemd[1]: Starting MySQL Community Server...
-- Subject: Unit mysql.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mysql.service has begun starting up.
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.545855Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 5000)
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.545908Z 0 [Warning] Changed limits: table_open_cache: 431 (requested 2000)
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.705792Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --expl
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.707018Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.32-0ubuntu0.16.04.1) starting as process 538
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709650Z 0 [ERROR] Could not open file '/var/log/mysql/error.log' for error logging: No suc
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709678Z 0 [ERROR] Aborting
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709703Z 0 [Note] Binlog end
Nov 24 12:44:44 localhost mysqld[5387]: 2020-11-24T07:14:44.709770Z 0 [Note] /usr/sbin/mysqld: Shutdown complete
Nov 24 12:44:44 localhost systemd[1]: mysql.service: Main process exited, code=exited, status=1/FAILURE
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:44 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:75]
Nov 24 12:44:45 localhost kernel: [drm:drm_mode_addfb2 [drm]] [FB:77]
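The configtest output already names the real problem: "More than one MPM loaded" has nothing to do with dir.conf, which only controls the DirectoryIndex order. A hedged sketch of checking and fixing the MPM setup on a Debian/Ubuntu layout (prefork is shown because mod_php requires it; keep whichever single MPM you actually want):

# Show which MPM is enabled; exactly one must be active.
a2query -M
ls /etc/apache2/mods-enabled/ | grep mpm_

# Keep one MPM and disable the others.
sudo a2dismod mpm_event mpm_worker
sudo a2enmod mpm_prefork

# Re-run the config test, then start Apache again.
sudo apache2ctl configtest
sudo systemctl restart apache2

The mysql.service failures in the journal are a separate issue (it cannot open /var/log/mysql/error.log) and are unrelated to Apache.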
I recently purchased 3 Raspberry Pi nodes to create a small storage cluster to test with at home. I found a couple of procedures for setting this up, so it appears folks have done this successfully!
I am running Raspbian GNU/Linux 8.0 (jessie). I'm using ceph-deploy to install the cluster and it appears to install version 10.2.5-7.2+rpi1 of the ceph ARM packages.
When I try to start the ceph-mon service I get the following error from systemd:
Dec 14 19:59:46 ceph-master systemd[1]: Starting Ceph cluster monitor daemon...
Dec 14 19:59:46 ceph-master systemd[1]: Started Ceph cluster monitor daemon.
Dec 14 19:59:47 ceph-master ceph-mon[28237]: *** Caught signal (Segmentation fault) **
Dec 14 19:59:47 ceph-master ceph-mon[28237]: in thread 756a5c30 thread_name:admin_socket
Dec 14 19:59:47 ceph-master systemd[1]: ceph-mon@ceph-master.service: main process exited, code=killed, status=11/SEGV
Dec 14 19:59:47 ceph-master systemd[1]: Unit ceph-mon@ceph-master.service entered failed state.
Dec 14 19:59:47 ceph-master systemd[1]: ceph-mon@ceph-master.service holdoff time over, scheduling restart.
Dec 14 19:59:47 ceph-master systemd[1]: Stopping Ceph cluster monitor daemon...
Dec 14 19:59:47 ceph-master systemd[1]: Starting Ceph cluster monitor daemon...
Dec 14 19:59:47 ceph-master systemd[1]: Started Ceph cluster monitor daemon.
Dec 14 19:59:49 ceph-master ceph-mon[28256]: *** Caught signal (Segmentation fault) **
Dec 14 19:59:49 ceph-master ceph-mon[28256]: in thread 75654c30 thread_name:admin_socket
Dec 14 19:59:49 ceph-master ceph-mon[28256]: ceph version 10.2.5 (c461ee19ecbc0c5c330aca20f7392c9a00730367)
Dec 14 19:59:49 ceph-master ceph-mon[28256]: 1: (()+0x4b1348) [0x54fae348]
Dec 14 19:59:49 ceph-master ceph-mon[28256]: 2: (__default_sa_restorer()+0) [0x768bb480]
Dec 14 19:59:49 ceph-master ceph-mon[28256]: 3: (AdminSocket::do_accept()+0x28) [0x550ca154]
Dec 14 19:59:49 ceph-master ceph-mon[28256]: 4: (AdminSocket::entry()+0x22c) [0x550cc458]
Dec 14 19:59:49 ceph-master systemd[1]: ceph-mon@ceph-master.service: main process exited, code=killed, status=11/SEGV
Dec 14 19:59:49 ceph-master systemd[1]: Unit ceph-mon@ceph-master.service entered failed state.
Dec 14 19:59:49 ceph-master systemd[1]: ceph-mon@ceph-master.service holdoff time over, scheduling restart.
Dec 14 19:59:49 ceph-master systemd[1]: Stopping Ceph cluster monitor daemon...
Dec 14 19:59:49 ceph-master systemd[1]: Starting Ceph cluster monitor daemon...
Dec 14 19:59:49 ceph-master systemd[1]: Started Ceph cluster monitor daemon.
Dec 14 19:59:50 ceph-master ceph-mon[28271]: *** Caught signal (Segmentation fault) **
Dec 14 19:59:50 ceph-master ceph-mon[28271]: in thread 755fcc30 thread_name:admin_socket
Dec 14 19:59:50 ceph-master systemd[1]: ceph-mon@ceph-master.service: main process exited, code=killed, status=11/SEGV
Dec 14 19:59:50 ceph-master systemd[1]: Unit ceph-mon@ceph-master.service entered failed state.
Dec 14 19:59:50 ceph-master systemd[1]: ceph-mon@ceph-master.service holdoff time over, scheduling restart.
Dec 14 19:59:50 ceph-master systemd[1]: Stopping Ceph cluster monitor daemon...
Dec 14 19:59:50 ceph-master systemd[1]: Starting Ceph cluster monitor daemon...
Dec 14 19:59:50 ceph-master systemd[1]: ceph-mon@ceph-master.service start request repeated too quickly, refusing to start.
Dec 14 19:59:50 ceph-master systemd[1]: Failed to start Ceph cluster monitor daemon.
Dec 14 19:59:50 ceph-master systemd[1]: Unit ceph-mon@ceph-master.service entered failed state.
I'm looking for guidance here as I'm not sure why this doesn't work. I am using the following URLs for my apt repos:
root@ceph-master:~# cat /etc/apt/sources.list
deb http://mirrordirector.raspbian.org/raspbian/ testing main contrib non-free rpi
root@ceph-master:~# cat /etc/apt/sources.list.d/ceph.list
deb https://download.ceph.com/debian-jewel/ jessie main
Has anyone else tried this and had similar problems? Any advice on how to proceed or work around this issue?
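One hedged suggestion: take systemd out of the picture and run the monitor in the foreground with debug output, so the segfault in the admin_socket thread produces a full backtrace on the terminal. The cluster and monitor names below are taken from the log; the rest is the standard ceph-mon invocation:

# Stop the unit, then run the monitor in the foreground, logging to stderr.
sudo systemctl stop ceph-mon@ceph-master.service
sudo ceph-mon -d --cluster ceph --id ceph-master --setuser ceph --setgroup ceph

It may also be worth noting that /etc/apt/sources.list points at Raspbian testing while the Ceph repository is the jessie one; mixing the two can leave ceph-mon linked against unexpected library versions, which is a plausible source of segfaults on ARM.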
We have an application that runs on RHEL 6 (32-bit and 64-bit) and has used PostgreSQL 8.4 from the beginning. Now we want to support this application on RHEL 7 (64-bit). RHEL 7 ships PostgreSQL 9.2 in its yum repositories; that version installs fine and its services run properly. But after installing PostgreSQL 8.4 on RHEL 7, the service never comes up. Please find the logs below:
[root@linpubn218 postgres]# service postgresql status
postgresql.service - SYSV: PostgreSQL database server.
Loaded: loaded (/etc/rc.d/init.d/postgresql)
Active: failed (Result: resources) since Mon 2016-07-25 12:40:28 IST; 2h 0min ago
Docs: man:systemd-sysv-generator(8)
Jul 25 12:40:26 linpubn218.gl.avaya.com systemd[1]: Starting SYSV: PostgreSQL database server....
Jul 25 12:40:28 linpubn218.gl.avaya.com postgresql[26957]: Starting postgresql service: [ OK ]
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: PID file /var/run/postmaster-8.4.pid not readable (yet?) after start.
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: Failed to start SYSV: PostgreSQL database server..
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service entered failed state.
Jul 25 12:40:28 linpubn218.gl.avaya.com systemd[1]: postgresql.service failed.
Jul 25 14:33:45 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service cannot be reloaded because it is inactive.
Jul 25 14:33:45 linpubn218.gl.avaya.com systemd[1]: Unit postgresql.service cannot be reloaded because it is inactive.
After looking at the logs with journalctl -xe:
[root@linpubn218 postgres]# journalctl -xe
Jul 25 14:39:21 linpubn218.gl.avaya.com yum[29260]: Installed: postgresql84-libs-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:39:45 linpubn218.gl.avaya.com yum[29275]: Installed: postgresql84-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:40:01 linpubn218.gl.avaya.com useradd[29316]: failed adding user 'postgres', exit code: 9
Jul 25 14:40:02 linpubn218.gl.avaya.com CROND[29320]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 25 14:40:02 linpubn218.gl.avaya.com systemd[1]: Reloading.
Jul 25 14:40:03 linpubn218.gl.avaya.com systemd[1]: Configuration file /usr/lib/systemd/system/auditd.service is marked world-inaccessible. This has no effect as config
Jul 25 14:40:03 linpubn218.gl.avaya.com yum[29309]: Installed: postgresql84-server-8.4.17-1PGDG.rhel6.x86_64
Jul 25 14:42:05 linpubn218.gl.avaya.com polkitd[819]: Registered Authentication Agent for unix-process:29459:43987285 (system bus name :1.292 [/usr/bin/pkttyagent --not
Jul 25 14:42:05 linpubn218.gl.avaya.com systemd[1]: Starting SYSV: PostgreSQL database server....
Jul 25 14:42:06 linpubn218.gl.avaya.com runuser[29473]: pam_unix(runuser-l:session): session closed for user postgres
Jul 25 14:42:08 linpubn218.gl.avaya.com postgresql[29464]: Starting postgresql service: [ OK ]
Jul 25 14:42:08 linpubn218.gl.avaya.com systemd[1]: PID file /var/run/postmaster-8.4.pid not readable (yet?) after start.
Jul 25 14:42:08 linpubn218.gl.avaya.com systemd[1]: Failed to start SYSV: PostgreSQL database server..
Can PostgreSQL 8.4 be installed on RHEL 7, which is a systemd-based OS? If yes, what should I do to resolve the above error?
I noticed that in /etc/init.d/postgresql-8.4 there is a declared variable:
pidfile="/var/run/postmaster-${PGMAJORVERSION}.${PGPORT}.pid"
But the PIDFile that systemd knows about is not the same:
# systemctl show postgresql-8.4.service -p PIDFile
PIDFile=/var/run/postmaster-8.4.pid
So, to fix the problem, edit /etc/init.d/postgresql-8.4 and replace
pidfile="/var/run/postmaster-${PGMAJORVERSION}.${PGPORT}.pid"
with
pidfile="/var/run/postmaster-${PGMAJORVERSION}.pid"
then reload systemd and start the service again:
# systemctl daemon-reload
# /etc/init.d/postgresql-8.4 start
Starting postgresql-8.4 (via systemctl): [ OK ]
Generally, permissions cause this type of error. Switch to the postgres user:
su - postgres
After that, fix the permissions on the data directory:
chmod -R 700 <data_directory>
You should check SELinux as well.
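A hedged sketch of the SELinux part of that check; the data directory path is an assumption (the PGDG 8.4 packages typically use /var/lib/pgsql/data, adjust if yours differs):

# Is SELinux enforcing, and do the labels on the data directory look right?
getenforce
ls -Zd /var/lib/pgsql/data

# Reset the labels to the policy defaults, then try the service again.
restorecon -Rv /var/lib/pgsql/data
systemctl start postgresql-8.4.service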