I want to run Hazelcast in Docker on AWS instances as a POC for future use.
I used the following configuration to run it on my laptop for some investigation:
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=192.168.1.227:5701" -itd -p 5701:5701 hazelcast/hazelcast
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=192.168.1.227:5702" -itd -p 5702:5701 hazelcast/hazelcast
It starts OK, but once I try to open it in the browser, I get the following warnings:
docker logs -ft a91ed298117a
2020-02-02T16:30:41.846203500Z ########################################
2020-02-02T16:30:41.846284000Z # JAVA_OPTS=-Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -XX:MaxRAMPercentage=80.0 -Dhazelcast.local.publicAddress=192.168.1.227:5702
2020-02-02T16:30:41.846346700Z # CLASSPATH=/opt/hazelcast/*:/opt/hazelcast/lib/*
2020-02-02T16:30:41.846374200Z # starting now....
2020-02-02T16:30:41.846424700Z ########################################
2020-02-02T16:30:41.846467100Z + exec java -server -Dhazelcast.mancenter.enabled=false -Djava.net.preferIPv4Stack=true -Djava.util.logging.config.file=/opt/hazelcast/logging.properties -XX:MaxRAMPercentage=80.0 -Dhazelcast.local.publicAddress=192.168.1.227:5702 com.hazelcast.core.server.StartServer
Members {size:2, ver:2} [
2020-02-02T16:30:52.360102700Z Member [192.168.1.227]:5701 - e152d11b-df3e-4c29-a363-188842fc624c
2020-02-02T16:30:52.360128200Z Member [192.168.1.227]:5702 - e7811c67-34ef-4ec5-9687-1945d7c36b69 this
2020-02-02T16:30:52.360159400Z ]
2020-02-02T16:30:52.360183200Z
2020-02-02T16:30:53.384531200Z Feb 02, 2020 4:30:53 PM com.hazelcast.core.LifecycleService
2020-02-02T16:30:53.384586000Z INFO: [192.168.1.227]:5702 [dev] [3.12.6] [192.168.1.227]:5702 is STARTED
2020-02-02T16:31:00.582731400Z Feb 02, 2020 4:31:00 PM com.hazelcast.nio.tcp.TcpIpConnection
2020-02-02T16:31:00.582871900Z WARNING: [192.168.1.227]:5702 [dev] [3.12.6] Connection[id=2, /172.17.0.3:5701->/172.17.0.1:60574, qualifier=null, endpoint=null, alive=false, type=NONE] closed. Reason: Exception in Connection[id=2, /172.17.0.3:5701->/172.17.0.1:60574, qualifier=null, endpoint=null, alive=true, type=NONE], thread=hz._hzInstance_1_dev.IO.thread-in-1
2020-02-02T16:31:00.582909200Z java.lang.IllegalStateException: REST API is not enabled.
2020-02-02T16:31:00.583013000Z at com.hazelcast.nio.tcp.UnifiedProtocolDecoder.onRead(UnifiedProtocolDecoder.java:96)
2020-02-02T16:31:00.583049600Z at com.hazelcast.internal.networking.nio.NioInboundPipeline.process(NioInboundPipeline.java:135)
2020-02-02T16:31:00.583077900Z at com.hazelcast.internal.networking.nio.NioThread.processSelectionKey(NioThread.java:369)
2020-02-02T16:31:00.583122400Z at com.hazelcast.internal.networking.nio.NioThread.processSelectionKeys(NioThread.java:354)
2020-02-02T16:31:00.583189100Z at com.hazelcast.internal.networking.nio.NioThread.selectLoop(NioThread.java:280)
2020-02-02T16:31:00.583220000Z at com.hazelcast.internal.networking.nio.NioThread.run(NioThread.java:235)
2020-02-02T16:31:00.583249400Z
2020-02-02T16:31:00.604505300Z Feb 02, 2020 4:31:00 PM com.hazelcast.nio.tcp.TcpIpConnection
Could you please help me understand where I went wrong?
The Hazelcast REST API is not enabled by default, which is why you get the exception in the logs. Also, keep in mind that it does not make much sense to open Hazelcast in the browser, since it does not serve any HTTP webpage.
That said, you have successfully run a Hazelcast cluster in Docker. Now if you want to play with it, the simplest way is to either enable the REST API or to connect with a Hazelcast client in your language of choice.
1. REST API
To start Hazelcast with the REST API enabled, you need to add -Dhazelcast.rest.enabled=true to your JAVA_OPTS. So in your case, you can run the following commands:
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=192.168.1.227:5701 -Dhazelcast.rest.enabled=true" -itd -p 5701:5701 hazelcast/hazelcast:3.12.6
docker run -e JAVA_OPTS="-Dhazelcast.local.publicAddress=192.168.1.227:5702 -Dhazelcast.rest.enabled=true" -itd -p 5702:5701 hazelcast/hazelcast:3.12.6
Then, you can use the Hazelcast REST API, for example, to add and read a value from the map:
$ curl -X POST 192.168.1.227:5701/hazelcast/rest/maps/mapName/foo -d "bar"
$ curl 192.168.1.227:5701/hazelcast/rest/maps/mapName/foo
bar
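Other endpoints exist in the same REST API; for example, in 3.12 the following should return basic cluster information, including the member list (worth verifying against the docs for your exact version):
$ curl 192.168.1.227:5701/hazelcast/rest/cluster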
2. Hazelcast Client
There are Hazelcast clients for most programming languages. You only need to specify 192.168.1.227:5701 and 192.168.1.227:5702 as the addresses of your Hazelcast cluster. For example, in Python it would look like this:
import hazelcast

# point the client at both cluster members
config = hazelcast.ClientConfig()
config.network_config.addresses.append("192.168.1.227:5701")
config.network_config.addresses.append("192.168.1.227:5702")
client = hazelcast.HazelcastClient(config)

# get (or create) the distributed map and write an entry
my_map = client.get_map("map")
my_map.put("key", "value")
client.shutdown()
Then, you can run it with:
pip install hazelcast-python-client && python client.py
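To read the value back in Python, note that the 3.x client's map operations return futures, so a blocking read looks like this (a small sketch built on the same client API as above):
value = my_map.get("key").result()  # .result() blocks until the response arrives
print(value)  # prints: value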
Related
I just installed Docker CE on Ubuntu 14.04, following the official instructions for installing from the repository.
The installation went successfully and the daemon is running:
$ ps aux | grep docker
[...] /usr/bin/dockerd --raw-logs [...]
My user is in the docker group:
$ groups
[...] docker
The CLI can't seem to communicate with it (same with sudo):
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
The socket seems to have the correct permissions:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Feb 4 16:21 /var/run/docker.sock
The log does seem to complain about some issues, though:
$ sudo tail -f /var/log/upstart/docker.log
Failed to connect to containerd: failed to dial "/var/run/docker/containerd/docker-containerd.sock": dial unix:///var/run/docker/containerd/docker-containerd.sock: timeout
/var/run/docker.sock is up
time="2018-02-04T16:22:21.031459040+01:00" level=info msg="libcontainerd: started new docker-containerd process" pid=17147
INFO[0000] starting containerd module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
INFO[0000] setting subreaper... module=containerd
containerd: invalid argument
time="2018-02-04T16:22:21.056685023+01:00" level=error msg="containerd did not exit successfully" error="exit status 1" module=libcontainerd
Any advice to make this work?
Re-logging and restarting Docker have already been done, of course.
As @bobbear suggested, and as is actually mentioned in the official docs, one of the prerequisites is:
Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
After having checked my Kernel version:
$ uname -a
Linux [...] 3.2.[...]-generic [...]-Ubuntu [...] x86_64
I searched for candidates:
$ apt-cache search linux-image
And installed my new_kernel:
$ sudo apt-get install \
linux-image-new_kernel \
linux-headers-new_kernel \
linux-image-extra-new_kernel
The same situation happened to me. It is because your Linux kernel version is too low! Check it with the command uname -r; if the version is below 3.10 (for example, Debian 7 Wheezy defaults to 3.2), then even if you install docker-ce successfully, you still won't be able to start the Docker daemon. That's why! Most answers on the web tell you to 'restart' this or that, but they do not consider this problem.
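If you want to guard against this in a provisioning script, a minimal sketch of such a check (relying on GNU sort's -V version comparison) could look like this:
#!/bin/sh
# Fail early if the running kernel is older than 3.10.
required="3.10"
current="$(uname -r | cut -d- -f1)"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" != "$required" ]; then
    echo "Kernel $current is too old for Docker (need >= $required)" >&2
    exit 1
fi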
I am trying to start the dockerd daemon with this command: dockerd &
Then I start getting the error below:
ERRO[0036] libcontainerd: failed to receive event from containerd: rpc error: code = 12 desc = unknown service types.API
This keeps repeating again and again, and I am unable to start any container after that. If I close the session and open a new one, I can see that docker ps is accessible, but I am still unable to start any container. While starting a container I get this error:
docker run hello-world
docker: Error response from daemon: unknown service types.API. ERRO[0000] error waiting for container: context canceled
Please let me know if any logs are needed.
Why do you start the docker daemon using dockerd & and not systemctl start docker.service? This is probably the cause of your problem.
In order to start the daemon at boot, you need to run systemctl enable docker.service. See Getting Started with Containers.
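For reference, the usual systemd sequence looks like this:
sudo systemctl start docker.service     # start the daemon now
sudo systemctl enable docker.service    # also start it automatically at boot
systemctl status docker.service         # confirm it is active (running)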
Note that the kernel for Red Hat Enterprise Linux 6 only supports a limited subset of the functionality needed for container support, and I don't think anyone tests either the daemon or container images on that operating system version.
I already have a service that was written for RHEL6, where I had some custom service commands that I could execute. Please see below for an extract from the script.
case "$1" in
'start')
start
;;
'stop')
stopit
;;
'restart')
stopit
start
;;
'status')
status
;;
'AppHealthCheck')
AppHealthCheck
;;
*)
echo "Usage: $0 { start | stop | restart | status | AppHealthCheck }"
exit 1
;;
esac
All the called methods have their definitions. Previously, on RHEL6, if I had to run the service and check whether it was healthy, I would execute service $servicename AppHealthCheck, and it worked. Now, on RHEL7, I am unable to define an app-health check in the service unit file. As far as my research goes, you can define what is called for service start/stop/restart, but I could not find out whether you can call custom methods in the script. Please see my service unit file below:
[Unit]
Description=SPIRIT Agent Application
[Service]
Type=forking
ExecStart=scripts/Agent start
ExecStop=scripts/Agent stop
ExecReload=scripts/Agent restart
[Install]
Can anyone please help me resolve this issue? Please let me know if more info is required.
The systemd way is to send output to the journal, so that systemctl status shows the latest log messages and tells you if the service is running. If you want more detailed status, you would create a separate command-line command that does the AppHealthCheck. It wouldn't be executed via systemctl; it'd be a separate thing.
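For example, a minimal sketch of such a standalone command (the /opt/spirit/scripts/Agent path below is only a placeholder for wherever your script actually lives):
sudo tee /usr/local/bin/agent-healthcheck >/dev/null <<'EOF'
#!/bin/sh
# Standalone health check; deliberately not managed by systemd.
exec /opt/spirit/scripts/Agent AppHealthCheck
EOF
sudo chmod +x /usr/local/bin/agent-healthcheck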
This is how Pacemaker works, for example. systemctl status pacemaker shows if the service is running.
# systemctl status pacemaker
● pacemaker.service - Pacemaker High Availability Cluster Manager
Loaded: loaded (/usr/lib/systemd/system/pacemaker.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2016-11-10 15:28:11 GMT; 1 weeks 3 days ago
Nov 11 15:54:59 node1 crmd[4422]: notice: Operation svc1_stop_0: ok (node=node1, call=93, rc=0, cib-update=134, confirmed=true)
Nov 11 15:54:59 node1 crmd[4422]: notice: Operation svc2_stop_0: ok (node=node1, call=95, rc=0, cib-update=135, confirmed=true)
Nov 11 15:54:59 node1 crmd[4422]: notice: Operation svc3_stop_0: ok (node=node1, call=97, rc=0, cib-update=136, confirmed=true)
pcs status gives more detailed information about how it's doing.
# pcs status
Cluster name: node
Stack: corosync
Current DC: node2 (version 1.2.3) - partition with quorum
2 nodes and 3 resources configured
Online: [ node1 node2 ]
Full list of resources:
<snip>
PCSD Status:
node1: Online
node2: Online
Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: active/enabled
On RHEL7 we cannot define custom service commands the way we could on RHEL6. So even if we call a custom service command, it has to internally call service $servicename start or systemctl start $servicename, so that the RHEL7 server can recognize that the service is running.
I've been trying to use Docker (1.10) on Ubuntu 16.04, but installation fails because the Docker service doesn't start.
I've already tried to install Docker via the docker.io and docker-engine apt packages and via curl -sSL https://get.docker.com/ | sh, but it doesn't work.
My Host info is:
Linux Xenial 4.5.3-040503-generic #201605041831 SMP Wed May 4 22:33:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Here is systemctl status docker.service:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since sáb 2016-05-14 15:17:31 CEST; 12min ago
Docs: https://docs.docker.com
Process: 22479 ExecStart=/usr/bin/docker daemon -H fd:// (code=exited, status=1/FAILURE)
Main PID: 22479 (code=exited, status=1/FAILURE)
may 14 15:17:30 Xenial docker[22479]: time="2016-05-14T15:17:30.103601523+02:00" level=info msg="New containerd process, pid: 22485\n"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149064723+02:00" level=error msg="devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.149127439+02:00" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153010028+02:00" level=error msg="[graphdriver] prior storage driver \"devicemapper\" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31.153130839+02:00" level=fatal msg="Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool"
may 14 15:17:31 Xenial systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
may 14 15:17:31 Xenial docker[22479]: time="2016-05-14T15:17:31+02:00" level=info msg="stopping containerd after receiving terminated"
may 14 15:17:31 Xenial systemd[1]: Failed to start Docker Application Container Engine.
may 14 15:17:31 Xenial systemd[1]: docker.service: Unit entered failed state.
may 14 15:17:31 Xenial systemd[1]: docker.service: Failed with result 'exit-code'.
Here is sudo docker daemon -D
DEBU[0000] docker group found. gid: 999
DEBU[0000] Listener created for HTTP on unix (/var/run/docker.sock)
INFO[0000] previous instance of containerd still alive (23050)
DEBU[0000] containerd connection state change: CONNECTING
DEBU[0000] Using default logging driver json-file
DEBU[0000] Golang's threads limit set to 55980
DEBU[0000] received past containerd event: &types.Event{Type:"live", Id:"", Status:0x0, Pid:"", Timestamp:0x57372cae}
DEBU[0000] containerd connection state change: READY
DEBU[0000] devicemapper: driver version is 4.34.0
DEBU[0000] devmapper: Generated prefix: docker-8:6-2101297
DEBU[0000] devmapper: Checking for existence of the pool docker-8:6-2101297-pool
DEBU[0000] devmapper: poolDataMajMin=7:0 poolMetaMajMin=7:1
DEBU[0000] devmapper: Major:Minor for device: /dev/loop0 is:7:0
DEBU[0000] devmapper: Major:Minor for device: /dev/loop1 is:7:1
DEBU[0000] devmapper: loadDeviceFilesOnStart()
DEBU[0000] devmapper: Skipping file /var/lib/docker/devicemapper/metadata/transaction-metadata
DEBU[0000] devmapper: loadDeviceFilesOnStart() END
DEBU[0000] devmapper: constructDeviceIDMap()
DEBU[0000] devmapper: constructDeviceIDMap() END
DEBU[0000] devmapper: Rolling back open transaction: TransactionID=1 hash= device_id=1
ERRO[0000] devmapper: Unable to delete device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
WARN[0000] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
DEBU[0000] devmapper: Initializing base device-mapper thin volume
DEBU[0000] devicemapper: CreateDevice(poolName=/dev/mapper/docker-8:6-2101297-pool, deviceID=1)
DEBU[0000] devmapper: Error creating device: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] devmapper: Error device setupBaseImage: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
ERRO[0000] [graphdriver] prior storage driver "devicemapper" failed: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
DEBU[0000] Cleaning up old mountid : start.
FATA[0000] Error starting daemon: error initializing graphdriver: devicemapper: Can't set task name /dev/mapper/docker-8:6-2101297-pool
Here is ./check-config.sh output:
warning: /proc/config.gz does not exist, searching other paths for kernel config ...
info: reading kernel config from /boot/config-4.5.3-040503-generic ...
Generally Necessary:
- cgroup hierarchy: properly mounted [/sys/fs/cgroup]
- apparmor: enabled and tools installed
- CONFIG_NAMESPACES: enabled
- CONFIG_NET_NS: enabled
- CONFIG_PID_NS: enabled
- CONFIG_IPC_NS: enabled
- CONFIG_UTS_NS: enabled
- CONFIG_DEVPTS_MULTIPLE_INSTANCES: enabled
- CONFIG_CGROUPS: enabled
- CONFIG_CGROUP_CPUACCT: enabled
- CONFIG_CGROUP_DEVICE: enabled
- CONFIG_CGROUP_FREEZER: enabled
- CONFIG_CGROUP_SCHED: enabled
- CONFIG_CPUSETS: enabled
- CONFIG_MEMCG: enabled
- CONFIG_KEYS: enabled
- CONFIG_MACVLAN: enabled (as module)
- CONFIG_VETH: enabled (as module)
- CONFIG_BRIDGE: enabled (as module)
- CONFIG_BRIDGE_NETFILTER: enabled (as module)
- CONFIG_NF_NAT_IPV4: enabled (as module)
- CONFIG_IP_NF_FILTER: enabled (as module)
- CONFIG_IP_NF_TARGET_MASQUERADE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_ADDRTYPE: enabled (as module)
- CONFIG_NETFILTER_XT_MATCH_CONNTRACK: enabled (as module)
- CONFIG_NF_NAT: enabled (as module)
- CONFIG_NF_NAT_NEEDED: enabled
- CONFIG_POSIX_MQUEUE: enabled
Optional Features:
- CONFIG_USER_NS: enabled
- CONFIG_SECCOMP: enabled
- CONFIG_CGROUP_PIDS: enabled
- CONFIG_MEMCG_KMEM: missing
- CONFIG_MEMCG_SWAP: enabled
- CONFIG_MEMCG_SWAP_ENABLED: missing
(note that cgroup swap accounting is not enabled in your kernel config, you can enable it by setting boot option "swapaccount=1")
- CONFIG_BLK_CGROUP: enabled
- CONFIG_BLK_DEV_THROTTLING: enabled
- CONFIG_IOSCHED_CFQ: enabled
- CONFIG_CFQ_GROUP_IOSCHED: enabled
- CONFIG_CGROUP_PERF: enabled
- CONFIG_CGROUP_HUGETLB: enabled
- CONFIG_NET_CLS_CGROUP: enabled (as module)
- CONFIG_CGROUP_NET_PRIO: enabled
- CONFIG_CFS_BANDWIDTH: enabled
- CONFIG_FAIR_GROUP_SCHED: enabled
- CONFIG_RT_GROUP_SCHED: missing
- CONFIG_EXT3_FS: missing
- CONFIG_EXT3_FS_XATTR: missing
- CONFIG_EXT3_FS_POSIX_ACL: missing
- CONFIG_EXT3_FS_SECURITY: missing
(enable these ext3 configs if you are using ext3 as backing filesystem)
- CONFIG_EXT4_FS: enabled
- CONFIG_EXT4_FS_POSIX_ACL: enabled
- CONFIG_EXT4_FS_SECURITY: enabled
- Network Drivers:
- "overlay":
- CONFIG_VXLAN: enabled (as module)
- Storage Drivers:
- "aufs":
- CONFIG_AUFS_FS: missing
- "btrfs":
- CONFIG_BTRFS_FS: enabled (as module)
- "devicemapper":
- CONFIG_BLK_DEV_DM: enabled
- CONFIG_DM_THIN_PROVISIONING: enabled (as module)
- "overlay":
- CONFIG_OVERLAY_FS: enabled (as module)
- "zfs":
- /dev/zfs: missing
- zfs command: missing
- zpool command: missing
If someone could please help me, I would be very thankful.
Update
It seems that in newer versions of Docker and Ubuntu, the Docker unit file is simply masked (pointing to /dev/null).
You can verify it by running the following commands in the terminal:
sudo file /lib/systemd/system/docker.service
sudo file /lib/systemd/system/docker.socket
You should see that the unit file symlinks to /dev/null.
In this case, all you have to do is follow S34N's suggestion, and run:
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
sudo systemctl status docker
I'll also keep the original post, which addresses the error log stating that the storage driver should be replaced:
Original Post
I had the same problem, and I tried fixing it with Salva Cort's suggestion, but printing /etc/default/docker says:
# THIS FILE DOES NOT APPLY TO SYSTEMD
So here's a permanent fix that works for systemd (Ubuntu 15.04 and higher):
create a new file /etc/systemd/system/docker.service.d/overlay.conf with the following content:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// -s overlay
flush changes by executing:
sudo systemctl daemon-reload
verify that the configuration has been loaded:
systemctl show --property=ExecStart docker
restart docker:
sudo systemctl restart docker
The following unmasking commands worked for me (Ubuntu 18). Hope it helps someone out there... :-)
sudo systemctl unmask docker.service
sudo systemctl unmask docker.socket
sudo systemctl start docker.service
I had the same problem after upgrading Docker from 17.05-ce to 17.06-ce via docker-machine.
Update /etc/systemd/system/docker.service.d/10-machine.conf
replace
`docker daemon` => `dockerd`
For example, from:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
to
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver aufs --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=generic
Environment=
flush changes by executing:
sudo systemctl daemon-reload
restart docker:
sudo systemctl restart docker
Well, finally I fixed it.
All you have to do is load a different storage driver; in my case I will use overlay:
Stop the Docker service: sudo systemctl stop docker.service
Start Docker Daemon (overlay driver): sudo docker daemon -s overlay
Run Demo container: sudo docker run hello-world
In order to make these changes permanent, you must edit the /etc/default/docker file and add the option:
DOCKER_OPTS="-s overlay"
The next time the Docker service is loaded, it will run docker daemon -s overlay.
I've been able to get it working after a kernel upgrade by following the directions in this blog.
https://mymemorysucks.wordpress.com/2016/03/31/docker-graphdriver-and-aufs-failed-driver-not-supported-error-after-ubuntu-upgrade/
sudo apt-get update
sudo apt-get install linux-image-extra-$(uname -r) linux-image-extra-virtual
sudo modprobe aufs
sudo service docker restart
After viewing some of the other answers, it looks like the issue was that the service wasn't running with the -s overlay option.
I also happened to notice that Docker tried to start up with ${DOCKER_OPTS} at the end of the call.
I was able to export DOCKER_OPTS="-s overlay" (because by default DOCKER_OPTS was empty) and get Docker running.
I had a similar issue with a new Docker installation (version 19.03.3-rc1) on Ubuntu 18.04.3 LTS. By default, the /etc/docker/daemon.json file does not exist on a new installation. Following a tutorial, I changed the storage driver to devicemapper by creating a new daemon.json file. That worked, but then I deleted the daemon.json file, thinking it would revert to the default; instead, the service would not start.
Creating the /etc/docker/daemon.json file again with the default storage driver fixed it for me:
{
"storage-driver": "overlay2"
}
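After recreating the file, restart the service so it picks up the configuration:
sudo systemctl restart docker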
Running sudo dockerd --debug will help find the actual pain point. I fixed the same error this way on Ubuntu 20.04 LTS.
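For example, this is the sequence I would use (stopping the service first so the foreground daemon can bind the socket):
sudo systemctl stop docker
sudo dockerd --debug
# The last lines printed before the daemon exits usually name the broken setting.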
As for me, I got this error:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Finally I found that it was an error in /etc/docker/daemon.json, into which I had added registry-mirrors:
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
# I forgot to add a comma here! (This line is just an annotation; JSON itself does not allow comments.)
"registry-mirrors": ["https://docker.mirrors.ustc.edu.cn"]
}
After I added the comma and ran systemctl restart docker, the problem was solved.
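A quick way to catch such syntax errors before restarting is to run the file through a JSON parser, for example:
python3 -m json.tool /etc/docker/daemon.json
# Prints the parsed JSON on success, or the line and column of the error on failure.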
In my case, I was getting the following error from the journalctl -xe command:
unable to configure the Docker daemon with file /etc/docker/daemon.json: invalid character 'â' looking for beginning of object key string
Just clean up /etc/docker/daemon.json so it contains only:
{
}
I had this issue today after an upgrade to the Ubuntu kernel and tried numerous solutions above. However, the only one that worked (Ubuntu 16.04.6 LTS) was to remove (or rename) the folder /var/lib/docker.
Please be aware that this will remove all your Docker images, containers, volumes, etc., so understand the implications before applying it, or take a backup!
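If you go the rename route, the old folder doubles as the backup; a sketch:
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak   # keep the old state around until you are sure
sudo systemctl start docker                   # Docker recreates /var/lib/docker from scratch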
There are more details here:
https://github.com/docker/for-linux/issues/162
I am trying to set up a PostgreSQL 9.3 server on CentOS 7 (installed via yum) inside a custom directory, which in my case is an encrypted partition (/custom_container/database) that is mounted on startup. For some reason PostgreSQL does not behave as the manual says it should and throws an error on service startup.
Note: it does not want to accept the PGDATA environment variable which I set, and when running
su - postgres -c '/usr/pgsql-9.3/bin/initdb'
(given that the PGDATA directory is owned by postgres:postgres) the cluster gets initialized inside the default directory /var/lib/pgsql/9.3/data/.
The only way to change that is by using
su - postgres -c '/usr/pgsql-9.3/bin/initdb --pgdata=$PGDATA'
which initializes the directory inside the custom container I am using. This is something I could not figure out, as the docs say the PGDATA variable is used by default.
Problem: When running
service postgresql-9.3 start
I get an error with the log
postgresql-9.3.service - PostgreSQL 9.3 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.3.service; disabled)
Active: failed (Result: exit-code) since Mon 2014-11-10 15:24:15 CET; 1s ago
Process: 2785 ExecStartPre=/usr/pgsql-9.3/bin/postgresql93-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE)
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Starting PostgreSQL 9.3 database server...
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: "/var/lib/pgsql/9.3/data/" is missing or empty.
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: Use "/usr/pgsql-9.3/bin/postgresql93-setup initdb" to initialize t...ster.
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: See %{_pkgdocdir}/README.rpm-dist for more information.
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: postgresql-9.3.service: control process exited, code=exited status=1
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Failed to start PostgreSQL 9.3 database server.
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Unit postgresql-9.3.service entered failed state.
This means that PostgreSQL, even though the cluster is initialized in the new $PGDATA directory (/custom_container/database), still looks for the cluster in /var/lib/pgsql/9.3/data/.
Has anyone experienced this PostgreSQL behavior before? Could it be that I missed certain configuration options, or does the problem come from the PostgreSQL installation?
Thank you in advance!
It appears the real problem was setting the environment variables, which I got working in the following thread:
Centos 7 environment variables for Postgres service
The issue is the PGDATA variable set inside the custom /etc/systemd/system/postgresql-9.3.service, which should be created from the contents of /usr/lib/systemd/system/postgresql-9.3.service (which uses the default PGDATA variable).
You need to create a custom postgresql.service file in /etc/systemd/system/ which overrides the default PGDATA environment variable. Your custom service file can .include the default postgresql service file, so you only need to add what you want to change. That way, upgrades can still modify or improve the default service file, while your change is preserved.
This is how I just did it on CentOS 7:
cat <<END >/etc/systemd/system/postgresql.service
.include /lib/systemd/system/postgresql.service
[Service]
# Set this to your wanted PGDATA
Environment=PGDATA=/mnt/postgres/data
END
systemctl daemon-reload
systemctl restart postgresql.service
Verify:
ps -ax | grep [p]ostgres
Update:
Rather than manually creating the file and adding the .include line, you can also use the systemd built-in way:
systemctl edit postgresql.service
This will open your default editor and save your changes to /etc/systemd/system/postgresql.service.d/override.conf
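For example, with the data directory from the question, the saved override would contain just:
[Service]
Environment=PGDATA=/custom_container/database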
try this:
## Login with postgres user
su - postgres
export PGDATA=/your_path/data
pg_ctl -D $PGDATA start &
I think the most "CentOS 7 way" to do it is to copy the service file:
sudo cp /usr/lib/systemd/system/postgresql-9.6.service /etc/systemd/system/postgresql-9.6.service
Then edit the file /etc/systemd/system/postgresql-9.6.service:
# Location of database directory
Environment=PGDATA=/mnt/volume/var/lib/pgsql/9.6/data/
Then start it with sudo systemctl start postgresql-9.6 and verify:
# sudo ps -ax | grep postmaster
32100 ? Ss 0:00 /usr/pgsql-9.6/bin/postmaster -D /mnt/volume/var/lib/pgsql/9.6/data/
Try editing the file /etc/init.d/postgresql-9.3:
PGDATA=/your/custom/path