cgconfig service won't start up - linux

I have already referenced this post: Centos cgconfig fails to start
I have a CentOS 7 machine. I've tried both commenting out and leaving in the memory line in the following /etc/cgconfig.conf file:
mount {
cpuset = /cgroup/cpuset;
cpu = /cgroup/cpu;
cpuacct = /cgroup/cpuacct;
memory = /cgroup/memory;
devices = /cgroup/devices;
freezer = /cgroup/freezer;
net_cls = /cgroup/net_cls;
blkio = /cgroup/blkio;
}
I've also manually created that directory structure. When I run service cgconfig start, systemctl status cgconfig.service gives me this:
cgconfig.service - Control Group configuration service
Loaded: loaded (/usr/lib/systemd/system/cgconfig.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Tue 2017-03-14 20:27:40 EDT; 18s ago
Process: 6713 ExecStart=/usr/sbin/cgconfigparser -l /etc/cgconfig.conf -L /etc/cgconfig.d -s 1664 (code=exited, status=101)
Main PID: 6713 (code=exited, status=101)
Mar 14 20:27:40 localhost.localdomain systemd[1]: Starting Control Group configuration service...
Mar 14 20:27:40 localhost.localdomain cgconfigparser[6713]: /usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Mar 14 20:27:40 localhost.localdomain cgconfigparser[6713]: Error: cannot mount cpu to /cgroup/cpu: Device or resource busy
Mar 14 20:27:40 localhost.localdomain systemd[1]: cgconfig.service: main process exited, code=exited, status=101/n/a
Mar 14 20:27:40 localhost.localdomain systemd[1]: Failed to start Control Group configuration service.
Mar 14 20:27:40 localhost.localdomain systemd[1]: Unit cgconfig.service entered failed state.
Mar 14 20:27:40 localhost.localdomain systemd[1]: cgconfig.service failed.
I've also tried to look at /proc/mounts to perhaps unmount cpu.
Any help getting the cgconfig service to start would be appreciated.

$ lssubsys -a
cpu,cpuacct
If controllers are co-mounted, as 'cpu,cpuacct' is here, they must be mounted in the same hierarchy (i.e. at the same cgroup mount point).
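Concretely, a sketch of what that could look like in /etc/cgconfig.conf, with both controllers pointed at the same mount point (the path is illustrative; note that on CentOS 7 systemd already mounts the controllers under /sys/fs/cgroup, which is typically why a single-controller mount of cpu fails with "Device or resource busy"):
mount {
cpu = /cgroup/cpu_cpuacct;
cpuacct = /cgroup/cpu_cpuacct;
}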

Running on Ubuntu, I was able to resolve the same issue by installing libcgmanager, which provides the cgmanager daemon.
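On Ubuntu that is roughly the following; the exact package names are an assumption and vary by release, so verify them first (e.g. with apt-cache search cgmanager):
$ sudo apt-get install cgmanager libcgmanager0
$ sudo service cgmanager start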

Related

Can't restart webmin [status 2]

I've updated Webmin, but now it refuses to restart:
● webmin.service - LSB: web-based administration interface for Unix systems
Loaded: loaded (/etc/init.d/webmin; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2018-07-29 09:30:29 CEST; 12s ago
Docs: man:systemd-sysv-generator(8)
Process: 1485 ExecStart=/etc/init.d/webmin start (code=exited, status=2)
Jul 29 09:30:26 vps513135 systemd[1]: Starting LSB: web-based administration interface for Unix systems...
Jul 29 09:30:27 vps513135 perl[1486]: pam_unix(webmin:auth): authentication failure; logname= uid=0 euid=0 tty= ruser= rhost= user=root
Jul 29 09:30:29 vps513135 systemd[1]: webmin.service: Control process exited, code=exited status=2
Jul 29 09:30:29 vps513135 systemd[1]: Failed to start LSB: web-based administration interface for Unix systems.
Jul 29 09:30:29 vps513135 systemd[1]: webmin.service: Unit entered failed state.
Jul 29 09:30:29 vps513135 systemd[1]: webmin.service: Failed with result 'exit-code'.
Can someone explain what pam_unix(webmin:auth): authentication failure means?
Some more info:
root@vps513135:~# uname -a
Linux vps513135 4.9.0-7-amd64 #1 SMP Debian 4.9.110-1 (2018-07-05) x86_64 GNU/Linux
Thank you :)
SOLUTION
I tried starting it manually like this:
root@vps513135:~# /etc/webmin/start
Starting Webmin server in /usr/share/webmin
Failed to open SSL key /home/sowdowdow/domains/sow.sowdowdow.fr/ssl.key at /usr/share/webmin/miniserv.pl line 4414.
The output is a bit clearer, and I finally found a solution here.
Comment out the lines related to the borked server in /etc/webmin/miniserv.conf:
#ipcert_sow.sowdowdow.fr,*.sow.sowdowdow.fr=/home/sowdowdow/domains/sow.sowdowdow.fr/ssl.cert
#ipkey_sow.sowdowdow.fr,*.sow.sowdowdow.fr=/home/sowdowdow/domains/sow.sowdowdow.fr/ssl.key
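A minimal sketch of that fix, assuming a stock Webmin layout (/etc/webmin/restart is the companion of the /etc/webmin/start script used above; back up the config first):
$ cp /etc/webmin/miniserv.conf /etc/webmin/miniserv.conf.bak
$ sed -i -e 's|^ipcert_sow.sowdowdow.fr|#&|' -e 's|^ipkey_sow.sowdowdow.fr|#&|' /etc/webmin/miniserv.conf
$ /etc/webmin/restart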
Webmin doesn't support the systemctl command directly. Instead of that, please use the following commands to start the Webmin service:
/etc/rc.d/init.d/webmin stop
systemctl start webmin
I tried these commands and was able to start the Webmin service on my server.

Docker start failed in CentOS 7

The Docker service on CentOS 7 failed to start, and I have some Docker images which I want to save at any cost. I have searched a couple of online docs and they all say to delete the /var/lib/docker/ directory, which I don't want to do because all of the image and container data is there. Can someone please show me how to get Docker back up and running without losing any data?
Log:
[root@BuyPandGDev01 /]# systemctl status docker.service -l
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Sun 2018-04-22 00:05:23 UTC; 19min ago
Docs: http://docs.docker.com
Process: 1539 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
Main PID: 1539 (code=exited, status=1/FAILURE)
Apr 22 00:05:22 BuyPandGDev01 systemd[1]: Starting Docker Application Container Engine...
Apr 22 00:05:22 BuyPandGDev01 dockerd-current[1539]: time="2018-04-22T00:05:22.068920976Z" level=info msg="libcontainerd: new containerd process, pid: 1550"
Apr 22 00:05:23 BuyPandGDev01 dockerd-current[1539]: time="2018-04-22T00:05:23.101036303Z" level=warning msg="devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section."
Apr 22 00:05:23 BuyPandGDev01 dockerd-current[1539]: time="2018-04-22T00:05:23.155223108Z" level=error msg="[graphdriver] prior storage driver \"devicemapper\" failed: devmapper: Base Device UUID and Filesystem verification failed: devicemapper: Error running deviceCreate (ActivateDevice) dm_task_run failed"
Apr 22 00:05:23 BuyPandGDev01 dockerd-current[1539]: time="2018-04-22T00:05:23.155708413Z" level=fatal msg="Error starting daemon: error initializing graphdriver: devmapper: Base Device UUID and Filesystem verification failed: devicemapper: Error running deviceCreate (ActivateDevice) dm_task_run failed"
Apr 22 00:05:23 BuyPandGDev01 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 22 00:05:23 BuyPandGDev01 systemd[1]: Failed to start Docker Application Container Engine.
Apr 22 00:05:23 BuyPandGDev01 systemd[1]: Unit docker.service entered failed state.
Apr 22 00:05:23 BuyPandGDev01 systemd[1]: docker.service failed.
journalctl -xe:
[root@BuyPandGDev01 /]# journalctl -xe
-- Unit docker-storage-setup.service has begun starting up.
Apr 22 00:25:58 BuyPandGDev01 container-storage-setup[2111]: INFO: Volume group backing root filesystem could not be determined
Apr 22 00:25:58 BuyPandGDev01 container-storage-setup[2111]: ERROR: No valid volume group found. Exiting.
Apr 22 00:25:58 BuyPandGDev01 systemd[1]: docker-storage-setup.service: main process exited, code=exited, status=1/FAILURE
Apr 22 00:25:58 BuyPandGDev01 systemd[1]: Failed to start Docker Storage Setup.
-- Subject: Unit docker-storage-setup.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker-storage-setup.service has failed.
--
-- The result is failed.
Apr 22 00:25:58 BuyPandGDev01 systemd[1]: Unit docker-storage-setup.service entered failed state.
Apr 22 00:25:58 BuyPandGDev01 systemd[1]: docker-storage-setup.service failed.
Apr 22 00:25:58 BuyPandGDev01 systemd[1]: Starting Docker Application Container Engine...
-- Subject: Unit docker.service has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has begun starting up.
Apr 22 00:25:58 BuyPandGDev01 dockerd-current[2140]: time="2018-04-22T00:25:58.731142431Z" level=info msg="libcontainerd: new containe
Apr 22 00:25:59 BuyPandGDev01 dockerd-current[2140]: time="2018-04-22T00:25:59.767061431Z" level=warning msg="devmapper: Usage of loop
Apr 22 00:25:59 BuyPandGDev01 kernel: device-mapper: table: 253:1: thin: Couldn't open thin internal device
Apr 22 00:25:59 BuyPandGDev01 kernel: device-mapper: ioctl: error adding target to table
Apr 22 00:25:59 BuyPandGDev01 dockerd-current[2140]: time="2018-04-22T00:25:59.835261589Z" level=error msg="[graphdriver] prior storag
Apr 22 00:25:59 BuyPandGDev01 dockerd-current[2140]: time="2018-04-22T00:25:59.835697590Z" level=fatal msg="Error starting daemon: err
Apr 22 00:25:59 BuyPandGDev01 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 22 00:25:59 BuyPandGDev01 systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Apr 22 00:25:59 BuyPandGDev01 systemd[1]: Unit docker.service entered failed state.
Apr 22 00:25:59 BuyPandGDev01 systemd[1]: docker.service failed.
Apr 22 00:25:59 BuyPandGDev01 polkitd[703]: Unregistered Authentication Agent for unix-process:2105:147803 (system bus name :1.43, obj
Any response would be helpful and appreciated.
Thx,
kumar
This error occurred for me when I was upgrading Docker. The solution that worked for me was to remove the legacy Docker files under /var/lib/docker/ and restart the Docker service:
# Remove docker files
$ rm -rf /var/lib/docker/
# Restart docker via service or via systemctl
$ service docker restart
$ service docker status
$ systemctl start docker.service
$ systemctl status docker.service
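Since the question is specifically about keeping the existing images, a more cautious variant is to archive the directory before removing it. A sketch, assuming there is enough free space under a hypothetical /backup directory:
$ systemctl stop docker
$ tar -czf /backup/var-lib-docker-$(date +%F).tar.gz /var/lib/docker/
$ rm -rf /var/lib/docker/
$ systemctl start docker.service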
I also had this error when starting the Docker service:
kernel: device-mapper: table: 253:1: thin: Couldn't open thin internal device
I fixed it by creating a symbolic link from /var/lib/docker to another location on the machine that had more disk space:
cd /var/lib/
mv docker docker.old
ln -s /path/to/big/disk/docker/ docker
Restart the service:
systemctl restart docker
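Before switching over, it is worth confirming the daemon is stopped and that the target location (the same placeholder path as above) actually has enough room:
systemctl stop docker
df -h /path/to/big/disk
# after creating the link, confirm it resolves where you expect
ls -ld /var/lib/docker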

calico-node rkt returns stage1-fly.aci.asc: no such file or directory

I have a CoreOS beta (1185.2.0) installed.
I have the following systemd service file to start calico-node:
[Unit]
Description=Calico per-host agent
Requires=network-online.target
After=network-online.target
[Service]
Slice=machine.slice
PermissionsStartOnly=true
Environment=ETCD_CA_CERT_FILE=/etc/ssl/etcd/ca.pem
Environment=ETCD_CERT_FILE=/etc/ssl/etcd/etcd1.pem
Environment=ETCD_KEY_FILE=/etc/ssl/etcd/etcd1-key.pem
Environment=CALICO_DISABLE_FILE_LOGGING=true
Environment=HOSTNAME=10.79.218.2
Environment=IP=10.79.218.2
Environment=FELIX_FELIXHOSTNAME=10.79.218.2
Environment=CALICO_NETWORKING=true
Environment=NO_DEFAULT_POOLS=true
Environment=ETCD_ENDPOINTS=https://coreos-2.tux-in.com:2379,https://coreos-3.tux-in.com:2379
ExecStartPre=/bin/mkdir /var/run/calico
ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/run/calico --volume=modules,kind=host,source=/lib/modules,readOnly=false --mount=volume=modules,target=/lib/modules --volume=dns,kind=host,source=/etc/resolv.conf,readOnly=true --volume=etcd-tls-certs,kind=host,source=/etc/ssl/etcd,readOnly=true --mount=volume=dns,target=/etc/resolv.conf --mount=volume=etcd-tls-certs,target=/etc/ssl/etcd --mount=volume=var-run-calico,target=/var/run/calico --trust-keys-from-https quay.io/calico/node:v0.22.0
KillMode=mixed
Restart=always
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
Welp, the systemd unit fails with:
● calico-node.service - Calico per-host agent
Loaded: loaded (/etc/systemd/system/calico-node.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit-hit) since Tue 2016-10-25 04:51:15 UTC; 9min ago
Process: 1970 ExecStart=/usr/bin/rkt run --inherit-env --stage1-from-dir=stage1-fly.aci --volume=var-run-calico,kind=host,source=/var/
Process: 4307 ExecStartPre=/bin/mkdir /var/run/calico (code=exited, status=1/FAILURE)
Main PID: 1970 (code=exited, status=1/FAILURE)
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Failed to start Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Unit entered failed state.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Failed with result 'exit-code'.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Service hold-off time over, scheduling restart.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Stopped Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Start request repeated too quickly.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: Failed to start Calico per-host agent.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Unit entered failed state.
Oct 25 04:51:15 coreos-2.tux-in.com systemd[1]: calico-node.service: Failed with result 'start-limit-hit'.
I tried setting the environment variables in a terminal and running the rkt command manually, and I got this error message:
image: using image from file /usr/lib/rkt/stage1-images/stage1-fly.aci
run: open /usr/lib/rkt/stage1-images/stage1-fly.aci.asc: no such file or directory
I think that error may relate to the following configuration file at /etc/rkt/paths.d/paths.json:
{
"rktKind": "paths",
"rktVersion": "v1",
"stage1-images": "/usr/lib/rkt/stage1-images"
}
I need the paths configuration file later on for Kubernetes.
Any ideas? The .asc file really doesn't exist there.
/usr/lib is a symbolic link to /usr/lib64. rkt is configured not to look for image signatures (.asc files) under /usr/lib64, but it does require them under /usr/lib, so pointing stage1-images at /usr/lib/rkt/stage1-images makes it expect a signature that isn't there.
It seems this configuration is already set properly by default, so simply removing the file /etc/rkt/paths.d/paths.json resolves the issue.
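In practice that amounts to something like the following (the reset-failed step is only needed because the unit already hit its start limit above):
rm /etc/rkt/paths.d/paths.json
systemctl reset-failed calico-node.service
systemctl restart calico-node.service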
Full answer at https://github.com/coreos/rkt/issues/3320

Cannot start keystone service

I installed packstack on my fresh installation of Fedora 21 with all updates. When I ran
packstack --allinone I received this error:
ERROR : Error appeared during Puppet run: 192.168.1.*_keystone.pp Error:
Could not start Service[keystone]: Execution of '/sbin/service openstack-keystone
start' returned 1: Redirecting to /bin/systemctl start openstack-keystone.service
You will find full trace in log /var/tmp/packstack/20141223-022613-whLvTs/manifests
/192.168.1.*_keystone.pp.log
And this is the log:
Notice: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder#services]:
Dependency Service[keystone] has failures: true
Warning: /Stage[main]/Cinder::Keystone::Auth/Keystone_user_role[cinder#services]:
Skipping because of failed dependencies
Notice: Finished catalog run in 13.02 seconds
With systemctl status openstack-keystone.service I get this:
openstack-keystone.service - OpenStack Identity Service (code-named Keystone)
Loaded: loaded (/usr/lib/systemd/system/openstack-keystone.service; disabled)
Active: failed (Result: start-limit) since Tue 2014-12-23 19:47:36 EET; 1min 59s ago
Process: 22526 ExecStart=/usr/bin/keystone-all (code=exited, status=1/FAILURE)
Main PID: 22526 (code=exited, status=1/FAILURE)
Dec 23 19:47:35 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:35 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:35 localhost.localdomain systemd[1]: openstack-keystone.servic...
Dec 23 19:47:36 localhost.localdomain systemd[1]: start request repeated to...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Failed to start OpenStack...
Dec 23 19:47:36 localhost.localdomain systemd[1]: Unit openstack-keystone.s...
Dec 23 19:47:36 localhost.localdomain systemd[1]: openstack-keystone.servic...
This can happen due to an SELinux AVC denial caused by a missing policy.
You can try putting SELinux into permissive mode:
# setenforce 0
A similar bug
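To confirm whether SELinux is actually the culprit before (or instead of) loosening enforcement, you can check the current mode and look for recent AVC denials (assuming the audit tools are installed):
# getenforce
# ausearch -m avc -ts recent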

Can't set "max connections" - parameter of memcached higher than 4096 (exited status 71)

My start parameters for memcached are:
-m 900 -p 11211 -t 5 -l 127.0.0.1 -r 200000 -c 4096
If "-c" (max connections) is more than 4096, memcached won't start.
memcached.service - memcached daemon
Loaded: loaded (/usr/lib/systemd/system/memcached.service; enabled)
Active: failed (Result: exit-code) since Mon 2014-10-13 13:25:15 CEST; 17s ago
Process: 17667 ExecStart=/usr/sbin/memcached $MEMCACHED_PARAMS (code=exited, status=71)
Main PID: 17667 (code=exited, status=71)
Oct 13 13:25:15 openSUSE-131-64-minimal systemd[1]: Starting memcached daemon...
Oct 13 13:25:15 openSUSE-131-64-minimal systemd[1]: Started memcached daemon.
Oct 13 13:25:15 openSUSE-131-64-minimal systemd[1]: memcached.service: main process exited, code=exited, status=71/n/a
Oct 13 13:25:15 openSUSE-131-64-minimal systemd[1]: Unit memcached.service entered failed state.
Does someone know what could cause this problem?
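Exit status 71 corresponds to EX_OSERR ("operating system error"), which usually points at a resource limit; since memcached needs a file descriptor per connection, a sensible first step is to check the open-files limit the service runs under and to run memcached in the foreground to see its real error message. A diagnostic sketch, not a confirmed fix (the memcached user name below is an assumption):
# run in the foreground with the higher connection limit and verbose output
/usr/sbin/memcached -u memcached -m 900 -p 11211 -t 5 -l 127.0.0.1 -c 8192 -vv
# check the file-descriptor limits; if they sit near 4096, raise them
ulimit -n
systemctl show memcached.service -p LimitNOFILE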
