What starts this docker process on my laptop? - linux

Every time I boot up my Lubuntu 16.04 laptop I can see I have a running docker container:
$ ps -ef | grep docker
root 1724 1 3 21:17 ? 00:01:30 /usr/bin/dockerd -H fd://
root 1774 1724 0 21:17 ? 00:00:04 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 4750 1774 0 21:17 ? 00:00:00 docker-containerd-shim 72541a4648b890132985daf2357d1130b8b5208cf12ede607b93ab2987629719 /var/run/docker/libcontainerd/72541a4648b890132985daf2357d1130b8b5208cf12ede607b93ab2987629719 docker-runc
stephane 10755 1793 0 22:07 pts/0 00:00:00 grep docker
It serves a Jenkins application on port 80; requesting localhost/ in the browser redirects to http://localhost/login?from=%2F and shows a Jenkins warning page:
Unlock Jenkins
To ensure Jenkins is securely set up by the administrator, a password has been written to the log (not sure where to find it?) and this file on the server:
A wget request shows:
$ wget localhost/
--2017-05-23 22:09:55-- http://localhost/
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2017-05-23 22:09:55 ERROR 403: Forbidden.
How can I know which service is firing up this docker process?
I looked in the /etc/init.d/ directory:
$ l /etc/init.d/
alsa-utils* checkroot-bootclean.sh* halt* mattermostd* nginxd* rc* single* uuidd*
anacron* checkroot.sh* hostname.sh* mountall-bootclean.sh* ntp* rc.local* skeleton whoopsie*
apachedsd* console-setup* httpd* mountall.sh* ondemand* rcS* ssh* x11-common*
apparmor* cron* hwclock.sh* mountdevsubfs.sh* openvpn* README tomcatd*
apport* cups* irqbalance* mountkernfs.sh* php-fpm* reboot* udev*
avahi-daemon* cups-browsed* keyboardd* mountnfs-bootclean.sh* plymouth* redis* ufw*
bluetooth* dbus* killprocs* mountnfs.sh* plymouth-log* resolvconf* umountfs*
bootmisc.sh* docker* kmod* mysqld* postfix* rsync* umountnfs.sh*
cgroupfs-mount* dropboxd* lightdm* networking* pppd-dns* rsyslog* umountroot*
checkfs.sh* grub-common* mariadbd* network-manager* procps* sendsigs* urandom*
The /etc/init.d/docker script is mine. I removed it from the directory and rebooted, but the docker process still comes up:
$ ps -ef | grep docker
root 1560 1 5 22:15 ? 00:00:06 /usr/bin/dockerd -H fd://
root 1645 1560 0 22:15 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --shim docker-containerd-shim --runtime docker-runc
root 4644 1645 0 22:15 ? 00:00:00 docker-containerd-shim 069db46cca05d43c35f05ff50aaa836507cbf69e4e3d9443b6b859d0edb5b076 /var/run/docker/libcontainerd/069db46cca05d43c35f05ff50aaa836507cbf69e4e3d9443b6b859d0edb5b076 docker-runc
stephane 5520 1741 0 22:17 pts/0 00:00:00 grep docker
So I grepped for anything named docker in all these files, but found nothing:
$ cd /etc/init.d/
[stephane@stephane-ThinkPad-X301 init.d]
$ grep.sh docker
[stephane@stephane-ThinkPad-X301 init.d]
This docker process is there every time I start my laptop, even when offline.
What starts this docker process ?

Lubuntu 16.04 comes with systemd by default. At some point you must have started up a Jenkins instance in docker - it's hard to tell exactly what started the process initially. However, systemd is what is currently causing it to start. To stop it from running, run the following commands:
systemctl status docker <- Find out whether systemd thinks docker is running.
It'll likely show something like this:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2017-05-21 22:59:46 EDT; 1 day 17h ago
Docs: http://docs.docker.com
Main PID: 1314 (dockerd-current)
Tasks: 14 (limit: 8192)
CGroup: /system.slice/docker.service
└─1314 /usr/bin/dockerd-current --add-runtime oci=/usr/libexec/docker/docker-runc-current --default-runtime=oci --containerd /run/containerd.sock --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --log-driver=journald
To stop it, run systemctl stop docker and then systemctl disable docker. As a last resort if this doesn't work, you can run systemctl mask docker.
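Put together, a minimal sequence looks like this; mask is only needed as a last resort if stop and disable alone don't keep it from coming back:
# check whether systemd manages docker and whether it is running
systemctl status docker
# stop the running daemon and keep it from starting at boot
sudo systemctl stop docker
sudo systemctl disable docker
# last resort: make the unit impossible to start at all
sudo systemctl mask docker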

Docker is being started by systemd in your environment. You can disable the entire engine by running:
sudo systemctl disable docker
sudo systemctl stop docker
You can also stop only the container that is running (the shim and Jenkins application):
sudo docker ps # lists the running containers along with their container id
sudo docker update --restart=no $container_id
sudo docker stop $container_id
If you know that you do not need this container and want to permanently delete it, you can run this instead of the last two commands above:
sudo docker rm -f $container_id
The -f switch also stops the container if it's currently running.
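Assuming the Jenkins container is the one publishing port 80, a sketch of the whole sequence might look like this; the --format string just makes the published ports easier to spot:
# list running containers with image and published ports
sudo docker ps --format '{{.ID}}  {{.Image}}  {{.Ports}}'
# put the ID shown for the Jenkins container into container_id
container_id=<container_id>
# keep it from coming back at boot, then stop it
sudo docker update --restart=no $container_id
sudo docker stop $container_id
# or delete it outright (this also stops it if running)
# sudo docker rm -f $container_id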
Edit: from your comment, your container is running under swarm mode, which is redeploying it. To stop that, first find the stack or service that is running it:
sudo docker stack ls
sudo docker service ls
If you see a stack listed, you can remove that with:
sudo docker stack rm $stack_name
If there are no stacks listed, or they don't apply to this container, you can delete the service with:
sudo docker service rm $service_name
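If you're not sure whether the container is managed by swarm at all, its labels will tell you; the label name below is the one swarm mode normally sets on task containers:
# empty output means the container is not a swarm task
sudo docker inspect --format '{{ index .Config.Labels "com.docker.swarm.service.name" }}' $container_id
# then remove the owning stack or service as described above
sudo docker stack rm $stack_name
sudo docker service rm $service_name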

Related

Failing to start an app as service through systemd using userdata

I want to start my app as a service from systemd, but the app is not starting.
My unit file appstart.service looks like this:
[Unit]
Description=Application start
[Service]
Type=simple
User=ec2-user
ExecStart=/usr/bin/bash /home/ec2-user/project/restartScript.sh
SyslogIdentifier=App_start
[Install]
WantedBy=multi-user.target
RestartScript.sh should start the java app:
#!/bin/bash
export SPRING_PROFILES_ACTIVE="tst,development"
cd /home/ec2-user/project
pkill java
/usr/bin/java -jar /home/ec2-user/project/app.jar >>/home/ec2-user/project/web.log 2>>/home/ec2-user/project/web-error.log &
I am starting the app as a service this way, using User Data on an AWS EC2 instance:
#!/bin/bash
mkdir /home/ec2-user/project
cd /home/ec2-user/project
sudo wget -P /home/ec2-user/project/ https://tst.s3.eu-west-1.amazonaws.com/app.jar
chown -R ec2-user:ec2-user /home/ec2-user/project
sudo wget -P /home/ec2-user/project/ https://tst.s3.eu-west-1.amazonaws.com/restartScript.sh
sudo chmod 755 /home/ec2-user/project/restartScript.sh
cd /etc/systemd/system/
sudo wget -P /etc/systemd/system/ https://tst.s3.eu-west-1.amazonaws.com/appstart.service
sudo su
systemctl daemon-reload
systemctl enable appstart.service
systemctl start appstart.service
exit
The output I am getting when I start the EC2 instance this way is:
$ systemctl status appstart.service
● appstart.service - Application start
Loaded: loaded (/etc/systemd/system/appstart.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Thu 2022-08-25 13:35:52 UTC; 4min 19s ago
Process: 7328 ExecStart=/usr/bin/bash /home/ec2-user/project/restartScript.sh (code=exited, status=0/SUCCESS)
Main PID: 7328 (code=exited, status=0/SUCCESS)
Aug 25 13:35:52 ip-x-x-x-x.tst.local systemd[1]: Started Application start.
When I try to do
systemctl start appstart.service
Nothing changes. The application is not working.
Any idea why this is happening?
OS on the machine:
$ cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
Nothing seems to be wrong with this: the service ran and the application started and finished successfully. In case of failure the Active state would be failed, but here it becomes inactive (dead), and the Main PID line reports (code=exited, status=0/SUCCESS).
To verify that the service runs correctly, write this line somewhere in RestartScript.sh:
echo "Test" > test.txt
After starting the service you will find the created file next to the RestartScript.sh file.
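If the goal is for systemd to keep supervising the Java process itself, another option is to run java in the foreground from ExecStart instead of backgrounding it in the script. A sketch, reusing the paths and Spring profiles from the question (Restart=on-failure is an optional addition):
[Unit]
Description=Application start
After=network.target
[Service]
Type=simple
User=ec2-user
WorkingDirectory=/home/ec2-user/project
Environment=SPRING_PROFILES_ACTIVE=tst,development
# java stays in the foreground, so systemd tracks it as the main PID
ExecStart=/usr/bin/java -jar /home/ec2-user/project/app.jar
Restart=on-failure
SyslogIdentifier=App_start
[Install]
WantedBy=multi-user.target
With the process in the foreground, systemctl status shows active (running) instead of inactive (dead), and the output lands in the journal rather than in web.log.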
I managed to resolve the issue by changing the WantedBy section in the appstart.service file to default.target.

Falcon sensor fails to start the agent

I am trying to install falcon-sensor (version 4.16.0) on a Debian machine. When I try to start the agent, it doesn't start up.
I checked the logs of falcon-sensor and here is what it says :
2019 unable to initialize dynamic libraries. (2309) [144]
I checked the log of falconctl and here is what it says :
Invalid file /opt/CrowdStrike/falconstore length: 0 (2277) [568]
I tried finding answers through googling but I could not find any.
Any help on this would be really helpful
Thanks in advance.
falcon-sensor has libssl.so.1.0.0 as a dependency, but that version is missing on Debian Stretch (there is version 1.0.2).
I just created a symlink to the existing version:
ln -s libssl.so.1.0.2 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0
and now it works.
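To confirm which libssl the sensor is actually looking for before adding the symlink, a check along these lines should work; the binary path is an assumption based on the /opt/CrowdStrike paths mentioned elsewhere in this thread:
# list the shared-library dependencies and spot the missing libssl
ldd /opt/CrowdStrike/falcond | grep -E 'libssl|not found'
# create the compatibility symlink only if libssl.so.1.0.0 is reported as missing
sudo ln -s libssl.so.1.0.2 /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0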
Make sure you performed the basic steps correctly:
1) Download falcon-sensor.rpm to your machine.
2) sudo yum install -y falcon-sensor.rpm
3) sudo /opt/CrowdStrike/falconctl -s --cid=<Your-CID>
4) service falcon-sensor start
Check status:
[ec2-user@ip-172-21-80-18 ~]$ service falcon-sensor status
Redirecting to /bin/systemctl status falcon-sensor.service
● falcon-sensor.service - CrowdStrike Falcon Sensor
Loaded: loaded (/usr/lib/systemd/system/falcon-sensor.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2020-09-05 13:48:34 UTC; 1min 6s ago
Process: 2746 ExecStart=/opt/CrowdStrike/falcond (code=exited, status=0/SUCCESS)
Process: 2729 ExecStartPre=/opt/CrowdStrike/falconctl -g --cid (code=exited, status=0/SUCCESS)
Main PID: 2747 (falcond)
Tasks: 19
Memory: 4.5M
CGroup: /system.slice/falcon-sensor.service
├─2747 /opt/CrowdStrike/falcond
└─2749 falcon-sensor
systemd[1]: Starting CrowdStrike Falcon Sensor...
falconctl[2729]: cid="<Your-CID>".
falcond[2747]: starting
Started CrowdStrike Falcon Sensor.
falcon-sensor[2749]: No traceLevel set via falconctl defaulting to none
falcon-sensor[2749]: LogLevelUpdate: none = trace level 0.
View process:
ps -e | grep falcon-sensor
2749 ? 00:00:00 falcon-sensor
Notice that all commands should be executed with sudo, or else you will see the error below:
$ service falcon-sensor restart #< --- No root permission
Redirecting to /bin/systemctl restart falcon-sensor.service
Failed to restart falcon-sensor.service: The name org.freedesktop.PolicyKit1 was not provided by any .service files
See system logs and 'systemctl status falcon-sensor.service' for details.
I followed install steps 1-3 below without issue, but I do not have a CID; please let me know how to get one.
1) Download falcon-sensor.rpm to your machine.
2) sudo yum install -y falcon-sensor.rpm
3) sudo /opt/CrowdStrike/falconctl -s --cid=<Your-CID>
4) service falcon-sensor start

systemd unit for pgagent

I want to make a systemd unit for pgagent.
I found only an init.d script on this page http://technobytz.com/automatic-sql-database-backup-postgres.html, but I don't know how to exec start-stop-daemon from systemd.
I have written this unit:
[Unit]
Description=pgagent
After=network.target postgresql.service
[Service]
ExecStart=start-stop-daemon -b --start --quiet --exec pgagent --name pgagent --startas pgagent -- hostaddr=localhost port=5432 dbname=postgres user=postgres
ExecStop=start-stop-daemon --stop --quiet -n pgagent
[Install]
WantedBy=multi-user.target
But I get errors like:
[/etc/systemd/system/pgagent.service:14] Executable path is not absolute, ignoring: start-stop-daemon --stop --quiet -n pgagent
What is wrong with that unit?
systemd expects the ExecStart and ExecStop commands to include the full path to the executable.
start-stop-daemon is not necessary for services under systemd management; you will want systemd to execute the underlying pgagent command directly.
Look at https://unix.stackexchange.com/questions/220362/systemd-postgresql-start-script for an example.
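For example, a minimal unit that lets systemd supervise pgagent directly might look like this. It assumes the binary is at /usr/bin/pgagent (check with which pgagent), that your build supports the -f flag to stay in the foreground (see pgagent --help), and that running as the postgres user is acceptable; the connection string is the one from the question:
[Unit]
Description=pgagent
After=network.target postgresql.service
[Service]
Type=simple
User=postgres
# -f keeps pgagent in the foreground so systemd can track it directly
ExecStart=/usr/bin/pgagent -f hostaddr=localhost port=5432 dbname=postgres user=postgres
[Install]
WantedBy=multi-user.target
No ExecStop is needed; systemd stops the process with SIGTERM on its own.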
If you installed pgagent with yum or apt-get, it should have created the systemd file for you. For example, on RHEL 7 (essentially CentOS 7), you can install PostgreSQL 12 followed by pgagent
sudo yum install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
sudo yum install postgresql12
sudo yum install postgresql12-server
sudo yum install pgagent_12.x86_64
This installs PostgreSQL to /var/lib/pgsql/12 and pgagent_12 to /usr/bin/pgagent_12
In addition, it creates a systemd file at /usr/lib/systemd/system/pgagent_12.service
View the status of the service with systemctl status pgagent_12
Configure it to auto-start, then start it, with:
sudo systemctl enable pgagent_12
sudo systemctl start pgagent_12
Most likely the authentication will fail, since the default .service file has
ExecStart=/usr/bin/pgagent_12 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT}
Confirm with sudo tail /var/log/pgagent_12.log which will show
Sat Oct 12 19:35:47 2019 WARNING: Couldn't create the primary connection [Attempt #1]
Sat Oct 12 19:35:52 2019 WARNING: Couldn't create the primary connection [Attempt #2]
Sat Oct 12 19:35:57 2019 WARNING: Couldn't create the primary connection [Attempt #3]
Sat Oct 12 19:36:02 2019 WARNING: Couldn't create the primary connection [Attempt #4]
To fix things, we need to create a .pgpass file that is accessible when the service starts. First, stop the service
sudo systemctl stop pgagent_12
Examining the service file with less /usr/lib/systemd/system/pgagent_12.service shows it has
User=pgagent
Group=pgagent
Furthermore, /etc/pgagent/pgagent_12.conf has
DBNAME=postgres
DBUSER=postgres
DBHOST=127.0.0.1
DBPORT=5432
LOGFILE=/var/log/pgagent_12.log
Examine the /etc/passwd file to look for the pgagent user and its home directory: grep "pgagent" /etc/passwd
pgagent:x:980:977:pgAgent Job Schedule:/home/pgagent:/bin/false
Thus, we need to create a .pgpass file at /home/pgagent/.pgpass to define the postgres user's password
sudo su -
mkdir /home/pgagent
chown pgagent:pgagent /home/pgagent
chmod 0700 /home/pgagent
echo "127.0.0.1:5432:postgres:postgres:PasswordGoesHere" > /home/pgagent/.pgpass
chown pgagent:pgagent /home/pgagent/.pgpass
chmod 0600 /home/pgagent/.pgpass
The directory and file permissions are important. If you're having problems, you can enable debug logging by editing the service file at /usr/lib/systemd/system/pgagent_12.service and updating the ExecStart command to include -l 2:
ExecStart=/usr/bin/pgagent_12 -l 2 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT}
After changing a .service file, things must be reloaded with sudo systemctl daemon-reload (systemd will inform you of this requirement if you forget it).
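Put together, the debug loop described above looks roughly like this:
sudo systemctl stop pgagent_12
# edit /usr/lib/systemd/system/pgagent_12.service and add -l 2 to ExecStart
sudo systemctl daemon-reload
sudo systemctl start pgagent_12
sudo tail -f /var/log/pgagent_12.log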
Keep starting/stopping the service and checking /var/log/pgagent_12.log. Eventually, it will start properly and sudo systemctl status pgagent_12 will show:
● pgagent_12.service - PgAgent for PostgreSQL 12
Loaded: loaded (/usr/lib/systemd/system/pgagent_12.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-10-12 20:18:18 PDT; 13s ago
Process: 6159 ExecStart=/usr/bin/pgagent_12 -s ${LOGFILE} hostaddr=${DBHOST} dbname=${DBNAME} user=${DBUSER} port=${DBPORT} (code=exited, status=0/SUCCESS)
Main PID: 6160 (pgagent_12)
Tasks: 1
Memory: 1.1M
CGroup: /system.slice/pgagent_12.service
└─6160 /usr/bin/pgagent_12 -s /var/log/pgagent_12.log hostaddr=127.0.0.1 dbname=postgres user=postgres port=5432
Oct 12 20:18:18 prismweb3 systemd[1]: Starting PgAgent for PostgreSQL 12...
Oct 12 20:18:18 prismweb3 systemd[1]: Started PgAgent for PostgreSQL 12.

httpd/apachectl service fails to start on RHEL 7

I'm having some trouble launching my Apache server from RHEL 7 (Amazon ec2). My larger goal is to host a Flask application from the ec2 instance using an Anaconda environment, but right now I'm just concerned with getting the httpd service started properly.
I've found a number of similar questions posted here, here, here, etc., but none seem to address the exact problem I'm experiencing.
I'm following this tutorial down to the last > character, but the commands
sudo apachectl restart
and
sudo service httpd restart
both result in errors and direct me to examine systemctl status httpd.service for more information. The output of that command is as follows:
httpd.service - The Apache HTTP Server
Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2018-04-06 21:00:42 UTC; 4s ago
Docs: man:httpd(8)
man:apachectl(8)
Process: 32166 ExecStop=/bin/kill -WINCH ${MAINPID} (code=exited, status=1/FAILURE)
Process: 32165 ExecStart=/usr/sbin/httpd $OPTIONS -DFOREGROUND (code=exited, status=0/SUCCESS)
Main PID: 32165 (code=exited, status=0/SUCCESS)
[long ec2 ip address] systemd[1]: Starting The Apache HTTP Server...
[long ec2 ip address] httpd[32165]: httpd (pid 28220) already running
[long ec2 ip address] kill[32166]: kill: cannot find process ""
[long ec2 ip address] systemd[1]: httpd.service: control process exited, code=exited status=1
[long ec2 ip address] systemd[1]: Failed to start The Apache HTTP Server.
[long ec2 ip address] systemd[1]: Unit httpd.service entered failed state.
[long ec2 ip address] systemd[1]: httpd.service failed.
The output of journalctl -xe returns the same.
Some information about my system (don't know whether or not any of this will be helpful, but I figured it would be best to include it):
Apache Version: Apache/2.4.6 (Red Hat Enterprise Linux) configured
$ /usr/bin/python -V
Python 2.7.5
$ sudo yum install mod_wsgi
Package mod_wsgi-3.4-12.el7_0.x86_64 already installed and latest version
$ service httpd configtest
Syntax OK
$ sudo chkconfig --levels 235 httpd on
Note: Forwarding request to 'systemctl enable httpd.service'
The command sudo netstat -lnp | grep :80 returns tcp 0 0 :::80 :::* LISTEN 28220/httpd
I am now noticing that the file /etc/init.d/httpd does not exist.
Anybody have a hint? If this question has been asked before, please direct me to it. I've searched all over, with no luck thus far.
Cheers.
Try killing the old PID; it looks like an httpd process is already running outside of the service. Run ps -ef | grep httpd to see what is running and kill it using sudo kill -9 processid (e.g. sudo kill -9 13254).
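Concretely, something along these lines; the PID 28220 comes from the netstat output in the question, so substitute whatever ps reports on your machine:
# find the stray httpd processes ([h]ttpd keeps grep itself out of the list)
ps -ef | grep [h]ttpd
# kill the process holding port 80, then start the unit cleanly
sudo kill -9 28220
sudo systemctl start httpd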

Failed to connect to containerd: failed to dial

Just installed Docker CE following the official instructions, using the repository, on Ubuntu 14.04.
The installation went successfully and the daemon is running:
$ ps aux | grep docker
[...] /usr/bin/dockerd --raw-logs [...]
My user is in the docker group:
$ groups
[...] docker
The CLI can't seem to communicate (same with sudo):
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock.
Is the docker daemon running?
The socket seems to have the correct permissions:
$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Feb 4 16:21 /var/run/docker.sock
The log does report some issues though:
$ sudo tail -f /var/log/upstart/docker.log
Failed to connect to containerd: failed to dial "/var/run/docker/containerd/docker-containerd.sock": dial unix:///var/run/docker/containerd/docker-containerd.sock: timeout
/var/run/docker.sock is up
time="2018-02-04T16:22:21.031459040+01:00" level=info msg="libcontainerd: started new docker-containerd process" pid=17147
INFO[0000] starting containerd module=containerd revision=89623f28b87a6004d4b785663257362d1658a729 version=v1.0.0
INFO[0000] setting subreaper... module=containerd
containerd: invalid argument
time="2018-02-04T16:22:21.056685023+01:00" level=error msg="containerd did not exit successfully" error="exit status 1" module=libcontainerd
Any advice to make this work?
Relogging and restarting Docker have already been done, of course.
As @bobbear suggested, and as is actually mentioned in the official docs, one of the prerequisites is:
Version 3.10 or higher of the Linux kernel. The latest version of the kernel available for your platform is recommended.
After having checked my Kernel version:
$ uname -a
Linux [...] 3.2.[...]-generic [...]-Ubuntu [...] x86_64
I searched for candidates:
$ apt-cache search linux-image
And installed my new_kernel:
$ sudo apt-get install \
linux-image-new_kernel \
linux-headers-new_kernel \
linux-image-extra-new_kernel
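After rebooting into the new kernel, a quick check that the prerequisite is now met and that the daemon comes up might look like this (Ubuntu 14.04 uses upstart, hence service rather than systemctl):
# the kernel must report 3.10 or higher for Docker CE
uname -r
# restart the daemon and verify the CLI can reach it
sudo service docker restart
docker ps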
The same situation happened to me. It is because your Linux kernel version is too low. Check it with the command "uname -r"; if the version is below 3.10 (for example, Debian 7 Wheezy defaults to 3.2), then even if you install docker-ce successfully, you still won't be able to start the docker daemon. That's why! Almost all answers on the web tell you to 'restart' and so on, but they do not consider this problem.
