I was trying to debug a failed test job in a CircleCI workflow which had a config similar to this:
integration_tests:
  docker:
    - image: my-group/our-custom-image:latest
    - image: postgres:9.6.11
      environment:
        POSTGRES_USER: user
        POSTGRES_PASSWORD: password
        POSTGRES_DB: db
  steps:
    - stuff && things
When I ran the job with SSH debugging and SSH'ed to where the CircleCI app told me, I found myself in a strange maze of twisty little namespaces, all alike. I ran ps awwwx and could see processes from both Docker containers:
root@90c93bcdd369:~# ps awwwx
PID TTY STAT TIME COMMAND
1 pts/0 Ss 0:00 /dev/init -- /bin/sh
6 pts/0 S+ 0:00 /bin/sh
7 pts/0 Ss+ 0:00 postgres
40 ? Ssl 0:02 /bin/circleci-agent ...
105 ? Ss 0:00 postgres: checkpointer process
106 ? Ss 0:00 postgres: writer process
107 ? Ss 0:00 postgres: wal writer process
108 ? Ss 0:00 postgres: autovacuum launcher process
109 ? Ss 0:00 postgres: stats collector process
153 pts/1 Ss+ 0:00 bash "stuff && things"
257 pts/1 Sl+ 0:31 /path/to/our/application
359 pts/2 Ss 0:00 -bash
369 pts/2 R+ 0:00 ps awwwx
It seems like they somehow "merged" the namespaces of the two Docker containers into a third namespace, in which the shell they provided me resides: pid 7 is running from one Docker container, and pid 257 is the application running inside the my-group/our-custom-image:latest container.
The cgroup view from /proc also seems to show some kind of merging going on:
root@90c93bcdd369:~# cat /proc/7/cgroup
12:devices:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
11:blkio:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
10:memory:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
9:hugetlb:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
8:perf_event:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
7:net_cls,net_prio:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
6:rdma:/
5:cpu,cpuacct:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
4:cpuset:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
3:pids:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
2:freezer:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
1:name=systemd:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/d8fc5294708fd4cf91fa405d6462571e1dc56413b55a6b6e5790b8f158fee632
0::/system.slice/containerd.service
root@90c93bcdd369:~# cat /proc/257/cgroup
12:devices:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
11:blkio:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
10:memory:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
9:hugetlb:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
8:perf_event:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
7:net_cls,net_prio:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
6:rdma:/
5:cpu,cpuacct:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
4:cpuset:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
3:pids:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
2:freezer:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
1:name=systemd:/docker/94376e7880579c6bde0622017594fcdb8d5767788bb4790c0f014db282198577/90c93bcdd3693a918adddf62939c5b31e86868864edabe7347a268149e797f43
0::/system.slice/containerd.service
Is there a standard Docker feature, or some cgroup feature, being used to produce this magic? Or is this some custom, proprietary CircleCI feature?
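For what it's worth, there is a standard Docker feature that can produce exactly this kind of view: one container can be started inside another container's PID namespace with --pid=container:&lt;name|id&gt;. A sketch of what I mean (container names and images here are purely illustrative, not CircleCI's actual setup):

```shell
# Start a primary container, then a second one that joins its PID namespace.
# Names and images below are illustrative assumptions, not CircleCI's setup.
docker run -d --name primary my-group/our-custom-image:latest
docker run -d --name sidecar --pid=container:primary postgres:9.6.11

# From inside either container, ps now shows the processes of both:
docker exec sidecar ps ax
```

That would match the output above: each container keeps its own cgroup (hence the two distinct container hashes under the same /docker/... parent in /proc/*/cgroup), while sharing a single PID namespace.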
I'm writing tests for an Electron application in TypeScript.
Inside the application a listener is registered for SIGTERM:
process.on('SIGTERM', async () => {
  console.log('before exit');
  await this.exit(); // some inner function; execution never even reaches this statement
});
Locally everything works fine, but on CI, where the app runs inside a Docker container, it looks like it never receives SIGTERM.
To start the application I'm using child_process.spawn:
import { spawn } from 'child_process';
import type { ChildProcess } from 'child_process';

let yarnStart: ChildProcess = spawn('yarn', ['start'], { shell: true });
// 'start' is just a script in package.json
I tried to kill the application three different ways and none of them worked. The application never receives SIGTERM (no before exit is logged), and after manually stopping the CI build, ps aux in the final step still shows my process.
// 1-way
yarnStart.kill('SIGTERM');
// 2-way
process.kill(yarnStart.pid, 'SIGTERM');
// 3-way
import { execSync } from 'child_process';
execSync(`kill -15 ${yarnStart.pid}`);
Why can't Node.js properly deliver SIGTERM inside the Docker container?
The only difference: locally I have Debian 9 (stretch) and the image is based on Debian 10 (buster). Same Node.js version, 12.14.1. I will try to build the container with stretch to see how it behaves, but I'm skeptical that it will help.
UPD
There is a difference in how the processes are initiated (because scripts on CI run inside the container, every instruction is run with /bin/sh -c).
When you execute ps aux you will see:
//locally
myuser 101457 1.3 0.1 883544 58968 pts/8 Sl+ 10:32 0:00 /usr/bin/node /usr/share/yarn/bin/yarn.js start
myuser 101468 1.6 0.2 829316 69456 pts/8 Sl+ 10:32 0:00 /usr/bin/node /usr/share/yarn/lib/cli.js start
myuser 101479 1.6 0.2 829576 69296 pts/8 Sl+ 10:32 0:00 /usr/bin/node /usr/share/yarn/lib/cli.js start:debug
myuser 101490 0.2 0.0 564292 31140 pts/8 Sl+ 10:32 0:00 /usr/bin/node /home/myuser/myrepo/electron-app/node_modules/.bin/electron -r ts-node/register ./src/main.ts
myuser 101497 143 1.4 9215596 485132 pts/8 Sl+ 10:32 0:35 /home/myuser/myrepo/node_modules/electron/dist/electron -r ts-node/register ./src/main.ts
//container
root 495 0.0 0.0 2392 776 ? S 09:05 0:00 /bin/sh -c yarn start
root 496 1.0 0.2 893240 74336 ? Sl 09:05 0:00 /usr/local/bin/node /opt/yarn-v1.22.5/bin/yarn.js start
root 507 1.7 0.2 885588 68652 ? Sl 09:05 0:00 /usr/local/bin/node /opt/yarn-v1.22.5/lib/cli.js start
root 518 0.0 0.0 2396 712 ? S 09:05 0:00 /bin/sh -c yarn start:debug
root 519 1.7 0.2 885336 68608 ? Sl 09:05 0:00 /usr/local/bin/node /opt/yarn-v1.22.5/lib/cli.js start:debug
root 530 0.0 0.0 2396 780 ? S 09:05 0:00 /bin/sh -c electron -r ts-node/register ./src/main.ts
root 531 0.3 0.0 554764 32080 ? Sl 09:05 0:00 /usr/local/bin/node /opt/ci/jobfolder/job_id_423/electron-app/node_modules/.bin/electron -r ts-node/register ./src/main.ts
root 538 140 1.5 9072388 520824 ? Sl 09:05 0:26 /opt/ci/jobfolder/job_id_423/node_modules/electron/dist/electron -r ts-node/register ./src/main.ts
And actually, killing the process with
// 1-way
yarnStart.kill('SIGTERM');
works, but it kills only /bin/sh -c yarn start and its child process /usr/local/bin/node /opt/yarn-v1.22.5/bin/yarn.js start; the processes that actually spawn the application are still hanging.
Running everything through /bin/sh -c comes with a load of problems, one of them notably that you'll never see a signal in your application. And descendant processes create their own /bin/sh -c wrappers.
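This effect is easy to reproduce in plain shell, independent of Node.js. In the sketch below the sleep commands stand in for the real workload; the first wrapper deliberately runs two commands so sh cannot optimize itself away with an implicit exec:

```shell
work=$(mktemp -d)

# Wrapper without exec: an extra /bin/sh sits in front of the workload.
# The trailing "true" keeps sh from exec-optimizing the single command away.
sh -c "sh -c 'sleep 2; echo survived > $work/orphan'; true" &
wrapper=$!
sleep 1
kill -TERM "$wrapper"    # the signal stops at the wrapper; the inner sh survives

# Wrapper with exec: sh replaces itself, so the pid we hold IS the workload
# and SIGTERM reaches the application directly.
sh -c 'exec sleep 60' &
app=$!
sleep 1
kill -TERM "$app"
```

This is why "use exec in wrapper scripts" fixes signal delivery: it removes the intermediate /bin/sh from the process tree, so the pid you signal is the application itself.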
I found a solution in https://stackoverflow.com/a/33556110/4577788
I also found an alternative solution (but it didn't work for me, maybe because of the specifics of how Node.js executes processes):
You can kill all the processes belonging to the same process tree using the process group ID. More detailed info can be found here: https://stackoverflow.com/a/15139734/4577788
When I try to execute execSync('kill -- -942'); or execSync('kill -- "-942"');, the error kill: illegal number - occurs. I didn't find out why it occurs or how to fix it.
On my Windows machine, I started a Docker container from Docker Compose. My entrypoint is a Go filewatcher that runs a task from a task manager on every file change. The executed task builds and runs the Go program.
But before I can build and run the program again after a file change, I have to kill the previously running version. And every time I kill the app process, the container is gone as well.
The goal is to kill only the svc1 process with PID 74 in this example. I tried pkill -9 svc1 and kill $(pgrep svc1), but every time the parent processes are killed too.
The commandline output from inside the container:
root@bf073c39e6a2:/app/cmd/svc1# ps -aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 2.5 0.0 104812 2940 ? Ssl 13:38 0:00 /go/bin/watcher
root 13 0.0 0.0 294316 7576 ? Sl 13:38 0:00 /go/bin/task de
root 74 0.0 0.0 219284 4908 ? Sl 13:38 0:00 /svc1
root 82 0.2 0.0 18184 3160 pts/0 Ss 13:38 0:00 /bin/bash
root 87 0.0 0.0 36632 2824 pts/0 R+ 13:38 0:00 ps -aux
root@bf073c39e6a2:/app/cmd/svc1# ps -afx
PID TTY STAT TIME COMMAND
82 pts/0 Ss 0:00 /bin/bash
88 pts/0 R+ 0:00 \_ ps -afx
1 ? Ssl 0:01 /go/bin/watcher -cmd /go/bin/task dev -startcmd
13 ? Sl 0:00 /go/bin/task dev
74 ? Sl 0:00 \_ /svc1
root@bf073c39e6a2:/app/cmd/svc1# pkill -9 svc1
root@bf073c39e6a2:/app/cmd/svc1#
Switching to the container log:
task: Failed to run task "dev": exit status 255
2019/08/16 14:20:21 exit status 1
"dev" is the name of the task in the taskmanger.
The Dockerfile:
FROM golang:stretch
RUN go get -u -v github.com/radovskyb/watcher/... \
&& go get -u -v github.com/go-task/task/cmd/task
WORKDIR /app
COPY ./Taskfile.yml ./Taskfile.yml
ENTRYPOINT ["/go/bin/watcher", "-cmd", "/go/bin/task dev", "-startcmd"]
I expect only the process with the target PID to be killed, and not the parent process that spawned it.
You can use a process manager like supervisord and configure it to re-execute your script or command even after you kill its process, which will keep your container up and running.
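A minimal sketch of such a supervisord configuration, assuming svc1 is built to /svc1 as in the question (treat the rest as assumptions to adapt):

```ini
[supervisord]
nodaemon=true            ; keep supervisord in the foreground as PID 1

[program:svc1]
command=/svc1
autorestart=true         ; restart svc1 whenever it exits or is killed
startretries=10
```

With supervisord as the entrypoint, killing /svc1 only triggers a restart of that one program instead of taking down the container.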
I'm running an rsync daemon (providing a mirror for the SaneSecurity signatures).
rsync is started like this (from runit):
/usr/bin/rsync -v --daemon --no-detach
And the config contains:
use chroot = no
munge symlinks = no
max connections = 200
timeout = 30
syslog facility = local5
transfer logging = no
log file = /var/log/rsync.log
reverse lookup = no
[sanesecurity]
comment = SaneSecurity ClamAV Mirror
path = /srv/mirror/sanesecurity
read only = yes
list = no
uid = nobody
gid = nogroup
But what I'm seeing is a lot of "lingering" rsync processes:
# ps auxwww|grep rsync
root 423 0.0 0.0 4244 1140 ? Ss Oct30 0:00 runsv rsync
root 2529 0.0 0.0 11156 2196 ? S 15:00 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 4788 0.0 0.0 20536 2860 ? S 15:10 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 5094 0.0 0.0 19604 2448 ? S 15:13 0:00 /usr/bin/rsync -v --daemon --no-detach
root 5304 0.0 0.0 11156 180 ? S 15:15 0:00 /usr/bin/rsync -v --daemon --no-detach
root 5435 0.0 0.0 11156 180 ? S 15:16 0:00 /usr/bin/rsync -v --daemon --no-detach
root 5797 0.0 0.0 11156 180 ? S 15:19 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 5913 0.0 0.0 20536 2860 ? S 15:20 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 6032 0.0 0.0 20536 2860 ? S 15:21 0:00 /usr/bin/rsync -v --daemon --no-detach
root 6207 0.0 0.0 11156 180 ? S 15:22 0:00 /usr/bin/rsync -v --daemon --no-detach
nobody 6292 0.0 0.0 20544 2744 ? S 15:23 0:00 /usr/bin/rsync -v --daemon --no-detach
root 6467 0.0 0.0 11156 180 ? S 15:25 0:00 /usr/bin/rsync -v --daemon --no-detach
root 6905 0.0 0.0 11156 180 ? S 15:29 0:00 /usr/bin/rsync -v --daemon --no-detach
(it's currently 15:30)
So there are processes (some not even having dropped privileges!) hanging around since 15:10, 15:13 and the like.
And what are they doing?
Let's check:
# strace -p 5304
strace: Process 5304 attached
select(4, [3], NULL, [3], {25, 19185}^C
strace: Process 5304 detached
<detached ...>
# strace -p 5797
strace: Process 5797 attached
select(4, [3], NULL, [3], {48, 634487}^C
strace: Process 5797 detached
<detached ...>
This happened with rsync both from Ubuntu Xenial and as installed from a PPA (currently rsync 3.1.2-1~ubuntu16.04.1york0).
One process is created for each connection. Before a client selects the module, the process does not know whether it should drop privileges.
You can easily create such a process.
nc $host 873
You will notice that the connection will not be closed after 30 s, because that timeout is just a disk-I/O timeout. The rsync client has a --contimeout option, but it seems that a corresponding server-side option is missing.
In the end, I resorted to invoking rsync from (x)inetd instead of running it standalone.
service rsync
{
    disable        = no
    socket_type    = stream
    wait           = no
    user           = root
    server         = /usr/bin/timeout
    server_args    = -k 60s 60s /usr/bin/rsync --daemon
    log_on_failure += USERID
    flags          = IPv6
}
As an additional twist, I wrapped the rsync invocation with timeout, adding another safeguard against long-running processes.
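The timeout(1) safety net can be checked in isolation. Assuming GNU coreutils, -k adds a follow-up SIGKILL in case the process ignores the initial SIGTERM:

```shell
status=0
# Send SIGTERM after 2 seconds; if the process were still alive 1 second
# later, timeout would follow up with SIGKILL (the -k safeguard).
timeout -k 1 2 sleep 100 || status=$?
echo "exit status: $status"   # 124 means the command hit the timeout
```

The same pattern works for any daemon that might wedge: the worst case becomes a 60-second-old process rather than one lingering indefinitely.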
I connect to a remote server using SSH.
I was compiling using cmake and then make; it's not common to have a progress percentage in the compilation process, but this time it did. I was watching the compilation until my internet connection failed, so PuTTY closed the session and I had to connect to the server again. I thought all the progress was lost, but first I checked the process list with ps aux, and I noticed that the processes related to the compilation were still running:
1160 tty1 Ss+ 0:00 /sbin/mingetty tty1
2265 ? Ss 0:00 sshd: root@pts/1
2269 pts/1 Ss 0:00 -bash
2353 pts/1 S+ 0:00 make
2356 pts/1 S+ 0:00 make -f CMakeFiles/Makefile2 all
2952 ? S 0:00 pickup -l -t fifo -u
3085 ? Ss 0:00 sshd: root@pts/0
3089 pts/0 Ss 0:00 -bash
3500 pts/1 S+ 0:01 make -f src/compiler/CMakeFiles/hphp_analysis.dir/bui
3509 pts/1 S+ 0:00 /bin/sh -c cd /root/hiphop/hiphop-php/src/compiler &&
3510 pts/1 S+ 0:00 /usr/bin/g++44 -DNO_JEMALLOC=1 -DNO_TCMALLOC=1 -D_GNU
3511 pts/1 R+ 0:03 /usr/libexec/gcc/x86_64-redhat-linux6E/4.4.4/cc1plus
3512 pts/0 R+ 0:00 ps ax
I would like to know if it is possible to watch the current progress of the compilation from the output of the previously closed terminal, something like cat /dev/vcsa1.
As per the comment above, you should have used screen.
As it is, you could try to peek at the file descriptors used by sshd and the shell that you started, but I don't think that will get you very far.
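For next time, if screen is not available, redirecting the build's output to a file with nohup keeps the progress readable across disconnects. A sketch, with a trivial stand-in playing the role of make:

```shell
# In real use this would be e.g.: nohup make > build.log 2>&1 &
nohup sh -c 'echo "[ 50%] Building target"; echo "[100%] Built target"' > build.log 2>&1 &
wait

# After reconnecting, the full output is still in the file
# (use tail -f build.log to follow a running build live):
cat build.log
```

Because the process is detached from the terminal and its output lands on disk, a dropped SSH session neither kills the build nor loses its progress messages.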
We have a convention whereby developers log into a server with their own username and then sudo su - django, where django is the user our apps run under.
I need to find out which developer is running a script as django. With ps faux:
root 26438 0.0 0.0 90152 3320 ? Ss 10:38 0:00 \_ sshd: fred [priv]
fred 26444 0.0 0.0 90152 1852 ? S 10:38 0:00 | \_ sshd: fred@pts/0
fred 26445 0.0 0.0 66052 1560 pts/0 Ss 10:38 0:00 | \_ -bash
root 27923 0.0 0.0 101052 1336 pts/0 S 10:46 0:00 | \_ su - django
django 27924 0.0 0.0 66188 1752 pts/0 S 10:46 0:00 | \_ - bash
django 31760 0.0 0.5 227028 42320 pts/0 S+ 11:10 0:01 | \_ python target_script.py
I can easily see what fred is up to. However, I need to write a script to act on this info, and I can find no way to pull out "fred" and "target_script.py" in one line with ps: euser, ruser, suser, and fuser all say "django". Will I need to fumble through this ps faux output to get the info I need?
I found this old post when trying to find the same basic information. The easiest way I found was to use the "loginuid" file under /proc/[pid]. For example:
cat /proc/${processid}/loginuid
Sorry for resurrecting such an old post, but maybe someone will find it useful.
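A sketch of how this can be used from a script; the loop at the end uses the user and process names from the question and is only illustrative:

```shell
# loginuid is recorded by pam_loginuid at login time and, unlike the
# effective uid, survives su/sudo. It reads 4294967295 ("unset") for
# processes that never went through a PAM login, e.g. in many containers.
loginuid=$(cat /proc/$$/loginuid)
echo "my login uid: $loginuid"

# Illustrative only: resolve the login user behind every python process
# owned by django (user and process names taken from the question).
for pid in $(pgrep -u django python 2>/dev/null); do
    getent passwd "$(cat "/proc/$pid/loginuid")" | cut -d: -f1
done
```

Mapping the loginuid back through getent passwd yields the original developer's username even though every conventional uid field says django.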
You used su - django. The "-" makes the new shell a login shell (see the su manpage), which lets the child process forget its parent's uids. That's why euser, ruser, suser, and fuser all say "django".
So yes, you may have to fumble through the parent process ids, or through the ps faux output.