GitLab CI - gitlab-runner runs as root

I'm new to continuous integration on iOS. I'm trying to run a build with gitlab-runner using the shell executor, but I get an error that pod cannot be run as root. I'm sure I did not install CocoaPods with sudo. I added whoami to before_script, and sure enough my runner runs as root.
Has anyone hit the same issue, and how do you fix it?

Register the runner without sudo, and that should set gitlab-runner up to run as your current user.
So the steps should be:
sudo curl --output /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-darwin-amd64
sudo chmod +x /usr/local/bin/gitlab-runner
gitlab-runner register ...
gitlab-runner install
Remember to stop your sudo gitlab-runner service, otherwise you could have multiple runners on the same machine fighting for the same jobs.
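If a root-level service was already installed, a minimal cleanup sketch (assuming the default install paths from the steps above) could look like this:

```shell
# Stop and remove the old root-owned runner service
sudo gitlab-runner stop
sudo gitlab-runner uninstall

# Reinstall and start the service as the current (non-root) user
gitlab-runner install
gitlab-runner start

# Jobs should now run as this user; a whoami in before_script can confirm it
```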

The documentation describes how to use sudo with the gitlab-runner user. I am not sure, but I think that approach creates multiple runners.
On CentOS 8 I modified the gitlab-runner.service and changed the --user option to root.
Here is the default configuration:
/usr/bin/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --user gitlab-runner
or, shown as the full unit file:
root@server # cat /etc/systemd/system/gitlab-runner.service
[Unit]
Description=GitLab Runner
After=syslog.target network.target
ConditionFileIsExecutable=/usr/bin/gitlab-runner
[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/bin/gitlab-runner "run" "--working-directory" "/home/gitlab-runner" "--config" "/etc/gitlab-runner/config.toml" "--service" "gitlab-runner" "--user" "gitlab-runner"
Restart=always
RestartSec=120
[Install]
WantedBy=multi-user.target
and I changed to this:
[Unit]
Description=GitLab Runner
After=syslog.target network.target
ConditionFileIsExecutable=/usr/bin/gitlab-runner
[Service]
StartLimitInterval=5
StartLimitBurst=10
ExecStart=/usr/bin/gitlab-runner "run" "--working-directory" "/home/gitlab-runner" "--config" "/etc/gitlab-runner/config.toml" "--service" "gitlab-runner" "--user" "root"
User=root
Group=root
Restart=always
RestartSec=120
[Install]
WantedBy=multi-user.target
In short, --user gitlab-runner becomes --user root, plus the added User= and Group= lines.
NOTE
I did this purely for testing and ignored the security implications; please make sure you consider the security side before running jobs as root.
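After editing the unit file, systemd needs to reload its configuration before the change takes effect; roughly:

```shell
sudo systemctl daemon-reload
sudo systemctl restart gitlab-runner.service

# Confirm the process now runs as root
sudo systemctl status gitlab-runner.service
```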

Related

ExecStart path to installed packages with python-poetry

What's the ExecStart path to the installed packages in a systemd unit file like the one below, when using python-poetry instead of pip/venv on Ubuntu 20.04?
[Unit]
Description=Gunicorn Daemon for FastAPI Demo Application
After=network.target
[Service]
User=demo
Group=www-data
WorkingDirectory=/home/demo/fastapi_demo
ExecStart=/home/demo/fastapi_demo/venv/bin/gunicorn -c gunicorn_conf.py app:app
[Install]
WantedBy=multi-user.target
I think this is the line I need to replace:
ExecStart=/home/demo/fastapi_demo/venv/bin/gunicorn -c gunicorn_conf.py app:app
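One possible approach (an assumption, not a confirmed answer): ask Poetry for the virtualenv it manages for the project, then point ExecStart at the gunicorn binary inside it. The hash in the path below is illustrative, not real.

```shell
cd /home/demo/fastapi_demo
poetry env info --path
# prints something like /home/demo/.cache/pypoetry/virtualenvs/fastapi-demo-AbCd1234-py3.8

# The ExecStart line would then become (example path):
# ExecStart=/home/demo/.cache/pypoetry/virtualenvs/fastapi-demo-AbCd1234-py3.8/bin/gunicorn -c gunicorn_conf.py app:app
```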

Systemd not starting dependent service on slow device

I have an interesting problem that I have a reproducer for. Using a container to compartmentalize the system and make it reproducible, I can run it successfully on my powerful laptop, but it fails on a slow Raspberry Pi.
::::::::::::::
A.service
::::::::::::::
[Unit]
Description=Service A
After=B.service
BindsTo=B.service
[Service]
Type=simple
Restart=always
RestartSec=1
ExecStartPre=/bin/sleep 1
ExecStart=/bin/sleep 100
ExecStartPost=/bin/sleep 1
TimeoutStartSec=10s
[Install]
WantedBy=multi-user.target
::::::::::::::
B.service
::::::::::::::
[Unit]
Description=Service B
After=C.service
BindsTo=C.service
[Service]
Type=simple
Restart=always
RestartSec=1
ExecStartPre=/bin/sleep 1
ExecStart=/bin/sleep 100
ExecStartPost=/bin/sleep 1
TimeoutStartSec=10s
[Install]
WantedBy=multi-user.target
::::::::::::::
C.service
::::::::::::::
[Unit]
Description=Service C
[Service]
Type=simple
Restart=always
RestartSec=1
ExecStartPre=/bin/sleep 1
ExecStart=/bin/sleep 100
ExecStartPost=/bin/sleep 1
TimeoutStartSec=10s
[Install]
WantedBy=multi-user.target
::::::::::::::
Dockerfile
::::::::::::::
FROM ubuntu:18.04
RUN DEBIAN_FRONTEND=noninteractive apt update && apt install -y systemd init socat
COPY *.service /etc/systemd/system/
#RUN systemctl enable A.service
ENTRYPOINT ["/sbin/init"]
::::::::::::::
run.sh
::::::::::::::
docker build -t service .
docker stop -t 0 service && docker rm service
docker run -d --name service --privileged --cap-add SYS_ADMIN service
#docker run -d --cpus="0.3" --name service --privileged --cap-add SYS_ADMIN service
sleep 3
docker exec -it service service A start
sleep 1
docker exec -it service service A status
docker exec -it service service B status
docker exec -it service service C status
The intent here is that there are 3 services: A, B, and C. The dependency chain is A->B->C. When starting service A, B should be started, which in turn starts C. The services are dummies in this case, and I've tried adding delays before and after each service, but the problem persists.
On my powerful laptop, I can somewhat reproduce the issue by adding "--cpus=0.3" to the 'docker run' line.
Any ideas on what could be the culprit?
I have discovered that service has an interesting "feature":
# avoid deadlocks during bootup and shutdown from units/hooks
# which call "invoke-rc.d service reload" and similar, since
# the synchronous wait plus systemd's normal behaviour of
# transactionally processing all dependencies first easily
# causes dependency loops
if ! systemctl --quiet is-active multi-user.target; then
sctl_args="--job-mode=ignore-dependencies"
fi
Obviously, if systemctl is launched with --job-mode=ignore-dependencies, it is less likely to work :-).
As expected, the following sequence works:
docker run -d --name service --privileged --cap-add SYS_ADMIN service
docker exec -ti service systemctl start multi-user.target
docker exec -it service service A start
Obviously, the best option is to replace service A start with systemctl start A. By the way, the service wrapper is Debian/Ubuntu-specific, while systemctl is common to nearly every systemd-based distribution.
I think that any service manually started in a docker container is impacted by this issue.
However, I still can't explain why it works on your powerful laptop.
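Applying that advice to the reproducer, run.sh could invoke systemctl directly so dependencies are processed normally (a sketch based on the script above):

```shell
docker build -t service .
docker run -d --name service --privileged --cap-add SYS_ADMIN service
sleep 3
# systemctl, unlike the service wrapper, starts B and C as dependencies of A
docker exec -it service systemctl start A.service
docker exec -it service systemctl status A.service B.service C.service
```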

Start a Docker container at startup in Linux on Azure

I have a Linux virtual machine on Azure with Docker installed. At startup, I want to run a Docker container. For that, I created a startup_script.sh in the /tmp folder with this content:
sudo docker run -d -p 8787:8787 -e USER=rstudio \
  -e PASSWORD=mypassword myacr.azurecr.io/mycontainer
then I run this command
chmod u+x /tmp/startup_script.sh
Then, under /etc/systemd/system I created a service
[Unit]
Description=Run script at startup after network becomes reachable
After=default.target
[Service]
Type=simple
RemainAfterExit=yes
ExecStart=/tmp/startup_script.sh
TimeoutStartSec=0
[Install]
WantedBy=default.target
Then, run
systemctl daemon-reload
systemctl enable run-at-startup.service
When I restart the machine, the Docker container is not running.
Docker recommends that you use its restart policies, and avoid using process managers like systemctl to start containers (https://docs.docker.com/config/containers/start-containers-automatically/).
First, you need to make sure that the Docker daemon (i.e. the Docker service) starts on boot.
On Debian and Ubuntu, the Docker service is configured to start on boot by default. To automatically start Docker and Containerd on boot for other distros, use the commands below:
sudo systemctl enable docker.service
sudo systemctl enable containerd.service
If you're on Windows, make sure that you ticked Start Docker Desktop when you log in in Docker Desktop settings.
Then, for each container you want to start on boot, you need to use the --restart flag when running the container, e.g.:
sudo docker run --restart always -d -p 8787:8787 -e USER=rstudio \
  -e PASSWORD=mypassword myacr.azurecr.io/mycontainer
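To verify that the policy was applied, docker inspect can print it for a running container (the container name here is a placeholder):

```shell
docker inspect --format '{{.HostConfig.RestartPolicy.Name}}' my-container
# should print: always
```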

Running npm start from ExecStart in systemctl service file

I have the following service file:
[Unit]
Description=MyApp
After=syslog.target network.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=always
RestartSec=1
WorkingDirectory=/opt/nodejs-sites/MyApp
ExecStart=/usr/bin/npm start
Environment=NODE_ENV=development
User=root
Group=root
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=MyApp
[Install]
WantedBy=multi-user.target
Here is the error from /var/log/syslog
Oct 14 13:00:55 devu18 systemd[1]: Started myapp.
Oct 14 13:00:55 devu18 systemd[3203]: myapp.service: Changing to the requested working directory failed: No such file or directory
Oct 14 13:00:55 devu18 systemd[3203]: myapp.service: Failed at step CHDIR spawning /usr/bin/npm: No such file or directory
Oct 14 13:00:55 devu18 systemd[1]: myapp.service: Main process exited, code=exited, status=200/CHDIR
Oct 14 13:00:55 devu18 systemd[1]: myapp.service: Failed with result 'exit-code'.
For the life of me I can't figure out why it's complaining that it cannot find the file. Running npm start from the same working directory works just fine, no problems. Am I missing some permission or +x somewhere?
This error can happen because the executable is run without your shell's environment, so PATH may not include the directory where npm lives.
Here is a quick fix recipe
You can fix that by creating a Bash Script to do everything you need.
script.sh
#! /bin/bash
source ${HOME}/.bashrc
cd /absolute/path/to/my/project
export NODE_ENV=development
npm start
Now change its mode to be executable
chmod +x script.sh
Now we can create our service (located for instance in /etc/systemd/system/my-project.service)
my-project.service
[Unit]
Description=My Project
After=syslog.target network.target
[Service]
Type=simple
Restart=always
RestartSec=1
ExecStart=/path/to/my/script.sh
User=root
[Install]
WantedBy=multi-user.target
And now let's run it and enable it
systemctl start my-project
systemctl enable my-project
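If the service still fails to start, the journal usually shows why; for example:

```shell
journalctl -u my-project -n 50 --no-pager
```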
Troubleshooting (Ubuntu 18.04 LTS)
This happened to me recently: for some reason systemd couldn't effectively source .bashrc, leaving me with an error along the lines of npm does not exist. This is a workaround that helped me out.
Change the script.sh file's content into the following
#! /bin/bash
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
cd /absolute/path/to/my/project
export NODE_ENV=development
npm start
Now your npm service should hopefully work.
This solution was tested on:
Ubuntu 20.04 LTS, with Node version 14.16.0 installed via NVM (Node Version Manager)
Ubuntu 18.04 LTS, with Node version 12.22.6 installed via NVM (Node Version Manager)
CentOS 7, with Node version 12.22.6 installed via NVM (Node Version Manager)
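If sourcing NVM from the script remains unreliable, another option (an assumption, not part of the answer above) is to resolve the absolute npm path once and hard-code it in the unit file. The version directory shown is an example:

```shell
# In an interactive shell with the desired node version active:
which npm
# e.g. /home/me/.nvm/versions/node/v14.16.0/bin/npm

# Then in the unit file, use the resolved path directly:
# ExecStart=/home/me/.nvm/versions/node/v14.16.0/bin/npm start
```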
/bin/npm --prefix $PATH start
This worked for me (here $PATH is a variable holding the project directory passed to npm's --prefix flag, not the shell's search path).

Coreos Systemd Unit File - keep the container running

I am running CoreOS Stable 776.4.0.
I want to keep a container running all the time, but I cannot get it to work. I expect the container to restart when it is killed, but it does not. I had this working before, but I don't remember how I did it.
Please help me!
I kill it with docker stop proxy.
Restart=always will continuously stop and start the container.
This is my systemd unit file.
[Unit]
Description=nginx reverse proxy
After=docker.service
Requires=docker.service
[Service]
TimeoutStartSec=0
Restart=on-failure
ExecStartPre=-/usr/bin/docker stop proxy
ExecStartPre=-/usr/bin/docker rm proxy
ExecStart=/usr/bin/docker run -d --name proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro zhex900/nginx-proxy
[Install]
WantedBy=multi-user.target
Your immediate problem is this:
ExecStart=/usr/bin/docker run -d --name proxy -p 80:80 \
  -v /var/run/docker.sock:/tmp/docker.sock:ro zhex900/nginx-proxy
You are passing the -d option to the docker client, which means "start the container in the background and return immediately". Because the client exits, systemd interprets this as a failure and will attempt to restart the service.
The simplest solution is to remove the -d from the command line.
Another option is to not use systemd, and to simply start the container with docker run --restart=always ..., which will cause Docker to ensure that the container is running, even after a reboot.
Sorry, I asked a silly question. The problem was that I was running the container as a daemon. Removing -d solved the problem.
ExecStart=/usr/bin/docker run --name proxy -p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro zhex900/nginx-proxy