How to run gitlab-runner locally

How to run gitlab-runner locally on macOS?
Hi,
I would like to run gitlab-runner locally. I have gitlab-runner installed on my Mac and I have a .gitlab-ci.yml. On CI, gitlab-runner works as I expect, but it doesn't work when I call it from the terminal:
gitlab-runner --debug exec shell lint_project
Output from the terminal:
One of my questions is: why do I get "executor not supported"?
Thank you

Instead of the shell executor, try using docker:
gitlab-runner --debug exec docker lint_project
That works for me.
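For context, a minimal sketch of a full local run, assuming the repository root contains a .gitlab-ci.yml that defines a lint_project job (the project path is illustrative):
# run from the root of the repository that contains .gitlab-ci.yml
cd ~/projects/my-project
# execute a single named job locally with the docker executor
gitlab-runner exec docker lint_project
Note that gitlab-runner exec runs one named job from the local .gitlab-ci.yml without contacting the GitLab server, which is why the job name is passed on the command line.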

Related

How to get back to shell in nodejs:latest docker image?

I'm a newbie to Docker. I tried this command:
docker run -it node:latest
Then I was in the Node REPL:
Welcome to Node.js v16.3.0.
Type ".help" for more information.
>
I tried Control+C, but that exits the container.
Is there any way to get a shell in this image?
To override the entrypoint of the Docker image you're using, pass the --entrypoint flag to the run command.
docker run -it --entrypoint bash node:latest
For a better understanding of how to work with an already running Docker container, you can refer to the following question.
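Alternatively, you can keep the default entrypoint running and open a shell in the running container with docker exec; a quick sketch (the container name my-node is illustrative):
# keep the default node REPL alive in the background by allocating a tty
docker run -d -it --name my-node node:latest
# open a bash shell in the running container without changing its entrypoint
docker exec -it my-node bash
# clean up when done
docker rm -f my-node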

Issue docker commands on Jenkins slave

I have a Jenkins master running on Windows Server 2016. I need to be able to run Linux containers to run some automated e2e tests. For reasons I won't get into, I cannot enable Hyper-V on this machine. This is preventing me from installing LCOW and Docker on my Jenkins master.
What I've done instead is set up an Ubuntu 18.04 VM in VirtualBox and installed Docker there. I've configured the VM as a Jenkins slave, using ssh to log in as the jenkins user. I've set up and configured everything so that this user can run docker commands without using sudo. If I manually ssh into the server as the jenkins user, I can run docker commands without an issue. Everything works the way you would expect.
I then set up a test build to check that everything was working correctly. The problem is that when I try to run docker commands using the Execute Shell build step, I get a docker: not found error. From what I can tell, the build is running as the correct user. I added who -u to the build step so I could check which user the build was running as.
Here is the output from my build:
[TEST - e2e - TEST] $ /bin/sh -xe /tmp/jenkins16952572249375249520.sh
+ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
+ docker run hello-world
/tmp/jenkins16952572249375249520.sh: 3: /tmp/jenkins16952572249375249520.sh: docker: not found
As I mentioned, the jenkins user has been added to the docker group and Docker has been added to $PATH (/snap/bin/):
jenkins@jenkins-docker-slave:~$ which docker
/snap/bin/docker
jenkins@jenkins-docker-slave:~$ $PATH
-bash: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
jenkins@jenkins-docker-slave:~$ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
jenkins@jenkins-docker-slave:~$ cat /etc/group | grep docker
docker:x:1001:qctesting,jenkins
As you can see from this snippet, I can successfully run docker commands after logging into the server as the jenkins user:
jenkins@jenkins-docker-slave:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
I have also configured the path to docker in the slave's node properties, as I thought that would fix my issue. As you can see, I have both git and docker listed. Git commands are working just fine; it is only the docker commands that are giving me problems. I have tried both /snap/bin and /snap/bin/docker with no luck.
I am trying to build a Jenkins job that will clone a git repo, spin up the containers I need using docker-compose and some build parameters I pass in at build time, and run my e2e tests against any environment (qa, staging, production, etc.). I just can't get the Jenkins slave to run the docker commands. What am I missing? How can I get the slave to recognize that docker is already installed on the system and that the user has the correct permissions to execute those commands?
NOTE: I am NOT trying to run docker in docker. Practically all the questions/documentation I've found about running docker commands on a Jenkins slave describe how to solve this by running the slave in a docker container and installing the docker client in that container. That is not what I'm trying to accomplish. I am trying to ssh from a Jenkins master into a Jenkins slave that already has docker installed, and run docker commands on that server as the jenkins user.
I finally figured this out thanks to the answer to this question. After reading that answer, I realized I had installed the wrong Docker package on Ubuntu. I removed the previous installation and installed the correct docker package using sudo curl -sSL https://get.docker.com/ | sh. I then restarted my Jenkins slave and everything started working.
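For reference, a sketch of the reinstall described above, assuming the broken installation was the snap package (the exact removal step depends on how Docker was originally installed):
# remove the snap-packaged docker that the non-interactive shell could not find
sudo snap remove docker
# install docker from the official convenience script, as in the answer above
sudo curl -sSL https://get.docker.com/ | sh
# make sure the jenkins user can still run docker without sudo
sudo usermod -aG docker jenkins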

setting up a docker container with a waiting bash to install npm modules

I'm trying to do something pretty trivial. For my dev environment, I want to have a shell in my container so I can run commands like npm install or npm run xxx.
(I do not want to install my npm modules during build, since I want to map them to the host so that my editor can find them there. I also do not want to execute npm install on the host, since I don't want the host to have to install npm.)
So even though in a production container I would instruct the container to just run node, in my developer container I want an always-waiting bash.
If I set the entrypoint to /bin/bash, the container immediately exits. This means I can't attach to it anymore (since it stopped), and starting it again just makes it exit immediately again.
I tried writing a small .sh that just loops and starts /bin/bash again, but using that in my ENTRYPOINT yields an error that the .sh file can't be found, even though I know it is in the container.
Any ideas?
You can use docker exec to run commands in a given container.
# Open an interactive bash shell in my_container
docker exec -it my_container bash
Alternatively, you can use docker run to create a new container to run a given command.
# Create a container with an interactive bash shell
# Delete the container after exiting
docker run -it --rm my_image bash
Also, from the question I get the sense you are still figuring out how Docker works and how to use it. I recommend using the info from this question to determine why your container exits when you set the entrypoint to /bin/bash. Finding out why it's not behaving as you expect will help you understand Docker better.
I'm not sure what command you are trying to run, but here's my guess:
Bash requires a tty, so if you try to run it in the background without allocating one for it to attach to, it will kill itself.
If you want to run bash in the background, make sure to allocate a tty for it to wait on.
As an example, docker run -d -it ubuntu will start a bash shell in the background that you can docker attach to later.
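Putting those pieces together for the npm workflow in the question, here is a sketch, assuming the project lives in the current host directory (the container name dev and the mount point /app are illustrative):
# start a long-lived dev container with a tty, mapping the project (and its
# node_modules) from the host so the editor can see installed modules
docker run -d -it --name dev -v "$PWD":/app -w /app node:latest bash
# run npm commands inside it as needed
docker exec -it dev npm install
docker exec -it dev npm run xxx
# stop and remove the container when finished
docker rm -f dev
Because bash holds the allocated tty, the container keeps waiting, and the bind mount keeps node_modules visible to the editor on the host.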

Running commands for docker container

This is how I'm running a command in a docker container:
$ docker run -it --rm --name myapp myimage:latest
$ node --version
Is it possible to run this as one command? Can I pass a command to docker run?
Something like
$ docker run -it --rm --name myapp myimage:latest "node --version"
Of course this is just a simple example. Later I will execute some more complex commands...
The "docker run" command essentially will make the container run, and execute the "CMD" or "ENTRYPOINT" in the Dockerfile. Unless - the command in your dockerfile does not run a command prompt - "running" the container may not get you the prompt.
For example, if you want a command prompt every time you run the container, put the line below in your Dockerfile:
CMD ["bash"]
If you want to run the same commands every time you run the container, you could create a script file with your commands, copy it into the container, and execute the script file as a CMD directive.
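A minimal sketch of that approach (the base image and file names are illustrative; setup.sh is assumed to sit next to the Dockerfile):
# Dockerfile
FROM node:latest
# copy the script containing your commands into the image
COPY setup.sh /usr/local/bin/setup.sh
RUN chmod +x /usr/local/bin/setup.sh
# run the script every time the container starts
CMD ["/usr/local/bin/setup.sh"]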
The general form of the command is actually:
docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
See documentation for more details.
The answer to the minimal example in your question is simply:
docker run -it --rm --name myapp myimage:latest node --version
If you want to run multiple commands in sequence, you can:
1. Run the container.
2. Execute your commands against the running container using docker exec.
3. Remove it.
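For illustration, that sequence might look like this (the container name and the exec'd command are illustrative, and this assumes the image's default command keeps the container alive):
# 1. start a long-lived container in the background
docker run -d -it --name myapp myimage:latest
# 2. execute commands against the running container
docker exec myapp node --version
# 3. remove it when done
docker rm -f myapp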
I am having some trouble understanding what you are trying to do, but you can just run:
docker run -d --name myapp myimage:latest tail -f /dev/null
Then your container is up and you can run any command in it.
You can pass a command to docker run, but once the command ends the container will exit.

Gitlab-ci installed and running as root

I've installed gitlab-ci following the manual on the gitlab-ci site, on CentOS 6.7. For some reason, every time my computer reboots the process is started by root (running as the gitlab-runner user), and every time I have to kill the process and relaunch it from my local user.
Is there a way to make the process run as my local user instead of root permanently? Where can I change that?
When running ps -ef | grep gitlab, you see the following:
/usr/bin/gitlab-ci-multi-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
Thanks
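For what it's worth, the --user in that service line comes from how the runner service was installed, so a hedged sketch of changing it (run as root; the user name myuser is illustrative, and this assumes the service was installed with the gitlab-ci-multi-runner service commands):
# remove the existing service definition
gitlab-ci-multi-runner uninstall
# reinstall the service to run as your local user
gitlab-ci-multi-runner install --user myuser --working-directory /home/myuser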
