I've installed gitlab-ci following the manual on the gitlab-ci site on CentOS 6.7. For some reason, every time my machine reboots, the process is started by root with the gitlab-runner user.
Every time this happens, I kill the process and launch it again from my local user.
Is there a way to make the process run as my local user instead of root permanently? Where can I change that?
Running ps -ef | grep gitlab shows the following:
/usr/bin/gitlab-ci-multi-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
Thanks
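One approach that might work, sketched below, assuming the runner was installed as a service with gitlab-ci-multi-runner install (myuser is a placeholder for your local account):
sudo gitlab-ci-multi-runner stop
sudo gitlab-ci-multi-runner uninstall
# re-register the service so it starts as your local user instead of gitlab-runner
sudo gitlab-ci-multi-runner install --user myuser --working-directory /home/myuser
sudo gitlab-ci-multi-runner start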
Related
How to run gitlab-runner locally on macOS?
Hi,
I would like to run gitlab-runner locally. I have gitlab-runner on my Mac and I have a gitlab-ci.yml. On CI, gitlab-runner works as I expect, but it's not working when I call it from the terminal.
gitlab-runner --debug exec shell lint_project
Output from the terminal:
One of my questions is: why "executor not supported"?
Thank you
Instead of the shell executor, try using docker:
gitlab-runner --debug exec docker lint_project
That works for me.
I'm seeing the error below with Docker running on RHEL 7 on top of VirtualBox.
I'm just trying to run the hello-world image:
[root@localhost ~]# docker run hello-world
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/hello\": stat /hello: no such file or directory": unknown.
It seems you may not have execute rights in the directory where you are trying to run Docker.
Try using a directory you have execute rights to, such as your home directory.
Or you may need to run chmod +x on the directory you are running the docker command from.
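A minimal sketch of that suggestion (the directory path is just an example):
cd ~                        # run from a directory you own and can execute
docker run hello-world
# or grant execute permission on the directory you were running from:
chmod +x /path/to/your/dir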
Try this command:
sudo service docker restart
It usually helps on Ubuntu.
The / directory cannot be used as a volume.
Change \"/hello\" to \"hello\".
I suppose it is in the Dockerfile?
Check fromlatest.io for Dockerfile errors.
I have a Jenkins master running on Windows Server 2016. I need to be able to run Linux containers for some automated e2e tests. For reasons I won't get into, I cannot enable Hyper-V on this machine, which prevents me from installing LCOW and Docker on my Jenkins master.
What I've done instead is set up an Ubuntu 18.04 VM in VirtualBox and installed Docker there. I've configured the VM as a Jenkins slave, using SSH to log in as the jenkins user. I've set up and configured everything so that this user can run docker commands without using sudo. If I manually SSH into the server as the jenkins user, I can run docker commands without an issue. Everything works the way you would expect.
I then set up a test build to check that everything was working correctly. The problem is that when I try to run docker commands using the Execute Shell build step, I get a docker: not found error. From what I can tell, the build is running as the correct user. I added who -u to the build step so I could check which user the build was running as.
Here is the output from my build:
[TEST - e2e - TEST] $ /bin/sh -xe /tmp/jenkins16952572249375249520.sh
+ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
+ docker run hello-world
/tmp/jenkins16952572249375249520.sh: 3: /tmp/jenkins16952572249375249520.sh: docker: not found
As I mentioned, the jenkins user has been added to the docker group and Docker has been added to $PATH (/snap/bin/):
jenkins@jenkins-docker-slave:~$ which docker
/snap/bin/docker
jenkins@jenkins-docker-slave:~$ $PATH
-bash:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
jenkins@jenkins-docker-slave:~$ who -u
jenkins pts/0 2018-08-10 16:43 . 10072 (10.0.2.2)
jenkins@jenkins-docker-slave:~$ cat /etc/group | grep docker
docker:x:1001:qctesting,jenkins
As you can see from this snippet, I can successfully run docker commands by logging into the server as the jenkins user:
jenkins@jenkins-docker-slave:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
I have also configured the path to docker in the slave's node properties, as I thought that would fix my issue. As you can see, I have both git and docker listed. Git commands are working just fine; it is only the docker commands that are giving me problems. I have tried both /snap/bin and /snap/bin/docker with no luck.
I am trying to build a Jenkins job that will clone a git repo, spin up the containers I need using docker-compose and some build parameters I pass in at build time, and run my e2e tests against any environment (qa, staging, production, etc.). I just can't get the Jenkins slave to run the docker commands. What am I missing? How can I get the slave to recognize that Docker is already installed on the system and that the user has the correct permissions to execute those commands?
NOTE: I am NOT trying to run Docker in Docker. Practically all the questions and documentation I've found on running docker commands on a Jenkins slave describe how to solve this by running the slave in a Docker container and installing the Docker client in that container. That is not what I'm trying to accomplish. I am trying to SSH from a Jenkins master into a Jenkins slave that already has Docker installed and run docker commands on that server as the jenkins user.
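For reference, a diagnostic that could be pasted into the Execute Shell step to see which PATH the build actually gets; the /snap/bin path matches this particular setup, and prepending it is only a temporary workaround, not a fix:
#!/bin/sh
echo "PATH is: $PATH"             # show the PATH the non-interactive build sees
which docker || echo "docker not found on PATH"
export PATH=/snap/bin:$PATH       # prepend the snap bin dir just for this build
docker run hello-world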
I finally figured this out thanks to the answer to this question. After reading that answer I realized I had installed the wrong version of Docker on Ubuntu. I removed the previous installation and installed the correct Docker package using sudo curl -sSL https://get.docker.com/ | sh. I then restarted my Jenkins slave and everything started working.
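Roughly the sequence, sketched from the description above (the snap removal line is an assumption based on the /snap/bin/docker path shown earlier; adjust to however Docker was originally installed):
sudo snap remove docker                       # remove the previously installed (snap) Docker
sudo curl -sSL https://get.docker.com/ | sh   # install Docker via the official convenience script
sudo usermod -aG docker jenkins               # make sure the jenkins user is still in the docker group
# then restart/reconnect the Jenkins slave so the agent picks up the new docker binary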
According to the documentation here: http://pm2.keymetrics.io/docs/usage/startup/#startup-systems-support
You can use the command pm2 startup ubuntu -u nodeapps to resurrect all saved pm2 jobs on server startup.
I ran this command as the nodeapps user. Then I was given a sudo su command to run. I logged out of nodeapps, used sudo su to log into the system as root, and ran the command:
sudo su -c "env PATH=$PATH:/usr/bin pm2 startup ubuntu -u nodeapps --hp /home/nodeapps"
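For completeness, the rest of the expected flow (the process list has to have been saved with pm2 save for startup to have anything to resurrect; the app path and name below are placeholders):
pm2 start /home/nodeapps/app/index.js --name myapp   # run the app under pm2
pm2 save                                             # save the process list that the startup script resurrects on boot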
The processes did not restart on server restart. I found this question on Stack Overflow: Ubuntu 14.04 - pm2 startup not starting after reboot.
In the script /etc/init.d/pm2-init.sh I found the line that question recommended addressing:
export PATH=/usr/bin:$PATH
export PM2_HOME="/home/nodeapps/.pm2"
But it looks correct to me so I didn't change anything.
I then found this question: pm2 Startup not starting up on Ubuntu
and in my boot logs I found the following lines:
Starting pm2
/usr/bin/env: node: No such file or directory
I know that 'node' on Ubuntu is actually 'nodejs'. Could this be the reason?
If it is, what can I do to make the startup command look for nodejs instead of node?
Alternatively, could this be a $PATH problem? If it is, how can I add the correct path for root (at least I think it should be added for root)?
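A quick check that would confirm the node/nodejs suspicion (the last line just reproduces the lookup the same way the init script does):
which node || echo "no 'node' on PATH"
which nodejs && nodejs -v
/usr/bin/env node -v    # should fail with the same "No such file or directory" if 'node' is missing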
I don't know if it will help you, but I do it this way:
As a non-root user:
pm2 startup -u <YOUR_NON_ROOT_USER>
Copy the line it prints, which looks like:
env PATH=$PATH:/usr/bin pm2 startup systemd -u delivery --hp /home/delivery
As root, execute:
env PATH=$PATH:/usr/bin pm2 startup systemd -u delivery --hp /home/delivery
Go back to the non-root user and type:
pm2 start <YOUR /PATH/TO/INDEX.JS> --name <YOU_APPLICATION_NAME>
As the non-root user, type:
pm2 save
Reboot:
sudo reboot
As the non-root user, type the command below to check whether it works:
pm2 status
PS: Change as needed.
I hope it will be useful for you or someone.
(Posted on behalf of the OP).
In fact, that was the problem. Fixed by creating a symlink (as root):
ln -s /usr/bin/nodejs /usr/sbin/node
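To confirm the fix took, a quick sanity check (as root):
ls -l /usr/sbin/node    # should point at /usr/bin/nodejs
/usr/bin/env node -v    # should now print the Node version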
I have recently started using Jenkins for integration. All was well while I was running jobs on the master node without shell commands, but now I have to run jobs on the master as well as a slave node, and those jobs contain shell commands. I am not able to run those shell commands as the root user. I have tried:
Using SSH Keys.
Setting user name in shell commands.
Using sudo.
I am getting a permission denied error every time I use any of the above methods.
I would suggest against running the jenkins user as root. This could expose the operating system and all of the repos which Jenkins can build.
Running any script as root is a security risk, but a slightly safer method would be to grant the jenkins user sudo access to run only that one script, without needing a password.
sudo visudo
and add the following:
jenkins ALL = NOPASSWD: /var/lib/jenkins/jobs/[job name]/workspace/script
Double check your path via the console log of a failed build script. The one shown here is the default.
Now within the jenkins task you can call sudo $WORKSPACE/your script
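A concrete sketch with a hypothetical job called my-job and a script named deploy.sh (adjust the names and paths to your layout):
# sudoers entry added via 'sudo visudo':
jenkins ALL = NOPASSWD: /var/lib/jenkins/jobs/my-job/workspace/deploy.sh
# and in the job's "Execute shell" build step:
sudo $WORKSPACE/deploy.sh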
You need to modify the permissions for the jenkins user so that you can run the shell commands.
You can install Jenkins as a service (download the RPM package). You might need to change the ports, because by default it runs HTTP on port 8080 and AJP on port 8009.
The following process is for CentOS:
1. Open this script (using vim or another editor):
vim /etc/sysconfig/jenkins
2. Find $JENKINS_USER and change it to "root":
$JENKINS_USER="root"
3. Then change the ownership of Jenkins home, webroot and logs:
chown -R root:root /var/lib/jenkins
chown -R root:root /var/cache/jenkins
chown -R root:root /var/log/jenkins
4. Restart Jenkins and check that the user has been changed:
service jenkins restart
ps -ef | grep jenkins
Now you should be able to run the Jenkins jobs as the root user, and all the shell commands will be executed as root.
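A quick way to verify this from inside a job (just a sanity check in an Execute Shell step):
whoami    # should print root
id -u     # should print 0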
For Linux, try following these steps:
This worked for me.
Change Jenkins user: sudo vi /etc/default/jenkins
Change the user to root, or to whichever user you use to access your files:
$JENKINS_USER="root"
Change the ownership to the user you set up before:
sudo chown -R root:root /var/lib/jenkins
sudo chown -R root:root /var/cache/jenkins
sudo chown -R root:root /var/log/jenkins
Restart the service:
service jenkins restart
Or:
systemctl restart jenkins
You can also run Jenkins as a process and disable headless mode on Linux with a UI:
/etc/alternatives/java -Djava.awt.headless=false -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --httpPort=8080 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
Validate that Jenkins is currently running:
ps -ef | grep jenkins
Another option is to set up a Jenkins "slave" that actually runs as root on the master, restrict it to tied jobs, and then point your job at that slave. Far from ideal, but certainly a quick solution.
Or you can change the permissions on docker.sock. Make sure your Docker container is running as the root user:
docker exec <jenkinsContainerID> chmod 777 /var/run/docker.sock
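To double-check afterwards (the container ID placeholder matches the one above):
docker exec <jenkinsContainerID> whoami                      # should print root
docker exec <jenkinsContainerID> ls -l /var/run/docker.sock  # confirm the relaxed permissions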
You just need to run the shell command on the Linux machine with root privileges from Jenkins.
Steps:
1) sudo vi /etc/sudoers
2) Add the line:
jenkins ALL=NOPASSWD:/path of script/
3) From Jenkins, run the script on the remote shell using sudo.
For example: sudo ps -ef
4) Build the Jenkins job now.
This job runs the script on the Linux machine with root privileges.