When I git push, I have to go to the server where the gitlab-runner is configured and enter "sudo gitlab-runner run" or "gitlab-runner run" in the terminal before the pipeline starts. And I have to stay on that machine the whole time, which defeats the point of a pipeline.
It used to work so that when I git pushed, the pipeline would start automatically; I didn't have to enter the command at all. So is there a way to set that up?
There are two ways to achieve that:
nohup gitlab-runner run & will keep your runner running as a user process and uses a config file that defaults to /home/<user>/.gitlab-runner/config.toml (see man nohup)
sudo systemctl start gitlab-runner will start as a service, and uses a config file that defaults to /etc/gitlab-runner/config.toml (see man systemctl)
In both cases, you will be able to log out and the runner will stay active.
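A sketch of both options, assuming the runner was installed from the official Linux package (the install step may already have been done for you; adapt the user and working directory to your setup):
# Option 1: user process, detached from the terminal (config: ~/.gitlab-runner/config.toml)
nohup gitlab-runner run &
# Option 2: systemd service, survives logout and reboots (config: /etc/gitlab-runner/config.toml)
sudo gitlab-runner install --user=gitlab-runner --working-directory=/home/gitlab-runner   # only needed once
sudo systemctl enable --now gitlab-runner
sudo systemctl status gitlab-runner   # verify it is active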
The GitLab documentation is pretty clear:
GitLab offers a continuous integration service. For each commit or push to trigger your CI pipeline, you must:
Add a .gitlab-ci.yml file to your repository’s root directory.
Ensure your project is configured to use a Runner.
.gitlab-ci.yml part
You need to create a file named .gitlab-ci.yml in the root directory of your repository. The script part depends on what you want to do in the job.
Runner part
You need to install and configure the runner.
The simplest option is to use a shared runner.
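For example, a minimal .gitlab-ci.yml for a Node.js project could look like this; the job name, image, and script lines are placeholders to adapt:
cat > .gitlab-ci.yml <<'EOF'
test-job:
  image: node:latest   # only relevant for docker executors / shared runners
  script:
    - npm install
    - npm test
EOF
git add .gitlab-ci.yml
git commit -m "Add CI configuration"
git push    # the pipeline should now start automatically, provided a runner is available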
Related
I added a new VirtualBox runner to my self-hosted GitLab instance and I'm getting this warning on it:
Runner has never contacted this instance
and it never runs any jobs.
Bouncing (restarting) the runner will most likely help; otherwise, re-register the runner.
Also, you should check the status of the runner with the command below:
gitlab-runner status
If you are running the runner on Windows Server, go to the path where you stored the .exe file and run:
.\<.exe> status
If the runner is in a stopped state, start it using the same commands, just replacing status with start.
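For completeness, a quick sketch of the checks on a Linux host where the runner is installed as a service (standard gitlab-runner subcommands):
sudo gitlab-runner status    # is the runner process running?
sudo gitlab-runner verify    # can the registered runners authenticate with the GitLab instance?
sudo gitlab-runner restart   # bounce the runner if it is stuck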
I recently wanted to move a Gitlab runner that I had set up for my self-hosted Gitlab instance from being a project runner (i.e. running jobs only for a project) to being a group runner (so it could also run jobs for other projects in the same group). I wanted to retain the /etc/gitlab-runner/config.toml settings that I had painstakingly hand-written.
Luckily I backed up config.toml, because sudo gitlab-runner unregister -t ... -u ... deleted the whole configuration from config.toml.
In order to get the same config registered under the group instead of the project, I had to:
Register the runner in a paused state with a dummy configuration, with the group's registration token:
sudo gitlab-runner register \
--non-interactive \
--url <URL HERE> \
--registration-token <TOKEN HERE> \
--executor docker \
--docker-image docker:dind \
--paused
Go into the new config.toml that this created and copy the runner's individual runner token.
Overwrite config.toml with my desired configuration.
Edit the config.toml and plug in the new individual runner token.
Start the Gitlab runner service (sudo systemctl start gitlab-runner).
Unpause the runner in the Gitlab web UI.
Even after doing all this, the Gitlab instance still sees the runner under the name it registered with in the dummy config, rather than the name in the config.toml.
Trying the --config option to gitlab-runner register didn't work at all; I think that just tells it where to save the config. It still prompted me for new settings to use instead of reading from the config.toml I pointed it at.
The Gitlab documentation on runner registration is all written around one-shot gitlab-runner register commands with loads of options that essentially specify the whole config on the command line. I really don't want to translate my config.toml by hand into a command line that just turns around and rebuilds it (minus any comments, of course).
I can't believe that this is really the right workflow to re-register a runner with a new project/group/Gitlab instance, or to create a copy of a runner from a saved config. What am I missing here? How can I create a new Gitlab runner from an existing config.toml file?
There isn't an easy way to do what you want, from what I can find in the GitLab documentation and some open issues that they have.
Here is an issue that describes something similar to what you want:
https://gitlab.com/gitlab-org/gitlab-runner/issues/3540
Here is what I think is GitLab's goal with how to register runners:
https://gitlab.com/gitlab-org/gitlab-ce/issues/40693
I believe that the only thing you can't change from the .toml file is the name of the runner, and maybe not the tags either. The name is only created when you register the runner. I read somewhere that you can change the tags of a shared runner, but I can't find it now.
Here is a workaround to make the process of registering a bit more automatic:
https://gitlab.com/gitlab-org/gitlab-runner/issues/3553#note_108527430
He used this API:
curl --request POST "https://gitlab.com/api/v4/runners" --form "token=<registration-token>" --form "description=test-1-20150125-test" --form "tag_list=ruby,mysql,tag1,tag2"
Then he got the following response back:
{"id":401513,"token":"<runner-token>"}
He could then inject the runner-token into his already pre-made .toml file.
In your case, you could have used the registration token for your group and filled in the description/name of the runner and the tags. You could then have re-used your config.toml, changed only the runner token, and it should have worked.
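A sketch of that flow end to end; the group registration token, description, tags, config path, and use of jq are placeholders/assumptions:
# 1. Register via the API and capture the per-runner token (assumes jq is installed)
RUNNER_TOKEN=$(curl --silent --request POST "https://gitlab.com/api/v4/runners" \
  --form "token=<group-registration-token>" \
  --form "description=my-group-runner" \
  --form "tag_list=docker,linux" | jq -r '.token')
# 2. Put that token into your pre-made config.toml.
#    (If the file has several [[runners]] sections, edit it by hand instead of using sed.)
sudo sed -i "s/token = \".*\"/token = \"${RUNNER_TOKEN}\"/" /etc/gitlab-runner/config.toml
# 3. Restart the runner so it picks up the configuration
sudo gitlab-runner restart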
A gitlab-runner can be registered with multiple projects and/or groups; registering just appends another configuration block to /etc/gitlab-runner/config.toml (when run with sudo). Could you just do the following?
Un-register the gitlab-runner associated with the "project"
Register the gitlab-runner associated with the "group"
config.toml stores all the configuration that is passed to gitlab-runner register, including any environment variables listed under gitlab-runner register -h.
I am not sure why you need to save the config.toml.
Also, I believe one source of confusion could be the gitlab-runner token vs. the gitlab-runner registration token. The registration token can NOT be used inside config.toml, which may be why a plain replacement failed for you. If you do not want to use the gitlab-runner register command and just want to update config.toml, follow the steps in the answer above to fetch the runner token and use it in config.toml. Then stop and start the service with sudo service gitlab-runner stop and sudo service gitlab-runner start.
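To illustrate the difference (all values below are placeholders):
# In config.toml, the token under [[runners]] must be the per-runner token, never the registration token:
#   [[runners]]
#     name  = "my-runner"
#     url   = "https://gitlab.example.com"
#     token = "<runner-token>"   # obtained during registration, NOT the registration token itself
# After editing config.toml by hand, bounce the service:
sudo service gitlab-runner stop
sudo service gitlab-runner start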
We store runner configurations in a repository for recovery.
To restore a runner we:
install gitlab-runner (see https://docs.gitlab.com/runner/install/) on the new node,
move the stored configuration to /etc/gitlab-runner/config.toml, and
restart the runner service, e.g. sudo service gitlab-runner restart on Ubuntu.
So far, this procedure was very reliable.
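As a sketch, the restore boils down to the following; the backup path and Ubuntu-style service command are specific to our setup:
# 1. Install gitlab-runner on the new node, per https://docs.gitlab.com/runner/install/
# 2. Restore the stored configuration (the backup repository path below is just an example)
sudo cp ./runner-configs/my-runner/config.toml /etc/gitlab-runner/config.toml
sudo chmod 600 /etc/gitlab-runner/config.toml   # the file contains the runner token
# 3. Restart the service
sudo service gitlab-runner restart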
I have probably been Googling this the wrong way...
I have a Node express server.
I want to deploy it using Jenkins to ec2.
What are my options?
If I want to upload the code manually, I use ssh... but I want it to be done from Jenkins.
Yes, the server's code is in a git repo.
I would like a DevOps flow.
I recommend doing it step by step:
Step 1: Configure a Jenkins job that builds and deploys your app on the remote EC2 machine.
Install this plugin in your Jenkins instance: Publish Over SSH Plugin
Using this plugin, add a new remote server under the Publish over SSH section in Manage Jenkins >> Configure System.
Now create a Jenkins job. Then, in the Build section, add a step called Send files or execute commands over SSH.
Just select your configured server and enter your commands in the Exec command section.
For a simple Node.js Express app, this code could be enough, or just copy-paste your existing deployment commands:
https://gist.github.com/jrichardsz/38b335f6a5dc8c67a386fd5fb3c6200e
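In case the gist is unreachable, here is a sketch of the kind of script that usually goes in the Exec command box; the application directory, repository, and use of pm2 are assumptions, not the gist's actual content:
# Runs on the EC2 machine over SSH; adapt paths and names to your app.
set -e
cd /home/ubuntu/my-express-app      # hypothetical application directory on EC2
git pull origin master              # or rely on the "Send files" part of the step to copy the sources
npm install --production
# Restart the app with pm2 (assumes pm2 is installed globally: npm install -g pm2)
pm2 restart my-express-app || pm2 start server.js --name my-express-app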
That is all. Just test with the Build option and verify that your application runs correctly.
At this point, the job is functional. The flow could be:
push your changes to your git provider
log in to Jenkins and manually execute the created job (this step will be replaced by the webhook configuration in step 2)
Note: only move on to the next step once this one runs without errors.
Step 2: Implement a simple DevOps flow by configuring a webhook in your git provider, which automatically triggers the Jenkins job (created in step 1) when you perform a git push.
This guide could help you with the required configuration:
https://jrichardsz.github.io/devops/devops-with-git-and-jenkins-using-webhooks
You'll have to use the AWS CodeDeploy Jenkins plugin. This applies to any type of code: Node, Java, etc.
See AWS article
Setting Up the Jenkins Plugin for AWS CodeDeploy
Jenkins Plugin
Github Link
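With CodeDeploy, the deployment itself is driven by an appspec.yml in the root of your repository. A minimal sketch for a Node app; the destination path and hook script are assumptions you would adapt:
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /
    destination: /home/ubuntu/my-express-app   # hypothetical target directory on the EC2 instance
hooks:
  ApplicationStart:
    - location: scripts/start_server.sh        # a script in your repo that (re)starts the Node app
      timeout: 300
EOF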
I have built a Node.js app and now I want to deploy it to OpenShift.
I don't want to use GitHub because I would need to create a private repository, which I cannot do.
Also, I cannot use 'rhc' since I am a new user.
Is there any way to do that?
I cannot find any tutorial about that.
For OpenShift 3, you can use a binary input source build.
First create a binary input build.
oc new-build --name myapp --strategy=source --binary --image-stream=nodejs:latest
Now start a new build and upload source code from the current directory.
oc start-build myapp --from-dir=.
Once the build has completed, deploy the image created by the build.
oc new-app myapp
You can then expose the service.
oc expose svc/myapp
Every time you want to make a change, you will need to run the same oc start-build command in the directory where your source code is.
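For subsequent changes, a small sketch of the rebuild step (the --follow flag just streams the build log to your terminal):
# Rebuild from the current directory after changing the code
oc start-build myapp --from-dir=. --follow
# The deployment created by `oc new-app` has an image change trigger by default,
# so it should roll out the new image automatically once the build finishes.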
Is there any other code repo you are using? SVN? If SVN, you can use pipelines with Jenkins.
If not, put the Node.js app in a Docker container and push it to Docker Hub.
I don't see anybody suggesting this, so I will: you can equally well deploy code from GitLab, Pagure, Bitbucket, or any other git hosting service.
In fact you can even run your own git server inside OpenShift.
oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/gitserver/gitserver-persistent.yaml
oc env dc/git -p ALLOW_ANON_GIT_PULL=false
oc policy add-role-to-user edit -z git
oc get route # to see your git server URL
Now you should be able to push/pull from that server using your OpenShift username and token (also any other users you add to the project). From buildconfigs and other pods you can also use simply git as the hostname of your git server, because this should resolve to the IP of the service with the same name (again only within the same OpenShift project).
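For example, from your workstation it is just ordinary git; the route hostname and repository path below are placeholders (the exact path depends on the template's settings):
git remote add openshift https://<git-route-host>/<repo>.git
git push openshift master
# username: your OpenShift user
# password: the token printed by `oc whoami -t`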
Read the template YAML (the URL after oc create) for more options you can use like REQUIRE_GIT_AUTH.
Of course it is good to keep a git mirror/backup somewhere else as with any other git service.
HTH
P.S. Forgot to say: you need to install an OpenShift v3 cluster yourself or subscribe to OpenShift Online (which unfortunately may take a while ATM).
I'm going to deploy a Node.js mobile web application on two remote servers (Linux OS).
I'm using an SVN server to manage my project source code.
To manage the app simply and clearly, I decided to use Jenkins.
I'm new to Jenkins, so installing and configuring it was quite a difficult task.
But I couldn't find how to set up Jenkins to deploy to both remote servers simultaneously.
Could you help me?
You should look into supervisor. It's language- and application-type agnostic; it just takes care of (re)starting applications.
So in your jenkins build:
You update your code from SVN
You run your unit tests (definitely a good idea)
You either launch an svn update on each host or copy the current content to them (I'd recommend copying, because there are many ways for SVN to fail, and it lets you include SVN_REVISION in some .js file, for instance)
You execute on each host: fuser -k -n tcp $DAEMON_PORT; this kills the currently running application listening on $DAEMON_PORT (the port your node.js app uses), and supervisor then restarts it with the fresh code (see the sketch after this list)
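A sketch of what that Jenkins shell step could look like; the host names, deploy user, application directory, and port are all placeholders:
# Hypothetical Jenkins "Execute shell" build step
DAEMON_PORT=3000
APP_DIR=/usr/local/share/dir_app
for HOST in host1.example.com host2.example.com; do
  # Copy the freshly built and tested workspace to the host
  rsync -az --delete ./ "deploy@${HOST}:${APP_DIR}/"
  # Kill the running app; supervisord notices the exit and restarts it with the new code
  ssh "deploy@${HOST}" "fuser -k -n tcp ${DAEMON_PORT} || true"
done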
And the best part is obviously that it will automatically start your node.js app at system startup (provided supervisor is correctly installed: apt-get install supervisor on Debian) and restart it in case of failure.
A supervisord subconfig for a node.js app looks like this:
# /etc/supervisor/conf.d/my-node-app.conf
[program:my-node-app]
user = running-user
environment = NODE_ENV=production
directory = /usr/local/share/dir_app
command = node app.js
stderr_logfile = /var/log/supervisor/my-node-app-stderr.log
stdout_logfile = /var/log/supervisor/my-node-app-stdout.log
There are many configuration parameters.
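Once that conf file is in place, a quick sketch of loading it and checking the app (standard supervisorctl commands):
sudo supervisorctl reread              # discover the new conf file
sudo supervisorctl update              # start/restart programs whose configuration changed
sudo supervisorctl status my-node-app  # check that the app is RUNNING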
Note: there is also a Node.js package called supervisor; it's not the one I'm talking about, and I haven't tested it.
Since the hosts run Linux, you need to ssh to them and run commands to get the application updated:
Work out the application-update workflow in a shell script. In particular, you need to daemonize your node app so that it is not killed when the Jenkins job execution exits. Here's a nice article on how to do this: Running node.js Apps With Upstart, or you can use a pure Node.js tool like forever. Assume you have worked out an init script under /etc/init.d/myNodeApp
ssh to your Linux hosts from Jenkins. You need to make sure the SSH private key file has been copied to /var/lib/jenkins/.ssh/id_rsa and is owned by the jenkins user
Here's an example shell step in the Jenkins job configuration:
ssh <your application ip> "service myNodeApp stop; cd /ur/app/dir; svn update; service myNodeApp restart"
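Since you have two servers, a sketch of the same step looping over both hosts; the host names and application path are placeholders:
for HOST in server1.example.com server2.example.com; do
  ssh "$HOST" "service myNodeApp stop; cd /your/app/dir; svn update; service myNodeApp restart"
done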