How does GitLab CI on GitLab.com know where to find a registered local runner? - gitlab

I have created a test pipeline on GitLab.com with my personal user account. I created a local GitLab Runner container and registered it with tags for that project. It runs normally, but I need to understand a specific point here: how does GitLab CI on the internet know how to reach the runner on my laptop? It detects a public IP address that belongs to the place where I am running this test, while I am sitting behind a router that does NAT and gives me a private IP address. Furthermore, the runner runs inside a Docker container.
How is this working exactly?

It is your local runner that connects to GitLab, not the other way around. Just as you can connect to GitLab through your browser and with git, gitlab-runner uses GitLab's web API to poll for jobs every few seconds. This means no special configuration is necessary to get a gitlab-runner up and running, since outbound access to web ports (80/443) is open on almost all networks.
Here is a little more information that I found:
https://gitlab.com/gitlab-org/gitlab-runner/issues/1230
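To make the direction of the connection concrete, here is a minimal sketch of that polling loop in Python. The endpoint, payload, and interval are simplified for illustration and are not the exact GitLab Runner implementation; the point is that every request goes outbound from the runner's network to GitLab.com, so NAT, the router, and Docker need no inbound configuration at all.

```python
import time
import requests  # third-party HTTP client: pip install requests

GITLAB_URL = "https://gitlab.com"     # the only address the runner needs to reach
RUNNER_TOKEN = "runner-token-here"    # placeholder; obtained when the runner registers

def poll_for_job():
    """Ask GitLab whether a job is queued for this runner (outbound request only)."""
    # Illustrative request; the real runner talks to GitLab's jobs API over HTTPS.
    resp = requests.post(f"{GITLAB_URL}/api/v4/jobs/request",
                         json={"token": RUNNER_TOKEN}, timeout=30)
    return resp.json() if resp.status_code == 201 else None  # no job queued otherwise

while True:
    job = poll_for_job()
    if job:
        # Run the job locally (e.g. in the Docker executor), then upload the
        # log and status back to GitLab -- again via outbound requests.
        print("picked up job", job.get("id"))
    time.sleep(3)  # the runner's check interval is on the order of seconds
```

Because the runner asks GitLab "is there work for me?" itself, GitLab never needs a route back to your laptop; the public IP it records is just the NAT address your outbound requests arrive from.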

Related

How does a linux shell gitlab-runner access servers behind a vpn?

I've been experimenting with GitLab runners and noticed that shell runners installed on Linux systems sitting behind a VPN can be used without any networking issues or regard for the firewall.
If I wanted to set up a Kubernetes runner in the same environment, that would involve adding a publicly accessible endpoint.
How does the GitLab runner get around my VPN and firewall when using the shell runner?
Note that my VPN/firewall limits incoming traffic, but not outgoing. Is the gitlab-runner making requests out to GitLab to get instructions on when to run pipelines?
Yes, the runners initiate the connection, not the GitLab instance. That's why runners can be behind a firewall/VPN/whatever. As long as the runner can connect to the GitLab instance, it works.
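A quick way to convince yourself that only outbound connectivity matters is to check, from the runner host behind the VPN/firewall, whether an outbound connection to the GitLab instance succeeds. A small sketch, assuming a self-hosted instance at gitlab.example.com (placeholder hostname):

```python
import socket

GITLAB_HOST = "gitlab.example.com"  # placeholder: your GitLab instance
GITLAB_PORT = 443                   # the runner only needs outbound HTTPS

def can_reach_gitlab(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if an outbound TCP connection to GitLab succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if can_reach_gitlab(GITLAB_HOST, GITLAB_PORT):
        print("Outbound path open: a runner on this host can register and pick up jobs.")
    else:
        print("Outbound path blocked: the runner cannot poll GitLab from this network.")
```

No inbound firewall rule or port forward is involved; the same reasoning explains why the shell runner works behind the VPN while anything that needs an inbound endpoint does not.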

Run Jenkins Azure ACI Docker Agents on Same Vnet as Host

Question
How do I specify what virtual network (vnet) or subnet an Azure Container Instance (ACI) runs on with the Azure Container Agents Plugin for Jenkins?
Assumptions
In order to get lots of data transferred between two machines in Azure, those machines ought to be in the same vnet.
It is possible to get ACIs to run within a subnet of a vnet to get this fast communication.
Background
I'm running an Azure VM with Jenkins on it. This same VM also has Nexus installed on it for proxying/caching 3rd party dependencies. I'm running my builds on Docker Containers that are dynamically created as needed and destroyed afterwards for cost savings. This ACI creation/destruction introduces a problem in that the local .m2 cache does not survive one build to the next. Nexus is being used to fix this problem by facilitating fast access to 3rd party dependencies.
But for Nexus to really solve this problem, it seems as if the ACIs need to be in the same vnet as Nexus. I'd also like the advantage of not needing to open up ports to the world, and instead pass data around within the vnet without having to open ports from that vnet to the internet.
My problem is that I seem to have no control over which vnet or subnet the ACIs run on with the plugin I'm using (Azure Container Agents Plugin).
I've found instructions on how to specify the subnet on an ACI in general (link), but that doesn't help me as I need a solution that works with the Jenkins Plugin I'm using.
But perhaps this plugin will not work for my purposes and I need to abandon it for another approach. If so, suggestions?
AFAIK the Azure Container Agents Plugin for Jenkins currently doesn't support specifying which virtual network (vnet) an ACI runs on.
I think you should raise an issue here to see if you get a better response.
And yes, you may need to abandon this approach of using ACI Jenkins agents; as a workaround (for now) you could go with VMs as Jenkins agents,
or run the Jenkins jobs on the Jenkins master itself so that the local .m2 cache survives from one build to the next.
Related References:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-jenkins
Hope this helps!!

Jenkins login fails to complete

I am working with systems behind a corporate firewall.
Jenkins is installed on a Google Cloud server, and I initially had the external IP of the server open for access in Google Cloud firewall rules.
Recently a change request came from the corporate security team, for which I had to block public/external IP access to Jenkins. The VPN allows me to connect to Jenkins on the internal IP.
Since then I am able to open the Jenkins console login page, but after I submit the credentials the Jenkins login fails to complete. This forces me to use RDP to the remote Jenkins server to access the Jenkins console.
The login form gets submitted/redirected to the "/j_acegi_security_check" URL, which eventually responds with "net::ERR_CONNECTION_ABORTED".
Any pointer to fix this issue would be much appreciated.

TFS on premise deploy website to same machine

I am trying to deploy a website to the same machine it was built on. It builds everything correctly and then gets stuck at this line: "Deployment started for machine: 192.168.1.201 with port 5985". I get an error message that it cannot connect to the remote machine. I am very confused about how to get this last step set up.
[Image: setup of the WinRM deploy step]
It seems you are using the IIS Web App Deployment Using WinRM extension on TFS 2017. From your screenshot, you may need to check the items below, correct them, and try again.
In the Machines parameter, specify a comma-separated list of machine FQDNs/IP addresses along with the port.
In the Admin Login and Password parameters, specify a domain or local administrator account and the corresponding password for the target host.
In the Web Deploy Package parameter, specify the location of the Web Deploy zip package on the target machine, or a UNC path accessible to the administrator credentials of the machine.
For detailed documentation of this task, refer to: https://github.com/Microsoft/vsts-rm-extensions/blob/master/Extensions/IISWebAppDeploy/Src/Tasks/IISWebAppDeploy/README_IISAppDeploy.md
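Before re-running the release, it can also help to verify WinRM connectivity and credentials to the target outside of TFS. A small diagnostic sketch using the third-party pywinrm package (the account and transport below are assumptions about your environment, not values from the question):

```python
import winrm  # third-party package: pip install pywinrm

TARGET = "http://192.168.1.201:5985/wsman"  # same host and WinRM port the task uses
USERNAME = r"MACHINE\Administrator"         # assumed local administrator account
PASSWORD = "your-password-here"             # placeholder

# "ntlm" is a common transport for local accounts; adjust to match your setup.
session = winrm.Session(TARGET, auth=(USERNAME, PASSWORD), transport="ntlm")

result = session.run_cmd("hostname")        # any trivial command proves the connection
print("status:", result.status_code)
print(result.std_out.decode(errors="replace"))
print(result.std_err.decode(errors="replace"))
```

If this fails with a connection or authentication error, the problem lies with WinRM configuration or credentials on the target machine rather than with the TFS task itself.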

Managing inter instance access on EC2

We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we have a need to have various servers talk to each other. A new build needs to be scp'd over to a staging server. The log collator needs to pull logs from production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We are ending up copying *.pem files over from server to server, kind of making a mockery of security. The build server has the access keys of the staging servers in order to connect via ssh and push a new build. The staging servers similarly have the access keys of the production instances (gulp!)
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling this issue? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, github). We are using Jenkins and the post commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the Bastion Host. Create a new security group called "Bastion Host", and only allow port 22 incoming from the bastion to all other EC2 instances. All keys used by your EC2 collection are housed on the bastion host. Each user has their own account on the bastion host, and these users should authenticate to the bastion using a password-protected key file. Once they log in they should have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys off the bastion, it won't matter, because they can't log in to the other instances unless they first log in to the bastion.
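As an illustration of the pattern (not part of the original answer), here is how a user in that setup could hop to an internal instance programmatically with the third-party paramiko library; the hostnames, usernames, and key paths are placeholders:

```python
import paramiko  # third-party SSH library: pip install paramiko

BASTION_HOST = "bastion.example.com"  # the only host that exposes port 22 to the internet
INTERNAL_HOST = "10.0.1.25"           # staging/production instance, reachable only from the bastion

# 1. Authenticate to the bastion with a password-protected key (placeholder paths).
bastion = paramiko.SSHClient()
bastion.set_missing_host_key_policy(paramiko.AutoAddPolicy())
bastion.connect(BASTION_HOST, username="alice",
                key_filename="/home/alice/.ssh/id_bastion",
                passphrase="key-passphrase")

# 2. Open a tunnelled TCP channel from the bastion to the internal instance's SSH port.
channel = bastion.get_transport().open_channel(
    "direct-tcpip", (INTERNAL_HOST, 22), ("127.0.0.1", 0))

# 3. SSH to the internal host through that channel with the environment's key
#    (in the answer's setup this key lives on the bastion; the path here is illustrative).
target = paramiko.SSHClient()
target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
target.connect(INTERNAL_HOST, username="deploy", sock=channel,
               key_filename="/home/alice/keys/staging.pem")

stdin, stdout, stderr = target.exec_command("uptime")
print(stdout.read().decode())

target.close()
bastion.close()
```

On the command line the same hop is simply `ssh -J alice@bastion.example.com deploy@10.0.1.25`, with the security group on the internal instance accepting port 22 only from the bastion.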
Create two sets of keypairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put the new builds onto S3 and have a perl script running on the boxes to pull the latest build from your S3 buckets and install it onto the respective servers. This way, you don't have to manually scp all the builds over every time. You can also automate this process using some sort of continuous build automation tool that builds and dumps the artifacts onto your S3 buckets. Hope this helps.
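The answer mentions a Perl script; here is the same pull-from-S3 idea sketched in Python with boto3, with the bucket name, prefix, and install path as placeholders:

```python
import boto3  # AWS SDK for Python: pip install boto3

BUCKET = "my-build-artifacts"      # placeholder bucket the build server uploads to
PREFIX = "myapp/staging/"          # placeholder prefix per environment
DEST_DIR = "/opt/myapp/releases/"  # placeholder install location on this box

s3 = boto3.client("s3")  # credentials come from the instance's IAM role or environment

def latest_build_key() -> str:
    """Return the key of the most recently modified object under PREFIX."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    objects = resp.get("Contents", [])
    if not objects:
        raise RuntimeError(f"no builds found under s3://{BUCKET}/{PREFIX}")
    return max(objects, key=lambda obj: obj["LastModified"])["Key"]

if __name__ == "__main__":
    key = latest_build_key()
    local_path = DEST_DIR + key.rsplit("/", 1)[-1]
    s3.download_file(BUCKET, key, local_path)
    print("downloaded", key, "to", local_path)
    # ...unpack the artifact and restart the service here
```

Granting each instance an IAM role that allows read access to only the bucket it needs also avoids copying AWS credentials between servers, which speaks to the key-sprawl concern in the question.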
