Install GitLab Runner on an ECS cluster

We are running AWS ECS/ECR with GitLab CI as our CI/CD.
Due to an increase in load we are looking for the best way to autoscale the runners on AWS.
I know GitLab supports autoscaling for its CI runner: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws/
But I'm wondering whether it's possible to leverage the ECS cluster on AWS for this purpose. Has anybody ever set up the runner on an ECS cluster with a load balancer and autoscaling on the ECS side, and could you share some insights about such a setup?
Thanks in advance

Since each CI task usually runs in a container (or other isolated environment), this would require the GitLab Runner to natively talk to ECS to spin up new containers for jobs.
I don't think that's going to happen: GitLab supports Kubernetes, which is more versatile in that it's not tied to AWS, and it also supports EC2 autoscaling for AWS. ECS sounds like a lot of extra effort for a very narrow benefit.
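For reference, the EC2 autoscaling route mentioned here is the docker+machine executor described in the GitLab doc linked above. A minimal config.toml sketch might look like the following; the token, region, VPC ID, and instance type are placeholders you would replace with your own values:

```toml
concurrent = 10
check_interval = 0

[[runners]]
  name = "aws-autoscale-runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"                 # placeholder registration token
  executor = "docker+machine"        # spawns EC2 instances via Docker Machine
  limit = 10
  [runners.docker]
    image = "alpine:latest"          # default job image
  [runners.machine]
    IdleCount = 1                    # keep one warm machine for fast job pickup
    IdleTime = 1800                  # destroy machines idle for 30 minutes
    MachineDriver = "amazonec2"
    MachineName = "gitlab-runner-%s"
    MachineOptions = [
      "amazonec2-region=eu-central-1",       # placeholder region
      "amazonec2-vpc-id=vpc-xxxxxxxx",       # placeholder VPC
      "amazonec2-instance-type=m5.large",    # placeholder instance type
    ]
```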

Related

macOS, Windows Runner Autoscaling on GitLab through AWS ECS Fargate

I have set up a self-hosted GitLab instance with one runner registered as a "Linux runner manager", which runs GitLab pipeline jobs on ECS Fargate using a Docker image as the task; once a job completes on GitLab, that container is destroyed.
I want to achieve a complete autoscaling setup for GitLab runners in which Linux, macOS, and Windows runner managers build pipelines for these three platforms on Docker containers inside an ECS Fargate cluster.
I have achieved this setup for Linux by following the official GitLab documentation.
Here is a rough diagram of what I have in mind.
In the currently achieved setup I have two instances: one GitLab master node and one Linux manager instance, as explained in the diagram above.
I have two files on the runner manager:
one is config.toml, which holds the GitLab connection and token details,
and the other is fargate.toml, which holds the ECS Fargate connection details (a sketch of both follows below).
Lastly, I have an ECS cluster running on the Fargate architecture.
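As a rough sketch of what this file pair looks like with GitLab's Fargate driver (following GitLab's Fargate autoscaling docs; the token, cluster name, subnet, security group, and task definition below are placeholders):

```toml
# config.toml (runner manager) -- custom executor delegating to the fargate driver
concurrent = 5

[[runners]]
  name = "linux-runner-manager"
  url = "https://gitlab.example.com/"
  token = "REDACTED"    # placeholder registration token
  executor = "custom"
  [runners.custom]
    config_exec  = "/opt/gitlab-runner/fargate"
    config_args  = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "config"]
    prepare_exec = "/opt/gitlab-runner/fargate"
    prepare_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "prepare"]
    run_exec     = "/opt/gitlab-runner/fargate"
    run_args     = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "run"]
    cleanup_exec = "/opt/gitlab-runner/fargate"
    cleanup_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "cleanup"]
```

```toml
# fargate.toml -- tells the driver where to launch each job's ECS task
LogLevel = "info"

[Fargate]
  Cluster        = "my-ci-cluster"      # placeholder ECS cluster
  Region         = "us-east-1"          # placeholder region
  Subnet         = "subnet-xxxxxxxx"    # placeholder subnet
  SecurityGroup  = "sg-xxxxxxxxxxxx"    # placeholder security group
  TaskDefinition = "ci-task:1"          # placeholder task definition
  EnablePublicIP = true

[TaskMetadata]
  Directory = "/opt/gitlab-runner/metadata"

[SSH]
  Username = "root"
  Port = 22
```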
When a job is triggered in a GitLab pipeline, it invokes a container in ECS and creates a new task; once the job completes, the task is destroyed. This way the ECS cluster has no tasks running unless a triggered pipeline is active on GitLab.
The container image I am currently using for building jobs on ECS is available here.
I have gone through the official ECS Fargate documentation as well; it seems only Windows is supported.
I am wondering whether what I am looking for is even achievable in this case, i.e. are there any official Docker files for GitLab runners on Windows and macOS that would support my current architecture, or any customization needed for the files on the runner manager's end?

Running Kubernetes across cloud providers

Our goal is to run Kubernetes on AWS and Azure with minimal customization (using each provider's managed Kubernetes environment) and minimal support and maintenance. We need portability of containers across cloud providers.
Our preferred cloud provider is AWS, and we are planning on running containers in EKS. We want to understand the customization effort required to run these containers in AKS.
Would you recommend choosing a container management platform like Pivotal Cloud Foundry or Red Hat OpenShift, or running on AWS EKS and AKS, where less customization is needed to run containers across different cloud providers?
You need to define a common set of storage classes that map to similar volume types on each provider. If you are using a provider-based Ingress controller, those can vary, so I would recommend using an in-cluster one like nginx or Traefik. If you are using customization annotations for things like networking, those can vary too, but using those is pretty rare. Otherwise, k8s is k8s.
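To illustrate the storage-class point, a sketch of one common class name backed by a different provisioner on each provider (the class name and volume types here are just example choices):

```yaml
# EKS: "standard" backed by gp3 EBS volumes via the AWS EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
---
# AKS: the same "standard" name backed by Azure managed disks
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: disk.csi.azure.com
parameters:
  skuName: StandardSSD_LRS
```

Workloads then request storageClassName: standard and remain portable across both clusters without manifest changes.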

How to apply a Kubernetes cluster to existing Azure virtual machines

I have existing Azure virtual machines with 30 Docker containers deployed on them.
I have decided to use a Kubernetes service/cluster to manage the deployment of Docker containers on those existing Azure virtual machines.
I have also deployed an Azure registry to store Docker images.
Is there a possible way to do this?
Please share your opinion.
If you are familiar with Ansible, then the best way is probably Kubespray. It is capable of creating clusters of almost any complexity and also has many features that other cluster management tools like kubeadm lack.
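A rough sketch of bootstrapping Kubespray against existing VMs, following its README (the IP addresses below are placeholders for your VMs, which must be reachable over SSH):

```bash
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
pip3 install -r requirements.txt

# Start from the sample inventory and point it at the existing VMs
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(10.0.0.4 10.0.0.5 10.0.0.6)   # placeholder VM addresses
CONFIG_FILE=inventory/mycluster/hosts.yaml \
  python3 contrib/inventory_builder/inventory.py ${IPS[@]}

# Run the playbook over SSH to turn the VMs into a cluster
ansible-playbook -i inventory/mycluster/hosts.yaml \
  --become --become-user=root cluster.yml
```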

Using Kubernetes as Docker container orchestration in PCF

I have a requirement to use Docker containers in PCF deployed on Azure,
and now we want to use Kubernetes as the container orchestrator.
Can Kubernetes be used here,
or will PCF take care of the container orchestration?
Which would be the better approach here?
PKS is Pivotal's answer to running Kubernetes in PCF (regardless of IaaS).
Pivotal Cloud Foundry (PCF) is a sophisticated answer from Pivotal to current cloud expectations. PCF offers a strong platform to run Microsoft-based technology like .NET, and smoothly supports enterprise Java applications. You can run Kubernetes there with fine results, but to achieve comfortable orchestration and management of containers I suggest reading about GKE or setting up your own Kubernetes cluster using the Kubespray utility.
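Worth noting that PCF can also schedule plain Docker images itself, without Kubernetes; a minimal sketch with the cf CLI (enabling the feature flag requires an admin, and the app name and registry path are placeholders):

```bash
# One-time, as an admin: allow Diego to run Docker images
cf enable-feature-flag diego_docker

# Push a container image directly; PCF's Diego handles the scheduling
cf push my-app --docker-image registry.example.com/team/app:1.0
```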

Changes required for the Kubernetes provisioning script (Kubernetes + CoreOS on Microsoft Azure) to run in a production environment?

I am planning a production deployment with Kubernetes + CoreOS on Microsoft Azure, and I plan to run a couple of microservices in the cluster. My plan is to have 5 nodes and 5-6 pods, each with 3-5 instances. I was following the official Kubernetes documentation and found https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/getting-started-guides/coreos/azure/README.md really helpful; the script works great. But I don't think it's production-ready for my use case, because:
the deployed VMs are not assigned to Availability Sets
Not able to specify an existing Virtual Network, Resources, location etc.
I am a newbie in this field. Can someone let me know what steps need to be taken to make this a real production environment?
the deployed VMs are not assigned to Availability Sets
That is true indeed. As the author and maintainer of the guide, I would welcome a pull request to enable this; it should be quite easy, and probably similar to how affinity groups are currently handled.
Not able to specify an existing Virtual Network, Resources, location etc.
This is a very good point; however, it's probably best to refactor the current ad-hoc JavaScript wrapping into something more streamlined with Azure Resource Manager, which wasn't generally available at the time I implemented that integration.
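As an illustration of the Availability Set gap, an ARM-template fragment along these lines is what such a refactor could emit (the resource name and domain counts here are placeholder values):

```json
{
  "type": "Microsoft.Compute/availabilitySets",
  "apiVersion": "2019-07-01",
  "name": "kube-node-avset",
  "location": "[resourceGroup().location]",
  "sku": { "name": "Aligned" },
  "properties": {
    "platformFaultDomainCount": 2,
    "platformUpdateDomainCount": 5
  }
}
```

Each VM resource would then reference it via "availabilitySet": { "id": "[resourceId('Microsoft.Compute/availabilitySets', 'kube-node-avset')]" } so Azure spreads the nodes across fault and update domains.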
