How to manage patching on multiple AWS accounts with different schedules - Linux

I'm looking for the best way to manage patching Linux systems across AWS accounts with the following things to consider:
Separate schedules to roll patches through Dev, QA, Staging and Prod sequentially
Production patches to be released on approval, not automatic
No patches newer than those already deployed to the lower environments may be deployed to Production (new patches are released periodically throughout the month)
We have started by caching all patches in all environments on the first Sunday of every month, with the goal of then installing patches from that cache. This helps prevent un-vetted patches from being installed in Production.
Most, but not all, instances are managed by OpsWorks, spread across numerous OpsWorks stacks. Some other instances are managed by Chef Server, and still others are unmanaged, plain EC2 instances created from the EC2 console. This means that using recipes requires kicking off approved patches on a stack-by-stack or instance-by-instance basis. Not optimal.
More recently, we have looked at the new features of SSM, using a central AWS account to manage instances. However, this causes problems with some applications, because the AssumeRole for SSM adds credentials to the .aws/config file, which interferes with other tasks we need to run.
We have considered other tools, such as Ansible, but we would like to explore staying within the toolset we currently have, which is largely OpsWorks and Chef Server. I'm looking for higher-level ideas: an architecture for how one would approach this scenario.
Thanks for any thoughts or ideas.

This sounds like exactly the kind of scenario Run Command was designed for.
You can create multiple groups of servers with different schedules based on tags. More importantly, you don't need to rely on secrets/keys being deployed anywhere.
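For example, a monthly patch run against the Dev wave can be kicked off with a single API call against a tag, regardless of whether the instance sits in an OpsWorks stack, under Chef Server, or is a plain EC2 instance. Here is a minimal boto3 sketch, where the PatchGroup tag, the region, and the concurrency settings are placeholder assumptions rather than anything from the question:

```python
# Hypothetical sketch: kick off a patch run for one environment "wave"
# with SSM Run Command, targeting instances by tag instead of by stack.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")  # placeholder region

response = ssm.send_command(
    Targets=[{"Key": "tag:PatchGroup", "Values": ["Dev"]}],  # placeholder tag
    DocumentName="AWS-RunPatchBaseline",    # AWS-managed patching document
    Parameters={"Operation": ["Install"]},  # "Scan" reports only; "Install" applies
    MaxConcurrency="25%",                   # roll through the group gradually
    MaxErrors="1",                          # stop early if instances start failing
    Comment="Monthly patch rollout - Dev wave",
)

print("CommandId:", response["Command"]["CommandId"])
```

AWS-RunPatchBaseline applies the patch baseline associated with the instance's patch group (or the default baseline), so Patch Manager baselines with approval rules or an explicit approved-patch list are one way to express the "nothing newer in Prod than in the lower environments" constraint, and the same call can be scheduled per environment via SSM Maintenance Windows.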

Related

How many agents should I have?

I'm trying to build a branch-based GitOps declarative infrastructure for Kubernetes. I plan to create clusters on a cloud provider with Crossplane, and those clusters will be stored in GitLab. However, as I start building, I seem to be running into gitlab-agent sprawl.
Every application I will be deploying to each of my environments is stored in a separate git repo, and I'm wondering if I need a separate agent for each repo and environment. For example, I have my three clusters prod, stage, and dev, and my three apps, API, kafka, and DB. I've started with three agents per repo (gitlab-agent-api-prod, gitlab-agent-kafka-stage, ...), which seems a bit excessive. Do I really need 9 agents?
Additionally, I now have to install as many agents as I have apps onto each of my clusters, which already eats up significant resources. I'd imagine I can get away with one GitLab agent per cluster; I'm just not seeing how that is done. Any help would be appreciated!
PS: If anyone has a guide on how to automatically add GitLab agents to new clusters created with Crossplane, I'm all ears. Thanks!

Host multiple services that need the same ports open on GitLab CI

This is an issue I've been postponing for a while, but I need to get it fixed at some point.
Basically, I have two services which I have containerized and registered in my GitLab registry. The two services represent different versions of the same program. In my automated tests, I have two test suites which test for backwards compatibility with these services. The issue is that when my automated tests run, only one service runs fine because, I assume, there is a conflict over the ports they use, and so GitLab can't run all of the services at the same time.
Is there a way to get around this without making the ports specifiable in the code? That option would take the most time, and I'd rather leave it as a last resort.
It seems, after some digging, that making the ports configurable is my only option. As per the GitLab Runner Kubernetes executor documentation: "You cannot use several services using the same port (e.g., you cannot have two mysql services at the same time)." https://docs.gitlab.com/runner/executors/kubernetes.html
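Since the services' own code isn't shown, here is a minimal sketch of what "making the ports configurable" could look like, assuming (purely for illustration) the service is a small Python HTTP server; the SERVICE_PORT variable name is made up:

```python
# Hypothetical example: let the listen port be overridden through an
# environment variable so two versions of the same service can run side
# by side in one CI job (e.g. SERVICE_PORT=8080 and SERVICE_PORT=8081).
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Trivial health-check endpoint so the tests can tell the service is up.
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok\n")

def main():
    # Fall back to 8080 when the variable isn't set.
    port = int(os.environ.get("SERVICE_PORT", "8080"))
    HTTPServer(("0.0.0.0", port), PingHandler).serve_forever()

if __name__ == "__main__":
    main()
```

Each version of the container can then be started with a different value of the variable, so the two services no longer fight over the same port.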

Linux Server Patching - GCP

I want to patch my Linux instances hosted on Google Cloud Platform.
Is there any native tool available on Google Cloud Platform, like Azure Update Manager, or do we have to use a third-party tool?
At this moment, GCP doesn't have a product that fulfills patch management the way Azure Update Management does. However, there are some workarounds for managing patch updates across a large number of VMs.
a). Set up a startup script to execute certain maintenance routines; however, restarting the VM is necessary. Startup scripts can perform many actions, such as installing software, performing updates, turning on services, and any other tasks defined in the script [1] (a minimal sketch of such a script follows this list).
b). If we want to patch a large number of instances, a Managed Instance Group [2] could also be an alternative, as the managed instance group automatic updater safely deploys new versions of software to instances in a MIG and supports a flexible range of rollout scenarios. We can also control the speed and scope of deployment, as well as the level of disruption to service [3].
c). We could use OS Inventory Management [4] to collect and view operating system details for VM instances. These details include operating system information such as hostname, operating system, and kernel version, as well as installed packages and available package updates for the operating system. The process is described here [5].
d). Finally, there is also the possibility of setting up automatic security updates directly in CentOS or Red Hat 7.
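As a rough illustration of option (a), the maintenance routine can be as simple as the script below. It is written in Python purely for illustration (a plain shell script works just as well) and assumes a yum-based image such as CentOS or Red Hat; swap in apt-get for Debian-based images:

```python
#!/usr/bin/env python3
# Hypothetical maintenance script: attach it to an instance as
# startup-script metadata (or run it from cron) so updates are applied
# on each boot. Assumes a yum-based image; adjust for apt-based ones.
import subprocess
import sys

def run(cmd):
    """Run a command, echoing it first, and return its exit code."""
    print("+ " + " ".join(cmd), flush=True)
    return subprocess.run(cmd, check=False).returncode

def main():
    # Apply all available updates non-interactively.
    if run(["yum", "-y", "update"]) != 0:
        sys.exit("yum update failed")

if __name__ == "__main__":
    main()
```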
I hope the above information is useful.
RESOURCES:
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups
[3] https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
[4] https://cloud.google.com/compute/docs/instances/os-inventory-management
[5] https://cloud.google.com/compute/docs/instances/view-os-details#query-inventory-data
Thank you all who shared your knowledge!
GCP does not currently have any such patch management tool. If you would like to patch your servers, you would have to set up a cron job (either with crontab or another cron service, like a GKE CronJob) to run the appropriate update command.
I think it was released after this question was asked (around April 2020), but GCP now offers a native patch service for VMs called OS Patch Management. You can learn more about it here: https://cloud.google.com/compute/docs/os-patch-management
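Patch jobs can also be started programmatically. Below is a minimal sketch using the google-cloud-os-config client library, where the project ID, the env=dev label, and the one-hour timeout are placeholder assumptions; dry_run is set so nothing is actually changed:

```python
# Hypothetical sketch: trigger an on-demand OS Patch Management job with
# the google-cloud-os-config library (pip install google-cloud-os-config).
from google.cloud import osconfig_v1
from google.protobuf import duration_pb2

def run_patch_job(project_id: str) -> None:
    client = osconfig_v1.OsConfigServiceClient()

    job = client.execute_patch_job(
        request={
            "parent": f"projects/{project_id}",
            "description": "Monthly Linux patch run",
            # Only target instances labelled env=dev (placeholder label).
            "instance_filter": {"group_labels": [{"labels": {"env": "dev"}}]},
            # Time out the job after one hour.
            "duration": duration_pb2.Duration(seconds=3600),
            # Report what would be patched without changing anything.
            "dry_run": True,
        }
    )
    print("Started patch job:", job.name)

if __name__ == "__main__":
    run_patch_job("my-project")  # placeholder project ID
```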

How to make a cluster of GitLab instances

Is it possible to create a cluster of multiple GitLab instances (multiple machines)? My instance is over-utilized and I would like to add other machines, but at the same time it should be transparent for users to access their projects; they shouldn't care which instance a project is hosted on.
What could be the best solution to help the users?
I'm on GitLab Community Edition 10.6.4
Thanks for your help,
Leonardo
I reckon you are talking about scaling the GitLab server, not GitLab Runners.
GitLab Omnibus is a fairly complex system with multiple components; some are stateless and some are stateful.
If you currently have everything on the same server, the easiest option is to scale up (move to a bigger machine).
If you can't, you can extract the stateful components and host them separately: PostgreSQL, Redis, and files on NFS.
Funnily enough, you can make performance worse here.
As a next step, you can scale out the stateless side.
But it is in no way an easy task.
I'd suggest starting by setting up proper monitoring to see where your limitations are (CPU, RAM, IO) and which components are the bottlenecks.
See the docs, including some examples of scaling:
https://docs.gitlab.com/ee/administration/high_availability/
https://about.gitlab.com/solutions/high-availability/
https://docs.gitlab.com/charts/
https://docs.gitlab.com/ee/development/architecture.html
https://docs.gitlab.com/ee/administration/high_availability/gitlab.html

Handling multiple environments, spinning up environment for testing in Azure Devops

We have a large-scale service architecture with multiple features. Currently, there are 3 environments apart from production, which causes issues in sharing environments across multiple teams working on the same application but on different features.
What would be nice is if every programmer were able to test his/her code in an integrated environment before merging to master.
Is there a way in Azure DevOps to spin up a new temporary environment for testing that can be torn down after the code is merged to master?
I am not sure how to get started on this. It would be great if someone could help me.
