Linux Servers Patching - GCP

I want to patch my Linux instances hosted on Google Cloud Platform.
Is there any native tool available on Google Cloud Platform, like Azure Update Manager, or do we have to use a 3rd party tool?

At this moment GCP doesn't have a product that covers patch management the way Azure Update Management does. However, there are some workarounds for managing the patch updates of a large number of VMs.
a) Set up a startup script to execute maintenance routines. Note that the script only runs at boot, so restarting the VM is necessary. Startup scripts can perform many actions, such as installing software, performing updates, turning on services, and any other tasks defined in the script [1].
b) If we want to patch a large number of instances, a Managed Instance Group [2] could also be an alternative: the managed instance group's automatic updater safely deploys new versions of software to instances in the MIG and supports a flexible range of rollout scenarios. We can also control the speed and scope of the deployment as well as the level of disruption to the service [3].
c) We could use OS Inventory Management [4] to collect and view operating system details for VM instances. These details include operating system information such as hostname, operating system, and kernel version, as well as installed packages and available package updates for the operating system. The process is described here [5].
d) Finally, there's also the possibility of setting up automatic security updates directly in CentOS or Red Hat 7.
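The startup-script approach in (a) can be sketched with gcloud; the instance name, zone, and Debian-based package commands below are illustrative placeholders:

```shell
# Attach a startup script that applies pending package updates on every boot.
gcloud compute instances add-metadata my-instance \
    --zone us-central1-a \
    --metadata startup-script='#! /bin/bash
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y upgrade'

# Startup scripts only run at boot, which is why a restart is required:
gcloud compute instances reset my-instance --zone us-central1-a
```

Since the script runs on every boot, restarting the VM is what actually triggers the update pass.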
I hope the above information is useful.
RESOURCES:
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups
[3] https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
[4] https://cloud.google.com/compute/docs/instances/os-inventory-management
[5] https://cloud.google.com/compute/docs/instances/view-os-details#query-inventory-data
Thank you to everyone who shared their knowledge!

GCP does not have any such patch management service currently. If you would like to patch your servers, you would have to set up a cron job (either with crontab or another cron service like a GKE CronJob) to run the appropriate update command.
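A minimal cron entry for this could look as follows, assuming a Debian-based image; the schedule is arbitrary:

```shell
# /etc/cron.d/weekly-patching — run as root every Sunday at 03:00.
# Package-manager flags are for Debian/Ubuntu; use yum/dnf on RHEL-based images.
0 3 * * 0 root apt-get update -q && DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
```

Note that files in /etc/cron.d need the extra user field (`root` here) between the schedule and the command, unlike a per-user crontab.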

I think it was released after this question was asked (around April 2020), but GCP now offers a native VM patch service called OS Patch Management for its VMs. You can learn more about it here: https://cloud.google.com/compute/docs/os-patch-management
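With OS Patch Management, a one-off patch job can be started from the gcloud CLI; the label filter and display name below are illustrative:

```shell
# Patch every instance in the project (requires the OS Config agent on the VMs).
gcloud compute os-config patch-jobs execute --instance-filter-all

# Or target only instances carrying a specific label:
gcloud compute os-config patch-jobs execute \
    --instance-filter-group-labels=env=dev \
    --display-name="dev-weekly-patching"
```

The service can also schedule recurring patch deployments from the console, so the cron-based workarounds above are no longer strictly necessary.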

Related

Is it possible to change the image running on a virtual machine in Azure without recreating it

Our development environment has a bunch of virtual machines running different versions of our software. I want to be able to replace the Managed Image that is running on a VM, without having to destroy and recreate it.
The images are created using packer, which provisions them with the correct software and dependencies.
Example of Current Workflow:
Machine A is running on Managed Image v2.5, which runs software with a dependency on Tomcat 10.
To fix a bug in v2.2, which depends on Tomcat 9 and thus cannot run on the same VM without changing the dependencies, I have to:
Destroy the VM
Recreate it using the same arguments (name, size, etc) but based on Managed Image v2.2
Attach the network interface and disks
Restart it
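The destroy-and-recreate steps above can be sketched with the Azure CLI; the resource group, VM, NIC, and image names are placeholders:

```shell
# Delete only the VM object; by default its NIC and disks are left in place.
az vm delete --resource-group my-rg --name machine-a --yes

# Recreate the VM with the same NIC, but based on the older managed image.
az vm create \
    --resource-group my-rg \
    --name machine-a \
    --image /subscriptions/<sub>/resourceGroups/my-rg/providers/Microsoft.Compute/images/app-image-v2.2 \
    --nics machine-a-nic \
    --size Standard_D2s_v3
```

Because `az vm delete` leaves NICs and disks behind by default, they can simply be re-attached on recreation, which is what makes this workflow possible at all.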
I feel like there should be an easier solution to this, where it is possible to hot-swap the images without recreating the full virtual machines. I've looked into swapping the OS disk, but I couldn't figure out a solution that would work with Managed Images instead of VHDs.
As per the official article, it's not supported.
Microsoft does not support an upgrade of the operating system of a
Microsoft Azure virtual machine. Instead, you should create a new
Azure virtual machine that is running the supported version of the
operating system that is required and then migrate the workload.
Official article : https://support.microsoft.com/en-us/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines
Instead, you can use the Windows Server migration tools, which assist with migrating roles and features:
Install, use, and remove Windows Server migration tools
Similar issue discussed at Can you do in-place updating/upgrading of Azure VM Operating System?

How are OS patches with critical security updates handled on GCE, GKE and AI Platform notebooks?

Is there complete documentation that explains if and how critical security updates are applied to an OS image on the following IaaS/PaaS?
GCE VM
GKE (VMs in a cluster)
VM on which an AI Platform notebook is running
In which cases is the GCP team taking care of these updates and in which cases should we take care of it?
For example, in the case of a GCE VM (Debian OS) the documentation seems to indicate that no patches are applied at all and no reboots are done.
What are people doing to keep GCE or other VMs up to date with critical security updates, if this is not managed by GCP? Will just restarting the VM do the trick? Is there some special parameter to set in the YAML template of the VM? I guess for GKE or AI notebook instances, this is managed by GCP since this is PaaS, right? Are there some third party tools to do that?
As John mentioned, for GCE VM instances you are responsible for all of the package updates, and it is handled like in any other system:
Linux: sudo apt/yum update/upgrade
Windows: Windows update
There are some internal tools in each GCE image that can help you automatically update your system:
Windows: automatic updates are enabled by default
RedHat/CentOS systems: you can use the yum-cron tool to enable automatic updates
Debian: use the unattended-upgrades tool
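The RedHat/CentOS and Debian options above can be enabled with a few commands; package names are those of the respective distributions:

```shell
# CentOS/RHEL 7: install and start yum-cron for automatic updates.
sudo yum -y install yum-cron
sudo systemctl enable --now yum-cron

# Debian/Ubuntu: install unattended-upgrades and enable the periodic job.
sudo apt-get -y install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
```

For yum-cron, whether updates are actually applied (rather than only downloaded) is controlled by the `apply_updates` setting in /etc/yum/yum-cron.conf.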
As for GKE, I think this is done when you upgrade your cluster version: the version of the master is upgraded automatically (since it is Google-managed), but the nodes should be upgraded by you. The node update can be automated; please see the second link below for more information.
Please check the following links for more details on how the Upgrade process works in GKE:
Upgrading your cluster
GKE Versioning and upgrades
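The node auto-upgrade mentioned above can be turned on per node pool with gcloud; the cluster, pool, and zone names are placeholders:

```shell
# Let GKE automatically upgrade nodes in this pool to track the master version.
gcloud container node-pools update default-pool \
    --cluster my-cluster \
    --zone us-central1-a \
    --enable-autoupgrade
```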
As for "VM on which is running AI Platform notebook", I don't understand what you mean by this. Could you provide more details?

While creating new Azure Function App in what scenario do I select operating system other than Windows?

We created and tested several Azure Function Apps hosted on Windows. When creating a new Azure Function App, in what scenario would I select an OS other than Windows, meaning Linux or Docker?
I created test instances for all three OS selection options and basic settings of each of them appear to be very close.
Linux or Docker is useful if your functions have dependencies that only work on Linux/Docker. For example, some node.js native libraries only work on Linux, and will never work on Windows.
If you don't need Linux for anything specific, then I suggest sticking to Windows since that is currently (at the time of writing) the best and most supported environment for running Azure Functions.
Azure Functions 2.0 runtime is based on .NET Core, so it is cross-platform. If you choose Linux/Docker, Functions runtime will be deployed on Linux.
2.0 is still in preview, so Linux/Docker is not supported in production yet. For now, the Consumption Plan (pay per call) is not supported either.
See The Azure Functions on Linux Preview. Quote:
Functions on Linux can be hosted in a dedicated App Service tier in 2 different modes:
You bring the Function App code and we provide and manage the container, no specific Docker related knowledge required.
You bring your own Docker container including the Azure Functions runtime 2.0, specific dependencies, and Function App code.
For Consumption mode, the cold start varies a little bit among the OSes.
It looks like, although the average time is very close between Windows and Linux, the best and worst cases are much better for Linux... which kind of makes sense.
Check this as a good reference: https://mikhail.io/serverless/coldstarts/azure/
Now, if you are deploying to a dedicated App Service Plan, it plays a bigger role. Linux plans are cheaper than Windows plans due to the OS licensing cost.

How to manage patching on multiple AWS accounts with different schedules

I'm looking for the best way to manage patching Linux systems across AWS accounts with the following things to consider:
Separate schedules to roll patches through Dev, QA, Staging and Prod sequentially
Production patches to be released on approval, not automatic
No newer patches can be deployed to Production than what was already deployed to lower environments (as new patches come out periodically throughout the month)
We have started by caching all patches in all environments on the first Sunday of every month. The goal there was to then install patches from cache. This helps prevent un-vetted patches being installed in prod.
Most, not all, instances are managed by OpsWorks, but there are numerous OpsWorks stacks. We have some other instances managed by Chef Server. Still others are not managed, but are just simple EC2 instances created from the EC2 console. This means, using recipes means we have to kick off approved patches on a stack-by-stack basis or instance-by-instance basis. Not optimal.
More recently, we have looked at the new features of SSM, using a central AWS account to manage instances. However, this causes problems with some applications because the AssumeRole for SSM adds credentials to the .aws/config file that interfere with other tasks we need to run.
We have considered other tools, such as Ansible, but we would like to explore staying within the toolset we currently have which is largely OpsWorks and Chef Server. I'm looking for ideas that are more on a higher level, an architecture of how one would approach this scenario.
Thanks for any thoughts or ideas.
This sounds like one of the exact scenarios RunCommand was designed for.
You can create multiple groups of servers with different schedules based on tags. More importantly, you don't need to rely on secrets/keys being deployed anywhere.
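A tag-targeted patch run with Run Command might look like this; the tag key and values are illustrative:

```shell
# Install pending patches on all instances tagged PatchGroup=Dev,
# using the managed AWS-RunPatchBaseline SSM document.
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=tag:PatchGroup,Values=Dev" \
    --parameters "Operation=Install"
```

For the production approval gate, the same command can be run with `Values=Prod` only after sign-off, or wrapped in a Maintenance Window whose execution you trigger manually.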

Cloud environment on Windows Azure platform

I've got 6 web sites, 2 databases and 1 cloud environment set up on my account.
I used the cloud environment to run some tasks via the Windows Task Manager. Everything was installed on my D drive, but between last week and today, the 8th of March, my folder containing the "exe" to run has been removed.
Also, I had installed TortoiseSVN to get the files deployed, and it is not installed anymore.
I wonder if somebody has a clue about my problem.
Best Regards
Franck merlin
If you're using Cloud Services (web/worker roles), these are stateless virtual machines. That is: Windows Azure provides the operating system, then brings your deployment package into the environment after bootup. Every single virtual machine instance booted this way starts from a clean OS image, along with the exact same set of code bits from you.
Should you RDP into the box and manually install anything, anything you install is going to be temporary at best. Your stuff will likely survive reboots. However, if the OS needs updating (especially the underlying host OS), your changes will be lost as a fresh OS is brought up.
This is why, with Cloud Services, all customizations should be done via startup tasks or the OnStart() event. You should never manually install anything via RDP since:
Your changes will be temporary
Your changes won't propagate to additional instances; you'll be required to RDP into every single box to perform the same changes.
You may want to download the Azure Training Kit and look through some of the Cloud Service labs to get a better feel for startup tasks.
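A startup task is declared in the cloud service's ServiceDefinition.csdef; the role and script names here are illustrative:

```xml
<WebRole name="MyWebRole">
  <Startup>
    <!-- Runs InstallDependencies.cmd with admin rights before the role starts. -->
    <Task commandLine="InstallDependencies.cmd"
          executionContext="elevated"
          taskType="simple" />
  </Startup>
</WebRole>
```

Anything the role needs (SVN tooling, scheduled-task registration, and so on) belongs in a script like this, so it is re-applied automatically whenever the instance is re-imaged.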
In addition to what David said, check out http://blogs.msdn.com/b/kwill/archive/2012/10/05/windows-azure-disk-partition-preservation.aspx for the scenarios where the different drives will be destroyed.
Also take a look at http://blogs.msdn.com/b/kwill/archive/2012/09/19/role-instance-restarts-due-to-os-upgrades.aspx which points you to the RSS feed and MSDN article where you can see that a new OS is currently being deployed.
