Does bare-metal CoreOS etcd2 support the templating feature of coreos-cloudinit?

If the platform environment supports the templating feature of
coreos-cloudinit, it is possible to automate etcd configuration with the
$private_ipv4 and $public_ipv4 fields.
...
Note: The $private_ipv4 and $public_ipv4 substitution variables
referenced in other documents are only supported on Amazon EC2, Google
Compute Engine, OpenStack, Rackspace, DigitalOcean, and Vagrant.
CoreOS Documentation
If I run this on bare metal, does this mean that I have to set the IP addresses manually? That the values won't be replaced? And if so, how would I do that?
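On bare metal the variables are indeed not substituted, so a common workaround is to write the addresses into /etc/environment yourself and have your etcd2 unit reference them via EnvironmentFile=/etc/environment and ${COREOS_PRIVATE_IPV4}. A minimal sketch; the interface name eth0 and the single-NIC assumption are mine, so adjust for your hardware:

```shell
#!/bin/sh
# Sketch: derive this machine's IPv4 address and write the variables that
# CoreOS units conventionally read via EnvironmentFile=/etc/environment.
# eth0 is an assumption -- replace with your real interface name.
ENV_FILE="${ENV_FILE:-/etc/environment}"
IP="$(ip -4 addr show eth0 2>/dev/null | awk '/inet /{sub(/\/.*/,"",$2); print $2; exit}')"
IP="${IP:-127.0.0.1}"   # fallback keeps the sketch runnable without eth0
cat > "$ENV_FILE" <<EOF
COREOS_PRIVATE_IPV4=${IP}
COREOS_PUBLIC_IPV4=${IP}
EOF
```

A unit drop-in can then use ${COREOS_PRIVATE_IPV4} where the cloud-config template would have used $private_ipv4.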

Related

How do I configure Linux swap space on AWS Elastic Beanstalk running AWS Linux 2?

The answer to Can I configure Linux swap space on AWS Elastic Beanstalk? (from 2016) shows how to configure Linux swap space for an AWS Elastic Beanstalk environment using .ebextensions configuration files.
However, the AWS docs Customizing software on Linux servers has this note for the newer Amazon Linux 2 platforms:
On Amazon Linux 2 platforms, instead of providing files and commands in .ebextensions configuration files, we highly recommend that you use Buildfile, Procfile, and platform hooks whenever possible to configure and run custom code on your environment instances during instance provisioning.
How do I configure swap space using this more modern configuration approach?
Buildfile and Procfile are not suited for that. They serve different purposes: Buildfile runs short one-off build commands, while Procfile defines the long-running processes of your application.
I would use the platform hooks for that. Specifically, prebuild:
Files here run after the Elastic Beanstalk platform engine downloads and extracts the application source bundle, and before it sets up and configures the application and web server.
The rationale is that it's better to create swap now, before the application starts configuring. If the swap creation operation fails, you get notified fast, rather than after you have set up your application.
From the SO link, you could put 01_add-swap-space.sh into the .platform/hooks/prebuild/ folder. Please make sure that 01_add-swap-space.sh is executable (chmod +x) before you package your application into a zip.
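A sketch of such a hook, adapted from the linked answer; the 2 GB size and the /var/swapfile path are assumptions, so size the swap to your instance type:

```shell
#!/bin/bash
# .platform/hooks/prebuild/01_add-swap-space.sh
# Creates and activates a swap file before the application is configured.
set -eu

SWAP_FILE="${SWAP_FILE:-/var/swapfile}"
SWAP_MB="${SWAP_MB:-2048}"   # 2 GB; adjust to your instance type

if [ ! -f "$SWAP_FILE" ]; then
  dd if=/dev/zero of="$SWAP_FILE" bs=1M count="$SWAP_MB"
  chmod 600 "$SWAP_FILE"
  mkswap "$SWAP_FILE"
fi
# Swap may already be active on redeploys; don't fail the deploy for that.
swapon "$SWAP_FILE" || true
```

Remember the chmod +x on the hook file itself before zipping the bundle.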

How are OS patches with critical security updates applied on GCE, GKE and AI Platform notebooks?

Is there complete documentation that explains if and how critical security updates are applied to an OS image on the following IaaS/PaaS?
GCE VM
GKE (VMs in a cluster)
VM on which an AI Platform notebook is running
In which cases is the GCP team taking care of these updates and in which cases should we take care of it?
For example, in the case of a GCE VM (Debian OS) the documentation seems to indicate that no patches are applied at all and no reboots are done.
What are people doing to keep GCE or other VMs up to date with critical security updates, if this is not managed by GCP? Will just restarting the VM do the trick? Is there some special parameter to set in the YAML template of the VM? I guess for GKE or AI notebook instances, this is managed by GCP since this is PaaS, right? Are there some third party tools to do that?
As John mentioned, for GCE VM instances you are responsible for all package updates, and they are handled as in any other system:
Linux: sudo apt/yum update/upgrade
Windows: Windows update
There are some internal tools in each GCE image that could help you to automatically update your system:
Windows: automatic updates are enabled by default
RedHat/Centos systems: you can use yum-cron tool to enable automatic updates
Debian: using the tool unattended-upgrade
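For the Debian case, after installing the package (sudo apt-get install unattended-upgrades), the periodic run is controlled by a small apt configuration file. A sketch of that file; the "1" values are the common defaults rather than anything GCE-specific:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```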
As for GKE, this happens when you upgrade your cluster version: the master is upgraded automatically (since it is Google-managed), but the nodes must be upgraded by you. Node upgrades can be automated; please see the second link below for more information.
Please check the following links for more details on how the Upgrade process works in GKE:
Upgrading your cluster
GKE Versioning and upgrades
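To automate the node side, GKE's node auto-upgrade can be enabled per node pool. A sketch with gcloud; the pool, cluster, and zone names are placeholders:

```shell
gcloud container node-pools update default-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --enable-autoupgrade
```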
As for "VM on which is running AI Platform notebook", I don't understand what you mean by this. Could you provide more details?

Kubernetes cluster Nodes not creating automatically when other lost in Kubespray

I have successfully deployed a multi-master Kubernetes cluster using the repo https://github.com/kubernetes-sigs/kubespray and everything works fine. But when I stop/terminate a node in the cluster, a new node does not join the cluster. When I deployed Kubernetes using kops, new nodes were created automatically when one was deleted. Is this the expected behaviour in kubespray? Please help.
It is expected behavior because kubespray doesn't create any ASGs, which are AWS-specific resources. One will observe that kubespray only deals with existing machines; they do offer some terraform toys in their repo for provisioning machines, but kubespray itself does not get into that business.
You have a few options available to you:
Post-provision using scale.yml
Provision the new Node using your favorite mechanism
Create an inventory file containing it and the etcd machines (presumably so kubespray can issue etcd certificates for the new Node)
Invoke the scale.yml playbook
You may enjoy AWX in support of that.
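The invocation itself is short; a sketch, assuming a standard kubespray checkout and an inventory at inventory/mycluster/hosts.yml (the path and node name are placeholders):

```shell
# Run only against the new machine, leaving the rest of the
# cluster undisturbed:
ansible-playbook -i inventory/mycluster/hosts.yml scale.yml \
  -b --limit=new-node-1
```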
Using plain kubeadm join
This is the mechanism I use for my clusters, FWIW
Create a kubeadm join token using kubeadm token create --ttl 0 (or whatever TTL you feel comfortable using)
You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances
Use the cloud-init mechanism to ensure that docker, kubeadm, and kubelet binaries are present on the machine
You are welcome to use an AMI for doing that, too, if you enjoy building AMIs
Then invoke kubeadm join as described here: https://kubernetes.io/docs/setup/independent/high-availability/#install-workers
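Concretely, the two halves of that flow look something like this; the API server address, token, and hash below are placeholders, and the real values come out of the first command:

```shell
# On an existing control-plane node: mint a join token.
# --ttl 0 makes it non-expiring, per the tolerance mentioned above.
kubeadm token create --ttl 0 --print-join-command

# On the new worker (e.g. from cloud-init), using the printed values:
kubeadm join 10.0.0.10:6443 \
  --token abcdef.0123456789abcdef \
  --discovery-token-ca-cert-hash sha256:<hash>
```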
Use a Machine Controller
There are plenty of "machine controller" components that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work. That link was just the first one that came to mind, but there are others, too
Our friends over at Kubedex have an entire page devoted to this question

How can I connect to AWS machines with ease from my laptop?

I need to connect to AWS servers but have a hard time figuring out the IP or the SSH key. Is there a way I can easily connect?
I would like a tool that will help me connect to the machines.
Our DevOps team has developed a tool to use with knife.
https://github.com/bluevine-dev/system/tree/master/misc/scripts/knife-con
Knife plugin which allows an elastic and easy-to-use interface to manage SSH connections to AWS EC2 instances and Chef nodes from within the same scope.
Some features of knife-con:
Dynamic Chef environment and AWS account manipulation with a single, easy-to-understand config
Supports in-depth search of Chef node attributes - based on your needs
Dynamic search use cases: you can choose whether to use the combined interface (Chef and AWS), only AWS, or only Chef
Parallel SSH support on search results - you can pass a single shell command to the search results
Fast and easy search results selection with a beautiful UI
Automatic VPN switching - multi platform support (OPENVPN, TUNNELBLICK)
How does the plugin work?
The plugin accepts multiple options such as regular expressions or Chef role patterns. During the search phase it prefers to get its results from a node search on the Chef server: it attempts to locate nodes that have checked in during a pre-set healthy period (24 hours by default; this can be overridden). If no results are returned from the server, the plugin falls back to locating AWS instances based on the search pattern. The plugin offers an easy-to-use, understandable interface for managing the node and instance results, letting you choose the relevant node or instance and establish an SSH session from your local machine.

coreos - how to get cloud-config from remote url?

I'm trying to install CoreOS in a VM and I'm not sure how I can load the cloud-config.yml file from a remote URL without having to use coreos-cloudinit, since it's deprecated.
Is there a way to do that?
coreos-cloudinit is deprecated in favour of Ignition. Ignition configs can indeed be loaded from remote URLs, typically via a coreos.config.url= kernel command-line parameter.
However, different methods exist for specific providers, so it's better to check the Supported Platforms doc and look for the relevant platform.
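For illustration, a minimal Ignition config that such a URL could serve; the version targets Container Linux's Ignition v2 spec, and the hostname content is a placeholder:

```json
{
  "ignition": { "version": "2.2.0" },
  "storage": {
    "files": [{
      "filesystem": "root",
      "path": "/etc/hostname",
      "mode": 420,
      "contents": { "source": "data:,my-coreos-vm" }
    }]
  }
}
```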
