migrating Container Linux (CoreOS) alpha from etcd2 to etcd3 - coreos

I have Container Linux by CoreOS 1353.1.0 installed and it uses etcd2 by default. I can't even find an etcd3 service file (systemctl | grep etcd only shows etcd2.service).
I want to play with etcd3, especially because it's the default storage backend for Kubernetes 1.6.
Is there any way (easy or hard) to migrate from etcd2 to etcd3? When I say migrate, I don't mind reconfiguring my Ignition file and reinstalling the whole OS.
Any information regarding the issue would be greatly appreciated.
How come Container Linux Alpha doesn't come with etcd3?!
Thanks!

I can use etcd-member.service to start a container image with any etcd version that I want. It's cool! :)
more info at https://coreos.com/etcd/docs/latest/getting-started-with-etcd.html#setting-up-etcd
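The usual way to pick the etcd version on Container Linux is a systemd drop-in for etcd-member.service. A minimal sketch, assuming a single node; the drop-in file name, image tag and URLs below are just examples, so check them against the doc linked above:

    sudo mkdir -p /etc/systemd/system/etcd-member.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/etcd-member.service.d/20-etcd3.conf
    [Service]
    # version of the etcd container image to run (example tag)
    Environment="ETCD_IMAGE_TAG=v3.1.5"
    # etcd's own settings, via its standard environment variables
    Environment="ETCD_NAME=node1"
    Environment="ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379"
    Environment="ETCD_ADVERTISE_CLIENT_URLS=http://127.0.0.1:2379"
    EOF
    sudo systemctl daemon-reload
    sudo systemctl enable --now etcd-member.service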

Related

Hazelcast's IMap stopped working after upgrading to version 5.1.1 on K8S

We have an "cache" (javax.cache.Cache) implementation that is a wrapper of Hazelcast's IMap. We use a composite Object key.
We upgraded from version 3.12.5 to 5.1.1. When I deploy the system on a local Windows machine, all works well. But when I deploy the system into an Kubernetes environment, the map just "does not work". Values do not get persisted into the map (after a put operation). An Hazelcast cluster does get formed so it does not seem to be an auto discovery issue. I also have another K8S env in which it does work properly.
I enabled Hazelcast's diagnostic mode and it does not seem to show me anything useful. I do not get any error or warn messages from the com.hazelcast.* package. The same issue happened also when I tried version 4.x.
I am trying to explore ways which will help to the realise what is the issue here. Thank you.
Turns out it is a bug. Hazelcast recommends using a value of 0 instead.
I had the same problem, but I was migrating from Hazelcast 3.11.1 to 5.1.2. I found that IMap now lives in "com.hazelcast.map", not in "com.hazelcast.core".
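If you have many imports to touch, something like this rough sketch can do the bulk of the rename (the src/ path is a placeholder):

    grep -rl 'com.hazelcast.core.IMap' src/ \
      | xargs sed -i 's/com\.hazelcast\.core\.IMap/com.hazelcast.map.IMap/g'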

AWS - Steps/considerations for migrating to the latest generation of EC2 from an older generation

I am looking at upgrading from an older generation EC2 instance to the latest generation, for example m3.medium to m5.large. Are there any steps or considerations that need to be made when making this change? I couldn't find any documentation that is Linux specific, only Windows specific: https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/migrating-latest-types.html.
If anybody could help me I would really appreciate it.
To upgrade the instance type on AWS I usually follow this official doc
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-resize.html
However, it also depends on the complexity of the application running on your instance, e.g. just an Apache server or a database? To handle that situation, as @Marcin mentioned, always start with an AMI of your instance and try any change on that before applying it directly to production instances.
screenshot from the official doc
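The mechanical part of that doc boils down to stop, change type, start, which you can also do from the CLI. A rough sketch (instance ID and target type are placeholders); one Linux-specific consideration when moving to Nitro-based types such as m5 is that the AMI needs ENA and NVMe driver support, which recent Amazon Linux, Ubuntu and Debian images already include:

    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
    aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
      --instance-type "{\"Value\": \"m5.large\"}"
    aws ec2 start-instances --instance-ids i-0123456789abcdef0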

How are OS patches with critical security updates handled on GCE, GKE and AI Platform notebooks?

Is there complete documentation that explains if and how critical security updates are applied to an OS image on the following IaaS/PaaS?
GCE VM
GKE (VMs in a cluster)
VM on which an AI Platform notebook is running
In which cases is the GCP team taking care of these updates and in which cases should we take care of it?
For example, in the case of a GCE VM (Debian OS) the documentation seems to indicate that no patches are applied at all and no reboots are done.
What are people doing to keep GCE or other VMs up to date with critical security updates, if this is not managed by GCP? Will just restarting the VM do the trick? Is there some special parameter to set in the YAML template of the VM? I guess for GKE or AI notebook instances, this is managed by GCP since this is PaaS, right? Are there some third party tools to do that?
As John mentioned, for GCE VM instances you are responsible for all package updates, and it is handled as on any other system:
Linux: sudo apt/yum update/upgrade
Windows: Windows update
There are some internal tools in each GCE image that can help you automatically update your system (a minimal sketch follows this list):
Windows: automatic updates are enabled by default
RedHat/CentOS systems: you can use the yum-cron tool to enable automatic updates
Debian: using the tool unattended-upgrade
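For the Debian and RedHat/CentOS items above, enabling those tools is roughly this (a minimal sketch, using the package names from the stock distro repositories):

    # Debian: install and enable unattended upgrades
    sudo apt-get update && sudo apt-get install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades

    # RedHat/CentOS: install and enable yum-cron
    sudo yum install -y yum-cron
    sudo systemctl enable --now yum-cron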
As for GKE, I think this is done when you upgrade your cluster version: the master is upgraded automatically (since it is Google-managed), but the nodes must be upgraded by you. The node upgrade can be automated; please see the second link below for more information.
Please check the following links for more details on how the Upgrade process works in GKE:
Upgrading your cluster
GKE Versioning and upgrades
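For the node side, something like this should do it (a sketch; the cluster name, node pool and zone are placeholders):

    # let GKE auto-upgrade the nodes in a pool
    gcloud container node-pools update default-pool \
      --cluster my-cluster --zone us-central1-a --enable-autoupgrade

    # or trigger a node pool upgrade manually
    gcloud container clusters upgrade my-cluster \
      --node-pool default-pool --zone us-central1-a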
As per "VM on which is running AI Platform notebook", I don't understand what do you mean by this. Could you provide more details

Kubernetes cluster nodes not created automatically when one is lost in Kubespray

I have successfully deployed a multi-master Kubernetes cluster using the repo https://github.com/kubernetes-sigs/kubespray and everything works fine. But when I stop/terminate a node in the cluster, a new node does not join the cluster. I had previously deployed Kubernetes using kOps, and there new nodes were created automatically when one was deleted. Is this the expected behaviour in Kubespray? Please help.
It is expected behavior because kubespray doesn't create any ASGs, which are AWS-specific resources. One will observe that kubespray only deals with existing machines; they do offer some terraform toys in their repo for provisioning machines, but kubespray itself does not get into that business.
You have a few options available to you:
Post-provision using scale.yml
Provision the new Node using your favorite mechanism
Create an inventory file containing it, and the etcd machines (presumably so kubespray can issue etcd certificates for the new Node)
Invoke the scale.yml playbook (a minimal invocation is sketched below)
You may enjoy AWX in support of that.
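That minimal invocation looks roughly like this (the inventory path and node name are placeholders, and the exact flags may vary between Kubespray versions):

    ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml -b -v --limit=new-worker-01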
Using plain kubeadm join
This is the mechanism I use for my clusters, FWIW
Create a kubeadm join token using kubeadm token create --ttl 0 (or whatever TTL you feel comfortable using)
You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances
Use the cloud-init mechanism to ensure that docker, kubeadm, and kubelet binaries are present on the machine
You are welcome to use an AMI for doing that, too, if you enjoy building AMIs
Then invoke kubeadm join as described here: https://kubernetes.io/docs/setup/independent/high-availability/#install-workers
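Put together, that is roughly the following (a sketch; the endpoint, token and hash are placeholders printed by the first command):

    # on an existing control-plane node: long-lived token plus the ready-made join command
    sudo kubeadm token create --ttl 0 --print-join-command

    # on the new worker (e.g. from cloud-init), using the printed values
    sudo kubeadm join <control-plane-host>:6443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>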
Use a Machine Controller
There are plenty of "machine controller" components that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work. That link was just the first one that came to mind, but there are others, too
Our friends over at Kubedex have an entire page devoted to this question

Can we set up PHP 7.x and Python 3.x in docker together?

I am new to Docker. I am trying to build Docker containers with PHP 7.1 and Python 3.5 which will communicate with a common database server, which is itself another container.
I want to know if this approach is possible and, if yes, how it can be achieved? Or else, what would be a valid approach?
Thanks.
I advise you to use docker-compose with 3 services (Python 3.5, PHP 7.1 and your DB) and link them together. You can find more details on the "links" statement on this page: https://docs.docker.com/compose/compose-file/#links
By linking them, the containers can reach each other by name.
Alternatively, you can create a Docker network and connect your containers to it to isolate them, and reach them by their names. It's more secure and it's good practice.
I hope that will help you.
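Something along these lines, as a rough sketch (the service names, image tags and the choice of MySQL are assumptions; adapt them to your stack):

    cat > docker-compose.yml <<'EOF'
    version: "3"
    services:
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: example
      php:
        image: php:7.1-apache
        depends_on:
          - db
      python:
        image: python:3.5
        command: sleep infinity   # replace with your app's entrypoint
        depends_on:
          - db
    EOF
    docker-compose up -d
    # all three services join the compose default network, so "db" resolves by name
    # from both the php and python containers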
