How does Terraform deal with CoreOS/etcd2 on node failure?

I have been using Terraform to create a CoreOS cluster on DigitalOcean just fine. My question was addressed here, but nearly a year has passed, which feels like ten for fast-paced projects like etcd2 and Terraform. IMHO, if the master fails, Terraform will create another instance with the exact same configuration, but according to the free CoreOS discovery service the cluster will be full and all the slaves will have the wrong IP for the etcd2 master. In the case of a minion failure, the master IP won't be an issue, but I still won't be able to join a full cluster.
How does Terraform deal with this kind of problem? Is there a solution, or am I still bound to a hacky workaround like the one in the link above?
If I run terraform taint node1, is there a way to notify the discovery service of this change?

Terraform doesn't replace configuration management tools like Ansible, Chef and Puppet.
This can be solved with a setup where, say, an Ansible run is triggered to reconfigure the slaves when the master is reprovisioned. The Ansible inventory, in this case, would have been updated by Terraform with the right IP, and the slave Ansible role can pick this up and configure the nodes appropriately.
There are obviously other ways to do this, but it is highly recommended that you couple a proper CM tool with Terraform and propagate such changes.
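As a minimal sketch of that handoff (the droplet attributes and the inventory path below are placeholder assumptions, not anything taken from the question), Terraform can render an inventory file from the master's current address so a follow-up Ansible run picks up the new IP:

```hcl
# Sketch only: resource names, image, region, size, and file paths are placeholders.
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

resource "digitalocean_droplet" "etcd_master" {
  name   = "etcd-master"
  image  = "coreos-stable"
  region = "nyc3"
  size   = "512mb"
}

# Rewrite the Ansible inventory whenever the master is recreated, so a follow-up
# ansible-playbook run against it can push the new address to the slaves.
resource "local_file" "ansible_inventory" {
  filename = "${path.module}/inventory.ini"
  content  = <<-EOT
    [etcd_master]
    ${digitalocean_droplet.etcd_master.ipv4_address}
  EOT
}
```

A wrapper script or CI job would then run the Ansible play against the regenerated inventory after each terraform apply; that is the "propagate such changes" part.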

Related

Is it possible to use aks custom headers with the azurerm_kubernetes_cluster resource?

I'm looking at setting up a cluster of GPU nodes on AKS.
In order to avoid installing the nvidia device daemonset manually, apparently I can register for the GPUDedicatedVHDPreview and send UseGPUDedicatedVHD=true with AKS custom headers (https://learn.microsoft.com/en-us/azure/aks/gpu-cluster).
I understand how to do this on the command line, but I don't understand how to do it with the azurerm_kubernetes_cluster resource in the Terraform azurerm provider.
Is it even possible?
It looks like, at the time of writing, this isn't possible yet, as indicated by this open issue: https://github.com/terraform-providers/terraform-provider-azurerm/issues/6793

Is there any way to run a script inside already existing infrastructure using Terraform?

I want to run a script, using Terraform, inside an existing instance on any cloud, where the instance was pre-created manually. Is there any way to push my script to this instance and run it using Terraform?
If yes, how can I connect to the instance using Terraform, push my script, and run it?
I believe Ansible is a better option to achieve this easily.
Refer to the example given here: https://docs.ansible.com/ansible/latest/modules/script_module.html
Create a .tf file and describe your already existing resource (e.g. a VM) there
Import the existing resource using terraform import
If this is a VM, then add your script to the remote machine using the file provisioner and run it with remote-exec - both steps are described in the Terraform file, so no manual changes are needed (see the sketch after these steps)
Run terraform plan to see whether the expected changes look OK, then terraform apply if the plan was fine
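One wrinkle worth noting: provisioners only run when Terraform creates a resource, so they won't fire on a VM you merely imported. A common workaround is to hang the file/remote-exec provisioners off a null_resource that connects to the existing machine over SSH. Below is a minimal sketch under that assumption; the IP variable, SSH user, key path, and script name are all placeholders:

```hcl
# Sketch only: instance IP, SSH user, key path, and script path are placeholders.
variable "instance_ip" {
  description = "Public IP of the manually created instance"
  type        = string
}

resource "null_resource" "run_script" {
  # Re-run the provisioners whenever the script contents change.
  triggers = {
    script_hash = filemd5("${path.module}/setup.sh")
  }

  connection {
    type        = "ssh"
    host        = var.instance_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }

  # Push the script to the instance...
  provisioner "file" {
    source      = "${path.module}/setup.sh"
    destination = "/tmp/setup.sh"
  }

  # ...then execute it remotely.
  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/setup.sh",
      "/tmp/setup.sh",
    ]
  }
}
```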
Terraform's core mission is to create, update, and destroy long-lived infrastructure objects. It is not generally concerned with the software running in the compute instances it deploys. Instead, it generally expects each object it is deploying to behave as a sort of specialized "appliance", either by being a managed service provided by your cloud vendor or because you've prepared your own machine image outside of Terraform that is designed to launch the relevant workload immediately when the system boots. Terraform then just provides the system with any configuration information required to find and interact with the surrounding infrastructure.
A less-ideal way to work with Terraform is to use its provisioners feature to do late customization of an image just after it's created, but that's considered to be a last resort because Terraform's lifecycle is not designed to include strong support for such a workflow, and it will tend to require a lot more coupling between your main system and its orchestration layer.
Terraform has no mechanism intended for pushing arbitrary files into existing virtual machines. If your virtual machines need ongoing configuration maintenance after they've been created (by Terraform or otherwise) then that's a use-case for traditional configuration management software such as Ansible, Chef, Puppet, etc., rather than for Terraform.

Kubernetes cluster nodes not created automatically when one is lost in Kubespray

I have successfully deployed a multi-master Kubernetes cluster using the repo https://github.com/kubernetes-sigs/kubespray and everything works fine. But when I stop/terminate a node in the cluster, a new node does not join the cluster. I had previously deployed Kubernetes using kops, and there nodes were created automatically when one was deleted. Is this the expected behaviour in Kubespray? Please help.
It is expected behavior because kubespray doesn't create any ASGs, which are AWS-specific resources. One will observe that kubespray only deals with existing machines; they do offer some terraform toys in their repo for provisioning machines, but kubespray itself does not get into that business.
You have a few options available to you:
Post-provision using scale.yml
Provision the new Node using your favorite mechanism
Create an inventory file containing it and the etcd machines (presumably so kubespray can issue etcd certificates for the new Node)
Invoke the scale.yml playbook
You may enjoy AWX in support of that.
Using plain kubeadm join
This is the mechanism I use for my clusters, FWIW
Create a kubeadm join token using kubeadm token create --ttl 0 (or whatever TTL you feel comfortable using)
You'll only need to do this once, or perhaps once per ASG, depending on your security tolerances
Use the cloud-init mechanism to ensure that docker, kubeadm, and kubelet binaries are present on the machine
You are welcome to use an AMI for doing that, too, if you enjoy building AMIs
Then invoke kubeadm join as described here: https://kubernetes.io/docs/setup/independent/high-availability/#install-workers (a Terraform-flavoured sketch of baking this into an ASG appears after the list of options)
Use a Machine Controller
There are plenty of "machine controller" components that aim to use custom controllers inside Kubernetes to manage your node pools declaratively. I don't have experience with them, but I believe they do work. That link was just the first one that came to mind, but there are others, too
Our friends over at Kubedex have an entire page devoted to this question
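For the plain kubeadm join option above, here is a hedged Terraform sketch of baking that cloud-init step into an AWS launch template and auto-scaling group. None of it comes from the original answer: the AMI, subnets, join token, control-plane endpoint, and CA hash are all values you would supply yourself, and the AMI is assumed to already have docker, kubeadm, and kubelet installed:

```hcl
# Sketch only: every variable below is a placeholder you must provide yourself.
variable "worker_ami_id" { type = string }          # AMI with docker/kubeadm/kubelet baked in
variable "subnet_ids" { type = list(string) }
variable "control_plane_endpoint" { type = string } # e.g. internal load balancer DNS name
variable "join_token" { type = string }             # from kubeadm token create
variable "ca_cert_hash" { type = string }           # sha256:... discovery hash

resource "aws_launch_template" "worker" {
  name_prefix   = "k8s-worker-"
  image_id      = var.worker_ami_id
  instance_type = "t3.medium"

  # cloud-init runs kubeadm join on first boot so replacement instances join automatically.
  user_data = base64encode(<<-EOF
    #cloud-config
    runcmd:
      - kubeadm join ${var.control_plane_endpoint}:6443 --token ${var.join_token} --discovery-token-ca-cert-hash ${var.ca_cert_hash}
  EOF
  )
}

resource "aws_autoscaling_group" "workers" {
  desired_capacity    = 3
  min_size            = 3
  max_size            = 6
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.worker.id
    version = "$Latest"
  }
}
```

With something like this in place, replacing a lost node is just the ASG launching a fresh instance that joins on boot, which is the behaviour kops gave you out of the box.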

Set cassandra.yaml settings like seeds through a script

What is the best way to set yaml settings? I am using docker containers and want to automate the process of setting cassandra.yaml settings like seeds, listen_address & rpc_address.
I have seen something like this in other yaml tools: <%= ENV['envsomething'] %>
Thanks in advance
I don't know about the "best" way, but when I set up a scripted cluster of Cassandra servers on a few Vagrant VMs, I used Puppet to set the seeds and so on in cassandra.yaml.
I did write some scripting that used PuppetDB to keep track of the addresses of the hosts, but this wasn't terrifically successful. The trouble was that the node that came up first only had itself in the list of seeds and so tended to form a cluster on its own; then the rest would come up as a separate cluster. So I had to take down the solo node, clear it out, and restart it with the correct config.
If I did it now, I would set the addresses as static IPs, then use them to fill in the templates for the cassandra.yaml files on all the nodes. Then, hopefully, the nodes would come up with the right idea about the other cluster members.
I don't have any experience with Docker, but they do say the way to use Puppet with Docker is to run Puppet on the Docker container before starting it up.
Please note that you need a lot of memory to make this work. I had a machine with 16GB and that was a bit dubious.
Thank you for the information.
I was considering using https://github.com/go-yaml/yaml
But this guy did the trick: https://github.com/abh1nav/docker-cassandra
Thanks
If you're running Cassandra in Docker, use this as an example: https://github.com/tobert/cassandra-docker. You can override the cluster name/seeds when launching, so whatever config management tool you use to deploy your containers could do something similar.
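To tie this back to the Terraform theme of the other threads, here is a hedged sketch of the "override at launch" idea using the Terraform Docker provider. Note that it leans on the official cassandra image's documented environment variables (CASSANDRA_CLUSTER_NAME, CASSANDRA_SEEDS, CASSANDRA_LISTEN_ADDRESS, CASSANDRA_RPC_ADDRESS) rather than the image linked above, and the container name, image tag, and seed IPs are made-up placeholders:

```hcl
# Sketch only: container name, image tag, seed IPs, and cluster name are placeholders.
terraform {
  required_providers {
    docker = {
      source = "kreuzwerker/docker"
    }
  }
}

provider "docker" {}

resource "docker_image" "cassandra" {
  name = "cassandra:4.1"
}

resource "docker_container" "cassandra_node" {
  name  = "cassandra-node-1"
  image = docker_image.cassandra.image_id

  # The official image's entrypoint templates these values into cassandra.yaml at startup.
  env = [
    "CASSANDRA_CLUSTER_NAME=demo-cluster",
    "CASSANDRA_SEEDS=10.0.0.10,10.0.0.11",
    "CASSANDRA_LISTEN_ADDRESS=auto",
    "CASSANDRA_RPC_ADDRESS=0.0.0.0",
  ]
}
```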

Can my Vagrant VMs use manifests with storeconfigs without a Puppet master?

I'm trying to set up a multi-VM Vagrant environment that spins up an OpenStack controller and a few OpenStack compute nodes and then provisions a VM or two on the compute nodes.
Yes, I'm talking about VMs running on VMs. It sounds a little crazy, but this multi-VM Vagrant environment has already been created at https://github.com/lorin/openstack-ansible and it works perfectly, as I describe at http://wiki.greptilian.com/openstack
I can only draw inspiration from that GitHub repo and its corresponding tech talk, however, because it uses Ansible as the Vagrant provisioner while I have a requirement to use Puppet.
I'd like to use the official Puppet Labs module for OpenStack at https://github.com/puppetlabs/puppetlabs-openstack but it uses storeconfigs, so I get errors like this because my Vagrantfile doesn't include a VM to serve as a Puppet master:
warning: You cannot collect exported resources without storeconfigs being set; the collection will be ignored on line 142 in file /tmp/vagrant-puppet/modules-0/nova/manifests/init.pp
Resource type anchor doesn't exist at /tmp/vagrant-puppet/modules-0/nova/manifests/db/mysql.pp:18 on node controller.example.com.
I suppose I could tweak my Vagrantfile to spin up a Puppet master along with the OpenStack hosts, but I'm not sure how I'd do that and it seems to introduce extra complexity into the Vagrant environment.
I'm wondering if I can do this with "masterless" Puppet instead. A post at http://semicomplete.com/presentations/puppet-at-loggly/puppet-at-loggly.pdf.html suggests it's possible, saying, "puppet --environment prerun manifests/prerun.pp ... makes storeconfigs work ... puppet --storeconfigs manifests/site.pp ... This is the main puppet run", but I'm confused about the implementation details.
Can anyone point me to a Vagrant repo that runs "masterless" Puppet but uses storeconfigs?
You'll need to configure storeconfigs with a DB that all Vagrant VMs can reach. Loggly used Amazon RDS, but you can use other DBs, as the Puppet docs show. Assuming you have a DB that all VMs can reach, you run Puppet with the storeconfigs option, and you have the correct DB connection info configured in Puppet, you should be good.
