I am currently using Terraform to create a k8s cluster, which is working perfectly fine. Once the nodes are provisioned, I want to run a few bash commands on any one of the nodes. So far, null_resource seems like an option, since this is a cluster and we don't know the node names/IPs in advance. However, I am unable to determine what the value of the connection block should be, since azurerm_kubernetes_cluster does not export the IP address of the load balancer or the VM names. The question mark below needs the correct value:
resource "null_resource" "cluster" {
triggers = { "${join(",", azurerm_kubernetes_cluster.k8s.id)}" }
connection = { type = ssh
user = <user>
password = <password>
host = <?>
host_key = <pub_key>
}
}
Any help is appreciated!
AKS does not expose its nodes to the Internet; you can only connect to the nodes through the cluster's master. If you want to run a few bash commands on the nodes, you can SSH in by using a pod as a helper to connect to the nodes; see the steps about SSH node access.
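A minimal sketch of that helper-pod approach (the image, the key path, and the azureuser login are assumptions; use your node pool's admin user and copy or mount your SSH private key into the pod):

kubectl run ssh-helper -it --rm --image=debian -- bash
# inside the helper pod: install an SSH client, then connect to a node's internal IP
apt-get update && apt-get install -y openssh-client
ssh -i /id_rsa azureuser@<node-internal-ip>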
Also, you can add NAT rules for the nodes in the Load Balancer, and then SSH to the nodes through the Load Balancer's public IP. But that is not secure, so I do not suggest this way.
I would recommend just running a DaemonSet that performs the bash commands on the nodes, since any scale or upgrade operation will remove nodes, or bring up new ones that do not have the config you are applying. A sketch follows.
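A minimal sketch of the DaemonSet approach, assuming privileged pods are allowed in your cluster (the name, image, and the touch command are illustrative; nsenter into PID 1's namespaces runs the command on the node itself):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
spec:
  selector:
    matchLabels:
      app: node-setup
  template:
    metadata:
      labels:
        app: node-setup
    spec:
      hostPID: true
      containers:
      - name: node-setup
        image: alpine
        securityContext:
          privileged: true
        # enter the host's namespaces and run the commands on the node itself
        command: ["nsenter", "-t", "1", "-m", "-u", "-i", "-n", "--",
                  "sh", "-c", "touch /tmp/node-configured && sleep 86400"]
EOF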
There was no straightforward solution for this one. A static IP was not the right way to do it, so I ended up writing a wrapper around Terraform. I did not want to run my init scripts on every node that comes up, only on one of the nodes. So essentially, the wrapper tells Terraform to first deploy only one node, which executes cloud-init; after that, it calls Terraform again to scale up and bring the rest of the desired number of instances online. In the cloud-init script, I check kubectl get no, and if I see more than one node, I simply skip the cloud-init commands (sketched below).
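A minimal sketch of that guard, assuming kubectl is already configured on the node when cloud-init runs:

# run the one-time init only if this is the first (and only) node so far
if [ "$(kubectl get no --no-headers | wc -l)" -gt 1 ]; then
  echo "cluster already has nodes; skipping one-time init"
  exit 0
fi
# ... one-time init commands go here ...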
Background Info
I have a Node.js app running on a Managed Instance Group of VM's on GCP's Compute Engine.
New VMs are generated from a template with a startup script. The script does the usual stuff: installs Node.js and curl, git clones the app code, etc.
This application is set to auto-scale, which is why I need configuration to happen programmatically, namely setting the host and port in the .env file for the Node.js project.
How I have tried to solve the problem
The only way I can think of to do this programmatically in my startup.sh script is by running the command: gcloud compute instances list
This returns something like this:
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
VM-1 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
VM-2 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
VM-3 us-central1-a n1-standard-1 XX.XXX.X.X XX.XXX.XXX.XXX RUNNING
Then from that, I would want to pull out the EXTERNAL_IP of the machine the script is currently running on and somehow write it to an environment variable / write it to the .env file for my project.
The .env file only has this information in it:
{
  "config" : {
    "host" : "VM_EXTERNAL_IP",
    "port" : 3019
  }
}
This method, I think, would require some sort of regex to grab the correct IP out of the command's output, which would then be stored in an environment variable and written to .env.
This seems like unnecessary work, as I am surely not the first person to want to do something like this. Is there a standard way of doing it? I'm not a Node.js expert by any means, and even less of a GCP expert. Perhaps there is a mechanism on GCP to deal with this, some metadata API that can easily grab the IP to use in code? Maybe on the Node.js side there is a better way of configuring the host? Any recommendations are appreciated.
There are many methods to determine the external IP address. Note that your instance itself does not have an external public IP address attached: Google implements 1-to-1 NAT, which maps the public IP address to the instance's private IP address.
The CLI supports the command line option --format json, and you can parse the output with tools such as jq:
gcloud compute instances list --format json | jq -r ".[].networkInterfaces[0].accessConfigs[0].natIP"
Get your public IP address from an external source such as https://checkip.amazonaws.com/ (which may or may not report the same address as your instance's):
curl https://checkip.amazonaws.com/
Use the CLI with options to get just the external address:
gcloud compute instances describe [INSTANCE_NAME] --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
Read the metadata server from inside the instance:
curl http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip -H "Metadata-Flavor: Google"
Once you have determined the method to get your external IP address, you can use tools such as awk to replace the value in your file.
https://www.grymoire.com/Unix/Awk.html
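For example, a hedged sketch that fetches the external IP from the metadata server and substitutes the VM_EXTERNAL_IP placeholder in the .env file (the file path is an assumption; sed is shown here, awk works just as well):

# query the metadata server for this instance's external IP
EXTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
  http://metadata/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip)
# replace the placeholder in the project's .env file (path assumed)
sed -i "s/VM_EXTERNAL_IP/${EXTERNAL_IP}/" /opt/app/.env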
I have a kubernetes cluster running on GCE.
I created a setup in which I have 2 pods, glusterfs-server-1 and glusterfs-server-2, that are my gluster servers.
The 2 glusterfsd daemons communicate correctly, and I am able to create replicated volumes, write files to them, and see the files correctly replicated on both pods.
I also have 1 service called glusterfs-server that automatically balances the traffic between my 2 glusterfs pods.
From inside another pod, I can issue mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume and everything works perfectly.
Now, what I really want is to be able to use the glusterfs volume type inside my .yaml files when creating a container:
...truncated...
spec:
  volumes:
    - name: myvolume
      glusterfs:
        endpoints: glusterfs-server
        path: myvolume
...truncated...
Unfortunately, this doesn't work, and I was able to find out why:
When connecting directly to a Kubernetes node, issuing mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume does not work, because from my node's perspective glusterfs-server does not resolve to any IP address (that is, getent hosts glusterfs-server returns nothing).
Also, due to how glusterfs works, even using the service's IP directly will fail, as glusterfs will still eventually try to resolve the name glusterfs-server (and fail).
Now, just for fun and to validate that this is the issue, I edited my node's resolv.conf (adding my kube-dns IP address and search domains) so that it would correctly resolve my pods' and services' IP addresses. I was then finally able to successfully issue mount -t glusterfs glusterfs-server:/myvolume /mnt/myvolume on the node, and also to create a pod using a glusterfs volume (using the PodSpec above).
Now, I'm fairly certain that modifying my node's resolv.conf is a terrible idea: Kubernetes has the notion of namespaces, so if 2 services in 2 different namespaces share the same name (say, glusterfs-service), getent hosts glusterfs-service would resolve to 2 different IPs living in 2 different namespaces.
So my question is:
What can I do for my node to be able to resolve my pods/services IP addresses?
You can modify resolv.conf and use fully qualified service names to avoid collisions. They usually look like this: service_name.default.svc.cluster.local or service_name.kube-system.svc.cluster.local, or whatever the namespace is named.
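A minimal sketch of that approach, assuming a kube-dns service IP of 10.0.0.10 (check yours with kubectl get svc -n kube-system kube-dns):

# point the node at the cluster DNS, then mount using the fully qualified name
echo "nameserver 10.0.0.10" >> /etc/resolv.conf
mount -t glusterfs glusterfs-server.default.svc.cluster.local:/myvolume /mnt/myvolume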
I have set up a cloud test bed using OpenStack. I used the 3 node architecture.
The IPs assigned to the nodes are given below:
Compute Node : 192.168.9.19/24
Network Node : 192.168.9.10/24
Controller Node : 192.168.9.2/24
The link to the created instance looks like this:
http://controller:6080/vnc_auto.html?token=2af0b9d8-0f83-42b9-ba64-e784227c119b&title=hadoop14%28f53c0d89-9f08-4900-8f95-abfbcfae8165%29
At first this instance was accessible only when I substituted controller:8090 with 192.168.9.2:8090. I solved this by setting up a local DNS server that resolves controller.local to 192.168.9.2. Now, instead of substituting the IP, it works when I substitute controller.local.
Is there any other way to do it? Also, how can I access this instance from a subnet other than 192.168.9.0/24, without specifying the IP?
If I understood your question correctly, yes, there is another way; you don't need to set up a DNS server!
On the machine that you would like to access the link, perform the operations below:
Open /etc/hosts file with a text editor.
Add this entry: 192.168.9.2 controller
Save the file, and that's it.
I suggest you do this on all your nodes so that you can use these hostnames in your OpenStack configuration files instead of their IPs. This will also save you from tons of modifications if you have to make a change to the subnet IPs.
So, for example, the /etc/hosts files on your nodes should look like these:
#controller
192.168.9.2 controller
#network
192.168.9.10 network
#compute
192.168.9.19 compute
I'm using node.js and AWS with autoscaling. A javascript SDK solution is preferable but at this point I'll take anything.
I'm hoping this is super easy to do and that I'm just an idiot, but how does one go about getting the public IP addresses of instances that are undergoing a scaling event?
I'm trying to keep a list of active public IPs within a specific application tier so I can circumvent the ELB for websocket connections, but I can't figure out how to programmatically get the public IP addresses of instances that have just been added/removed.
For me, with a Sensu client config, I add a basic client config to the base AMI for my instances, with this_hostname, this_ip, and this_role placeholders. Then I just add some simple seds in my CloudFormation user_data script that curl the AWS metadata endpoint for the public IP as the instance boots. Each CloudFormation script sets/exports APP_TYPE (downcased) in the same user_data script prior to my sed lines, so I reuse that as the role for Sensu:
"sed -i \"s/this_hostname/$(curl http://169.254.169.254/latest/meta-data/public-ipv4)/\" /etc/sensu/conf.d/client.json\n",
"sed -i \"s/this_ip/$(hostname -i)/\" /etc/sensu/conf.d/client.json\n",
"sed -i \"s/this_role/${APP_TYPE,,}/\" /etc/sensu/conf.d/client.json\n",
You can also use the internal IP for both, or the external IP for both hostname and IP; examples of each appear above.
For shutdown, I use a simple /etc/rc0.d/S01Instance_Termination script, symbolically linked from /etc/init.d/instance_termination, that runs a similar curl to remove the host from monitoring on instance shutdown:
http://pastebin.com/6He1mQTH
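If you just need to enumerate the current public IPs of a group programmatically, here is a hedged sketch using the AWS CLI (the equivalent describeAutoScalingGroups / describeInstances calls exist in the JavaScript SDK; my-asg is an assumed group name):

# collect the instance IDs currently in the autoscaling group
INSTANCE_IDS=$(aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[0].Instances[].InstanceId' --output text)
# look up the public IP of each of those instances
aws ec2 describe-instances --instance-ids $INSTANCE_IDS \
  --query 'Reservations[].Instances[].PublicIpAddress' --output text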
I'm using Memcached on each of my EC2 web server instances. I am not sure how to configure the various hostnames for the memcache nodes at the server level.
Consider the following example:
<?php
$mc = new Memcached();
$mc->addServer('node1', 11211);
$mc->addServer('node2', 11211);
$mc->addServer('node3', 11211);
How are node1, node2, node3 configured?
I've read about a few setups that configure the instance with a hostname and update /etc/hosts with these entries. However, I'm not familiar enough with configuring such things.
I'm looking for a solution that scales - handles adding and removing instances - and is automatic.
The difficulty with this is keeping an updated list of hosts within your application. When hosts can be added and removed, keeping that list up to date may be a challenge. You may be able to use some sort of proxy, which would help by giving your application a constant endpoint.
If you can't use a proxy, I have a couple ideas.
If the list of hosts is static, assign an Elastic IP to each memcached host. Within an EC2 region, this will resolve to the local IP address of the host it's associated with. With this idea, you have a constant list of hosts that your application can use.
If you are going to add/remove hosts on a regular basis, you need to be able to dynamically update the list of hosts your application will use. You can query the EC2 API for instances with a certain tag, then get the IP addresses for all of those instances (a sketch follows). Cache the list in memory or on disk and load it with your application. If you run this every minute, any host changes should propagate within 1 minute, unless the EC2 API is slow to update.
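A hedged sketch of that tag query using the AWS CLI (the same query is available in the SDKs; the role=memcached tag and the output path are assumptions):

# write the private IPs of running memcached instances to a file the app can load
aws ec2 describe-instances \
  --filters "Name=tag:role,Values=memcached" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].PrivateIpAddress' --output text > /etc/memcached-hosts
# run this from cron every minute and have the application reload the file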