Well, for the last 2 days I battled this documentation:
https://learn.microsoft.com/en-au/azure/aks/static-ip
and
https://learn.microsoft.com/en-au/azure/aks/ingress-own-tls
First of all, I made sure my AKS cluster was upgraded to Kubernetes 1.11.5, so there is no question about supporting a static IP in a different resource group.
Overall, I could not get the static IP working. With a dynamic IP everything sounds fine, but I cannot add an A record for a dynamic IP.
I managed to deploy everything successfully, but any curl against the IP does not work. I did run exec -ti into the pods, and locally everything is fine.
Could someone please point me to a GitHub config or article that has this configuration running? As a disclaimer, I know Azure very well, so the service principal assignments are done correctly, etc. However, I am new to Kubernetes, only a few months in.
Thanks in advance for any suggestion.
I can share logs if needed, but I believe I checked everything from DNS to ingress routes. I am worried that this doc is not good and I am just losing my time.
Answering my own question after quite a journey, for when I get older and forget what I've done, and maybe my nephew will save some hours someday.
First, this is important:
In the values provided to the nginx-ingress chart template, there are two settings that are important:
service.beta.kubernetes.io/azure-load-balancer-resource-group: "your IP's resource group"
externalTrafficPolicy: "Local"
Here are all the values documented: https://github.com/helm/charts/blob/master/stable/nginx-ingress/values.yaml
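For illustration, here is roughly what my values override looked like for the stable/nginx-ingress chart (the resource group name is a placeholder):

controller:
  service:
    externalTrafficPolicy: "Local"
    annotations:
      # tells the Azure cloud provider which resource group holds the static IP
      service.beta.kubernetes.io/azure-load-balancer-resource-group: "my-static-ip-rg"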
The chart can be deployed next to your service's namespace; it does not have to be in kube-system (with my current knowledge I don't see a reason to have it in the system namespace).
Second, this could be misleading:
There is a delay of ~30+ seconds (in my case) from the moment the IP appears in kubectl get services --watch until the moment curl -i IP is able to answer the call. So, if you have automation or health probes, make sure you add 1-2 minutes of wait time. Or maybe take better nodes, bare-metal machines.
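For what it's worth, here is a rough sketch of the kind of wait I added to my own scripts (the service name, interval and timeout are just examples):

INGRESS_IP=$(kubectl get svc nginx-ingress-controller -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# poll for up to ~2 minutes; curl exits 0 for any HTTP response, even a 404 from the default backend
for i in $(seq 1 24); do
  if curl -s -o /dev/null -m 5 "http://$INGRESS_IP/"; then
    echo "ingress is answering"
    break
  fi
  sleep 5
done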
Look at GCE and DO for the same setup, as it might help:
https://cloud.google.com/community/tutorials/nginx-ingress-gke
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
The folks at DO are good writers as well.
Good luck!
Based on your comments, it seems that you are trying to override the externalIPs but use the default value of the helm chart for controller.service.type which is LoadBalancer. What you might want to do is to keep controller.service.type to LoadBalancer and set controller.service.loadBalancerIP with your static IP instead of overriding externalIPs.
Here is some documentation from Microsoft.
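For example, something along these lines should do it (helm 2 syntax for the stable chart; the release name, namespace and IP are placeholders):

helm install stable/nginx-ingress --name nginx-ingress --namespace ingress \
  --set controller.service.type=LoadBalancer \
  --set controller.service.loadBalancerIP="YOUR_STATIC_IP"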
I have an AKS cluster running on which I enabled Container Insights.
The Log Analytics workspace has a decent amount of logs in there.
Now I do have my applications running in a separate namespace, plus one namespace which has some Grafana containers running (which I also don't want in my captured logs).
So, I searched on how I could reduce the amount of captured logs and came across this Microsoft docs article.
I deployed the template ConfigMap to my cluster and for [log_collection_settings.stdout] and [log_collection_settings.stderr] I excluded the namespaces which I don't want to capture.
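For reference, the relevant part of the ConfigMap I applied looks roughly like this (following the Microsoft template; the excluded namespace names are my own):

apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  log-data-collection-settings: |-
    [log_collection_settings]
       [log_collection_settings.stdout]
          enabled = true
          # namespaces whose stdout logs should not be collected
          exclude_namespaces = ["kube-system", "grafana-namespace"]
       [log_collection_settings.stderr]
          enabled = true
          exclude_namespaces = ["kube-system", "grafana-namespace"]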
When calling kubectl edit configmap container-azm-ms-agentconfig -n kube-system I get the following:
Which means that my config is actually in there.
Now when I open a query window in Log Analytics workspace and execute the following query:
KubePodInventory
| where Namespace == "kube-system"
I get plenty of results with a TimeGenerated column containing values like 5 minutes ago, while I set up the ConfigMap a week ago.
In the logs of one of the omsagent-... pods I see entries like the following:
Both stdout & stderr log collection are turned off for namespaces: '*.csv2,*_kube-system_*.log,*_grafana-namespace_*.log'
****************End Config Processing********************
****************Start Config Processing********************
config::configmap container-azm-ms-agentconfig for agent settings mounted, parsing values
config::Successfully parsed mounted config map
While looking around on Stack Overflow, I found the following answers, which made me believe that what I did is the right thing:
https://stackoverflow.com/a/63838009
https://stackoverflow.com/a/63058387
https://stackoverflow.com/a/72288551
So, not sure what I am doing wrong here. Anyone have an idea?
Since I hate it myself when people don't post an answer even though they already have one, here it is (although it's not the answer you want, at least for now).
I posted the issue on GitHub where the repository is maintained for Container Insights.
The issue can be seen here on GitHub.
If you don't want to click the link, here is the answer from Microsoft:
We are working on adding support for namespace filtering for inventory and perf metrics tables and will update you as soon this feature available.
So, currently we are not able to exclude anything more than the ContainerLog table data with this ConfigMap.
I have a Scale Set I provisioned in Azure through Terraform.
(A scale set is an implicit availability set with 5 fault domains and 5 update domains.)
I need to find out which Fault Domain each instance is in, so that I can configure my application cluster based on this, for improved redundancy.
So far, I have found only a single post that remotely addresses this.
More context:
I can switch to regular VMs rather than a scale set if there is absolutely no other way.
I use ansible's dynamic inventory (azure_rm.py) which I have already customised to work with Scale Sets. If the solution can leverage this, extra kudos :)
My application allows me to define topology (datacentre, rack, etc.) and I am deploying it in a single Azure datacentre. Maybe I have missed a different solution?
Many many thanks,
–Jeff
I have solved this by using the 169.254.169.254 'virtual IP' that allows a VM in the cloud to read its own metadata.
Specifically, I am running:
curl -H Metadata:true --silent "http://169.254.169.254/metadata/instance/compute/platformFaultDomain?api-version=2017-03-01&format=text"
in an Ansible task, and then using Ansible's local facts to make this available as an Ansible variable on the host.
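In case it helps, a simplified version of those tasks could look like this (I actually persist it via local facts, but set_fact shows the idea; task and variable names are my own):

- name: Query the instance metadata service for the platform fault domain
  uri:
    url: "http://169.254.169.254/metadata/instance/compute/platformFaultDomain?api-version=2017-03-01&format=text"
    headers:
      Metadata: "true"
    return_content: yes
  register: fault_domain_result

- name: Make the fault domain available as a host variable
  set_fact:
    platform_fault_domain: "{{ fault_domain_result.content }}"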
I would like to add additional name servers to kube-dns in the kube-system namespace (solution provided here: https://stackoverflow.com/a/34085756/2461761), but I want to do this in an automated manner.
So I know I can create my own DNS addon via https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns and launch it in the system namespace. However, I am trying to provide a simple script for developers to spin up their own clusters with DNS resolution built in, and I don't want them to have to deal with the system namespace.
Is there a way to modify/set the SKYDNS_NAMESERVERS environment variable without having to making a copy of the replication controller?
Or even set it via a command and re-launch the pods of the kube-dns deployment?
Thank you in advance.
I still think that "adding SKYDNS_NAMESERVERS to the manifest file" solution is a good choice.
Since the developers still need to spin up the cluster anyway, it would be better to set up the upstream DNS servers ahead of time through the manifest file instead of changing them on the fly. Or is there any requirement that needs this to be done after the cluster is up?
If this has to be done while everything is running, one way to do so is to modify the manifest file located on the master node. For the current Kubernetes version (1.4), you will also need to change the ReplicationController name to a new one, and the Addon Manager will then update the resources for you. But notice that there will be a kube-dns outage (probably seconds) in between, because the current Addon Manager executes the update in a delete-then-create manner.
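For illustration only, the fragment to add would look something like this, assuming your addon version still ships the skydns container that reads SKYDNS_NAMESERVERS (as in the linked answer); the container name and DNS servers are examples:

spec:
  containers:
    - name: skydns
      env:
        # comma-separated list of additional upstream name servers
        - name: SKYDNS_NAMESERVERS
          value: "8.8.8.8:53,8.8.4.4:53"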
I have been using Terraform to create a CoreOS cluster on DigitalOcean just fine. My question was addressed here, but nearly a year has passed,
which seems like 10 for fast-paced projects like etcd2 and Terraform. IMHO, if the master fails, Terraform will create another instance with the exact same configuration, but according to the free CoreOS discovery service the cluster will be full and all the slaves will have the wrong IP to connect to the etcd2 master. In the case of a minion failure, the master IP won't be an issue, but I still won't be able to join a full cluster.
How does Terraform deal with this kind of problem? Is there a solution, or am I still bound to a hacky solution like the link above?
If I run terraform taint node1, is there a way to notify the discovery service of this change?
Terraform doesn't replace configuration management tools like Ansible, Chef and Puppet.
This can be solved using a setup where, say, an Ansible run is triggered to reconfigure the slaves when the master is reprovisioned. The Ansible inventory, in this case, would have been updated by Terraform with the right IP, and the slave Ansible role can pick this up and reconfigure appropriately.
There are obviously other ways to do this, but it is highly recommended that you couple a proper CM tool with Terraform and propagate such changes.
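As a rough sketch of such a flow (the resource address, inventory path and playbook name are hypothetical):

# recreate the master and let Terraform update the generated inventory
terraform taint digitalocean_droplet.master
terraform apply
# re-run the slave role so the slaves pick up the new master IP from the inventory
ansible-playbook -i inventory/terraform.ini etcd-slaves.yml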
I know this question has been asked before, like this one. But those answers are all very old, the method is very complex, and I tried but cannot really get it to work. So I wonder if the new Azure SDK offers something easy; I guess it should be in the Microsoft.WindowsAzure.ServiceRuntime namespace.
I need this because I use a worker role that mounts a CloudDrive, keeps checking it and shares it to the network, then builds a Lucene.NET index on it.
This deployment works very well.
Since only one instance can mount the CloudDrive, when I do a VIP swap I have to stop (or delete) the staging deployment so that the new production deployment can successfully mount the drive. This causes the full-text search to stop for a while (around 1-2 minutes if everything goes well and I click the button fast enough). So I wonder if I can detect the current deployment slot, and only mount when in production and unmount when in staging.
I figured out one way to solve this, please see my answer here:
https://stackoverflow.com/a/18138700/1424115
Here is a simpler solution.
What I did was an IP check. The staging environment gets a different external IP than the production environment. The production IP address is the IP of (yourapp).cloudapp.net. So the only thing you need to do is check whether these two match.
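The real check runs inside the worker role, but as a rough sketch of the idea (treat it as pseudocode; the hostname and the IP-lookup service are placeholders):

# resolve the production VIP and compare it with this deployment's own public IP
PRODUCTION_IP=$(dig +short yourapp.cloudapp.net | head -n1)
MY_PUBLIC_IP=$(curl -s https://api.ipify.org)
if [ "$PRODUCTION_IP" = "$MY_PUBLIC_IP" ]; then
  echo "running in production -> safe to mount the CloudDrive"
else
  echo "running in staging -> leave the drive unmounted"
fi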