I have an AKS cluster configured with an ingress-nginx internal ingress controller of class nginx-internal. This creates an internal load balancer (ILB) with a private IP. We then create a few ingress objects using the ingress class nginx-internal, and these ingress objects get assigned the ILB's private IP as their external IP. So far so good.
Now, we upgraded the internal ingress-nginx controller from v0.49.0 to v1.2.0 (we had to upgrade for Kubernetes v1.22.6), and this apparently caused the ILB's IP address to change. To our surprise, the ingress objects still have the old IP assigned, not the new one.
I would have thought the ingress controller would figure this out and update the IP addresses on all the ingress objects it tracks.
Any help/explanations on what may have gone wrong?
On the new version, the recommended way to deploy ingress-nginx is via Helm; reinstalling it this way ensures the new IP is published to the ingress objects.
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace $NAMESPACE \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
Check the Azure docs: https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli
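Since the original setup uses an internal load balancer, you would also pass the internal-LB annotation on the controller service, e.g. via a values file. A minimal sketch based on the linked docs (the ingress class name and file name are assumptions; adjust them to match your existing nginx-internal setup):

# internal-ingress.yaml
controller:
  ingressClassResource:
    name: nginx-internal
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace $NAMESPACE \
  -f internal-ingress.yaml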
Related
I installed the nginx ingress controller on an AKS cluster, but I am not able to access the ingress endpoints exposed by our app. Per my initial analysis, the ingress endpoints have been assigned the external IP of one of the nodes, whereas the ingress controller service has a different IP.
What am I doing wrong?
$kubectl get pods --all-namespaces | grep ingress
kube-system ingress-nginx-58ftggg-4xc56 1/1 Running
$kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)
kubernetes   ClusterIP   172.16.0.1   <none>        443/TCP
$kubectl get ingress
vault-ingress-documentation 10.145.13.456
$kubectl describe ingress vault-ingress-documentation
Name:             vault-ingress-documentation
Namespace:        corebanking
Address:          10.145.13.456
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
$kubectl get services -n kube-system | grep ingress
ingress-nginx   LoadBalancer   172.16.160.33   10.145.13.456   80:30389/TCP,443:31812/TCP
I tried to reproduce this in my environment and got the results below.
I created a deployment, which creates a replica set and pods on the nodes; we can see the pods are up and running:
kubectl create -f deployment.yaml
I then created the service, which is reachable inside the cluster via its ClusterIP:
kubectl create -f service.yaml
To expose the application externally, I created ingress rules in an ingress.yaml file.
I added the ingress class annotation to the ingress.yaml file like below:
annotations:
  kubernetes.io/ingress.class: nginx
At this point the ingress rule is created with an empty ADDRESS field.
When I first tried to access the application, I could not. To access it, add the load balancer IP together with the domain name to /etc/hosts.
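For example, a hypothetical /etc/hosts entry (the hostname is a placeholder):

10.145.13.456  vault.example.com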
Now I am able to reach the application via the service IP, but I am still not able to expose it on the external IP.
To expose it on the external IP, I added the rewrite-target annotation like below:
annotations:
  kubernetes.io/ingress.class: nginx
  nginx.ingress.kubernetes.io/rewrite-target: /
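For reference, a minimal sketch of a complete manifest using these annotations (the host, service name, and port are hypothetical placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: vault.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-service
            port:
              number: 80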
After that, I applied the changes to the ingress_rules.yaml file:
kubectl replace --force -f ingress_rules.yaml
or
kubectl create -f ingress_rules.yaml
Now I can see an address on the ingress, and using it I am able to access the application.
I have two ingress controllers deployed in two different namespaces in an Azure Kubernetes cluster:
NAMESPACE   NAME                                 TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-A   ingress-nginx-controller             LoadBalancer   10.0.131.22    20.xx.xx.xx   80:31788/TCP,443:30605/TCP   89s
ingress-A   ingress-nginx-controller-admission   ClusterIP      10.0.171.187   <none>        443/TCP                      89s
ingress-B   ingress-nginx-controller             LoadBalancer   10.0.61.156    52.xx.xx.xx   80:31966/TCP,443:30125/TCP   18m
ingress-B   ingress-nginx-controller-admission   ClusterIP      10.0.97.78     <none>        443/TCP                      18m
I already have two static IPs assigned to my domain that I would like to use instead of the ones the Azure Kubernetes cluster generated.
I tried to figure out how I can change these IPs to mine, but I couldn't find a way.
I have tried this:
kubectl patch svc ingress-nginx-controller -n ingress-nginx-iot -p '{"status": {"loadBalancer": {"ingress":{"ip":"my new ip address"}}}}'
I got this error:
The request is invalid: patch: Invalid value: "map[status:map[loadBalancer:map[ingress:map[ip:20.76.109.236]]]]": cannot restore slice from map
I also tried to modify them from the Azure portal, but that didn't work either.
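For context (not an answer from this thread): the status.loadBalancer field is managed by the cloud controller, so it cannot be patched by hand. The usual approach on AKS is to request the static IP in the service spec instead; a sketch, assuming the static IP lives in a resource group the cluster identity can read:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    # only needed if the IP is outside the node resource group
    service.beta.kubernetes.io/azure-load-balancer-resource-group: my-ip-resource-group
spec:
  type: LoadBalancer
  loadBalancerIP: 20.xx.xx.xx   # your pre-allocated static public IP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
  - name: https
    port: 443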
I have a Spark cluster running on an in-house Kubernetes cluster (managed with Rancher). Our company and the configuration of the cluster don't allow the services to be accessed via:
spark://SERVICE_NAME.namespace.svc.domain.....
We created the cluster using Big Data Europe's YAML file, with some obvious changes such as resources.
Link to their github:
https://github.com/big-data-europe/docker-spark#kubernetes-deployment
The best thing about this approach is that we don't have to set anything up manually (the deployments, services, etc.); we just apply the YAML file and everything is built for us in seconds.
YAML file:
https://raw.githubusercontent.com/big-data-europe/docker-spark/master/k8s-spark-cluster.yaml
To access the Spark UI, I simply create an ingress object, and we are able to access it from outside. Cool!
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spark-master
  labels:
    app: spark-master
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/hsts: "false"
spec:
  rules:
  - host: RANDOM_NAME.NAMESPACE.svc.k8s.CLUSTER.DOMAIN.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spark-master
          servicePort: 8080
What I am trying to do is access the Spark cluster created by BDE's YAML file from the CLI on my workstation.
Because the service way (the proper way) isn't supported for us yet, I am trying the port-forwarding method.
Some insight:
The Spark master is on port 7077
The Spark UI is on port 8080 (accessible through the ingress object)
The Spark REST server is on port 6066
kubectl -n <NAMESPACE> port-forward pods/spark-master-64bbbd7877-6vt6w 12345:7077
My kubectl is configured to connect to the cluster (thank you, Rancher, for the ready-to-use config file).
But when I try to submit a job to the cluster via:
spark-submit --class org.apache.spark.examples.SparkPi --master spark://localhost:12345 --deploy-mode cluster \
  --conf spark.kubernetes.namespace=NAMESPACE \
  --conf spark.kubernetes.authenticate.submission.oauthToken=MY_TOKEN \
  --conf spark.kubernetes.file.upload.path=/temp C:\opt\spark\spark-3.0.0-bin-hadoop2.7\examples\jars\spark-examples_2.12-3.0.0.jar 1000
I get this error:
Forwarding from 127.0.0.1:12345 -> 7077
Forwarding from [::1]:12345 -> 7077
Handling connection for 12345
E1014 13:17:45.039840 13148 portforward.go:400] an error occurred forwarding 12345 -> 7077: error forwarding port 7077 to pod f83c6b40d5af66589976bbaf69537febf79ee317288a42eee31cb307b03a954d, uid : exit status 1: 2020/10/14 11:17:45 socat[5658] E connect(5, AF=2 127.0.0.1:7077, 16): Connection refused
So, in short, the submit command from my CLI doesn't connect to the deployed Spark cluster.
I can run spark-submit using kubectl as specified in BDE's documentation, but our requirement is to connect via the CLI for various reasons.
Help in this regard would be highly appreciated.
My token and other settings are correct; in k8s mode I am able to reach the cluster (via its URL) without problems.
EDIT:
I assume that the spark-master process creates a socket that explicitly does NOT bind to 0.0.0.0 but only to its primary address. Since port-forwarding connects to a loopback address within the pod, connections are refused.
And I need to reconfigure the spark-master process to explicitly bind to 0.0.0.0.
Does someone know a way to do that if that is the issue?
Thank you for your question, and especially for your edit. It helped me figure out the problem and solve it.
I am using the Bitnami Helm chart to install Spark on my cluster. The problem was that the Spark daemon is started with the --host parameter prefilled with the output of hostname -f, and therefore does not listen on localhost.
I solved the problem for the Bitnami chart by setting the environment variable SPARK_MASTER_HOST to 0.0.0.0 on the master pod.
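A minimal sketch of what that looks like in the master pod spec (how you inject it depends on your chart; the container name is a placeholder):

containers:
- name: spark-master
  env:
  - name: SPARK_MASTER_HOST
    value: "0.0.0.0"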
EDIT:
This solution still has the problem that the backwards connection when starting jobs does not work, since the master assumes that the origin of the request is 127.0.0.1 :(
Probably a VPN tunnel is needed in order to solve this.
Google Compute Engine newbie here.
I'm following along with the bookshelf tutorial: https://cloud.google.com/nodejs/tutorials/bookshelf-on-compute-engine
But I've run into a problem: when I try to view my application at http://[YOUR_INSTANCE_IP]:8080 using my external IP, nothing shows up. I've tried running through the tutorial again and again, but the same problem persists.
EDIT:
My firewall rules: http://i.imgur.com/gHyvtie.png
My VM instance:
http://i.imgur.com/mDkkFRW.png
VM instance showing the correct networking tags:
http://i.imgur.com/NRICIGl.png
Going to http://35.189.73.115:8080/ in my web browser still fails to show anything; it says "This page isn't working".
TL;DR - You're most likely missing firewall rules to allow incoming traffic to port 8080 on your instances.
Default Firewall rules
By default, the Google Compute Engine firewall blocks all ingress traffic (i.e., incoming network traffic) to your virtual machines. If your VM is created on the default network (which is usually the case), a few ports such as 22 (SSH) and 3389 (RDP) are allowed.
The default firewall rules are described here.
Opening ports for ingress
The ingress firewall rules are described in detail here.
The recommended approach is to create a firewall rule that allows incoming traffic on port 8080 to VMs carrying a specific tag you choose. You can then attach this tag only to the VMs where you want to allow ingress on port 8080.
The steps to do this using gcloud:
# Create a new firewall rule that allows INGRESS tcp:8080 with VMs containing tag 'allow-tcp-8080'
gcloud compute firewall-rules create rule-allow-tcp-8080 --source-ranges 0.0.0.0/0 --target-tags allow-tcp-8080 --allow tcp:8080
# Add the 'allow-tcp-8080' tag to a VM named VM_NAME
gcloud compute instances add-tags VM_NAME --tags allow-tcp-8080
# If you want to list all the GCE firewall rules
gcloud compute firewall-rules list
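Once the rule is in place and the tag is attached, you can verify from outside the network (using the external IP from the question):

curl http://35.189.73.115:8080/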
Here is another Stack Overflow answer which walks you through allowing ingress traffic on specific ports to your VM using the Cloud Console web UI (in addition to gcloud).
PS: These are also part of the steps in the tutorial you linked.
# Add the 'http-server' tag while creating the VM
gcloud compute instances create my-app-instance \
--image=debian-8 \
--machine-type=g1-small \
--scopes userinfo-email,cloud-platform \
--metadata-from-file startup-script=gce/startup-script.sh \
--zone us-central1-f \
--tags http-server
# Add firewall rules to allow ingress tcp:8080 to VMs with tag 'http-server'
gcloud compute firewall-rules create default-allow-http-8080 \
--allow tcp:8080 \
--source-ranges 0.0.0.0/0 \
--target-tags http-server \
--description "Allow port 8080 access to http-server"
I'm using Kubernetes v1.0.6 on AWS, deployed using kube-up.sh.
The cluster is using kube-dns.
$ kubectl get svc kube-dns --namespace=kube-system
NAME LABELS SELECTOR IP(S) PORT(S)
kube-dns k8s-app=kube-dns,kubernetes.io/cluster-service=true,kubernetes.io/name=KubeDNS k8s-app=kube-dns 10.0.0.10 53/UDP
Which works fine.
$ kubectl exec busybox -- nslookup kubernetes.default
Server: 10.0.0.10
Address 1: 10.0.0.10 ip-10-0-0-10.eu-west-1.compute.internal
Name: kubernetes.default
Address 1: 10.0.0.1 ip-10-0-0-1.eu-west-1.compute.internal
This is the resolv.conf of a pod.
$ kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.0.0.10
nameserver 172.20.0.2
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
Is it possible to have the containers use an additional nameserver?
I have a secondary DNS-based service discovery (on, let's say, 192.168.0.1) that I would like my Kubernetes containers to be able to use for DNS resolution.
PS: A Kubernetes 1.1 solution would also be acceptable :)
Thank you very much in advance,
George
The DNS addon README has some details on this. Basically, the pod will inherit the resolv.conf setting of the node it is running on, so you could add your extra DNS server to the nodes' /etc/resolv.conf. The kubelet also takes a --resolv-conf argument that may provide a more explicit way for you to inject the extra DNS server. I don't see that flag documented anywhere yet, however.
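For instance, a sketch of the kubelet-flag approach (the file path is hypothetical and other kubelet flags are omitted; the extra server is the 192.168.0.1 resolver from the question):

# contents of /etc/kubernetes/resolv.conf on each node:
#   nameserver 192.168.0.1
#   nameserver 172.20.0.2
kubelet --resolv-conf=/etc/kubernetes/resolv.conf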
In Kubernetes (probably) 1.2, we'll be moving to a model where nameservers are assumed to be fungible. Too many resolvers break when different nameservers serve different subsets of DNS, and there is no real specification here that we can point to.
In other words, we'll start dropping the host's nameserver records from the container's merged resolv.conf and making our own DNS server the only nameserver line. Our DNS will be able to forward requests to upstream nameservers.
I eventually managed to solve this pretty easily by configuring SkyDNS to add an additional nameserver: you can just set the environment variable SKYDNS_NAMESERVERS, as defined in the SkyDNS docs, in your SkyDNS replication controller. It has minimal impact and does not depend on node changes, etc.
env:
- name: SKYDNS_NAMESERVERS
  value: 10.0.0.254:53,10.0.64.254:53
For those using Kubernetes kube-dns, neither the flag --nameservers nor the environment variable SKYDNS_NAMESERVERS is available any longer.
Usage of /kube-dns:
--alsologtostderr log to standard error as well as files
--config-map string config-map name. If empty, then the config-map will not used. Cannot be used in conjunction with federations flag. config-map contains dynamically adjustable configuration.
--config-map-namespace string namespace for the config-map (default "kube-system")
--dns-bind-address string address on which to serve DNS requests. (default "0.0.0.0")
--dns-port int port on which to serve DNS requests. (default 53)
--domain string domain under which to create names (default "cluster.local.")
--healthz-port int port on which to serve a kube-dns HTTP readiness probe. (default 8081)
--kube-master-url string URL to reach kubernetes master. Env variables in this flag will be expanded.
--kubecfg-file string Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens
--log-backtrace-at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log-dir string If non-empty, write log files in this directory
--log-flush-frequency duration Maximum number of seconds between log flushes (default 5s)
--logtostderr log to standard error instead of files (default true)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
-v, --v Level log level for V logs
--version version[=true] Print version information and quit
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Now, either you put your nameservers in the host's resolv.conf, so DNS is inherited from the node, or you use a custom resolv.conf and pass it to the kubelet with the flag --resolv-conf, as explained here.
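Alternatively, the --config-map flag listed above lets kube-dns pick up upstream nameservers dynamically. A minimal sketch (the IP is a placeholder for your secondary DNS server):

apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["192.168.0.1"]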
You need to know the IP of your CoreDNS service to set it as a secondary DNS server.
Run this command to get the CoreDNS IP:
kubectl -n kube-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 172.20.0.10 <none> 53/UDP,53/TCP 43d
metrics-server ClusterIP 172.20.232.147 <none> 443/TCP 43d
This is how I set up DNS in my deployment YAML.
I show the Google DNS IP here (for clarity) alongside my CoreDNS IP, but you should use your VPC DNS and your CoreDNS server.
containers:
- name: nginx
  image: nginx
  ports:
  - containerPort: 8080
dnsPolicy: None
dnsConfig:
  nameservers:
  - 8.8.8.8
  - 172.20.0.10
  searches:
  - 1b.svc.cluster.local
  - svc.cluster.local
  - cluster.local
  - ec2.internal
  options:
  - name: ndots
    value: "5"