Trouble accessing Kubernetes endpoints - apache-spark

I'm bringing up Spark on Kubernetes according to this example: https://github.com/kubernetes/kubernetes/tree/master/examples/spark
For some reason, I'm having problems getting the master to listen on :7077 for connections from worker nodes. It appears that connections aren't being proxied down from the service. If I bring the service up, then bring the master controller up with $SPARK_MASTER_IP set to spark-master, it correctly resolves to the service IP but cannot bind the port. If I set the IP to localhost instead, it binds a local port and comes up -- since services should forward socket connections down to the pod endpoint, this should be fine, so we move on.
Now I bring up workers. They attempt to connect to the service IP on :7077 and cannot. It seems as if connections to the service aren't making it down to the endpoint. Except...
I also have a webui service configured as in the example. If I connect to it with kubectl proxy, I can reach the web UI that spark-master serves on :8080 by going through the webui service. Yet the nearly identically-configured spark-master service on port 7077 passes nothing through. If I configure the master to bind a local IP, it comes up but never receives connections from the service. If I configure it to bind the service IP, the bind fails and it can't come up at all.
I'm running out of ideas as to why this might be happening -- any assistance is appreciated. I'm happy to furnish more debugging info on request.

I'm sorry, the Spark example was broken, in multiple ways.
The issue:
https://github.com/kubernetes/kubernetes/issues/17463
It now works, as of 2/25/2016, and is passing our continuous testing, at least at HEAD (and the next Kubernetes 1.2 release).
Note that DNS is required, though it is set up by default in a number of cloud provider implementations, including GCE and GKE.
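If you want to verify that cluster DNS is working before retrying, a quick check (assuming the service is named spark-master, as in the example) is to resolve it from a throwaway pod:

# Run a one-off busybox pod and look up the spark-master service;
# a successful lookup returns the service's cluster IP.
kubectl run -i --tty busybox --image=busybox --restart=Never -- nslookup spark-master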

Related

Why do telnet and nc report that a connection works from an Azure Kubernetes pod when it shouldn't?

I have an Azure AKS Kubernetes cluster. I created one Pod running an Ubuntu container from the Ubuntu image and several other Pods from Java/.NET Dockerfiles.
When I exec into any of the Pods (including the Ubuntu one) and run a telnet/nc command against a remote server/port to validate the connection, it's very strange: no matter which remote server IP and port I choose, the command always reports that the connection succeeded, even though that IP/port should not actually be reachable.
For example, I telnet to 1.1.1.1 on port 1111 and it reports success; any other IP and port number I try behaves the same way. I tried this from all the other pods in the AKS cluster, and they all behave the same. I also re-created the AKS cluster using CNI networking instead of the default kubenet network, still the same. Could anyone help me with this? Thanks a lot in advance.
I figured out the root cause of this issue: I had installed Istio as a service mesh, and it turns out this is expected behavior by design, as described in this link: https://github.com/istio/istio/issues/36540
However, although this is by design in Istio, I'm still very interested in how to easily figure out whether a remote IP/port TCP connection actually works from an Istio sidecar-enabled Pod.
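One approach that can help, sketched below under the assumption that Envoy access logging is enabled in the mesh: since the sidecar itself completes the TCP handshake, inspect the sidecar's access log for the upstream result instead of trusting nc/telnet.

# Inside a sidecar-enabled pod, Envoy accepts the outbound TCP handshake
# itself, so nc/telnet "succeed" regardless of whether the target is reachable.
nc -zv 1.1.1.1 1111   # misleadingly reports success

# Check the istio-proxy access log for the real upstream result
# (assumes access logging is enabled; <pod-name> is a placeholder):
kubectl logs <pod-name> -c istio-proxy --tail=20
# Response flags such as UF (upstream connection failure) or UH (no healthy
# upstream) in the log entry indicate the remote IP/port was not actually reachable.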

Kubernetes + Socket.io: Pod client -> LoadBalancer service SSL issues

I have a socket.io-based node.js deployment on my Kubernetes cluster with a LoadBalancer-type service through Digital Ocean. The service uses SSL termination using a certificate uploaded to DO.
I've written a pod which acts as a health check to ensure that clients are still able to connect. This pod is node.js using the socket.io-client package, and it connects via the public domain name for the service. When I run the container locally, it connects just fine, but when I run the container as a pod in the same cluster as the service, the health check can't connect. When I shell into the pod, or any pod really, and try wget my-socket.domain.com, I get an SSL handshake error "wrong version number".
Any idea why a client connection from outside the cluster works, a connection from inside the cluster to a normal external server works, but a client connection from a pod in the cluster to the public domain name of the service doesn't work?
You have to set up an Ingress Controller to route traffic from the load balancer to a Service.
The flow of traffic looks like this:
INTERNET -> LoadBalancer -> [ Ingress Controller -> Service]
If you want to use SSL:
You can provision your own SSL certificate and create a Secret to hold it. You can then refer to the Secret in an Ingress specification to create an HTTP(S) load balancer that uses the certificate.
You can deploy an ingress controller such as nginx by following these instructions: ingress-controller.
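A rough sketch of the TLS part is below; it assumes the nginx ingress controller is installed, and names like my-socket-tls and my-socket-service are placeholders.

# Placeholder names; the certificate and key must be base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: my-socket-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: socket-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
    - hosts:
        - my-socket.domain.com
      secretName: my-socket-tls
  rules:
    - host: my-socket.domain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-socket-service
                port:
                  number: 80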
It turns out the issue is with how kube-proxy handles LoadBalancer-type services and requests to them from inside the cluster. When the service is created, kube-proxy adds iptables entries that cause requests from inside the cluster to skip the load balancer completely, which becomes a problem when the load balancer also handles SSL termination. There is a workaround: adding a loadbalancer-hostname annotation forces all connections to go through the load balancer. AWS tends not to have this problem because they automatically apply the workaround in their service configurations, but DigitalOcean does not.
Here are some more details:
https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/annotations.md
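A minimal sketch of the workaround, assuming the DigitalOcean cloud controller manager is managing the load balancer (the service name, selector, and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-socket-service
  annotations:
    # Forces in-cluster clients to resolve this hostname and go through the
    # load balancer instead of being short-circuited by kube-proxy's iptables rules.
    service.beta.kubernetes.io/do-loadbalancer-hostname: "my-socket.domain.com"
spec:
  type: LoadBalancer
  selector:
    app: my-socket-app
  ports:
    - port: 443
      targetPort: 3000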

How can I apply port forwarding to an Azure Container Instance?

I've been experimenting with a containerised app that listens for and processes TCP traffic on a specified port.
To make this work on my own physical machine acting as the host, I had to set up port forwarding from it to the container.
I've since deployed the dockerized app to an Azure Container Instance, which runs as expected and starts listening on its own IP address and the specified port, BUT I can't find a way to set up port forwarding so that traffic sent to the public IP address assigned to the container group can reach the app. Is this possible?
This article on container groups seems to suggest it is, but doesn't say how.
Official answer from Microsoft Support (posting here in case anyone has the same question)
Unfortunately, port forwarding is not supported in ACI yet; it's on the roadmap.
UPDATE
It looks like this answer from support was wrong. Ports specified when creating the container group are automatically published, so containers with exposed ports can receive traffic from outside; the issue I was having was actually a problem with my own code.
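For reference, a sketch with the Azure CLI (the resource group, image, and port below are placeholders): publishing the port at creation time is all that is needed.

# Create a container group with a public IP; the listed ports are exposed
# directly on that IP, so no separate port-forwarding step is required.
az container create \
  --resource-group my-rg \
  --name my-tcp-app \
  --image myregistry.azurecr.io/my-tcp-app:latest \
  --ip-address Public \
  --ports 5000 \
  --protocol TCP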

How do I expose Kubernetes service to the internet?

I am running a kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an azure subnet. The master has a NIC attached to it that has a public IP. This means if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a bit of a hack. What I did was check all the listening ports on the master and try each port against the public IP. I was able to hit the service and get a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load balanced across any of the pods).
My question is how do I map the external IP to the service IP? I know kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the service from a JSON or YAML file, use the spec.externalIPs array.
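For example (a sketch only; the external IP 52.0.0.1 and the app: frontend selector are placeholders for the master's public NIC address and your pod labels):

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
    - port: 80
      targetPort: 80
  # Traffic arriving at this IP on port 80 is routed to the service's endpoints.
  externalIPs:
    - 52.0.0.1

The equivalent with kubectl expose would be something like: kubectl expose rc frontend --port=80 --external-ip=52.0.0.1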

Can kube-apiserver allow insecure connections from outside of localhost?

I'm trying to set up a Kubernetes cluster for a development environment (local VMs). Because it's development, I'm not using working certs for the api-server. It would seem I have to use the secure connection in order to connect minion daemons such as kube-proxy and kubelet to the master's kube-apiserver. Has anyone found a way around that? I haven't seen anything in the docs about being able to force the insecure connection or ignore that the certs are bad; I would assume there's a flag for it when running either the minion or master daemons, but I've had no luck. Etcd is working: it shows entries from both the master and the minions, and the logs show handshake attempts that are definitely failing due to bad certs.
You can set the flag --insecure-bind-address=0.0.0.0 when starting kube-apiserver to allow access to the unauthenticated API endpoint running on port 8080 from your network (by default it is only accessible on localhost).
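A sketch of what that looks like on older Kubernetes versions (the insecure port has since been removed from kube-apiserver, and the addresses below are placeholders):

# On the master: serve the unauthenticated API on all interfaces.
kube-apiserver --insecure-bind-address=0.0.0.0 --insecure-port=8080 ...

# On the minions: point the daemons at the plain-HTTP endpoint so no certs are needed.
kubelet --api-servers=http://<master-ip>:8080 ...
kube-proxy --master=http://<master-ip>:8080 ...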
