Random characters when describing Kubernetes namespaces - DNS

I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a JavaScript (Node.js) deployment, both exposed via a default ClusterIP Service. I need WebSocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be, in a form like my-svc.my-namespace.svc.cluster.local?

About your questions:
1- Kubernetes generates those pod name suffixes on purpose: they keep pod names unique, and the pod-template-hash part of the name groups all the pods created from the same ReplicaSet.
So just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2- kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus the port, like CLUSTER_IP:PORT, to reach your service.
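To answer the DNS part more directly: the stable name to use is the Service name, not the pod name. As a minimal sketch (the service name, port and label below are assumptions, not taken from your manifests), a ClusterIP Service like this:
apiVersion: v1
kind: Service
metadata:
  name: java-api            # assumed name for the Spring Boot service
  namespace: default
spec:
  ports:
  - port: 8080              # assumed port the Java container listens on
    targetPort: 8080
  selector:
    app: java-deployment    # assumed label on the Java pods
is reachable from any pod in the cluster as java-api.default.svc.cluster.local:8080 (or just java-api:8080 from within the default namespace), regardless of the random suffixes on the pod names.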

I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
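The same Service is also reachable through cluster DNS, so a client deployment can simply point at the service name instead of the injected variables. A minimal sketch of a container env entry (BACKEND_URL is a hypothetical variable name; the service name and port come from the definition above, and the default namespace is assumed):
env:
- name: BACKEND_URL
  value: "http://my-big-deployment.default.svc.cluster.local:8000"
  # short form from the same namespace: http://my-big-deployment:8000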

Related

How to dynamically assign a port to a helm configuration?

So, I have a really simple Flask app that I'm deploying in a Kubernetes environment using helm. Now, I have the following defined in my values.yaml:
...
service:
  type: ClusterIP
  port: 5000
  targetPort: 5000
  # can add
  # flaskPort: "5000"
ingress:
...
I know that I can set environment variables in my helm install command by typing helm install python-service . --values values-dev.yaml --set flaskPort=5000 and in my Python code just do:
PORT = int(os.environ.get("flaskPort"))
app.run(port=PORT, debug=True, host="0.0.0.0")
I can also define entries for this environment variable flaskPort in my values-dev.yaml and in my templates/deployment.yaml. But what about the port and targetPort entries in my values-dev.yaml? Wouldn't they clash with whatever flaskPort I set? How do I modify my chart to make sure that whatever port I specify in my helm install command, my Python app is started on that port? The Python app is a small mock server which responds to simple GET/POST commands.
Each Kubernetes pod has its own IP address inside the cluster, so you don't need to worry about port conflicts. Similarly, each service has its own IP address, distinct from the pod IP addresses, plus its own DNS name, so services can use the same ports as pods or other services without conflicts.
This means that none of this needs to be configurable at all:
Your application can listen on whatever port is the default for its framework; for Flask that is generally port 5000. (It does need to listen on the special "all interfaces" address 0.0.0.0.)
The pod spec should reflect the same (fixed) port number. It can help to give it a name.
ports:
- name: http
  containerPort: 5000
The service can use any port it likes; for an HTTP-based service I'd recommend the default HTTP port 80. The targetPort: can be a name, which would match the name: of the corresponding pod/container port.
type: ClusterIP
ports:
- name: http
  port: 80
  targetPort: http
Calls to the service from within the cluster can use plain http://svcname.nsname/ URLs, without really caring how the service is implemented, what the pod IPs are, or what ports the pods happen to be using.
At a Helm level it can make sense to make details of the service configurable; in particular if it's a NodePort or LoadBalancer service (or neither) and any of the various cloud-provider-specific annotations. You don't need to configure the pod's port details, particularly if you've written both the application and the Helm chart. For example, if you run helm create, the template service that you get doesn't allow configuring the pod's port; it's fixed in the deployment spec and available to the service under the http name.
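As a rough sketch of that split, assuming a chart laid out like the helm create scaffold (the python-service names and the exact template helpers are assumptions, and the templates are simplified):
# values.yaml
service:
  type: ClusterIP
  port: 80                  # service port; safe to make configurable

# templates/service.yaml (simplified)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-python-service   # helm create actually uses a "fullname" helper here
spec:
  type: {{ .Values.service.type }}
  ports:
  - name: http
    port: {{ .Values.service.port }}
    targetPort: http        # resolves to the named container port below
  selector:
    app: python-service     # assumed label; must match the deployment's pod labels

# templates/deployment.yaml (container ports only)
ports:
- name: http
  containerPort: 5000       # Flask's default port, deliberately hard-coded
Changing service.port at install time (--set service.port=8080) then never requires touching the Flask code, because targetPort: http always resolves to containerPort 5000.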

Kubernetes fails to create loadbalancer on azure

We have created a Kubernetes cluster on Azure VMs, with a Kube master and two nodes. We have deployed an application and created a service of type NodePort, which works well. But when we try to use type: LoadBalancer, the service is created but the external IP stays in the pending state. Currently we are unable to create a service of type LoadBalancer, and because of this the nginx ingress controller also ends up in the same state. So we are not sure how to set up load balancing in this case.
We have tried creating a Load Balancer in Azure and using its IP as shown below in the service.
kind: Service
apiVersion: v1
metadata:
  name: jira-service
  labels:
    app: jira-software
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: jira-software
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  ports:
  - name: jira-http
    port: 8080
    targetPort: jira-http
Similarly, we have one more application running on this Kubernetes cluster and we want to access each application based on its context path:
if we invoke Jira it should call the Jira backend server: http://dns-name/jira
if we invoke some other app like Bitbucket: http://dns-name/bitbucket
If I understand correctly, you used type LoadBalancer on a cluster you built yourself on Virtual Machines, which will not work - type LoadBalancer only works where a cloud-provider integration is available, such as in managed Kubernetes services like GKE, AKS etc.
You can find more information here.
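For the path-based routing part of the question, once an ingress controller is actually reachable (which on plain VMs needs something other than type LoadBalancer in front of it), a rough sketch with the networking.k8s.io/v1 Ingress API could look like this; jira-service and port 8080 come from the question, while bitbucket-service and its port are assumptions:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
  - http:
      paths:
      - path: /jira
        pathType: Prefix
        backend:
          service:
            name: jira-service        # from the question
            port:
              number: 8080
      - path: /bitbucket
        pathType: Prefix
        backend:
          service:
            name: bitbucket-service   # assumed service name
            port:
              number: 7990            # assumed port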

Https certificates and Kubernetes (EKS)

I would like to secure my web application running on Kubernetes (EKS). All the nodes attached to the cluster are running on private subnets.
I have one front-end service and a dozen back-end services.
The front-end service is a pod running a container on port 80. It is configured to be attached to an ELB which only accepts traffic on 443 with an HTTPS certificate.
apiVersion: v1
kind: Service
metadata:
  name: service_name
  labels:
    app: service_name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
  - port: 443         # Exposed port
    targetPort: 80    # Container port
  selector:
    app: service_name
  type: LoadBalancer
The back-end services are pods running containers also running on port 80. None of them have been configured to be accessible from outside the cluster. Back-end services talk to each other by pointing to http://service_name (NOT https) as I configured them with this template:
apiVersion: v1
kind: Service
metadata:
  name: service_name
spec:
  ports:
  - port: 80          # Exposed port
    targetPort: 80    # Container port
  selector:
    app: service_name
It all works but is it sufficient?
Should the front-end/back-end containers use certificate/443 too with a wildcard https certificate? Should this configuration be done inside the container or on the services' configurations?
I have done quite a bit of investigation now and here is what I came down to.
All my EKS EC2 instances are running in private subnets, which means they are not reachable from outside. By default Kubernetes does not encrypt traffic between pods, so an attacker who gained access to my VPC (a rogue AWS engineer, someone who manages to physically access an AWS data center, someone who compromised my AWS account...) would be able to sniff the network traffic. At the same time, an attacker in that position would have access to much more: with access to my AWS account, they could download the HTTPS certificate themselves, for instance. It comes down to weighing the effort such an attack requires against the value of your data. If your data includes credit card/payment details or sensitive personal data (date of birth, health details...), encrypting traffic in transit is a must.
Anyway, to secure pod-to-pod traffic, there are 2 options.
1- Update all the pod source code and add the certificate there. This requires a lot of maintenance if you are running many pods, and certificates expire every other year.
2- Add an extra 'network layer' like https://istio.io/. This will add complexity to your cluster, and in the case of EKS, support from AWS will be 'best effort'. Ideally, you would pay for Istio support.
For the load balancer, I decided to add an ingress controller to the cluster (Nginx, Traefik...) and set it up with HTTPS. That's critical as the ELB sits on the public subnets.
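For reference, a rough sketch of what TLS termination on such an ingress can look like with the networking.k8s.io/v1 API; the host name, secret name and the frontend-service backend are all assumptions, and the certificate plus key would live in a standard Kubernetes TLS secret:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  tls:
  - hosts:
    - app.example.com            # assumed host
    secretName: app-tls-cert     # assumed kubernetes.io/tls secret holding the cert and key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service   # assumed name of the front-end ClusterIP service
            port:
              number: 80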

Configuring HTTPS for an internal IP on Azure Kubernetes

I have deployed a Kubernetes cluster to a custom virtual network on Azure using acs-engine. There is an ASP.NET Core 2.0 Kestrel app running on the agent VMs and the app is accessed over VPN through a Service of the Azure internal load balancer type. Now I would like to enable HTTPS on the service. I have already obtained a domain name and a certificate but have no idea how to proceed. Apparently configuring Kestrel to use HTTPS and copying the certificate to each container is not the way to go.
I have checked out tutorials such as ingress on k8s using acs and configure Nginx Ingress Controller for TLS termination on k8s on Azure but both of them end up exposing a public external IP and I want to keep the IP internal and not accessible from the internet. Is this possible? Can it be done without ingresses and their controllers?
While for some reason I still can't access the app through the ingress, I was able to create an internal ingress controller service with the IP I want using the following configuration:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
  name: nginx-ingress-svc
spec:
  type: LoadBalancer
  ports:
  - port: 443
    targetPort: 443
  loadBalancerIP: 130.10.1.9
  selector:
    k8s-app: nginx-ingress-controller
The tutorial you linked is a bit outdated; at least, the instructions have you go to an 'examples' folder in the GitHub repo they link, but that doesn't exist. Anyhow, a normal nginx ingress controller consists of several parts: the nginx deployment, the service that exposes it, and the default backend parts. You need to look at the YAMLs they ask you to deploy, find the second part of what I listed - the ingress controller service - and change its type from LoadBalancer to ClusterIP (or delete type altogether, since ClusterIP is the default).
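In other words, the controller Service from the question would end up looking roughly like this once the type is dropped (names taken from the configuration above):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-svc
spec:
  # no type given, so it defaults to ClusterIP and no load balancer is provisioned
  ports:
  - port: 443
    targetPort: 443
  selector:
    k8s-app: nginx-ingress-controller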

Does NodePort work on Azure Container Service (Kubernetes)

I have the following service for the Kubernetes dashboard:
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Annotations: kubectl.kubernetes.io/last-applied-configuration={"kind":"Service","apiVersion":"v1","metadata":{"name":"kubernetes-dashboard","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"k...
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP: 10.0.106.144
Port: <unset> 80/TCP
NodePort: <unset> 30177/TCP
Endpoints: 10.244.0.11:9090
Session Affinity: None
Events: <none>
According to the documentation, I ran
az acs kubernetes browse
and it works on http://localhost:8001/ui
But I want to access it outside the cluster too. The describe output says that it is exposed using NodePort on port 30177.
But I'm not able to access it on http://<any node IP>:30177
As we know, to expose a service to the internet we can use NodePort and LoadBalancer.
As far as I know, Azure does not support the NodePort type right now.
But I want to access it outside the cluster too.
We can use LoadBalancer to re-create the Kubernetes dashboard service; here are my steps:
Delete kubernetes-dashboard via the Kubernetes UI: set the Namespace to kube-system, then select Services, then delete it.
Modify kubernetes-dashboard-service.yaml: SSH into the master VM, then change the type from NodePort to LoadBalancer:
root@k8s-master-47CAB7F6-0:/etc/kubernetes/addons# vi kubernetes-dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
Start kubernetes browse from Azure CLI 2.0:
C:\Users>az acs kubernetes browse -g k8s -n containerservice-k8s
Then SSH into the master VM to check the status.
Now we can browse the UI via the public IP address.
Update:
In the Azure Container Service (Kubernetes) cluster architecture, we should use a Load Balancer to expose the service to the internet.
On second thought, this actually is expected to NOT work. The only public IP in the cluster, by default, is for the load balancer on the masters. And that load balancer obviously is not configured to forward random ports (like 30000-32767 for example). Further, none of the nodes directly have a public IP, so by definition NodePort is not going to work external to the cluster.
The only way you're going to make this work is by giving the nodes public IP addresses directly. This is not encouraged for a variety of reasons.
If you merely want to avoid waiting... then I suggest:
Don't delete the Service. Most dev scenarios should just be kubectl apply -f <directory> in which case you don't really need to wait for the Service to re-provision
Use Ingress along with 'nginx-ingress-controller' so that you only need to wait for the full LB+NSG+PublicIP provisioning once, and then can just add/remove Ingress objects in your dev scenario.
Use minikube for development scenarios, or manually add public ips to the nodes to make the NodePort scenario work.
You can't expose the service via NodePort by running the kubectl expose command; you get a VIP address outside the range of the subnets your cluster sits on... Instead, deploy the service through a YAML file where you can specify an internal load balancer as the type..., which will give you a local IP on the master subnet that you can connect to over the internal network...
Or, you can just expose the service with an external load balancer and get a public IP, available on the www.
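As a sketch of that internal load balancer option for the dashboard itself (the annotation is the same one used in the other answers above; the port mapping and selector come from the dashboard service shown earlier):
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # internal IP instead of a public one
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard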
