Replacing IP address in Kubernetes with custom name - Azure

I have created a sample Spring Boot app and did the following:
1. Created a Docker image.
2. Created an Azure Container Registry and pushed the image to it with docker push.
3. Created a cluster in Azure Kubernetes Service and deployed the app successfully, choosing the external endpoint option.
(screenshot: Kubernetes external endpoint)
For service-to-service calls I don't want to use an IP like http://20.37.134.68:80 but a custom name instead. How can I do that?
Also, if I choose the internal option, is there any way to replace the name?
I tried editing the YAML with an endpoint name property, but it failed. Any ideas?

I think you are mixing up some concepts, so I'll try to explain and help you reach what you want.
When you deploy a container image in a Kubernetes cluster, in most cases you will use a Pod or Deployment spec, which is basically a YAML file with all your deployment/pod configuration: name, image name, etc. Here is an example of a simple echo-server app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - name: http
          containerPort: 80
Note the name fields in the file. This is where you configure the names for your deployment and its containers.
To expose your application, you will need to use a Service. Services can be internal or external. Here you can find all the service types.
For an internal service, use the service type ClusterIP (the default), which means only your cluster can reach the pods. Other pods can then reach your service by its DNS name, composed as my-svc.my-namespace.svc.cluster-domain.example.
Here is an example of a service for the deployment above:
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
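This already covers the service-to-service case: another pod in the cluster can call the echo app by the service name instead of an IP (within the same namespace, plain http://echo-svc also resolves). A quick way to verify (a sketch; the curl image and the default namespace are assumptions):

kubectl run tester --rm -it --restart=Never --image=curlimages/curl -- \
  curl http://echo-svc.default.svc.cluster.local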
To expose your service externally, you have the option of using a service of type NodePort or LoadBalancer, or using an ingress.
With an ingress you can configure your DNS name in the ingress rules, define path rules, and even configure HTTPS for your application. There are a few ingress options in Kubernetes, and one of the most popular is nginx-ingress.
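If you go the nginx-ingress route, installation is commonly a single Helm command (a sketch; the chart name and repo are the ones published by the ingress-nginx project, and the namespace is a common convention):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace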
Here is an example of how to configure a simple ingress for our example service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
  name: echo-ingress
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: echo-svc
          servicePort: 80
In the example I'm using the DNS name myapp.mydomain.com, which means your application will be reachable only by this name.
After creating the ingress, you can see the external IP with the command kubectl get ing, and you can then create an A record for it in your DNS server.
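For reference, the output looks roughly like this (the ADDRESS value is illustrative; it is what the A record should point at):

$ kubectl get ing
NAME           HOSTS                ADDRESS        PORTS   AGE
echo-ingress   myapp.mydomain.com   20.37.134.68   80      2m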

Related

Expose Ingress with host - what is a host and where to get it?

I am using Azure cloud and I set up a Kubernetes cluster.
Now I want to expose one service in my cluster with gRPC, so I learned I need an Ingress.
I set up the ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    #nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: fortune-ingress
  #namespace: default
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apigateway-service
            port:
              number: 80
Now the ingress is working, but it has host "*" and no external IP, so I don't know how to connect to it.
I do have a domain on another service, but I don't want to use it right now for development reasons; I just want a test host or an external IP to check that everything is working within my cluster and focus on that.
What should I do?

How to block all traffic to pods matching a label using a NetworkPolicy

As far as I know, the K8s NetworkPolicy can only allow pods matching a label to do something. I do not want to:
Deny all traffic
Allow traffic for all pods except the ones matching my label
but instead:
Allow all traffic
Deny traffic for pods matching my label
How do I do that?
From kubectl explain NetworkPolicy.spec.ingress.from:
DESCRIPTION:
List of sources which should be able to access the pods selected for this
rule. Items in this list are combined using a logical OR operation. If this
field is empty or missing, this rule matches all sources (traffic not
restricted by source). If this field is present and contains at least one
item, this rule allows traffic only if the traffic matches at least one
item in the from list.
As far as I understand this, we can only allow, not deny.
As you mentioned in the comments, you are using the kind tool to run Kubernetes. Instead of the kindnet CNI plugin (kind's default), which does not support Kubernetes network policies, you can use the Calico CNI plugin, which supports Kubernetes network policies and also offers its own similar solution called Calico network policies.
Example: I will create a cluster with the default kind CNI plugin disabled and a NodePort exposed for testing (assuming you already have the kind and kubectl tools installed):
kind-cluster-config.yaml file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true   # disable kindnet
  podSubnet: 192.168.0.0/16 # set to Calico's default subnet
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp            # Optional, defaults to tcp
Time to create a cluster using the above config:
kind create cluster --config kind-cluster-config.yaml
When the cluster is ready, I will install the Calico CNI plugin:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
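To block until the Calico node pods are ready, kubectl wait can be used (a sketch; the k8s-app=calico-node label comes from the standard manifest, and the timeout is arbitrary):

kubectl -n kube-system wait --for=condition=Ready pods \
  -l k8s-app=calico-node --timeout=120s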
Once all the Calico pods are ready (kubectl get pods -n kube-system to double-check), I will create a sample nginx deployment plus a service of type NodePort for access:
nginx-deploy-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
Let's apply it: kubectl apply -f nginx-deploy-service.yaml
So far so good. Now I will try to access nginx-service using the node IP (kubectl get nodes -o wide to check the node's IP address):
curl 172.18.0.2:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Okay, it's working.
Now it's time to install calicoctl and apply an example policy - based on this tutorial - to block ingress traffic only for pods with the label app=nginx:
calico-rule.yaml:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: app == "nginx"
  types:
  - Ingress
Apply it:
calicoctl apply -f calico-rule.yaml
Successfully applied 1 'GlobalNetworkPolicy' resource(s)
Now I can't reach the address 172.18.0.2:30000, which was working previously. The policy is working fine!
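To double-check, the same request now just hangs and times out (the exact curl output is illustrative):

curl --max-time 5 172.18.0.2:30000
curl: (28) Connection timed out after 5001 milliseconds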
Read more about calico policies:
Get started with Calico network policy
Calico policy tutorial
Also check this GitHub topic for more information about NetworkPolicy support in Kind.
EDIT:
It seems the Calico plugin supports plain Kubernetes NetworkPolicy as well, so you can just install the Calico CNI plugin and apply the following policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
I tested it and it seems to work fine as well.

LoadBalancer IP and Ingress IP status is pending in Kubernetes

I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pods' status in the kube-system namespace: all of them show as running. But whenever I set the service type to LoadBalancer, no LoadBalancer IP is created and the status always shows as pending. I have also created an ingress controller for the nginx service, but it does not get an ingress address either. I initialized the Kubernetes master with the following command:
kubeadm init
Below are the deployment, service, and ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector:          app=nginx
Type:              ClusterIP
IP:                10.96.107.97
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity:  None
Events:            <none>
$ kubectl describe ingress nginx
Name:             test-ingress
Namespace:        default
Address:
Default backend:  nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events:  <none>
Do we need to specify any IP ranges (private or public) of the VMs when running kubeadm init? Or
do we need to change any network settings on the Azure Ubuntu VMs?
Since you created your own Kubernetes cluster with kubeadm rather than using a managed offering from AWS, Azure, or GCP, there is no integrated cloud load balancer; that is why your LoadBalancer IP status stays pending.
You can work around this with an ingress controller or by exposing services directly through NodePort.
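For example, a NodePort variant of your nginx service would look like this (a sketch; the nodePort value 30080 is an assumption within the default 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30080

The pods are then reachable at <node-ip>:30080, subject to the VM's firewall rules.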
As an aside, I noticed your nginx service was using the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. You said you are using Azure, but that annotation is AWS-specific; annotations like this are platform-specific and have no effect on other clouds.
That said, if you would like to experiment directly with public IPs, you can define your service with externalIPs, provided a public IP is allocated to your node and ingress traffic to it is allowed:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
But note that this only works if the node actually owns that public IP and inbound traffic is permitted (on Azure, check the VM's network security group). If you are building your own Kubernetes cluster, the cleaner approach is an ingress controller.
Hope this helps.

Access Prodigy UI in Kubernetes Pod

I am attempting to create a service for building training datasets using the Prodigy UI tool. I would like to do this on a Kubernetes cluster running in the Azure cloud. My Prodigy UI should be reachable on 0.0.0.0:8880 (on the container).
As such, I created a deployment as follows:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: prodigy-dply
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prodigy_pod
  template:
    metadata:
      labels:
        app: prodigy_pod
    spec:
      containers:
      - name: prodigy-sentiment
        image: bdsdev.azurecr.io/prodigy
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "prodigy spacy textapi -F training_recipe.py"]
        ports:
        - name: prodigyport
          containerPort: 8880
This should (should being the operative word here) expose port 8880 at the pod level, aliased as prodigyport.
Following that, I have created a Service as below:
kind: Service
apiVersion: v1
metadata:
  name: prodigy-service
spec:
  type: LoadBalancer
  selector:
    app: prodigy_pod
  ports:
  - protocol: TCP
    port: 8000
    targetPort: prodigyport
At this point, when I run the associated kubectl create -f <deployment>.yaml and kubectl create -f <service>.yaml, I get an external IP and associated port of 10.*.*.*:34672.
This is not reachable from a browser, so I'm assuming I have a misunderstanding of how my browser would interact with this Service, Pod, and the underlying container. What am I missing here?
Note: I am willing to accept that Kubernetes may not be the tool for the job here; it seems enticing because of the ease of scaling and of updating images to reflect newer configurations.
You can find the public IP address (LoadBalancer Ingress) with this command:
kubectl get service prodigy-service
The result looks like this:
$ kubectl get service prodigy-service
NAME              CLUSTER-IP     EXTERNAL-IP      PORT(S)          AGE
prodigy-service   10.0.136.182   52.224.219.190   8000:31419/TCP   10m
Then you can browse it with the external IP and port, like this:
curl 52.224.219.190:8000
You can also find the load balancer rules in the Azure portal.
Hope this helps.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services prodigy-service
The IP address is listed next to LoadBalancer Ingress.
Also, you can use port forwarding to access your pod:
kubectl port-forward <pod_name> 8880:8880
After that you can access the Prodigy UI at localhost:8880 in your browser.
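A variant that avoids looking up the pod name is forwarding to the service instead (names and ports taken from the manifests above; the local port choice is arbitrary):

kubectl port-forward service/prodigy-service 8880:8000

This forwards localhost:8880 to the service's port 8000, which the service maps to container port 8880 via the prodigyport target.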

Sidecar Traefik container route to ports in Kubernetes

I am running a Node.js image in my Kubernetes pod, exposing a specific port (9080), and running Traefik as a sidecar container acting as a reverse proxy. How do I specify the Traefik route from the Deployment template?
Deployment
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  labels:
    app: web
  name: web-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: "nodeJS-image"
        name: web
        ports:
        - containerPort: 9080
          name: http-server
      - image: "traefik-image"
        name: traefik-proxy
        ports:
        - containerPort: 80
          name: traefik-proxy
        - containerPort: 8080
          name: traefik-ui
        args:
        - --web
        - --kubernetes
If I understand correctly, you want to forward requests hitting the Traefik container to the Node.js application living in the same pod. Given that the application is configured statically from Traefik's perspective, you can simply mount a proper file provider configuration into the Traefik pod (presumably via a ConfigMap) pointing at the sidecar container.
The simplest way to achieve this (as documented) is to append the following file provider configuration at the bottom of Traefik's TOML configuration file:
[file]
[backends.backend.servers.server]
  url = "http://127.0.0.1:9080"
[frontends.frontend]
  backend = "backend"
  [frontends.frontend.routes.route]
    host = "machine-echo.example.com"
If you mount the TOML configuration file into the Traefik pod under a path other than the default one (/etc/traefik.toml), you will also need to pass the --configFile option in the manifest referencing the correct location of the file.
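A minimal sketch of that wiring (the ConfigMap name, mount path, and file name are illustrative; the ConfigMap would hold your complete traefik.toml, shortened here to just the file provider section from above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
data:
  traefik.toml: |
    [file]
    [backends.backend.servers.server]
      url = "http://127.0.0.1:9080"
    [frontends.frontend]
      backend = "backend"
      [frontends.frontend.routes.route]
        host = "machine-echo.example.com"

and, inside the Deployment from the question, under the traefik-proxy container and the pod spec respectively:

        args:
        - --web
        - --kubernetes
        - --configFile=/config/traefik.toml
        volumeMounts:
        - name: traefik-config
          mountPath: /config
      # volumes sits at the pod spec level, alongside containers
      volumes:
      - name: traefik-config
        configMap:
          name: traefik-config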
After that, any request hitting the Traefik container on port 80 with a host header of machine-echo.example.com should get forwarded to the Node.js side car container on port 9080.
