I'm trying to get the hello-node service running and accessible from outside on an Azure VM with minikube.
minikube start --driver=virtualbox
Created the deployment:
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver
Exposed the deployment:
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
Suppose kubectl get services says:
hello-node LoadBalancer 1.1.1.1 8080:31382/TCP
The public IP of the Azure VM is 2.2.2.2, the private IP is 10.10.10.10, and the VirtualBox IP is 192.168.99.1/24.
How can I access the service from a browser outside the cluster's network?
In your case, you need to use --type=NodePort when creating the service object that exposes the deployment. A type=LoadBalancer service is backed by an external cloud provider's load balancer, which minikube does not provide, so its external IP would stay pending.
kubectl expose deployment hello-node --type=NodePort --name=hello-node-service
Display information about the Service:
kubectl describe services hello-node-service
The output should be similar to this:
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.32.0.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31496/TCP
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
Session Affinity: None
Events: <none>
Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 31496.
Get the public IP address of your VM, and then you can use this URL:
http://<public-vm-ip>:<node-port>
Don't forget to open this port in the firewall rules.
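For example, a minimal sketch, assuming the VM is managed with the Azure CLI (the resource group and VM names are placeholders, and 31496 stands in for your actual NodePort). Note that with the VirtualBox driver the NodePort listens on the minikube VM (the address printed by minikube ip), not on the Azure VM itself, so traffic arriving on the public IP has to be forwarded:

# Open the NodePort in the VM's network security group.
az vm open-port --resource-group myResourceGroup --name myVM --port 31496

# On the Azure VM, forward incoming traffic on the NodePort to the minikube VM.
sudo sysctl -w net.ipv4.ip_forward=1
sudo iptables -t nat -A PREROUTING -p tcp --dport 31496 -j DNAT --to-destination $(minikube ip):31496
sudo iptables -t nat -A POSTROUTING -j MASQUERADE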
Related
I have an application that relies on a Kafka service.
With Kafka Connect, I'm getting an error when trying to curl localhost:8083 on the Linux VM that's running the Kubernetes pod for Kafka Connect.
curl -v localhost:8083 gives:
Rebuilt URL to: localhost:8083/
Trying 127.0.0.1...
connect to 127.0.0.1 port 8083 failed: Connection refused
Failed to connect to localhost port 8083: Connection refused
Closing connection 0
curl: (7) Failed to connect to localhost port 8083: Connection refused
kubectl get po -o wide for my Kubernetes namespace shows the Kafka Connect pod running (output omitted).
When I check open ports using sudo lsof -i -P -n | grep LISTEN, I don't see 8083 listed. There's also nothing suspicious in the pod's logs.
There's a Kubernetes manifest that I think was probably used to set up the Kafka Connect service; these are the relevant parts. I'd really appreciate any advice about how to figure out why I can't curl localhost:8083.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-connect
  namespace: my-namespace
spec:
  ...
  template:
    metadata:
      labels:
        app: connect
    spec:
      containers:
      - name: kafka-connect
        image: confluentinc/cp-kafka-connect:3.0.1
        ports:
        - containerPort: 8083
        env:
        - name: CONNECT_REST_PORT
          value: "8083"
        - name: CONNECT_REST_ADVERTISED_HOST_NAME
          value: "kafka-connect"
      volumes:
      - name: connect-plugins
        persistentVolumeClaim:
          claimName: pvc-connect-plugin
      - name: connect-helpers
        secret:
          secretName: my-kafka-connect-config
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-connect
  namespace: my-namespace
  labels:
    app: connect
spec:
  ports:
  - port: 8083
  selector:
    app: connect
You can't connect to a service running inside your cluster from outside the cluster without a little bit of tinkering.
You have three possible solutions:
Use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster.
See the services and kubectl expose documentation.
Be aware, depending on your environment, this may expose the service to the internet.
Access using the Proxy Verb (see the Kubernetes documentation on the apiserver proxy).
This only works for HTTP/HTTPS. Use this if your service is not secure enough to be exposed to the internet.
Access from a pod running inside your cluster.
As you have noticed in the comments, you can curl from inside the pod. You can also do this from any other pod running in the same cluster; pods can communicate with each other without any additional configuration. A sketch of all three options follows.
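A minimal sketch of all three options, using the kafka-connect service and my-namespace from your manifest (the proxy port and the curlimages/curl image are arbitrary choices, not requirements):

# Option 1: switch the existing service to NodePort; it then listens on
# every node's IP at the assigned port (30000-32767 by default).
kubectl patch service kafka-connect -n my-namespace -p '{"spec":{"type":"NodePort"}}'
kubectl get service kafka-connect -n my-namespace

# Option 2: the apiserver proxy verb; kubectl proxy handles authentication for you.
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/my-namespace/services/kafka-connect:8083/proxy/

# Option 3: curl from a throwaway pod inside the cluster, via the service's DNS name.
kubectl run tmp-curl --rm -it --image=curlimages/curl --restart=Never \
  -- curl -s http://kafka-connect.my-namespace.svc.cluster.local:8083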
Why can I not curl 8083 when I ssh onto the VM?
Pods and services are not reachable from outside the cluster unless they are exposed using the aforementioned methods (point 1 or 2).
Why isn't the port exposed on the host VM that has the pods?
It's not exposed on your VM; it's exposed inside your cluster.
I would strongly recommend going through Cluster Networking documentation to learn more.
I have a Kubernetes cluster in Azure which holds some services and pods. I want those pods to communicate with each other, but when I try to execute a curl/wget from one to another, a timeout occurs.
The service YAMLs can be found below:
First service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: core-node
  name: core-node
spec:
  ports:
  - name: "9001"
    port: 9001
    targetPort: 8080
  selector:
    app: core-node
status:
  loadBalancer: {}
Second service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: core-python
  name: core-python
spec:
  ports:
  - name: "9002"
    port: 9002
    targetPort: 8080
  selector:
    app: core-python
status:
  loadBalancer: {}
When I connect to the "core-node" pod, for example, through sh and try to execute the following command, it times out. The same happens from the "core-python" pod toward the other one.
wget core-python:9002
wget: can't connect to remote host (some ip): Operation timed out
I also tried using the IP directly and switching from ClusterIP to LoadBalancer, but the same thing happens. I have some proxy configuration as well, but it's done mainly at the Ingress level and should not affect communication between pods via service names, at least from what I know.
Pods are in running status and their APIs can be accessed through the public URLs exposed through Ingress.
#EDIT1:
I also connected to one of the pods and checked whether port 8080 is listening; it seems OK from my perspective.
netstat -nat | grep LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
#EDIT2:
When I do an endpoints check for this service, it returns the following:
kubectl get ep core-node
NAME ENDPOINTS AGE
core-node 10.x.x.x:8080 37m
If I try to wget this IP from the other pod, it responds:
wget 10.x.x.x:8080
Connecting to 10.x.x.x:8080 (10.x.x.x:8080)
wget: server returned error: HTTP/1.1 404 Not Found
I'm trying to expose a pod using a LoadBalancer service. The service was created successfully and an external IP was assigned. When I tried accessing the external IP in the browser, the site did not load and I got ERR_CONNECTION_TIMED_OUT. Please see the YAML below:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: service-api
  name: service-api
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30868
    port: 80
    protocol: TCP
    targetPort: 9080
    name: http
  selector:
    name: service-api
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I also tried creating the service using the Kubernetes CLI; still no luck.
It looks like I have faulty DNS on my k8s cluster, and restarting the cluster resolves the issue. Before restarting the whole cluster, though, you can delete the DNS pods in kube-system so they get recreated; if it's still not working after that, restart the cluster.
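A minimal sketch of the DNS-pod refresh, assuming the standard k8s-app=kube-dns label on the kube-dns/CoreDNS pods (verify with the first command before deleting anything):

kubectl get pods -n kube-system -l k8s-app=kube-dns
# Deleting them is safe: their Deployment recreates them immediately.
kubectl delete pods -n kube-system -l k8s-app=kube-dns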
Battling with a Kubernetes manifest on Azure. I have a simple API app running on port 443 (HTTPS). I simply want to run and replicate this app 3 times within a Kubernetes cluster with a load balancer.
My manifest file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: apiApp
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: apiApp
    spec:
      containers:
      - name: apiApp
        image: {image name on Registry}
        ports:
        - containerPort: 443
          hostPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: apiApp
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: apiApp
In the above manifest, the load balancer does not seem to find the app on port 443 within the container.
1) How can I write this manifest so that the load balancer is linked to port 443 of the containers and is also exposed to the outside world on port 443?
2) What would the manifest look like in a multi-cluster environment (same conditions as above)?
For your issue, I did a test with the load balancer by following the document Deploy an Azure Kubernetes Service (AKS) cluster.
That example only has one pod, so I scaled it up to 3 with the command kubectl scale --replicas=3 deployment/azure-vote-front. The YAML for the scaled Deployment and the LoadBalancer Service looks roughly like the sketch below.
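This is a reconstruction from the AKS tutorial, not the exact file (the apiVersion and image tag may differ in your copy):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 3
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front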
When the cluster finished deploying, I could access the service from the Internet via a web browser. You can also use the command az aks browse to open the Kubernetes dashboard and get an overview of the cluster.
Update
The Azure Kubernetes cluster's resources live in their own resource group, and the load balancer is created there as well.
I am attempting to create a service for creating training datasets using the Prodigy UI tool. I would like to do this using a Kubernetes cluster which is running in Azure cloud. My Prodigy UI should be reachable on 0.0.0.0:8880 (on the container).
As such, I created a deployment as follows:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: prodigy-dply
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prodigy_pod
  template:
    metadata:
      labels:
        app: prodigy_pod
    spec:
      containers:
      - name: prodigy-sentiment
        image: bdsdev.azurecr.io/prodigy
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "prodigy spacy textapi -F training_recipe.py"]
        ports:
        - name: prodigyport
          containerPort: 8880
This should (should being the operative word here) expose port 8880 at the pod level, aliased as prodigyport.
Following that, I have created a Service as below:
kind: Service
apiVersion: v1
metadata:
  name: prodigy-service
spec:
  type: LoadBalancer
  selector:
    app: prodigy_pod
  ports:
  - protocol: TCP
    port: 8000
    targetPort: prodigyport
At this point, when I run the associated kubectl create -f <deployment>.yaml and kubectl create -f <service>.yaml, I get an ExternalIP and associated Port: 10.*.*.*:34672.
This is not reachable by browser, and I'm assuming I have a misunderstanding of how my browser would interact with this Service, Pod, and the underlying Container. What am I missing here?
Note: I am willing to accept that Kubernetes may not be the tool for the job here; it seems enticing because of the ease of scaling and of updating images to reflect more recent configurations.
You can find the public IP address (LoadBalancer Ingress) with this command:
kubectl get service azure-vote-front
The result looks like this:
root@k8s-master-79E9CFFD-0:~# kubectl get service azure
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure 10.0.136.182 52.224.219.190 8080:31419/TCP 10m
Then you can browse it with the external IP and port, like this:
curl 52.224.219.190:8080
Also, you can find the Load Balancer rules in the Azure portal, or via the CLI as sketched below.
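A sketch, assuming the Azure CLI and placeholder names for the resource group and load balancer:

az network lb list --resource-group myResourceGroup --output table
az network lb rule list --resource-group myResourceGroup --lb-name myLoadBalancer --output table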
Hope this helps.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services prodigy-service
The IP address is listed next to LoadBalancer Ingress.
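A trimmed sample of what to look for in that output (all values here are placeholders):

Name:                     prodigy-service
Type:                     LoadBalancer
IP:                       10.0.xxx.xxx
LoadBalancer Ingress:     52.xxx.xxx.xxx
Port:                     <unset>  8000/TCP
TargetPort:               prodigyport/TCP
NodePort:                 <unset>  34672/TCP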
Also, you can use port forwarding to access your pod:
kubectl port-forward <pod_name> 8880:8880
After that, you can access the Prodigy UI at localhost:8880 in your browser.