K8s not able to reach the database - Azure

Here is my Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine3.8 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ./xyz/publish .
ENV ASPNETCORE_URLS=https://+:443;http://+:80
ENTRYPOINT ["dotnet","abc/xyz.dll"]
Here is my Deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyzdemo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      papi: web
  template:
    metadata:
      labels:
        papi: web
    spec:
      containers:
      - name: xyzdemo-site
        image: xyz.azurecr.io/abc:31018
        ports:
        - containerPort: 443
      imagePullSecrets:
      - name: secret
---
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    papi: web
  ports:
  - port: 44328
    targetPort: 443
Here is my appsettings.json file
"Server": "xyz.database.windows.net",
"Database": "pp",
"User": "ita",
"Password": "password",
Using all of these, I deployed the application into the k8s cluster and I am able to open it from the browser. However, when I try to get data from the database, the application hits a network-related error after a while.
System.Data.SqlClient.SqlException (0x80131904): A network-related or
instance-specific error occurred while establishing a connection to
SQL Server
I went inside the pod and ran ls; I can see my application settings file, and when I cat it I can see the correct credentials, so I don't know what to do and I am not sure why it cannot connect to the database.
So finally I tried adding the SQL connection settings as environment variables on the pod, and then it started working. When I remove them, it stops connecting.
I then removed the environment variables that hold the SQL connection settings and checked the pod's logs.
They say it can't connect to the database: 'Empty' and server: 'Empty'.
I am not sure why it picks up empty values when the details are inside the appsettings.json file.
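For reference, this is roughly what the environment-variable workaround can look like under spec.template.spec in the Deployment above. It is only a sketch: the variable names are illustrative assumptions about how the application binds its configuration (ASP.NET Core's default configuration reads environment variables, and a double underscore in a variable name maps to a section separator for nested keys):
      containers:
      - name: xyzdemo-site
        image: xyz.azurecr.io/abc:31018
        ports:
        - containerPort: 443
        env:
        - name: Server                 # hypothetical key names; use Section__Key if the values are nested in appsettings.json
          value: "xyz.database.windows.net"
        - name: Database
          value: "pp"
        - name: User
          value: "ita"
        - name: Password               # in practice this belongs in a Secret (valueFrom: secretKeyRef)
          value: "password"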

Well, I do not see any config for your k8s application to connect to the database. Importantly, where is your database hosted? How can papi: web reach the database?
I also suspect your Service does not have the appropriate port mapping. From your service.yaml above, Service port 44328 is mapped to container port 443. What is 44328? What is listening on that port? Your application makes no mention of 44328 (refer to the Dockerfile).
I would revise your service.yaml to look something like this:
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default   # this is the default anyway
spec:
  selector:
    papi: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: xxxx   # where your web server is listening (from your Dockerfile this is 80, but it can be any valid TCP port)
  - name: https
    protocol: TCP
    port: 443
    targetPort: xxxx   # HTTPS for your web server (from your Dockerfile this is 443; again, it can be any TCP port)
Opening up a database server to the internet is not good practice; it is a big security threat. A good pattern is to have your web server talk to the database server via the internal DNS that k8s maintains (this assumes your database server is also a container, something like KubeDB; if not, your database server will have to be reachable via some sort of proxy that whitelists known hosts and only allows those, e.g. the Cloud SQL proxy in GCP).
Depending on how your database server is hosted, you will have to configure your DB to allow or whitelist your containerised application (the IP you get after applying service.yaml). Only then will your k8s app be able to reach the respective DB.
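To illustrate the internal-DNS pattern mentioned above (purely a sketch, and it assumes the database runs inside the cluster, which is not the case for the Azure SQL server in this question): a Service like the one below would give an in-cluster SQL Server pod a stable name such as mssql.default.svc.cluster.local, which the application could then use as its Server value.
apiVersion: v1
kind: Service
metadata:
  name: mssql            # hypothetical name; apps would connect to mssql.default.svc.cluster.local
  namespace: default
spec:
  selector:
    app: mssql           # assumes a SQL Server pod labelled app: mssql exists in the cluster
  ports:
  - name: tds
    protocol: TCP
    port: 1433           # default SQL Server port
    targetPort: 1433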

I suspect you need to allow connections in the Azure SQL firewall for this to work. Using the portal is the easiest way: you can allow all, or allow Azure services for starters (assuming your Kubernetes cluster is inside Azure), and narrow it down later (if this turns out to be the culprit).
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules

Related

Unable to access my application from outside kubernetes cluster

I have deployed a Kubernetes cluster on the Azure platform. My application is hosted as a Docker image in Azure and I deployed that image into my Kubernetes cluster. After deploying, the pods are running fine, but I am not able to access the application from outside (Postman/talentapi). The Nginx Ingress controller is installed inside the cluster, but I am still not able to access the application and I get a "no response" error. Our application uses 2 different ports (5000 and 7003): port 7003 is used to connect to the application inside Docker and port 5000 is used to connect from outside Docker. Here is my service.yaml file,
kind: Service
apiVersion: v1
metadata:
  name: kyc-service
  namespace: kyc
spec:
  #type: ClusterIP
  selector:
    app: kyc-app
  ports:
  - name: http
    protocol: TCP
    port: 5000
    targetPort: 5000
  - name: kycapp
    protocol: TCP
    port: 7003
    targetPort: 7003
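For reference, a minimal Ingress sketch that would route external traffic through the installed Nginx Ingress controller to this Service on port 5000. The resource name and hostname below are illustrative assumptions, not taken from the question:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kyc-ingress            # hypothetical name
  namespace: kyc
spec:
  ingressClassName: nginx      # must match the installed controller's IngressClass
  rules:
  - host: kyc.example.com      # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kyc-service
            port:
              number: 5000     # the Service port defined above
Only what the Ingress routes (port 5000 here) becomes reachable from outside; traffic on port 7003 stays internal to the cluster.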

Kafka connect - Failed to connect to localhost port 8083: Connection refused

I have an application that relies on a Kafka service.
With Kafka Connect, I'm getting an error when trying to curl localhost:8083 on the Linux VM that's running the Kubernetes pod for Kafka Connect.
curl -v localhost:8083 gives:
Rebuilt URL to: localhost:8083/
Trying 127.0.0.1...
connect to 127.0.0.1 port 8083 failed: Connection refused
Failed to connect to localhost port 8083: Connection refused
Closing connection 0
curl: (7) Failed to connect to localhost port 8083: Connection refused
kubectl get po -o wide for my kubernetes namespace gives:
When I check open ports using sudo lsof -i -P -n | grep LISTEN, I don't see 8083 listed. The Kafka Connect pod is running and there's nothing suspicious in its logs.
There's a Kubernetes manifest that I think was probably used to set up the Kafka Connect service; these are the relevant parts. I'd really appreciate any advice about how to figure out why I can't curl localhost:8083.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-connect
  namespace: my-namespace
spec:
  ...
  template:
    metadata:
      labels:
        app: connect
    spec:
      containers:
      - name: kafka-connect
        image: confluentinc/cp-kafka-connect:3.0.1
        ports:
        - containerPort: 8083
        env:
        - name: CONNECT_REST_PORT
          value: "8083"
        - name: CONNECT_REST_ADVERTISED_HOST_NAME
          value: "kafka-connect"
      volumes:
      - name: connect-plugins
        persistentVolumeClaim:
          claimName: pvc-connect-plugin
      - name: connect-helpers
        secret:
          secretName: my-kafka-connect-config
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-connect
  namespace: my-namespace
  labels:
    app: connect
spec:
  ports:
  - port: 8083
  selector:
    app: connect
You can't connect to a service running inside your cluster, from outside your cluster, without a little bit of tinkering.
You have three possible solutions:
Use a Service of type NodePort or LoadBalancer to make the service reachable outside the cluster (see the sketch at the end of this answer).
See the Services and kubectl expose documentation.
Be aware that, depending on your environment, this may expose the service to the internet.
Access using Proxy Verb: (see here)
This only works for HTTP/HTTPS. Use this if your service is not secure
enough to be exposed to the internet.
Access it from a pod running inside your cluster.
As you have noticed in the comments, you can curl from inside the pod. You can also do this from any other pod running in the same cluster. Pods can communicate with each other without any additional configuration.
Why can I not curl 8083 when I ssh onto the VM?
Pods/services are not reachable from outside the cluster if they are not exposed using the aforementioned methods (point 1 or 2).
Why isn't the port exposed on the host VM that has the pods?
It's not exposed on your VM, it's exposed inside your cluster.
I would strongly recommend going through Cluster Networking documentation to learn more.
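For option 1, a sketch of what a NodePort variant of the kafka-connect Service could look like. The name and nodePort value below are illustrative choices (nodePort must fall in the cluster's NodePort range, 30000-32767 by default), not something taken from the question:
apiVersion: v1
kind: Service
metadata:
  name: kafka-connect-nodeport   # hypothetical name, to avoid clashing with the existing Service
  namespace: my-namespace
spec:
  type: NodePort
  selector:
    app: connect
  ports:
  - port: 8083          # Service port inside the cluster
    targetPort: 8083    # container port of the Kafka Connect REST API
    nodePort: 30083     # port opened on every node
With that in place, curl <node-ip>:30083 from the VM should reach the REST API, whereas localhost:8083 will still be refused because nothing binds that port on the host itself.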

Websocket connection fails for internal communication within a Kubernetes container

I am using Kubernetes to deploy my React application. Due to the database I am using (RethinkDB), I have to initiate a WebSocket connection between my React application and a Node.js server that proxies to the database instance. The connection works as intended when I deploy the database instance, backend Node server and React application on my local machine. However, when I deploy the application in Kubernetes, I receive the error
WebSocket connection to 'ws://localhost:8015/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
within my React application.
For further debugging, I opened a terminal within the container and ran a curl command against the Service that connects to the database, and received no errors. I also ran the backend Node server to see whether it connects to the remote database and saw no issues there. Finally, I tested with wscat whether the backend server initiates the WebSocket as intended, and the WebSocket connection is working. The fact that the application runs well on my local machine leads me to believe that the issue with the React application connecting to the WebSocket could be caused by how Kubernetes handles WebSocket connections. Any clarification on the issue is gladly appreciated.
P.S. I have added the backend server code, the code within the React application that connects to the WebSocket, and the YAML files of my React + backend deployment. If any more files are required, please feel free to comment.
Backend Node server
const http = require('http');
var rethinkdbWebsocketServer = require('rethinkdb-websocket-server');
const httpServer = http.createServer();
rethinkdbWebsocketServer.listen({
  httpServer: httpServer,
  httpPath: '/',
  dbHost: remoteDB_IP,
  dbPort: 28015,
  unsafelyAllowAnyQuery: true
});
httpServer.listen(8015);
React code that connects to the WebSocket
ReactRethinkdb.DefaultSession.connect({
  host: 'localhost', // the websocket server
  port: process.env.REACT_APP_WEBSOCKET_PORT,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000, // when disconnected, millis to wait before reconnect
});
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
  labels:
    app: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
      - name: dashboard
        image: myrepor/dashbaord
        imagePullPolicy: Always
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
  - port: 3000
    targetPort: 3000
  type: LoadBalancer
So the Kubernetes idea is all about running each application in a separate pod and connecting them through Services.
This way you can deploy each layer separately without downtime - the Service routes to the old pod(s) until the new ones are up and running (and, even more importantly, you can scale them separately).
Kubernetes also does name resolution inside each cluster, so every Service gets an internal DNS name.
Thus I would recommend:
Separate your Node server into its own pod, and deploy a Service for it with your WS port (8015) - see the sketch below.
Run your React app in a separate pod with its own Service, and use the Node server's Service name as the endpoint for the WebSocket.
The reason is simple - localhost is not necessarily going to resolve to the Node server from within the pod.
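Following that recommendation, a sketch of what the Service for the Node.js proxy could look like. The name and label are illustrative assumptions; the React app would then be pointed at ws-proxy (or ws-proxy.default.svc.cluster.local) on port 8015 instead of localhost:
apiVersion: v1
kind: Service
metadata:
  name: ws-proxy              # hypothetical name for the Node.js proxy's Service
spec:
  selector:
    app: ws-proxy             # assumes the Node server pod carries this label
  ports:
  - name: websocket
    protocol: TCP
    port: 8015                # port the Service exposes inside the cluster
    targetPort: 8015          # port the Node server listens on (httpServer.listen(8015))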

Kubernetes service load balancer "No route to host" error

I'm trying to expose a pod using a LoadBalancer Service. The Service was created successfully and an external IP was assigned. When I tried accessing the external IP in the browser, the site does not load and I get ERR_CONNECTION_TIMED_OUT. Please see the YAML below:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: service-api
  name: service-api
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30868
    port: 80
    protocol: TCP
    targetPort: 9080
    name: http
  selector:
    name: service-api
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I also tried creating the Service using the Kubernetes CLI; still no luck.
It looks like I had faulty DNS on my k8s cluster, and to resolve the issue I had to restart the cluster. Before restarting the cluster, you can also delete all the pods in kube-system to refresh the DNS pods; if it is still not working, I suggest restarting the cluster.

Kubernetes not resolving node service

I'm having issues with internal DNS/service resolution within Kubernetes and I can't seem to track the issue down. I have an api-gateway pod running Kong, which calls other services by their internal service name, i.e. srv-name.staging.svc.cluster.local, and this was working fine up until recently. I then attempted to deploy 3 more services into two namespaces: staging and production.
The first service works as expected when calling booking-service.staging.svc.cluster.local; however, the same code doesn't seem to work in the production namespace, and the other two services don't work in either namespace.
The behaviour I'm getting is a timeout. If I curl these services from my gateway pod, they all time out, apart from the first service deployed (booking-service.staging.svc.cluster.local). When I call these services from another container within the same pod, they do work as expected.
I have Node services set up for each service I wish to expose to the client side.
Here's an example Kubernetes deployment:
---
# API
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{SRV_NAME}}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{SRV_NAME}}
    spec:
      containers:
      - name: booking-api
        image: microhq/micro:kubernetes
        args:
        - "api"
        - "--handler=rpc"
        env:
        - name: PORT
          value: "8080"
        - name: ENV
          value: {{ENV}}
        - name: MICRO_REGISTRY
          value: "kubernetes"
        ports:
        - containerPort: 8080
      - name: {{SRV_NAME}}
        image: eu.gcr.io/{{PROJECT_NAME}}/{{SRV_NAME}}:latest
        imagePullPolicy: Always
        command: [
          "./service",
          "--selector=static"
        ]
        env:
        - name: MICRO_REGISTRY
          value: "kubernetes"
        - name: ENV
          value: {{ENV}}
        - name: DB_HOST
          value: {{DB_HOST}}
        - name: VERSION
          value: "{{VERSION}}"
        - name: MICRO_SERVER_ADDRESS
          value: ":50051"
        ports:
        - containerPort: 50051
          name: srv-port
---
apiVersion: v1
kind: Service
metadata:
  name: booking-service
spec:
  ports:
  - name: api-http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: booking-api
I'm using go-micro (https://github.com/micro/go-micro) with the Kubernetes pre-configuration, which again works absolutely fine in one case but not in the others, which leads me to believe it's not code related. It also works fine locally.
When I do nslookup from another pod, it resolves the name and finds the cluster IP for the internal Node service as expected. When I attempt to cURL that IP address, I get the same timeout behavior.
I'm using Kubernetes 1.8 on Google Cloud.
I don't understand why you think that it is an issue with the internal DNS/service resolution within Kubernetes since when you perform the DNS lookup it works, but if you query that IP you get a connection timeout.
If you curl these services from outside the pod, they all time out, apart from the first service deployed, no matter whether you use the IP or the domain name.
When you call these services from another container within the same pod, they do work as expected.
It seems to be an issue with the connectivity between pods rather than a DNS issue, therefore I would focus the troubleshooting in that direction, but correct me if I am wrong.
Can you perform classical network troubleshooting (ping, telnet, traceroute) from a pod toward the IP returned by the DNS lookup, and from one of the containers that is timing out toward one of the other pods, and update the question with the results?
