gRPC in Kubernetes cannot resolve service DNS name - Node.js

I am using Node.js and trying to run gRPC in a Kubernetes cluster. Locally, without Kubernetes, it works fine. The server side listens on '0.0.0.0:80' and the client tries to connect via http://recommended-upgrades-qa-int. In Kubernetes I get the following error:
ERROR failed to get via grpc the getRecommended Error: 14 UNAVAILABLE: Connect Failed endpoint:http://<K8S_SERVICE_NAME>
ERROR: Recommendations fetch error: Error: 14 UNAVAILABLE: Connect Failed severity=error, message=failed to get via grpc the getRecommended Error: 14 UNAVAILABLE: Connect Failed endpoint:http://<K8S_SERVICE_NAME>
server side:
const connectionHost = this.listenHost + ':' + this.listenPort;
server.bind(connectionHost, grpc.ServerCredentials.createInsecure());
logger.info(`Server running at ${connectionHost}`);
server.start();
client side:
let RecommendedService;
try {
    RecommendedService = grpc.load(__dirname + '/../../node_modules/#zerto/lib-service-clients/Output/sources/recommendedClient.proto').RecommendedService;
} catch (error) {
    console.log(error);
}
this.client = RecommendedService && new RecommendedService(grpcAddress, grpc.credentials.createInsecure());
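For reference, gRPC channel targets are plain host:port strings with no URL scheme, so a sketch of how grpcAddress could be built from the manifests shown below might look like the following (the env var name RECOMANDED_SERVICE and port 80 are taken from the client Deployment; the scheme-stripping helper is illustrative, not part of the original code):

// Minimal sketch: build a gRPC target of the form "host:port" (no http:// scheme).
// RECOMANDED_SERVICE is the env var defined in the client Deployment manifest below.
const rawService = process.env.RECOMANDED_SERVICE || 'recommended-upgrades-qa-int';
const serviceHost = rawService.replace(/^https?:\/\//, ''); // strip any accidental scheme
const grpcAddress = serviceHost + ':80';                    // Service port from the manifest

this.client = new RecommendedService(grpcAddress, grpc.credentials.createInsecure());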
Manifest files:
server side
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-side-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: server-side-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app: server-side-deployment
    spec:
      containers:
        - name: server-side-deployment
          image: (DOCKER_IMAGE_PATH)
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: recommended-upgrades-qa-int
  namespace: default
spec:
  selector:
    app: server-side-deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
client side
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-side-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: client-side-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app: client-side-deployment
    spec:
      containers:
        - name: client-side-deployment
          image: (DOCKER_IMAGE_PATH)
          imagePullPolicy: Always
          env:
            - name: RECOMANDED_SERVICE
              value: http://recommended-upgrades-qa-int
          ports:
            - containerPort: 80

From the docs:
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Your issue here is probably using <service name> while your service is in another namespace. Try using:
<service name>.<service namespace>.svc.cluster.local
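As an illustration with the manifests above (Service recommended-upgrades-qa-int in the default namespace), the fully qualified name would be recommended-upgrades-qa-int.default.svc.cluster.local. A quick, hypothetical way to confirm that the client pod can resolve it from Node (not part of the original post) is:

// Hypothetical sanity check: resolve the Service's cluster DNS name from inside the client pod.
const dns = require('dns');
dns.lookup('recommended-upgrades-qa-int.default.svc.cluster.local', (err, address) => {
  if (err) {
    console.error('DNS lookup failed:', err.code);
  } else {
    console.log('Service resolves to cluster IP', address);
  }
});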

It seems we figured it out. First, the URL must contain port 80; also, there was an inner uncaught exception in the server service which may have caused it not to work.
Thank you all

Related

Shiny proxy on AKS behind an Azure Application Gateway

I’ve been using ShinyProxy on AKS for the past couple of months and it’s been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to use it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments happen with no problems whatsoever, but upon trying to access the application I always get a “404 Not Found”; also, the health probes always throw "no Results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
        - name: {{lvappname}}-proxy
          image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
        - name: kube-proxy-sidecar
          image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
    - host: {{lvappname}}-{{lvstage}}.{{domain}}
      http:
        paths:
          - path: /
            backend:
              service:
                name: {{lvappname}}-proxy
                port:
                  number: 8080
            pathType: Prefix
  tls:
    - hosts:
        - {{lvappname}}-{{lvstage}}.{{domain}}
      secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
and here is my shinyproxy configuration file
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
    - id: {{lvappname}}
      display-name: {{lvappname}} application
      description: Application for {{lvappname}}
      container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
      container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any help would be deeply appreciated.
Thank you all in advance

ReplyError: ERR value is not an integer or out of range in my Kubernetes cluster

I have 2 pods and services in my Kubernetes cluster: Redis and my Node.js application. I also use Skaffold for the dev environment.
I want to connect to Redis from my Node.js application. I set an environment variable in my nodeJs_app.yaml file for the connection to Redis.
My nodeJs_app.yaml file looks like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodeJsApp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: no
  template:
    metadata:
      labels:
        app: nodeJsApp
    spec:
      containers:
        - name: nodeJsApp
          image: nodeJsApp
          env:
            - name: REDISCONNECTIONDEV
              value: "redis://redis-srv:6379/"
---
apiVersion: v1
kind: Service
metadata:
  name: nodeJsApp
spec:
  type: NodePort
  selector:
    app: nodeJsApp
  ports:
    - name: nodeJsApp
      protocol: TCP
      port: 3000
      nodePort: 30001
Also, my Redis.yaml file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis-srv
spec:
  selector:
    app: redis
  ports:
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: 6379
I checked with kubectl get pods and kubectl get services, and all services and ports are running correctly.
In my NodeJs_app/index.js file I tried to connect to the Redis service with:
let REDIS_URL = process.env.REDISCONNECTIONDEV || production_redis_url;
let redisQueue = new Queue('my queue', REDIS_URL);
redisQueue.on('global:completed', (jobId, result) => {
    // console.log(`Job completed with result ${result}`);
});
But this returns an error like this:
/app/node_modules/redis-parser/lib/parser.js:179
return new ReplyError(string)
^
ReplyError: ERR value is not an integer or out of range
at parseError (/app/node_modules/redis-parser/lib/parser.js:179:12)
at parseType (/app/node_modules/redis-parser/lib/parser.js:302:14) {
command: { name: 'select', args: [ 'NaN' ] }
}
Also, I'm new to Redis. Why is this problem occurring? How can I solve it?
The error occurs because you are trying to connect to a Redis database with NaN instead of a number (the number selects the database, 0-15 by default). Check your code, because somewhere you pass NaN instead of a number while connecting to the Redis database.
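As an illustration (a sketch, assuming the queue library parses the database index from the path of the connection URL, which is what the select command with args ['NaN'] in the stack trace points to), the path must either be omitted or be a valid database index:

// Hypothetical connection strings; redis-srv:6379 comes from the Service manifest above.
const BAD_URL  = 'redis://redis-srv:6379/mydb'; // non-numeric path -> db index NaN -> "SELECT NaN"
const GOOD_URL = 'redis://redis-srv:6379/0';    // explicit database 0
const ALSO_OK  = 'redis://redis-srv:6379';      // no path, defaults to database 0

let redisQueue = new Queue('my queue', GOOD_URL);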

ZMQ sockets do not work as expected on Kubernetes

Short summary: my ZMQ sockets do not receive (and probably also do not send) messages when I deploy the code on Kubernetes.
I have an application that involves multiple clients and servers. It is developed in Node and uses ZeroMQ as the communication layer. It works on my local machine, it works on Docker, and I am trying to deploy the application using Kubernetes.
When deploying the application, the pods, the deployments and the Kubernetes Services launch. Apparently everything goes OK, but the clients send an initial message that never arrives at the server. The deployments are in the same namespace and I use Flannel as the CNI. To the best of my knowledge, the cluster was properly initialized, but the messages never arrive.
I read this question about ZMQ sockets having problems binding on Kubernetes. I have tried playing with the ZMQ_CONNECT_TIMEOUT parameter but it does not do anything. Also, unlike the question I cite, my messages never arrive.
I could provide some code, but there is a lot of it and I don't think the application is the problem. I think I am missing something in the Kubernetes configuration since it's my first time using it. Let me know if you need more information.
Edit 1. 12/01/2021
As @anemyte suggests, I will try to provide a simplified version of the code:
Client Side:
initiate () {
  return new Promise(resolve => {
    this.N_INCOMING = 0;
    this.N_OUTGOING = 0;
    this.rrCounter = 0;
    this.PULL_SOCKET.bind("tcp://*:" + this.MY_PORT, error => {
      utils.handleError(error);
      this.PUB_SOCKET.bind("tcp://*:" + (this.MY_PORT + 1), error => {
        utils.handleError(error);
        this.SUB_SOCKET.subscribe("");
        this.SUB_SOCKET.connect(this.SERVER + ":" + (this.SERVER_PORT + 1),
          error => {utils.handleError(error)});
        this.PULL_SOCKET.on("message", (m) => this.handlePullSocket(m));
        this.SUB_SOCKET.on("message", (m) => this.handleSubSocket(m));
        this.SERVER_PUSH_SOCKET = zmq.socket("push");
        this.SERVER_PUSH_SOCKET.connect(this.SERVER + ":" + this.SERVER_PORT,
          error => {utils.handleError(error)});
        this.sendHello();
        resolve();
      });
    });
  });
}
Server side:
initiate () {
  return new Promise(resolve => {
    this.PULL_SOCKET.bind(this.MY_IP + ":" + this.MY_PORT, err => {
      if (err) {
        console.log(err);
        process.exit(0);
      }
      this.PUB_SOCKET.bind(this.MY_IP + ":" + (this.MY_PORT + 1), err => {
        if (err) {
          console.log(err);
          process.exit(0);
        }
        this.PULL_SOCKET.on("message", (m) => this.handlePullSocket(m));
        resolve();
      });
    });
  });
}
The client initiates the connection by sending the Hello message. The server's listener function handlePullSocket should handle those messages.
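For context, the client's SERVER and SERVER_PORT values come from the SERVER_ADDRESS and SERVER_PUERTO environment variables defined in the client deployment shown in Edit 2, so the push socket's connect target is assembled roughly like the sketch below (an illustration based on those manifests, not the original source):

// Sketch: the connect target ends up as "tcp://servidor:7777", which relies on
// cluster DNS resolving the "servidor" Service name.
this.SERVER = process.env.SERVER_ADDRESS;                   // e.g. "tcp://servidor"
this.SERVER_PORT = parseInt(process.env.SERVER_PUERTO, 10); // e.g. 7777

this.SERVER_PUSH_SOCKET = zmq.socket("push");
this.SERVER_PUSH_SOCKET.connect(this.SERVER + ":" + this.SERVER_PORT,
  error => { utils.handleError(error); });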
Edit 2. 12/01/2021
As requested, I am adding the deployment/service configurations.
Client-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose-resolved.yml
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: c1
  name: c1
  namespace: fishtrace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: c1
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose-resolved.yml
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        app: c1
    spec:
      containers:
        - env:
            - name: NODO_ADDRESS
              value: 0.0.0.0
            - name: NODO_PUERTO
              value: "9999"
            - name: NODO_PUERTO_CADENA
              value: "8888"
            - name: SERVER_ADDRESS
              value: tcp://servidor
            - name: SERVER_PUERTO
              value: "7777"
          image: registrogeminis.com/docker_c1_rpi:latest
          name: c1
          ports:
            - containerPort: 8888
            - containerPort: 9999
          resources: {}
          volumeMounts:
            - mountPath: /app/vol
              name: c1-volume
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: myregistrykey
      volumes:
        - name: c1-volume
          persistentVolumeClaim:
            claimName: c1-volume
status: {}
Client-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: c1
  name: c1
spec:
  ports:
    - name: "9999"
      port: 9999
      targetPort: 9999
    - name: "8888"
      port: 8888
      targetPort: 8888
  selector:
    app: c1
  type: ClusterIP
Server-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert -f docker-compose-resolved.yml
    kompose.version: 1.19.0 (f63a961c)
  creationTimestamp: null
  labels:
    io.kompose.service: servidor
  name: servidor
  namespace: fishtrace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: servidor
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert -f docker-compose-resolved.yml
        kompose.version: 1.19.0 (f63a961c)
      creationTimestamp: null
      labels:
        app: servidor
    spec:
      containers:
        - env:
            - name: SERVER_ADDRESS
              value: tcp://*
            - name: SERVER_PUERTO
              value: "7777"
          image: registrogeminis.com/docker_servidor_rpi:latest
          name: servidor
          ports:
            - containerPort: 7777
            - containerPort: 7778
          resources: {}
          volumeMounts:
            - mountPath: /app/vol
              name: servidor-volume
          imagePullPolicy: Always
      restartPolicy: Always
      imagePullSecrets:
        - name: myregistrykey
      volumes:
        - name: servidor-volume
          persistentVolumeClaim:
            claimName: servidor-volume
status: {}
Server-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: ./kompose convert
    kompose.version: 1.22.0 (955b78124)
  creationTimestamp: null
  labels:
    io.kompose.service: servidor
  name: servidor
spec:
  ports:
    - name: "7777"
      port: 7777
      targetPort: 7777
  selector:
    app: servidor
  type: ClusterIP
In the end it was a DNS problem. It was always a DNS problem. Thanks to @Matt for pointing out the problem.
In the official Kubernetes DNS doc they state that there is a known issue with systems that use /etc/resolv.conf as a link to the real configuration file, /run/systemd/resolve/resolv.conf in my case. It is a well-known problem, and the recommended solution is to update the kubelet's configuration to point to /run/systemd/resolve/resolv.conf.
To do so, I added the line resolvConf: /run/systemd/resolve/resolv.conf in /var/lib/kubelet/config.yaml. I also edited /etc/kubernetes/kubelet.conf just to be sure. Finally, you are supposed to reload the service by executing sudo systemctl daemon-reload && sudo systemctl restart kubelet to propagate the changes.
However, I had already done that before asking on SE, and it did not seem to work. I had to restart the whole cluster to make the changes take effect. Then the DNS worked perfectly and the ZMQ sockets behaved as expected.
Update 31/04/2021: I discovered that you have to forcefully restart the coredns deployment to actually propagate the changes. So in the end kubectl rollout restart deployment coredns -n kube-system after restarting the kubelet service was enough.

How can I run a docker container from a docker image from docker hub in skaffold?

Let's say I want to run a container from an image from Docker Hub, for example mosquitto; I'd execute docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto.
I tried to pull the image from gcr.io (deployment.yaml) as done here:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
    - targetPort: 1883
      port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
        - name: mqtt-broker
          image: gcr.io/vu-james-celli/eclipse-mosquitto # https://hub.docker.com/_/eclipse-mosquitto
          ports:
            - containerPort: 1883
skaffold.yaml:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
    - <other image builds>
deploy:
  kubectl:
    manifests:
      - mqtt-broker/*
portForward:
  - resourceType: deployment
    resourceName: mqtt-broker
    port: 1883
    localPort: 1883
  - <other port forwardings>
...
However, when I run skaffold dev --port-forward I get an error in the output:
- deployment/mqtt-broker: container mqtt-broker is waiting to start: gcr.io/vu-james-celli/eclipse-mosquitto can't be pulled
How do I have to configure skaffold.yaml (schema version v2beta10) when using kubectl to run the mosquitto container as part of a deployment?
You could create a pod with a single container referencing eclipse-mosquitto, and then ensure that pod is referenced from your skaffold.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: mqtt
spec:
  containers:
    - name: mqtt
      image: eclipse-mosquitto
      ports:
        - containerPort: 1883
          name: mqtt
        - containerPort: 9001
          name: websockets
You could turn this into a deployment or replicaset with services, etc.
First, pull the image from docker hub onto the local machine: docker pull eclipse-mosquitto
Second, reference the image in mqtt-broker/deployment.yaml, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
    - targetPort: 1883
      port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
        - name: mqtt-broker
          image: eclipse-mosquitto
          ports:
            - containerPort: 1883
Third, reference the deployment.yaml in skaffold.yaml, e.g.:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
    - <services-under-development>
deploy:
  kubectl:
    manifests:
      - mqtt-broker/deployment.yaml
portForward:
  - resourceType: deployment
    resourceName: mqtt-broker
    port: 1883
    localPort: 1883
  - <port-forwarding-for-services-under-development>

Why am I getting a SQSError: 404 when trying to connect using Boto to an ElasticMQ service that is behind Ambassador?

I have an elasticmq docker container which is deployed as a service using Kubernetes. Furthermore, this service is exposed to external users by way of Ambassador.
Here is the yaml file for the service.
---
kind: Service
apiVersion: v1
metadata:
  name: elasticmq
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: elasticmq
      prefix: /
      host: elasticmq.localhost.com
      service: elasticmq:9324
spec:
  selector:
    app: elasticmq
  ports:
    - port: 9324
      protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elasticmq
  labels:
    app: elasticmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticmq
  template:
    metadata:
      labels:
        app: elasticmq
    spec:
      containers:
        - name: elasticmq
          image: elasticmq
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9324
          livenessProbe:
            exec:
              command:
                - /bin/bash
                - -c
                - nc -zv localhost 9324 -w 1
            initialDelaySeconds: 60
            periodSeconds: 5
          volumeMounts:
            - name: elastimq-conf-volume
              mountPath: /elasticmq/elasticmq.conf
      volumes:
        - name: elastimq-conf-volume
          hostPath:
            path: /path/elasticmq.conf
Now I can check that the elasticmq container is healthy and Ambassador worked by doing a curl:
$ curl elasticmq.localhost.com?Action=ListQueues&Version=2012-11-05
[1] 10355
<ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<ListQueuesResult>
</ListQueuesResult>
<ResponseMetadata>
<RequestId>00000000-0000-0000-0000-000000000000</RequestId>
</ResponseMetadata>
</ListQueuesResponse>[1]+ Done
On the other hand, when I try to do the same thing using Boto3, I get a SQSError: 404 not found.
This is my Python script:
import boto.sqs.connection

conn = boto.sqs.connection
c = conn.SQSConnection(proxy_port=80, proxy='elasticmq.localhost.com', is_secure=False,
                       aws_access_key_id='x', aws_secret_access_key='x')
c.get_all_queues('')
I thought it had to do with the outside host specified in elasticmq.conf, so I changed that to this:
include classpath("application.conf")

// What is the outside visible address of this ElasticMQ node (used by rest-sqs)
node-address {
  protocol = http
  host = elasticmq.localhost.com
  port = 80
  context-path = ""
}

rest-sqs {
  enabled = true
  bind-port = 9324
  bind-hostname = "0.0.0.0"
  // Possible values: relaxed, strict
  sqs-limits = relaxed
}
I thought changing the elasticmq conf would work, but it doesn't. How can I get this to work?
