For my setup, I'm working with a third party MQTT VerneMQ broker hosted in AWS. I have been given username/password credentials to connect over secure MQTT (port 8883) using a specific clientId. My goal (though irrelevant to the issue at hand) is merely to subscribe to a topic and redirect traffic from the topic to Google PubSub.
I wrote a simple Node.js program to make the connection, and it works beautifully when run locally through ts-node:
import { connect } from 'mqtt';

const client = connect(`mqtts://${process.env.MQTT_HOST}`, {
  port: parseInt(process.env.MQTT_PORT, 10),
  clientId: process.env.MQTT_CLIENT_ID,
  username: process.env.MQTT_USERNAME,
  password: process.env.MQTT_PASSWORD,
  rejectUnauthorized: false,
});

client.on('error', handleError);

client.on('connect', (p) => {
  console.log('connect', JSON.stringify(p));
  client.subscribe({ [mqttTopic]: { qos: 0 } });
});

client.on('message', (topic, msg) => onMessageReceived(msg));
I then proceeded to Dockerize it
FROM node:lts-alpine
RUN apk update
WORKDIR /app
COPY . .
RUN npm i
EXPOSE 8883
CMD ["npm", "start"]
and that runs perfectly fine locally through docker run.
The trouble started when I loaded the image into Google Compute Engine using their "Deploy a container image to this VM instance" option, which uses a Container-Optimized OS image. When I checked the logs, the client was sending its connect packet, but the connection would always immediately close.
I thought this might be an issue with how I did the deployment, so to verify, I spun up a standard Debian VM, installed Docker, and ran my image just like I did locally, and it worked just fine! So it's not that Docker is failing remotely.
I considered that perhaps the deployment through Compute Engine was just weird, but it was simpler than standing up a Kubernetes cluster when I just needed the single image. Given my issues, I went ahead and spent the time to stand everything up in GKE. The logs reported the exact same messages they reported when the image was deployed through Compute Engine. Here's the YAML:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test-mqtt
  labels:
    app: test-mqtt
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-mqtt
  serviceName: test-mqtt-service
  template:
    metadata:
      labels:
        app: test-mqtt
    spec:
      containers:
      - name: mqtt
        image: us-central1-docker.pkg.dev/{GCP_PROJECT}/docker/test-mqtt
        ports:
        - name: mqtt-ssl
          containerPort: 8883
---
apiVersion: v1
kind: Service
metadata:
  name: test-mqtt-service
  labels:
    app: test-mqtt
spec:
  ports:
  - name: mqtt-ssl
    port: 8883
  selector:
    app: test-mqtt
  type: LoadBalancer
After all this, I thought for sure this was a port issue, so I checked and double-checked the firewalls, both for the vNIC and internally (as Google suggests might be the case; that changed nothing). I can reach out over the port and run applications over the port, and it still fails even when I open all the ports to the world. To triple-check the ports, I changed the code to reach out to https://test.mosquitto.org and verified that I could still reach their broker on port 8883. So it can't be the port.
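One more check that may help anyone reproducing this: confirm raw TLS reachability to the broker from inside the running container itself. This is only a sketch; the pod name and broker host are placeholders, and it assumes the openssl CLI can be added to the Alpine-based image.

# open a shell inside the pod running the client (pod name is hypothetical)
kubectl exec -it test-mqtt-0 -- sh

# inside the container: install the openssl CLI and attempt a TLS handshake with the broker
apk add --no-cache openssl
openssl s_client -connect <broker-host>:8883 -servername <broker-host> </dev/null

If the handshake completes here but the MQTT client still gets disconnected immediately, the problem is above the TCP/TLS layer (credentials, clientId, or broker-side policy) rather than the network path.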
I've come to the conclusion that some combination of OS (it worked in Debian with a manual deploy) and broker (it worked for the Mosquitto Test broker) is making this not work, but I feel like I've exhausted all possibilities.
What more can I check to make this work? I feel like it has to be something simple that I'm missing, but I've spent days on this to no avail.
Related
I'm trying to deploy a gRPC server with kubernetes, and connect to it outside the cluster.
The relevant part of the server:
function main() {
  var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;
  var server = new grpc.Server();
  server.addService(hello_proto.Greeter.service, {sayHello: sayHello});
  const url = '0.0.0.0:50051';
  server.bindAsync(url, grpc.ServerCredentials.createInsecure(), () => {
    server.start();
    console.log("Started server! on " + url);
  });
}

function sayHello(call, callback) {
  console.log('Hello request');
  callback(null, {message: 'Hello ' + call.request.name + ' from ' + require('os').hostname()});
}
And here is the relevant part of the client:
function main() {
  var target = '0.0.0.0:50051';
  let pkg = grpc.loadPackageDefinition(packageDefinition);
  let Greeter = pkg.helloworld["Greeter"];
  var client = new Greeter(target, grpc.credentials.createInsecure());
  var user = "client";
  client.sayHello({name: user}, function(err, response) {
    console.log('Greeting:', response.message);
  });
}
When I run them manually with Node.js, as well as when I run the server in a Docker container (the client is still run with node, without a container), it works just fine.
The Dockerfile, which I run with docker run -it -p 50051:50051 helloapp:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
CMD npm start
However, when I deploy the server with Kubernetes (again, the client isn't run within a container), I'm not able to connect.
The yaml file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  strategy: {}
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
      - image: isolatedsushi/helloapp
        name: helloapp
        ports:
        - containerPort: 50051
          name: helloapp
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  selector:
    app: helloapp
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
The deployment and the service start up just fine
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloservice ClusterIP 10.105.11.22 <none> 50051/TCP 17s
kubectl get pods
NAME READY STATUS RESTARTS AGE
helloapp-dbdfffb-brvdn 1/1 Running 0 45s
But when I run the client it can't reach the server.
Any ideas what I'm doing wrong?
As mentioned in comments
ServiceTypes
If you have exposed your service as ClusterIP, it's visible only internally in the cluster; if you want to expose your service externally, you have to use either NodePort or LoadBalancer (a sketch follows below the list of types).
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, one that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Related documentation about that.
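As a minimal sketch (not your exact manifest), the helloservice above could be exposed outside the cluster by setting a type on the Service; the nodePort value here is just an example:

apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  # use type: LoadBalancer instead if you're running on a cloud provider
  type: NodePort
  selector:
    app: helloapp
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
    nodePort: 30051   # optional; must fall in the default 30000-32767 range

The gRPC client would then use <NodeIP>:30051 (or the load balancer's address) as its target instead of 0.0.0.0:50051.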
Minikube
With minikube you can achieve that with the minikube service command.
There is documentation about minikube service and there is an example.
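For example, assuming the NodePort Service sketched above, this prints a URL the gRPC client can use as its target:

# prints something like http://<minikube-ip>:<node-port>
minikube service helloservice --url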
grpc http/https
As mentioned here by #murgatroid99
The gRPC library does not recognize the https:// scheme for addresses, so that target name will cause it to try to resolve the wrong name. You should instead use grpc-server-xxx.com:9090 or dns:grpc-server-xxx.com:9090 or dns:///grpc-server-xxx.com:9090. More detailed information about how gRPC interprets channel target names can be found in this documentation page.
Since it does not recognize the https:// scheme, I assume the same applies to http://, so that's not possible.
kubectl port-forward
Additionally as #IsolatedSushi mentioned
It also works when I portforward with the command kubectl -n hellospace port-forward svc/helloservice 8080:50051
As mentioned here
Kubectl port-forward allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
There is an example in the documentation.
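A quick sketch of that flow, reusing the command quoted above (the hellospace namespace comes from that comment):

# forward a local port to the Service inside the cluster
kubectl -n hellospace port-forward svc/helloservice 8080:50051
# the gRPC client can then use localhost:8080 as its target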
I am using Kubernetes to deploy my React application. Due to the database I am using (RethinkDB), I have to initiate a WebSocket connection between my React application and a Node.js server that proxies to the database instance. The connection works as intended when I deploy the database instance, backend Node server, and the React application on my local machine. However, when I deploy the application in Kubernetes I receive the error
WebSocket connection to 'ws://localhost:8015/' failed: Error in connection establishment: net::ERR_CONNECTION_REFUSED
within my react application.
For further debugging, I opened a terminal within the container and ran a curl command against the Service that connects to the database, and received no errors. I also ran the backend Node server to see whether it connects to the remote database and saw no issues there. Finally, I tested whether the backend server initiates the WebSocket as intended using wscat, and the WebSocket connection is working. The fact that the application runs well on my local machine leads me to believe that the issue with the React application connecting to the WebSocket could be caused by how Kubernetes handles WebSocket connections. Any clarification on the issue is gladly appreciated.
P.S.
I have added the backend server code, the code within the React application that connects to the WebSocket, and the YAML files of my React + backend deployment. If any more files are required, please feel free to comment.
backend node server
const http = require('http');
var rethinkdbWebsocketServer = require('rethinkdb-websocket-server');

const httpServer = http.createServer();
rethinkdbWebsocketServer.listen({
  httpServer: httpServer,
  httpPath: '/',
  dbHost: remoteDB_IP,
  dbPort: 28015,
  unsafelyAllowAnyQuery: true
});
httpServer.listen(8015);
React code that connects to the Websocket
ReactRethinkdb.DefaultSession.connect({
  host: 'localhost', // the websocket server
  port: process.env.REACT_APP_WEBSOCKET_PORT,
  path: '/',
  secure: false,
  autoReconnectDelayMs: 2000, // when disconnected, millis to wait before reconnect
});
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dashboard
  labels:
    app: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
      - name: dashboard
        image: myrepor/dashbaord
        imagePullPolicy: Always
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
spec:
  selector:
    app: dashboard
  ports:
  - port: 3000
    targetPort: 3000
  type: LoadBalancer
The Kubernetes idea is all about running each application in a separate pod and connecting them through Services. This way you can deploy each layer separately without downtime (the Service routes to the old pod(s) until the new ones are up and running) and, even more importantly, scale each layer separately.
Kubernetes does some kind of name resolution inside each cluster; I'm not exactly sure whether it is a full DNS or not.
Thus I would recommend:
separate your Node server into its own pod, and deploy a Service for it on your WS port (8015);
run your React app in a separate pod with its own Service, and use the Node server's Service name as the endpoint for the WebSocket (a sketch follows below).
The reason is simple: I'm not even sure that localhost is going to be resolved correctly from within the pod.
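A minimal sketch of that setup, with hypothetical names (ws-proxy is assumed to be the name/label given to the Node WebSocket proxy's Deployment; it is not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: ws-proxy
spec:
  selector:
    app: ws-proxy        # must match the labels on the proxy's pods
  ports:
  - port: 8015           # the port rethinkdb-websocket-server listens on
    targetPort: 8015

The dashboard would then use ws-proxy (or ws-proxy.<namespace>.svc.cluster.local) as its WebSocket host instead of localhost.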
Here is my Dockerfile
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2-alpine3.8 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
COPY ./xyz/publish .
ENV ASPNETCORE_URLS=https://+:443;http://+:80
ENTRYPOINT ["dotnet","abc/xyz.dll"]
Here is my Deployment.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyzdemo
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      papi: web
  template:
    metadata:
      labels:
        papi: web
    spec:
      containers:
      - name: xyzdemo-site
        image: xyz.azurecr.io/abc:31018
        ports:
        - containerPort: 443
      imagePullSecrets:
      - name: secret
---
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default
spec:
  type: LoadBalancer
  selector:
    papi: web
  ports:
  - port: 44328
    targetPort: 443
Here is my appsettings file
"Server": "xyz.database.windows.net",
"Database": "pp",
"User": "ita",
"Password": "password",
Using all of these I deployed the application into the k8s cluster and am able to open the application from the browser; however, when I try to get the info from the database, the application gets a network-related error after a while.
System.Data.SqlClient.SqlException (0x80131904): A network-related or
instance-specific error occurred while establishing a connection to
SQL Server
I went inside the pod and ran ls; I can see my application settings file, and when I cat it I can see the correct credentials. I don't know what to do and am not sure why it is not able to connect to the database.
So finally I tried adding the SQL connection settings as environment variables on the pod, and then it started working. When I remove them, it doesn't connect.
With the env variables that hold the SQL connection removed, the pod's log says it can't connect to the database: 'Empty' and server: 'Empty'.
I'm not sure why it ends up empty when the details are inside the appsettings.json file.
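For reference, a sketch of the env-variable workaround described above, as a fragment of the Deployment's pod spec. The key names are hypothetical and depend on how the app binds its configuration (ASP.NET Core maps a double underscore in an environment variable name to a nested configuration key):

      containers:
      - name: xyzdemo-site
        image: xyz.azurecr.io/abc:31018
        env:
        - name: ConnectionSettings__Server      # hypothetical key names
          value: "xyz.database.windows.net"
        - name: ConnectionSettings__Database
          value: "pp"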
Well, I do not see what the configuration is for your k8s application to connect to the database. Importantly, where is your database hosted? How can papi: web reach the database?
I also suspect your Service does not have appropriate port redirection. From your service.yaml above, the Service's port 44328 is mapped to 443 inside the pod. What is 44328? What is listening on that port? Your application makes no mention of 44328 (refer to the Dockerfile).
I would revise your service.yaml to look something like this:
apiVersion: v1
kind: Service
metadata:
  name: xyzdemo-entrypoint
  namespace: default   # this is inferred anyway
spec:
  selector:
    papi: web
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: xxxx   # where your web server is listening (from your Dockerfile this is 80, but it can be any valid TCP port)
  - name: https
    protocol: TCP
    port: 443
    targetPort: xxxx   # https for your web server (from your Dockerfile this is 443; again, it can be any TCP port)
Opening up the database server to the internet is not good practice; it's a big security threat. A good pattern is to have your web server communicate with the database server via the internal DNS that Kubernetes maintains (this assumes your database server is also containerised, for example with KubeDB). If not, your database server will have to be reachable via some sort of proxy that whitelists and only allows known hosts, e.g. the Cloud SQL proxy in GCP.
Depending on how your database server is hosted, you'll have to configure it to allow or whitelist your containerised application (the IP you get after applying service.yaml). Only then will your Kubernetes app be able to reach the database.
I suspect you need to allow connections through the Azure SQL firewall for this to work. Using the portal is the easiest way. You can allow all, or allow Azure services, for starters (assuming your Kubernetes cluster is inside Azure), and narrow it down later if this turns out to be the culprit.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-firewall-configure#use-the-azure-portal-to-manage-server-level-ip-firewall-rules
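For what it's worth, a sketch of the same rule with the Azure CLI (the resource group and server names are placeholders; the 0.0.0.0 start/end addresses form the special rule that allows access from Azure services):

az sql server firewall-rule create \
  --resource-group my-resource-group \
  --server xyz \
  --name AllowAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0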
Hi, I am trying to learn Kubernetes.
I am doing this with minikube, and the following are the steps I took:
1.) Write a simple server using Node
2.) Write a Dockerfile for that particular Node server
3.) Create a kubernetes deployment
4.) Create a service (of type ClusterIP)
5.) Create a service (of type NodePort) to expose the container so I can access from outside (browser, curl)
But when I try to connect to the NodePort service using <NodeIP>:<NodePort>, it gives the error Failed to connect to 192.168.39.192 port 80: Connection refused.
These are the files I created as steps mentioned above (1-5).
1.) server.js - Only server.js is shown here; the relevant package.json exists and everything works as expected when I run the server locally (without deploying it in Docker). I mention this in case you would ask whether my server works correctly: yes, it does :)
'use strict';

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

// App
const app = express();
app.get('/', (req, res) => {
  res.send('Hello world\n');
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
2.) Dockerfile
FROM node:10
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
# If you are building your code for production
# RUN npm ci --only=production
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
3.) deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      name: node-web-app
  template:
    metadata:
      labels:
        # you can specify any labels you want here
        name: node-web-app
    spec:
      containers:
      - name: node-web-app
        # image must be the same as you built before (name:tag)
        image: banuka/node-web-app
        ports:
        - name: http
          containerPort: 8080
          protocol: TCP
        imagePullPolicy: Never
      terminationGracePeriodSeconds: 60
4.) clusterip.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    # these labels can be anything
    name: node-web-app-clusterip
  name: node-web-app-clusterip
spec:
  selector:
    app: node-web-app
  ports:
  - protocol: TCP
    port: 80
    # target is the port exposed by your containers (in our example 8080)
    targetPort: 8080
5.) NodePort.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    name: node-server-nodeport
  name: node-server-nodeport
spec:
  # this will make the service a NodePort service
  type: NodePort
  selector:
    app: node-app-web
  ports:
  - protocol: TCP
    # new -> this will be the port used to reach it from outside
    # if not specified, a random port will be used from a specific range (default: 30000-32767)
    nodePort: 32555
    port: 80
    targetPort: 8080
and when I try to curl from outside or use my web browser, it gives the following error:
curl: (7) Failed to connect to 192.168.39.192 port 32555: Connection refused
ps: pods and containers are also working as expected.
There are several possible reasons for this.
First: Are you using your local IP or the IP where the minikube VM is running? To verify, use minikube ip.
Second: The NodePort Service wants to select pods with the label app: node-app-web, but your pods only have the label name: node-web-app.
Just to make sure the port you assume is the one being used, check with minikube service list that the requested port was allocated. Check your firewall settings as well.
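For example (a sketch using the names from the manifests above):

# confirm the VM IP and the port that was actually allocated
minikube ip
minikube service list

# then hit the app directly
curl http://$(minikube ip):32555/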
I had the same problem whenever I wrote wrong selectors into a NodePort Service spec.
The Service's selector must match the Pod's labels.
In your NodePort.yaml the selector is app: node-app-web, while in deployment.yaml the label is name: node-web-app.
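A minimal sketch of the fix: keep everything else in NodePort.yaml the same and only make the selector match the pod label from deployment.yaml:

spec:
  type: NodePort
  selector:
    name: node-web-app   # was: app: node-app-web
  ports:
  - protocol: TCP
    nodePort: 32555
    port: 80
    targetPort: 8080

The same mismatch appears in clusterip.yaml, whose selector (app: node-web-app) also doesn't match the name: node-web-app label.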
I'm having issues with the internal DNS/service resolution within Kubernetes and I can't seem to track the issue down. I have an api-gateway pod running Kong, which calls other services by their internal service name, i.e. srv-name.staging.svc.cluster.local. This was working fine up until recently. I attempted to deploy 3 more services into two namespaces, staging and production.
The first service works as expected when calling booking-service.staging.svc.cluster.local; however, the same code doesn't seem to work in the production namespace, and the other two services don't work in either namespace.
The behaviour I'm getting is a timeout. If I curl these services from my gateway pod, they all time out, apart from the first service deployed (booking-service.staging.svc.cluster.local). When I call these services from another container within the same pod, they do work as expected.
I have Node services set up for each service I wish to expose to the client side.
Here's an example Kubernetes deployment:
---
# API
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{SRV_NAME}}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{SRV_NAME}}
    spec:
      containers:
      - name: booking-api
        image: microhq/micro:kubernetes
        args:
        - "api"
        - "--handler=rpc"
        env:
        - name: PORT
          value: "8080"
        - name: ENV
          value: {{ENV}}
        - name: MICRO_REGISTRY
          value: "kubernetes"
        ports:
        - containerPort: 8080
      - name: {{SRV_NAME}}
        image: eu.gcr.io/{{PROJECT_NAME}}/{{SRV_NAME}}:latest
        imagePullPolicy: Always
        command: [
          "./service",
          "--selector=static"
        ]
        env:
        - name: MICRO_REGISTRY
          value: "kubernetes"
        - name: ENV
          value: {{ENV}}
        - name: DB_HOST
          value: {{DB_HOST}}
        - name: VERSION
          value: "{{VERSION}}"
        - name: MICRO_SERVER_ADDRESS
          value: ":50051"
        ports:
        - containerPort: 50051
          name: srv-port
---
apiVersion: v1
kind: Service
metadata:
  name: booking-service
spec:
  ports:
  - name: api-http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: booking-api
I'm using go-micro https://github.com/micro/go-micro with the Kubernetes pre-configuration, which again works absolutely fine in one case but not in the others, which leads me to believe it's not code related. It also works fine locally.
When I do nslookup from another pod, it resolves the name and finds the cluster IP for the internal Node service as expected. When I attempt to cURL that IP address, I get the same timeout behavior.
I'm using Kubernetes 1.8 on Google Cloud.
I don't understand why you think this is an issue with the internal DNS/service resolution within Kubernetes, since the DNS lookup works but querying the resolved IP gives a connection timeout.
If you curl these services from outside the pod, they all time out, apart from the first service deployed, no matter whether you use the IP or the domain name.
When you call these services from another container within the same pod, they do work as expected.
It seems like an issue with the connectivity between pods more than a DNS issue, so I would focus your troubleshooting in that direction, but correct me if I am wrong.
Can you perform the classical networking troubleshooting (ping, telnet, traceroute) from a pod toward the IP given by the DNS lookup, and from one of the containers that is timing out toward one of the other pods, and update the question with the results?
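Something along these lines, as a sketch (the pod name, namespace, and IP are placeholders, and it assumes the tools are available in the image or can be installed there):

# open a shell in the gateway pod
kubectl exec -it api-gateway-0 -n staging -- sh

# inside the container, test the path to the IP returned by the DNS lookup
ping 10.3.245.137
telnet 10.3.245.137 80
traceroute 10.3.245.137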