Connection to Elasticsearch on Kubernetes fails - Node.js

I want to set up a simple single-node Elasticsearch pod on Kubernetes that I can connect to from my backend.
Here is the config for my Service and StatefulSet:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: elasticsearch
  ports:
    - port: 9200 # To get at the elasticsearch container, just hit the service on 9200
      targetPort: 9200 # routes to the exposed port on elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch # name of the StatefulSet
  namespace: default
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch # should match service > spec.selector.app
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      volumes:
        - name: elasticsearch-pvc
          persistentVolumeClaim:
            claimName: elasticsearch-volume-claim
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.2.3
          resources:
            limits:
              cpu: 100m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: search
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.type
              value: single-node
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: xpack.security.enabled
              value: "false"
      initContainers:
        - name: fix-permissions
          image: busybox
          command:
            ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: elasticsearch-pvc
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
I'm connecting via the JavaScript client ("@elastic/elasticsearch": "^8.2.1") like so:
import { Client, HttpConnection } from '@elastic/elasticsearch'
import config from '../../config'

export const client = new Client({
  node: config.elasticSearch.host,
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json'
  },
  Connection: HttpConnection
})
Where config.elasticSearch.host = http://elasticsearch:9200
However when I run my initial seed script I get the following error:
/app/node_modules/@elastic/transport/lib/Transport.js:525
    : new errors_1.ConnectionError(error.message, result);
      ^
ConnectionError: connect ECONNREFUSED 10.244.0.112:9200
I'm not entirely sure why the connection is being refused, since the service should be directing the request to my Elasticsearch StatefulSet.
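One way to narrow this down (a debugging sketch, not part of the original post; the names come from the manifests above) is to check whether Elasticsearch is actually up and registered behind the service before the seed script runs:
# Is the pod Ready, and did the headless service pick it up?
kubectl get pods -l app=elasticsearch
kubectl get endpoints elasticsearch
# Can Elasticsearch be reached through the service from inside the cluster?
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
  curl -s http://elasticsearch:9200
Since the service is headless (clusterIP: None), the client resolves straight to the pod IP, which matches the 10.244.0.112 in the error; ECONNREFUSED then usually means the pod exists but Elasticsearch is not listening yet, which is plausible while it starts up under a 100m CPU limit.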

Related

NodeJS, gRPC and Kubernetes

I have created a headless service in Kubernetes for the gRPC server pods.
# Express server: acts as client for gRPC server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bbl-org-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bbl-org-client
  template:
    metadata:
      labels:
        app: bbl-org-client
    spec:
      containers:
        - name: bbl-org-client
          image: sk10/bbl-org-client-service:fbbcc26-dirty
          resources:
            limits:
              memory: "256Mi"
              cpu: "0.1"
---
apiVersion: v1
kind: Service
metadata:
  name: bbl-org-client
spec:
  type: ClusterIP
  selector:
    app: bbl-org-client
  ports:
    - name: bbl-org-client
      protocol: TCP
      port: 3000
      targetPort: 8080
---
# Babble gRPC server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bbl-org-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bbl-org-server
  template:
    metadata:
      labels:
        app: bbl-org-server
    spec:
      containers:
        - name: bbl-org-server
          image: sk10/bbl-org-server-service:fbbcc26-dirty
          resources:
            limits:
              memory: "256Mi"
              cpu: "0.1"
---
apiVersion: v1
kind: Service
metadata:
  name: bbl-org-server
spec:
  clusterIP: None
  selector:
    app: bbl-org-server
  ports:
    - name: bbl-org-server
      protocol: TCP
      port: 50051
      targetPort: 50051
---
# Mongo DB
apiVersion: apps/v1
kind: Deployment
metadata:
  name: babble-org-mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: babble-org-mongo
  template:
    metadata:
      labels:
        app: babble-org-mongo
    spec:
      containers:
        - name: babble-org-mongo
          image: mongo
          resources:
            limits:
              memory: "256Mi"
              cpu: "0.1"
---
apiVersion: v1
kind: Service
metadata:
  name: babble-org-mongo
spec:
  type: ClusterIP
  selector:
    app: babble-org-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
And my client connection code is:
const client = new orgPackageDefinition.OrganizationService(
  "bbl-org-server.default.svc.cluster.local:50051",
  grpc.credentials.createInsecure()
);
But it does not connect to the server, and I get this response:
{
  "message": {
    "code": 14,
    "details": "No connection established",
    "metadata": {},
    "progress": "PROCESSED"
  }
}
Please help me.
I have created a headless service and I'm able to ping bbl-org-server from bbl-org-client, but I'm not able to connect with the gRPC client.
Add a name prefix to the Kubernetes service port so it is recognized as a gRPC port. In the example below you can see the difference between an HTTP and a gRPC port:
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8383
    - name: grpc
      port: 9090
      protocol: TCP
      targetPort: 9090
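Applied to the bbl-org-server service from the question, that would look like the sketch below (only the port name changes; ports and selector are as posted):
apiVersion: v1
kind: Service
metadata:
  name: bbl-org-server
spec:
  clusterIP: None
  selector:
    app: bbl-org-server
  ports:
    - name: grpc # the "grpc" name prefix is what marks the protocol
      protocol: TCP
      port: 50051
      targetPort: 50051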

MinIO deployed on k8s, but the web console page cannot be accessed

The k8s file, based on the Bitnami chart, looks like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app: minio
  name: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  serviceName: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - env:
            - name: BITNAMI_DEBUG
              value: "false"
            - name: MINIO_SCHEME
              value: http
            - name: MINIO_FORCE_NEW_KEYS
              value: "no"
            - name: MINIO_ROOT_USER
              value: linkflow
            - name: MINIO_ROOT_PASSWORD
              value: Sjtu403c##%
            - name: MINIO_BROWSER
              value: "on"
            - name: MINIO_PROMETHEUS_AUTH_TYPE
              value: public
            - name: MINIO_CONSOLE_PORT_NUMBER
              value: "9001"
          image: registry.aliyuncs.com/linkflow/minio-bitnami
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /minio/health/live
              port: minio-api
              scheme: HTTP
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            timeoutSeconds: 5
          name: minio
          ports:
            - containerPort: 9000
              name: minio-api
              protocol: TCP
            - containerPort: 9001
              name: minio-console
              protocol: TCP
          readinessProbe:
            failureThreshold: 5
            initialDelaySeconds: 5
            periodSeconds: 5
            successThreshold: 1
            tcpSocket:
              port: minio-api
            timeoutSeconds: 1
          resources:
            limits:
              memory: 1Gi
            requests:
              memory: 1G
          securityContext:
            runAsNonRoot: true
            runAsUser: 1001
          volumeMounts:
            - mountPath: /data
              name: data
      securityContext:
        fsGroup: 1001
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
        storageClassName: default
        volumeMode: Filesystem
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: minio
  name: minio
spec:
  ports:
    - name: minio-api
      port: 9000
      targetPort: minio-api
    - name: minio-console
      port: 9001
      targetPort: minio-console
  selector:
    app: minio
When I use local k8s port-forward, it runs OK: the web console can be seen at http://127.0.0.1/minio.
kubectl port-forward svc/minio 9001:9001
My ingress:
- backend:
    service:
      name: minio
      port:
        number: 9001
  path: /minio
  pathType: ImplementationSpecific
And when I use the Azure SLB with a domain, https://hostname/minio gives an error:
Uncaught SyntaxError: Unexpected token '<'
I tried adding the env MINIO_BROWSER_REDIRECT_URL, but it did not work. What can I do?
The ingress path needs to change to /. The console requests its static assets from absolute paths, so under the /minio prefix those requests miss the rule and come back as HTML, which is most likely what triggers the Unexpected token '<' error:
- backend:
    service:
      name: minio
      port:
        number: 9001
  path: /
  pathType: ImplementationSpecific
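A quick way to see what the console is actually asking for (a sketch; hostname stands in for the real domain):
# The script tags in the console page point at absolute paths that the /minio
# ingress rule does not match, so the browser receives an HTML fallback page
# instead of JavaScript - hence "Unexpected token '<'"
curl -s https://hostname/minio | grep -o 'src="[^"]*"'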

Unable to establish a connection with Postgres using a ClusterIP service

Can't reach database server at postgres-srv:5432
Please make sure your database server is running at postgres-srv:5432.
depl.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    app: postgres
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /var/lib/data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:latest
          imagePullPolicy: "IfNotPresent"
          envFrom:
            - configMapRef:
                name: postgres-config
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgredb
      volumes:
        - name: postgredb
          persistentVolumeClaim:
            claimName: postgres-pv-claim
Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-srv
spec:
  selector:
    app: postgres
  ports:
    - name: db
      protocol: TCP
      port: 5432
      targetPort: 5432
postgres-srv ClusterIP 10.108.208.56 <none> 5432/TCP 4m59s
Connection URL:
DATABASE_URL="postgresql://postgres:root@postgres-srv:5432/postgresdb?schema=public"
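Two checks that usually narrow this down (a sketch, not from the original post; names and credentials are taken from the manifests above):
# Does the service actually have the postgres pod behind it?
kubectl get endpoints postgres-srv
# Can postgres be reached from another pod in the cluster?
kubectl run -it --rm pg-test --image=postgres:latest --restart=Never -- \
  psql "postgresql://postgres:root@postgres-srv:5432/postgresdb" -c "SELECT 1"
An empty endpoints list would point at a label/selector mismatch; a psql failure from inside the cluster would point at the database itself rather than the service.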

Kubernetes Zookeeper Cluster Setup/Configuration YAML

I am trying to run ZooKeeper as a cluster in Azure Kubernetes Service. All the instances are starting with myid:1, and I'm not sure what configuration I need to change. Any help is appreciated.
Here's my configuration file:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
    - port: 2181
      name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zookeeper-sc
  selfLink: /apis/storage.k8s.io/v1/storageclasses/zookeeper-sc
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: zookeeper
          image: "zookeeper:3.6.2"
          env:
            - name: ZOO_MY_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.annotations['spec.pod.beta.kubernetes.io/statefulset-index']
            - name: ZOO_SERVERS
              value: "server.1=zk-0:2888:3888;2181 server.2=zk-1:2888:3888;2181 server.3=zk-2:2888:3888;2181"
            - name: ZOO_STANDALONE_ENABLED
              value: "false"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr,mntr"
          resources:
            requests:
              memory: "1Gi"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          volumeMounts:
            - name: zk-data
              mountPath: "/data"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: zk-data
      spec:
        storageClassName: "zookeeper-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
After a week I came up with the configuration below, which worked. The key change is an init container that writes /data/myid from the pod's StatefulSet ordinal; the statefulset-index annotation referenced above is apparently never populated, so ZOO_MY_ID resolved to nothing and every instance fell back to myid 1:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
    - port: 2181
      name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zookeeper-sc
  selfLink: /apis/storage.k8s.io/v1/storageclasses/zookeeper-sc
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      initContainers:
        - command:
            - /bin/bash
            - -c
            - |-
              set -ex;
              mkdir -p /data;
              if [[ ! -f "/data/myid" ]]; then
                hostindex=$HOSTNAME; let zooid=${hostindex: -1: 1}+1; echo $zooid > "/data/myid"
                echo "Zookeeper MyId: " $zooid
              fi
          env:
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          image: zookeeper:3.6.2
          name: zookeeper-init
          securityContext:
            runAsUser: 1000
          volumeMounts:
            - name: zk-data
              mountPath: "/data"
      containers:
        - name: zookeeper
          image: "zookeeper:3.6.2"
          env:
            - name: ZOO_SERVERS
              value: "server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888;2181 server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888;2181 server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888;2181"
            - name: ZOO_STANDALONE_ENABLED
              value: "false"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr,mntr"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          volumeMounts:
            - name: zk-data
              mountPath: "/data"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
    - metadata:
        name: zk-data
      spec:
        storageClassName: "zookeeper-sc"
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
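To verify (a sketch; it assumes the default namespace and the official zookeeper image, where zkServer.sh is on the PATH), each pod should now report a distinct id and a quorum role:
# expect myid 1/2/3 and one "leader" among the "follower"s
for i in 0 1 2; do
  kubectl exec zk-$i -- bash -c 'echo -n "myid: "; cat /data/myid; zkServer.sh status'
done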

Istio, cannot call microservice-2 from microservice-1

I installed Istio on minikube and deployed a sample application.
Everything works normally, except that I cannot make a request from one microservice to another.
The products service is listening on :9080 at the /products route.
The URLs I tried:
'products:9080/products'
'blog-products:9080/products'
But neither of them worked.
await axios.get('products:9080/products')
await axios.get('blog-products:9080/products')
If I run this command it works, but when I try to request that URL from inside microservice-1 it does not work.
kubectl exec -it $(kubectl get pod -l app=products -o jsonpath='{.items[0].metadata.name}') -c products -- curl products:9080/products
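One detail worth checking (an observation on the snippets above, not from the original thread): curl accepts a scheme-less URL and assumes http://, but axios does not. 'products:9080/products' is parsed as a URL whose protocol is products:, so the request never reaches host products on port 9080. A minimal sketch with the scheme made explicit:
// "products" resolves via the Kubernetes service; the scheme is required
const res = await axios.get('http://products:9080/products')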
blog.yaml with the service definitions:
#blog.yaml
##################################################################################################
# Products service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: products
  labels:
    app: products
    service: products
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: products
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: blog-products
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products-v1
  labels:
    app: products
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: products
      version: v1
  template:
    metadata:
      labels:
        app: products
        version: v1
    spec:
      serviceAccountName: blog-products
      containers:
        - name: products
          image: s1nc4p/blogproducts
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
---
##################################################################################################
# Home service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: home
  labels:
    app: home
    service: home
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: home
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: blog-home
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-v1
  labels:
    app: home
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home
      version: v1
  template:
    metadata:
      labels:
        app: home
        version: v1
    spec:
      serviceAccountName: blog-home
      containers:
        - name: home
          image: s1nc4p/bloghome:v4
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9080
blog.gateway.yaml
#blog.gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: blog-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: blog
spec:
  hosts:
    - "*"
  gateways:
    - blog-gateway
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: home
            port:
              number: 9080