How can I route requests through my API gateway? (AKS) - Azure

I have an Azure Kubernetes Service cluster with currently 3 microservices on it: 1 API gateway and 2 backend microservices. I can reach my API gateway and everything works there, but when I try to reach my other microservices via the API gateway, it still doesn't work.
This is my YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway-front
  template:
    metadata:
      labels:
        app: apigateway-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: apigateway-front
        image: containerregistry.azurecr.io/apigateway:11
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8800
          name: apigateway
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway-front
spec:
  type: LoadBalancer
  ports:
  - port: 8800
  selector:
    app: apigateway-front
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contacts-back
  template:
    metadata:
      labels:
        app: contacts-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: contacts-back
        image: containerregistry.azurecr.io/contacts:12
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8100
          name: contacts-back
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-back
spec:
  ports:
  - port: 8100
  selector:
    app: contacts-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: templates-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: templates-back
  template:
    metadata:
      labels:
        app: templates-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: templates-back
        image: containerregistry.azurecr.io/templates:13
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 512Mi
        ports:
        - containerPort: 8200
          name: templates-back
---
apiVersion: v1
kind: Service
metadata:
  name: templates-back
spec:
  ports:
  - port: 8200
  selector:
    app: templates-back
Do I need an additional naming service (Eureka) to reach my backend microservices, or can I do it without one?
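For context, the two backend Services above are plain ClusterIP Services, so they are already resolvable by name through the cluster's DNS; a separate registry is not required just for that. Below is a minimal sketch of how the gateway could call them, assuming it makes plain HTTP requests from Node.js — the axios dependency and the /contacts and /templates paths are illustrative assumptions, not part of the original setup.
// Hypothetical sketch: from inside the apigateway-front pod, the backend
// Services are reachable by their Service names via cluster DNS.
const axios = require("axios");

async function callBackends() {
  // "contacts-back" and "templates-back" are the Service names defined above;
  // the request paths are placeholders for whatever the backends expose.
  const contacts = await axios.get("http://contacts-back:8100/contacts");
  const templates = await axios.get("http://templates-back:8200/templates");
  return { contacts: contacts.data, templates: templates.data };
}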

Related

NodeJS, gRPC and Kubernetes

I have created a headless service in Kubernetes for the gRPC server pods.
# Express server: acts as client for gRPC server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bbl-org-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bbl-org-client
  template:
    metadata:
      labels:
        app: bbl-org-client
    spec:
      containers:
      - name: bbl-org-client
        image: sk10/bbl-org-client-service:fbbcc26-dirty
        resources:
          limits:
            memory: "256Mi"
            cpu: "0.1"
---
apiVersion: v1
kind: Service
metadata:
  name: bbl-org-client
spec:
  type: ClusterIP
  selector:
    app: bbl-org-client
  ports:
  - name: bbl-org-client
    protocol: TCP
    port: 3000
    targetPort: 8080
---
# Babble gRPC server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bbl-org-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bbl-org-server
  template:
    metadata:
      labels:
        app: bbl-org-server
    spec:
      containers:
      - name: bbl-org-server
        image: sk10/bbl-org-server-service:fbbcc26-dirty
        resources:
          limits:
            memory: "256Mi"
            cpu: "0.1"
---
apiVersion: v1
kind: Service
metadata:
  name: bbl-org-server
spec:
  clusterIP: None
  selector:
    app: bbl-org-server
  ports:
  - name: bbl-org-server
    protocol: TCP
    port: 50051
    targetPort: 50051
---
# Mongo DB
apiVersion: apps/v1
kind: Deployment
metadata:
  name: babble-org-mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: babble-org-mongo
  template:
    metadata:
      labels:
        app: babble-org-mongo
    spec:
      containers:
      - name: babble-org-mongo
        image: mongo
        resources:
          limits:
            memory: "256Mi"
            cpu: "0.1"
---
apiVersion: v1
kind: Service
metadata:
  name: babble-org-mongo
spec:
  type: ClusterIP
  selector:
    app: babble-org-mongo
  ports:
  - name: db
    protocol: TCP
    port: 27017
    targetPort: 27017
and my client connection code is
const client = new orgPackageDefinition.OrganizationService(
  "bbl-org-server.default.svc.cluster.local:50051",
  grpc.credentials.createInsecure()
);
But it is not connecting to the server, and I get this response:
{
  "message": {
    "code": 14,
    "details": "No connection established",
    "metadata": {},
    "progress": "PROCESSED"
  }
}
Please help me.
I have created a headless service and I'm able to ping bbl-org-server from bbl-org-client, but I'm not able to connect with the gRPC client.
Add a name prefix to the Kubernetes Service port so that it is recognised as a gRPC port. In the example below you can see the difference between an http and a grpc port:
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8383
  - name: grpc
    port: 9090
    protocol: TCP
    targetPort: 9090
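Separate from the port-naming fix above, one way to see whether the channel ever becomes ready is to wait on it explicitly. A rough sketch assuming the client uses @grpc/grpc-js and the orgPackageDefinition object from the question (neither import is shown in the original snippet):
// Hypothetical connectivity check; orgPackageDefinition is assumed to be the
// loaded package definition from the question.
const grpc = require("@grpc/grpc-js");

const client = new orgPackageDefinition.OrganizationService(
  "bbl-org-server.default.svc.cluster.local:50051",
  grpc.credentials.createInsecure()
);

// waitForReady reports an error if no connection is established before the deadline.
const deadline = new Date(Date.now() + 5000);
client.waitForReady(deadline, (err) => {
  if (err) {
    console.error("gRPC channel not ready:", err.message);
  } else {
    console.log("gRPC channel is ready");
  }
});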

Unable to establish a connection with Postgres using a ClusterIP service

Can't reach database server at postgres-srv:5432
Please make sure your database server is running at postgres-srv:5432.
depl.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: root
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: postgres-pv-volume
  labels:
    app: postgres
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /var/lib/data
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pv-claim
  labels:
    app: postgres
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:latest
        imagePullPolicy: "IfNotPresent"
        envFrom:
        - configMapRef:
            name: postgres-config
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        persistentVolumeClaim:
          claimName: postgres-pv-claim
Service:
apiVersion: v1
kind: Service
metadata:
  name: postgres-srv
spec:
  selector:
    app: postgres
  ports:
  - name: db
    protocol: TCP
    port: 5432
    targetPort: 5432
postgres-srv ClusterIP 10.108.208.56 <none> 5432/TCP 4m59s
Connection URL:
DATABASE_URL="postgresql://postgres:root#postgres-srv:5432/postgresdb?schema=public"
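As a rough way to check whether the Service name resolves and the database accepts connections from inside the cluster, here is a sketch using node-postgres (pg) — the driver choice is an assumption, since the question only shows the connection URL:
// Hypothetical in-cluster connectivity check; the credentials mirror the
// ConfigMap above (postgres / root / postgresdb).
const { Pool } = require("pg");

const pool = new Pool({
  connectionString: "postgresql://postgres:root@postgres-srv:5432/postgresdb",
});

pool
  .query("SELECT 1 AS ok")
  .then((res) => console.log("Connected:", res.rows[0]))
  .catch((err) => console.error("Connection failed:", err.message))
  .finally(() => pool.end());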

Communication between frontend and backend with AKS

An Angular application cannot find another .NET application in the same Azure AKS cluster.
The frontend cannot find the backend and gives me the error net::ERR_NAME_NOT_RESOLVED,
but when I try to call this backend from another backend, it works fine.
backend yamls
deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: storageconverter-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: storageconverter-api
  template:
    metadata:
      labels:
        app: storageconverter-api
    spec:
      containers:
      - name: storageconverter-api
        image: legacypoc.azurecr.io/storageconverter-api
        ports:
        - containerPort: 80
service yaml
apiVersion: v1
kind: Service
metadata:
  name: storageconverter-api
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: storageconverter-api
frontend yamls
deployment yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mainfrontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mainfrontend
  template:
    metadata:
      labels:
        app: mainfrontend
    spec:
      containers:
      - name: mainfrontend
        image: legacypoc.azurecr.io/mainfrontend
        ports:
        - containerPort: 80
service yaml
apiVersion: v1
kind: Service
metadata:
  name: mainfrontend
spec:
  type: LoadBalancer
  ports:
  - port: 4200
    targetPort: 4200
  selector:
    app: mainfrontend
The applications are in the same namespace.
The URL I'm trying to reach is:
http://storageconverter-api.default.svc.cluster.local/api/Menu
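One possible reading of the error, not confirmed by the question: net::ERR_NAME_NOT_RESOLVED suggests the request is made by the Angular code running in the browser, and cluster-internal names like *.svc.cluster.local only resolve for code running inside the cluster, which would also explain why backend-to-backend calls work. A sketch of the distinction, with a placeholder for an externally reachable address:
// Hypothetical illustration only.
const inClusterUrl =
  "http://storageconverter-api.default.svc.cluster.local/api/Menu"; // resolvable pod-to-pod only
const browserReachableUrl =
  "http://<external-ip-or-ingress-host>/api/Menu"; // placeholder for what a browser would need

fetch(browserReachableUrl)
  .then((res) => res.json())
  .then((menu) => console.log(menu))
  .catch((err) => console.error(err.message));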

Kubernetes Zookeeper Cluster Setup/Configuration YAML

I am trying to run ZooKeeper as a cluster in Azure Kubernetes Service. All the instances are starting with myid:1; I'm not sure what configuration I need to change. Any help is appreciated.
Here's my configuration file:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zookeeper-sc
  selfLink: /apis/storage.k8s.io/v1/storageclasses/zookeeper-sc
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zookeeper
        image: "zookeeper:3.6.2"
        env:
        - name: ZOO_MY_ID
          valueFrom:
            fieldRef:
              fieldPath: metadata.annotations['spec.pod.beta.kubernetes.io/statefulset-index']
        - name: ZOO_SERVERS
          value: "server.1=zk-0:2888:3888;2181 server.2=zk-1:2888:3888;2181 server.3=zk-2:2888:3888;2181"
        - name: ZOO_STANDALONE_ENABLED
          value: "false"
        - name: ZOO_4LW_COMMANDS_WHITELIST
          value: "srvr,mntr"
        resources:
          requests:
            memory: "1Gi"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: zk-data
          mountPath: "/data"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: zk-data
    spec:
      storageClassName: "zookeeper-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
After a week I came up with the configuration below, which worked:
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: zookeeper-sc
  selfLink: /apis/storage.k8s.io/v1/storageclasses/zookeeper-sc
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: 'true'
provisioner: kubernetes.io/azure-disk
parameters:
  cachingmode: ReadOnly
  kind: Managed
  storageaccounttype: StandardSSD_LRS
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      initContainers:
      - command:
        - /bin/bash
        - -c
        - |-
          set -ex;
          mkdir -p /data;
          if [[ ! -f "/data/myid" ]]; then
            hostindex=$HOSTNAME; let zooid=${hostindex: -1: 1}+1; echo $zooid > "/data/myid"
            echo "Zookeeper MyId: " $zooid
          fi
        env:
        - name: HOSTNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        image: zookeeper:3.6.2
        name: zookeeper-init
        securityContext:
          runAsUser: 1000
        volumeMounts:
        - name: zk-data
          mountPath: "/data"
      containers:
      - name: zookeeper
        image: "zookeeper:3.6.2"
        env:
        - name: ZOO_SERVERS
          value: "server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888;2181 server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888;2181 server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888;2181"
        - name: ZOO_STANDALONE_ENABLED
          value: "false"
        - name: ZOO_4LW_COMMANDS_WHITELIST
          value: "srvr,mntr"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        volumeMounts:
        - name: zk-data
          mountPath: "/data"
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: zk-data
    spec:
      storageClassName: "zookeeper-sc"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
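To confirm that the three replicas actually formed an ensemble (rather than each running standalone with myid 1), one small check is to send the srvr four-letter word, which is already whitelisted above, to each pod and look at the reported Mode. A sketch using only Node's built-in net module, run from a pod in the same namespace; the per-pod DNS names come from the zk-hs headless Service:
// Hypothetical verification: "Mode: leader" / "Mode: follower" means the
// ensemble formed; "Mode: standalone" would mean the myid problem persists.
const net = require("net");

const hosts = [
  "zk-0.zk-hs.default.svc.cluster.local",
  "zk-1.zk-hs.default.svc.cluster.local",
  "zk-2.zk-hs.default.svc.cluster.local",
];

for (const host of hosts) {
  const socket = net.createConnection({ host, port: 2181 }, () => socket.write("srvr"));
  let reply = "";
  socket.on("data", (chunk) => (reply += chunk));
  socket.on("end", () => {
    const mode = reply.split("\n").find((line) => line.startsWith("Mode:"));
    console.log(`${host}: ${mode || "no Mode line in reply"}`);
  });
  socket.on("error", (err) => console.error(`${host}: ${err.message}`));
}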

Istio, cannot call microservice-2 from microservice-1

I installed Istio on minikube and deployed a sample application.
Everything works normally, except that I cannot make requests between microservices.
The products service is listening on :9080 on the /products route.
The URLs I tried:
'products:9080/products'
'blog-products:9080/products'
But none of them worked.
await axios.get('products:9080/products')
await axios.get('blog-products:9080/products')
If I run this command it works, but when I try to call that URL inside microservice-1 it does not work.
kubectl exec -it $(kubectl get pod -l app=products -o jsonpath='{.items[0].metadata.name}') -c products -- curl products:9080/products
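One difference between the curl call and the axios calls above, offered as an observation rather than a confirmed cause: curl defaults to http:// when no scheme is given, while axios in Node needs an absolute URL that includes the scheme. A sketch with the scheme added:
// Hypothetical: same Service name and port, but with an explicit http:// scheme.
const axios = require("axios");

async function getProducts() {
  const res = await axios.get("http://products:9080/products");
  return res.data;
}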
blog.yaml for service definitions
#blog.yaml
##################################################################################################
# Products service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: products
  labels:
    app: products
    service: products
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: products
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: blog-products
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: products-v1
  labels:
    app: products
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: products
      version: v1
  template:
    metadata:
      labels:
        app: products
        version: v1
    spec:
      serviceAccountName: blog-products
      containers:
      - name: products
        image: s1nc4p/blogproducts
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# home services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: home
  labels:
    app: home
    service: home
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: home
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: blog-home
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: home-v1
  labels:
    app: home
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: home
      version: v1
  template:
    metadata:
      labels:
        app: home
        version: v1
    spec:
      serviceAccountName: blog-home
      containers:
      - name: home
        image: s1nc4p/bloghome:v4
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
blog.gateway.yaml
#blog.gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: blog-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: blog
spec:
  hosts:
  - "*"
  gateways:
  - blog-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: home
        port:
          number: 9080
