JHipster microservice gateway with multi-cluster ingress

I have a JHipster-generated microservice gateway and application that I run on Google's GKE using the JHipster Kubernetes generator. I have Istio deployed in the cluster and am not using the jhipster-registry.
When I deploy the gateway with ServiceType=Ingress, communication between the gateway and the application works great. I am now trying to set up a GKE multi-cluster ingress that load-balances the application across clusters in different regions.
Google has a beta tool called kubemci that sets up all the plumbing for the load balancers. However, in order to use kubemci, the gateway needs to be deployed as a NodePort instead of a ClusterIP. When I deploy with ServiceType=NodePort, I get errors when trying to create entities.
The error is:
translation-not-found[Http failure response for http://store.xxxx.com/product04/api/products?page=0&size=20&sort=id,asc: 404 Not Found]
I do not get this error when the app is deployed as a ClusterIP and I access it through the Istio ingress gateway. Does anyone know what I need to do to get the microservices to talk to the gateway when it's defined as a NodePort?
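For reference, here is a minimal sketch of what the gateway Service might look like when switched to NodePort for kubemci; the service name store, the port 8080, and the nodePort 30080 are assumptions rather than values from the generated descriptors, and kubemci also expects the same nodePort value in every cluster, so it should be pinned rather than auto-assigned:

```yaml
# Hypothetical NodePort service for the JHipster gateway; the name,
# labels, and port values are assumptions, not taken from the
# generated deployment descriptors.
apiVersion: v1
kind: Service
metadata:
  name: store
  labels:
    app: store
spec:
  type: NodePort
  selector:
    app: store
  ports:
    - name: http
      port: 8080
      targetPort: 8080
      # kubemci needs the same nodePort on every cluster, so pin it
      # instead of letting Kubernetes assign one at random.
      nodePort: 30080
```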

Related

SSL activation issue while using multiple microservices deployed in an Azure Kubernetes cluster through the Istio gateway

We have an Azure Kubernetes cluster with an Istio gateway installed and have deployed 10 microservices in it. We installed the SSL certificate without any errors, but HTTPS works only for the most recently deployed microservice. Please support us in resolving this issue.

Ocelot API Gateway implementation in AKS

I'm creating an AKS cluster, and I want to use an API gateway (Ocelot) to route and authenticate requests toward the containers (microservices) behind the gateway. My question is how to achieve this. I know I must deploy the Ocelot API gateway inside a node, but I don't know how to configure all traffic to go through the API gateway. I can't find an example or directions that could help me. What steps do I need to take? Or is there maybe a better way of accomplishing the desired scenario?
If you use Ocelot as an API Gateway, you must create a .NET project with a configuration file for the routes you want to use. You then deploy this with a Deployment inside your cluster, along with the containers running your APIs, and front your API Gateway with a ClusterIP service. At this point, you should test internally that calls are routed properly from the ClusterIP through the API Gateway to your APIs. You can then expose your API Gateway on the Internet using either a Load Balancer service, an Ingress controller, or Azure Application Gateway.
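As a rough illustration of that setup, here is a hedged sketch of the Deployment plus the ClusterIP service fronting the gateway; the image name, labels, and ports are hypothetical:

```yaml
# Sketch: Ocelot gateway as a Deployment fronted by a ClusterIP service.
# The image, names, and ports are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ocelot-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ocelot-gateway
  template:
    metadata:
      labels:
        app: ocelot-gateway
    spec:
      containers:
        - name: ocelot-gateway
          image: myregistry.azurecr.io/ocelot-gateway:latest  # hypothetical image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ocelot-gateway
spec:
  type: ClusterIP        # internal only; expose later via LB/Ingress
  selector:
    app: ocelot-gateway
  ports:
    - port: 80
      targetPort: 80
```

With this in place, other pods can reach the gateway at http://ocelot-gateway inside the cluster, which is what you would test before exposing it externally.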
Another way is to not use an Ocelot API Gateway at all, instead using an Ingress controller and configuring the routes directly in it, as sketched below.
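A minimal sketch of that alternative, assuming the NGINX ingress controller and hypothetical service names and paths:

```yaml
# Sketch of the no-Ocelot alternative: route paths directly on an
# Ingress. Service names and paths are hypothetical.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-routes
spec:
  ingressClassName: nginx          # assumes the NGINX ingress controller
  rules:
    - http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service      # hypothetical backend service
                port:
                  number: 80
          - path: /products
            pathType: Prefix
            backend:
              service:
                name: products-service    # hypothetical backend service
                port:
                  number: 80
```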

How do I make my microservices only accessible by the API gateway

I would like to know how I can protect my Node.js microservices so that only the API gateway can access them. Currently each microservice is exposed on a unique port on my machine and can be accessed directly without passing through the gateway. That defeats the purpose of the gateway serving as the only entry point in the system for secure and authorized information exchange.
The microservices and the gateway are currently built with Node.js and Express.
The plan is to eventually deploy it on the cloud (DigitalOcean). I'd appreciate any response. Thanks.
Kubernetes can solve this problem.
Kubernetes manages containers, where each container can be a microservice.
When connecting your microservices to your gateway server, you can choose to allow external connections only to the gateway server. You would have a load balancer / nginx in your Kubernetes cluster that redirects requests to your gateway server.
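A hedged sketch of that shape, with hypothetical names and ports: only the gateway gets an external address, while each microservice stays on a ClusterIP service that is unreachable from outside the cluster:

```yaml
# Sketch: only the gateway is exposed; microservices stay internal.
# Names and ports are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer     # public entry point
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: users-service
spec:
  type: ClusterIP        # no external exposure; reachable only in-cluster
  selector:
    app: users-service
  ports:
    - port: 3000
      targetPort: 3000
```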
Kubernetes has many other features such as:
Service discovery: each of your microservices' IPs could change on restart/deployment unless you have static IPs for all your services. Service discovery solves this problem.
High availability, horizontal scaling & zero downtime: you can configure several replicas for each of your services, so when one replica goes down there are still others alive to handle the remaining requests. This also helps with CI/CD: with something like GitHub Actions you can build a smooth pipeline, and when you deploy a new Docker image (updating a microservice), Kubernetes launches the new container first and then kills the old one, so you get zero downtime (see the sketch after this list).
If you are working with microservices, you should definitely take a deep dive into Kubernetes.
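Here is a hedged sketch of the replica and zero-downtime points from the list above; the image and names are hypothetical:

```yaml
# Sketch: three replicas plus a rolling update that starts a new pod
# before stopping an old one. Image and names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: users-service
spec:
  replicas: 3            # survive single-pod failures
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # launch one new pod first...
      maxUnavailable: 0  # ...and never drop below 3 running pods
  selector:
    matchLabels:
      app: users-service
  template:
    metadata:
      labels:
        app: users-service
    spec:
      containers:
        - name: users-service
          image: myregistry/users-service:1.2.0  # hypothetical image
          ports:
            - containerPort: 3000
```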

GKE - How to use HTTPS on the Gateway in a JHipster 6 Microservice UAA project

I need some guidance please, first here is my project details :
- JHipster v6.0.0
- Angular
- Microservices architecture with Jhipster-Registry + UAA
- No monitoring, no Ingress, no Istio (just the defaults options, full JHipster)
- Deployed on Google Kubernetes Engine cluster
So, if I understand correctly, with my current setup it is the Gateway that does the load balancing using Netflix Ribbon, and it is the entry point from the World Wide Web to my app. How can I make my app accessible over HTTPS with an SSL certificate on GKE? I'm a bit confused, do I need to switch to Ingress?
Thanks
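One common way to get HTTPS on GKE is to terminate TLS on an Ingress in front of the gateway service. A minimal sketch, assuming a hypothetical host, TLS secret, and service name (note that GKE's built-in ingress controller requires the backing service to be of type NodePort, or to use container-native NEGs):

```yaml
# Sketch: TLS termination on a GKE Ingress in front of the JHipster
# gateway. Host, secret, and service names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gateway-ingress
spec:
  tls:
    - hosts:
        - app.example.com            # hypothetical host
      secretName: gateway-tls        # hypothetical TLS secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway        # assumed JHipster gateway service
                port:
                  number: 8080
```

The gateway-tls secret would be created from your certificate and key with kubectl create secret tls; on GKE you could also use a Google-managed certificate instead.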

Connecting JHipster to Spinnaker?

I'm new to Spinnaker. JHipster uses microservices and has its own load balancer (Netflix OSS).
How could it be connected to Spinnaker's load balancer?
Spinnaker does not have a load balancer itself; the cloud providers, handled by Spinnaker, do.
Let me link the Load Balancer definition in the Spinnaker docs:
Load Balancer: A Load Balancer is associated with an ingress protocol and port range, and balances traffic among instances in the corresponding Server Group. Optionally, you can enable health checks for a load balancer, with flexibility to define health criteria and specify the health check endpoint.
AFAIK JHipster is a framework for developing microservices rather than "using" them. But maybe I'm wrong.
Assuming JHipster configures spring-cloud client-side load balancing with Eureka, a JHipster app would work well when deployed with Spinnaker with the Eureka provider enabled: the Eureka status of the JHipster app would be reflected in the Spinnaker UI and would also be used to gate progression through a deployment pipeline (Spinnaker would not proceed with the pipeline until the app entered the UP state).
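For context, the Eureka wiring a JHipster app ships with lives in its Spring configuration; below is a hedged sketch of the relevant block, with a placeholder registry URL and credentials. This registration status is what Spinnaker's Eureka provider would surface:

```yaml
# Hedged sketch of the Eureka client settings a JHipster app carries in
# its Spring config; the registry URL, credentials, and app name are
# placeholders, not values from any real project.
eureka:
  client:
    enabled: true
    fetch-registry: true
    register-with-eureka: true
    service-url:
      defaultZone: http://admin:admin@jhipster-registry:8761/eureka/
  instance:
    appname: mygateway                      # hypothetical application name
    lease-renewal-interval-in-seconds: 5    # heartbeat interval
```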
