GKE - How to use HTTPS on the Gateway in a JHipster 6 microservice UAA project

I need some guidance please. First, here are my project details:
- JHipster v6.0.0
- Angular
- Microservices architecture with JHipster Registry + UAA
- No monitoring, no Ingress, no Istio (just the default options, full JHipster)
- Deployed on a Google Kubernetes Engine cluster
So, if I understand correctly, with my current setup it is the gateway that does the load balancing (using Netflix Ribbon) and it is the entry point from the World Wide Web to my app. How can I make my app accessible over HTTPS with an SSL certificate on GKE? I'm a bit confused: do I need to switch to Ingress?
Thanks
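For reference, a minimal sketch of what an Ingress-based HTTPS setup on GKE could look like, assuming the gateway is exposed as a Service named gateway on port 8080 and a TLS certificate has been stored in a secret named gateway-tls; the host, names, and static-IP annotation are all assumptions, and older clusters would use the extensions/v1beta1 Ingress API instead of networking.k8s.io/v1:

```yaml
# Hedged sketch only: a Kubernetes Ingress terminating HTTPS in front of the gateway on GKE.
# The host, secret, Service name, and port are assumptions, not values from the project.
apiVersion: networking.k8s.io/v1        # older clusters: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway-ingress
  annotations:
    # optional: use a reserved global static IP so the DNS record stays stable
    kubernetes.io/ingress.global-static-ip-name: gateway-static-ip
spec:
  tls:
    - hosts:
        - store.example.com
      secretName: gateway-tls           # kubectl create secret tls gateway-tls --cert=... --key=...
  rules:
    - host: store.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: gateway           # the JHipster gateway Service
                port:
                  number: 8080
```

With something like this, TLS is terminated at the external HTTP(S) load balancer that GKE creates, while the gateway keeps doing its Ribbon-based routing to the microservices internally. Note that the default GKE ingress controller expects the backend Service to be of type NodePort (or a ClusterIP Service using container-native NEGs).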

Related

How do I make my microservices only accessible by the API gateway

I would like to know how I can protect my Node.js microservices so that only the API gateway can access them. Currently each microservice is exposed on its own port on my machine and can be accessed directly without passing through the gateway. That defeats the purpose of the gateway, which is to serve as the only entry point into the system for secure and authorized information exchange.
The microservices and the gateway are currently built with Node.js and Express.
The plan is to eventually deploy it on the cloud (DigitalOcean). I'd appreciate any response. Thanks.
Kubernetes can solve this problem.
Kubernetes manages containers, where each container can be a microservice.
When connecting your microservices to your gateway server, you can choose to allow external connections only to the gateway. You would have a load balancer / nginx in your Kubernetes cluster that redirects requests to your gateway server.
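As a hedged illustration of that split (the names and ports below are placeholders): each microservice gets a ClusterIP Service, which is only reachable from inside the cluster, while only the gateway gets a LoadBalancer Service with an external address.

```yaml
# Internal-only microservice: no external IP, reachable only from inside the cluster
apiVersion: v1
kind: Service
metadata:
  name: orders-service        # hypothetical microservice name
spec:
  type: ClusterIP             # the default; never exposed outside the cluster
  selector:
    app: orders
  ports:
    - port: 3000
      targetPort: 3000
---
# Public entry point: only the gateway gets an external load balancer
apiVersion: v1
kind: Service
metadata:
  name: api-gateway
spec:
  type: LoadBalancer          # the cloud provider provisions an external load balancer
  selector:
    app: api-gateway
  ports:
    - port: 80
      targetPort: 3000
```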
Kubernetes has many other features, such as:
service discovery: each of your microservices' IP addresses could change on restart or redeployment unless you assign static IPs to all of your services. Service discovery solves this problem.
high availability, horizontal scaling & zero downtime: you can configure several replicas for each of your services, so when one instance goes down there are still other replicas alive to handle the remaining requests. This also helps with CI/CD. With something like GitHub Actions you can build a smooth CI/CD pipeline: when you deploy a new Docker image (i.e. update a microservice), Kubernetes launches the new container first and only then kills the old one, so you get zero downtime (see the Deployment sketch below).
If you are working with microservices, you should definitely take a deep dive into Kubernetes.
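To make the replica / zero-downtime point concrete, here is a minimal Deployment sketch; the name, image, and counts are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                     # several replicas for high availability
  selector:
    matchLabels:
      app: orders
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1                 # start one new pod first...
      maxUnavailable: 0           # ...and never take an old pod down before the new one is ready
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: myregistry/orders:1.2.3   # hypothetical image tag; updating it triggers a rolling update
          ports:
            - containerPort: 3000
```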

How to make a call from an Angular application to a .NET Core Web API in Kubernetes

I have created Docker images for the Angular app and the .NET Core API and deployed them to Azure Kubernetes Service. I used an Ingress controller to expose the Angular app outside the cluster. I would like to know how to make an HTTP call from the Angular app to the Core API, which is exposed as a ClusterIP service (without exposing it outside).
For example: http://xxxxxxxxxx/api/test (from the Angular app)
What is the value of xxxxxxxxxxx here?
Or how can we make the call?
Could you please suggest an example?
Every Service you create in Kubernetes has a DNS name (two, actually):
service_name:service_port
service_name.namespace_name.svc.cluster.local:service_port
These will always resolve to the proper IP address for your Service (as long as Kubernetes is functioning properly).
So just create a Service for your API and use this notation to access it.
Reading: https://kubernetes.io/docs/concepts/services-networking/service/
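As a hedged illustration, a ClusterIP Service for the API and the two DNS names it would get; the Service name, namespace, and ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: core-api              # hypothetical name for the .NET Core API Service
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: core-api
  ports:
    - port: 80
      targetPort: 5000
# In-cluster callers can then reach it at:
#   http://core-api/api/test                                (same namespace)
#   http://core-api.default.svc.cluster.local/api/test      (fully qualified)
```

Note that these names only resolve inside the cluster; if the Angular code runs in the user's browser rather than server-side, the call still has to go through something externally reachable, such as an Ingress path that forwards /api to this ClusterIP Service.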

JHipster microservice gateway with multi-cluster Ingress

I have a JHipster-generated microservice gateway and application that I run on Google's GKE using the JHipster Kubernetes generator. I have Istio deployed in the Kubernetes cluster and am not using the jhipster-registry.
When I deploy the gateway with ServiceType=Ingress, the communication between the gateway and the application works great. I am trying to set up a GKE multi-cluster ingress which load-balances the application deployed to clusters in different regions.
Google has a beta tool called kubemci which sets up all the plumbing for the load balancers. However, in order to use kubemci, the gateway needs to be deployed as a NodePort instead of a ClusterIP. When I deploy with ServiceType=NodePort, I get errors when trying to create entities.
The error is:
translation-not-found[Http failure response for http://store.xxxx.com/product04/api/products?page=0&size=20&sort=id,asc: 404 Not Found]
I do not get this error when the app is deployed as a ClusterIP and I access it through the Istio ingress gateway. Does anyone know what I need to do to get the microservices to talk to the gateway when it's defined as a NodePort?
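For context, a hedged sketch of what the NodePort variant of such a gateway Service looks like; the names and ports are placeholders and do not come from the project above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: gateway
spec:
  type: NodePort              # what kubemci requires, instead of the default ClusterIP
  selector:
    app: gateway
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 30080         # optional; must fall in the cluster's NodePort range (30000-32767 by default)
```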

JHipster Gateway with legacy REST service

I've set up a POC with the following components:
JHipster registry
JHipster API gateway
2 JHipster microservices
The communication works very well between these components.
Another requirement of my POC is to register a legacy web service (SOAP or REST, not developed with JHipster) in the JHipster gateway.
Is it possible?
I would like to use the API gateway as a single entry point for all clients (external and internal) to access all of my company's web services.
Thank you.
Two important criteria are service discovery and security.
For service discovery, JHipster offers two options: JHipster Registry (Eureka) and HashiCorp Consul. Consul is better suited for legacy apps because it is less invasive: you can rely on DNS resolution, templates, and a sidecar proxy approach.
For security, the legacy apps should be able to consume authentication tokens so that authorizations can be applied.
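As a complementary sketch that is independent of the discovery choice above: assuming the gateway still uses the default Zuul routing of JHipster 6, a legacy endpoint that needs no discovery at all can also be exposed through a static route in the gateway's application.yml. The route name, path, and URL below are placeholders:

```yaml
# Static Zuul route in the gateway's application.yml (Spring Cloud Netflix Zuul,
# the routing layer used by the JHipster 6 gateway). All values are placeholders.
zuul:
  routes:
    legacy-billing:                              # hypothetical route name
      path: /services/billing/**                 # path exposed on the gateway
      url: http://legacy-billing.internal:8080/  # the legacy SOAP/REST endpoint
```

Clients then call the gateway under /services/billing/..., and the gateway forwards the request to the legacy host; security (token propagation) still has to be handled as described in the answer above.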

Connecting JHipster to Spinnaker?

I'm new to Spinnaker. JHipster uses microservices and has its own load balancer (Netflix OSS).
How can it be connected to Spinnaker's load balancer?
Spinnaker does not have a load balancer itself. The cloud providers handled by Spinnaker do.
Let me quote the load balancer definition from the Spinnaker docs:
Load Balancer: A Load Balancer is associated with an ingress protocol and port range, and balances traffic among instances in the corresponding Server Group. Optionally, you can enable health checks for a load balancer, with flexibility to define health criteria and specify the health check endpoint.
AFAIK, JHipster is a framework for developing microservices rather than "using" them. But I may be wrong.
Assuming JHipster configures Spring Cloud client-side load balancing with Eureka, a JHipster app would work well when deployed with Spinnaker with the Eureka provider enabled. The Eureka status of the JHipster app would be reflected in the Spinnaker UI and also used to gate progression through a deployment pipeline (Spinnaker will not proceed with the pipeline until the app enters the UP state).
