We are deploying JHipster apps on Kubernetes: a Registry in a microservices setup and a monolithic application (all generated with 6.7.1 and Keycloak; we believe this issue exists with 6.10.5 too). When we increase the pod count of either the Registry or the monolith to two, these applications start behaving erratically and we see the error:
No 'Access-Control-Allow-Origin' header is present on the requested resource
Similar to https://github.com/jhipster/generator-jhipster/issues/10642
We tried everything suggested there, but we suspect the following:
Since the UI uses a cookie, we assume the application is stateful and that session replication is not happening when deployed on Kubernetes. Is there a JHipster configuration we need to set to enable session replication, since the stack already uses Hazelcast?
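For reference, the only thing we can imagine adding ourselves is a Spring Session configuration along these lines (a minimal sketch, assuming the spring-session-hazelcast dependency is added; HttpSessionConfig is a made-up class name, and we assume Spring Session would pick up the HazelcastInstance bean JHipster already creates):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

// Hypothetical sketch: store HTTP sessions in the HazelcastInstance bean
// that JHipster already creates for its cache, so all pods share sessions.
@Configuration
@EnableHazelcastHttpSession
public class HttpSessionConfig {
}
```

Is this the intended approach, or is there a generator option we missed?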
Our Keycloak is deployed in the same cluster and namespace using the official Keycloak Helm charts, since the JHipster Kubernetes sub-generator does not provide them.
Related
I have two projects: one in ASP.NET Core MVC in one container, the other in ASP.NET Core WebAPI in a separate container; both are deployed with Azure Kubernetes Service and Helm.
The MVC project makes calls to the WebAPI project. It works locally by using localhost.
My question is how to set it up so that it works on AKS and accepts public requests.
You can read this document, which explains Kubernetes networking: how containers communicate within the same pod, between different pods on the same node, and between pods on different nodes. Kubernetes usually uses a Service for each Deployment so that pods can communicate with each other.
It's simple to achieve. You just need to build an image for each of your applications and use those images to create the Deployments. Take note of the Service for each Deployment. In your code, when you want to connect to a pod of another Deployment, you can connect to it directly through that Deployment's Service.
Here I show you a simple plan:
deployment: frontend -> ASP.NET Core MVC, service: frontend-service
deployment: backend -> ASP.NET Core WebAPI, service: backend-service
Within the frontend container, you can connect to the backend container like this (I just use a shell command for the example):
curl http://backend-service
In other words, you just connect to the container you want through its Service.
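The same idea from application code, as a minimal Java sketch for illustration (your projects are ASP.NET Core, but an HttpClient call there works the same way; /api/values is an assumed endpoint on the WebAPI):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BackendCall {
    public static void main(String[] args) throws Exception {
        // "backend-service" resolves via cluster DNS to the backend pods;
        // the Service load-balances across them.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://backend-service/api/values")) // assumed endpoint
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```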
Helm just uses the chart to manage all of this for you.
I need some guidance, please. First, here are my project details:
- Jhipster v6.0.0
- Angular
- Microservices architecture with Jhipster-Registry + UAA
- No monitoring, no Ingress, no Istio (just the default options, full JHipster)
- Deployed on Google Kubernetes Engine cluster
So, if I understand correctly, with my current setup it is the gateway that does the load balancing using Netflix Ribbon, and it is the entry point from the World Wide Web to my app. How can I make my app accessible over HTTPS with an SSL certificate on GKE? I'm a bit confused; do I need to switch to an Ingress?
Thanks
I am planning to upgrade the existing Spark 1.6 to 2.1 in Cloudera. I was advised to assign the gateway role to all NodeManager and ResourceManager nodes. The current gateway role is assigned to a proxy node, which is not included in the planned Spark2 deployment because that proxy node already has too many (20+) roles. Can anyone give a suggestion here? I checked the Cloudera docs and don't see a guideline on this (or maybe I missed it?).
Thanks a lot.
I have a slight disagreement with the other answer, which says
"By default any host running a service will have the config files included so you don't need to add a gateway role to your Node Manager and Resource Manager roles"
Just having NodeManager and ResourceManager running on a node only gives you the configuration files for YARN, not Spark2. That said, you only need to deploy the Spark gateway role to your edge node, where end users are allowed to log in and run command-line tools such as beeline, the hdfs command, and spark-shell/spark-submit. As a security policy, no one should be allowed to log in to your NodeManager/DataNode hosts.
In your case, that edge node appears to be what you call the proxy node. A gateway is just configuration files, not a running process, so I don't think you need to be concerned about the many existing roles there.
A gateway role just holds the config files, such as /etc/hadoop/conf/*. It allows clients (the hdfs, hadoop, yarn, and spark CLIs) to run on that host and submit commands to the cluster. By default any host running a service will have the config files included so you don't need to add a gateway role to your Node Manager and Resource Manager roles.
The official documentation describes it as follows:
Managing Roles: Gateway Roles
A gateway is a special type of role whose sole purpose is to designate a host that should receive a client configuration for a specific service, when the host does not have any roles running on it. Gateway roles enable Cloudera Manager to install and manage client configurations on that host. There is no process associated with a gateway role, and its status will always be Stopped. You can configure gateway roles for HBase, HDFS, Hive, Kafka, MapReduce, Solr, Spark, Sqoop 1 Client, and YARN.
I have set up my JHipster UAA server, gateways, and other microservices, and I want to use the @AuthorizedFeignClient annotation for inter-service communication, as explained here: https://jhipster.github.io/using-uaa/
But I cannot find it in the generated Java source (JHipster release 3.11.0).
Do I have to manually copy into my project the only two classes found in the JHipster GitHub generator for the moment (because this is still in beta?):
.../client/_AuthorizedFeignClient.java
and
.../client/_OAuth2InterceptedFeignConfiguration.java
Thanks,
Francois
Currently the @AuthorizedFeignClient annotation is only available for microservice applications using UAA as the authentication type, but not for gateways or the UAA server itself!
I guess you were looking for the annotation in the gateway or the UAA server.
Why is it like this? For the gateway, it is because the gateway already has a couple of responsibilities, so building composite logic in there is not a good idea.
If you generate a microservice (not a gateway, not a UAA server), you should have the client package in your Java root with this annotation, as well as some more configuration (Feign client config, load-balanced resource details...).
You can copy those to your gateway to make it work there.
You can copy them to the UAA, too. More than that, this will even work, but with a weird quirk: when the UAA asks service "foo" for some data, it will first ask itself for client-credentials authentication, like performing a query to itself, when it could just grant the access directly. There is no clean way around this, and I didn't want to keep it in this inelegant form in JHipster, so the annotation is for microservices only.
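For reference, a client interface in a generated microservice looks roughly like this (a sketch following the using-uaa docs linked above; "foo", FooClient, and FooDTO are placeholder names):

```java
import java.util.List;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;

// Feign client for the "foo" microservice; the annotation wires in an
// interceptor that first obtains a client-credentials token from the UAA.
@AuthorizedFeignClient(name = "foo")
public interface FooClient {

    @RequestMapping(value = "/api/foos", method = RequestMethod.GET)
    List<FooDTO> getFoos();
}
```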
I was working on a side project and decided to redesign my skeleton project as microservices. So far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I arrived at this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How do I make the API gateway smart enough to load-balance requests if I have two nodes of the same microservice?
If one of the microservices is down, how should the discovery service know?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API Gateway approach. All the services, including the gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built using Node.js as well. Some topics to mention related to your question:
Authentication/Authorization: The GW service is the single entry point for the clients. All authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), which have Node.js libraries as well. We keep authorization information, like the user's roles, in the JWT token. Once the token is generated in the GW and returned to the client, the client sends the token in an HTTP header on each request; we then check whether the token has expired and whether the client has the required role to call the specific service. In this approach, you don't need to keep track of the user's session on the server side. Actually, there is no session; the required information is in the JWT token.
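To illustrate the token flow, here is a minimal sketch using the jjwt 0.9 API (the secret, subject, and role values are made up; in practice the secret comes from configuration):

```java
import java.util.Date;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class JwtExample {
    // Assumption: a base64-encoded secret shared by the GW instances
    private static final String SECRET = "c2VjcmV0LWtleS1mb3ItZGVtbw==";

    public static void main(String[] args) {
        // The GW issues the token after login, embedding the user's roles as a claim
        String token = Jwts.builder()
                .setSubject("alice")
                .claim("roles", "ROLE_USER")
                .setExpiration(new Date(System.currentTimeMillis() + 3_600_000))
                .signWith(SignatureAlgorithm.HS512, SECRET)
                .compact();

        // On each request, the GW verifies the signature and reads roles/expiry
        Claims claims = Jwts.parser()
                .setSigningKey(SECRET)
                .parseClaimsJws(token)
                .getBody();
        System.out.println(claims.getSubject() + " -> " + claims.get("roles"));
    }
}
```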
Service Discovery/Load Balancing: We use Docker and Docker Swarm, the clustering tool bundled with the Docker engine (since Docker 1.12). Our services are Docker containers. The containerized approach makes it easy to deploy, maintain, and scale the services. At the beginning of the project, we used HAProxy, Registrator, and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we don't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach you can easily create isolated environments for your services, like dev, beta, and prod, on one or multiple machines by creating a different network for each environment. Once you create the network and deploy the services, service discovery and load balancing are not your concern. On the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm, you can easily scale services with one command. On each request to a service, Docker distributes (load-balances) the request to an instance of that service.
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some other kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), your API gateway should be done in Node, but it should sit behind HAProxy, which will take care of load balancing and HTTPS.
Discovery is in the correct position; if you're looking for one that fits your design, look no further than Consul.
You can use consul-template or your own micro-discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all the authorization middleware.
It's smart to track requests by having the API gateway assign a request ID to each one, so its lifecycle can be traced within the "inner" system.
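A minimal sketch of that idea as a servlet filter in the gateway (a hypothetical class; X-Request-ID is a common convention for the header name, not something mandated above):

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RequestIdFilter implements Filter {

    private static final String HEADER = "X-Request-ID";

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        // Reuse an incoming ID if an upstream caller already set one;
        // otherwise mint a new one here at the edge.
        String id = request.getHeader(HEADER);
        if (id == null || id.isEmpty()) {
            id = UUID.randomUUID().toString();
        }
        // Echo the ID back so clients and downstream logs can correlate the request
        response.setHeader(HEADER, id);
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
    }
}
```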
I would add Redis for messaging/workers/queues and fast in-memory needs like caching and cache invalidation (you can't handle a full microservices architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
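For the cache part, a minimal Jedis sketch (the key and value are made up, and "redis" is the assumed container name resolvable on the Docker network):

```java
import redis.clients.jedis.Jedis;

public class CacheSketch {
    public static void main(String[] args) {
        // "redis" is the assumed service/container name on the shared network
        try (Jedis jedis = new Jedis("redis", 6379)) {
            // Cache with a 60-second TTL: expiry acts as passive invalidation
            jedis.setex("user:42:profile", 60, "{\"name\":\"Alice\"}");
            System.out.println(jedis.get("user:42:profile"));
            // Active invalidation when the underlying record changes
            jedis.del("user:42:profile");
        }
    }
}
```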
Run all of this in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.