Where is the @AuthorizedFeignClient annotation in JHipster release 3.11.0?

I have set up my JHipster UAA server, gateway and other microservices, and I want to use the @AuthorizedFeignClient annotation for inter-service communication, as explained here: https://jhipster.github.io/using-uaa/
But I cannot find it in the generated Java sources (JHipster release 3.11.0).
Do I have to manually copy into my project the only two classes found in the JHipster generator on GitHub for the moment (because this is still in beta?):
.../client/_AuthorizedFeignClient.java
and
.../client/_OAuth2InterceptedFeignConfiguration.java
Thanks,
Francois

Currently the @AuthorizedFeignClient annotation is only available for microservice applications using UAA as the authentication type, not for gateways or the UAA server itself!
I guess you were looking for the annotation in the gateway or in the UAA server.
Why is it like this? For the gateway, it is because the gateway already has a couple of responsibilities, so building composite logic in there is not a good idea.
If you generate a microservice (not a gateway, not a UAA server), you should have the client package in your Java root with this annotation, as well as some more configuration (Feign client config, load-balanced resource details...).
You can copy those to your gateway to make it work there.
You can copy them to the UAA server, too. This will even work, but with one weird side effect: when the UAA asks service "foo" for some data, it will first ask itself for a client credentials authentication, effectively performing a query to itself, when it could just grant access to itself directly. There is no clean way around this, and I didn't want to keep it in this awkward form in JHipster, so the annotation is for microservices only.
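Once those classes are in place in a microservice, a minimal sketch of a client calling service "foo" could look like the following (the interface name, endpoint and FooDTO are illustrative placeholders in the style of the JHipster UAA documentation, not generated code):

    import java.util.List;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    // AuthorizedFeignClient comes from the client package generated in your microservice.
    // Calls made through this interface get a client-credentials OAuth2 token attached.
    @AuthorizedFeignClient(name = "foo")
    public interface FooServiceClient {

        @RequestMapping(method = RequestMethod.GET, value = "/api/foos")
        List<FooDTO> getFoos();
    }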

Related

How to bypass authentication/authorization on a resource using JHipster microservice + gateway?

We are implementing microservices with JHipster and we have a scenario where we need a specific resource in one of our microservices to be available via the gateway without the need for authorization/authentication.
What sort of configuration should be done in the microservice app or gateway in order to achieve such behavior?
You need to give access to the resource on that microservice, so the resource is accessible from the gateway without authentication.
In your microservice, edit SecurityConfiguration to exclude the service from .antMatchers("/api/**").authenticated() by adding, for example, a .antMatchers("/api/service1").permitAll() line before it.
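For example, the relevant part of the microservice's SecurityConfiguration could look roughly like this (a sketch only; the exact surrounding matchers depend on what the generator produced for your application):

    // Excerpt from the microservice's generated SecurityConfiguration class.
    // "/api/service1" is opened up, while the rest of "/api/**" stays secured.
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf().disable()
            .authorizeRequests()
            .antMatchers("/api/service1").permitAll()
            .antMatchers("/api/**").authenticated();
    }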

JHipster Gateway with legacy REST service

I've set up a POC with the following components:
JHipster registry
JHipster API gateway
2 JHipster microservices
The communication works very well between these components.
Another requirement of my POC is to register a legacy web service (SOAP or REST, not developed with JHipster) with the JHipster gateway.
Is it possible?
I would like to use the API gateway as a single entry point for all clients (external and internal) to access all of my company's web services.
Thank you.
Two important criteria are service discovery and security.
For service discovery, JHipster offers two options: the JHipster Registry (Eureka) and HashiCorp Consul. Consul is better suited for legacy apps, as it is less invasive: you can use DNS resolution, templates, and a sidecar proxy approach.
For security, legacy apps should be able to consume authentication tokens to apply authorizations.

What is the jhipster_gateway_authorized-microservices-endpoints__app1 Spring property for?

I created a microservice project with a gateway and related components, and I have a question about one of the gateway's Spring properties.
I have this one in my gateway's application-dev.yml (and also in prod):
jhipster:
  gateway:
    authorized-microservices-endpoints:
      app1: /api,/v2/api-docs
I suspect that 'app1' should be replaced by one or all of my microservices (and maybe the UAA one too), but I don't know what this property does.
Any description or insight on it?
Regards,
The jhipster.gateway.authorized-microservices-endpoints config property controls access to your microservices when they are requested through the gateway. In this example, only /api (API endpoints) and /v2/api-docs (Swagger docs) are accessible through the gateway for the app1 microservice.
This means that if you try to request a mapping that is not in the list (such as an actuator endpoint like http://gateway:8080/app1/management/info), the request will fail. You can still make the request directly to the microservice if you need to.
By default, all paths of every microservice are open. To secure your apps, you need to add your microservices to the list and set the accessible endpoints.
In summary, this config lets you reduce the attack surface of your microservices. Here's a link to the related JHipster issue where this was added. You can also find details in JHipster's Gateway documentation.
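For illustration, a configuration restricting two microservices might look like the following (app1 and app2 are placeholder service names, not part of the original answer):

    jhipster:
      gateway:
        authorized-microservices-endpoints:
          app1: /api,/v2/api-docs
          app2: /api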

Docker Microservice Architecture - Communication between different containers

I've just started working with Docker and I'm currently trying to work out how to set up a project using a microservice architecture.
My goal is to move out different services from the api and instead have each one in their own container.
Current architecture
Desired architecture
Questions
How does the API gateway communicate with the internal services? Should all microservices have their own API which only accepts communication from the API gateway? Any other means of communication?
What would be the ideal authentication between the gateway and the microservices? JWT token? Basic Auth?
Do you see any problems with this architecture if hosted in Azure?
Is integration testing even possible in the desired architecture? For example, I use EF SQLite in-memory for integration testing and it's easily accessible within the API, but I don't see this working if the database is located in its own container.
Anything important here that I've missed?
I created an application with a completely microservice-based architecture running on AWS ECS (EC2 Container Service); each microservice is pushed to a container as a Docker image. Two EC2 instances are running to achieve high availability, and the same microservices run on both instances, so if one instance goes down the other can take care of the requests.
Each microservice uses its own database, and inter-microservice communication happens over HTTP using a client-side registry and discovery; Spring Cloud Consul and Netflix Eureka can be used for service discovery and registration.
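As a minimal sketch of the discovery part (assuming Spring Cloud with a Eureka or Consul discovery client on the classpath; the class name is illustrative):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

    // Registers this microservice with the configured registry (Eureka or Consul)
    // so the gateway and the other services can discover it by name.
    @SpringBootApplication
    @EnableDiscoveryClient
    public class FooServiceApplication {

        public static void main(String[] args) {
            SpringApplication.run(FooServiceApplication.class, args);
        }
    }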
Please find the diagram below:

Microservices Architecture in NodeJS

I was working on a side project and I decided to redesign my skeleton project as microservices; so far I haven't found any open-source project that follows this pattern. After a lot of reading and searching I came to this design, but I still have some questions and thoughts.
Here are my questions and thoughts:
How can I make the API gateway smart enough to load balance requests if I have two nodes of the same microservice?
If one of the microservices is down, how should the discovery service know about it?
Is there any similar implementation? Is my design right?
Should I use Eureka or something similar?
Your design seems OK. We are also building our microservice project using the API gateway approach. All the services, including the gateway service (GW), are containerized (we use Docker) Java applications (Spring Boot or Dropwizard). A similar architecture could be built using Node.js as well. Some topics to mention related to your questions:
Authentication/Authorization: The GW service is the single entry point for the clients. All authentication/authorization operations are handled in the GW using JSON Web Tokens (JWT), for which there are Node.js libraries as well. We keep authorization information, such as the user's roles, in the JWT token. Once the token is generated in the GW and returned to the client, the client sends the token in an HTTP header with each request; we then check whether the client has the required role to call the specific service and whether the token has expired. In this approach, you don't need to keep track of the user's session on the server side. Actually, there is no session. The required information is in the JWT token.
Service Discovery / Load Balancing: We use Docker and Docker Swarm, the clustering tool bundled with Docker Engine (since Docker 1.12). Our services are Docker containers. The containerized approach using Docker makes it easy to deploy, maintain and scale the services. At the beginning of the project, we used HAProxy, Registrator and Consul together to implement service discovery and load balancing, similar to your drawing. Then we realized we didn't need them for service discovery and load balancing as long as we create a Docker network and deploy our services using Docker Swarm. With this approach, you can easily create isolated environments for your services, such as dev, beta and prod, on one or multiple machines by creating different networks for each environment. Once you create the network and deploy the services, service discovery and load balancing are no longer your concern. In the same Docker network, each container has the DNS records of the other containers and can communicate with them. With Docker Swarm, you can easily scale services with one command. At each request to a service, Docker distributes (load balances) the request to an instance of the service.
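Regarding the JWT check described above, here is a minimal Java sketch (assuming the jjwt library; the shared secret and the "roles" claim name are illustrative assumptions, not details from the answer):

    import io.jsonwebtoken.Claims;
    import io.jsonwebtoken.Jwts;
    import java.util.List;

    public class TokenChecker {

        // Shared HMAC secret between the GW and the services (assumption).
        private final byte[] secretKey = "change-me".getBytes();

        @SuppressWarnings("unchecked")
        public boolean hasRole(String token, String requiredRole) {
            // parseClaimsJws throws if the signature is invalid or the token has expired.
            Claims claims = Jwts.parser()
                .setSigningKey(secretKey)
                .parseClaimsJws(token)
                .getBody();
            List<String> roles = claims.get("roles", List.class);
            return roles != null && roles.contains(requiredRole);
        }
    }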
Your design is OK.
If your API gateway needs to implement (and that's probably the case) CAS or some kind of auth (via one of the services, i.e. some kind of user service), and should also track all requests and modify the headers to carry the requester metadata (for internal ACL/scoping usage), then your API gateway should be done in Node, but it should sit behind HAProxy, which will take care of load balancing/HTTPS.
Discovery is in the correct position; if you are looking for one that fits your design, look no further than Consul.
You can use consul-template or your own micro-discovery framework for the services and the API gateway, so they share endpoint data on boot.
ACL/authorization should be implemented per service, and the first request from the API gateway should be subject to all authorization middleware.
It's smart to have the API gateway track requests by assigning a request ID to each one, so its lifecycle can be traced within the "inner" system.
I would add Redis for messaging/workers/queues/fast in-memory stuff like caching/cache invalidation (you can't handle a whole MS architecture without one), or take RabbitMQ if you have many more distributed transactions and a lot of messaging.
Spin all of this up in containers (Docker) so it will be easier to maintain and assemble.
As for BI, why would you need a service for that? You could have an external ELK stack (Elasticsearch, Logstash, Kibana) and get dashboards, log aggregation, and a huge big-data warehouse at once.
