What's the difference between JHipster UAA and Cloud Foundry UAA, and are they compatible with each other?
This doesn't describe it entirely, but from http://jhipster.github.io/using-uaa/:
JHipster UAA is a user accounting and authorizing service for securing JHipster microservices using the OAuth2 authorization protocol.
To clearly distinguish JHipster UAA from other "UAA"s such as Cloud Foundry's UAA, JHipster UAA is a fully configured OAuth2 authorization server with the users and roles endpoints inside, wrapped into a usual JHipster application. This allows the developer to deeply configure every aspect of their user domain, without being restricted by the policies of other ready-to-use UAAs.
I'd say that JHipster UAA is simply a Spring Boot app (with JHipster tweaks, but without the Angular client side) that uses @EnableAuthorizationServer to make the UAA app serve as an OAuth2 authorization server: it grants tokens to client apps (JHipster gateways in this case) so they can call resource servers, and it provides the public key that resource servers use to verify those tokens. JHipster UAA is predominantly a server-side app at the moment. It contains the authorization server code and stores the actual user information, but it has no UI of its own for managing those users (the UI to manage them is duplicated on each gateway app). JHipster's UAA also can't do single sign-on (SSO), unlike Cloud Foundry's UAA, because it doesn't expose a browser login endpoint, which is needed to create a session on the authorization server and enable SSO between client (or gateway) apps.
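To make that concrete, here is a minimal sketch (not the exact code the JHipster generator produces) of an authorization server built with the old Spring Security OAuth project, which is roughly what JHipster UAA wraps in a regular JHipster app. The client id, secret, keystore name and alias below are placeholders:

```java
import java.security.KeyPair;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;
import org.springframework.security.oauth2.config.annotation.configurers.ClientDetailsServiceConfigurer;
import org.springframework.security.oauth2.config.annotation.web.configuration.AuthorizationServerConfigurerAdapter;
import org.springframework.security.oauth2.config.annotation.web.configuration.EnableAuthorizationServer;
import org.springframework.security.oauth2.config.annotation.web.configurers.AuthorizationServerEndpointsConfigurer;
import org.springframework.security.oauth2.provider.token.store.JwtAccessTokenConverter;
import org.springframework.security.oauth2.provider.token.store.KeyStoreKeyFactory;

@SpringBootApplication
public class UaaApplication {
    public static void main(String[] args) {
        SpringApplication.run(UaaApplication.class, args);
    }
}

@Configuration
@EnableAuthorizationServer // this annotation is what turns the app into an OAuth2 authorization server
class UaaConfiguration extends AuthorizationServerConfigurerAdapter {

    @Override
    public void configure(ClientDetailsServiceConfigurer clients) throws Exception {
        // "web_app" is a placeholder client id; a JHipster gateway would authenticate with it
        clients.inMemory()
               .withClient("web_app")
               .secret("changeit")
               .authorizedGrantTypes("password", "refresh_token", "client_credentials")
               .scopes("openid");
    }

    @Override
    public void configure(AuthorizationServerEndpointsConfigurer endpoints) throws Exception {
        // issue JWT access tokens instead of opaque tokens
        endpoints.accessTokenConverter(jwtAccessTokenConverter());
    }

    @Bean
    public JwtAccessTokenConverter jwtAccessTokenConverter() {
        JwtAccessTokenConverter converter = new JwtAccessTokenConverter();
        // load an RSA key pair from a keystore; resource servers only need the public half
        KeyPair keyPair = new KeyStoreKeyFactory(
                new ClassPathResource("keystore.jks"), "password".toCharArray())
                .getKeyPair("selfsigned");
        converter.setKeyPair(keyPair);
        return converter;
    }
}
```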
Cloud Foundry's UAA is much more comprehensive but does much the same thing (as far as what OAuth2 authorization servers do). As it stands right now, Cloud Foundry's UAA is a more mature and flexible app, but it isn't integrated with JHipster out of the box... yet.
I still have an old public GitHub repo that integrates JHipster with Cloud Foundry's UAA, but JHipster has changed a lot since then: https://github.com/sdoxsee/jhipster-openid-connect-microservices
Related
I am planning on building a K8s cluster with many microservices (each running in pods with services ensuring communication). I'm trying to understand how to ensure communication between these microservices is secure. By communication, I mean HTTP calls between microservice A and microservice B's API.
Usually, I would implement an OAuth flow, where an auth server would receive some credentials as input and return a JWT. And then the client could use this JWT in any subsequent call.
I expected K8s to have some built-in authentication server that could generate tokens (like a JWT) but I can't seem to find one.
K8s does have authentication for its API server, but that only seems to authenticate calls that perform Kubernetes specific actions such as scaling a pod or getting secrets etc.
However, there is no mention of simply authenticating HTTP calls (GET, POST, etc.).
Should I just create my own authentication server and make it accessible via a service or is there a simple and clean way of authenticating API calls automatically in Kubernetes?
Not sure how to answer this broad question, but I will try my best.
There are multiple solutions you could apply, but there is nothing built into K8s for application-level auth that you can use directly.
You either have to set up a third-party OAuth or IAM server, or write your own microservice.
There are a few different areas here that shouldn't be conflated.
For service-to-service communication (service A to service B), it is best to use a service mesh like Istio or Linkerd, which provides mutual TLS for security and is also easy to set up.
The connections between services will then be HTTPS and secured, but it's on you to manage and set it up.
If you just run plain traffic inside your backend, you can follow the same method you described:
passing plain HTTP with a JWT payload (or similar) between backend services.
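As a small illustration of that last point, here is a sketch (in Java, using the JDK's built-in HttpClient) of service A attaching a JWT as a Bearer header on a call to service B inside the cluster. The in-cluster URL and the way the token was obtained are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ServiceAClient {

    private final HttpClient http = HttpClient.newHttpClient();

    // service A calls service B, passing the previously obtained JWT as a Bearer token;
    // service B is responsible for validating it before serving the request
    public String callServiceB(String jwt) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://service-b.default.svc.cluster.local/api/items"))
                .header("Authorization", "Bearer " + jwt)
                .GET()
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```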
Keycloak is also a good option for an OAuth server; I would also recommend checking out oauth2-proxy.
Here are a few articles that might also be helpful:
https://medium.com/codex/api-authentication-using-istio-ingress-gateway-oauth2-proxy-and-keycloak-a980c996c259
My Own article on Keycloak with Kong API gateway on Kubernetes
https://faun.pub/securing-the-application-with-kong-keycloak-101-e25e0ae9ec56
GitHub files for POC : https://github.com/harsh4870/POC-Securing-the--application-with-Kong-Keycloak
Keycloak deployment on K8s : https://github.com/harsh4870/Keycloack-postgres-kubernetes-deployment
We are in the process of developing a mobile (native) app and are looking at how we should do user authentication. Most of the information I have found has been about web apps and/or third-party apps accessing public APIs. OAuth 2 is therefore recommended most of the time.
Since we develop the app and our API isn't public, it seems like the Resource Owner Password Credentials OAuth 2 flow could be an option, but according to oauth.net that is not recommended any more.
We are using Google App Engine (with Node.js) and Cloud Endpoints (not sure if Endpoints is needed since it's a private API, but that is another question) as the back end, and both Firebase Auth and Auth0 have built-in support in Endpoints. However, we have some special requirements that make those services unsuitable (Swedish BankID, for example).
What other options are there when authenticating users? Could we write an app in App Engine to check the users' credentials against our database and then send back a JWT (Cloud Endpoints supports custom authentication methods as long as they use JWT)? Would it be safe to do this ourselves? I have found some Node.js libraries for authentication, but most seem to be aimed at web apps. Are there any that are suited to a native app front end?
For authentication, yes, you can perform the check yourself against your database and deliver (or not) a JWT according to the authentication result.
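As a rough, language-agnostic sketch of that idea (shown in Java here; BCryptPasswordEncoder and the jjwt library are stand-ins for whatever you would actually use in Node.js): verify the submitted password against the salted hash stored in your database, and only then sign a short-lived JWT whose public key the API layer can use for verification.

```java
import java.security.KeyPair;
import java.util.Date;

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;

public class CustomAuthSketch {

    private static final BCryptPasswordEncoder ENCODER = new BCryptPasswordEncoder();
    // RS256 key pair: keep the private key in the auth service, publish the public key
    // so the API layer (e.g. Cloud Endpoints) can verify tokens on its own
    private static final KeyPair KEY_PAIR = Keys.keyPairFor(SignatureAlgorithm.RS256);

    /** Compare the submitted password with the salted hash stored in your database. */
    static boolean credentialsValid(String submittedPassword, String storedBcryptHash) {
        return ENCODER.matches(submittedPassword, storedBcryptHash);
    }

    /** Issue a signed, short-lived JWT only after the credential check succeeds. */
    static String issueJwt(String userId) {
        return Jwts.builder()
                .setSubject(userId)
                .setIssuer("https://auth.example.com")                       // illustrative issuer
                .setExpiration(new Date(System.currentTimeMillis() + 3_600_000)) // 1 hour
                .signWith(KEY_PAIR.getPrivate(), SignatureAlgorithm.RS256)
                .compact();
    }
}
```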
However, and it's obvious, this authentication service must be public (because it is there to authenticate unauthenticated users!). You are therefore exposed to attacks on this service. And because it's the authentication service, if it goes down no one can sign in any longer, or worse, if you have a security breach, your user database can be stolen.
That's why I recommend using existing services, with all the protections and all the resources (people, monitoring, automatic response, high availability, ...) deployed to manage a large number of threats. Firebase Auth, Auth0, Okta (...) are suitable providers, but I don't know your Swedish BankID requirement in detail, so you might not be able to avoid some specific development.
I'm reading about the UAA server and have one doubt...
If I have a project with microservices (MS), with:
UAA server
MS type gateway (using UAA authentication)
MS type application (using UAA authentication)
I understand that the UAA server creates a User entity in its own database (for example db_uaa), but my doubt arises when I think about the MS gateway.
Does the MS gateway also create a User entity in its own database (db_gateway), or does it not create one there because it uses UAA authentication?
I hope you can help me to clarify this doubt, thanks.
Users are stored on the UAA side only.
The gateway, which supports the client side (i.e., authentication and user management), will access users through services exposed by the UAA.
If you need to access the user entity from another microservice, look at the Feign concept (see the sketch below).
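For illustration, a minimal sketch of the Feign idea using plain Spring Cloud OpenFeign (JHipster generates its own token-relaying variant; the service name, path and DTO below are placeholders): the microservice declares an interface for the UAA's user endpoint and Feign generates the HTTP client.

```java
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// "uaa" is resolved through the service registry (e.g. Eureka/Consul)
@FeignClient(name = "uaa")
public interface UaaUserClient {

    // calls the user endpoint exposed by the UAA; the shape of UserDTO is up to you
    @GetMapping("/api/users/{login}")
    UserDTO getUser(@PathVariable("login") String login);
}

// minimal placeholder DTO for the user data returned by the UAA
class UserDTO {
    public String login;
    public String email;
}
```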
Hope this helps.
Can anybody point me in a direction for configuring the jhipster gateway to use an external OpenID Connect (OIDC) provider instead of bundling all the UAA stuff? I know of the jhipster UAA server, but that seems to be a standalone auth server.
My use case is that my (many) different jhipster microservice projects will have their authentication and JWT generation stuff handled by an external OIDC provider - not the jhipster gateway itself.
Btw, I'm aware of these projects:
github.com/jhipster/jhipster-openid-connect
github.com/sdoxsee/jhipster-openid-connect-microservices
And I've read through this lengthy discussion, which seems to conclude that an OpenID Connect alternative is in the making:
https://github.com/jhipster/jhipster-experimental-microservices/issues/3
I have some ideas:
Set up a microservices stack to use the UAA server and then, in some way, point it at my external OIDC provider instead of the JHipster UAA server.
Look at what mraible has done with the Stormpath (and soon-to-come Okta) subgenerator.
Experiment with "social logins"(jhipster.github.io/tips/012_tip_add_new_spring_social_connector.html)
Would anybody like to discuss?
You may already be aware, but OpenID Connect support has been merged and is due to come out in the next JHipster release (4.10.0?).
Here's the merged pull request and more support is coming.
I'm trying to design a green-field project that will have several services (serving data) and web-applications (serving HTML). I've read about microservices and they look like good fit.
The problem I still have is how to implement SSO. I want the user to authenticate once and have access to all the different services and applications.
I can think of several approaches:
Add an Identity service and application. Any service that has protected resources will talk to the Identity service to make sure the credentials it has are valid. If they are not, it will redirect the user for authentication.
Use a web standard such as OpenID and have each service handle its own identities. This means the user will have to authorize each service/application individually, but after that it will be SSO.
I'll be happy to hear other ideas. If a specific PaaS (such as Heroku) has a proprietary solution that would also be acceptable.
While implementing a microservice architecture at my previous job, we decided the best approach was in line with #1: add an identity service and authorize access to the other services through it. In our case this was done with tokens. If a request came with an authorization token, we verified that token with the identity service on the first call in the user's session with the service. Once the token had been validated, it was saved in the session, so subsequent calls in the user's session did not have to make the additional call. You can also create a scheduled job if tokens need to be refreshed in that session.
In this situation we were authenticating with an OAuth 2.0 endpoint and the token was added to the HTTP header for calls to our domain. All of the services were routed from that domain so we could get the token from the HTTP header. Since we were all part of the same application ecosystem, the initial OAuth 2.0 authorization would list the application services that the user would be giving permission to for their account.
An addition to this approach was that the identity service provided a proxy client library, which was added to the HTTP request filter chain and handled the authorization process for the service. The service was configured to consume the proxy client library from the identity service. Since we were using Dropwizard, this proxy became a Dropwizard Module, bootstrapping the filter into the running service process. This allowed updates to the identity service that had a complementary client-side update to be consumed easily by dependent services, as long as the interface did not change significantly.
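As a rough sketch of the kind of filter such a client library installs (plain Servlet API here; in Dropwizard it would be registered by the module): the hypothetical IdentityServiceClient stands in for the call to the identity service, and a naive map stands in for the per-session cache described above.

```java
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TokenAuthFilter implements Filter {

    /** Placeholder for the call to the central identity service. */
    public interface IdentityServiceClient {
        boolean isValid(String token);
    }

    private final IdentityServiceClient identityService;
    // naive cache so the identity service is only contacted the first time a token is seen
    private final Map<String, Boolean> validatedTokens = new ConcurrentHashMap<>();

    public TokenAuthFilter(IdentityServiceClient identityService) {
        this.identityService = identityService;
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String header = request.getHeader("Authorization");

        if (header != null && header.startsWith("Bearer ")) {
            String token = header.substring("Bearer ".length());
            boolean valid = validatedTokens.computeIfAbsent(token, identityService::isValid);
            if (valid) {
                chain.doFilter(req, res); // token accepted, continue to the service
                return;
            }
        }
        ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
    }

    @Override public void init(FilterConfig filterConfig) {}
    @Override public void destroy() {}
}
```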
Our deployment architecture was spread across AWS Virtual Private Cloud (VPC) and our own company's data centers. The OAuth 2.0 authentication service was located in the company's data center while all of our application services were deployed to AWS VPC.
I hope the approach we took is helpful to your decision. Let me know if you have any other questions.
Chris Sterling explained standard authentication practice above, and it makes absolute sense. I just want to add another thought here for some practical reasons.
We implemented an authentication service and multiple other microservices that relied on the auth server to authorize access to resources. At some point we ran into performance issues due to too many round trips to the authentication server, and we also had scalability issues with the auth server as the number of microservices increased. We changed the architecture a little to avoid so many round trips.
The auth server is contacted only once, with the credentials, and it generates a token signed with a private key. The corresponding public key is installed in each client (microservice server), which can then validate the token without contacting the auth server. The token contains the time it was generated, and a client utility installed in each microservice checks its validity as well. Even though this is not a standard implementation, we have had pretty good success with this model, especially when all the microservices are hosted internally.
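A minimal sketch of that validation step, assuming the jjwt library: each microservice holds only the auth server's public key and checks the signature and expiry locally, with no round trip to the auth server.

```java
import java.security.PublicKey;

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.JwtException;
import io.jsonwebtoken.Jwts;

public class LocalTokenValidator {

    private final PublicKey authServerPublicKey; // distributed to every microservice

    public LocalTokenValidator(PublicKey authServerPublicKey) {
        this.authServerPublicKey = authServerPublicKey;
    }

    /** Returns the user id if the token is genuine and not expired, otherwise null. */
    public String validate(String jwt) {
        try {
            Claims claims = Jwts.parserBuilder()
                    .setSigningKey(authServerPublicKey) // signature check, no auth-server call
                    .build()
                    .parseClaimsJws(jwt)                // also rejects expired tokens
                    .getBody();
            return claims.getSubject();
        } catch (JwtException e) {
            return null; // invalid signature, malformed token, or expired
        }
    }
}
```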