Pipeline Keycloak to microservices using nginx in Node.js

I have several microservices, written as Node.js apps, that are protected by Keycloak. I am currently using Express Gateway to pipeline auth and then access the other microservices, which is working fine. My requirement now is to use nginx instead of Express Gateway. Since I am new to this, can someone point me in the right direction on how to set up the following?
Use nginx as the gateway
Pipeline Keycloak auth to get the access token
Pass the token to the other microservices to access them
Below is the sample YAML that I maintain in my Express Gateway setup:
http:
  port: ${PORT}
admin:
  host: localhost
  port: 9081
apiEndpoints:
  microservice:
    host: "*"
    paths: ["/api/service/", "/api/service/*"]
serviceEndpoints:
  microService:
    url: ${MICRO_SERVICE_URL}
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - keycloak-protect
pipelines:
  microservicePipeline:
    apiEndpoints:
      - service
    policies:
      - cors:
      - keycloak-protect:
      - proxy:
          - action:
              serviceEndpoint: microService
              changeOrigin: true
Kindly help me with how to proceed further. Thanks.
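A minimal sketch of one common approach, assuming nginx is built with the ngx_http_auth_request_module and that a small token-validation service sits between nginx and Keycloak (the ports, upstream name, and /auth/validate path below are hypothetical):

# nginx.conf (sketch)
events {}

http {
    upstream microservice {
        server 127.0.0.1:3000;              # hypothetical Node.js microservice
    }

    server {
        listen 8080;

        # internal subrequest that checks the Authorization header against Keycloak
        location = /auth/validate {
            internal;
            proxy_pass              http://127.0.0.1:9000/validate;  # hypothetical token-validation service
            proxy_pass_request_body off;
            proxy_set_header        Content-Length "";
            proxy_set_header        Authorization $http_authorization;
        }

        location /api/service/ {
            auth_request /auth/validate;                             # reject with 401/403 if the token is invalid
            proxy_set_header Authorization $http_authorization;      # forward the access token downstream
            proxy_pass http://microservice;
        }
    }
}

The validation service can be a few lines of Node.js that verifies or introspects the token against Keycloak and answers 2xx or 401; nginx itself then only routes requests and forwards the token to the microservices.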

Related

403 Forbidden, communication among docker containers

I have an application composed of a React client (frontend), an Express server (backend), and Keycloak. For development purposes, I run Keycloak inside a Docker container and expose its corresponding port (8080); the frontend and backend run locally on my machine and connect to Keycloak on that port. The backend serves some REST endpoints, and these endpoints are protected by Keycloak. Everything works fine.
However, when I tried to containerize my application for production by putting the backend in a container and running everything with docker-compose (the frontend still runs on my local machine), the backend rejected all requests from the frontend, even though those requests carry a valid token. I guess the problem is that the backend cannot connect to Keycloak to verify the token, but I don't know why, or how to fix it.
This is my docker-compose.yml:
version: "3.8"
services:
  backend:
    image: "backend"
    build:
      context: .
      dockerfile: ./backend/Dockerfile
    ports:
      - "5001:5001"
  keycloak:
    image: "jboss/keycloak"
    ports:
      - "8080:8080"
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - KEYCLOAK_IMPORT=/tmp/realm-export.json
    volumes:
      - ./realm-export.json:/tmp/realm-export.json
  mongo_db:
    image: "mongo:4.2-bionic"
    ports:
      - "27017:27017"
  mongo_db_web_interface:
    image: "mongo-express"
    ports:
      - "4000:8081"
    environment:
      - ME_CONFIG_MONGODB_SERVER=mongo_db
This is the Keycloak configuration in the backend code:
{
  "realm": "License-game",
  "bearer-only": true,
  "auth-server-url": "http://keycloak:8080/auth/",
  "ssl-required": "external",
  "resource": "backend",
  "confidential-port": 0
}
This is the Keycloak configuration in the frontend code:
{
  URL: "http://localhost:8080/auth/",
  realm: 'License-game',
  clientId: 'react'
}
This is the configuration of Keycloak for the backend.
The backend and frontend are using different Keycloak URLs in your case - http://keycloak:8080/auth/ vs http://localhost:8080/auth/ - so they expect different issuers in the token.
So yes, the token from the frontend is valid, but not for the backend, because the backend expects a different issuer value. Use the same Keycloak domain everywhere and you won't have this kind of problem.
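One minimal way to line the URLs up, sketched under the assumption that you add an entry like 127.0.0.1 keycloak to your development machine's hosts file so the browser can resolve the same hostname the backend container already uses:

// frontend Keycloak config (sketch) - same hostname as the backend adapter,
// reachable from the browser once "keycloak" resolves to 127.0.0.1 locally
{
  URL: "http://keycloak:8080/auth/",
  realm: 'License-game',
  clientId: 'react'
}

With both sides pointing at http://keycloak:8080/auth/, the issuer embedded in the token matches what the backend adapter expects.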
I was having the same problem these days. As previously answered, the problem is the token issuer.
In order to make it work, refer to this solution.

AWS API GATEWAY from origin 'http://localhost:8080' has been blocked by CORS policy

I'm trying to deploy an API to be used by the front end. It works when I test the API alone, but it returns a CORS error when integrated into the Vue app, where the error returned is:
Access to XMLHttpRequest at
'https://APIDomain/development/pin'
from origin 'http://localhost:8080' has been blocked by CORS policy:
No 'Access-Control-Allow-Origin' header is present on the requested
resource.
I'm using Serverless to deploy this API, and I have set up the allowed origin in this serverless.yml:
service: test-lambda-node
provider:
  name: aws
  runtime: nodejs12.x
  stage: development
  region: ap-east-1
  memorySize: 512
  timeout: 15
functions:
  app:
    handler: lambda.handler
    events:
      - http:
          path: /
          method: ANY
          cors:
            origin: 'http://localhost:8080'
            headers:
              - Content-Type
              - X-Amz-Date
              - Authorization
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
            allowCredentials: false
      - http:
          path: /{proxy+}
          method: ANY
          cors:
            origin: 'http://localhost:8080'
            headers:
              - Content-Type
              - X-Amz-Date
              - Authorization
              - X-Api-Key
              - X-Amz-Security-Token
              - X-Amz-User-Agent
            allowCredentials: false
I also tried using a wildcard to allow all origins, but it still doesn't work, and I attempted to enable CORS manually in the AWS API Gateway console, but that didn't work either.
Is there any way for me to configure this API so that CORS is allowed for any origin?
Problem
The CORS response headers aren't being returned to your Vue application. With Lambda proxy integration, the cors settings in serverless.yml only configure the OPTIONS preflight; your function's own responses have to include the headers.
Solution
Return the Access-Control-Allow-Origin header from your API's responses to the Vue app.
Note
Also, if you are using cookies, set allowCredentials to true in your serverless.yml, return Access-Control-Allow-Credentials: true from the API, and send the request from the Vue app with credentials enabled.
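A minimal sketch of what that can look like, assuming lambda.js wraps an Express app with serverless-http (the route shown is only illustrative):

// lambda.js (sketch)
const serverless = require('serverless-http');
const express = require('express');

const app = express();

// attach the CORS headers to every response the function returns
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', 'http://localhost:8080');
  res.header('Access-Control-Allow-Headers', 'Content-Type, Authorization');
  next();
});

// illustrative route only - the deployed URL also carries the stage prefix (/development)
app.get('/pin', (req, res) => res.json({ ok: true }));

module.exports.handler = serverless(app);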
References
Serverless - CORS Survival Guide: https://www.serverless.com/blog/cors-api-gateway-survival-guide

Serverless deploy on AWS - routes break

I'm deploying my first Node.js serverless app on AWS. Locally everything works well, but when I try to access my app on AWS, all the routes break. The endpoint reported by the CLI looks like this:
https://test.execute-api.eu-west-1.amazonaws.com/stage/
with the stage name added at the end of the path, so all my routes to static resources and other endpoints break.
This is my config file:
secret.json
{
  "NODE_ENV": "stage",
  "SECRET_OR_KEY": "secret",
  "TABLE_NAME": "table",
  "service_URL": "https://services_external/json",
  "DATEX_USERNAME": "usrn",
  "DATEX_PASSWD": "psw"
}
serverless.yml
service: sls-express-dynamodb
custom:
  iopipeNoVerify: true
  iopipeNoUpgrade: true
  iopipeNoStats: true
  secrets: ${file(secrets.json)}
provider:
  name: aws
  runtime: nodejs8.10
  stage: ${self:custom.secrets.NODE_ENV}
  region: eu-west-1
  environment:
    NODE_ENV: ${self:custom.secrets.NODE_ENV}
    SECRET_OR_KEY: ${self:custom.secrets.SECRET_OR_KEY}
    TABLE_NAME: ${self:custom.secrets.TABLE_NAME}
    DATEX_USERNAME: ${self:custom.secrets.DATEX_USERNAME}
    DATEX_PASSWD: ${self:custom.secrets.DATEX_PASSWD}
    DATEX_URL: ${self:custom.secrets.DATEX_URL}
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        # - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource: 'arn:aws:dynamodb:${opt:region, self:provider.region}:*:table/${self:provider.environment.TABLE_NAME}'
functions:
  app:
    handler: server.run
    events:
      - http:
          path: /
          method: ANY
          cors: true
      - http:
          path: /{proxy+}
          method: ANY
          cors: true
You should be able to find the API Gateway endpoint via the web UI:
Log in to the AWS Console.
Go to API Gateway.
On the left panel, click the API name (e.g. sls-express-dynamodb-master).
On the left panel, click Stages.
On the middle panel, click the stage name (e.g. master).
On the right panel you will find the API URL, marked as Invoke URL.
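If you prefer the command line, the Serverless CLI prints the deployed endpoints as well (a quick check, assuming the CLI is installed and configured for this service):

sls info --stage stage --region eu-west-1

The endpoints it lists include the stage segment, which is exactly the /stage prefix that is breaking your root-relative routes and static links; mapping a custom domain to the API is one way to serve the app without that prefix.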

Express gateway cannot GET

I am trying to set up multiple (Node.js) services in Express Gateway, but, for some reason, the second service is not picked up. Please find my gateway.config.yml below:
http:
  port: 8080
admin:
  port: 9876
  hostname: localhost
apiEndpoints:
  config:
    host: localhost
  actions:
    host: localhost
serviceEndpoints:
  configService:
    url: "http://localhost:3002"
  actionService:
    url: "http://localhost:3006"
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
pipelines:
  - name: basic
    apiEndpoints:
      - config
    policies:
      - proxy:
          - action:
              serviceEndpoint: configService
              changeOrigin: true
  - name: basic2
    apiEndpoints:
      - actions
    policies:
      - proxy:
          - action:
              serviceEndpoint: actionService
              changeOrigin: true
That is expected, because the apiEndpoints section uses the same host (and no path) for both endpoints when building the routing:
apiEndpoints:
  config:
    host: localhost
  actions:
    host: localhost
What you can do is separate them by path:
apiEndpoints:
  config:
    path: /configs
  actions:
    path: /actions
That way, localhost/configs/db will go to the config service at ..:3002/configs/db,
and localhost/actions/magic will go to the actions service at ..:3006/actions/magic.
You may also want to install the Rewrite plugin (https://www.express-gateway.io/docs/policies/rewrite/) in case the target services have different URL patterns.
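For completeness, a sketch of those two endpoints with both host and paths set, mirroring the paths style used in the gateway config of the first question on this page (the path values here are just examples):

apiEndpoints:
  config:
    host: localhost
    paths: ["/configs", "/configs/*"]
  actions:
    host: localhost
    paths: ["/actions", "/actions/*"]

With this in place each pipeline only receives requests whose path matches its endpoint, so both services get picked up.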

How to allow only https traffic using a nodejs app on GAE flex vm?

I have the following configuration in app.yaml
runtime: nodejs
env: flex
handlers:
  - url: /.*
    script: app.js
    secure: always
But I am still able to access the app over both HTTP and HTTPS. I would like to disable all HTTP communication. Am I missing any configuration?
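The handlers / secure: always element comes from the standard environment and doesn't appear to be applied on flex; there, TLS terminates at Google's load balancer and the original scheme arrives in the X-Forwarded-Proto header, so one common approach is to redirect in the app itself. A minimal sketch, assuming app.js is an Express server:

// app.js (sketch) - redirect HTTP to HTTPS behind the GAE flex load balancer
const express = require('express');
const app = express();

// trust the load balancer so req.secure reflects X-Forwarded-Proto
app.set('trust proxy', true);

app.use((req, res, next) => {
  if (req.secure) {
    return next(); // request already came in over HTTPS
  }
  // redirect plain HTTP to the HTTPS version of the same URL
  res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
});

app.get('/', (req, res) => res.send('served over HTTPS')); // illustrative route

app.listen(process.env.PORT || 8080);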
