403 Forbidden, communication among docker containers - node.js

I have an application composed of a React client (frontend), an Express server (backend), and Keycloak. For development purposes, I run Keycloak inside a Docker container and expose its port (8080); the frontend and backend run locally on my machine and connect to Keycloak on that port. The backend serves some REST endpoints, which are protected by Keycloak. Everything works fine.
However, when I tried to containerize my application for production by putting the backend in a container and running everything with docker-compose (the frontend still runs on my local machine), the backend rejected all requests from the frontend, even though those requests carry a valid token. My guess is that the backend cannot reach Keycloak to verify the token, but I don't know why, or how to fix it.
This is my docker-compose.yml:
version: "3.8"
services:
backend:
image: "backend"
build:
context: .
dockerfile: ./backend/Dockerfile
ports:
- "5001:5001"
keycloak:
image: "jboss/keycloak"
ports:
- "8080:8080"
environment:
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=admin
- KEYCLOAK_IMPORT=/tmp/realm-export.json
volumes:
- ./realm-export.json:/tmp/realm-export.json
mongo_db:
image: "mongo:4.2-bionic"
ports:
- "27017:27017"
mongo_db_web_interface:
image: "mongo-express"
ports:
- "4000:8081"
environment:
- ME_CONFIG_MONGODB_SERVER=mongo_db
This is the Keycloak configuration in the backend code:
{
  "realm": "License-game",
  "bearer-only": true,
  "auth-server-url": "http://keycloak:8080/auth/",
  "ssl-required": "external",
  "resource": "backend",
  "confidential-port": 0
}
This is the Keycloak configuration in the frontend code:
{
  URL: "http://localhost:8080/auth/",
  realm: 'License-game',
  clientId: 'react'
}
This is the configuration of the backend client in the Keycloak admin console (screenshot omitted).

Backend and frontend are using different Keycloak URLs in your case - http://keycloak:8080/auth/ vs http://localhost:8080/auth/ - so they expect different issuers in the token.
So yes, the token from the frontend is valid, but not for the backend, because the backend expects a different issuer value in the token. Use the same Keycloak domain everywhere and you won't have this kind of problem.
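For example, a minimal sketch of that idea: make the hostname keycloak resolve on the development machine as well (the hosts entry below is an assumption about the setup, not part of the original answer), so that every party can use the exact same auth-server-url:

# /etc/hosts on the machine running the browser
127.0.0.1   keycloak

Both the frontend and backend configurations then point at http://keycloak:8080/auth/, so the issuer baked into the token (http://keycloak:8080/auth/realms/License-game) matches what the backend expects when it verifies the token.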

I was having the same problem these days. As previously answered, the problem is the token issuer.
In order to make it work, refer to this solution.

Related

Keycloak cannot be accessed when running inside a docker-compose network

I'm trying to use the Keycloak quickstarts from https://github.com/keycloak/keycloak-quickstarts.
I ran app-nodejs-html5 (https://github.com/keycloak/keycloak-quickstarts/tree/latest/app-nodejs-html5), Keycloak 20.0.1, and service-nodejs (https://github.com/keycloak/keycloak-quickstarts/tree/latest/service-nodejs), all inside a docker-compose setup.
Communication between the frontend app 'app-nodejs-html5' and the Keycloak server works without an issue. But communication between the API service 'service-nodejs' and the Keycloak server does not work properly. When I try to access the API from the frontend, it returns HTTP code 403 every time.
When I move the service out of docker-compose and run it outside, all apps work fine.
I suspect that when both the service and Keycloak are in docker-compose, the API service ends up addressing Keycloak through two different URLs: http://localhost:3001 and http://keycloak.local:8080.
I tried using the Frontend URL setting, but that didn't work.
I tried adding network aliases, and that didn't work either. ref: Keycloak and Spring Boot web app in dockerized environment
Did anyone succeed in running Keycloak and an API service inside a docker-compose setup?
Extract from docker-compose
keycloak:
  # keycloak admin console is available at http://localhost:3001/admin
  image: keycloak:20.0.1
  build:
    context: .
    dockerfile: Dockerfile.keycloak
  container_name: keycloak
  hostname: keycloak.local
  ports:
    - "3001:8080"
    - "3002:8443"
  environment:
    KC_HOSTNAME: localhost
    KC_HOSTNAME_STRICT: "false"
    KC_DB: postgres
    KC_DB_USERNAME: dba
    KC_DB_PASSWORD: password
    KC_DB_SCHEMA: public
    KC_DB_URL: jdbc:postgresql://database.local:5432/
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: password
  entrypoint: /opt/keycloak/bin/kc.sh start-dev
  depends_on:
    - database
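Echoing the issuer answer above, one commonly suggested approach (a sketch, not a verified fix for this exact setup) is to publish Keycloak on the same port inside and outside the compose network and give it one hostname that both the browser and the containers resolve:

keycloak:
  hostname: keycloak.local
  ports:
    - "8080:8080"        # same port on the host and inside the network
  environment:
    KC_HOSTNAME: keycloak.local

# /etc/hosts on the host machine, so the browser resolves the same name
127.0.0.1   keycloak.local

With that, both app-nodejs-html5 and service-nodejs can use http://keycloak.local:8080, and the token issuer is consistent everywhere.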

SvelteKit SSR fetch() when backend is in a Docker container

I use docker compose for my project. It includes these containers:
Nginx
PostgreSQL
Backend (Node.js)
Frontend (SvelteKit)
I use SvelteKit's Load function to send requests to my backend. In short, it sends an HTTP request to the backend container either on the client side or on the server side. That means the request can be sent not only by the browser but also by the container itself.
I can't get fetch to work on both the client side and the server side; only one of them works at a time.
I tried these URLs:
http://api.localhost/articles (only client-side request works)
http://api.host.docker.internal/articles (only server-side request works)
http://backend:8080/articles (only server-side request works)
I get these errors:
From SvelteKit:
FetchError: request to http://api.localhost/articles failed, reason: connect ECONNREFUSED 127.0.0.1:80
From Nginx:
Timeout error
Docker-compose.yml file:
version: '3.8'
services:
  webserver:
    restart: unless-stopped
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    depends_on:
      - frontend
      - backend
    networks:
      - webserver
    volumes:
      - ./webserver/nginx/conf/:/etc/nginx/conf.d/
      - ./webserver/certbot/www:/var/www/certbot/:ro
      - ./webserver/certbot/conf/:/etc/nginx/ssl/:ro
  backend:
    restart: unless-stopped
    build:
      context: ./backend
      target: development
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - database
      - webserver
    volumes:
      - ./backend:/app
  frontend:
    restart: unless-stopped
    build:
      context: ./frontend
      target: development
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      - webserver
networks:
  database:
    driver: bridge
  webserver:
    driver: bridge
How can I send a server-side request to the docker container using http://api.localhost/articles as the URL? I also want my container to be accessible by other containers as http://backend:8080, if possible.
Use SvelteKit's externalFetch hook to override the API URL so that it differs between client-side (browser) and server-side (container) requests.
In docker-compose, containers can reach each other by service name as long as they are on the same Docker network.
Your frontend container's SSR code should be able to call your backend container using the URL:
http://backend:8080
The web browser should be able to call your backend using whatever URL your Nginx configuration files expose.
Naturally, there are many reasons why this could fail. The best way to tackle it is to test the URLs one by one, server by server, using curl and by entering the addresses into the web browser. It's not possible to say exactly why it fails, because the question does not contain enough information or a reproducible recipe for the issue.
For further information, here is our sample configuration for a dockerised SvelteKit frontend. The internal backend shortcut is defined using hooks and configuration variables. Here is our externalFetch example.
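As an illustration, here is a minimal sketch of such a hook. It assumes a SvelteKit version that still ships the externalFetch hook (newer releases renamed it to handleFetch):

// src/hooks.js
// Rewrite server-side requests aimed at the public API host so they
// go straight to the backend container on the compose network.
export async function externalFetch(request) {
  if (request.url.startsWith('http://api.localhost/')) {
    request = new Request(
      request.url.replace('http://api.localhost/', 'http://backend:8080/'),
      request
    );
  }
  return fetch(request);
}

Client-side code keeps fetching http://api.localhost/... through Nginx, while SSR code transparently hits http://backend:8080 inside the Docker network.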
From within a docker-compose network, you can curl from one container to another using its DNS name (the service name you gave in the compose file):
curl -X GET http://backend:8080
You can also achieve this by running all of these containers on the host network driver.
Regarding http://api.localhost/articles:
You can edit /etc/hosts and specify the IP your computer should contact when the URL http://api.localhost/articles is used.
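For example (the 127.0.0.1 address is an assumption for a purely local setup):

# /etc/hosts
127.0.0.1   api.localhost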

PRISMA: Authentication token is invalid: 'Authorization' header not provided

Running Prisma locally without a secret works fine. Now that I am trying to run it for production, I always encounter the error ERROR: Authentication token is invalid: 'Authorization' header not provided, both on my server and locally. I am surely missing something, but I don't know what. Please help; below are my prisma.yml and docker-compose.yml files.
Prisma.yml
# This service is based on the type definitions in the two files
# `database/types.prisma` and `database/enums.prisma`
datamodel:
  - ./packages/routes/index.directives.graphql
  - ./packages/routes/index.scalar.graphql
  - ./packages/routes/account/index.enum.graphql
  - ./packages/routes/account/index.prisma
  ...
# Generate a Prisma client in JavaScript and store it in
# a folder called `generated/prisma-client`.
# It also downloads the Prisma GraphQL schema and stores it
# in `generated/prisma.graphql`.
generate:
  - generator: javascript-client
    output: ./prisma
# The endpoint represents the HTTP endpoint for your Prisma API.
# It encodes several pieces of information:
# * Prisma server (`localhost:4466` in this example)
# * Service name (`myservice` in this example)
# * Stage (`dev` in this example)
# NOTE: When service name and stage are set to `default`, they
# can be omitted.
# Meaning http://myserver.com/default/default can be written
# as http://myserver.com.
endpoint: 'http://127.0.0.1:4466/soul/dev'
# The secret is used to create JSON web tokens (JWTs). These
# tokens need to be attached in the `Authorization` header
# of HTTP requests made against the Prisma endpoint.
# WARNING: If the secret is not provided, the Prisma API can
# be accessed without authentication!
secret: ${env:SECRET}
Docker-compose.yml
version: '3'
services:
  server:
    container_name: soul
    restart: always
    build: .
    command: 'npm run dev'
    links:
      - redis
      - prisma
    env_file:
      - ./.env
    volumes:
      - .:/node/soul/
    working_dir: /node/soul/
    ports:
      - '3000:3000'
  redis:
    container_name: "redisserver"
    image: redis:latest
    restart: always
    command: ["redis-server", "--bind", "redis", "--port", "6379"]
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        managementApiSecret: ${SECRET}
        port: 4466
        databases:
          default:
            connector: mysql
            host: mysql
            port: 3306
            user: root
            password: ******
  mysql:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ******
    volumes:
      - mysql:/var/lib/mysql
volumes:
  mysql: ~
It looks like you're using the Management API secret where you are supposed to be using a service secret.
According to the Prisma docs, the service secret and the Management API secret are two different things.
For Prisma v1.34 you can read about the differences here:
https://v1.prisma.io/docs/1.34/prisma-server/authentication-and-security-kke4/#prisma-server
Quote from that page:
A Prisma server provides the runtime environment for one or more Prisma services. To create, delete and modify the Prisma services on a Prisma server, the Management API is used. The Management API is protected with the Management API secret specified in the Docker Compose files when the Prisma server is deployed. Learn more here.
Prisma services are secured via the service secret that's specified in your prisma.yml. A Prisma service typically serves application data that's stored in relation to a certain datamodel. Learn more here.
// assumed import from prisma-binding, which provides this Prisma constructor
const { Prisma } = require('prisma-binding');

const db = new Prisma({
  typeDefs: 'src/generated/prisma.graphql',
  endpoint: process.env.PRISMA_ENDPOINT,
  secret: <YOUR_PRISMA_SERVICE_SECRET>, // Note: this must match what is in your prisma.yml
});
# prisma.yml
endpoint: ${env:PRISMA_ENDPOINT}
datamodel: mydatamodel.graphql
secret: <YOUR_PRISMA_SERVICE_SECRET>
In their Prisma 1.34 docs, Prisma recommends using an environment variable to get the secret into the prisma.yml file. There are risks associated with this, but that is what their docs describe.
See: https://v1.prisma.io/docs/1.34/prisma-cli-and-configuration/prisma-yml-5cy7/#environment-variable
Quote from that page:
In the following example, an environment variable is referenced to determine the Prisma service secret:
# prisma.yml (as per the docs in the above link)
secret: ${env:PRISMA_SECRET}
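To tie this together, here is a minimal sketch of supplying both secrets (the variable names mirror the snippets above; the values are placeholders):

# shell, before deploying
export SECRET="my-management-api-secret"    # Management API secret, read by docker-compose
export PRISMA_SECRET="my-service-secret"    # service secret, read by prisma.yml
docker-compose up -d
prisma deploy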

Dockerized setup for Node and React.js

I want to run two Docker containers: one is a Node server (backend) and the other has the React.js code (frontend).
My Node server exposes an API, as shown below:
app.get('/build', function (req, res) {
  ...
  ...
});

app.listen(3001);
console.log("Listening to PORT 3001");
I am using this API in my React code as follows:
componentDidMount() {
  axios.get('http://localhost:3001/build', {
      headers: { "x-access-token": this.state.token }
    })
    .then(response => {
      const builds = response.data.message;
      //console.log("builds", builds);
      this.setState({ builds: builds, done: true });
    });
}
But when I run the two Docker containers (exposing 3001 for the backend container and 3000 for the frontend container) and access http://aws-ip:3000 (aws-ip is the public IP of the AWS instance where both containers run), the request still goes to http://localhost:3001/build, so I cannot hit the Node API in the backend container.
What changes should I make to the existing setup so that my React application can fetch data from the Node server running on the same AWS instance?
You can follow this tutorial.
I think you can achieve that with docker-compose: https://docs.docker.com/compose/
Example: https://dev.to/numtostr/running-react-and-node-js-in-one-shot-with-docker-3o09
And here is how I am using it:
version: '3'
services:
  awsService:
    build:
      context: ./awsService
      dockerfile: Dockerfile.dev
    volumes:
      - ./awsService/src:/app/src
    ports:
      - "3000:3000"
  keymaster:
    build:
      context: ./keymaster
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./keymaster:/app
    ports:
      - "8080:8080"
  postgres:
    image: postgres:12.1
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: mypassword
    volumes:
      - ./postgresql/data:/var/lib/postgresql/data
  service:
    build:
      context: ./service
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./service/config:/app/config
      - ./service/src:/app/src
    ports:
      - "3001:3000"
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./ui:/app
    ports:
      - "8081:8080"
For future reference: if two services would otherwise use the same container port, just bind them to different ports on your local machine (as service and ui do above).
Hope this helps.
As you rightly said, the frontend app accessed in the browser cannot reach your API via http://localhost:3001. Instead, your React application should access the API via http://[ec2-instance-elastic-ip]:3001, and it should store this address in its code. Your EC2 instance's security group should allow incoming traffic on port 3001.
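For example, a minimal sketch (REACT_APP_API_URL is a hypothetical variable name following Create React App's convention; adjust to your build setup):

// set at build time, e.g. REACT_APP_API_URL=http://<elastic-ip>:3001
const API_URL = process.env.REACT_APP_API_URL || 'http://localhost:3001';

componentDidMount() {
  axios.get(`${API_URL}/build`, { headers: { "x-access-token": this.state.token } })
    .then(response => {
      this.setState({ builds: response.data.message, done: true });
    });
}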
The above setup is enough to solve the problem, but here are some additional tips:
Assign an Elastic IP to your instance. Otherwise, the public IP address of the instance will change if you stop/start it.
Set up a domain name for your API: it's flexible and easy to remember, and you can redeploy anywhere and simply point the domain name at the new address (there are many ways to do this, such as setting up a load balancer or CloudFront with an EC2 origin).
Set up SSL for better security (again, via a load balancer or CloudFront with an EC2 origin).
Since your React app is a static website, you can easily host it as a static S3 website.
Hope this helps.

Docker compose networking issue with react app

I am running a React app and a json server with docker-compose.
Usually I connect to the json server from my React app like this:
fetch('localhost:8080/classes')
  .then(response => response.json())
  .then(classes => this.setState({ classlist: classes }));
Here is my docker-compose file:
version: "3"
services:
frontend:
container_name: react_app
build:
context: ./client
dockerfile: Dockerfile
image: praventz/react_app
ports:
- "3000:3000"
volumes:
- ./client:/usr/src/app
backend:
container_name: json_server
build:
context: ./server
dockerfile: Dockerfile
image: praventz/json_server
ports:
- "8080:8080"
volumes:
- ./server:/usr/src/app
The problem is that I can't get my React app to fetch this information from the json server.
On my local machine I use 192.168.99.100:3000 to see my React app and 192.168.99.100:8080 to see the json server, but I can't connect them with any of the following:
backend:8080/classes
json_server:8080/classes
backend/classes
json_server/classes
{host:"json_server/classes", port:8080}
{host:"backend/classes", port:8080}
Both the React app and the json server run perfectly fine independently with docker-compose up.
What should I be putting in fetch()?
Remember that the React application always runs in some user's browser; it has no idea that Docker is involved, and can't reach or use any of the Docker-related networking setup.
on my local machine I use [...] 192.168.99.100:8080 to see the json server
Then that's what you need in your React application too.
You might consider setting up some sort of proxy in front of this that can, for example, forward URL paths beginning with /api to the backend container and forward other URLs to the frontend container (or better still, run a tool like Webpack to compile your React application to static files and serve that directly). If you have that setup, then the React application can use a path /api/v1/... with no host, and it will be resolved relative to whatever the browser thinks "the current host" is, which should usually be the proxy.
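A minimal sketch of that proxy idea in Nginx (the service names frontend and backend come from the compose file above; the /api prefix is a hypothetical choice):

server {
    listen 80;

    # requests beginning with /api go to the json server
    location /api/ {
        proxy_pass http://backend:8080/;
    }

    # everything else goes to the React app
    location / {
        proxy_pass http://frontend:3000/;
    }
}

The React code would then call fetch('/api/classes'), which the browser resolves against whatever host served the page.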
You have two solutions:
use CORS on the Express server, see https://www.npmjs.com/package/cors
set up a proxy/reverse proxy using NGINX
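For the CORS route, a minimal sketch using the cors package (the origin is an assumption based on the addresses in the question):

const express = require('express');
const cors = require('cors');

const app = express();

// allow the React app's origin to call this API from the browser
app.use(cors({ origin: 'http://192.168.99.100:3000' }));

app.get('/classes', (req, res) => {
  res.json([]); // placeholder handler
});

app.listen(8080);

The browser can then fetch http://192.168.99.100:8080/classes directly.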
