Keycloak cannot be accessed when running inside a docker-compose network - node.js

I'm trying to use Keycloak quick starts in https://github.com/keycloak/keycloak-quickstarts.
I'm running app-nodejs-html5 (https://github.com/keycloak/keycloak-quickstarts/tree/latest/app-nodejs-html5), Keycloak 20.0.1, and service-nodejs (https://github.com/keycloak/keycloak-quickstarts/tree/latest/service-nodejs) all inside a docker-compose network.
The frontend app 'app-nodejs-html5' and the Keycloak server communicate without an issue. But communication between the API service 'service-nodejs' and the Keycloak server is not working: every request to the API from the frontend returns HTTP 403.
When I move the service out of docker-compose and run it outside, all apps work fine.
I suspect that when both the service and Keycloak are inside docker-compose, the API service has to reach Keycloak through both the http://localhost:3001 and http://keycloak.local:8080 URLs.
I tried using the Frontend URL setting, but that didn't work.
I tried adding network aliases, and that didn't work either. Ref: Keycloak and Spring Boot web app in dockerized environment
Has anyone succeeded in running Keycloak and an API service inside the same docker-compose setup?
Extract from docker-compose
keycloak:
  # keycloak admin console is available at http://localhost:3001/admin
  image: keycloak:20.0.1
  build:
    context: .
    dockerfile: Dockerfile.keycloak
  container_name: keycloak
  hostname: keycloak.local
  ports:
    - "3001:8080"
    - "3002:8443"
  environment:
    KC_HOSTNAME: localhost
    KC_HOSTNAME_STRICT: "false"
    KC_DB: postgres
    KC_DB_USERNAME: dba
    KC_DB_PASSWORD: password
    KC_DB_SCHEMA: public
    KC_DB_URL: jdbc:postgresql://database.local:5432/
    KEYCLOAK_ADMIN: admin
    KEYCLOAK_ADMIN_PASSWORD: password
  entrypoint: /opt/keycloak/bin/kc.sh start-dev
  depends_on:
    - database
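A likely root cause is the token issuer: the browser obtains tokens from Keycloak via http://localhost:3001, while service-nodejs validates them against http://keycloak.local:8080, so the issuer baked into the token never matches what the service expects. One way around this is to publish Keycloak on the same port inside and outside the compose network and pin KC_HOSTNAME to a single name both sides can resolve. A minimal sketch, assuming the host machine resolves keycloak.local to 127.0.0.1 (e.g. via an /etc/hosts entry, which is not part of the original setup) - not a tested configuration:

keycloak:
  image: keycloak:20.0.1
  hostname: keycloak.local
  ports:
    - "8080:8080"               # same port on the host and inside the network
  environment:
    # One hostname everywhere means one issuer in every token,
    # e.g. http://keycloak.local:8080/realms/<realm>
    KC_HOSTNAME: keycloak.local
    KC_HOSTNAME_STRICT: "false"
  entrypoint: /opt/keycloak/bin/kc.sh start-dev

With this, the frontend (through the /etc/hosts entry) and service-nodejs (through Docker's DNS) both reach Keycloak as http://keycloak.local:8080, so tokens minted for the browser validate in the API service.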

Related

How to expose port of Docker container for localhost

I have dockerized a simple Node.js REST API. I'm building the container with this API on my Raspberry Pi. The Node.js app needs to be installed on a few Raspberry Pi devices; on every device the user will be using an Angular app, which is hosted on my server, and this client app will send requests to the API.
I can connect to the API by using the IP of the container, which can be obtained from the container VM by running ifconfig - http://{containerIp}:2710. The problem is that my Angular app is served over HTTPS while the API uses HTTP, so there will be a mixed-content error. This approach may also create an additional problem with the container IP, because every machine can have a different container IP, but there is one client app with one config.
I suppose that if I configure access to this API via http://localhost:2710 there will be no error, but I don't know how to make this container visible on the Raspberry Pi's localhost.
This is how I'm building my docker environment:
Dockerfile
FROM arm32v7/node
RUN mkdir -p /usr/src/app/back
WORKDIR /usr/src/app/back
COPY package.json /usr/src/app/back/
RUN npm install
COPY . /usr/src/app/back/
EXPOSE 2710
CMD [ "node", "index.js" ]
Compose
services:
  backend:
    image: # path to my repo with image
    ports:
      - 2710:2710
    container_name: backend
    privileged: true
    restart: always
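On a Linux host like the Raspberry Pi, one way to make the API appear on the device's own localhost is host networking. A sketch under that assumption (network_mode: host replaces the ports mapping, since the two are mutually exclusive; note that this does nothing about the HTTPS mixed-content error by itself - for that you would still need TLS termination, e.g. a reverse proxy with a certificate, in front of the API):

services:
  backend:
    image: # path to my repo with image
    network_mode: host   # container shares the Pi's network stack; API listens on localhost:2710
    privileged: true
    restart: always

With host networking, the app's listener on port 2710 is reachable as http://localhost:2710 on every Pi, so the client config no longer depends on a per-machine container IP.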

403 Forbidden, communication among docker containers

I have an application composed of a React client (frontend), an Express server (backend), and Keycloak. For development purposes, I run Keycloak inside a Docker container and expose its port (8080); the frontend and backend run locally on my machine and connect to Keycloak on the aforementioned port. The backend serves some REST endpoints, and these endpoints are protected by Keycloak. Everything works fine.
However, when I tried to containerize my application for production by putting the backend in a container and running everything with docker-compose (the frontend still runs on my local machine), the backend rejected all requests from the frontend, even though these requests carry a valid token. I guess the problem is that the backend cannot connect to Keycloak to verify the token, but I don't know why, or how to fix it.
This is my docker-compose.yml:
version: "3.8"
services:
backend:
image: "backend"
build:
context: .
dockerfile: ./backend/Dockerfile
ports:
- "5001:5001"
keycloak:
image: "jboss/keycloak"
ports:
- "8080:8080"
environment:
- KEYCLOAK_USER=admin
- KEYCLOAK_PASSWORD=admin
- KEYCLOAK_IMPORT=/tmp/realm-export.json
volumes:
- ./realm-export.json:/tmp/realm-export.json
mongo_db:
image: "mongo:4.2-bionic"
ports:
- "27017:27017"
mongo_db_web_interface:
image: "mongo-express"
ports:
- "4000:8081"
environment:
- ME_CONFIG_MONGODB_SERVER=mongo_db
This is the Keycloak configuration in the backend code:
{
  "realm": "License-game",
  "bearer-only": true,
  "auth-server-url": "http://keycloak:8080/auth/",
  "ssl-required": "external",
  "resource": "backend",
  "confidential-port": 0
}
This is the Keycloak configuration in the frontend code:
{
  URL: "http://localhost:8080/auth/",
  realm: 'License-game',
  clientId: 'react'
}
This is the configuration of Keycloak for the backend.
The backend and frontend are using different Keycloak URLs in your case - http://keycloak:8080/auth/ vs. http://localhost:8080/auth/ - so they expect different issuers in the token.
So yes, the token from the frontend is valid, but not for the backend, because the backend expects a different issuer value in the token. Use the same Keycloak domain everywhere and you won't have this kind of problem.
I was having the same problem these days. As previously answered, the problem is with the token issuer.
In order to make it work, refer to this solution
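One concrete way to follow that advice is to use the compose service name keycloak as the only hostname, and make the host machine resolve it as well. A sketch - the /etc/hosts entry and the changed frontend URL are assumptions, not part of the original question:

// /etc/hosts on the developer machine (assumed): 127.0.0.1  keycloak
// Frontend config now matches the backend's auth-server-url:
{
  URL: "http://keycloak:8080/auth/",
  realm: 'License-game',
  clientId: 'react'
}

Now both adapters see http://keycloak:8080/auth/ as the issuer, so a token obtained by the React client passes verification in the Express backend.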

How to connect to a NodeJS API from an AngularJS docker container in the browser?

I have two Docker containers, an AngularJS application and a NodeJS application. I'm using Azure App Service with the multi-container Docker configuration. I have a docker-compose file for my application and use the same file in Azure App Service. From the browser I get the application's UI screen, but once I click the button that should call an API on the NodeJS side, nothing happens. My Dockerfiles and docker-compose file are very simple and straightforward.
I have set nodejs as the container name for the NodeJS application and used the same name in the AngularJS code to access the API, but it is not working from the browser.
docker-compose.yml
version: '3.3'
services:
  angularjs:
    container_name: 'angularjs'
    image: 'ui:1'
    ports:
      - '4200:4200'
    depends_on:
      - 'nodejs'
  nodejs:
    container_name: 'nodejs'
    image: 'api:1'
    ports:
      - '4000:4000'
In the AngularJS code, I'm using http://nodejs:4000/data with a GET method. This fails with the error ERR_NAME_NOT_RESOLVED.
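The name nodejs only resolves inside the compose network; the AngularJS code runs in the user's browser, which is outside it, hence ERR_NAME_NOT_RESOLVED. The browser has to use the published host port (4000) instead. A sketch of the browser-side call - deriving the host from window.location assumes both containers are served from the same Docker host:

// Runs in the browser: use the published port on the Docker host,
// never the container name.
var apiBase = 'http://' + window.location.hostname + ':4000';

fetch(apiBase + '/data')
  .then(function (response) { return response.json(); })
  .then(function (data) { console.log(data); });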

Azure Web App For Containers - Correct Port Setup and Logging Issue

I have an R Shiny server set up in a Docker container, and am trying to deploy it as an Azure multi-container web app together with some other services. I've already succeeded in deploying it in the single-container setting, for which I set the WEBSITES_PORT app setting to 3838.
In my attempt to move to the multi-container situation, I'm using the following docker-compose script:
version: '3.7'
services:
  shiny-app:
    image: my-shiny-image
    restart: always
    ports:
      - "80:3838"
My understanding is that the 80:3838 mapping is what "informs" Azure of the website port? This does not work when deployed, and I cannot retrieve any Docker logs; I get the error message "Error retrieving logs". If I run the
az webapp log tail
command, it just hangs and then fails.
What am I doing wrong?
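For what it's worth, in the multi-container case Azure may still key off the WEBSITES_PORT app setting rather than the host side of the compose port mapping, so keeping WEBSITES_PORT=3838 as in the single-container setup is worth trying; and az webapp log tail typically returns nothing until Docker container logging is enabled on the app. A sketch with placeholder names (myGroup and myApp are assumptions, not from the original post):

az webapp config appsettings set -g myGroup -n myApp --settings WEBSITES_PORT=3838
az webapp log config -g myGroup -n myApp --docker-container-logging filesystem
az webapp log tail -g myGroup -n myApp

If the logs start flowing after the second command, the "Error retrieving logs" message was about logging configuration rather than the container itself.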

Docker compose networking issue with react app

I am running a React app and a json server with docker-compose.
Usually I connect to the json server from my React app like this:
fetch('http://localhost:8080/classes')
  .then(response => response.json())
  .then(classes => this.setState({classlist: classes}));
Here is my docker-compose file:
version: "3"
services:
frontend:
container_name: react_app
build:
context: ./client
dockerfile: Dockerfile
image: praventz/react_app
ports:
- "3000:3000"
volumes:
- ./client:/usr/src/app
backend:
container_name: json_server
build:
context: ./server
dockerfile: Dockerfile
image: praventz/json_server
ports:
- "8080:8080"
volumes:
- ./server:/usr/src/app
The problem is that I can't seem to get my React app to fetch this information from the json server.
On my local machine I use 192.168.99.100:3000 to see my React app, and 192.168.99.100:8080 to see the json server, but I can't seem to connect them with any of the following:
backend:8080/classes
json_server:8080/classes
backend/classes
json_server/classes
{host:"json_server/classes", port:8080}
{host:"backend/classes", port:8080}
Both the React app and the json server are running perfectly fine independently with docker-compose up.
What should I be putting in fetch() ?
Remember that the React application always runs in some user's browser; it has no idea that Docker is involved, and can't reach or use any of the Docker-related networking setup.
on my local machine I use [...] 192.168.99.100:8080 to see the json server
Then that's what you need in your React application too.
You might consider setting up some sort of proxy in front of this that can, for example, forward URL paths beginning with /api to the backend container and forward other URLs to the frontend container (or better still, run a tool like Webpack to compile your React application to static files and serve that directly). If you have that setup, then the React application can use a path /api/v1/... with no host, and it will be resolved relative to whatever the browser thinks "the current host" is, which should usually be the proxy.
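Concretely, once such a proxy is in place, the React code stops caring about hosts and ports entirely. A sketch, assuming the proxy forwards /api/* to the backend container (the /api prefix is an assumption, not something from the compose file above):

// The browser resolves this against "the current host", which is the proxy;
// the proxy forwards it to http://backend:8080/classes inside the network.
fetch('/api/classes')
  .then(response => response.json())
  .then(classes => this.setState({classlist: classes}));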
You have two solutions:
use CORS on the Express server, see https://www.npmjs.com/package/cors
set up proxy/reverse proxy using NGINX
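If you take the CORS route, the server-side change is small. A sketch using plain Express for illustration (the real project uses json-server; this assumes npm install cors has been run):

const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors()); // allow cross-origin requests, e.g. from http://192.168.99.100:3000

app.get('/classes', (req, res) => {
  res.json([]); // placeholder handler; the real data would come from json-server
});

app.listen(8080);

With CORS enabled, the React app can keep calling the json server directly at http://192.168.99.100:8080/classes.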
