How to make docker-compose services accessible to each other? - node.js

I'm trying to make a frontend app accessible to the outside. It depends on several other modules serving as services/backend, and these services in turn rely on things like Kafka and OpenLink Virtuoso (a database).
How can I make all of them accessible to each other, and how should I expose my frontend to the outside internet? Should I remove any "localhost/port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an excerpt from my docker-compose.yml file:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend accesses, and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
  (...)

How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that the containers can use to communicate with each other, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
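As a minimal sketch (assuming the kafkajs client library, which the question doesn't show; any Kafka client works the same way), the auth service would reach Kafka by its service name instead of localhost:
// Sketch only: kafkajs is an assumption, the client id is hypothetical.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({
  clientId: 'auth-service',  // hypothetical client id
  brokers: ['kafka:9092'],   // "kafka" is the Compose service name, not localhost
});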
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
How should I expose my frontend to the outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
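If you go that route, a hypothetical sketch of such a proxy service in this same compose file could look like the following (the mounted nginx.conf is assumed, and would contain the proxy_pass rules to frontend:3000 and auth:3003):
proxy:
  image: nginx:alpine
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro # hypothetical proxy config
  ports:
    - "80:80" # the only port published to the outside; the other services no longer need ports: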
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables:
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct

Related

SvelteKit SSR fetch() when backend is in a Docker container

I use docker compose for my project. It includes these containers:
Nginx
PostgreSQL
Backend (Node.js)
Frontend (SvelteKit)
I use SvelteKit's load function to send requests to my backend. In short, it sends an HTTP request to the backend container either on the client side or the server side, which means the request can be sent not only by the browser but also by the container itself.
I can't get both client-side and server-side fetch to work; only one of them works at a time.
I tried these URLs:
http://api.localhost/articles (only client-side request works)
http://api.host.docker.internal/articles (only server-side request works)
http://backend:8080/articles (only server-side request works)
I get this error:
From SvelteKit:
FetchError: request to http://api.localhost/articles failed, reason: connect ECONNREFUSED 127.0.0.1:80
From Nginx:
Timeout error
Docker-compose.yml file:
version: '3.8'
services:
  webserver:
    restart: unless-stopped
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    depends_on:
      - frontend
      - backend
    networks:
      - webserver
    volumes:
      - ./webserver/nginx/conf/:/etc/nginx/conf.d/
      - ./webserver/certbot/www:/var/www/certbot/:ro
      - ./webserver/certbot/conf/:/etc/nginx/ssl/:ro
  backend:
    restart: unless-stopped
    build:
      context: ./backend
      target: development
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - database
      - webserver
    volumes:
      - ./backend:/app
  frontend:
    restart: unless-stopped
    build:
      context: ./frontend
      target: development
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      - webserver
networks:
  database:
    driver: bridge
  webserver:
    driver: bridge
How can I send a server-side request to a docker container by using http://api.localhost/articles as the URL? I also want my container to be accessible to other containers as http://backend:8080 if possible.
Use SvelteKit's externalFetch hook to override the API URL on the server side, so the SSR fetch and the browser fetch each use a URL that works from where they run.
In docker-compose, the containers should be able to access each other by name if they are in the same Docker network.
Your frontend container's SSR should be able to call your backend container by using the URL:
http://backend:8080
The web browser should be able to call your backend by using whatever URL your Nginx configuration files route to it.
Naturally, there are many reasons why this could fail. The best way to tackle it is to test the URLs one by one, server by server, using curl and by entering the addresses into the web browser's address bar. It's not possible to say exactly why it fails here, because the question doesn't contain enough information or a generally reproducible recipe for the issue.
For further information, here is our sample configuration for a dockerised SvelteKit frontend. The internal backend shortcut is defined using hooks and configuration variables. Here is our externalFetch example.
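A rough sketch of such a hook, assuming the pre-1.0 SvelteKit externalFetch API (newer releases replace it with handleFetch), rewriting the public origin to the internal Docker service name during SSR:
// src/hooks.js — sketch only; adjust to your SvelteKit version.
export async function externalFetch(request) {
  if (request.url.startsWith('http://api.localhost/')) {
    // Clone the request against the internal container address.
    request = new Request(
      request.url.replace('http://api.localhost/', 'http://backend:8080/'),
      request
    );
  }
  return fetch(request);
}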
From a Docker Compose setup you will be able to curl from one container to another using DNS (the service name you gave in the compose file):
curl -X GET http://backend:8080
You can also achieve this by running all of these containers on the host network driver.
Regarding http://api.localhost/articles: you can change /etc/hosts and specify the IP address you want your computer to contact when that URL is used.
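For example, an /etc/hosts entry like the following would make the browser resolve that host name to your own machine, where Nginx is listening on port 80:
# /etc/hosts — make api.localhost resolve to the local machine
127.0.0.1   api.localhost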

Dockerized setup for Node and Reactjs

I want to run two Docker containers: one is the Node server (backend) and the other has the React.js code (frontend).
My Node server exposes an API as shown below:
app.get('/build', function (req, res) {
...
...
});
app.listen(3001);
console.log("Listening to PORT 3001");
I am using this API in my react code as follows:
componentDidMount() {
  axios.get('http://localhost:3001/build', {
      headers: { "x-access-token": this.state.token }
    })
    .then(response => {
      const builds = response.data.message;
      // console.log("builds", builds);
      this.setState({ builds: builds, done: true });
    });
}
But when I run two different Docker containers, exposing 3001 for the backend container and 3000 for the frontend container, and access http://aws-ip:3000 (aws-ip is the public IP of the AWS instance where both containers run), the request is still made to http://localhost:3001/build, so I am not able to hit the Node API in the backend container.
What changes should I make to the existing setup so that my React application can fetch data from the Node server running on the same AWS instance?
You can follow this tutorial. I think you can achieve that with docker-compose: https://docs.docker.com/compose/
Example: https://dev.to/numtostr/running-react-and-node-js-in-one-shot-with-docker-3o09
And here is how I am using it:
version: '3'
services:
  awsService:
    build:
      context: ./awsService
      dockerfile: Dockerfile.dev
    volumes:
      - ./awsService/src:/app/src
    ports:
      - "3000:3000"
  keymaster:
    build:
      context: ./keymaster
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./keymaster:/app
    ports:
      - "8080:8080"
  postgres:
    image: postgres:12.1
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: mypassword
    volumes:
      - ./postgresql/data:/var/lib/postgresql/data
  service:
    build:
      context: ./service
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./service/config:/app/config
      - ./service/src:/app/src
    ports:
      - "3001:3000"
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./ui:/app
    ports:
      - "8081:8080"
For future reference, if you already have something running on the same port, just bind the container to a different local-machine port.
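For example, if port 3001 is already taken on the host, a mapping like this publishes the container's port 3001 on host port 3002 instead:
ports:
  - "3002:3001" # hostPort:containerPort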
Hope this helps.
As you said, the frontend app accessed in the browser cannot reach your API via http://localhost:3001. Instead, your React application should access the API via http://[ec2-instance-elastic-ip]:3001, and should store that address in its configuration. Your EC2 instance's security group must allow incoming traffic on port 3001.
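One way to do that, sketched below under the assumption of a Create React App-style build where REACT_APP_* variables are inlined at build time (the variable name here is hypothetical):
import axios from 'axios';

// REACT_APP_API_URL would be set to e.g. http://[elastic-ip]:3001 for the
// AWS deployment, and falls back to localhost for local development.
const API_URL = process.env.REACT_APP_API_URL || 'http://localhost:3001';

export function fetchBuilds(token) {
  return axios
    .get(`${API_URL}/build`, { headers: { 'x-access-token': token } })
    .then(response => response.data.message);
}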
The above setup is enough to solve the problem, but here are some additional tips:
- Assign an Elastic IP to your instance. Otherwise, the public IP address of the instance will change if you stop/start it.
- Set up a domain name for your API. It's flexible and easy to remember, and you can redeploy anywhere and just point the domain name at the new address.
- Set up SSL for better security. (There are many ways to do both of these, such as setting up a load balancer, or CloudFront with an EC2 origin.)
- Since your React app is a static website, you can easily set it up as a static S3 website.
Hope this helps.

How to run a node js docker container using docker-compose to manage php app assets

Let's say we have three services:
- php + apache
- mysql
- nodejs
I know how to use docker-compose to set up the application and link mysql with the php/apache service. I was wondering how we can add a node.js service whose only purpose is to manage javascript/css assets. Since docker provides this flexibility, I would rather use a docker service than set up node.js on my host computer.
version: '3.2'
services:
  web:
    build: .
    image: lap
    volumes:
      - ./webroot:/var/www/app
      - ./configs/php.ini:/usr/local/etc/php/php.ini
      - ./configs/vhost.conf:/etc/apache2/sites-available/000-default.conf
    links:
      - dbs:mysql
  dbs:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=rest
      - MYSQL_DATABASE=symfony_rest
      - MYSQL_USER=restman
    volumes:
      - /var/mysql:/var/lib/mysql
      - ./configs/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
  node:
    image: node
    volumes:
      - ./webroot:/var/app
    working_dir: /var/app
I am not sure this is the correct strategy; I am sharing ./webroot with both the web and node services. docker-compose up -d only starts mysql and web, and fails to start the node container; probably there is no valid entrypoint set.
If you want to run the node.js service separately from the PHP service, you must set two more options to keep the node container up: one is stdin_open and the other is tty, like below:
stdin_open: true
tty: true
This is equivalent to the -it flag on the Docker CLI:
docker container run --name nodeapp -it node:latest
If you have a separate port to run your node app on (e.g. your frontend is completely separate from your backend and must be run independently, say via npm run start), you must publish that port, like below:
ports:
  - 3000:3000
The ports structure is hostPort:containerPort.
This means: publish port 3000 from inside the node container to port 3000 on the host system. Put another way, it makes port 3000 inside your container accessible on your system, so you can reach it at localhost:3000.
In the end, your node service would look like below:
node:
  image: node
  stdin_open: true
  tty: true
  volumes:
    - ./webroot:/var/app
  working_dir: /var/app
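Alternatively, if the node container exists only to build the assets and exit, you can skip the TTY and give it a one-shot command (a sketch: npm run build stands in for whatever your asset pipeline's script is actually called):
node:
  image: node
  working_dir: /var/app
  volumes:
    - ./webroot:/var/app
  command: sh -c "npm install && npm run build" # runs once, then the container exits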
You can also add an nginx service to docker-compose, and let nginx take care of forwarding requests to the PHP or Node.js container. You need some server that binds to port 80 and redirects requests to the designated container.
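A sketch of what that nginx service could look like in this compose file (the mounted nginx.conf is hypothetical and would hold the proxy rules):
nginx:
  image: nginx
  ports:
    - "80:80"
  volumes:
    - ./configs/nginx.conf:/etc/nginx/conf.d/default.conf # hypothetical proxy config
  links:
    - web
    - node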

How can I specify an Alternate Exposed Port for Redis / RethinkDB (using Docker Compose)?

I'm working on Dockerizing a Node.js app, and I'm trying to set it up so that it responds on a non-standard port, thereby avoiding potential conflicts for team members who are already running a local Redis container or service.
Redis usually runs on 6379 (Docker or not). I want it to listen on 6380. Even though I don't have it in the docker-compose file, I want to do the same thing with RethinkDB.
I do not want to have to create a new Dockerfile for EITHER Redis or RethinkDB.
Here is my docker-compose file:
nodejsapp:
  image: some-node-container
  container_name: nodejsapp
  ports:
    - "5200:5200" # first number must match the NODEJSAPP_PORT env variable below
  depends_on:
    - redis
    - rethinkdb
  volumes:
    - ./:/app
  environment:
    - NODEJSAPP_PORT=5200 # must match the first port number for this service above
    - REDIS_PORT=6380 # must match first number in redis service -> ports below
    - RETHINKDB_PORT=28016 # must match first number in rethinkdb service -> ports below
redis:
  image: redis:3.2-alpine
  container_name: redis_cheapotle
  ports:
    - "6380:6379"
  expose:
    - "6380" # must match alternate "first" port above to avoid collisions
rethinkdb:
  image: rethinkdb
  container_name: rethinkdb_cheapotle
  ports:
    - "28016:28015" # first number must match RETHINKDB_PORT in the environment variables above; change both or none, they must always be the same
    - "8090:8080" # this is where you access the RethinkDB admin; if you have something on 8090, change to 8091:8080
  expose:
    - "28016" # must match alternate "first" port above to avoid collisions
After doing a few dockerizations, I thought this would be easy: set my environment variables, use process.env.whatever in my JS files, and be out the door and on to the next thing.
Wrong.
While I can get to the RethinkDB admin area at 0.0.0.0:8090 (notice the 8090 alternate port), none of my containers can talk to each other over their specified ports.
At first I tried the above WITHOUT the expose portion of the YAML, but I got the same result WITH the expose YAML added.
It seems like Docker / the containers refuse to forward traffic coming into the host's alternate port on to the container's standard port. I did some googling and didn't find anything in the first 20 minutes, so I figured I would post this while I continue my search.
I will post an answer if I find it myself in the process.
OK, so I was able to solve this. It seems the official RethinkDB and Redis containers don't receive container-to-container traffic through the modified host port: the normal ports: "XXXXX:YYYYY" YAML mapping is disregarded between containers, and the traffic is not sent from the modified host port to the standard container port.
The solution was to modify the command used to start the Redis / RethinkDB containers, using each one's command-line port flag (which differs between the two systems) to switch to my alternate port.
At first I tried using an environment-variable env file (but apparently those are not available immediately at run time). I also wanted users to be able to see ALL ports / settings for their stack directly in the docker-compose file, so the port-flag solution above seems to make sense.
I still don't know why Docker won't forward alternate host port traffic to the standard container port for these two services, while it WILL forward an alternate host port for the RethinkDB admin page (8090:8080).
Here is the docker-compose file I ended up with:
version: "2"
services:
nodejsapp:
image: some-node-container
container_name: cheapotle
ports:
- "5200:5200" #both numbers must match CHEAPOTLE_PORT env variable for the cheapotle service
depends_on:
- redis
- rethinkdb
volumes:
- ./:/app
environment:
- CHEAPOTLE_PORT=5200 #must match cheapotle service ports above.
- RETHINKDB_PORT=28016 #must match rethinkdb service->ports below.
- REDIS_PORT=6380 #must match redis service ->ports below.
- RESQUE_PORT=9292
entrypoint: foreman start -f /app/src/Procfile
redis:
image: redis:3.2-alpine
container_name: redis_cheapotle
ports:
- "6380:6380" #both numbers must match port in command below AND REDIS_PORT cheapotle service variable
command: redis-server --port 6380 #must match above ports AND REDIS_PORT cheapotle service variable
rethinkdb:
image: rethinkdb
container_name: rethinkdb_cheapotle
ports:
- "28016:28016" #The both numbers must match the RETHINKDB_PORT in environment variables for cheapotle above + command below. You must change allor none, they must always be the same.
- "8090:8080" #this is where you will access the RethinkDB admin. If you have something on 8090, change the port to 8091:8080
command: rethinkdb --driver-port 28016 --bind all #must match above ports AND REDIS_PORT cheapotle service variable
The docs for the RethinkDB command-line options can be found here: https://www.rethinkdb.com/docs/cli-options/, while the docs for the Redis command line can be found here: https://redis.io/topics/config
With the above I can start everything up on alternate ports that will not collide in the likely event that other devs on the team are already running RethinkDB and/or Redis.
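On the application side, the Node app can then read the alternate port from its environment; a minimal sketch, assuming the node-redis v3-style createClient(options) API:
const redis = require('redis');

// REDIS_PORT comes from the environment block in docker-compose.yml (6380 here).
const client = redis.createClient({
  host: 'redis',                                // Compose service name
  port: Number(process.env.REDIS_PORT) || 6380, // alternate port
});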
This is not a production grade setup so use at your own risk for the moment. Obviously RethinkDB would require additional configuration to allow other nodes to join the cluster at some other port than 29015.
Also, a warning before anyone starts working with RethinkDB command-line flags: for RethinkDB to accept a change in port, the --driver-port 28016 MUST come before --bind all; otherwise it is ignored, as if it were not even there.
You should link them together:)
nodejsapp:
  .
  .
  ports:
    - "5200:5200"
  .
  .
  links:
    - redis
    - rethinkdb
redis:
  .
  .
  ports:
    - "6380:6379"
  expose:
    - "6380"
rethinkdb:
  .
  .
  ports:
    - "28016:28015"
    - "8090:8080"
  expose:
    - "28016"

How to expose in a network?

The example below is from the docker-compose docs.
From my understanding, they want to have Redis port 6379 available in the web container.
Why don't they have
expose:
  - "6379"
in the redis container?
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
    networks:
      - front-tier
      - back-tier
  redis:
    image: redis
    volumes:
      - redis-data:/var/lib/redis
    networks:
      - back-tier
From the official Redis image:
This image includes EXPOSE 6379 (the redis port), so standard container linking will make it automatically available to the linked containers (as the following examples illustrate).
which is pretty much the typical way of doing things.
Redis Dockerfile.
You don't need links anymore now that we assign containers to Docker networks. And without linking, unless you publish all ports with docker run -P, there's no value in exposing a port on the container. Containers can talk to any port opened on any other container if they are on the same network (assuming default settings for ICC), so exposing a port becomes a no-op.
Typically, you only expose a port via the Dockerfile as an indicator to those running your image, or to use the -P flag. There are also some projects that look at exposed ports of other containers to know how to talk to them, specifically I'm thinking of nginx-proxy, but that's a unique case.
However, publishing a port makes that port available from the docker host, which always needs to be done from the docker-compose.yml or run command (you don't want image authors able to affect the docker host without some form of local admin acknowledgement). When you publish a specific port, it doesn't need to be exposed first.
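To make the distinction concrete, a minimal sketch: expose is informational, while ports actually publishes to the host:
services:
  web:
    build: .
    ports:
      - "5000:5000" # published: reachable from the host at localhost:5000
  redis:
    image: redis
    expose:
      - "6379" # informational only: web can already reach redis:6379 on the
               # shared network, with or without this line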
