I am running a react app and a json server with docker-compose.
Usually I connect to the json server from my react app with the following:
fetch('localhost:8080/classes')
.then(response => response.json())
.then(classes => this.setState({classlist:classes}));
Here is my docker-compose file:
version: "3"
services:
  frontend:
    container_name: react_app
    build:
      context: ./client
      dockerfile: Dockerfile
    image: praventz/react_app
    ports:
      - "3000:3000"
    volumes:
      - ./client:/usr/src/app
  backend:
    container_name: json_server
    build:
      context: ./server
      dockerfile: Dockerfile
    image: praventz/json_server
    ports:
      - "8080:8080"
    volumes:
      - ./server:/usr/src/app
The problem is that I can't seem to get my react app to fetch this information from the json server.
On my local machine I use 192.168.99.100:3000 to see my react app, and I use 192.168.99.100:8080 to see the json server, but I can't seem to connect them with any of the following:
backend:8080/classes
json_server:8080/classes
backend/classes
json_server/classes
{host:"json_server/classes", port:8080}
{host:"backend/classes", port:8080}
Both the react app and the json server are running perfectly fine independently with docker-compose up.
What should I be putting in fetch() ?
Remember that the React application always runs in some user's browser; it has no idea that Docker is involved, and can't reach or use any of the Docker-related networking setup.
on my local machine I use [...] 192.168.99.100:8080 to see the json server
Then that's what you need in your React application too.
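For example, based on the fetch call in the question, something like this (a sketch; 192.168.99.100 is the docker-machine IP from the question, so substitute whatever address the browser itself can reach):

// Call the host-published address and port, not a Docker-internal
// hostname: the browser, not the container, is making this request.
fetch('http://192.168.99.100:8080/classes')
  .then(response => response.json())
  .then(classes => this.setState({ classlist: classes }));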
You might consider setting up some sort of proxy in front of this that can, for example, forward URL paths beginning with /api to the backend container and forward other URLs to the frontend container (or better still, run a tool like Webpack to compile your React application to static files and serve that directly). If you have that setup, then the React application can use a path /api/v1/... with no host, and it will be resolved relative to whatever the browser thinks "the current host" is, which should usually be the proxy.
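With such a proxy in place, the fetch call shrinks to a bare path (a sketch; the /api prefix is an assumption and must match whatever prefix the proxy routes to the backend):

// The browser resolves this against the current host (the proxy),
// so no Docker-specific hostname ever appears in the frontend code.
fetch('/api/classes')
  .then(response => response.json())
  .then(classes => this.setState({ classlist: classes }));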
You have two solutions:
use CORS on the Express server, see https://www.npmjs.com/package/cors (a sketch follows this list)
set up a proxy/reverse proxy using NGINX
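A minimal sketch of the CORS option, assuming the backend is a custom Express app (the file name and sample data are illustrative, not from the question):

// server/index.js
const express = require('express');
const cors = require('cors');

const app = express();
app.use(cors()); // allow cross-origin requests, e.g. from the React dev server

app.get('/classes', (req, res) => {
  res.json([{ id: 1, name: 'Biology' }]); // sample data
});

app.listen(8080);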
Related
I use docker compose for my project. It includes these containers:
Nginx
PostgreSQL
Backend (Node.js)
Frontend (SvelteKit)
I use SvelteKit's load function to send requests to my backend. In short, it sends an HTTP request to the backend container either on the client side or the server side, which means the request can be sent not only by the browser but also by the container itself.
I can't get both the client-side and the server-side fetch to work; only one of them works at a time.
I tried these URLs:
http://api.localhost/articles (only client-side request works)
http://api.host.docker.internal/articles (only server-side request works)
http://backend:8080/articles (only server-side request works)
I get this error:
From SvelteKit:
FetchError: request to http://api.localhost/articles failed, reason: connect ECONNREFUSED 127.0.0.1:80
From Nginx:
Timeout error
Docker-compose.yml file:
version: '3.8'
services:
  webserver:
    restart: unless-stopped
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    depends_on:
      - frontend
      - backend
    networks:
      - webserver
    volumes:
      - ./webserver/nginx/conf/:/etc/nginx/conf.d/
      - ./webserver/certbot/www:/var/www/certbot/:ro
      - ./webserver/certbot/conf/:/etc/nginx/ssl/:ro
  backend:
    restart: unless-stopped
    build:
      context: ./backend
      target: development
    ports:
      - 8080:8080
    depends_on:
      - db
    networks:
      - database
      - webserver
    volumes:
      - ./backend:/app
  frontend:
    restart: unless-stopped
    build:
      context: ./frontend
      target: development
    ports:
      - 3000:3000
    depends_on:
      - backend
    networks:
      - webserver
networks:
  database:
    driver: bridge
  webserver:
    driver: bridge
How can I send a server-side request to a docker container by using http://api.localhost/articles as the URL? I also want my container to be accessible by other containers as http://backend:8080 if possible.
Use SvelteKit's externalFetch hook to override the API URL, so that server-side requests use a different address than the one the browser uses.
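A minimal sketch of such a hook, assuming the public URL http://api.localhost and the internal service name backend:8080 from the question (externalFetch lived in src/hooks.js in SvelteKit versions of that era; newer releases replaced it with handleFetch):

// src/hooks.js
export async function externalFetch(request) {
  // Only runs during SSR: rewrite the public API host to the internal
  // Docker hostname, so the container talks to the backend directly
  // instead of going out through Nginx.
  if (request.url.startsWith('http://api.localhost/')) {
    request = new Request(
      request.url.replace('http://api.localhost/', 'http://backend:8080/'),
      request
    );
  }
  return fetch(request);
}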
In docker-compose, the containers should be able to access each other by name if they are in the same Docker network.
Your frontend container's SSR code should be able to call your backend container by using the URL:
http://backend:8080
Web browser should be able to call your backend by using the URL:
(whatever is configured in your Nginx configuration files)
Naturally, there are many reasons why this could fail. The best way to tackle this is to test the URLs one by one, server by server, using curl and by entering the addresses into the web browser's address bar. It's not possible to pinpoint the exact reason why it fails, because the question does not contain enough information, or a generally reproducible recipe for the issue.
For further information, here is our sample configuration for a dockerised SvelteKit frontend. The internal backend shortcut is defined using hooks and configuration variables. Here is our externalFetch example.
From within a docker compose network you will be able to curl from one container to another using the DNS name (the service name you gave in the compose file):
curl -X GET http://backend:8080
You can also achieve this by running all of these containers on the host network driver.
Regarding http://api.localhost/articles: you can change /etc/hosts and specify the IP address your computer should try to communicate with when this URL is used.
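For example, an illustrative entry that points the hostname at the local machine, where Nginx publishes port 80:

# /etc/hosts
127.0.0.1   api.localhost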
I want to run two docker containers: one is node server(backend) and other has react js code(frontend).
My node contains an API as shown below:
app.get('/build', function (req, res) {
  ...
  ...
});

app.listen(3001);
console.log("Listening to PORT 3001");
I am using this API in my react code as follows:
componentDidMount() {
  axios.get('http://localhost:3001/build', {
    headers: { "x-access-token": this.state.token }
  })
    .then(response => {
      const builds = response.data.message;
      // console.log("builds", builds);
      this.setState({ builds: builds, done: true });
    });
}
But when I run the two Docker containers, exposing 3001 for the backend container and 3000 for the frontend container, and access http://aws-ip:3000 (aws-ip is the public IP of my AWS instance where I am running the two docker containers), the request is still made to http://localhost:3001/build, so I am not able to hit the node api of the docker container.
What changes should I make in the existing setup so that my react application can fetch the data from node server which is running on the same AWS instance?
You can follow this tutorial.
I think you can achieve that with docker-compose: https://docs.docker.com/compose/
Example: https://dev.to/numtostr/running-react-and-node-js-in-one-shot-with-docker-3o09
and here is how I am using it:
version: '3'
services:
  awsService:
    build:
      context: ./awsService
      dockerfile: Dockerfile.dev
    volumes:
      - ./awsService/src:/app/src
    ports:
      - "3000:3000"
  keymaster:
    build:
      context: ./keymaster
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./keymaster:/app
    ports:
      - "8080:8080"
  postgres:
    image: postgres:12.1
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: mypassword
    volumes:
      - ./postgresql/data:/var/lib/postgresql/data
  service:
    build:
      context: ./service
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./service/config:/app/config
      - ./service/src:/app/src
    ports:
      - "3001:3000"
  ui:
    build:
      context: ./ui
      dockerfile: Dockerfile.dev
    volumes:
      - /app/node_modules
      - ./ui:/app
    ports:
      - "8081:8080"
For future reference, if you have something running on the same port, just bind your local machine port to a different port.
Hope this helps.
As you rightly said, the frontend app accessed in the browser cannot reach your API via http://localhost:3001. Instead, your react application should access the API via http://[ec2-instance-elastic-ip]:3001, and it should have that address in its code. Your ec2 instance security group should allow incoming traffic on port 3001.
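For example, if the frontend was bootstrapped with Create React App, the address can come from an environment variable baked in at build time (REACT_APP_API_URL is an illustrative name, not from the question):

// .env, read by Create React App at build time:
//   REACT_APP_API_URL=http://[ec2-instance-elastic-ip]:3001

componentDidMount() {
  axios.get(`${process.env.REACT_APP_API_URL}/build`, {
    headers: { "x-access-token": this.state.token }
  })
    .then(response => {
      this.setState({ builds: response.data.message, done: true });
    });
}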
The above setup is enough to solve the problem, but here are some additional tips:
Assign an elastic IP to your instance. Otherwise, the public IP address of the instance will change if you stop/start the instance.
Set up a domain name for your API for flexibility: it is easy to remember, and you can redeploy anywhere and point the domain name at the new address (there are many ways to do this, such as setting up a load balancer or setting up CloudFront with an ec2 origin).
Set up SSL for better security (again, via a load balancer or CloudFront with an ec2 origin).
Since your react app is a static website, you can easily serve it from a static s3 website.
Hope this helps.
I have 2 applications.
1.Backend nodejs application
2.Frontend nodejs application
Both are running in docker containers.
My goal is to upload images from the backend application and access them from the frontend application. As far as I can tell, the only way to share image files between containers is volumes.
So I created a volume "assets" in the docker-compose file. But how can I write data to the volume folder from the backend app, and how can I access the volume folder from the frontend application?
Expected behaviour
// on backend app
fs.writeFileSync("{volume_magic_path}/sample.txt", "Hey there!");
// on frontend app
fs.readFileSync("{volume_magic_path}/sample.txt", 'utf8');
docker-compose.yml
version: "3.7"
services:
  express:
    build: ./
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - .:/usr/src/app
volumes:
  assets:
So basically, what should I write for "volume_magic_path" to access the volume folder?
It looks like an XY problem. A front-end app never has to access physical files directly. Front-end code is made to run in browsers, and browsers don't have file system APIs like that. The interface for accessing files is HTTP.
What I think you're looking for is to upload files from the front end and make them available as HTTP resources. In order to do that, you'll have to create endpoints for file upload and resource access. I would recommend using express.static() if you're using Express, or whatever the equivalent is for the HTTP library you're using, for serving your files.
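A minimal sketch of that approach (the /usr/src/app/assets path is illustrative; it should be wherever the named volume is mounted in the express container):

const express = require('express');
const fs = require('fs');

const app = express();
const ASSETS_DIR = '/usr/src/app/assets'; // mount point of the "assets" volume

// The backend writes into the volume...
fs.writeFileSync(`${ASSETS_DIR}/sample.txt`, 'Hey there!');

// ...and serves it over HTTP, so the front end fetches
// http://<host>:5000/assets/sample.txt instead of touching the file system.
app.use('/assets', express.static(ASSETS_DIR));

app.listen(5000);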
Please find the example below. In the backend or the frontend service you have to write the path inside your container in place of location_in_the_container:
services:
  nginx:
    build: ./nginx/
    ports:
      - 80:80
    links:
      - php
    volumes:
      - app-volume:location_in_the_container
  express:
    build: ./
    restart: always
    ports:
      - "5000:5000"
    volumes:
      - app-volume:location_in_the_container
volumes:
  app-volume:
Please replace location_in_the_container.
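For instance, if both services mounted the volume at /assets (an illustrative path, i.e. - app-volume:/assets under each service's volumes:), the snippets from the question would become:

// on the backend app
fs.writeFileSync("/assets/sample.txt", "Hey there!");

// on the frontend app (Node.js code only; see the previous answer for
// why browser code can never read the volume directly)
fs.readFileSync("/assets/sample.txt", "utf8");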
Please check Volume configuration reference
Let's say we have three services:
- php + apache
- mysql
- nodejs
I know how to use docker-compose to set up an application that links mysql with the php/apache service. I was wondering how we can add a node.js service whose only purpose is to manage the javascript/css assets. Since docker provides this flexibility, I would rather use a docker service instead of setting up node.js on my host computer.
version: '3.2'
services:
  web:
    build: .
    image: lap
    volumes:
      - ./webroot:/var/www/app
      - ./configs/php.ini:/usr/local/etc/php/php.ini
      - ./configs/vhost.conf:/etc/apache2/sites-available/000-default.conf
    links:
      - dbs:mysql
  dbs:
    image: mysql
    ports:
      - "3307:3306"
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_PASSWORD=rest
      - MYSQL_DATABASE=symfony_rest
      - MYSQL_USER=restman
    volumes:
      - /var/mysql:/var/lib/mysql
      - ./configs/mysql.cnf:/etc/mysql/conf.d/mysql.cnf
  node:
    image: node
    volumes:
      - ./webroot:/var/app
    working_dir: /var/app
I am not sure this is the correct strategy: I am sharing ./webroot with both the web and node services. docker-compose up -d only starts mysql and web and fails to start the node container; probably there is no valid entrypoint set.
If you want to use node js separately from the PHP service, you must set two more options to make node stay up. One is stdin_open and the other one is tty, like below:
stdin_open: true
tty: true
This is equivalent to the CLI flag -it, like below:
docker container run --name nodeapp -it node:latest
If your node app runs on a separate port (e.g. your frontend is completely separate from your backend and you must run it independently, say with npm run start), you must publish that port, like below:
ports:
  - 3000:3000
The ports structure is hostPort:containerPort.
This publishes port 3000 from inside the node container to port 3000 on the system; in other words, it makes port 3000 inside your container accessible on your system, so you can reach it at localhost:3000.
In the end, your node service would look like below:
node:
  image: node
  stdin_open: true
  tty: true
  volumes:
    - ./webroot:/var/app
  working_dir: /var/app
You can also add an nginx service to docker-compose, and nginx can take care of forwarding requests to the php container or the node.js container. You need some server that binds to port 80 and redirects requests to the designated container.
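A rough sketch of what that nginx configuration could look like (the service names come from the compose file above, but the node port and the /assets prefix are assumptions, since the node service in the question doesn't run a server at all):

# default.conf -- illustrative only
server {
    listen 80;

    # requests for js/css assets go to the node container
    location /assets/ {
        proxy_pass http://node:3000/;
    }

    # everything else goes to the php/apache container
    location / {
        proxy_pass http://web:80/;
    }
}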
I am building a chat application that I am implementing in Docker. I have a NodeJS container with socket.io and a container with an apache server and the website on it.
The thing is, I need to connect the website (with javascript) to the NodeJS server. I have looked at the Docker-compose docs and read about networking. The docs said that the address should be the name of the container. But when I try that I get the following error in my browser console:
GET http://nodejs:3000/socket.io/socket.io.js net::ERR_NAME_NOT_RESOLVED
The whole project works outside containers. The only thing that I cannot figure out is the connection between the NodeJS container and the Apache container.
code that throws the error:
<script type="text/javascript" src="//nodejs:3000/socket.io/socket.io.js"></script>
My docker compose file:
version: '3.5'
services:
  apache:
    build:
      context: ./
      dockerfile: ./Dockerfile
    networks:
      default:
    ports:
      - 8080:80
    volumes:
      - ./:/var/www/html
    container_name: apache
  nodejs:
    image: node:latest
    working_dir: /home/node/app
    networks:
      default:
    ports:
      - '3001:3000'
    volumes:
      - './node_server/:/home/node/app'
    command: [npm, start]
    depends_on:
      - mongodb
    container_name: nodejs
networks:
  default:
    driver: bridge
Can anyone explain to me how to successfully connect the apache container to the NodeJS container so it can serve the socket.io.js file?
I can give more of the source code if needed.
The nodejs service is publishing port 3001, not 3000. 3001:3000 is a port mapping which forwards host port :3001 to the :3000 container port. So you would need to point it to nodejs:3001.
However, I don't think that'll work, since the nodejs hostname is not resolvable by the browser. You need to point it to the host on which docker is running, since that is where those ports are published. If you are running this locally it might look like:
<script type="text/javascript" src="//localhost:3001/socket.io/socket.io.js"></script>
In other words, you are not connecting to the nodejs server from the apache service, you are accessing it externally through the browser.