Recently, I've been building a Node.js bot to run games on a phpBB forum. I have the bot running locally on my machine (logging in, posting, scraping threads, etc.), so, naturally, I've started to dockerize it.
However, when running through the login workflow from inside my Docker container, I'm receiving ECONNREFUSED errors when trying to hit the forum's API. There are no official docs for this API, so a lot of this has been self-investigation.
I'm running Node v14.4.0 with axios v0.19.2 + tough-cookie v4.0.0 for requests. I'm able to successfully curl the endpoint with the same payload from inside the container, so I suspect there's something going over my head with axios/Node.js. I've looked at other issues with Docker and ECONNREFUSED, but most of them on Stack Overflow relate to inter-container communication rather than problems reaching external APIs.
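For reference, the failing request is roughly of this shape (the endpoint, form fields, and the cookie-jar wiring are placeholders here, not the forum's actual API):

const axios = require('axios');

// Hypothetical login call; the real endpoint and payload differ.
axios.post(
  'https://forum.example.com/ucp.php?mode=login',
  new URLSearchParams({ username: 'bot', password: 'secret' }).toString(),
  { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } }
)
  .then(res => console.log('login status:', res.status))
  .catch(err => console.error(err.code)); // prints ECONNREFUSED inside the container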
The container is running as a root user, and I don't have any sort of proxies or wonky networking set up.
Does anyone have any advice or inklings? My docker-compose is pretty bare bones, but I've included it below for reference.
version: '3.2'
services:
  bot:
    build: .
    env_file:
      - .env
    ports:
      - '80:80'
Any tips or theories would be much appreciated; I've hit the end of my list!
Cheers!
I assume you are trying to reach the host from the container. A simple way of accomplishing this is to use host networking instead of container networking. Try
version: '3.2'
services:
  bot:
    build: .
    env_file:
      - .env
    network_mode: host
A better way is probably to put the rest of your application in containers as well and use the service discovery in Compose, e.g.
version: '3.2'
services:
  bot:
    build: .
    env_file:
      - .env
  webapp:
    ...
Then you can reach the PHP app by connecting to the host name "webapp" in the above example.
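With that setup, the bot reaches the forum by its service name instead of an external address; something along these lines (the path is illustrative, assuming the webapp container listens on port 80):

const axios = require('axios');

// "webapp" resolves to the phpBB container on the Compose network.
axios.get('http://webapp/index.php')
  .then(res => console.log(res.status))
  .catch(err => console.error(err.code));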
I'm trying to make a frontend app accessible to the outside. It depends on several other modules serving as backend services. These other services in turn rely on things like Kafka and OpenLink Virtuoso (a database).
How can I make them all accessible to each other, and how should I expose my frontend to the outside internet? Should I also remove any "localhost/port" in my code and replace it with the service name? Should I also replace every port in the code with the equivalent Docker port?
Here is an excerpt from my docker-compose.yml file.
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  frontend:
    build:
      context: ./Frontend
      dockerfile: ./Dockerfile
    image: "jcpbr/node-frontend-app"
    ports:
      - "3000:3000"
    # Should I use links to connect to every module the frontend accesses, and for the other modules as well?
    links:
      - "auth:auth"
  auth:
    build:
      context: ./Auth
      dockerfile: ./Dockerfile
    image: "jcpbr/node-auth-app"
    ports:
      - "3003:3003"
(...)
How can I make all of [my services] accessible to each other?
Do absolutely nothing. Delete the obsolete links: block you have. Compose automatically creates a network named default that you can use to communicate between the containers, and they can use the other Compose service names as host names; for example, your auth container could connect to kafka:9092. Also see Networking in Compose in the Docker documentation.
(Some other setups will advocate manually creating Compose networks: and overriding the container_name:, but this isn't necessary. I'd delete these lines in the name of simplicity.)
How should I expose my frontend to the outside internet?
That's what the ports: ['3000:3000'] line does. Anyone who can reach your host system on port 3000 (the first port number) will be able to access the frontend container. As far as an outside caller is concerned, they have no idea whether things are running in Docker or not, just that your host is running an HTTP server on port 3000.
Setting up a reverse proxy, maybe based on Nginx, is a little more complicated, but addresses some problems around communication from the browser application to the back-end container(s).
Should I also remove any "localhost/port" in my code?
Yes, absolutely.
...and replace it with the service name? every port?
No, because those settings will be incorrect in your non-container development environment, and will probably be incorrect again if you have a production deployment to a cloud environment.
The easiest right answer here is to use environment variables. In Node code, you might try
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';
If you're running this locally without those environment variables set, you'll get the usually-correct developer defaults. But in your Docker-based setup, you can set those environment variables
services:
  kafka:
    image: confluentinc/cp-kafka:latest
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092 # must match the Docker service name
  app:
    build: .
    environment:
      KAFKA_HOST: kafka
      # default KAFKA_PORT is still correct
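For example, if the Node services used the kafkajs client (an assumption; the question doesn't say which Kafka client is in use), the environment variables above could be consumed like this:

const { Kafka } = require('kafkajs');

// Fall back to local-development defaults when the env vars are not set.
const kafkaHost = process.env.KAFKA_HOST || 'localhost';
const kafkaPort = process.env.KAFKA_PORT || '9092';

const kafka = new Kafka({
  clientId: 'example-app', // illustrative name
  brokers: [`${kafkaHost}:${kafkaPort}`],
});

async function main() {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({ topic: 'example-topic', messages: [{ value: 'hello' }] });
  await producer.disconnect();
}

main().catch(console.error);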
I have a web app that runs in an Azure Linux Service Plan as a docker-compose deployment. My config is:
version: '3'
networks:
  my-network:
    driver: bridge
services:
  web-site:
    image: server.azurecr.io/site/site-production:latest
    container_name: web-site
    networks:
      - my-network
  nginx:
    image: server.azurecr.io/nginx/nginx-production:latest
    container_name: nginx
    ports:
      - "8080:8080"
    networks:
      - my-network
I've noticed that my app sometimes freezes for a while (usually less than a minute), and when I check Diagnose (Linux - Number of Running Containers per Host) I see this:
How could it be possible to have 20+ containers running?
Thanks.
I've created a new service plan (P2v2) for my app (and nothing else). The app has just two containers (.NET 3.1 and nginx), yet it shows 4 containers... but this is not a problem for me at all.
The problem I found in Application Insights was with a method that retrieves a blob to serve an image. Blobs are really fast for uploads and downloads, but they are terrible for searching: my method was checking whether the blob exists before sending it to the API, and this (async) check was blocking my API responses. I just removed the check and my app now runs as desired (all responses under 1 s, almost all under 250 ms).
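For what it's worth, the change boils down to something like this (a sketch assuming the @azure/storage-blob SDK and an Express handler; the original code isn't shown here):

const express = require('express');
const { BlobServiceClient } = require('@azure/storage-blob');

const app = express();
const containerClient = BlobServiceClient
  .fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING)
  .getContainerClient('images'); // container name is illustrative

app.get('/images/:name', async (req, res) => {
  const blobClient = containerClient.getBlobClient(req.params.name);
  try {
    // Skip the blobClient.exists() pre-check; just download and handle a missing blob.
    const download = await blobClient.download();
    download.readableStreamBody.pipe(res);
  } catch (err) {
    res.sendStatus(err.statusCode === 404 ? 404 : 500);
  }
});

app.listen(3000);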
Thanks for your help.
I am running a React app and a JSON server with docker-compose.
Normally I connect to the JSON server from my React app with the following:
fetch('localhost:8080/classes')
  .then(response => response.json())
  .then(classes => this.setState({ classlist: classes }));
Here is my docker-compose file:
version: "3"
services:
frontend:
container_name: react_app
build:
context: ./client
dockerfile: Dockerfile
image: praventz/react_app
ports:
- "3000:3000"
volumes:
- ./client:/usr/src/app
backend:
container_name: json_server
build:
context: ./server
dockerfile: Dockerfile
image: praventz/json_server
ports:
- "8080:8080"
volumes:
- ./server:/usr/src/app
The problem is I can't seem to get my React app to fetch this information from the JSON server.
On my local machine I use 192.168.99.100:3000 to see my React app, and I use 192.168.99.100:8080 to see the JSON server, but I can't seem to connect them with any of the following:
backend:8080/classes
json_server:8080/classes
backend/classes
json_server/classes
{host:"json_server/classes", port:8080}
{host:"backend/classes", port:8080}
Both the React app and the JSON server run perfectly fine independently with docker-compose up.
What should I be putting in fetch() ?
Remember that the React application always runs in some user's browser; it has no idea that Docker is involved, and can't reach or use any of the Docker-related networking setup.
on my local machine I use [...] 192.168.99.100:8080 to see the json server
Then that's what you need in your React application too.
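With the setup you describe (the 192.168.99.100 address suggests Docker Toolbox / docker-machine), the fetch call would look something like:

fetch('http://192.168.99.100:8080/classes')
  .then(response => response.json())
  .then(classes => this.setState({ classlist: classes }));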
You might consider setting up some sort of proxy in front of this that can, for example, forward URL paths beginning with /api to the backend container and forward other URLs to the frontend container (or better still, run a tool like Webpack to compile your React application to static files and serve that directly). If you have that setup, then the React application can use a path /api/v1/... with no host, and it will be resolved relative to whatever the browser thinks "the current host" is, which should usually be the proxy.
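As an illustration only (this assumes an Express-based proxy using the http-proxy-middleware package, which isn't part of your current setup), such a proxy can be quite small:

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// Requests under /api go to the backend container, addressed by its Compose service name.
app.use('/api', createProxyMiddleware({ target: 'http://backend:8080', pathRewrite: { '^/api': '' } }));

// Everything else goes to the frontend container (or serve the compiled static files directly).
app.use('/', createProxyMiddleware({ target: 'http://frontend:3000' }));

app.listen(80);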
You have two solutions:
use CORS on the Express server; see https://www.npmjs.com/package/cors (a minimal sketch follows below)
set up a proxy/reverse proxy using NGINX
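For the first option, a minimal sketch using the cors package (this assumes your backend is an Express app; adjust to whatever actually serves the JSON):

const express = require('express');
const cors = require('cors');

const app = express();

// Allow cross-origin requests from the React app.
app.use(cors({ origin: 'http://192.168.99.100:3000' }));

app.get('/classes', (req, res) => {
  res.json([]); // placeholder; your JSON server returns the real data
});

app.listen(8080);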
I am facing a weird dilemma. I have created a Node application, and this application needs to connect to MongoDB (running in a Docker container). I created a docker-compose file as follows:
version: "3"
services:
mongo:
image: mongo
expose:
- 27017
volumes:
- ./data/db:/data/db
my-node:
image: "<MY_IMAGE_PATH>:latest"
deploy:
replicas: 1
restart_policy:
condition: on-failure
working_dir: /opt/app
ports:
- "2000:2000"
volumes:
- ./mod:/usr/app
networks:
- webnet
command: "node app.js"
networks:
webnet:
I am using the official mongo image. I have omitted my own image path from the above configuration. I have tried many configurations, but I am unable to connect to MongoDB (yes, I have changed the MongoDB URI inside the Node.js application too). Whenever I deploy my docker-compose, my application always gives me a MongoNetworkError / TransientTransactionError on startup. I have been unable to find the problem for many hours.
One more weird thing: when running my docker-compose file, I receive the following logs:
Creating network server_default
Creating network server_webnet
Creating service server_mongo
Creating service server_feed-grabber
Could it be that the two services are on different networks? If so, how do I fix that?
Other Info:
The MongoDB URI I tried in the Node.js application is
mongodb://mongo:27017/MyDB
I am running my docker-compose with the command: docker stack deploy -c docker-compose.yml server
My node.js image is Ubuntu 18
Can anyone help me with this?
OK, so I tried a few things and finally figured it out after spending many hours. There were two things I was doing wrong:
First, the Docker startup logs showed that it was creating two networks, server_default and server_webnet; this was the first mistake. Both containers should be on the same network.
Second, I needed to start the Mongo container first, since my Node.js application depends on Mongo being up. I did exactly that in my docker-compose configuration by introducing the depends_on property. (Note that depends_on only controls start order, not readiness, so it also helps to retry the initial connection; see the sketch below.)
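A minimal retry sketch with the official mongodb driver (names are illustrative, not taken from the application in question):

const { MongoClient } = require('mongodb');

async function connectWithRetry(uri, attempts = 10) {
  for (let i = 1; i <= attempts; i++) {
    try {
      return await MongoClient.connect(uri, { useUnifiedTopology: true });
    } catch (err) {
      console.log(`Mongo not ready (attempt ${i}/${attempts}), retrying in 3s...`);
      await new Promise(resolve => setTimeout(resolve, 3000));
    }
  }
  throw new Error('Could not connect to MongoDB');
}

connectWithRetry('mongodb://mongo:27017/MyDB')
  .then(() => console.log('Connected to MongoDB'))
  .catch(err => { console.error(err); process.exit(1); });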
For me it was:
1- Get your IP by running the command
docker-machine ip
2- Don't go to localhost:port; go to your IP:port, for example: http://192.168.99.100:8080
I am looking for the best/simplest way to manage a local development environment for multiple stacks. For example, on one project I'm building a MEAN stack backend.
I was recommended to use Docker; however, I believe it would complicate the deployment process, because shouldn't you have one container for Mongo, one for Express, etc.? As found in this question on Stack Overflow.
How do developers manage multiple environments without VMs?
And in particular, what are the best practices for doing this on Ubuntu?
Thanks a lot.
With Docker Compose you can easily create multiple containers in one go. For development, the containers are usually configured to mount a local folder into the container's filesystem. This way you can easily work on your code and have live reloading. A sample docker-compose.yml could look like this:
version: '2'
services:
  node:
    build: ./node
    ports:
      - "3000:3000"
    volumes:
      - ./node:/src
      - /src/node_modules
    links:
      - mongo
    command: nodemon --legacy-watch /src/bin/www
  mongo:
    image: mongo
You can then just type
docker-compose up
And your stack will be up in seconds.