ECONNREFUSED when attempting to access Docker service in GitLab CI - node.js

I am trying to access a Docker container which exposes an Express API (using Docker Compose services) in GitLab CI in order to run a number of tests against it.
I set up and start the necessary Docker services as one job, then attempt to access the API via axios requests in my tests. I have set 0.0.0.0 as the base of the endpoint URL.
However, I keep receiving the error:
[Error: connect ECONNREFUSED 0.0.0.0:3000]
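Roughly, the tests hit the API like this (a simplified sketch, assuming a Jest-style runner; the route and assertion are only illustrative):

// simplified sketch of the test set-up; every request fails the same way in CI
const axios = require('axios');

const api = axios.create({ baseURL: 'http://0.0.0.0:3000' });

test('API responds', async () => {
  const res = await api.get('/');   // any exposed route returns ECONNREFUSED
  expect(res.status).toBe(200);
});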
My docker-compose.yml:
version: "3"
services:
st-sample:
container_name: st-sample
image: sample
restart: always
build: .
expose:
- "3000"
ports:
- "3000:3000"
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- /sampledb
expose:
- "27017"
ports:
- "27017:27017"
My gitlab-ci.yml:
image: docker:latest
services:
  - node
  - mongo
  - docker:dind
stages:
  - prepare_image
  - setup_application
  - test
  - teardown_application
prepare_image:
  stage: prepare_image
  script:
    - docker build -t sample .
setup_application:
  stage: setup_application
  script:
    - docker-compose -f docker-compose.yml up -d
test:
  image: node:latest
  stage: test
  allow_failure: true
  before_script:
    - npm install
  script:
    - npm test
teardown_application:
  stage: teardown_application
  script:
    - docker-compose -f docker-compose.yml stop
Note that I have also registered the runner on my machine and given it privileged permissions.
Locally everything works as expected - docker containers are initiated and are accessed for the tests.
However I am unable to do this via GitLab CI. The Docker containers build and get set up normally, however I am unable to access the exposed API.
I have tried many things, such as setting a hostname for the container, assigning a static IP, using the container name, etc., but with no success: I just keep receiving ECONNREFUSED.
I understand that GitLab CI jobs have their own network isolation strategy for security reasons, but I am simply unable to expose the Docker service so it can be tested.
Can you give an insight to this please? Thank you.

I finally figured this out, after four days of reading, searching and lots of trial and error. The job running the tests was in a different container from the ones that exposed the API and the database.
I resolved this by creating a Docker network on the machine the runner was on:
sudo docker network create mynetwork
Following that, I declared the network as external in docker-compose.yml and attached both services to it:
st-sample:
  # ....
  networks:
    - mynetwork

mongo:
  # ....
  networks:
    - mynetwork

networks:
  mynetwork:
    external: true
Also, I created a custom Docker image that includes the tests (tagged test), and in gitlab-ci.yml I set up the test job to run it inside mynetwork:
docker run --network=mynetwork test
After that, the containers/services were reachable by name from one another, so I was able to run the tests against http://st-sample.
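For illustration, the test job ended up looking roughly like this (a sketch rather than my exact file; it assumes the test image has already been built on the runner):

test:
  stage: test
  allow_failure: true
  script:
    # the "test" image bundles the test suite and is attached to mynetwork
    - docker run --network=mynetwork test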
It was a long journey to figure it all out, but it was well-worth it - I learned a lot!

Related

Running into issues while deploying multi container instances on Azure using Docker compose

docker-compose up locally is able to build and bring up the services, but when doing the same on Azure Container Instances I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure
sending request: StatusCode=400 -- Original Error:
Code="InaccessibleImage" Message="The image
'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy'
is not accessible. Please check the image and registry credential."
Also, what is the purpose of the image docker/aci-hostnames-sidecar?
The ACI deployment was working fine, and now it suddenly doesn't work anymore.
The docker-compose.yml contents are provided below:
version: '3.7'
services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"
  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There might have been an issue on Azure's end, as I was later able to deploy the containers the way I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally and then push them to your ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, then you need to make sure you are logged in to your ACR with the right credentials and that the image name and tag are exactly right.
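A minimal sketch of that flow, assuming the registry name oasisdatacr (taken from the image names above) and that the compose file is in the current directory:

# log in to the registry, then build and push every image referenced in docker-compose.yml
az acr login --name oasisdatacr
docker-compose build
docker-compose push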

docker-compose network error, can't connect to other host

I'm new to Docker and I'm having issues connecting to my managed database cluster, which is a cloud service separate from the Docker machine and its network.
Recently I switched to docker-compose, because manually writing the docker run command for every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I get this error when connecting to the database:
Unhandled error event: Error: connect ENOENT %22rediss://default:password#test.ondigitalocean.com:25061%22
But if I run it with the plain docker run command below, with the environment variables set in the Dockerfile, then everything works fine.
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential details in the code repository by putting them in the Dockerfile.
Here are my Dockerfile and docker-compose file:
dockerfile
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not include the " characters around the values of your environment variables in your docker-compose file.
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
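If you want to double-check what is actually passed to the container, you can render the resolved file (assuming the variables are exported in your shell or defined in a .env file next to docker-compose.yml):

# prints the compose file with all ${...} substitutions applied
docker-compose config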

How to connect to Node API docker container from Angular Nginx container

I am currently working on an Angular app that uses a REST API (Express, Node.js) and PostgreSQL. Everything worked well when hosted on my local machine. After testing, I moved the images to an Ubuntu server so the app can be hosted on an external port. I am able to access the Angular frontend at https://server-external-ip:80, but when trying to log in, Nginx does not connect to the Node API. Here is my docker-compose file:
version: '3.0'
services:
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    networks:
      - my-network
  backend: # name of the second service
    image: myId/mynodeapi
    ports:
      - 3000:3000
    environment:
      POSTGRES_DB: myDb
      POSTGRES_PASSWORD: myPwd
      POSTGRES_PORT: 5432
      POSTGRES_HOST: db
    depends_on:
      - db
    networks:
      - my-network
    command: bash -c "sleep 20 && node server.js"
  myapp:
    image: myId/myangularapp
    ports:
      - "80:80"
    depends_on:
      - backend
    networks:
      - my-network
networks:
  my-network:
I am not sure what the apiUrl should be. I have tried the following, and nothing worked:
apiUrl: "http://backend:3000/api"
apiUrl: "http://server-external-ip:3000/api"
apiUrl: "http://server-internal-ip:3000/api"
apiUrl: "http://localhost:3000/api"
I think you should use the docker-compose service names as DNS hostnames. Based on your docker-compose file, the following hosts/ports are available:
db:5432
http://backend:3000
http://myapp
Make sure the backend service uses db as the POSTGRES_HOST in its environment section (db is the service name, so it resolves on the shared network).
Take a look at my repo; I think it is the best way to learn how a similar project works and how to build several apps behind nginx. You can also check my docker-compose.yml: it uses several services that are proxied by nginx and work together.
At that link you'll find an nginx/default.conf file containing several nginx upstream configurations; take a look at how I used docker-compose service references there as hosts.
Inside the client/ directory, I also have another nginx instance serving as the web server for a React.js project.
The server/ directory contains a Node.js API; it connects to Redis and a PostgreSQL database, also built from docker-compose.yml.
If you need to route or redirect traffic to /api, you can use some nginx config like the sketch below.
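A hedged sketch of what that might look like in the myapp container's nginx config (default.conf), assuming the Angular build is served from /usr/share/nginx/html and the API listens on backend:3000 on the shared network:

server {
    listen 80;

    location / {
        root /usr/share/nginx/html;        # assumption: Angular build output lives here
        try_files $uri $uri/ /index.html;
    }

    location /api {
        proxy_pass http://backend:3000;    # "backend" is resolved via the my-network DNS
        proxy_set_header Host $host;
    }
}

With something along these lines, the Angular app can use a relative apiUrl such as /api and let nginx forward the requests to the backend container.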
I think this use case can be useful for you and other users!

Docker - How to wait for container running

I want to launch three containers for my web application.
The containers are: frontend, backend and a Mongo database.
To do this I wrote the following docker-compose.yml:
version: '3.7'
services:
  web:
    image: node
    container_name: web
    ports:
      - "3000:3000"
    working_dir: /node/client
    volumes:
      - ./client:/node/client
    links:
      - api
    depends_on:
      - api
    command: npm start
  api:
    image: node
    container_name: api
    ports:
      - "3001:3001"
    working_dir: /node/api
    volumes:
      - ./server:/node/api
    links:
      - mongodb
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    restart: always
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./database/data:/data/db
      - ./database/config:/data/configdb
and updated the connection string in my .env file:
MONGO_URI = 'mongodb://mongodb:27017/test'
I run it with docker-compose up -d and everything starts.
The problem appears when I run docker logs api -f to monitor the backend status: I get MongoNetworkError: failed to connect to server [mongodb:27017] on first connect, because my mongodb container is up but not yet accepting connections (it becomes ready only after the backend has already tried to connect).
How can I check that mongodb is ready to accept connections before starting the api container?
Thanks in advance
Several possible solutions, in order of preference:
1. Configure your application to retry after a short delay and eventually time out after too many connection failures. This is the ideal solution for portability, and it can also handle the database restarting after your application is already running and connected.
2. Use an entrypoint that waits for mongo to become available. You can attempt a full mongo client connect + login, or a simple TCP port check with a script like wait-for-it (see the sketch at the end of this answer). Once that check finishes (or times out and fails), you can continue the entrypoint to launch your application.
3. Configure Docker to retry starting your application with a restart policy, or deploy it with orchestration that automatically recovers when the application crashes. This is a less than ideal solution, but extremely easy to implement.
Here's an example of option 3:
api:
  image: node
  deploy:
    restart_policy:
      # valid conditions are none, on-failure and any ("unless-stopped" is not accepted here)
      condition: any
Note: looking at your compose file, you have a mix of v2 and v3 syntax, and many options like depends_on, links, and container_name are not valid with swarm mode. You are also defining settings like working_dir that should really be set in your Dockerfile instead.
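For option 2, here is a minimal sketch, assuming you copy wait-for-it.sh into ./server (which is mounted at /node/api) and make it executable:

api:
  image: node
  container_name: api
  working_dir: /node/api
  volumes:
    - ./server:/node/api
  depends_on:
    - mongodb
  # block until mongodb accepts TCP connections (up to 60 s), then start the app
  command: sh -c "./wait-for-it.sh mongodb:27017 --timeout=60 -- npm start"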

Where to put test files for webdriverIO testing - using docker container?

I do not understand how to run WebdriverIO e2e tests against my Node.js application.
As you can see, my Node.js application is also running as a Docker container.
But now I am stuck on some very basic things:
So where do I have to put the test files which I want to run? Do I have to copy them into the webdriverio container? If yes, in which folder?
How do I run the tests then?
This is my docker-compose setup for all the needed Docker containers:
services:
  webdriverio:
    image: huli/webdriverio:latest
    depends_on:
      - chrome
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
  hub:
    image: selenium/hub
    ports:
      - 4444:4444
  chrome:
    image: selenium/node-chrome
    ports:
      - 5900
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    depends_on:
      - hub
  myApp:
    container_name: myApp
    image: 'registry.example.com/project/app:latest'
    restart: always
    links:
      - 'mongodb'
    environment:
      - ROOT_URL=https://example.com
      - MONGO_URL=mongodb://mongodb/db
  mongodb:
    container_name: mongodb
    image: 'mongo:3.4'
    restart: 'always'
    volumes:
      - '/opt/mongo/db/live:/data/db'
I have a complete minimal example in this link: Github
Q. So where do I have to put the test files which I want to run?
Put them in the container that will run Webdriver.io.
Q. Do I have to copy them into the WebdriverIO container? If yes, in which folder?
Yes. Put them wherever you want. Then you can run them sequentially, from a command: script if you like.
Q. How do I run the tests then?
With docker-compose up, the test container will start and run them all.
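For example, here is a hedged sketch of the webdriverio service with the spec files mounted in (the ./test folder, the /test mount point, the wdio.conf.js file and the wdio CLI being available in the huli/webdriverio image are all assumptions for illustration):

webdriverio:
  image: huli/webdriverio:latest
  depends_on:
    - chrome
    - hub
  environment:
    - HUB_PORT_4444_TCP_ADDR=hub
    - HUB_PORT_4444_TCP_PORT=4444
  volumes:
    - ./test:/test              # hypothetical folder holding wdio.conf.js and your spec files
  working_dir: /test
  command: wdio wdio.conf.js    # assumes the image ships the wdio CLI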
