I do not understand how to run WebdriverIO e2e tests against my Node.js application.
As you can see, my Node.js application is also running as a Docker container.
But now I'm stuck on some very basic things:
So where do I have to put the test files I want to run? Do I have to copy them into the webdriverio container? If yes, into which folder?
How do I run the tests then?
This is my docker-compose setup for all the needed containers:
services:
  webdriverio:
    image: huli/webdriverio:latest
    depends_on:
      - chrome
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
  hub:
    image: selenium/hub
    ports:
      - 4444:4444
  chrome:
    image: selenium/node-chrome
    ports:
      - 5900
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    depends_on:
      - hub
  myApp:
    container_name: myApp
    image: 'registry.example.com/project/app:latest'
    restart: always
    links:
      - 'mongodb'
    environment:
      - ROOT_URL=https://example.com
      - MONGO_URL=mongodb://mongodb/db
  mongodb:
    container_name: mongodb
    image: 'mongo:3.4'
    restart: 'always'
    volumes:
      - '/opt/mongo/db/live:/data/db'
I have a complete minimal example in this link: Github
Q. So where do I have to put the test files which I want to run?
Put them in the container that will run Webdriver.io.
Q. Do I have to copy them into Webdriverio container? If yes, in which folder?
Yes. Put them wherever you want. Then you can run them sequentially from a command: script if you like.
Q. How do I run the tests then?
With docker-compose up, the test container will start and run them all.
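A minimal sketch of what that can look like in the compose file, assuming the specs live in a local ./test directory; the container-side path /opt/app and the wdio command are assumptions, so check the huli/webdriverio image documentation for the actual paths:

```yaml
webdriverio:
  image: huli/webdriverio:latest
  depends_on:
    - chrome
    - hub
  environment:
    - HUB_PORT_4444_TCP_ADDR=hub
    - HUB_PORT_4444_TCP_PORT=4444
  volumes:
    # Mount the local spec files into the container instead of copying them
    - ./test:/opt/app/test
  # Run the test suite when the container starts
  command: wdio wdio.conf.js
```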
Related
I am trying to access a Docker container which exposes an Express API (using Docker Compose services) in GitLab CI in order to run a number of tests against it.
I set up and instantiate the necessary Docker services in one job, then attempt to access the API via axios requests in my tests. I have set 0.0.0.0 as the endpoint base.
However, I keep receiving the error:
[Error: connect ECONNREFUSED 0.0.0.0:3000]
My docker-compose.yml:
version: "3"
services:
  st-sample:
    container_name: st-sample
    image: sample
    restart: always
    build: .
    expose:
      - "3000"
    ports:
      - "3000:3000"
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - /sampledb
    expose:
      - "27017"
    ports:
      - "27017:27017"
My gitlab-ci.yml:
image: docker:latest
services:
  - node
  - mongo
  - docker:dind
stages:
  - prepare_image
  - setup_application
  - test
  - teardown_application
prepare_image:
  stage: prepare_image
  script:
    - docker build -t sample .
setup_application:
  stage: setup_application
  script:
    - docker-compose -f docker-compose.yml up -d
test:
  image: node:latest
  stage: test
  allow_failure: true
  before_script:
    - npm install
  script:
    - npm test
teardown_application:
  stage: teardown_application
  script:
    - docker-compose -f docker-compose.yml stop
Note that I have also registered the runner on my machine, giving it privileged permissions.
Locally everything works as expected - docker containers are initiated and are accessed for the tests.
However I am unable to do this via GitLab CI. The Docker containers build and get set up normally, however I am unable to access the exposed API.
I have tried many things (setting a hostname for the container, a static IP, using the container name, and so on) but with no success: I just keep receiving ECONNREFUSED.
I understand that they have their own network isolation strategy for security reasons, but I am just unable to expose the docker service to be tested.
Can you give an insight to this please? Thank you.
I finally figured this out, following 4 days of reading, searching and lots of trial and error. The job running the tests was in a different container from the ones that exposed the API and the database.
I resolved this by creating a docker network in the device the runner was on:
sudo docker network create mynetwork
Following that, I referenced the network in the docker-compose.yml file with an external config and attached both services to it:
st-sample:
  # ....
  networks:
    - mynetwork
mongo:
  # ....
  networks:
    - mynetwork
networks:
  mynetwork:
    external: true
Also, I created a custom Docker image that includes the tests (name: test),
and in gitlab-ci.yml I set up the job to run it within mynetwork.
docker run --network=mynetwork test
Following that, the containers/services could reach each other by name, so I was able to run tests against http://st-sample.
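Put together, the test job in gitlab-ci.yml can then be sketched like this (the image name test and the network name mynetwork come from the steps above; everything else is a standard docker run invocation):

```yaml
test:
  stage: test
  script:
    # Run the pre-built test image on the same user-defined network,
    # so it can reach the API container by its service name (st-sample).
    - docker run --network=mynetwork test
```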
It was a long journey to figure it all out, but it was well-worth it - I learned a lot!
The problem:
I'm trying to set up a Docker WordPress development environment on Windows 11 with WSL2. I created a docker-compose.yml and everything works apart from the node service: the container tries to start and then just stops. Is something in my docker-compose file wrong?
I want to use node because I use gulp, npm and browser sync for my WordPress themes.
This is my docker-compose.yml:
version: "3.9"
services:
  db:
    image: mysql:5.7
    volumes:
      - dbdata:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - wp-content:/var/www/html/wp-content
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
  node:
    restart: "no"
    image: node:latest
    container_name: nodejs
    depends_on:
      - wordpress
    volumes:
      - node-data:/usr/src/app
    ports:
      - 3000:3000
      - 3001:3001
volumes:
  dbdata:
  wp-content:
  node-data:
The node container stops because nothing tells it what to run.
Think about it for a second: you spin the container up, but you never command node to do anything. When it sees no command to run, it simply exits.
Solution
Long way: use a Dockerfile, as mentioned by Tom. A Dockerfile is generally used to containerise your application: write your code, then build it into a Docker image on top of node. The trouble in that case is that every time you change the code, you have to repeat the same process and rebuild the image.
A simpler way I can think of is to add a command: npm start to your node service, and this should probably help.
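A sketch of what that simpler option can look like, assuming the theme sources (with a package.json) live in a local ./theme directory (a hypothetical path) and are bind-mounted instead of using the empty named volume:

```yaml
node:
  image: node:latest
  container_name: nodejs
  working_dir: /usr/src/app
  volumes:
    # Bind-mount the theme sources so code changes need no rebuild
    - ./theme:/usr/src/app
  # Give the container a long-running foreground process so it doesn't exit
  command: sh -c "npm install && npm start"
  ports:
    - 3000:3000
    - 3001:3001
```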
Forgive me if I have suggested something wrong, because I am also a Docker learner :)
I'm new to Docker and I'm having issues connecting to my managed database cluster on a cloud service, which is separate from the Docker machine and network.
Recently I switched to docker-compose, because manually writing the docker run command on every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I get this error when connecting to the database:
Unhandled error event: Error: connect ENOENT %22rediss://default:password#test.ondigitalocean.com:25061%22
But if I run it with the actual docker run command, with the ENV in the Dockerfile, then everything works fine:
docker run -d -p 4000:4000 --restart always test
However, I don't want to expose all the confidential details in the code repository by putting them in the Dockerfile.
Here is my dockerfile and docker-compose
dockerfile
FROM node:14.3.0
WORKDIR /kpb
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not include the " characters around the values of your environment variables in your docker-compose file.
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
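This is also where the %22 in the original error comes from: the quotes in the compose file become part of the value itself, and %22 is the percent-encoded " character. A small Node sketch illustrating this (the URL is a placeholder, not the real credential):

```javascript
// The env var the container actually received, quotes included:
const raw = '"rediss://default:pass@host.example.com:25061"';

// Percent-encoding the string shows where the %22 in the error comes from:
console.log(encodeURI(raw)); // %22rediss://default:pass@host.example.com:25061%22

// Without the surrounding quotes, the URL parses normally again:
const fixed = raw.replace(/^"|"$/g, '');
console.log(new URL(fixed).hostname); // host.example.com
```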
I'm working on an ecommerce site. I want the ability to upload product photos from the client and save them in a directory on the server.
I implemented this feature, but then I understood that since we use Docker for our deployment, the directory in which I save the pictures won't persist. As I searched, I realized that I should use volumes and map that directory in docker-compose. I'm a complete novice backend developer (I work on frontend), so I'm not really sure what I should do.
Here is the compose file:
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
If I want to store my photos in ../static/images (relative to the root of my project), what should I do, and how should I refer to this path in my backend code?
Backend is in nodejs (Nestjs).
You have to create a volume and tell docker-compose/docker stack to mount it within the container at the path you want. See the volumes section at the very end of the file and the volumes option on the nodejs service.
version: '3'
services:
  nodejs:
    image: node:latest
    environment:
      - MYSQL_HOST=[REDACTED]
      - FRONT_SITE_ADDRESS=[REDACTED]
      - SITE_ADDRESS=[REDACTED]
    container_name: [REDACTED]
    working_dir: /home/node/app
    ports:
      - "8888:7070"
    volumes:
      - ./:/home/node/app
      - static-files:/home/node/static/images
    command: node dist/main.js
    links:
      - mysql
  mysql:
    environment:
      - MYSQL_ROOT_PASSWORD=[REDACTED]
    container_name: product-mysql
    image: 'mysql:5.7'
    volumes:
      - ../data:/var/lib/mysql
volumes:
  static-files: {}
Doing this, an empty volume will be created to persist your data, and every time a new container mounts this path it can read the data stored in it. I would suggest using the same approach with mysql instead of saving data on the host.
https://docs.docker.com/compose/compose-file/#volume-configuration-reference
I've created two basic MEAN stack apps with a common database (MongoDB). I've also built Docker images for these apps.
Problem:
When I start a MEAN stack container (example-app-1) using
docker-compose up -d --build
the container runs smoothly, and I'm also able to hit the container and view my page locally.
When I try to start another MEAN stack container (example-app-2) using
docker-compose up -d --build
my previous container is stopped, and the current container works without any flaw.
Required:
I want both these containers to run simultaneously using a shared database. I need help in achieving this.
docker-compose.yml Example app -1
version: '3'
services:
  example_app_1:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_1:1.0.0
  backend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_1
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    networks:
      - default
    expose:
      - 8888
  frontend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/frontend/example_app_1
    ports:
      - '5200:5200'
    command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose.yml for Example app 2
version: '3'
services:
  example-app-2:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_2:1.0.0
  backend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_2
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    networks:
      - default
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    expose:
      - 3333
  frontend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/frontend/example-app-2
    ports:
      - '4200:4200'
    command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose creates containers named project_service. The project name, by default, comes from the last component of the directory. So when you start the second docker-compose, it stops the containers with those names and starts new ones.
Either move the two docker-compose files to separate directories so the container names differ, or run docker-compose with the --project-name flag set to distinct project names so the containers are started under different names.
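For the second option, the invocations can be sketched like this (the project names app1 and app2 and the file paths are arbitrary placeholders):

```shell
# Start each stack under its own project name so container names don't collide
docker-compose --project-name app1 -f example-app-1/docker-compose.yml up -d
docker-compose --project-name app2 -f example-app-2/docker-compose.yml up -d
```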