I need to create a pipeline in Azure that runs my autotests in a Docker container. I made it work successfully on my local machine using the following steps:
Create a Selenium node with the following command:
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:4.0.0-beta-1-20210215
Build the image with the command docker build -t my_tests . and then run a container from that image to execute the tests.
Here is my Dockerfile:
FROM maven:onbuild
# Copy the test sources and build files into the image
COPY src /home/bns_bdd_automation/src
COPY pom.xml /home/bns_bdd_automation
COPY .gitignore /home/bns_bdd_automation
# Run the test suite when the container starts
CMD mvn -f /home/bns_bdd_automation/pom.xml clean test
Everything works fine, but only locally.
In the cloud I faced an issue: I need to RUN the Selenium node first, and only after that build and run my image.
As I understood from some articles, I need to use docker-compose (to run the first image), but I don't know how. Can you help me with that?
Well, here is my docker-compose.yml file:
version: "3"
services:
selenium-hub:
image: selenium/hub
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
bns_bdd_automation:
depends_on:
- selenium-hub
- chrome
build: .
But it doesn't work as I expected: it builds and RUNS the tests BEFORE the hub and chrome containers are up. And after that it shows me in the terminal:
WARNING: Image for service bns_bdd_automation was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Starting selenium-hub ... done
Starting bns_bdd_automation_chrome_1 ... done
Recreating bns_bdd_automation_bns_bdd_automation_1 ... error
ERROR: for bns_bdd_automation_bns_bdd_automation_1 no such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889: No such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889
ERROR: for bns_bdd_automation no such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889: No such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]
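One common way to handle this (a minimal sketch, not from the original post): depends_on only controls start order, it does not wait for the hub to be ready, so the test service can poll the hub's port before launching Maven. This assumes bash is available in the Maven image and that the tests target http://selenium-hub:4444/wd/hub instead of localhost:

  bns_bdd_automation:
    depends_on:
      - selenium-hub
      - chrome
    build: .
    command: >
      bash -c "until (echo > /dev/tcp/selenium-hub/4444) 2>/dev/null; do sleep 1; done;
      mvn -f /home/bns_bdd_automation/pom.xml clean test"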
Related
I am trying to access a Docker container which exposes an Express API (using Docker Compose services) in GitLab CI, in order to run a number of tests against it.
I set up and instantiate the necessary Docker services as one task, then I attempt to access the API via axios requests in my tests. I have set 0.0.0.0 as the endpoint base.
However, I keep receiving the error:
[Error: connect ECONNREFUSED 0.0.0.0:3000]
My docker-compose.yml:
version: "3"
services:
st-sample:
container_name: st-sample
image: sample
restart: always
build: .
expose:
- "3000"
ports:
- "3000:3000"
links:
- mongo
mongo:
container_name: mongo
image: mongo
volumes:
- /sampledb
expose:
- "27017"
ports:
- "27017:27017"
My gitlab-ci.yml:
image: docker:latest

services:
  - node
  - mongo
  - docker:dind

stages:
  - prepare_image
  - setup_application
  - test
  - teardown_application

prepare_image:
  stage: prepare_image
  script:
    - docker build -t sample .

setup_application:
  stage: setup_application
  script:
    - docker-compose -f docker-compose.yml up -d

test:
  image: node:latest
  stage: test
  allow_failure: true
  before_script:
    - npm install
  script:
    - npm test

teardown_application:
  stage: teardown_application
  script:
    - docker-compose -f docker-compose.yml stop
Note that I have also registered the runner on my machine and given it privileged permissions.
Locally everything works as expected - the Docker containers are started and can be reached by the tests.
However I am unable to do this via GitLab CI. The Docker containers build and get set up normally, but I am unable to access the exposed API.
I have tried many things, like setting a hostname for accessing the container, setting a static IP, using the container name, etc., but with no success - I just keep receiving ECONNREFUSED.
I understand that they have their own network isolation strategy for security reasons, but I am just unable to expose the docker service to be tested.
Can you give an insight to this please? Thank you.
I finally figured this out, after 4 days of reading, searching and lots of trial and error. The job running the tests was in a different container from the ones that exposed the API and the database.
I resolved this by creating a Docker network on the machine the runner was on:
sudo docker network create mynetwork
After that, I added the network to the docker-compose.yml file, with an external config, and attached both services to it:
  st-sample:
    # ....
    networks:
      - mynetwork
  mongo:
    # ....
    networks:
      - mynetwork

networks:
  mynetwork:
    external: true
Also, I created a custom Docker image that includes the tests (name: test),
and in gitlab-ci.yml I set up the job to run it inside mynetwork:
docker run --network=mynetwork test
After that, the containers/services could reach each other by name, so I was able to run the tests against http://st-sample.
It was a long journey to figure it all out, but it was well-worth it - I learned a lot!
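For illustration only, a hedged sketch of what that job could look like in .gitlab-ci.yml (the job name, stage and the test image tag are assumptions based on the description above, and it presumes a runner/executor that can reach the host's Docker daemon):

test:
  stage: test
  script:
    - docker network create mynetwork || true   # create the external network if it does not exist yet
    - docker-compose -f docker-compose.yml up -d
    - docker run --network=mynetwork test       # the custom image that bundles the tests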
docker-compose up locally is able to build and bring up the services, but when doing the same on Azure Container Instances I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure
sending request: StatusCode=400 -- Original Error:
Code="InaccessibleImage" Message="The image
'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy'
is not accessible. Please check the image and registry credential."
Also, what is the purpose of the image docker/aci-hostnames-sidecar?
The ACI deployment was working fine and now suddenly it doesn't work anymore.
The docker-compose.yml contents are provided below:
version: '3.7'

services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"

  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn

  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There might have been some issue on Azure's end, as I was later able to deploy the containers as I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally, and then push them to the ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, make sure you are logged in to the ACR with the right credentials and that the image name and tag are exactly right.
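As a rough sketch of that flow (the registry name oasisdatacr comes from the compose file above; the ACI context name is a placeholder and assumes Docker's ACI integration is set up):

az acr login --name oasisdatacr        # or: docker login oasisdatacr.azurecr.io
docker-compose build                   # build the images locally
docker-compose push                    # push the image: references to the ACR
docker context use myacicontext        # ACI context created earlier with: docker context create aci myacicontext
docker compose up                      # deploy the compose application to ACI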
I'm new to Docker and I'm having issues connecting to my managed database cluster on the cloud service, which is separate from the Docker machine and network.
So recently I attempted to use docker-compose, because manually writing the docker run command for every update is a hassle, so I configured the yml file.
Whenever I use docker-compose, I have issues connecting to the database with this error:
Unhandled error event: Error: connect ENOENT %22rediss://default:password@test.ondigitalocean.com:25061%22
But if I run it with the plain docker run command, with the ENV set in the Dockerfile, everything works fine:
docker run -d -p 4000:4000 --restart always test
But I don't want to expose all the confidential data in the code repository by putting all the details in the Dockerfile.
Here are my Dockerfile and docker-compose:
Dockerfile
FROM node:14.3.0
WORKDIR /kpb
# Install dependencies first so Docker can cache this layer
COPY package.json /kpb
RUN npm install
COPY . /kpb
CMD ["npm", "start"]
docker-compose
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION="${PRODUCTION}"
      - DB_SSL="${DB_SSL}"
      - DB_CERT="${DB_CERT}"
      - DB_URL="${DB_URL}"
      - REDIS_URL="${REDIS_URL}"
      - SESSION_KEY="${SESSION_KEY}"
      - AWS_BUCKET_REGION="${AWS_BUCKET_REGION}"
      - AWS_BUCKET="${AWS_BUCKET}"
      - AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}"
      - AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}"
You should not include the quotes (") around the values of your environment variables in your docker-compose file.
This should work:
version: '3.8'
services:
  app:
    container_name: kibblepaw-graphql
    restart: always
    build: .
    ports:
      - '4000:4000'
    environment:
      - PRODUCTION=${PRODUCTION}
      - DB_SSL=${DB_SSL}
      - DB_CERT=${DB_CERT}
      - DB_URL=${DB_URL}
      - REDIS_URL=${REDIS_URL}
      - SESSION_KEY=${SESSION_KEY}
      - AWS_BUCKET_REGION=${AWS_BUCKET_REGION}
      - AWS_BUCKET=${AWS_BUCKET}
      - AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
      - AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
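The quotes are passed through literally as part of each value, which is why the error above shows the connection string wrapped in %22 (URL-encoded double quotes). A quick way to check what the container actually receives (app is the service name from the compose file above):

docker-compose run --rm app env | grep REDIS_URL
# quoted form prints: REDIS_URL="rediss://..." (quotes included, so the client cannot parse it)
# unquoted form prints the bare URL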
I am trying to run my Python application, which consists of test cases, by bundling them into a Docker image and then running them on a Selenium grid with Chrome and Firefox nodes.
I am not able to run my application successfully by building a Docker image for my Python app.
1 - I've tried building my image, pushing it to my Docker Hub and pulling it through a docker-compose file with all the services together (Selenium grid, nodes, app).
2 - I've tried building my image separately from the docker-compose file. After composing up the Selenium grid and nodes, I manually build my Docker image for the Python app and use docker run [image-name] to run the application.
Neither of the two methods worked for me.
Dockerfile for the Python app:
FROM python:latest
WORKDIR /app
COPY ./myapp /app
RUN pip install -r /app/requirements.txt
# -n2 runs the tests in two parallel workers (requires pytest-xdist)
CMD ["pytest","-n2","-s"]
Docker-compose file:
Do I need a network? Can I achieve what I want without the network I've created within this docker-compose file?
version: '3.7'
services:
  hub:
    image: selenium/hub:3.141.59
    networks:
      q2cnw: {}
    environment:
      - GRID_MAX_SESSION=50
      - GRID_BROWSER_TIMEOUT=60000
      - GRID_TIMEOUT=60000
      - GRID_NEW_SESSION_WAIT_TIMEOUT=60000
      - GRID_MAX_INSTANCES=3
    ports:
      - 4444:4444
  chrome:
    image: selenium/node-chrome-debug
    depends_on:
      - hub
    networks:
      q2cnw: {}
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    ports:
      - "9001:5900"
    links:
      - hub
  firefox:
    image: selenium/node-firefox-debug
    depends_on:
      - hub
    networks:
      q2cnw: {}
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    ports:
      - "9002:5900"
    links:
      - hub
  app:
    container_name: demo_pytest_app
    image: {docker-image-from-repository}
    networks:
      q2cnw: {}
    ports:
      - "8080"

networks:
  q2cnw:
    driver: bridge
My conftest.py file - are there any changes needed to the URL, given the docker-compose above?
driver = webdriver.Remote(
    command_executor="http://127.0.0.1:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.FIREFOX
)
Expected result (this shows it ran my browser and the application, which threw an expected error; I'm fine with that):
try:
    driver.find_element_by_xpath("//div[@class='alert alert-danger alert-dismissible']")
>   raise Exception("ERROR MESSAGE BANNER PROBLEM")
E   Exception: ERROR MESSAGE BANNER PROBLEM
Actual result (I used this command to run the pytest image: docker run -it pytest-image):
The error below indicates that the connection is being refused because of the host URL. Does anyone know about this error, which occurs when we try to connect the app to the Selenium grid + nodes through a Docker image?
E   urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=4444): Max retries exceeded with url: /wd/hub/session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f29f02f2a10>: Failed to establish a new connection: [Errno 111] Connection refused'))
Can you try this image: https://hub.docker.com/repository/docker/benjose22/selenium_pytest_remote
How to run:
Step 1: Spin up the grid with Docker Compose per https://github.com/SeleniumHQ/docker-selenium#via-docker-compose
Step 2: docker run --network="host" -it -v ~/host_scripts:/src benjose22/selenium_pytest_remote:v1.0 test_sample.py
conftest.py can be something like:
caps = {'browserName': os.getenv('BROWSER', 'firefox')}
self.browser = webdriver.Remote(command_executor='http://localhost:4444/wd/hub', desired_capabilities=caps)
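If the test container is instead attached to the same compose network as the grid (the q2cnw network from the question), a hedged variant is to address the hub by its service name rather than 127.0.0.1/localhost; the HUB_URL environment variable here is an invented convenience, not something from the original post:

import os
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# "hub" is the service name from the docker-compose file above
hub_url = os.getenv("HUB_URL", "http://hub:4444/wd/hub")

driver = webdriver.Remote(
    command_executor=hub_url,
    desired_capabilities=DesiredCapabilities.CHROME,
)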
I am currently trying out this tutorial for Node Express with MongoDB:
https://medium.com/@sunnykay/docker-development-workflow-node-express-mongo-4bb3b1f7eb1e
The first part works fine, where I build the docker-compose.yml.
It works totally fine building it locally, so I tagged the image and pushed it to my Docker Hub to learn and try more.
This is originally what's in the yml file, following the tutorial:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
This works like a charm when I use docker-compose build and docker-compose up,
so I pushed it to my Docker Hub and tagged it as node-test.
I then changed the yml file to:
version: "2"
services:
web:
image: "et4891/node-test"
volumes:
- ./:/app
ports:
- "3000:3000"
Then I removed all the images I had previously, to make sure this also works... but when I run docker-compose build I see the message web uses an image, skipping and nothing happens.
I tried googling the message but couldn't find much.
Can someone please give me a hand?
I found out I was being stupid.
I didn't need to run docker-compose build; I can just run docker-compose up directly, since it will pull the image down. The build command is only for building locally.
In my case the command below worked:
docker-compose up --force-recreate
I hope this helps!
Clarification: This message (<service> uses an image, skipping)
is NOT an error. It's informing the user that the service uses an image and is therefore pre-built, so it's skipped by the build command.
In other words, you don't need build; you need to up the service.
Solution:
Run sudo docker-compose up <your-service>
PS: In case you changed some configuration in your docker-compose file, use the --force-recreate flag to apply the changes and recreate the container:
sudo docker-compose up --force-recreate <your-service>
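To make the distinction concrete, a minimal hypothetical compose file (the second service is invented for illustration): docker-compose build only builds services that declare a build: context, while services defined only by image: produce the "uses an image, skipping" message and are fetched by docker-compose pull or docker-compose up instead.

version: "2"
services:
  web:
    image: "et4891/node-test"   # image-only -> "uses an image, skipping" during build; pulled on up
  api:
    build: .                    # has a build context -> rebuilt by docker-compose build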
My problem was that I wanted to upgrade the image so I tried to use:
docker build --no-cache
docker-compose up --force-recreate
docker-compose up --build
None of which rebuilt the image.
What was missing (from this post) is:
docker-compose stop
docker-compose rm -f # remove old images
docker-compose pull # download new images
docker-compose up -d
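Roughly the same effect as a hedged one-liner (it skips removing the old containers first and relies on --force-recreate to replace them):

docker-compose pull && docker-compose up -d --force-recreate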