Docker log formatting - python-3.x

I have a Docker container for my application with REST API.
I want to check what happened via docker logs, but it prints in a strange format:
docker logs -f 2b46ac8629f5
* Running on http://0.0.0.0:9002/ (Press CTRL+C to quit)
172.18.0.5 - - [03/Mar/2019 13:53:38] code 400, message Bad HTTP/0.9 request type
("\x16\x03\x01\x00«\x01\x00\x00§\x03\x03\x0euçÍ'ïá\x98\x12\\W5¥Ä\x01\x08")
172.18.0.5 - - [03/Mar/2019 13:53:38] "«§uçÍ'ïá\W5¥µuìz«Ôw48À,À0̨̩̪À+À/À$À(kÀ#À'gÀ" HTTPStatus.BAD_REQUEST -
Is it possible to fix these strings somehow?
UPDATE:
Part of my docker-compose file looks like:
api:
  container_name: api
  restart: always
  build: ./web
  ports:
    - "9002:9002"
  volumes:
    - ./myapp.co.cert:/usr/src/app/myapp.co.cert
    - /usr/src/app/myapp/static
    - ./myapp.co.key:/usr/src/appmyapp.co.key
  depends_on:
    - postgres
The Dockerfile of the web container looks like this:
cat web/Dockerfile
# For better understanding of what is going on please follow this link:
# https://github.com/docker-library/python/blob/f12c2df135aef8c3f645d90aae582b2c65dbc3b5/3.6/jessie/onbuild/Dockerfile
FROM python:3.6.4-onbuild
# Start myapp API.
CMD ["python", "api.py"]
api.py starts a Flask app using Python 3.6.
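Those bytes are almost certainly not a logging bug: \x16\x03\x01 is the start of a TLS ClientHello, which suggests a client is speaking HTTPS to a listener that only serves plain HTTP, so Werkzeug logs the raw handshake bytes as a bad request. Given that the compose file mounts a certificate and key into the container, one possible fix is to have Flask terminate TLS itself. A minimal sketch, assuming api.py runs the development server directly (the app name and file paths are illustrative):

# api.py - sketch: serve HTTPS so TLS clients no longer produce garbled logs
from flask import Flask

app = Flask(__name__)

if __name__ == "__main__":
    # ssl_context takes a (certfile, keyfile) pair; paths mirror the compose mounts
    app.run(host="0.0.0.0", port=9002,
            ssl_context=("/usr/src/app/myapp.co.cert",
                         "/usr/src/app/myapp.co.key"))

Alternatively, terminate TLS in a reverse proxy in front of the container and keep Flask on plain HTTP.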

Related

GET request not responding in docker container

I have a FastAPI app running in a Docker container that is connected to a PostgreSQL DB, which runs as a container too. Both are defined in the docker-compose.yml file.
In the app, I have a POST endpoint that requests data from an external API (https://restcountries.com/v2/all) using the requests library. Once the data is extracted, I try to save it in a table in the DB. When I trigger the endpoint from the Docker container, it takes forever and the data is never extracted from the API. Yet when I run the same code outside the Docker container, it executes instantly and the data is received.
The docker-compose.yml file:
version: "3.6"
services:
backend-api:
build:
context: .
dockerfile: docker/Dockerfile.api
volumes:
- ./:/srv/recruiting/
command: uvicorn --reload --reload-dir "/srv/recruiting/backend" --host 0.0.0.0 --port 8080 --log-level "debug" "backend.main:app"
ports:
- "8080:8080"
networks:
- backend_network
depends_on:
- postgres
postgres:
image: "postgres:13"
volumes:
- postgres_data:/var/lib/postgresql/data/pgdata:delegated
environment:
- PGDATA=/var/lib/postgresql/data/pgdata
- POSTGRES_PASSWORD=pgpw12
expose:
- "5432"
ports:
- "5432:5432"
networks:
- backend_network
volumes:
postgres_data: {}
networks:
backend_network: {}
The code that is making the request:
req = requests.get('https://restcountries.com/v2/all')
json_data = req.json()
Am I missing something or doing something wrong?
I often use python requests inside my Python containers and have come across this problem a couple of times. I haven't found a proper solution, but restarting Docker Desktop (restarting it entirely, not just restarting your container) seems to work every time. What I have found helpful in the past is to specify a timeout in the invocation of the HTTP call:
requests.request(method="POST", url=url, json=data, headers=headers, timeout=2)
When the server does not send data within the timeout period, an exception is raised. At least you'll be aware of the issue and will waste less time identifying it when it occurs again.
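Building on that, a minimal sketch of wrapping the call from the question so a hang becomes an explicit error (the function name and timeout value are illustrative):

import requests

def fetch_countries(url="https://restcountries.com/v2/all", timeout=5):
    try:
        # timeout bounds both connecting and waiting for a response
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
        return resp.json()
    except requests.exceptions.Timeout as exc:
        # In a wedged Docker network this raises instead of hanging forever
        raise RuntimeError("external API timed out; check container networking") from exc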

Running into issues while deploying multi container instances on Azure using Docker compose

docker-compose up locally is able to build and bring up the services, but when doing the same on Azure Container Instances I get the error below:
containerinstance.ContainerGroupsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="InaccessibleImage" Message="The image 'docker/aci-hostnames-sidecar:1.0' in container group 'djangodeploy' is not accessible. Please check the image and registry credential."
Also, what is the purpose of the image docker/aci-hostnames-sidecar?
The ACI deployment was working fine, and now it suddenly doesn't work anymore.
The docker-compose.yml contents are provided below
version: '3.7'
services:
  django-gunicorn:
    image: oasisdatacr.azurecr.io/oasis_django:latest
    env_file:
      - .env
    build:
      context: .
    command: >
      sh -c "
      python3 manage.py makemigrations &&
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      python3 manage.py runserver 0:8000"
    ports:
      - "8000:8000"
  celery-worker:
    image: oasisdatacr.azurecr.io/oasis_django_celery_worker:latest
    restart: always
    build:
      context: .
    command: celery -A oasis worker -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
  celery-beat:
    image: oasisdatacr.azurecr.io/oasis_django_celery_beat:latest
    build:
      context: .
    command: celery -A oasis beat -l INFO
    env_file:
      - .env
    depends_on:
      - django-gunicorn
      - celery-worker
UPDATE: There might have been some issue on Azure's end, as I was later able to deploy the containers as I usually do, without any changes whatsoever.
When you use docker-compose to deploy multiple containers to ACI, you first need to build the images locally, and then push them to the ACR with the command docker-compose push; of course, you need to log in to your ACR first. See the example here.
And if you have already pushed the images to your ACR, make sure you logged in to your ACR with the right credentials and that the image name and tag are exactly right.
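For reference, a minimal sketch of that workflow, using the registry name from the compose file above (these are standard Docker CLI commands; nothing here is specific to this project):

docker login oasisdatacr.azurecr.io   # authenticate against the ACR first
docker-compose build                  # build all service images locally
docker-compose push                   # push the tagged images to the ACR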

Is there a way to run my python test on selenium grid through my docker image?

I am trying to run my Python application, which consists of test cases, by bundling it into a Docker image and then running the tests on a Selenium grid with Chrome and Firefox nodes.
So far I have not been able to run the application successfully from the Docker image. I've tried two approaches:
1. Building my image, pushing it to my Docker Hub, and retrieving it through a docker-compose file with all the services together (Selenium grid, nodes, app).
2. Building my image separately from the docker-compose file: after bringing the Selenium grid and nodes up with compose, I manually build the Docker image for the Python app and use docker run [image-name] to run the application.
Neither method worked for me.
Dockerfile for python app
FROM python:latest
WORKDIR /app
COPY ./myapp /app
RUN pip install -r /app/requirements.txt
# -n2 requires pytest-xdist; make sure it is listed in requirements.txt
CMD ["pytest", "-n2", "-s"]
Docker-compose file
Do I need a network? Can I achieve what I want without the network I've created in this docker-compose file?
version: '3.7'
services:
  hub:
    image: selenium/hub:3.141.59
    networks:
      q2cnw: {}
    environment:
      - GRID_MAX_SESSION=50
      - GRID_BROWSER_TIMEOUT=60000
      - GRID_TIMEOUT=60000
      - GRID_NEW_SESSION_WAIT_TIMEOUT=60000
      - GRID_MAX_INSTANCES=3
    ports:
      - 4444:4444
  chrome:
    image: selenium/node-chrome-debug
    depends_on:
      - hub
    networks:
      q2cnw: {}
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    ports:
      - "9001:5900"
    links:
      - hub
  firefox:
    image: selenium/node-firefox-debug
    depends_on:
      - hub
    networks:
      q2cnw: {}
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    ports:
      - "9002:5900"
    links:
      - hub
  app:
    container_name: demo_pytest_app
    image: {docker-image-from-repository}
    networks:
      q2cnw: {}
    ports:
      - "8080"
networks:
  q2cnw:
    driver: bridge
My conftest.py file. Are any changes needed to the URL, given the docker-compose file above?
driver = webdriver.Remote(
    command_executor="http://127.0.0.1:4444/wd/hub",
    desired_capabilities=DesiredCapabilities.FIREFOX
)
Expected result (this shows it ran my browser and the application, which threw an expected error; I'm fine with that):
try:
    driver.find_element_by_xpath("//div[#class='alert alert-danger alert-dismissible']")
>   raise Exception("ERROR MESSAGE BANNER PROBLEM")
E   Exception: ERROR MESSAGE BANNER PROBLEM
Actual result (I used this command to run the pytest image: docker run -it pytest-image):
The error below indicates that the connection is refused because of the host URL. Does anyone know about this error, which occurs when trying to connect the app to the Selenium grid + nodes through a Docker image?
E urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=4444): Max retries exceeded with url: /wd/hub/session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f29f02f2a10>: Failed to establish a new connection: [Errno 111] Connection refused'))
Can you try this image - https://hub.docker.com/repository/docker/benjose22/selenium_pytest_remote
How to run -
 Step 1: Spin Docker-compose per https://github.com/SeleniumHQ/docker-selenium#via-docker-compose
 Step 2: docker run --network="host" -it -v ~/host_scripts:/src benjose22/selenium_pytest_remote:v1.0 test_sample.py
conftest.py can be like -
caps = {'browserName': os.getenv('BROWSER', 'firefox')}
self.browser = webdriver.Remote(command_executor='http://localhost:4444/wd/hub', desired_capabilities=caps)
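Note that --network="host" in step 2 makes localhost work only where host networking is supported. If the app container instead joins the same compose network as the grid (like the q2cnw network above), the hub is reachable by its service name; a minimal sketch of the same connection under that assumption (the GRID_URL variable is illustrative):

import os
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# "hub" is the compose service name, resolvable from any container on q2cnw
grid_url = os.getenv("GRID_URL", "http://hub:4444/wd/hub")
driver = webdriver.Remote(
    command_executor=grid_url,
    desired_capabilities=DesiredCapabilities.FIREFOX,
)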

Docker - How to wait for container running

I want to launch three containers for my web application: a frontend, a backend, and a Mongo database. To do this I wrote the following docker-compose.yml:
version: '3.7'
services:
  web:
    image: node
    container_name: web
    ports:
      - "3000:3000"
    working_dir: /node/client
    volumes:
      - ./client:/node/client
    links:
      - api
    depends_on:
      - api
    command: npm start
  api:
    image: node
    container_name: api
    ports:
      - "3001:3001"
    working_dir: /node/api
    volumes:
      - ./server:/node/api
    links:
      - mongodb
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    restart: always
    image: mongo
    container_name: mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./database/data:/data/db
      - ./database/config:/data/configdb
and updated the connection string in my .env file:
MONGO_URI = 'mongodb://mongodb:27017/test'
I run it with docker-compose up -d and everything comes up. The problem appears when I run docker logs api -f to monitor the backend status: I get MongoNetworkError: failed to connect to server [mongodb:27017] on first connect, because my mongodb container is up but not yet waiting for connections (it becomes ready only after the backend has already tried to connect).
How can I check that mongodb is in the "waiting for connections" state before running the api container?
Thanks in advance
Several possible solutions, in order of preference:
1. Configure your application to retry after a short delay and eventually time out after too many connection failures. This is the ideal solution for portability, and it also handles the database restarting after your application is already running and connected.
2. Use an entrypoint that waits for mongo to become available. You can attempt a full mongo client connect + login, or a simple TCP port check with a script like wait-for-it; a sketch of such a check follows this list. Once the check finishes (or times out and fails), continue the entrypoint to launch your application.
3. Configure docker to restart your application with a restart policy, or deploy it with orchestration that automatically recovers when the application crashes. This is a less-than-ideal solution, but extremely easy to implement.
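For option 2, the check can be as small as a TCP probe. A sketch in Python (host, port, and timeout are illustrative, matching the mongodb service above):

# wait_for_mongo.py - exit 0 once mongodb accepts TCP connections
import socket
import sys
import time

host, port = "mongodb", 27017
deadline = time.time() + 60              # give up after 60 seconds
while time.time() < deadline:
    try:
        with socket.create_connection((host, port), timeout=2):
            sys.exit(0)                  # port is open; entrypoint may continue
    except OSError:
        time.sleep(1)                    # not ready yet; retry shortly
sys.exit(1)                              # timed out; fail the container start

An entrypoint script would run this first and launch npm start only if it exits successfully.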
Here's an example of option 3:
api:
  image: node
  deploy:
    restart_policy:
      condition: on-failure
Note: your compose file mixes v2 and v3 syntax, and many options, like depends_on, links, and container_name, are not valid in swarm mode. You are also defining settings like working_dir that should really be set in your Dockerfile instead.

Docker-compose access container with service name in python file?

I have two containers. How can I access one container from another in a Python file, with a Django web server?
Docker-compose.yml file
version: '2'
services:
  web:
    build: ./web/
    command: python3 manage.py runserver 0.0.0.0:8001
    volumes:
      - ./web:/code
    ports:
      - "8001:80"
    networks:
      - dock_net
    container_name: con_web
    depends_on:
      - "api"
    links:
      - api
  api:
    build: ./api/
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - ./api:/code
    ports:
      - "8000:80"
    networks:
      - dock_net
    container_name: con_api
networks:
  dock_net:
    driver: bridge
Python file (I get mail_string from a form):
mail = request.POST.get('mail_string', '')
url = 'http://con_api:8000/api/'+mail+'/?format=json'
resp = requests.get(url=url)
return HttpResponse(resp.text)
I want to send a request to the api container and get the value back, but I don't know its IP address.
Updated Answer
In your Python file, you can use url = 'http://api/'+mail+'/?format=json'. This will let you access the URL you are trying to send the GET request to.
Original Answer
If the two containers are independent, you can create a network, and once both containers are part of the same network you can access each one by its hostname, which you can set with --hostname=HOSTNAME.
Another easy way is to use a docker-compose file, which creates a network by default; all the services declared in the file are part of that network, so you can access other services by their service name, e.g. http://container1/, when your docker-compose file looks like this:
version: '3'
services:
  container1:
    image: some_image
  container2:
    image: another_or_same_image
Now enter container2 with:
docker exec -it container2 /bin/bash
and run ping container1.
You will receive replies, which confirms that one container can reach the other.
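Applying the updated answer to the Django view from the question, a minimal sketch (the view name is illustrative, and the port assumes runserver listens on 8000 inside the container, as in the compose command above):

import requests
from django.http import HttpResponse

def forward_mail(request):
    # "api" is the compose service name, resolvable on dock_net
    mail = request.POST.get('mail_string', '')
    url = 'http://api:8000/api/' + mail + '/?format=json'
    resp = requests.get(url=url, timeout=5)  # fail fast if api is unreachable
    return HttpResponse(resp.text)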
