Getting error on running docker-compose up - python-3.x

I am trying to run my Python code through docker-compose. It is not a Flask app, so I didn't provide a port number in my yml file. Here is my docker-compose.yml file:
version: '3'
services:
  main:
    build: .
    image: ddn4
    environment:
      - neo4j_uri=bolt://54.209.5.141:7687
      - neo4j_username=neo4j
      - password=provis234
      - blob_conn_string=httpsxxxx
main.py is my Python code. After running docker-compose build, I get that an image was successfully built. Also, upon checking with docker images, I see that the image ddn4 was built successfully. But upon running docker-compose up, I am getting the following error:
main_1 | Error !!!! File Exception:
main_1 | 'function' object is not subscriptable
main_1 | Error !!!!:
main_1 | 'NoneType' object has no attribute 'columns'
dd-n4_main_1 exited with code 0
dd-n4 is the directory containing my Dockerfile, requirements.txt, Python code, and docker-compose.yml file.
Here is the Python code for the variables that seem to be causing the error:
def neo4jconn():
    """
    This code is to create a connection string for connecting to Neo4j
    """
    try:
        neo_conn = Graph(os.getenv['neo4j_uri'], user=os.getenv['neo4j_username'], password=os.getenv['password'])
        return neo_conn
    except Exception as ex:
        print('Error !!!!:')
        print(ex)

You should not use both a build and an image statement for the same service.
If you use build, the built image will automatically be used!
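For example, a minimal sketch of the service from the question with the image line removed, so Compose builds the image itself and runs the result:

version: '3'
services:
  main:
    build: .
    environment:
      - neo4j_uri=bolt://54.209.5.141:7687
      - neo4j_username=neo4j
      - password=provis234
      - blob_conn_string=httpsxxxx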

Try docker-compose up to run the docker-compose.yml file. If you would like to rebuild, add the --build flag; for detached mode, add -d. It looks like:
docker-compose up -d --build
You can move all of the environment variables to a .env file and additionally install python-dotenv:
pip install python-dotenv
and pass it to docker-compose:
your_service:
  image: image:1.7
  container_name: container
  env_file:
    - .env
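For reference, a minimal sketch of reading those variables with python-dotenv in the application code (the variable names come from the question; the .env file is assumed to sit in the working directory):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # loads key=value pairs from .env into the environment

neo4j_uri = os.getenv('neo4j_uri')
neo4j_username = os.getenv('neo4j_username')
password = os.getenv('password')
blob_conn_string = os.getenv('blob_conn_string')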

I was able to resolve it by updating my Python code to use os.getenv() instead of os.getenv[]. Thanks @David Maze.
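For reference, a sketch of the corrected function (same logic as in the question, with the subscripts replaced by calls; assuming Graph comes from py2neo, as the call signature suggests):

import os
from py2neo import Graph  # assumption: the Graph class used in the question

def neo4jconn():
    """
    Create a connection for connecting to Neo4j.
    """
    try:
        # os.getenv is a function, so it must be called, not subscripted
        neo_conn = Graph(os.getenv('neo4j_uri'),
                         user=os.getenv('neo4j_username'),
                         password=os.getenv('password'))
        return neo_conn
    except Exception as ex:
        print('Error !!!!:')
        print(ex)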

Related

Docker compose unable to find file in ADD path

I am using the following docker compose:
dns_to_redis:
  build:
    context: ./DNS_to_redis/
  image: dns_to_redis
  depends_on:
    - redis
  environment:
    - REDIS_HOST=redis
    - REDIS_PORT=6379
  networks:
    sensor:
      ipv4_address: 172.24.1.4
to build and run an image. Inside the Dockerfile I use the following ADD:
ADD home/new_prototypes/dns_to_redis/dns_redis.R /home/
However, when I run sudo docker-compose up, I get the following error:
ERROR: Service 'dns_to_redis' failed to build: ADD failed: file not found in build context or excluded by .dockerignore: stat home/new_prototypes/dns_to_redis/dns_redis.R: file does not exist
The file is located in /home/new_prototypes/dns_to_redis. I am thinking that this is somehow the problem, but I haven't been able to modify it in any way that makes it work.
How can I run this from docker compose?
Thank you.
As stated in the error message:
file not found in build context
The build context is a copy of the path you set for dns_to_redis.build.context.
Your file needs to be in the ./DNS_to_redis/ directory.
Note that it is generally preferred to use COPY instead of ADD.
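For example, one way to fix it (assuming you can copy dns_redis.R into the build context) is:

cp /home/new_prototypes/dns_to_redis/dns_redis.R ./DNS_to_redis/

and then, in the Dockerfile, reference the file relative to the context:

COPY dns_redis.R /home/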

Debugging an Azure Function running in a Docker container

I'd like to attach to the container and step through the code.
Can I do this with a 'Compose Up' from 'docker-compose.debug'?
Does this start 'func host start', which is required for the Functions runtime?
Please review my docker-compose.debug below.
Thank you.
docker-compose.debug.yaml as follows:
version: '3.4'
services:
  nfunc:
    image: nfunc
    build:
      context: .
      dockerfile: ./Dockerfile
    command: ["sh", "-c", "pip install debugpy -t /tmp && python /tmp/debugpy --wait-for-client --listen 0.0.0.0:5678 mytrigger/__init__.py"]
    ports:
      - 5678:5678
Debugging works but not with the container, which should output a simple log message every minute.
In this case, I believe you'll have to do the profiling yourself. You will have to follow these steps:
If it's not a blessed image, you would first have to install SSH if you want to get into the container.
Then you will have to make use of tools such as cProfile or other related Python modules to profile the code, as sketched below.
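A minimal sketch of the cProfile approach mentioned above (the profiled function is a placeholder standing in for the work your function trigger does):

import cProfile
import pstats

def handler():
    # placeholder for the work done by the Azure Function trigger
    return sum(i * i for i in range(100000))

cProfile.run('handler()', 'profile.out')          # write raw profiling stats to a file
stats = pstats.Stats('profile.out')
stats.sort_stats('cumulative').print_stats(10)    # print the 10 most expensive calls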
Here is documentation for a Windows application that you might want to take a look at: https://azureossd.github.io/2017/09/01/profile-python-applications-in-azure-app-services/index.html
This issue is being tracked here: https://github.com/Azure/azure-functions-docker/issues/17

Run Azure pipeline with Selenium tests in Docker

I need to create a pipeline in Azure for my autotests using a Docker container. I made it work successfully on my local machine using the following algorithm:
Create a Selenium node with the following command:
docker run -d -p 4444:4444 -v /dev/shm:/dev/shm selenium/standalone-chrome:4.0.0-beta-1-20210215
Build the image using the command: docker build -t my_tests .
Here is my Dockerfile:
FROM maven:onbuild
COPY src /home/bns_bdd_automation/src
COPY pom.xml /home/bns_bdd_automation
COPY .gitignore /home/bns_bdd_automation
CMD mvn -f /home/bns_bdd_automation/pom.xml clean test
Everything works fine, but only locally.
In the cloud I faced an issue: I need to RUN the Selenium node first, and after that build my image.
As I understood from some articles, I need to use docker-compose (to run the first image), but I don't know how. Can you help me with that?
Well, here is my docker-compose.yml file:
version: "3"
services:
selenium-hub:
image: selenium/hub
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
bns_bdd_automation:
depends_on:
- selenium-hub
- chrome
build: .
But it does not work as I expected. It builds and RUNs the tests BEFORE the hub and chrome have started. And after that it shows me in the terminal:
WARNING: Image for service bns_bdd_automation was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Starting selenium-hub ... done
Starting bns_bdd_automation_chrome_1 ... done
Recreating bns_bdd_automation_bns_bdd_automation_1 ... error
ERROR: for bns_bdd_automation_bns_bdd_automation_1 no such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889: No such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889
ERROR: for bns_bdd_automation no such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889: No such image: sha256:e5cd6f2618fd9ee29d5ebfe610acd48aff7582e91211bf61689f5161fbb5f889
ERROR: The image for the service you're trying to recreate has been removed. If you continue, volume data could be lost. Consider backing up your data before continuing.
Continue with the new image? [yN]

Docker: Uses an image, skipping (docker-compose)

I am currently trying out this tutorial for Node Express with MongoDB:
https://medium.com/@sunnykay/docker-development-workflow-node-express-mongo-4bb3b1f7eb1e
The first part, building the docker-compose.yml, works fine.
It works totally fine building it locally, so I tried to tag it and push it to my Docker Hub to learn and try more.
This is originally what's in the yml file, following the tutorial:
version: "2"
services:
web:
build: .
volumes:
- ./:/app
ports:
- "3000:3000"
This works like a charm when I use docker-compose build and docker-compose up.
So I pushed it to my Docker Hub and tagged it as node-test.
I then changed the yml file to:
version: "2"
services:
web:
image: "et4891/node-test"
volumes:
- ./:/app
ports:
- "3000:3000"
Then I removed all the images I had previously, to make sure this also works... but when I run docker-compose build I see the message web uses an image, skipping and nothing happens.
I tried googling the message but couldn't find much.
Can someone please give me a hand?
I found out I was being stupid.
I didn't need to run docker-compose build; I can just run docker-compose up directly, since it will pull the images down. The build is just for building locally.
In my case, the command below worked:
docker-compose up --force-recreate
I hope this helps!
Clarification: This message (<service> uses an image, skipping)
is NOT an error. It informs the user that the service uses an image and is therefore pre-built, so it is skipped by the build command.
In other words, you don't need build; you need to up the service.
Solution:
Run sudo docker-compose up <your-service>
PS: In case you changed some configuration in your docker-compose file, use the --force-recreate flag to apply the changes and recreate the container:
sudo docker-compose up --force-recreate <your-service>
My problem was that I wanted to upgrade the image so I tried to use:
docker build --no-cache
docker-compose up --force-recreate
docker-compose up --build
None of which rebuilt the image.
What is missing (from this post) is:
docker-compose stop
docker-compose rm -f # remove old images
docker-compose pull # download new images
docker-compose up -d

Building sphinx documents inside Docker container

I have a Flask project that runs inside a Docker container. I have managed to build my application and run it successfully. However, I would like to also build the Sphinx documentation, so its static files can be served. The documentation is normally built by running make html in the docs/ directory. I've found a Docker image for Sphinx and have set up a docker-compose config that runs successfully; however, I am not able to pass the make html command to Sphinx, I believe because I am running the command a level up, since make html needs to be run from within docs/ and not from within the base directory.
I get the following error when I try to build the sphinx documentation:
docker-compose run --rm sphinx make html
Starting web_project
Pulling sphinx (nickjer/docker-sphinx:latest)...
latest: Pulling from nickjer/docker-sphinx
c62795f78da9: Pull complete
d4fceeeb758e: Pull complete
5c9125a401ae: Pull complete
0062f774e994: Pull complete
6b33fd031fac: Pull complete
aac5b231ab1e: Pull complete
97be0ae484bc: Pull complete
ec7c8cca5e46: Pull complete
82cc981959eb: Pull complete
151a33a826a1: Pull complete
Digest: sha256:8125ca919069235278a5da631c002926cc57d741fa041b59c758183ebd48121f
Status: Downloaded newer image for nickjer/docker-sphinx:latest
make: *** No rule to make target 'html'. Stop.
My project has the following directory structure
docs/
web/
    Dockerfile
    run.py
    requirements.txt
    ....
docker-compose.yml
README.md
And the following docker-compose configuration
version: '2'
services:
  web:
    restart: always
    build: ./web
    ports:
      - "7000:7000"
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn -w 2 -b :7000 run:app
  sphinx:
    image: "nickjer/docker-sphinx"
    volumes:
      - "${PWD}:/docs"
    user: "1000:1000"
    depends_on:
      - web
How do I build my Sphinx documentation within the Docker container? Do I need to add another Docker config file to my docs module?
I believe because I am running the command a level up, since make html needs to be run from within docs/ and not from within the base directory.
To test this theory, could you try something like this command?
docker-compose run --rm sphinx bash -c "cd docs; make html"
or possibly
docker-compose exec sphinx bash -c "cd docs; make html"
I had success with the following to build and deploy my Sphinx docs for static reading by the Flask app.
WORKDIR /pathapp/app
ENV PYTHON /pathapp/app
RUN python /pathapp/app/setup.py build_sphinx -b html
RUN python /pathapp/app/scripts/script_to_copy_build_sphinx_html_to_docs.py
The move script is just a simple directory copy, as sketched below.
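A minimal sketch of such a copy script (the source and destination paths are placeholders; adjust them to wherever build_sphinx writes its HTML output and wherever the Flask app serves static docs from):

import shutil
from pathlib import Path

# Placeholder paths: where build_sphinx put the HTML, and where the app serves it.
src = Path('/pathapp/app/build/sphinx/html')
dst = Path('/pathapp/app/docs/html')

if dst.exists():
    shutil.rmtree(dst)       # drop any stale copy
shutil.copytree(src, dst)    # copy the freshly built HTML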
