AWS Mythical Mysfits - No module named flask - ImportError - python-3.x

I am very new to AWS, and especially to Docker,
so I have been following this guide to learn about developing a web application -
https://aws.amazon.com/getting-started/hands-on/build-modern-app-fargate-lambda-dynamodb-python/module-two/
I am stuck at the point in Module 2 where it asks us to test the service locally using a 'docker run' command;
when I run the command I get an 'ImportError'.
Here are my code snippets. Please help me out!
mythicalMysfitsService.py -
from flask import Flask, jsonify, json, Response, request
from flask_cors import CORS

# A very basic API created using Flask that has two possible routes for requests.
app = application = Flask(__name__)
CORS(application)

# The service basepath has a short response just to ensure that healthchecks
# sent to the service root will receive a healthy response.
@app.route("/")
def healthCheckResponse():
    return jsonify({"message" : "Nothing here, used for health check. Try /mysfits instead."})

# The main API resource that the next version of the Mythical Mysfits website
# will utilize. It returns the data for all of the Mysfits to be displayed on
# the website. Because we do not yet have any persistent storage available for
# our application, the mysfits are simply stored in a static JSON file. Which is
# read from the filesystem, and directly used as the service response.
@app.route("/mysfits")
def getMysfits():
    # read the mysfits JSON from the listed file.
    response = Response(open("mysfits-response.json").read())
    # set the Content-Type header so that the browser is aware that the response
    # is formatted as JSON and our frontend JavaScript code is able to
    # appropriately parse the response.
    response.headers["Content-Type"] = "application/json"
    return response

# Run the service on the local server it has been deployed to,
# listening on port 8080.
if __name__ == "__main__":
    application.run(host="0.0.0.0", port=8080)
Dockerfile -
FROM ubuntu:latest
RUN echo Updating existing packages, installing and upgrading python and pip.
RUN apt-get update -y
RUN apt-get install -y python3-pip python-dev build-essential
RUN pip3 install --upgrade pip
RUN echo Copying the Mythical Mysfits Flask service into a service directory.
COPY ./service /MythicalMysfitsService
WORKDIR /MythicalMysfitsService
RUN echo Installing Python packages listed in requirements.txt
RUN pip3 install -r ./requirements.txt
RUN echo Starting python and starting the Flask service...
ENTRYPOINT ["python"]
CMD ["mythicalMysfitsService.py"]
docker build -
ec2-user:~/environment/aws-modern-application-workshop/module-2/app (python) $ docker build . -t 054166015944.dkr.ecr.us-east-2.amazonaws.com/mythicalmysfits/service:latest
OUTPUT -
Sending build context to Docker daemon 14.85kB
Step 1/14 : FROM ubuntu:latest
---> 1d622ef86b13
Step 2/14 : RUN echo Updating existing packages, installing and upgrading python and pip.
---> Using cache
---> 7cdaa818c9c2
Step 3/14 : RUN apt-get update -y
---> Using cache
---> a1fd46891d1e
Step 4/14 : RUN apt-get install -y python3-pip python-dev build-essential
---> Using cache
---> f81dc75dfc3f
Step 5/14 : RUN pip3 install --upgrade pip
---> Using cache
---> 0cee88c88f43
Step 6/14 : RUN echo Copying the Mythical Mysfits Flask service into a service directory.
---> Using cache
---> 709b50aac794
Step 7/14 : COPY ./service /MythicalMysfitsService
---> Using cache
---> f70454638e83
Step 8/14 : WORKDIR /MythicalMysfitsService
---> Using cache
---> 418a08d8ac32
Step 9/14 : RUN echo Installing Python packages listed in requirements.txt
---> Using cache
---> 03659d0cb00e
Step 10/14 : RUN pip3 install -r ./requirements.txt
---> Using cache
---> cb65480b6104
Step 11/14 : RUN echo Starting python and starting the Flask service...
---> Using cache
---> 89738a63f437
Step 12/14 : ENTRYPOINT ["python"]
---> Using cache
---> 97b4f200a571
Step 13/14 : CMD ["mythicalMysfitsService.py"]
---> Using cache
---> a22fe5fb5bdb
Step 14/14 : RUN unset -v PYTHONPATH
---> Using cache
---> f43930d6410a
Successfully built f43930d6410a
Successfully tagged 054166015944.dkr.ecr.us-east-2.amazonaws.com/mythicalmysfits/service:latest
docker run to test service locally -
$ docker run -p 8080:8080 054166015944.dkr.ecr.us-east-2.amazonaws.com/mythicalmysfits/service:latest
error -
Traceback (most recent call last):
File "mythicalMysfitsService.py", line 1, in <module>
from flask import Flask, jsonify, json, Response, request
ImportError: No module named flask
I checked for Flask using flask --version.
OUTPUT -
Flask 1.0.4
Werkzeug 1.0.0
So I have Flask installed, but the container still says No module named flask.

Using ubuntu:latest as the base image is a bad idea. If you pulled a few months ago, you'd get Ubuntu 18.04; if you pulled today, you'd get Ubuntu 20.04. So it's possible to get inconsistent builds. It's better to use a specific release as the base image, e.g. ubuntu:20.04. Better yet, use python:3.8-slim-buster as the base image and you'll get the latest Python 3.8 pre-installed on Debian, and can easily switch to 3.9 when it's out.
My guess is that you installed the libraries for Python 3 (pip3 install) but you're running the code with Python 2. Try switching the ENTRYPOINT to python3 instead of python.
This article talks more about choosing a good tag for your base image: https://pythonspeed.com/articles/reproducible-docker-builds-python/
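Putting both suggestions together, here is a sketch of what the Dockerfile could look like with a pinned Python base image (paths and filenames are taken from the question; treat this as a starting point, not a drop-in replacement):

```dockerfile
FROM python:3.8-slim-buster
COPY ./service /MythicalMysfitsService
WORKDIR /MythicalMysfitsService
RUN pip install --no-cache-dir -r requirements.txt
# In this base image, `python` is already Python 3, so the original
# ENTRYPOINT works, and the flask package installed by pip above is
# visible to the interpreter that actually runs the service.
ENTRYPOINT ["python"]
CMD ["mythicalMysfitsService.py"]
```

This also drops the apt-get/pip bootstrap steps entirely, since the Python base image already ships pip.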

I just tried your instructions and was able to get the Flask app to run.
I would suggest that you delete the previously built image and build again. Delete the image using docker rmi 054166015944.dkr.ecr.us-east-2.amazonaws.com/mythicalmysfits/service:latest.
If you get an error stating that the image is being referenced by a container, delete the referencing containers using docker rm <container id>, or run docker rmi -f 054166015944.dkr.ecr.us-east-2.amazonaws.com/mythicalmysfits/service:latest to force the delete even while referenced.

You may want to try this (it solved the same problem for me):
Make sure Python 3 is used in all four Python-related places in the Dockerfile (python3-pip, the two pip3 calls, and the python3 ENTRYPOINT):
FROM ubuntu:latest
RUN echo Updating existing packages, installing and upgrading python and pip.
RUN apt-get update -y
RUN apt-get install -y python3-pip python-dev build-essential
RUN pip3 install --upgrade pip
RUN echo Copying the Mythical Mysfits Flask service into a service directory.
COPY ./service /MythicalMysfitsService
WORKDIR /MythicalMysfitsService
RUN echo Installing Python packages listed in requirements.txt
RUN pip3 install -r ./requirements.txt
RUN echo Starting python and starting the Flask service...
ENTRYPOINT ["python3"]
CMD ["mythicalMysfitsService.py"]

Related

Unable to containerize python-tkinter based project on alpine3.15

This is the link to my project HERE
I am trying to containerize it, and I have written a Dockerfile for it, shown below:
FROM python:3-alpine3.15
WORKDIR /app
COPY . /app
RUN apk update
RUN apk add python3
RUN apk add python3-tkinter
#RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 3000
CMD python3 ./sortingAlgs.py
It doesn't run: I get "TclError: no display name and no $DISPLAY environment variable" (the error in the Docker Desktop terminal).
After a lot of struggle I found that the "matplotlib" module can help, and I tried including
import matplotlib
matplotlib.use('Agg')
at the start of my source file ("sortingAlgs.py"),
but no luck.
I am unable to make use of "requirements.txt" since I am new to Docker; I get errors like "module not found" when it reaches "pip install -r requirements.txt", and it fails.
I also only have experience with Debian-based Linux (Ubuntu), so working with Alpine is a problem as well.
I want to host this application as a containerized application and push it to Docker Hub.
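For the requirements.txt part, here is a minimal sketch of how the Dockerfile could look (filenames are taken from the question; the apk package name is an assumption worth verifying against the Alpine package index):

```dockerfile
FROM python:3-alpine3.15
WORKDIR /app
# python3-tkinter pulls in the tcl/tk libraries tkinter needs at import time.
RUN apk add --no-cache python3-tkinter
# Copy and install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python3", "./sortingAlgs.py"]
```

Even with this, tkinter still needs an X display at runtime; on a Linux host, running with -e DISPLAY and a mounted X socket is one way to provide it. The Agg backend only helps for matplotlib figures rendered to files, not for tkinter windows.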

Docker run exits when running an amd64/python image

I am very new to Docker and know only a few commands for testing my tasks. I have built a Docker image for 64-bit Python because I had to install CatBoost in it. My Dockerfile looks like this:
FROM amd64/python:3.9-buster
RUN apt update
RUN apt install -y libgl1-mesa-glx apt-utils
RUN pip install -U pip
RUN pip install --upgrade setuptools
# We copy just the requirements.txt first to leverage Docker cache
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip --default-timeout=100 install -r requirements.txt
COPY . /app
EXPOSE 5000
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
I built the image with docker build -t myimage .; it builds for a long time, presumably successfully. When I run the image with docker run -p 5000:5000 myimage, it gives a warning: WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested. It downloads some things for a while and then exits. I'm not sure what's happening. How do I tackle this?
I managed to solve it by running the same thing on a linux machine.
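Before reaching for Docker flags, it can help to confirm what the host architecture actually is. A minimal sketch using only Python's standard library (not taken from the question):

```python
import platform

# On Apple Silicon this prints "arm64"; on a typical Intel/AMD host, "x86_64".
# An amd64-only image on an arm64 host runs under emulation, or not at all.
print(platform.machine())
```

If the host reports arm64, rebuilding from a multi-arch base image (e.g. python:3.9-buster rather than amd64/python:3.9-buster) is usually simpler than forcing --platform linux/amd64 at run time.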

Azure ACR Flask webapp deployment

I'm trying to deploy a Flask app on Azure using the container registry. I used Azure DevOps to pipeline building and pushing the image, then configured a web app to pull this image from ACR. But I'm running into one error after another: the container is timing out and crashing, and the application log shows 'No module named flask'.
This is my Dockerfile
FROM python:3.7.11
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
RUN apt-get update
RUN apt-get install -y wget && rm -rf /var/lib/apt/lists/*
WORKDIR /app
ADD . /app/
RUN wget \
https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN python -m pip install --upgrade pip
RUN pip3 install Flask==2.0.2
RUN pip3 install --no-cache-dir -r /app/requirements.txt
RUN conda install python=3.7
RUN conda install -c rdkit rdkit -y
EXPOSE 5000
ENV NAME cheminformatics
CMD python app.py
I had to install Miniconda because the rdkit package can only be installed from conda. I also added a PORT: 5000 key/value to the web app's configuration, but it hasn't loaded even once. I've been at this for 2 hours now. Previously I built images on my local machine, pushed them to Docker Hub and ACR, and was able to pull those images, but this is the first time I've used DevOps and I'm not sure what's going wrong here. Can someone please help?
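One likely culprit in this Dockerfile: pip3 install Flask runs before conda install python=3.7, which switches python to the Miniconda interpreter that never saw those pip packages, so app.py starts under an interpreter without Flask. A sketch of one way to keep everything in a single environment (assuming requirements.txt lists Flask; continuumio/miniconda3 is a ready-made Miniconda base image):

```dockerfile
FROM continuumio/miniconda3
WORKDIR /app
COPY . /app/
# Pin the interpreter first, then install everything into that same environment.
RUN conda install -y python=3.7 \
 && conda install -y -c rdkit rdkit \
 && pip install --no-cache-dir -r /app/requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
```

This avoids the manual wget/bash Miniconda install and guarantees pip and conda target the same Python.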

Multistage build deletes directories of previous stages? [duplicate]

I have the following Dockerfile with content:
FROM ubuntu:bionic AS os
RUN apt-get update
RUN apt-get install -y git
RUN git --version
FROM node:13.10.1-buster-slim
FROM python:3.7.7-slim-stretch as test
RUN pip install --user pipenv
RUN git --version
RUN git clone git@gitlab.com:silentdata/cdtc-identity-service.git
WORKDIR cdtc-identity-service
RUN pipenv install
CMD python service_test.py
Building the image, I've got the following output:
Sending build context to Docker daemon 43.59MB
Step 1/12 : FROM ubuntu:bionic AS os
---> 72300a873c2c
Step 2/12 : RUN apt-get update
---> Using cache
---> 42013f860b31
Step 3/12 : RUN apt-get install -y git
---> Using cache
---> 8f27d95fcb6e
Step 4/12 : RUN git --version
---> Using cache
---> ae49a9465233
Step 5/12 : FROM node:13.10.1-buster-slim
---> 500c5a190476
Step 6/12 : FROM python:3.7.7-slim-stretch as test
---> c9ec5ac0f580
Step 7/12 : RUN pip install --user pipenv
---> Using cache
---> 3a9358e72deb
Step 8/12 : RUN git --version
---> Running in 545659570a84
/bin/sh: 1: git: not found
The command '/bin/sh -c git --version' returned a non-zero code: 127
Why can the git command not be found the second time?
Multi-stage builds do not merge multiple images together. They allow you to build multiple Docker images and give you a useful syntax to copy artifacts between those images. Merging images would be a non-trivial task: some commands modify files rather than create them (e.g. the package management DB), so even two compatible images would cause issues for the end user.
For your use case, you probably want to pick the most appropriate base image and install your tools, code, and compiled app there. Once you've gotten that to work, you can add a new stage for a minimal release image.
For more on multi-stage builds, see: https://docs.docker.com/develop/develop-images/multistage-build/
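To make the failing step above concrete: only paths you explicitly COPY --from an earlier stage survive into a later one; installed packages like git do not. A sketch (image tags are illustrative, not from the question):

```dockerfile
FROM python:3.7-slim AS build
RUN pip install --user pipenv

FROM python:3.7-slim
# git must be installed in THIS stage; packages from earlier stages
# (or from other FROM lines) do not carry over automatically.
RUN apt-get update && apt-get install -y --no-install-recommends git
# Only explicitly copied paths cross the stage boundary:
COPY --from=build /root/.local /root/.local
ENV PATH=/root/.local/bin:$PATH
```

Here the final image gets git from its own apt-get step and pipenv only because /root/.local was copied across.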

Jenkins in a Docker Container - How do I install custom Python libraries?

So, after building out a pipeline, I realized I will need some custom libraries for a python script I will be pulling from SCM. To install Jenkins in Docker, I used the following tutorial:
https://jenkins.io/doc/book/installing/
Like so:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
Now, I will say I'm not a Docker guru, but I'm aware that a Dockerfile allows installing Python libraries. However, because I'm pulling the Docker image from Docker Hub, I'm not sure whether it's possible to add a "RUN pip install " as an argument. Maybe someone has an alternative approach.
Any help is appreciated.
EDIT 1: Here's the output of the first commenter's recommendation:
Step 1/6 : FROM jenkinsci/blueocean
---> b7eef16a711e
Step 2/6 : USER root
---> Running in 150bba5c4994
Removing intermediate container 150bba5c4994
---> 882bcec61ccf
Step 3/6 : RUN apt-get update
---> Running in 324f28f384e0
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Error:
/bin/sh: apt-get: not found
The command '/bin/sh -c apt-get update' returned a non-zero code: 127
Observation:
This error occurs when the image you want to run is not Debian-based and hence does not support 'apt'.
To resolve this, we need to find out which package manager it uses.
In my case it was 'apk'.
Resolution:
Replace 'apt-get' with 'apk' in your Dockerfile. (If this does not work, you can try the 'yum' package manager as well.)
The command in your Dockerfile should look like:
RUN apk update
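Applied to this case, since the jenkinsci/blueocean image is Alpine-based, the whole Dockerfile would become something like this sketch (the apk package names are assumptions worth double-checking):

```dockerfile
FROM jenkinsci/blueocean
USER root
# Alpine uses apk, not apt-get; --no-cache avoids a separate `apk update` layer.
RUN apk add --no-cache python3 py3-pip
RUN pip3 install --upgrade pip
```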
You can create a Dockerfile
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
You can build the custom image using
docker build -t jenkinspython .
Similar to Hemant Sing's answer, but with 2 slight differences.
First, create a unique directory: mkdir foo
cd to that directory and run:
docker build -f jenkinspython .
Where jenkinspython contains:
FROM jenkins:latest
USER root
RUN apt-get update
RUN apt-get install -y python-pip
# Install app dependencies
RUN pip install --upgrade pip
Notice that my change has -f, not -t. And notice that the build output does indeed contain:
Step 5/5 : RUN pip install --upgrade pip
---> Running in d460e0ebb11d
Collecting pip
Downloading https://files.pythonhosted.org/packages/5f/25/e52d3f31441505a5f3af41213346e5b6c221c9e086a166f3703d2ddaf940/pip-18.0-py2.py3-none-any.whl (1.3MB)
Installing collected packages: pip
Found existing installation: pip 9.0.1
Not uninstalling pip at /usr/lib/python2.7/dist-packages, outside environment /usr
Successfully installed pip-18.0
Removing intermediate container d460e0ebb11d
---> b7d342751a79
Successfully built b7d342751a79
So now that the image has been built (in my case, b7d342751a79), fire it up and verify that pip has indeed been updated:
$ docker run -it b7d342751a79 bash
root@9f559d448be9:/# pip --version
pip 18.0 from /usr/local/lib/python2.7/dist-packages/pip (python 2.7)
So now your image has pip installed, so then you can feel free to pip install whatever crazy packages you need :)
