How to pass args to docker run for python? - python-3.x

I edited my Dockerfile based on the feedback I received.
I run my Python script with arguments: python3 getappVotes.py --abc 116 --xyz 2 --processall
This is my Dockerfile. I can build it successfully, but I'm not sure how I can pass the above arguments during docker run. The arguments are optional.
FROM python:3.7.5-slim
WORKDIR /usr/src/app
RUN python -m pip install \
    parse \
    argparse \
    datetime \
    urllib3 \
    python-dateutil \
    couchdb \
    realpython-reader
RUN mkdir -p /var/log/appvoteslog/
COPY getappVotes.py .
ENTRYPOINT ["python", "./getappVotes.py"]
CMD ["--xx 116", "--yy 2", "--processall"]

As mentioned in the comments, for your use case, you need to use both ENTRYPOINT (for the program and mandatory arguments) and CMD (for optional arguments).
More precisely, you should edit your Dockerfile in the following way:
FROM python:3.7.5-slim
WORKDIR /usr/src/app
RUN python -m pip install \
    parse \
    argparse \
    datetime \
    urllib3 \
    python-dateutil \
    couchdb \
    realpython-reader
RUN mkdir -p /var/log/appvoteslog/
COPY getappVotes.py .
ENTRYPOINT ["python", "./getappVotes.py"]
CMD ["--xx", "116", "--yy", "2", "--processall"]
(as, from a shell perspective, --xx 116 is recognized as two separate arguments, not a single argument)
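With this setup, the CMD arguments are only defaults: anything you pass after the image name in docker run replaces the whole CMD, while the ENTRYPOINT stays fixed. A minimal sketch (appvotes is a hypothetical image tag):
docker build -t appvotes .
# run with the defaults from CMD
docker run appvotes
# override the optional arguments at run time
docker run appvotes --xx 300 --yy 7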
For more details:
see the answer to What is the difference between CMD and ENTRYPOINT in a Dockerfile?;
see also the example from this answer: Install python package in docker file, where the "manual" command python -m pip install … is replaced with the more idiomatic pip install --no-cache-dir -r requirements.txt
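Applied to the Dockerfile above, that more idiomatic pattern would look something like this (a sketch; it assumes you move the pip install list into a requirements.txt file in the build context):
FROM python:3.7.5-slim
WORKDIR /usr/src/app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir -p /var/log/appvoteslog/
COPY getappVotes.py .
ENTRYPOINT ["python", "./getappVotes.py"]
CMD ["--xx", "116", "--yy", "2", "--processall"]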

Related

Running jupyter-dash in docker container

I created a Dockerfile by following https://www.youtube.com/watch?v=QkOKkrKqI-k
The image builds perfectly fine and runs JupyterLab. Also, I can run a simple Dash application.
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
# Load Data
df = px.data.tips()
# Build App
app = JupyterDash(__name__)
app.layout = html.Div([
    html.H1("JupyterDash Demo"),
    dcc.Graph(id='graph'),
    html.Label([
        "colorscale",
        dcc.Dropdown(
            id='colorscale-dropdown', clearable=False,
            value='plasma', options=[
                {'label': c, 'value': c}
                for c in px.colors.named_colorscales()
            ])
    ]),
])
# Define callback to update graph
@app.callback(
    Output('graph', 'figure'),
    [Input("colorscale-dropdown", "value")]
)
def update_figure(colorscale):
    return px.scatter(
        df, x="total_bill", y="tip", color="size",
        color_continuous_scale=colorscale,
        render_mode="webgl", title="Tips"
    )
# Run app and display result inline in the notebook
app.run_server(mode='inline')      # --> This doesn't work
app.run_server(mode='jupyterlab')  # --> This doesn't work
# app.run_server(mode='external')  # --> This doesn't work
In the Dockerfile I expose ports 8888 and 8050. After building the image, I run the container with both ports published, like below.
docker run -it -p 8888:8888 -p 8050:8050 <image-name>
However, when I run my Dash application in external mode, it prints a URL, but when I try to open that URL it doesn't work.
Does anyone know how to fix this? I know how to serve the application with Flask directly, but I would like to follow the approach suggested in the video.
Dockerfile:
FROM python:3.8.0
RUN git clone --depth=1 https://github.com/Bash-it/bash-it.git ~/.bash_it && \
    bash ~/.bash_it/install.sh --silent
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
    apt-get upgrade -y && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip && \
    pip install --upgrade \
    numpy \
    pandas \
    dash \
    jupyterlab \
    ipywidgets \
    jupyterlab-git \
    jupyter-dash
RUN jupyter lab build
# RUN pip install --upgrade pip && \
#     pip install --upgrade \
#     jupyterlab "ipywidgets>=7.5"
RUN jupyter labextension install \
    jupyterlab-plotly@4.14.3 \
    @jupyter-widgets/jupyterlab-manager \
    @jupyterlab/git
COPY entrypoint.sh /usr/local/bin/
RUN chmod 755 /usr/local/bin/entrypoint.sh
COPY config/ /root/.jupyter/
EXPOSE 8888 8050
VOLUME /notebooks
WORKDIR /notebooks
# ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD jupyter lab --ip=* --port=8888 --allow-root
I'm not sure if my answer is the right approach, but I made it work.
If anyone has a better solution, please post it here.
The Dockerfile remains the same.
After building the image, I run:
docker run -it -p 8888:8888 -p 8050:8050 <ImageName>
The only change I made is the hostname in app.run_server. So my main command looks like this:
app.run_server(mode='inline', host="0.0.0.0", port=8050, dev_tools_ui=True)
This works for the inline, external, and jupyterlab modes. Binding to 0.0.0.0 matters because the Dash server must listen on all interfaces to be reachable through the port published by docker run; the default 127.0.0.1 is only reachable from inside the container.

Where are Python libraries installed on Docker container from AWS Lambda Python image?

I'm trying to deploy a Lambda function using a Docker image, but I want to modify some of the code in the Python packages I'm installing. I can't find where the packages are installed, so I can't modify their source code.
My Dockerfile is as follows:
FROM public.ecr.aws/lambda/python:3.8
WORKDIR /usr/src/project
COPY lambda_handler.py ${LAMBDA_TASK_ROOT}
COPY requirements.txt ./
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
CMD [ "lambda_handler.lambda_handler" ]
Question: Where are the packages from the requirements.txt file installed? I tried going into the container, but I can't get a bash shell because the container is made from a Lambda image, which requires the entrypoint to be the Lambda handler.
You can override the entrypoint of the image and then print the paths that Python searches for packages:
docker run --rm -it --entrypoint python \
    public.ecr.aws/lambda/python:3.8 -c 'import sys; print("\n".join(sys.path))'
This prints:
/var/lang/lib/python38.zip
/var/lang/lib/python3.8
/var/lang/lib/python3.8/lib-dynload
/var/lang/lib/python3.8/site-packages
Given what we find above, you can enter a bash shell in your built image and look at the contents of
/var/lang/lib/python3.8/site-packages
with the following:
docker run --rm -it --entrypoint bash public.ecr.aws/lambda/python:3.8
ls /var/lang/lib/python3.8/site-packages
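If the goal is to ship a modified copy of a package, one option (a sketch, not the only approach; somepackage and patched_module.py are hypothetical names) is to let pip install it normally and then overwrite the file you changed:
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# overwrite one file of an installed package with your patched copy
COPY patched_module.py /var/lang/lib/python3.8/site-packages/somepackage/module.py
COPY lambda_handler.py ${LAMBDA_TASK_ROOT}
CMD [ "lambda_handler.lambda_handler" ]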

Run npm test inside a docker image and exit

I basically have a Docker image of a Node.js application.
REPOSITORY TAG IMAGE ID CREATED SIZE
abc-test 0.1 1ba85e0ca455 7 hours ago 1.37GB
I want to run npm test from the folder /data/node/src, but that doesn't seem to be working.
Here is the command I am trying:
docker run -p 80:80 --entrypoint="cd /data/node/src && npm run test" abc-test:0.1
But that doesn't seem to work.
Here is my dockerfile:
FROM python:2.7.13-slim
RUN apt-get update && apt-get install -y apt-utils curl
RUN echo 'deb http://nginx.org/packages/debian/ jessie nginx' > /etc/apt/sources.list.d/nginx.list
RUN apt-get update && apt-get install -y \
    build-essential \
    gcc \
    git \
    libcurl4-openssl-dev \
    libldap-2.4-2 \
    libldap2-dev \
    libmysqlclient-dev \
    libpq-dev \
    libsasl2-dev \
    nano \
    nginx=1.8.* \
    nodejs \
    python-dev \
    supervisor
ENV SERVER_DIR /data/applicationui/current/server
ADD src/application/server $SERVER_DIR
EXPOSE 14000 80
# version A: only start tornado, without nginx.
WORKDIR $SERVER_DIR/src
CMD ["npm","run","start:staging"]
Can anyone please help me here?
Pretty sure you can only run one command with ENTRYPOINT and with CMD.
From their docs:
There can only be one CMD instruction in a Dockerfile. If you list more than one CMD then only the last CMD will take effect.
Same thing with Entrypoint:
ENTRYPOINT has two forms:
ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)
ENTRYPOINT command param1 param2 (shell form)
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
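As an illustration of how the two combine at run time (a sketch against a hypothetical image, not the asker's actual Dockerfile), suppose the image defined:
ENTRYPOINT ["npm", "run"]
CMD ["start:staging"]
Then docker run image would execute npm run start:staging, while docker run image test would execute npm run test, because arguments given after the image name replace CMD and are appended to ENTRYPOINT.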
A workaround that I use is the following:
FROM ubuntu:16.04
WORKDIR /home/coins
RUN apt-get update
# ... other Dockerfile instructions here ...
COPY ./entrypoint.sh /home/coins/
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ./entrypoint.sh
entrypoint.sh:
#!/bin/bash
# write whatever sh commands you need here...
exec sh ./some_script
EDIT:
One idea: add a small test shell script that runs those two commands, and then launch it with --entrypoint="test.sh".
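For the original docker run attempt, note that --entrypoint accepts only the executable itself; everything after the image name is passed as arguments. A sketch that should work without changing the image (paths as given in the question):
docker run --rm --entrypoint sh abc-test:0.1 -c 'cd /data/node/src && npm test'
Or, since the image defines only a CMD and no ENTRYPOINT, override the working directory and command directly:
docker run --rm -w /data/node/src abc-test:0.1 npm test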

Python 3 virtualenv and Docker

I'm trying to build a Docker image with Python 3 and virtualenv.
I understand that I wouldn't need to use virtualenv in a Docker image since I'm going to use only Python 3, yet I see some clean isolation benefits of using virtualenv anyway.
What's the best practice? Should I avoid using virtualenv in Docker?
If so, how can I set up python3 and pip3 to be used as python and pip (without the 3)?
This is my Dockerfile:
FROM openjdk:8-alpine
RUN apk update && apk add bash gcc musl-dev
RUN apk add python3 python3-dev
RUN apk add py3-pip
RUN apk add libxslt-dev libxml2-dev
ENV PROJECT_HOME /opt/app
RUN mkdir -p /opt/app
RUN mkdir -p /opt/app/modules
ENV LD_LIBRARY_PATH /usr/lib/python3.6/site-packages/jep
ENV LD_PRELOAD /usr/lib/libpython3.6m.so
RUN pip3 install jep
RUN pip3 install ads
RUN pip3 install gspread
RUN pip3 list
COPY target/my-server-1.0-SNAPSHOT.jar $PROJECT_HOME/my-server-1.0-SNAPSHOT.jar
WORKDIR $PROJECT_HOME
CMD ["java", "-Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb","-jar","./my-server-1.0-SNAPSHOT.jar"]
Thanks
=== UPDATE 1 ===
I'm trying to create a new virtual env in the WORKDIR, install some libs, and then execute a shell script. Even though I can see the whole thing being created when I build the image, when the container runs the environment folder is empty.
This is from my Dockerfile:
RUN virtualenv ./env && source ./env/bin/activate && pip install jep \
    googleads gspread oauth2client
ENTRYPOINT ["/bin/bash", "./startup.sh"]
startup.sh:
#!/bin/sh
source ./env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
It builds fine, but when I run docker-compose up -d this is the output:
./startup.sh: source: line 2: can't open './env/bin/activate'
The env folder exists, but it's empty.
Any ideas?
Thanks!
=== UPDATE 2 ===
This is the working config:
RUN virtualenv ./my-env && source ./my-env/bin/activate \
    && pip install gspread==0.6.2 jep oauth2client googleads pandas
CMD ["/bin/bash", "-c", "./startup.sh"]
This is startup.sh:
#!/bin/sh
source ./my-env/bin/activate
java -Dspring.data.mongodb.uri=mongodb://my-mongo:27017/mydb -jar ./my-server-1.0-SNAPSHOT.jar
I don't think using virtualenv in Docker is really a negative; it will only slow down your container builds a bit.
As for renaming pip3 and python3, you can create a hard link like this:
ln /usr/bin/python3 /usr/bin/python
ln /usr/bin/pip3 /usr/bin/pip
assuming the python3 executable is in /usr/bin/. You can find its location by running which python3.
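To check that the links behave as expected (a quick sanity check, not part of the original answer):
python --version   # should now report the Python 3.x version
pip --version      # should now point at the Python 3 site-packages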
P.S.: Your Dockerfile contains lots of RUN instructions, each of which creates an unnecessary intermediate layer. Combine them to save space and time:
RUN apk update && apk add bash gcc musl-dev \
    python3 python3-dev py3-pip \
    libxslt-dev libxml2-dev
RUN mkdir -p /opt/app/modules # you don't need the first mkdir, -p creates the parents for you
RUN pip3 install jep ads gspread
Or combine them even further, if you aren't planning to change them often:
RUN apk update \
    && apk add bash gcc musl-dev \
    python3 python3-dev py3-pip \
    libxslt-dev libxml2-dev \
    && mkdir -p /opt/app/modules \
    && pip3 install jep ads gspread
The only "workaround" I've found in order to use virtualenv from my docker container is to enter to the docker by ssh, create the environment, install the libs and set its folder as a volume in the docker-compose config so it won't be deleted and I can use it afterward.
(Or to have it ready and just copy the folder at build time) which could be a good option for saving build time, isn't it?
Otherwise, If I create it on Dockerfile and install the libs there, its folder gets empty when the container runs. Don't know why.
I appreciate if anyone can suggest a better way to deal with that.
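A likely cause of the empty folder (an assumption, since the compose file isn't shown here) is a volume mounted over the directory where the env was created at build time: the mount shadows whatever the image put there. A common pattern that sidesteps this, and also removes the need to source activate, is to create the venv outside any mounted path and put its bin directory first on PATH. A minimal sketch:
RUN python3 -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
# pip and python now resolve to the venv in every later RUN, CMD, or ENTRYPOINT
RUN pip install jep gspread oauth2client googleads pandas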

Docker pass in arguments to python script that uses argparse

I have the following Dockerfile:
FROM ubuntu
RUN apt-get update \
    && apt-get install -y python3 \
    && apt-get install -y python3-pip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
In the Python script that the ENTRYPOINT points to, I have some parameters I would like to pass in. I used argparse in my script to define them; an example would be a --key option. This --key argument will change on each run of the script. How do I pass this argument into my script so that it executes with the correct parameters?
I have tried
docker run my_image_name --key 100
but the argument is not reaching the Python script.
You can use the CMD instruction to pass parameters (and to set default ones for an ENTRYPOINT), for example:
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
Take a look here for details.
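Note, though, that the Dockerfile above uses the shell form, ENTRYPOINT python3 src/main.py, and in shell form neither CMD nor the arguments given to docker run are appended to the entrypoint. Switching to the exec form should make the original command work as intended (a minimal sketch):
ENTRYPOINT ["python3", "src/main.py"]
With that change, arguments after the image name are appended to the entrypoint:
docker run my_image_name --key 100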
