Running jupyter-dash in docker container - python-3.x

I have created a Dockerfile by following https://www.youtube.com/watch?v=QkOKkrKqI-k
The image builds perfectly fine and runs JupyterLab. I can also run a simple Dash application:
import plotly.express as px
from jupyter_dash import JupyterDash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output

# Load Data
df = px.data.tips()

# Build App
app = JupyterDash(__name__)
app.layout = html.Div([
    html.H1("JupyterDash Demo"),
    dcc.Graph(id='graph'),
    html.Label([
        "colorscale",
        dcc.Dropdown(
            id='colorscale-dropdown', clearable=False,
            value='plasma', options=[
                {'label': c, 'value': c}
                for c in px.colors.named_colorscales()
            ])
    ]),
])

# Define callback to update graph
@app.callback(
    Output('graph', 'figure'),
    [Input("colorscale-dropdown", "value")]
)
def update_figure(colorscale):
    return px.scatter(
        df, x="total_bill", y="tip", color="size",
        color_continuous_scale=colorscale,
        render_mode="webgl", title="Tips"
    )
# Run app and display result inline in the notebook
app.run_server(mode='inline')      # --> This doesn't work
app.run_server(mode='jupyterlab')  # --> This doesn't work
# app.run_server(mode='external')  # --> This doesn't work
In the Dockerfile I expose ports 8888 and 8050. After building the image, I also run the container with both ports published, like below.
docker run -it -p 8888:8888 -p 8050:8050 <image-name>
However, when I run my Dash application in external mode, it shows me a URL, but when I try to connect to that URL it doesn't work.
Does anyone know how to fix this? I know how to use Flask to open up the application, but I would like to follow the traditional way suggested in the video.
Dockerfile:
FROM python:3.8.0
RUN git clone --depth=1 https://github.com/Bash-it/bash-it.git ~/.bash_it && \
    bash ~/.bash_it/install.sh --silent
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash - && \
    apt-get upgrade -y && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*
RUN pip install --upgrade pip && \
    pip install --upgrade \
    numpy \
    pandas \
    dash \
    jupyterlab \
    ipywidgets \
    jupyterlab-git \
    jupyter-dash
RUN jupyter lab build
# RUN pip install --upgrade pip && \
#     pip install --upgrade \
#     jupyterlab "ipywidgets>=7.5"
RUN jupyter labextension install \
    jupyterlab-plotly@4.14.3 \
    @jupyter-widgets/jupyterlab-manager \
    @jupyterlab/git
COPY entrypoint.sh /usr/local/bin/
RUN chmod 755 /usr/local/bin/entrypoint.sh
COPY config/ /root/.jupyter/
EXPOSE 8888 8050
VOLUME /notebooks
WORKDIR /notebooks
# ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
CMD jupyter lab --ip=* --port=8888 --allow-root

I'm not sure if my answer is the right approach, but I made it work. If anyone has a better solution, please post it here.
The Dockerfile remains the same. After building the Docker image, I run:
docker run -it -p 8888:8888 -p 8050:8050 <ImageName>
The only change I made is the hostname in app.run_server, so my main command looks like this:
app.run_server(mode='inline', host="0.0.0.0", port=8050, dev_tools_ui=True)
Binding to 0.0.0.0 makes the Dash server listen on all interfaces inside the container, so the published port 8050 is reachable from the host. This works for the inline, external, and jupyterlab modes.

Related

How to pass args to docker run for python?

I edited my Dockerfile based on the feedback I received.
I run my Python script with args: python3 getappVotes.py --abc 116 --xyz 2 --processall
This is my Dockerfile. It builds successfully, but I am not sure how I can pass the above args during docker run. The arguments are optional.
FROM python:3.7.5-slim
WORKDIR /usr/src/app
RUN python -m pip install \
    parse \
    argparse \
    datetime \
    urllib3 \
    python-dateutil \
    couchdb \
    realpython-reader
RUN mkdir -p /var/log/appvoteslog/
COPY getappVotes.py .
ENTRYPOINT ["python", "./getappVotes.py"]
CMD ["--xx 116", "--yy 2", "--processall"]
As mentioned in the comments, for your use case, you need to use both ENTRYPOINT (for the program and mandatory arguments) and CMD (for optional arguments).
More precisely, you should edit your Dockerfile in the following way:
FROM python:3.7.5-slim
WORKDIR /usr/src/app
RUN python -m pip install \
    parse \
    argparse \
    datetime \
    urllib3 \
    python-dateutil \
    couchdb \
    realpython-reader
RUN mkdir -p /var/log/appvoteslog/
COPY getappVotes.py .
ENTRYPOINT ["python", "./getappVotes.py"]
CMD ["--xx", "116", "--yy", "2", "--processall"]
(from a shell perspective, --xx 116 is parsed as two separate arguments, not a single one)
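With this setup, the optional arguments can be overridden at docker run time. A minimal usage sketch, assuming the image is tagged appvotes (the tag is just an example):
docker build -t appvotes .
# Runs with the CMD defaults: python ./getappVotes.py --xx 116 --yy 2 --processall
docker run --rm appvotes
# Anything after the image name replaces CMD and is appended to the ENTRYPOINT:
# python ./getappVotes.py --xx 200 --yy 5
docker run --rm appvotes --xx 200 --yy 5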
For more details:
see the answer What is the difference between CMD and ENTRYPOINT in a Dockerfile?;
see also the example from this answer: Install python package in docker file, where the "manual" command python -m pip install … is replaced with the more idiomatic pip install --no-cache-dir -r requirements.txt (a minimal sketch of that idiom follows below).
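For reference, the requirements.txt idiom mentioned in the last point would look roughly like this (a sketch, assuming the packages above are listed in a requirements.txt next to the Dockerfile):
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt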

Dockerized Tensorflow script can't see GPUs

I use Docker containers to train deep learning models. These containers run on a Linux server, and the models are trained there with several GPUs.
The problem is that TensorFlow does not recognize the GPUs inside the container. The Dockerfile looks like this:
FROM nvidia/cuda:10.2-runtime-ubuntu18.04
RUN apt-get update && apt-get install -y apt-utils
RUN apt-get install -y \
    git \
    pkg-config \
    python3-pip \
    python3.6 \
    nano \
    wget \
    yasm
FROM python:3.6
COPY requirements.txt ./
# Here tensorflow-gpu == 2.1 is installed
RUN pip install --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt
COPY . /
ENTRYPOINT ["python", "./main.py"]
If you comment out the Python-specific lines in the Dockerfile and replace them with CMD ["nvidia-smi"], you can see that the GPUs are visible inside the container. So the only remaining question is how to get TensorFlow to detect the GPUs.
In the Python code the GPUs are set up as follows:
physical_devices = tf.config.experimental.list_physical_devices('GPU')
for physical_device in physical_devices:
    tf.config.experimental.set_memory_growth(physical_device, True)
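A quick way to see what TensorFlow actually detects is to print the device list before enabling memory growth; a minimal sketch, assuming tensorflow-gpu 2.1 is importable inside the container:
import tensorflow as tf

# An empty list here means TensorFlow sees no CUDA devices in the container
physical_devices = tf.config.experimental.list_physical_devices('GPU')
print("Detected GPUs:", physical_devices)

for physical_device in physical_devices:
    # Allocate GPU memory on demand instead of reserving it all up front
    tf.config.experimental.set_memory_growth(physical_device, True)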

Unable to run aliyun-cli in Docker:stable container after installing it. Errors as command not found

I am unsure whether Stack Overflow or Server Fault is the right Stack Exchange site, but I'm going with Stack Overflow because the alicloud site said to add a tag and ask a question here.
I'm currently building an image based on docker:stable, which is an Alpine distro, that will have aliyun-cli installed and available for use. However, I am getting a weird "command not found" error when I run it. I have followed the guide here https://partners-intl.aliyun.com/help/doc-detail/139508.htm and moved the aliyun binary to /usr/sbin.
Here is my Dockerfile for example
FROM docker:stable
RUN apk update && apk add curl
#Install python 3
RUN apk update && apk add python3 py3-pip
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI
RUN curl -L -o aliyun-cli.tgz https://aliyuncli.alicdn.com/aliyun-cli-linux-3.0.30-amd64.tgz
RUN tar -xzvf aliyun-cli.tgz
RUN mv aliyun /usr/bin
RUN chmod +x /usr/bin/aliyun
RUN rm aliyun-cli.tgz
However, when I run aliyun (which can be auto-completed), I get this:
/ # aliyun
sh: aliyun: not found
I've tried moving it to other bin directories, and cd-ing into the folder and calling it explicitly, but I still always get "command not found". Any suggestions would be welcome.
Did you check this Dockerfile?
Also, why do you need to install aws-cli in the same image, and why maintain it yourself, when AWS provides a managed aws-cli image?
docker run --rm -it amazon/aws-cli --version
That's it for the aws-cli image, but if you want it in an existing image then you can try:
RUN pip install awscli --upgrade
Dockerfile
FROM python:2-alpine3.8
LABEL com.frapsoft.maintainer="Maik Ellerbrock" \
      com.frapsoft.version="0.1.0"
ARG SERVICE_USER
ENV SERVICE_USER ${SERVICE_USER:-aliyun}
RUN apk add --no-cache curl
RUN curl https://raw.githubusercontent.com/ellerbrock/docker-collection/master/dockerfiles/alpine-aliyuncli/requirements.txt > /tmp/requirements.txt
RUN \
    adduser -s /sbin/nologin -u 1000 -H -D ${SERVICE_USER} && \
    apk add --no-cache build-base && \
    pip install aliyuncli && \
    pip install --no-cache-dir -r /tmp/requirements.txt && \
    apk del build-base && \
    rm -rf /tmp/*
USER ${SERVICE_USER}
WORKDIR /usr/local/bin
ENTRYPOINT [ "aliyuncli" ]
CMD [ "--help" ]
build and run
docker build -t aliyuncli .
docker run -it --rm aliyuncli
output
docker run -it --rm abc aliyuncli
usage: aliyuncli <command> <operation> [options and parameters]
<aliyuncli> the valid command as follows:
batchcompute | bsn
bss | cms
crm | drds
ecs | ess
ft | ocs
oms | ossadmin
ram | rds
risk | slb
ubsms | yundun
After a lot of searching, I found a GitHub issue in the official aliyun-cli repository that describes how it is not compatible with Alpine Linux because it is not built against musl libc.
Link here: https://github.com/aliyun/aliyun-cli/issues/54
Following the workarounds there, I built a multi-stage Dockerfile with the following, which fixed my issue.
Dockerfile
#Build aliyun-cli binary ourselves because of issue
#in alpine https://github.com/aliyun/aliyun-cli/issues/54
FROM golang:1.13-alpine3.11 as cli_builder
RUN apk update && apk add curl git make
RUN mkdir /srv/aliyun
WORKDIR /srv/aliyun
RUN git clone https://github.com/aliyun/aliyun-cli.git
RUN git clone https://github.com/aliyun/aliyun-openapi-meta.git
ENV GOPROXY=https://goproxy.cn
WORKDIR aliyun-cli
RUN make deps; \
    make testdeps; \
    make build;
FROM docker:19
#Install python 3 & jq
RUN apk update && apk add python3 py3-pip python3-dev jq
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI from builder
COPY --from=cli_builder /srv/aliyun/aliyun-cli/out/aliyun /usr/bin
RUN aliyun configure set --profile default --mode EcsRamRole --ram-role-name build --region cn-shanghai
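To sanity-check the result, build and run the image (the tag below is just an example); running the bare aliyun command should now print the CLI usage instead of "sh: aliyun: not found":
docker build -t docker-aliyun .
docker run --rm -it docker-aliyun aliyun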

How to access generated output file in Docker

I have dockerized my Python application. The application connects to an Oracle database, pulls 10 rows from a table, and then generates an Excel file. I was able to build my image successfully with all dependent libraries, and it also executes fine. Now I am not sure how to get the generated Excel file (batchtable.xlsx) out of the Docker container.
I am new to Docker and would appreciate your suggestions. I have checked the output without storing the records into Excel and it prints fine on the console, so there is no code issue.
Dockerfile
FROM python:3.7.4-slim-buster
RUN apt-get update && apt-get install -y libaio1 wget unzip
WORKDIR /opt/oracle
COPY File.py /opt/oracle
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && rm -f instantclient-basiclite-linuxx64.zip && \
    cd /opt/oracle/instantclient* && rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && ldconfig
RUN python -m pip install --upgrade pip
RUN python -m pip install cx_Oracle
RUN python -m pip install pandas
RUN python -m pip install openpyxl
CMD [ "python", "/opt/oracle/File.py" ]
File.py
import cx_Oracle
import pandas as pd
#creating database connection
dsn_tns = cx_Oracle.makedsn('dev-tr01.com', '1222', service_name='ast041.com')
conn = cx_Oracle.connect(user=r'usr', password='3451', dsn=dsn_tns)
c = conn.cursor()
query ='SELECT * FROM Employee WHERE ROWNUM <10'
result = pd.read_sql(query, con=conn)
result.to_excel("batchtable.xlsx")
conn.close()
You can access your data by mounting a volume into your container, e.g.:
docker run -ti -v $(pwd):/data IMAGE
https://docs.docker.com/storage/volumes/#start-a-container-with-a-volume
Add a -v switch to your docker run command. For instance:
docker run -v <path>:/output YOUR_IMAGE_NAME
Replace <path> with a valid path on your machine, for instance c:\temp on Windows.
Change your program to write to that directory:
result.to_excel("/output/batchtable.xlsx")
If you are running Docker Desktop, make sure your drive is shared in the Docker Desktop settings.

Docker pass in arguments to python script that uses argparse

I have the following Docker image:
FROM ubuntu
RUN apt-get update \
    && apt-get install -y python3 \
    && apt-get install -y python3-pip \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* \
    && pip3 install boto3
ENV INSTALL_PATH /docker-flowcell-restore
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
COPY /src/* $INSTALL_PATH/src/
ENTRYPOINT python3 src/main.py
In the Python script that the ENTRYPOINT points to, I have some parameters I would like to pass in. I used argparse in my Python script to define them; an example would be --key as an arg option. This --key argument will change on each run of the script. How do I pass this argument into my script so that it executes with the correct parameters?
I have tried
docker run my_image_name --key 100
but the argument is not getting through to the Python script.
You can use the CMD instruction to pass parameters (and set default ones for an ENTRYPOINT), for example:
CMD [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
Take a look here for details.
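Applied to the Dockerfile above, one possible approach (a sketch, not a definitive fix) is to switch the ENTRYPOINT to exec form, so that arguments given to docker run are appended to it, and keep any defaults in CMD:
ENTRYPOINT ["python3", "src/main.py"]
CMD ["--key", "100"]
# docker run my_image_name            ->  python3 src/main.py --key 100
# docker run my_image_name --key 200  ->  python3 src/main.py --key 200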
