Using Pyro4 to connect python scripts in separate containers using docker-compose - python-3.x

The Problem
I want to use Pyro4 for remote procedure calls across multiple containers managed with docker-compose. Currently, I am just trying to implement a simplified version of the Pyro4 warehouse example, set up to run across different machines instead of the default localhost, since I am using multiple containers.
I can successfully start the Pyro name server in its own container, but in another container I cannot publish the Warehouse class and start Pyro's request loop. I get the error OSError: [Errno 99] Cannot assign requested address.
My attempt and additional information
I am using balena to deploy this to a Raspberry Pi 3 B+, and I have an environment variable (device variable in balena cloud) "PYRO_HOST=pyro-ns" to set the address of the pyro name server.
I see the pyro name server get created
05.02.20 15:27:33 (-0500) pyro-ns Broadcast server running on 0.0.0.0:9091
05.02.20 15:27:33 (-0500) pyro-ns NS running on pyro-ns:9090 (172.17.0.3)
05.02.20 15:27:33 (-0500) pyro-ns Warning: HMAC key not set. Anyone can connect to this server!
05.02.20 15:27:33 (-0500) pyro-ns URI = PYRO:Pyro.NameServer#pyro-ns:9090
However, I get the error OSError: [Errno 99] Cannot assign requested address when I try to publish the Warehouse class and start Pyro's request loop using
Pyro4.Daemon.serveSimple(
    {
        Warehouse: "example.warehouse"
    },
    ns=True, verbose=True)
I get the following
05.02.20 16:52:00 (-0500) container_B Traceback (most recent call last):
05.02.20 16:52:00 (-0500) container_B File "src/container_B_main.py", line 33, in <module>
05.02.20 16:52:00 (-0500) container_B main()
05.02.20 16:52:00 (-0500) container_B File "src/container_B_main.py", line 30, in main
05.02.20 16:52:00 (-0500) container_B ns=True, verbose=True)
05.02.20 16:52:00 (-0500) container_B File "/usr/local/lib/python3.5/site-packages/Pyro4/core.py", line 1204, in serveSimple
05.02.20 16:52:00 (-0500) container_B daemon = Daemon(host, port)
05.02.20 16:52:00 (-0500) container_B File "/usr/local/lib/python3.5/site-packages/Pyro4/core.py", line 1141, in __init__
05.02.20 16:52:00 (-0500) container_B self.transportServer.init(self, host, port, unixsocket)
05.02.20 16:52:00 (-0500) container_B File "/usr/local/lib/python3.5/site-packages/Pyro4/socketserver/threadpoolserver.py", line 134, in init
05.02.20 16:52:00 (-0500) container_B sslContext=sslContext)
05.02.20 16:52:00 (-0500) container_B File "/usr/local/lib/python3.5/site-packages/Pyro4/socketutil.py", line 298, in createSocket
05.02.20 16:52:00 (-0500) container_B bindOnUnusedPort(sock, bind[0])
05.02.20 16:52:00 (-0500) container_B File "/usr/local/lib/python3.5/site-packages/Pyro4/socketutil.py", line 542, in bindOnUnusedPort
05.02.20 16:52:00 (-0500) container_B sock.bind((host, 0))
05.02.20 16:52:00 (-0500) container_B OSError: [Errno 99] Cannot assign requested address
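For context: Errno 99 (EADDRNOTAVAIL) is raised when a process asks the kernel to bind a socket to an IP address that none of its own network interfaces carries — likely what happens here, since PYRO_HOST=pyro-ns points the daemon at the name server's address rather than container_B's own. A minimal stdlib reproduction, using a documentation-only TEST-NET address that no real host owns (try_bind is a hypothetical helper, behavior shown is for Linux):

```python
import errno
import socket

def try_bind(address):
    """Attempt to bind a TCP socket to `address`; return the errno on failure, None on success."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind((address, 0))
        return None
    except OSError as exc:
        return exc.errno
    finally:
        sock.close()

# 203.0.113.1 is a TEST-NET-3 (documentation) address that no local
# interface owns, so binding to it fails the same way the daemon does.
print(try_bind("203.0.113.1") == errno.EADDRNOTAVAIL)
# Binding to 0.0.0.0 (any interface) succeeds:
print(try_bind("0.0.0.0") is None)
```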
What am I missing that will allow Pyro to work across multiple containers with docker-compose?
Following is my code:
docker-compose.yml
version: '2'
services:
  pyro-ns:
    privileged: true
    restart: always
    build: ./pyro-ns
    ports:
      - "9090:9090"
  container_A:
    privileged: true
    restart: always
    build: ./container_A
    depends_on:
      - pyro-ns
      - container_B
  container_B:
    privileged: true
    restart: always
    build: ./container_B
    depends_on:
      - pyro-ns
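One caveat worth noting about this compose file: depends_on only orders container start-up; it does not wait until the name server is actually accepting connections, so dependent containers may still need to retry. A stdlib wait-for-port helper (hypothetical, not part of the original code) sketches the common workaround:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=0.5):
    """Poll until a TCP connection to host:port succeeds; return False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# e.g. before touching the name server from a dependent container:
# wait_for_port("pyro-ns", 9090)
```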
pyro-ns
Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-run
# enable container init system.
ENV INITSYSTEM on
# use `install_packages` if you need to install dependencies,
# for instance if you need git, just uncomment the line below.
# RUN install_packages git
RUN pip install --upgrade pip
RUN pip install Pyro4 dill
ENV PYRO_SERIALIZERS_ACCEPTED=serpent,json,marshal,pickle,dill
ENV PYTHONUNBUFFERED=0
CMD ["python", "-m", "Pyro4.naming"]
EXPOSE 9090
container_A
Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-run
# enable container init system.
ENV INITSYSTEM on
# use `install_packages` if you need to install dependencies,
# for instance if you need git, just uncomment the line below.
# RUN install_packages git
RUN pip install --upgrade pip
RUN pip install Pyro4 dill
# Set our working directory
WORKDIR /usr/src/container_A
# Copy requirements.txt first for better cache on later pushes
COPY requirements.txt requirements.txt
# pip install python deps from requirements.txt on the resin.io build server
RUN pip install -r requirements.txt
# This will copy all files in our root to the working directory in the container
COPY . ./
# main.py will run when container starts up on the device
CMD ["python","-u","src/container_A_main.py"]
container_A_main.py
import Pyro4
import Pyro4.util
import sys
sys.excepthook = Pyro4.util.excepthook
try:
    print('Top of container A')
    warehouse = Pyro4.Proxy("PYRONAME:example.warehouse")
    print('The warehouse contains: ', warehouse.list_contents())
except Exception as ex:
    template = "An exception of type {0} occurred. Arguments:\n{1!r}"
    message = template.format(type(ex).__name__, ex.args)
    print(message)
container_B
Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-run
# enable container init system.
ENV INITSYSTEM on
# use `install_packages` if you need to install dependencies,
# for instance if you need git, just uncomment the line below.
# RUN install_packages git
RUN pip install --upgrade pip
# Set our working directory
WORKDIR /usr/src/container_B
# Copy requirements.txt first for better cache on later pushes
COPY requirements.txt requirements.txt
# pip install python deps from requirements.txt on the resin.io build server
RUN pip install -r requirements.txt
# This will copy all files in our root to the working directory in the container
COPY . ./
# container_B_main.py will run when container starts up on the device
CMD ["python","-u","src/container_B_main.py"]
container_B_main.py
from __future__ import print_function
import Pyro4

@Pyro4.expose
@Pyro4.behavior(instance_mode="single")
class Warehouse(object):
    def __init__(self):
        self.contents = ["chair", "bike", "flashlight", "laptop", "couch"]

    def list_contents(self):
        return self.contents

    def take(self, name, item):
        self.contents.remove(item)
        print("{0} took the {1}.".format(name, item))

    def store(self, name, item):
        self.contents.append(item)
        print("{0} stored the {1}.".format(name, item))

def main():
    # Pyro4.config.HOST = "pyro-ns"
    Pyro4.Daemon.serveSimple(
        {
            Warehouse: "example.warehouse"
        },
        ns=True, verbose=True)

if __name__ == "__main__":
    main()
For both container_A and container_B the requirements.txt file is the same.
requirements.txt
Pyro4

After some help from the balena forums, I was able to successfully get my Pyro example to run.
Following are my updated and working files for reference.
---
docker-compose.yml
version: '2'
services:
  pyro-ns:
    privileged: true
    restart: always
    build: ./pyro-ns
    command: ["--host=pyro-ns"]
  container_A:
    privileged: true
    restart: always
    build: ./container_A
    depends_on:
      - pyro-ns
      - container_B
  container_B:
    privileged: true
    restart: always
    build: ./container_B
    depends_on:
      - pyro-ns
pyro-ns
Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-run
# enable container init system.
ENV INITSYSTEM on
# use `install_packages` if you need to install dependencies,
# for instance if you need git, just uncomment the line below.
# RUN install_packages git
RUN pip install --upgrade pip
RUN pip install Pyro4 dill
ENV PYRO_SERIALIZERS_ACCEPTED=serpent,json,marshal,pickle,dill
ENV PYTHONUNBUFFERED=0
ENTRYPOINT ["pyro4-ns"]
EXPOSE 9090
container_A
Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-run
# enable container init system.
ENV INITSYSTEM on
# use `install_packages` if you need to install dependencies,
# for instance if you need git, just uncomment the line below.
# RUN install_packages git
RUN pip install --upgrade pip
RUN pip install Pyro4 dill
# Set our working directory
WORKDIR /usr/src/container_A
# Copy requirements.txt first for better cache on later pushes
COPY requirements.txt requirements.txt
# pip install python deps from requirements.txt on the resin.io build server
RUN pip install -r requirements.txt
# This will copy all files in our root to the working directory in the container
COPY . ./
# main.py will run when container starts up on the device
CMD ["python","-u","src/container_A_main.py"]
container_A_main.py
import Pyro4
import Pyro4.util
import sys
sys.excepthook = Pyro4.util.excepthook
try:
    print('Top of container A')
    warehouse = Pyro4.Proxy("PYRONAME:example.warehouse")
    print('The warehouse contains: ', warehouse.list_contents())
    print('Infinite loop after running the Warehouse class via Pyro4')
    while True:
        # Infinite loop to keep the container alive.
        pass
except Exception as ex:
    template = "An exception of type {0} occurred. Arguments:\n{1!r}"
    message = template.format(type(ex).__name__, ex.args)
    print(message)
container_B
Dockerfile.template
FROM balenalib/%%BALENA_MACHINE_NAME%%-python:3-stretch-run
# enable container init system.
ENV INITSYSTEM on
# use `install_packages` if you need to install dependencies,
# for instance if you need git, just uncomment the line below.
# RUN install_packages git
RUN pip install --upgrade pip
# Set our working directory
WORKDIR /usr/src/container_B
# Copy requirements.txt first for better cache on later pushes
COPY requirements.txt requirements.txt
# pip install python deps from requirements.txt on the resin.io build server
RUN pip install -r requirements.txt
# This will copy all files in our root to the working directory in the container
COPY . ./
# main.py will run when container starts up on the device
CMD ["python","-u","src/container_B_main.py"]
EXPOSE 9100
container_B_main.py
from __future__ import print_function
import Pyro4

@Pyro4.expose
@Pyro4.behavior(instance_mode="single")
class Warehouse(object):
    def __init__(self):
        self.contents = ["chair", "bike", "flashlight", "laptop", "couch"]

    def list_contents(self):
        return self.contents

    def take(self, name, item):
        self.contents.remove(item)
        print("{0} took the {1}.".format(name, item))

    def store(self, name, item):
        self.contents.append(item)
        print("{0} stored the {1}.".format(name, item))

def main():
    Pyro4.config.SERIALIZER = 'pickle'
    daemon = Pyro4.Daemon(host="container_B", port=9100)
    uri = daemon.register(Warehouse)
    print('The uri of the registered Warehouse class')
    print(uri)
    try:
        ns = Pyro4.locateNS(host="pyro-ns", port=9090)
        print('The located name server')
        print(ns)
        ns.register("example.warehouse", uri)
    except Exception as ex:
        template = "An exception of type {0} occurred. Arguments:\n{1!r}"
        message = template.format(type(ex).__name__, ex.args)
        print(message)
    daemon.requestLoop()

if __name__ == "__main__":
    main()
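The key change in the working version is that the daemon binds to its own service name ("container_B") and locates the name server by its name ("pyro-ns"); both names work because Docker's embedded DNS maps each compose service name to that container's IP on the compose network. A small stdlib sketch of that resolution, with a fallback for running outside Docker (resolve_service is a hypothetical helper, not Pyro4 API):

```python
import socket

def resolve_service(name, fallback="127.0.0.1"):
    """Resolve a compose service name via DNS; return `fallback` if it does not resolve.

    Inside the compose network, Docker's embedded DNS maps service names
    like "pyro-ns" or "container_B" to container IPs; outside it, the
    lookup fails and the fallback is used instead.
    """
    try:
        return socket.gethostbyname(name)
    except socket.gaierror:
        return fallback

print(resolve_service("localhost"))  # "127.0.0.1" on most systems
print(resolve_service("pyro-ns"))    # container IP inside compose; fallback elsewhere
```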
For both container_A and container_B the requirements.txt file is the same.
requirements.txt
Pyro4

Related

Flask App working locally but not working on local docker

The app runs locally, but when I build the Docker image and try to run the app from local Docker, the browser shows the following error:
This site can't be reached. http://172.17.0.2:8080/ is unreachable.
ERR_ADDRESS_UNREACHABLE, and it also takes too long to respond.
What changes should I make in the Dockerfile or in the app code so that I can run it from local Docker?
Flask App code:
from flask import Flask, request, url_for, redirect, render_template, jsonify
from pycaret.regression import *
import pandas as pd
import pickle
import numpy as np

app = Flask(__name__)
model = load_model('deployment_28042020')
cols = ['age', 'sex', 'bmi', 'children', 'smoker', 'region']

@app.route('/')
def home():
    return render_template("home.html")

@app.route('/predict', methods=['POST'])
def predict():
    int_features = [x for x in request.form.values()]
    final = np.array(int_features)
    data_unseen = pd.DataFrame([final], columns=cols)
    prediction = predict_model(model, data=data_unseen, round=0)
    prediction = int(prediction.Label[0])
    return render_template('home.html', pred='Expected Bill will be {}'.format(prediction))

if __name__ == '__main__':
    app.run(debug=True, port=8080, host='0.0.0.0')
Docker file:
FROM python:3.7
RUN pip install virtualenv
ENV VIRTUAL_ENV=/venv
RUN virtualenv venv -p python3
ENV PATH="VIRTUAL_ENV/bin:$PATH"
WORKDIR /app
COPY requirements.txt requirements.txt
ADD . /app
# install dependencies
RUN pip install -r requirements.txt
COPY . .
# expose port
# EXPOSE 5000
# EXPOSE 8000
EXPOSE 8080
# run application
CMD ["python", "app.py", "--host=0.0.0.0"]
Add a docker-compose.yml file
version: "3"
services:
  app:
    build: .
    ports:
      - "8080:8080"
Run: docker-compose up --build
An important thing to notice is that 172.17.0.2 belongs to your container network. You can access your site on
http://localhost:8080.
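localhost:8080 works through the combination of the published port and the bind address: the "8080:8080" mapping forwards host traffic into the container, and host='0.0.0.0' makes Flask accept it on any interface — binding to 127.0.0.1 inside the container would accept only container-local connections. A small stdlib illustration of the two bind targets (bind_and_report is a hypothetical helper):

```python
import socket

def bind_and_report(host):
    """Bind a TCP socket to `host` on an ephemeral port and report the bound address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind((host, 0))
    addr = sock.getsockname()[0]
    sock.close()
    return addr

print(bind_and_report("0.0.0.0"))    # listens on all interfaces (reachable via port mapping)
print(bind_and_report("127.0.0.1"))  # loopback only (unreachable from outside the container)
```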

Web application using Python3 not working when Dockerized

HelloWorld-1.py
app = Flask(__name__)

@app.route('/')
def printHelloWorld():
    print("+++++++++++++++++++++")
    print("+   HELLO WORLD-1   +")
    print("+++++++++++++++++++++")
    return '<h1>Bishwajit</h1>'
    # return '<h1>Hello %s!<h1>' % name

if name == '__main__':
    app.run(debug='true')
Dockerfile
FROM python:3
ADD HelloWorld-1.py /HelloWorld-1.py
RUN pip install flask
EXPOSE 80
CMD [ "python", "/HelloWorld-1.py"]
Building docker using the below command
docker build -t helloworld .
Running docker image using below command
docker run -d --name helloworld -p 80:80 helloworld
When I run the below command
docker ps -a
I get the below output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cebfe8a22493 helloworld "python /home/HelloW…" 2 minutes ago Up 2 minutes (unhealthy) 0.0.0.0:80->80/tcp helloworld
If I hit 127.0.0.1:5000 in the browser, it does not give a response.
But when I run the Python file directly, it runs properly in the browser.
I reproduced your problem and there were four main problems:
Not importing flask.
Using name instead of __name__
Not assigning the correct port.
Not assigning the host.
This is how your HelloWorld-1.py should look:
from flask import Flask

app = Flask(__name__)

@app.route('/')
def printHelloWorld():
    print("+++++++++++++++++++++")
    print("+   HELLO WORLD-1   +")
    print("+++++++++++++++++++++")
    return '<h1>Bishwajit</h1>'
    # return '<h1>Hello %s!<h1>' % name

if __name__ == '__main__':
    app.run(host='0.0.0.0')
This is how your Dockerfile should look:
FROM python:3
ADD HelloWorld-1.py .
RUN pip install flask
CMD [ "python", "/HelloWorld-1.py"]
Then simply build and run:
docker build . -t helloflask
docker run -dit -p 5000:5000 helloflask
Now go to localhost:5000 and it should work.
Additionally: You could actually assign any other port, for example 4444, and then go to localhost:4444:
docker run -dit -p 4444:5000 helloflask

Docker NameError: name 'app' is not defined

I am trying to make a Flask framework with python and trying to host it on Docker.
# importing dependencies
from flask import Flask

# initializing the name of the application
app = Flask(__name__)

@app.route('/')
def hello(parameter_list):
    return 'Hello, this is my first try on Docker'

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
I am getting, at line 5, that name 'app' is not defined.
What should I do to remove this error?
This is my first time asking a question here; please let me know if any other clarification is needed, or any suggestions for future posts.
Thanks in advance.
The error shown in your image and the code do not seem to match. One way to reproduce your error is to pass app to the Flask object instead of __name__.
Here you go with HelloWorld
FROM python:alpine3.7
RUN pip install flask==0.10.1
COPY . /app
WORKDIR /app
EXPOSE 5000
CMD python app.py
and app.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Welcome to the Data Science Learner!"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int("5000"), debug=True)
build
docker build -t flask-test .
run
docker run -it --rm flask-test
You can do the same with Docker Compose:
docker-compose rm -f && docker-compose up --build

in docker python cannot find app

When I run "docker-compose up" on my Windows 10 Docker client, I get the following errors:
counterapp_web_1 | File "manage.py", line 7, in <module>
counterapp_web_1 | from app import app
counterapp_db_1 | 2018-01-26T05:09:23.517693Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
counterapp_web_1 | ImportError: No module named 'app',
Here is my dockerfile:
FROM python:3.4.5-slim
## make a local directory
RUN mkdir /counter_app
ENV PATH=$PATH:/counter_app
ENV PYTHONPATH /counter_app
# set "counter_app" as the working directory from which CMD, RUN, ADD references
WORKDIR /counter_app
# now copy all the files in this directory to /counter_app
ADD . .
# pip install the local requirements.txt
RUN pip install -r requirements.txt
# Listen to port 5000 at runtime
EXPOSE 5000
# Define our command to be run when launching the container
CMD ["python", "manage.py", "runserver"]
There is a manage.py that imports from the folder app; under app, there is an __init__.py.
Double-check whether this error on docker run is because of your directory name:
__init__.py is imported using its directory. If you want to import it as app, you should put the __init__.py file in a directory named app.
A better option is just to rename __init__.py to app.py.
So the folder must be named app, not counter_app.
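Put differently, from app import app succeeds only when Python can find something named app on sys.path: either a directory app/ containing __init__.py, or a module app.py. A quick stdlib demonstration of the directory form (throwaway temp layout; the names and the string value are illustrative only):

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway package layout:  <tmp>/app/__init__.py
tmp = tempfile.mkdtemp()
pkg_dir = os.path.join(tmp, "app")
os.mkdir(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("app = 'created-in-__init__'\n")

# The import works only because the directory itself is named "app".
sys.path.insert(0, tmp)
importlib.invalidate_caches()
module = importlib.import_module("app")
print(module.app)
```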

Docker-compose setup mongodb with data and a nodejs server

I want to spin up a whole nodejs/mongodb environment with docker-compose. I created a mongodump into radcupDevSample.json, which I want to mongorestore into a new mongodb container. After this I want a new node container with my API linked to the mongodb container holding the sample data.
I have the following files:
1. docker-compose.yml:
db:
  image: mongo
  ports:
    - "27017:27017"
mongo-importer:
  build: .
web:
  build: web
  links:
    - db
  ports:
    - "3000:3000"
  volumes:
    - ./src:/home/env
  environment:
    NODE_ENV: development
2. web Dockerfile:
FROM node
RUN apt-get update -y
RUN apt-get install -y git
RUN apt-get install -y vim
RUN git clone https://sdsds-oauth-basic#github.com/jdklfj /home/app
WORKDIR /home/app/src
RUN npm install
CMD "npm start"
EXPOSE 3000
3. "." Dockerfile:
FROM mongo
COPY radcupDevSample.json /radcupDevSample.json
CMD mongorestore -h db /radcupDevSample.json
I used this answer mentioned here:
How do I seed a mongo database using docker-compose?
My problem is: if I try docker-compose up, I get this error:
docker-compose up
radcupbackend_db_1 is up-to-date
Building web
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "/compose/compose/cli/main.py", line 54, in main
File "/compose/compose/cli/docopt_command.py", line 23, in sys_dispatch
File "/compose/compose/cli/docopt_command.py", line 26, in dispatch
File "/compose/compose/cli/main.py", line 171, in perform_command
File "/compose/compose/cli/main.py", line 587, in up
File "/compose/compose/project.py", line 313, in up
File "/compose/compose/service.py", line 404, in execute_convergence_plan
File "/compose/compose/service.py", line 303, in create_container
File "/compose/compose/service.py", line 326, in ensure_image_exists
File "/compose/compose/service.py", line 723, in build
File "/compose/venv/lib/python2.7/site-packages/docker/api/build.py", line 41, in build
TypeError: You must specify a directory to build in path
docker-compose returned -1
Does someone have a hint as to why I get these errors? Or could someone help me set up this environment?
Thank you!
PS. Building the two Dockerfiles separately doesn't throw any error...
It looks like Docker is unable to find the web/ directory from the current directory where you are running docker-compose up.
Does your structure look like this?
anovil@ubuntu-anovil:~/tmp/docker-compose-mongo$ tree .
.
├── docker-compose.yml
├── Dockerfile
├── radcupDevSample.json
├── src
└── web
    └── Dockerfile

2 directories, 4 files
anovil@ubuntu-anovil:~/tmp/docker-compose-mongo$
I created this kind of structure and then when I ran my build,
anovil@ubuntu-anovil:~/docker-compose-mongo$ docker-compose up
dockercomposemongo_db_1 is up-to-date
Starting dockercomposemongo_web_1
Building mongo-importer
Step 1 : FROM mongo
---> 94a166215fe3
...
To make sure everything is in order, I recommend running docker-compose build --no-cache first from the same directory; this ensures you start from a clean slate.
anovil@ubuntu-anovil:~/docker-compose-mongo$ docker-compose build --no-cache
db uses an image, skipping
Building web
Step 1 : FROM node
---> ac9b478bfbbd
...
And then run a docker-compose up
Please let me know how it went.