How to use dask.distributed API to specify the options for starting Bokeh web interface? - python-3.x

I'm trying to use the dask.distributed Python API to start a scheduler. The example provided at http://distributed.dask.org/en/latest/setup.html#using-the-python-api works as expected, but it does not explain how to supply the options needed to start the Bokeh web interface.
Upon inspection of the dask.distributed source code, I understand I need to provide the Bokeh options via Scheduler(services={}). Unfortunately, I have failed to find the correct dictionary format for services={}.
Below is the code for the dask scheduler function.
import collections
import threading

import dask.distributed as daskd
import tornado.ioloop

def create_dask_scheduler(scheduler_options_dict):
    # Define and start the tornado IO loop in a background thread
    tornado_loop = tornado.ioloop.IOLoop.current()
    tornado_thread = threading.Thread(target=tornado_loop.start, daemon=True)
    tornado_thread.start()
    # Define and start the scheduler
    dask_scheduler = daskd.Scheduler(
        loop=tornado_loop,
        synchronize_worker_interval=scheduler_options_dict['synchronize_worker_interval'],
        allowed_failures=scheduler_options_dict['allowed_failures'],
        services=scheduler_options_dict['services'],
    )
    dask_scheduler.start('tcp://:8786')
    return dask_scheduler

scheduler_options_dict = collections.OrderedDict()
scheduler_options_dict = {
    'synchronize_worker_interval': 60,
    'allowed_failures': 3,
    'services': {('http://hpcsrv', 8787): 8787},
}
dask_scheduler = create_dask_scheduler(scheduler_options_dict)
The error I get is:
Exception in thread Thread-4: Traceback (most recent call last):
/uf5a/nbobolea/bin/anaconda2019.03_python3.7/envs/optimization/lib/python3.7/site-packages/ipykernel_launcher.py:18:
UserWarning: Could not launch service 'http://hpcsrv' on port 8787.
Got the following message: 'int' object is not callable
distributed.scheduler - INFO - Scheduler at: tcp://xxx.xxx.xxx.xxx:8786
Help and insight are very much appreciated.

You want
'services': {('bokeh', dashboard_address): (BokehScheduler, {})}
where dashboard_address is something like "localhost:8787" and BokehScheduler lives in distributed.bokeh.scheduler. You will need to read up on the Bokeh server to see what additional kwargs could be passed in that empty dictionary.
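For concreteness, here is a minimal sketch of the question's create_dask_scheduler call with that services mapping filled in. It assumes a distributed release where BokehScheduler is importable from distributed.bokeh.scheduler (true for releases current when this was asked); the address and the other option values are just examples:

from distributed.bokeh.scheduler import BokehScheduler

scheduler_options_dict = {
    'synchronize_worker_interval': 60,
    'allowed_failures': 3,
    # key: (service name, address to bind); value: (service class, kwargs for it)
    'services': {('bokeh', 'localhost:8787'): (BokehScheduler, {})},
}
dask_scheduler = create_dask_scheduler(scheduler_options_dict)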

Related

Using person detection in Google Colab using Google AI

# imports as in the tutorial; the error below shows the v1p3beta1 module was used
from google.cloud import videointelligence_v1p3beta1 as videointelligence
from google.oauth2 import service_account

def detect_person(input_uri, output_uri):
    """Detects people in a video."""
    client = videointelligence.VideoIntelligenceServiceClient(
        credentials=service_account.Credentials.from_service_account_file('./key.json'))
    # Configure the request
    config = videointelligence.types.PersonDetectionConfig(
        include_bounding_boxes=True,
        include_attributes=True,
        include_pose_landmarks=True,
    )
    context = videointelligence.types.VideoContext(person_detection_config=config)
    # Start the asynchronous request
    operation = client.annotate_video(
        input_uri=input_uri,
        output_uri=output_uri,
        features=[videointelligence.enums.Feature.PERSON_DETECTION],
        video_context=context,
    )
    return operation
I then get an error when running:
operation = detect_person(input_uri, output_uri)
ERROR: AttributeError: module 'google.cloud.videointelligence_v1p3beta1' has no attribute 'enums'. I am attempting to use person detection in the Google API, but I get this error.
For some reason, when I try to run the first code in Google Colab, nothing happens. I'm very new to this, so I'm unsure what else I can do. Thank you so much! I am trying to follow this tutorial to create my own table tennis shot detection: https://github.com/google/making_with_ml/blob/master/sports_ai/Sports_AI_Analysis.ipynb
Import enums before running the code.
from google.cloud import videointelligence
from google.cloud.videointelligence import enums, types
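With those imports in place, the enum becomes a directly imported name rather than an attribute of the module. A minimal sketch, assuming the pre-2.0 client the tutorial targets (if the stable module does not expose PERSON_DETECTION, import from the versioned google.cloud.videointelligence_v1p3beta1 package instead):

from google.cloud.videointelligence_v1p3beta1 import enums, types

# the enum is now an imported name, so no module attribute lookup is needed
features = [enums.Feature.PERSON_DETECTION]
config = types.PersonDetectionConfig(include_bounding_boxes=True)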

What's the proper way to test a MongoDB connection with motor io?

I've got a simple FastAPI web app going, and I'd like to be able to check the database connection on startup (and retry the connection if it fails).
I've got the following code, but it doesn't feel right:
# main.py
import uvicorn
from backend.app import app

if __name__ == "__main__":
    uvicorn.run(app, port=8001)

# app.py
# ... omitted for brevity
from backend.database import notes, tags
# ... omitted for brevity

# database.py
from asyncio import get_event_loop
from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient("localhost", 27027)
loop = get_event_loop()
data = loop.run_until_complete(client.server_info())

db = client.notes_db
notes = db.notes
tags = db.tags
Without get_event_loop() and the subsequent loop.run_until_complete() call, it won't test the database connection until you actually try to access or write to it.
My goal is to be able to halt the startup process until it can successfully connect to a database. Is there any clean way to do this with Python and Motor (https://motor.readthedocs.io/; sorry, there's no tag for it)?
The startup event in FastAPI is the deal here, I guess. In addition, this repository is a nice example, and this thread could provide you with even more information. You could execute your tests within the startup event. This means the application won't start until the startup event has been successfully executed.
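A minimal sketch of that idea, assuming the same local MongoDB as above (the retry loop and the one-second delay are illustrative choices, not part of FastAPI or Motor):

import asyncio
from fastapi import FastAPI
from motor.motor_asyncio import AsyncIOMotorClient

app = FastAPI()

@app.on_event("startup")
async def verify_database_connection():
    client = AsyncIOMotorClient("localhost", 27017)
    while True:
        try:
            # server_info() round-trips to the server, so it fails if MongoDB is unreachable
            await client.server_info()
            break
        except Exception:
            await asyncio.sleep(1)  # keep retrying until the database answers
    app.state.client = client

Because the server waits for all startup handlers before serving requests, the app will not accept traffic until the connection check succeeds.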

Multiprocessing with flask sqlalchemy - psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq

I am getting the exception below while trying to use multiprocessing with Flask-SQLAlchemy.
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
[12/Aug/2019 18:09:52] "GET /api/resources HTTP/1.1" 500 -
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1244, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 552, in do_execute
cursor.execute(statement, parameters)
psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq
Without multiprocessing the code works perfectly, but when I add multiprocessing as below, I run into this issue.
worker = multiprocessing.Process(
    target=<target_method_which_has_business_logic_with_DB>,
    args=(data,), name='PROCESS_ID', daemon=False)
worker.start()
return Response("Request Accepted", status=202)
I see an answer to a similar question on SO (https://stackoverflow.com/a/33331954/8085047) that suggests using engine.dispose(), but in my case I am using db.session directly, not creating the engine and scope manually.
Please help to resolve the issue. Thanks!
I had the same issue. Following Sam's link helped me solve it.
Before, I had (not working):
from multiprocessing import Pool

with Pool() as pool:
    pool.map(f, [arg1, arg2, ...])
This works for me:
from multiprocessing import get_context

with get_context("spawn").Pool() as pool:
    pool.map(f, [arg1, arg2, ...])
With "spawn", each worker starts a fresh interpreter instead of forking, so it does not inherit the parent's open database connection.
The answer from dibrovsd#github was really useful for me. If you are using a preforking server like uwsgi or gunicorn, this will also help you. I am posting his comment here for your reference:
Found it. This happens when uwsgi (or gunicorn) starts and multiple workers are forked from the first process.
If there is a request in the first process when it starts, it opens a database connection, and that connection is forked into the next process. But on the database side, of course, no new connection is opened, so the forked worker ends up with a broken connection.
You have to specify lazy: true / lazy-apps: true (uwsgi) or preload_app = False (gunicorn).
In that case, the additional workers do not fork from the first process but start on their own and open their own connections.
Refer to link: https://github.com/psycopg/psycopg2/issues/281#issuecomment-985387977
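For reference, here is a minimal gunicorn config sketch with preloading disabled (the file name gunicorn_conf.py, the worker count, and the app module path in the launch command are illustrative assumptions):

# gunicorn_conf.py
# With preload_app = False each worker imports the app itself, so every
# worker opens its own database connections instead of inheriting a
# forked, broken one from the master process.
preload_app = False
workers = 4
bind = "0.0.0.0:8000"

Launch with: gunicorn -c gunicorn_conf.py app:app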

Batch file creation to execute POST method in Flask API (windows)

I have to create a batch file that calls the following API POST method and executes it seamlessly on Windows. No input needs to be provided to the POST method here.
This is for an ML module that is being called by the API. I've tried calling the module directly through a batch file and the Anaconda prompt, but that doesn't work.
import ClassName1
from flask import Flask

app = Flask(__name__)

@app.route('/api/model/testing', methods=['POST'])
def test_model():
    response = ClassName1.method_name1()
    return response

@app.route('/test')
def post_health():
    return "health"

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=15010, debug=True)
Expected: method_name1 and subsequent methods run and then populate another file, 'Output', created in the parent folder.
Actual: when method_name1 is executed directly from the Anaconda prompt, it throws an ImportError after some time and keeps looping.
Can you please share the exact error stack trace that you get when you call method_name1()? As long as method_name1() returns a string and there is no error inside it, this code should run.
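In the meantime, the POST route can be exercised from a batch file via a short Python helper (a sketch; the script name call_api.py is made up, and the URL simply mirrors the app.run() host and port above):

# call_api.py
import requests

# POST with an empty body, matching the /api/model/testing route above
resp = requests.post("http://127.0.0.1:15010/api/model/testing")
print(resp.status_code, resp.text)

The batch file then needs only a line like: python call_api.py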

I hope someone can improve the python-grpc example of veriblock

VeriBlock has no python-grpc example. The return information may not be available due to encoding problems; I'm not sure. I hope someone can make an example. Thank you very much.
I'm working on a more comprehensive example, but for connecting via gRPC and displaying the current block number and node info, this should get you started.
from __future__ import print_function

import json

import grpc
import veriblock_pb2 as vbk
import veriblock_pb2_grpc as vbkrpc

channel = grpc.insecure_channel('localhost:10500')
stub = vbkrpc.AdminStub(channel)

def GetStateInfoRequest():
    response = stub.GetStateInfo(vbk.GetStateInfoRequest())
    response = json.dumps({"connected_peer_count": response.connected_peer_count,
                           "network_height": response.network_height,
                           "local_blockchain_height": response.local_blockchain_height,
                           "network_version": response.network_version,
                           "program_version": response.program_version,
                           "nodecore_starttime": response.nodecore_starttime,
                           "wallet_cache_sync_height": response.wallet_cache_sync_height})
    print(response)

def getBlock():
    response = stub.GetInfo(vbk.GetInfoRequest())
    response = (response.number_of_blocks - 1)
    print(response)

getBlock()
GetStateInfoRequest()
Hope it helps.
Is there a specific Python question, like calling a function or an API, or expecting a particular output?
VeriBlock NodeCore does support Python via gRPC (https://grpc.io/docs/tutorials/basic/python.html).
FWIW, there is a pre-compiled gRPC output that includes Python, at https://github.com/VeriBlock/nodecore-releases/releases/tag/v0.4.1-grpc, with a python directory containing:
veriblock_pb2.py
veriblock_pb2_grpc.py
There is a C# example here: https://github.com/VeriBlock/VeriBlock.Demo.Rpc.Client (obviously not Python, but maybe useful as a conceptual example).
