I hope someone can improve the python-grpc example of VeriBlock

VeriBlock has no python-grpc example. The return information may be unavailable because of a coding problem on my side; I'm not sure. I hope someone can provide an example. Thank you very much.

I'm working on a more comprehensive example, but for connecting via gRPC and displaying the current block number and node info, this should get you started.
from __future__ import print_function
import json

import grpc

import veriblock_pb2 as vbk
import veriblock_pb2_grpc as vbkrpc

# connect to the NodeCore gRPC endpoint (default port 10500)
channel = grpc.insecure_channel('localhost:10500')
stub = vbkrpc.AdminStub(channel)

def GetStateInfoRequest():
    response = stub.GetStateInfo(vbk.GetStateInfoRequest())
    response = json.dumps({"connected_peer_count": response.connected_peer_count,
                           "network_height": response.network_height,
                           "local_blockchain_height": response.local_blockchain_height,
                           "network_version": response.network_version,
                           "program_version": response.program_version,
                           "nodecore_starttime": response.nodecore_starttime,
                           "wallet_cache_sync_height": response.wallet_cache_sync_height})
    print(response)

def getBlock():
    response = stub.GetInfo(vbk.GetInfoRequest())
    response = (response.number_of_blocks - 1)
    print(response)

getBlock()
GetStateInfoRequest()
Hope it helps.

Is there a specific Python question, such as how to call a function or an API, or what output to expect?
VeriBlock NodeCore does support Python via gRPC (https://grpc.io/docs/tutorials/basic/python.html).
FWIW, there is pre-compiled gRPC output that includes Python:
https://github.com/VeriBlock/nodecore-releases/releases/tag/v0.4.1-grpc
python/
    veriblock_pb2.py
    veriblock_pb2_grpc.py
There is a C# example here: https://github.com/VeriBlock/VeriBlock.Demo.Rpc.Client (obviously not python, but maybe useful as a conceptual example)

Related

Using person detection in Google Colab with Google AI

def detect_person(input_uri, output_uri):
    """Detects people in a video."""
    client = videointelligence.VideoIntelligenceServiceClient(
        credentials=service_account.Credentials.from_service_account_file('./key.json'))
    # Configure the request
    config = videointelligence.types.PersonDetectionConfig(
        include_bounding_boxes=True,
        include_attributes=True,
        include_pose_landmarks=True,
    )
    context = videointelligence.types.VideoContext(person_detection_config=config)
    # Start the asynchronous request
    operation = client.annotate_video(
        input_uri=input_uri,
        output_uri=output_uri,
        features=[videointelligence.enums.Feature.PERSON_DETECTION],
        video_context=context,
    )
    return operation
I then get an error when running:
operation = detect_person(input_uri, output_uri)
ERROR: AttributeError: module 'google.cloud.videointelligence_v1p3beta1' has no attribute 'enums'
I'm attempting to use person detection in the Google API, but I get this error. For some reason, when I try to run the first code cell in Google Colab, nothing happens. I'm very new to this, so I'm unsure what else I can do. I am trying to follow this tutorial to create my own table tennis shot detection: https://github.com/google/making_with_ml/blob/master/sports_ai/Sports_AI_Analysis.ipynb Thank you so much!
Import enums before running the code.
from google.cloud import videointelligence
from google.cloud.videointelligence import enums, types

how to properly call a REST-API with an * in the URL

I searched the internet (and Stack Overflow :D) for an answer to the following question, and found none that I understood.
Background:
We want to use a Python script to connect our company's CMDB with our AWX/Ansible infrastructure. The CMDB has a REST API which supports a (halfway) proper export.
I'm currently stuck on the implementation of the correct API call. I can call the API itself and authenticate, but I can't apply the proper filter to get the results I need. The filter is realized by having the following string within the URL (more in the attached code example):
Label LIKE "host*"
It seems that Python has a problem with the *.
Error message:
InvalidURL(f"URL can't contain control characters. {url!r} "
I found some bug reports saying there is an issue in some Python versions, but I'm way too new to understand properly whether this affects me here :D
Python version used: 3.7.4
PS: let's see if I can get the markup right :D
I switched the called URL to determine where exactly the problem occurs. It only occurs when I use the SQL-like filter part. This part is essential, since I just want our "hosts" to be returned and not the whole CMDB.
# import the required classes
from http.client import HTTPConnection
import json

# create an HTTP connection client
client = HTTPConnection("cmdb.example.company")

# basic auth and some header details
headers = {'Content-Type': 'application/json',
           'Authorization': 'Basic my-auth-token'}

# working API call
client.request('GET', '/cmdb/rest/hosts?attributes=Label,Keywords,Tag,Description&limit=10', headers=headers)

# broken API call - raises InvalidURL(f"URL can't contain control characters. {url!r} "
client.request('GET', '/cmdb/rest/hosts?filter=Label LIKE "host*"&attributes=Label,Keywords,Tag,Description&limit=10', headers=headers)

# check and convert the response into a readable (JSON) format
response = client.getresponse()
data = response.read()

# debugging print - shows that the returned data is bytes
print(data)

# convert the returned data into json
my_json = data.decode('utf8').replace("'", '"')
data = json.loads(my_json)

# only return the data part from the JSON and ignore the meta-overhead
text = json.dumps(data["data"], sort_keys=True, indent=4)
print(text)
So, I want to know how to properly call the API with the described filter and resolve the displayed error. Can you give me an example I can try, or pinpoint a beginner's mistake I made? Am I affected by the mentioned Python bug regarding URLs with * in them?
Thanks for helping me out :)
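For completeness: the failure can be reproduced without any network access, since http.client validates the request path before it ever opens a connection. A minimal sketch (the host name is never resolved):

```python
from http.client import HTTPConnection, InvalidURL

client = HTTPConnection("cmdb.example.company")
try:
    # the unencoded spaces in the filter trip the control-character check
    client.request('GET', '/cmdb/rest/hosts?filter=Label LIKE "host*"')
except InvalidURL as err:
    print(err)  # URL can't contain control characters. ...
```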
Soooo, I found my beginner's mistake myself:
I used the URL from my browser, and my browser automatically encodes the special characters within the URL.
I found the following piece of code in a Python 3 URL-encoding guide and modified the string to fit my needs :)
import urllib.parse
query = ' "host*"'
urllib.parse.quote(query)
Result: '%20%22host%2A%22'
%20 = space
%22 = double quote (")
%2A = asterisk (*)
So the final code looks somewhat like this:
# broken API call - raises InvalidURL(f"URL can't contain control characters. {url!r} "
client.request('GET', '/cmdb/rest/hosts?filter=Label LIKE "host*"&attributes=Label,Keywords,Tag,Description&limit=10', headers=headers)
# the offending part: filter=Label LIKE "host*"

# fixed API call
client.request('GET', '/cmdb/rest/hosts?filter=Label%20LIKE%20%22host%2A%22&attributes=Label,Keywords,Tag,Description&limit=10', headers=headers)
# the encoded part: filter=Label%20LIKE%20%22host%2A%22
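Rather than encoding by hand, the whole query string can also be built with urllib.parse.urlencode, which escapes each parameter automatically. A sketch based on the parameters above (safe=',' keeps the commas in the attribute list readable):

```python
from urllib.parse import urlencode, quote

params = {
    'filter': 'Label LIKE "host*"',
    'attributes': 'Label,Keywords,Tag,Description',
    'limit': 10,
}
# quote_via=quote uses %20 for spaces (the default quote_plus would use '+');
# safe=',' leaves the commas in the attribute list unescaped
query = urlencode(params, quote_via=quote, safe=',')
print('/cmdb/rest/hosts?' + query)
# -> /cmdb/rest/hosts?filter=Label%20LIKE%20%22host%2A%22&attributes=Label,Keywords,Tag,Description&limit=10
```

The resulting path can then be passed to client.request() as before.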

How to use dask.distributed API to specify the options for starting Bokeh web interface?

I'm trying to use the dask.distributed Python API to start a scheduler. The example provided in http://distributed.dask.org/en/latest/setup.html#using-the-python-api works as expected, but it does not provide insight into how to supply the options needed to start the Bokeh web interface.
Upon inspection of the dask.distributed source code, I understood that I need to provide the Bokeh options using Scheduler(services={}). Unfortunately, I have failed to find the correct dictionary format for services={}.
Below is the code for the dask scheduler function.
import collections
import threading

import tornado.ioloop
import dask.distributed as daskd

def create_dask_scheduler(scheduler_options_dict):
    # Define and start the tornado event loop
    tornado_loop = tornado.ioloop.IOLoop.current()
    tornado_thread = threading.Thread(target=tornado_loop.start, daemon=True)
    tornado_thread.start()
    # Define and start the scheduler
    dask_scheduler = daskd.Scheduler(loop=tornado_loop,
                                     synchronize_worker_interval=scheduler_options_dict['synchronize_worker_interval'],
                                     allowed_failures=scheduler_options_dict['allowed_failures'],
                                     services=scheduler_options_dict['services'])
    dask_scheduler.start('tcp://:8786')
    return dask_scheduler

scheduler_options_dict = collections.OrderedDict()
scheduler_options_dict = {'synchronize_worker_interval': 60,
                          'allowed_failures': 3,
                          'services': {('http://hpcsrv', 8787): 8787}}
dask_scheduler = create_dask_scheduler(scheduler_options_dict)
The error I get is:
Exception in thread Thread-4: Traceback (most recent call last):
/uf5a/nbobolea/bin/anaconda2019.03_python3.7/envs/optimization/lib/python3.7/site-packages/ipykernel_launcher.py:18:
UserWarning: Could not launch service 'http://hpcsrv' on port 8787.
Got the following message: 'int' object is not callable
distributed.scheduler - INFO - Scheduler at: tcp://xxx.xxx.xxx.xxx:8786
Help and insight is very appreciated.
You want
'services': {('bokeh', dashboard_address): (BokehScheduler, {})}
where dashboard_address is something like "localhost:8787" and BokehScheduler lives in distributed.bokeh.scheduler. You will need to read up on the Bokeh server to see what additional kwargs could be passed in that empty dictionary.

How do I use tryNext() in Gremlin with Node.js?

The following example does not work for me in Node.js using the 'gremlin' 3.4.1 npm package:
g.V().has('person','name','bill').tryNext().orElseGet{g.addV('person').property('name','bill').next()}
I am getting a TypeError saying tryNext() is not a function. What am I doing wrong?
import {driver, structure} from 'gremlin';
import DriverRemoteConnection = driver.DriverRemoteConnection;
import Graph = structure.Graph;

const g = new Graph().traversal().withRemote(new DriverRemoteConnection('ws://localhost:8182/gremlin'));
console.log(g.V().toList()); // <= working
Using the line from above in that code does not work, although it does work in the Gremlin console.
You are trying to call a function that doesn't exist, which appears to be as stated in the Gremlin docs, to wit:
tryNext() will return an Optional and thus, is a composite of hasNext()/next() (only supported for JVM languages).
http://tinkerpop.apache.org/docs/current/reference/#terminal-steps
Caveat: I've never used TinkerPop or Gremlin, but I know how to use the web. This could be wrong, but the docs do seem fairly clear.
Since tryNext() is not supported, as Dave mentioned, you can rewrite your query to do the same thing with other Gremlin steps:
g.V().has('person','name','bill').fold().coalesce(unfold(), addV('person').property('name','bill')).next()
(unfold() and addV() here are anonymous traversal steps; in the gremlin npm package they come from gremlin.process.statics)

urllib3 debug request header

I'm using urllib3 and I want to see the headers that are sent.
I found this in the documentation, but it doesn't print the headers:
urllib3.add_stderr_logger(1)
Is there any way of doing this?
Right now, the best way to achieve really verbose logging that includes the headers sent by urllib3 is to override the default debug level in httplib (which urllib3 uses internally).
For Python 3:
# You'll need to do this before urllib3 creates any http connection objects
import http.client
http.client.HTTPConnection.debuglevel = 5
# Now you can use urllib3 as normal
import urllib3
http = urllib3.PoolManager()
r = http.request('GET', ...)
In Python 2, the equivalent class lives in the httplib module (httplib.HTTPConnection.debuglevel).
This will turn on verbose logging for anything that uses httplib. Note that this does not use a documented httplib API; it monkey-patches the default value on the HTTPConnection class.
The goal is to add better urllib3-native logging for these kinds of things, but it hasn't been implemented yet. Related issue: https://github.com/shazow/urllib3/issues/107
