def detect_person(input_uri, output_uri):
    """Detects people in a video."""
    client = videointelligence.VideoIntelligenceServiceClient(
        credentials=service_account.Credentials.from_service_account_file('./key.json'))
    # Configure the request
    config = videointelligence.types.PersonDetectionConfig(
        include_bounding_boxes=True,
        include_attributes=True,
        include_pose_landmarks=True,
    )
    context = videointelligence.types.VideoContext(person_detection_config=config)
    # Start the asynchronous request
    operation = client.annotate_video(
        input_uri=input_uri,
        output_uri=output_uri,
        features=[videointelligence.enums.Feature.PERSON_DETECTION],
        video_context=context,
    )
    return operation
I then get an error when running:
operation = detect_person(input_uri, output_uri)
ERROR: AttributeError: module 'google.cloud.videointelligence_v1p3beta1' has no attribute 'enums'
I am attempting to use person detection in the Google Video Intelligence API, but I get this error. For some reason, when I try to run the first code block in Google Colab, nothing happens. I'm very new to this, so I'm unsure what else I can do. Thank you so much! I am trying to follow this tutorial to create my own table tennis shot detection: https://github.com/google/making_with_ml/blob/master/sports_ai/Sports_AI_Analysis.ipynb
Import enums and types before running the code:
from google.cloud import videointelligence
from google.cloud.videointelligence import enums, types
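For reference, here is a minimal sketch of how detect_person could look with those imports in place. It assumes a 1.x release of google-cloud-videointelligence, where enums and types can be imported like this (in the 2.x rewrite those modules were removed and names such as Feature live directly on the package), and depending on the version person detection may only be available through the v1p3beta1 client. The bucket paths are placeholders.
from google.cloud import videointelligence
from google.cloud.videointelligence import enums, types
from google.oauth2 import service_account


def detect_person(input_uri, output_uri):
    """Detects people in a video (sketch; key path and URIs are placeholders)."""
    client = videointelligence.VideoIntelligenceServiceClient(
        credentials=service_account.Credentials.from_service_account_file('./key.json'))
    config = types.PersonDetectionConfig(
        include_bounding_boxes=True,
        include_attributes=True,
        include_pose_landmarks=True,
    )
    context = types.VideoContext(person_detection_config=config)
    return client.annotate_video(
        input_uri=input_uri,
        output_uri=output_uri,
        features=[enums.Feature.PERSON_DETECTION],
        video_context=context,
    )


operation = detect_person('gs://your-bucket/input.mp4', 'gs://your-bucket/output.json')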
I'm working on a Python app that needs to get the NTPSynchronized property from systemd-timedated. I'd also like to be able to start and stop the NTP service by using the SetNTP method.
To communicate with timedated over D-Bus I have been using this as a reference: https://www.freedesktop.org/wiki/Software/systemd/timedated/
I previously got this working with dbus-python, but have since learned that this library has been deprecated. I tried the dbus_next package, but it does not support Python 3.5, which I need.
I came across the pystemd package, but I am unsure if it can be used to do what I want. The only documentation I have been able to find is this example (https://github.com/facebookincubator/pystemd), but I cannot figure out how to use it to work with systemd-timedated.
Here is the code I have that works with dbus-python:
import dbus
BUS_NAME = 'org.freedesktop.timedate1'
IFACE = 'org.freedesktop.timedate1'
bus = dbus.SystemBus()
timedate_obj = bus.get_object(BUS_NAME, '/org/freedesktop/timedate1')
# Get synchronization value
is_sync = timedate_obj.Get(BUS_NAME, 'NTPSynchronized', dbus_interface=dbus.PROPERTIES_IFACE)
# Turn off NTP
timedate_obj.SetNTP(False,False, dbus_interface=IFACE)
Here's what I have so far with pystemd, but I don't think I'm accessing it in the right way:
from pystemd.systemd1 import Unit
unit = Unit(b'systemd-timesyncd.service')
unit.load()
# Try to access properties
prop = unit.Properties
prop.NTPSynchronized
Running that I get:
AttributeError: 'SDInterface' object has no attribute 'NTPSynchronized'
I have a feeling that either the service I entered is wrong, or the way I'm accessing properties is wrong, or even both are wrong.
Any help or advice is appreciated.
Looking at the source code, it appears that the pystemd.systemd1 Unit object uses a default destination of "org.freedesktop.systemd1" plus the service name (https://github.com/facebookincubator/pystemd/blob/master/pystemd/systemd1/unit.py).
That is not what I want, because I am trying to access "org.freedesktop.timedate1".
So instead I instantiated its base class, SDObject, from pystemd.base (https://github.com/facebookincubator/pystemd/blob/master/pystemd/base.py).
The following code allowed me to get the sync status of NTP:
from pystemd.base import SDObject
obj = SDObject(
    destination=b'org.freedesktop.timedate1',
    path=b'/org/freedesktop/timedate1',
    bus=None,
    _autoload=False
)
obj.load()
is_sync = obj.Properties.Get('org.freedesktop.timedate1','NTPSynchronized')
print(is_sync)
Not sure if this is what the library author intended, but hey it works!
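The question also asks about SetNTP. A possible extension of the same approach is sketched below; the obj.timedate1 attribute name is an assumption (pystemd appears to expose each D-Bus interface under the last component of its name), so verify it against your pystemd version, for example with dir(obj) after load().
from pystemd.base import SDObject

obj = SDObject(
    destination=b'org.freedesktop.timedate1',
    path=b'/org/freedesktop/timedate1',
    bus=None,
    _autoload=False
)
obj.load()

# Assumption: the org.freedesktop.timedate1 interface is exposed as obj.timedate1.
# SetNTP(use_ntp, interactive): turn NTP off, matching the dbus-python call above.
obj.timedate1.SetNTP(False, False)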
I need a tutorial on how to create object detection using TensorFlow.js and Node.js, with no browser and no Python. Is this possible? I tried to start doing this but I get a lot of bugs, so I need a guideline to accomplish this.
I started with this:
const tf = require('@tensorflow/tfjs')
const cc = require('@tensorflow-models/coco-ssd');
require('@tensorflow/tfjs-node')
cc.load()
and this gets me this error:
UnhandledPromiseRejectionWarning: TypeError: tfconv.loadGraphModel is not a function
cc.load() returns a promise, so you have to use cc.load().then(model => { ... your code }) or await cc.load(). Read the official docs of coco-ssd for more information.
I'm trying to use the dask.distributed Python API to start a scheduler. The example provided in http://distributed.dask.org/en/latest/setup.html#using-the-python-api works as expected, but it does not provide insight on how to supply the options needed to start the Bokeh web interface.
Upon inspection of the dask.distributed source code, I have understood that I need to provide the Bokeh options using Scheduler(services={}). Unfortunately, I have failed to find the correct dictionary format for services={}.
Below is the code for my dask scheduler function:
import collections
import threading
import tornado
import dask.distributed as daskd

def create_dask_scheduler(scheduler_options_dict):
    # Define and start tornado
    tornado_loop = tornado.ioloop.IOLoop.current()
    tornado_thread = threading.Thread(target=tornado_loop.start, daemon=True)
    tornado_thread.start()
    # Define and start scheduler
    dask_scheduler = daskd.Scheduler(
        loop=tornado_loop,
        synchronize_worker_interval=scheduler_options_dict['synchronize_worker_interval'],
        allowed_failures=scheduler_options_dict['allowed_failures'],
        services=scheduler_options_dict['services'],
    )
    dask_scheduler.start('tcp://:8786')
    return dask_scheduler

scheduler_options_dict = collections.OrderedDict()
scheduler_options_dict = {'synchronize_worker_interval': 60, 'allowed_failures': 3, 'services': {('http://hpcsrv', 8787): 8787}}
dask_scheduler = create_dask_scheduler(scheduler_options_dict)
The error I get is:
Exception in thread Thread-4: Traceback (most recent call last):
/uf5a/nbobolea/bin/anaconda2019.03_python3.7/envs/optimization/lib/python3.7/site-packages/ipykernel_launcher.py:18:
UserWarning: Could not launch service 'http://hpcsrv' on port 8787.
Got the following message: 'int' object is not callable
distributed.scheduler - INFO - Scheduler at:
tcp://xxx.xxx.xxx.xxx:8786
Help and insight are very much appreciated.
You want
'services': {('bokeh', dashboard_address): (BokehScheduler, {})}
where dashboard_address is something like "localhost:8787" and BokehScheduler lives in distributed.bokeh.scheduler. You will need to read up on the Bokeh server to see what additional kwargs could be passed in that empty dictionary.
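Applied to the code in the question, the options dictionary could then look roughly like the sketch below. It assumes an older distributed release in which the dashboard is provided by distributed.bokeh.scheduler.BokehScheduler (newer releases ship the dashboard differently) and uses plain port 8787 as the dashboard address.
from distributed.bokeh.scheduler import BokehScheduler

scheduler_options_dict = {
    'synchronize_worker_interval': 60,
    'allowed_failures': 3,
    # (service name, address) -> (service class, extra kwargs for that class)
    'services': {('bokeh', 8787): (BokehScheduler, {})},
}
dask_scheduler = create_dask_scheduler(scheduler_options_dict)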
The following example does not work for me in Node.js using the 'gremlin' 3.4.1 npm package:
g.V().has('person','name','bill').tryNext().orElseGet{g.addV('person').property('name','bill').next()}
I am getting a TypeError saying tryNext() is not a function. What am I doing wrong?
import {driver, structure} from 'gremlin';
import DriverRemoteConnection = driver.DriverRemoteConnection;
import Graph = structure.Graph;
const g = new Graph().traversal().withRemote(new DriverRemoteConnection('ws://localhost:8182/gremlin'));
console.log(g.V().toList()); // <= working
Using the line from above in that code does not work, though it does work in the Gremlin console.
You're trying to call a function that doesn't exist, as stated in the Gremlin docs, to wit:
tryNext() will return an Optional and thus, is a composite of hasNext()/next() (only supported for JVM languages).
http://tinkerpop.apache.org/docs/current/reference/#terminal-steps
Caveat: Never used TinkerPop, never used Gremlin. But I know how to use the web. Could be this is wrong, but the docs do seem fairly clear.
Since tryNext() is not supported in gremlin-javascript, as Dave mentioned, you can rewrite your query to do the same thing with other Gremlin steps:
g.V().has('person','name','bill').fold().coalesce(unfold(), addV('person').property('name','bill')).next()
In gremlin-javascript, the unfold and addV used inside coalesce come from the anonymous traversal source, gremlin.process.statics.
VeriBlock has no Python gRPC example. The return information may not be available due to coding problems; I'm not sure. I hope someone can make an example. Thank you very much.
I'm working on a more comprehensive example, but for connecting via gRPC and displaying the current block number and node info, this should get you started.
from __future__ import print_function
import json
import grpc
import veriblock_pb2 as vbk
import veriblock_pb2_grpc as vbkrpc
channel = grpc.insecure_channel('localhost:10500')
stub = vbkrpc.AdminStub(channel)
def GetStateInfoRequest():
    response = stub.GetStateInfo(vbk.GetStateInfoRequest())
    response = json.dumps({"connected_peer_count": response.connected_peer_count,
                           "network_height": response.network_height,
                           "local_blockchain_height": response.local_blockchain_height,
                           "network_version": response.network_version,
                           "program_version": response.program_version,
                           "nodecore_starttime": response.nodecore_starttime,
                           "wallet_cache_sync_height": response.wallet_cache_sync_height})
    print(response)

def getBlock():
    response = stub.GetInfo(vbk.GetInfoRequest())
    response = (response.number_of_blocks - 1)
    print(response)

getBlock()
GetStateInfoRequest()
Hope it helps.
Is there a specific Python question, like calling a function or API, or expecting output?
VeriBlock NodeCore does support Python, via gRPC (https://grpc.io/docs/tutorials/basic/python.html).
FWIW, there is a pre-compiled output for gRPC that includes Python:
https://github.com/VeriBlock/nodecore-releases/releases/tag/v0.4.1-grpc
The release's python directory contains veriblock_pb2.py and veriblock_pb2_grpc.py.
There is a C# example here: https://github.com/VeriBlock/VeriBlock.Demo.Rpc.Client (obviously not Python, but maybe useful as a conceptual example).