How to connect a sink to an external waveform port in REDHAWK?

I'm trying to write a unit test for a REDHAWK waveform. I would like to use stream sources to input data and stream/message sinks to store the output. I have written unit tests for components this way, but wanted to create a test for a waveform as well. I found a solution for connecting a StreamSource to a waveform's port, but have not been able to determine how to connect a sink to a waveform port.
For a source and a component (where self.comp is the component), normally one can use the following to connect them:
src = StreamSource(streamId='strm1', format='short')
src.connect(providesComponent=self.comp,
            providesPortName='dataShort_in',
            connectionId='testConn')
For a source and a waveform (where self.app is the waveform), I was able to get the following to work:
src = StreamSource(streamId='strm1', format='short')
src.connect(providesComponent=CorbaObject(self.app.getPort('dataShort_in')),
            connectionId='testConn')
However, for a sink I would normally call connect on the component:
sink = StreamSink('short')
self.comp.connect(sink, usesPortName='dataShort_out')
I tried to use a similar approach as for the source case by getting the port from the waveform as below:
sink = StreamSink('short')
self.app.getPort('dataShort_out').connectPort(sink, 'outputConn')
However, this gives the error:
File "/usr/local/redhawk/core/lib/python/ossie/cf/Port_idl.py", line 86, in connectPort
return self._obj.invoke("connectPort", _0_CF.Port._d_connectPort, args)
BAD_PARAM: CORBA.BAD_PARAM(omniORB.BAD_PARAM_WrongPythonType, CORBA.COMPLETED_NO, ["Expecting object reference, got <class 'bulkio.sandbox.streamsink.StreamSink'>", "Operation 'connectPort' parameter 0"])
I am not sure how I can get a CORBA obj ref for the sink to use here. Or is there another approach I can use here to connect the port to the sink?
I am using REDHAWK 2.2.2 on CentOS 7.

I think I have found a solution to my own question. I ended up creating a new class that manages port connections and works for both sinks and sources. I called it ConnectionManager (hopefully it won't be confused with the ossie.utils.model.connection.ConnectionManager class).
class ConnectionManager:
    def __init__(self):
        self.connections = list()

    def clear(self):
        del self.connections[:]

    def connect(self, usesPort, providesPort, id):
        usesPort.connectPort(providesPort, id)
        self.connections.append((usesPort, id))

    def disconnectAll(self):
        for port, id in self.connections:
            port.disconnectPort(id)
        self.clear()
Here's an example using a StreamSource (self.cm is a ConnectionManager):
strm = sb.StreamSource(streamID='strm1', format='short')
self.cm.connect(strm.getPort('shortOut'),
                self.app.getPort('dataShort_in'),
                'connID')
And an example using a StreamSink:
sink = sb.StreamSink('short')
self.cm.connect(self.app.getPort('dataShort_out'),
                sink.getPort('shortIn'),
                'conn2ID')
My unit test setUp method has a call to self.cm.clear() and the tearDown method a call to self.cm.disconnectAll() to clean up the connections after each test.
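For reference, a minimal sketch of how this fits together (the test class name is a placeholder, and launching/attaching the waveform depends on your test harness):
import unittest

class WaveformTest(unittest.TestCase):
    def setUp(self):
        self.cm = ConnectionManager()
        self.cm.clear()
        # launch or attach the waveform under test here and assign it to self.app

    def tearDown(self):
        # break every connection made during the test
        self.cm.disconnectAll()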
The only thing I don't understand is the names of the ports for the sink and source classes. Using the {format}{In|Out} names works, but I don't know why.

The same process that you applied for connecting a component to a sink applies to an application, as long as the application is a sandbox object rather than a CORBA one:
dom = redhawk.attach()
app = dom.apps[0]
sink = sb.StreamSink('short')
app.connect(sink)
The following code shows the names of the ports. In this case, there is just one, of type short.
from pprint import pprint
pprint(sink._providesPortDict)
The code below shows the syntax for using a CORBA reference instead of a sandbox object.
sink_port = sink.getPort('shortIn')
ref = app.ref
ref.getPort('dataShort_out').connectPort(sink_port, 'outputConn')
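When connecting at the CORBA level like this, remember to break the connection yourself when finished; disconnectPort is part of the same CF::Port interface, and the connection ID must match the one passed to connectPort:
ref.getPort('dataShort_out').disconnectPort('outputConn')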
You can run a waveform in the sandbox. Note that the waveform's components need to run on the local host.
Use the nodeBooter shell command or kickDomain from the redhawk Python package to start a domain manager and a device manager.
Sample code to run a waveform in the sandbox:
import os
from ossie.utils import redhawk, sb
dom = redhawk.attach()
SDRROOT = os.getenv('SDRROOT')
waveform_dir = os.path.join(SDRROOT, 'dom', 'waveforms')
waveform_name = os.listdir(waveform_dir)[0]
app = dom.createApplication(waveform_name)
sink = sb.StreamSink()
app.connect(sink)
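To push data through, start the sandbox and read from the sink. A hedged sketch (the read method and its timeout argument are my recollection of the 2.2 sandbox StreamSink API; check your release):
sb.start()
streamData = sink.read(timeout=5.0)  # blocks until the stream closes or the timeout elapses
if streamData:
    print(streamData.streamID, len(streamData.data))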

Related

How to lock virtualbox to get a screenshot through SOAP API

I'm trying to use the SOAP interface of Virtualbox 6.1 from Python to get a screenshot of a machine. I can start the machine but get locking errors whenever I try to retrieve the screen layout.
This is the code:
import zeep

# helper to show the session lock status
def show_lock_state(session_id):
    session_state = service.ISession_getState(session_id)
    print('current session state:', session_state)

# connect
client = zeep.Client('http://127.0.0.1:18083?wsdl')
service = client.create_service("{http://www.virtualbox.org/}vboxBinding", 'http://127.0.0.1:18083?wsdl')
manager_id = service.IWebsessionManager_logon('fakeuser', 'fakepassword')
session_id = service.IWebsessionManager_getSessionObject(manager_id)

# get the machine id and start it
machine_id = service.IVirtualBox_findMachine(manager_id, 'Debian')
progress_id = service.IMachine_launchVMProcess(machine_id, session_id, 'gui')
service.IProgress_waitForCompletion(progress_id, -1)
print('Machine has been started!')
show_lock_state(session_id)

# unlock and then lock to be sure; doesn't have any effect apparently
service.ISession_unlockMachine(session_id)
service.IMachine_lockMachine(machine_id, session_id, 'Shared')
show_lock_state(session_id)

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
print(service.IDisplay_getGuestScreenLayout(display_id))
The machine is started properly but the last line gives the error VirtualBox error: rc=0x80004001 which from what I read around means locked session.
I tried to release and acquire the lock again, but even though it succeeds the error remains. I went through the documentation but cannot find other types of locks that I'm supposed to use, except the Write lock which is not usable here since the machine is running. I could not find any example in any language.
I found an Android app called VBoxManager with this SOAP screenshot capability.
Running it through a MITM proxy I reconstructed the calls it performs and wrote them as the Zeep equivalent. In case anyone is interested in the future, the last lines of the above script are now:
import base64

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
resolution = service.IDisplay_getScreenResolution(display_id, 0)
print(f'display data: {resolution}')
image_data = service.IDisplay_takeScreenShotToArray(
    display_id,
    0,
    resolution['width'],
    resolution['height'],
    'PNG')
with open('screenshot.png', 'wb') as f:
    f.write(base64.b64decode(image_data))
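For completeness, a hedged cleanup when you're finished (logoff is part of the IWebsessionManager interface; whether unlocking first is required depends on the lock state):
service.ISession_unlockMachine(session_id)
service.IWebsessionManager_logoff(manager_id)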

Python - Keras API server to process image on trained custom model

I've followed this tutorial to create a custom-object YOLOv3 Keras model:
https://momoky.space/pythonlessons/YOLOv3-object-detection-tutorial/tree/master/YOLOv3-custom-training
The model works perfectly fine; my next goal is to create a Python Flask API which is capable of processing an image after it is uploaded.
I've started by modifying the code here for image detection.
That's my added code:
@app.route('/api/test', methods=['POST'])
def main():
    img = request.files["image"].read()
    img = Image.open(io.BytesIO(img))
    npimg = np.array(img)
    image = npimg.copy()
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    #cv2.imshow("Image", image)
    #cv2.waitKey()
    cv2.imwrite('c:\\yolo\\temp.jpg', image)
    image = 'c:\\yolo\\temp.jpg'
    yolo = YOLO()
    r_image, ObjectsList = yolo.detect_img(image)
    #response = {ObjectsList}
    response_pikled = jsonpickle.encode(ObjectsList)
    #yolo.close_session()
    return Response(response=response_pikled, status=200, mimetype="application/json")

app.run(host="localhost", port=5000)
So my problem is that it works only on the first iteration; when I upload a new image I receive the following error:
File "C:\Users\xxx\Anaconda3\envs\yolo\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\xxx\Anaconda3\envs\yolo\lib\site-packages\tensorflow\python\client\session.py", line 1095, in _run
'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder:0", shape=(3, 3, 3, 32), dtype=float32) is not an element of this graph.
This is the original static part of the code:
if __name__ == "__main__":
    yolo = YOLO()
    image = 'test.png'
    r_image, ObjectsList = yolo.detect_img(image)
    print(ObjectsList)
    #cv2.imshow(image, r_image)
    cv2.imwrite('detect.png', r_image)
    yolo.close_session()
The thing that really confuses me is how to load the model when the application starts, and execute detection every time a new image is posted.
Thank you
UPDATE
In the constructor, there's a reference to the Keras backend session:
def __init__(self, **kwargs):
    self.__dict__.update(self._defaults) # set up default values
    self.__dict__.update(kwargs) # and update with user overrides
    self.class_names = self._get_class()
    self.anchors = self._get_anchors()
    self.sess = K.get_session()
    self.boxes, self.scores, self.classes = self.generate()
After adding K.clear_session(), it works for multiple successive requests:
@app.route('/api/test', methods=['POST'])
def main():
    img = request.files["image"].read()
    img = Image.open(io.BytesIO(img))
    npimg = np.array(img)
    image = npimg.copy()
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    #cv2.imshow("Image", image)
    #cv2.waitKey()
    cv2.imwrite('c:\\yolo\\temp.jpg', image)
    image = 'c:\\yolo\\temp.jpg'
    yolo = YOLO()
    r_image, ObjectsList = yolo.detect_img(image)
    #response = {ObjectsList}
    response_pikled = jsonpickle.encode(ObjectsList)
    #yolo.close_session()
    K.clear_session()
    return Response(response=response_pikled, status=200, mimetype="application/json")
Would it be possible to avoid the model, anchors, and classes being loaded at every request, avoiding this:
logs/000/trained_weights_final.h5 model, anchors, and classes loaded.
127.0.0.1 - - [27/Dec/2019 22:58:49] "POST /api/test HTTP/1.1" 200 -
logs/000/trained_weights_final.h5 model, anchors, and classes loaded.
127.0.0.1 - - [27/Dec/2019 22:59:08] "POST /api/test HTTP/1.1" 200 -
logs/000/trained_weights_final.h5 model, anchors, and classes loaded.
127.0.0.1 - - [27/Dec/2019 22:59:33] "POST /api/test HTTP/1.1" 200 -
I've managed to get this up and running as a prototype. I've uploaded a repo: vulcan25/image_processor which implements this all.
The first thing I investigated was the functionality of the method YOLO.detect_img in that code from the tutorial. This method takes a filename, which is immediately handled by cv2.imread in the original code: #L152-L153. The returned data from this is then processed internally by self.detect_image (note the difference in spelling) and the result displayed with cv2.imshow.
This behaviour isn't good for a webapp and I wanted to keep everything in memory, so I figured the best way to change that functionality was to subclass YOLO and override the detect_img method, making it behave differently. So in processor/my_yolo.py I do something like:
from image_detect import YOLO as stock_yolo

class custom_yolo(stock_yolo):
    def detect_img(self, input_stream):
        image = cv2.imdecode(numpy.fromstring(input_stream, numpy.uint8), 1)
        original_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        original_image_color = cv2.cvtColor(original_image, cv2.COLOR_BGR2RGB)
        r_image, ObjectsList = self.detect_image(original_image_color)
        is_success, output_stream = cv2.imencode(".jpg", r_image)
        return is_success, output_stream
Note: In a later decision, I pulled image_detect.py into my repo to append K.clear_session(). It would have been possible to put the above mod in that file also, but I've stuck with subclassing for that part.
This accepts a stream, then uses cv2.imencode (source) and cv2.imdecode (source) in place of imshow and imread respectively.
We can now define a single function which will in turn run all the image processing stuff. This separates that part of the code (and its dependencies) from your flask app, which is good:
yolo = custom_yolo() # create an object from the custom class defined above.

def process(input_stream):
    start_time = time.time()
    is_success, output_stream = yolo.detect_img(input_stream)
    io_buf = io.BytesIO(output_stream)
    print("--- %s seconds ---" % (time.time() - start_time))
    return is_success, io_buf.read()
From Flask we can call this in the same way, where we already have the stream of the uploaded file available to us as: request.files['file'].read() (which is actually a method of the werkzeug.datastructures.FileStorage object, as I've documented elsewhere).
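A sketch of that call site (the route and response type here are assumptions, not the repo's exact code):
from flask import Flask, request, Response
from my_yolo import process

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    # run detection on the uploaded bytes entirely in memory
    is_success, data = process(request.files['file'].read())
    return Response(data, mimetype='image/jpeg')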
As a side-note, this function could be run from the terminal. If you're launching my repo you'd need to do this in the processor container (see the docker exec syntax at the end of my README)...
from my_yolo import process

with open('image.jpg', 'rb') as f:
    is_success, processed_data = process(f.read())
Then the result written to a file:
with open('processed_image.jpg', 'wb') as f:
    f.write(processed_data)
Note that my repo actually has two separate flask apps (based on another upload script I put together which implements dropzone.js on the frontend).
I can run in two modes:
processor/src/app.py: Accessible on port 5001, this runs process directly (incoming requests block until the processed data is returned).
flask/src/app.py: Accessible on port 5000, this creates an rq job for each incoming request, the processor container then runs as a worker to process these requests from the queue.
Each app has its own index.html which does its own unique implementation on the frontend. Mode (1) writes images straight to the page, mode (2) adds a link to the page, which when clicked leads to a separate endpoint that serves the image (when processed).
The major difference is how process is invoked. With mode (1) processor/src/app.py:
from my_yolo import process

if file and allowed_file(file.filename):
    # process the upload immediately
    input_data = file.read()
    complete, data = process(input_data)
As mentioned in a comment, I was seeing pretty fast conversions with this mode: ~1.6s per image on CPU. This script also uses a redis set to maintain a list of uploaded files, which can be used for validation on the view endpoint further down.
In mode (2) flask/src/app.py:
from qu import img_enqueue

if file and allowed_file(file.filename):
    input_data = file.read()
    job = img_enqueue(input_data)
    return jsonify({'url': url_for('view', job_id=job.id)})
I've implemented a separate file flask/src/qu.py which implements this img_enqueue function, which ultimately loads the process function from flask/src/my_yolo.py where it is defined as:
def process(data): pass
This is an important distinction. Normally with rq the contents of this function would be defined in the same codebase as the flask service. In fact, I've actually put the business logic in processor/src/my_yolo.py, which allows us to detach the container with the image processing dependencies and ultimately host it somewhere else, as long as it shares a redis connection with the flask service. A minimal sketch of what flask/src/qu.py might look like is shown below.
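(In this sketch the redis hostname is an assumption from the docker setup; the queue name is rq's default.)
from redis import Redis
from rq import Queue
from my_yolo import process  # the stub; the worker imports the real implementation

q = Queue(connection=Redis(host='redis'))

def img_enqueue(input_data):
    # enqueue the raw image bytes; a worker picks the job up and runs process()
    return q.enqueue(process, input_data)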
Please have a look at the code in the repo for further info, and feel free to log an issue against that repo with any further queries (or if you get stuck). Be aware I may introduce breaking changes, so you may wish to fork.
I've tried to keep this pretty simple. In theory this could be edited slightly to support a different processor/Dockerfile which handles any processing workload, while keeping the same frontend, allowing you to submit any type of data from a stream: images, CSV, other text, etc.
The thing that really confuses me is how to load the model when the application starts, and execute detection every time a new image is posted. Thank you
You'll notice that when you run in mode (1) this works well: the dependencies load when the flask server boots (~17 s) and individual image processing takes ~1 s. This is ideal, although it probably leads to higher overall memory usage on the server, as each WSGI worker requires all the dependencies loaded.
When run in mode (2), where processing is passed to rq workers, the libraries are loaded each time an image is processed, so it's much slower. I will try to fix that; I just need to investigate how to pre-load the libraries in the rq deployment. I was close to this before, but that was around the time I stumbled on the K.clear_session() problem, so I haven't had time to retest a fix for this (yet).
Inside the YOLO constructor, try adding this:
from keras import backend as K
K.clear_session()
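If the goal is to load the model only once at startup, the classic TF1/Keras pattern (a sketch using the names from the snippets above; not tested against that exact repo) is to create the instance at module level and pin its graph for use inside request handlers:
import tensorflow as tf

yolo = YOLO()                   # load weights once, when the server boots
graph = tf.get_default_graph()  # remember the graph the model was built in

@app.route('/api/test', methods=['POST'])
def main():
    ...  # decode the uploaded image as before
    with graph.as_default():    # re-enter the model's graph in the request thread
        r_image, ObjectsList = yolo.detect_img(image)
    ...  # serialize and return the response as before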

How to use pystemd to control systemd timedated ntp service?

I'm working on a Python app that needs to get the NTPSynchronized parameter from systemd-timedated. I'd also like to be able to start and stop the NTP service by using the SetNTP method.
To communicate with timedated over d-bus I have been using this as reference: https://www.freedesktop.org/wiki/Software/systemd/timedated/
I previously got this working with dbus-python, but have since learned that this library has been deprecated. I tried the dbus_next package, but that does not have support for Python 3.5, which I need.
I came across the pystemd package, but I am unsure if it can be used to do what I want. The only documentation I have been able to find is this example (https://github.com/facebookincubator/pystemd), but I cannot figure out how to use it to work with systemd-timedated.
Here is the code I have that works with dbus-python:
import dbus

BUS_NAME = 'org.freedesktop.timedate1'
IFACE = 'org.freedesktop.timedate1'

bus = dbus.SystemBus()
timedate_obj = bus.get_object(BUS_NAME, '/org/freedesktop/timedate1')

# Get synchronization value
is_sync = timedate_obj.Get(BUS_NAME, 'NTPSynchronized', dbus_interface=dbus.PROPERTIES_IFACE)

# Turn off NTP
timedate_obj.SetNTP(False, False, dbus_interface=IFACE)
Here's what I have so far with pystemd, but I don't think I'm accessing it in the right way:
from pystemd.systemd1 import Unit

unit = Unit(b'systemd-timesyncd.service')
unit.load()

# Try to access properties
prop = unit.Properties
prop.NTPSynchronized
Running that I get:
AttributeError: 'SDInterface' object has no attribute 'NTPSynchronized'
I have a feeling that either the service I entered is wrong, or the way I'm accessing properties is wrong, or even both are wrong.
Any help or advice is appreciated.
Looking at the source code, it appears that the pystemd.systemd1 Unit object has a default destination of "org.freedesktop.systemd1" plus the service name (https://github.com/facebookincubator/pystemd/blob/master/pystemd/systemd1/unit.py).
This is not what I want because I am trying to access "org.freedesktop.timedate1"
So instead I instantiated its base class SDObject from pystemd.base (https://github.com/facebookincubator/pystemd/blob/master/pystemd/base.py).
The following code allowed me to get the sync status of NTP:
from pystemd.base import SDObject

obj = SDObject(
    destination=b'org.freedesktop.timedate1',
    path=b'/org/freedesktop/timedate1',
    bus=None,
    _autoload=False
)
obj.load()
is_sync = obj.Properties.Get('org.freedesktop.timedate1', 'NTPSynchronized')
print(is_sync)
Not sure if this is what the library author intended, but hey it works!
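As for SetNTP, a hedged guess based on the same pattern (I haven't verified this; pystemd appears to expose each D-Bus interface by the final component of its name, the way Properties resolves above):
obj.timedate1.SetNTP(False, False)  # (use_ntp, interactive) -- the attribute path is an assumption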

Getting Data from PolarH10 via BLE

I have been trying to get data from my PolarH10 with my Raspberry Pi. I have been successfully getting data through the command line with bluez, but have been unable to reproduce that in Python. I am using pygatt (gatttool bindings) and Python 3.
I have been closely following the examples provided on Bitbucket and was able to detect my device and filter out its MAC address by name. However, I was unable to get either of the "reading data asynchronously" examples to work.
# This doesn't work...
req = gattlib.GATTRequester(mymac)
response = gattlib.GATTResponse()
req.read_by_handle_async(0x15, response) # what does the 0x15 mean?
while not response.received():
    time.sleep(0.1)
steps = response.received()[0]
...
# This doesn't work either
class NotifyYourName(gattlib.GATTResponse):
    def on_response(self, data):
        print("your data is: {}".format(data))

response = NotifyYourName()
req = gattlib.GATTRequester(mymac)
req.read_by_handle_async(0x15, response)
while True:
    # here, do other interesting things
    time.sleep(1)
I don't know, and cannot extract from the documentation, how to subscribe to/read notifications from a characteristic (heart rate) of my sensor (PolarH10). The error I am getting when calling GATTRequester.connect(True) is:
RuntimeError: Channel or attrib not ready.
Please tell me how to correctly connect to a BLE device via Python on Debian, how to programmatically identify offered services and their characteristics, and how to get their notifications asynchronously in Python using gattlib (pygatt) or any other library. Thanks!
The answer is: Just use bleak.
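For instance, a minimal sketch with bleak (the MAC is a placeholder; 00002a37-0000-1000-8000-00805f9b34fb is the standard Heart Rate Measurement characteristic, and the byte-1 decoding assumes the 8-bit heart-rate format flag):
import asyncio
from bleak import BleakClient

ADDRESS = 'XX:XX:XX:XX:XX:XX'  # your sensor's MAC address
HR_MEASUREMENT = '00002a37-0000-1000-8000-00805f9b34fb'

def handle_hr(sender, data):
    # byte 0 holds flags; with the common 8-bit format the rate is in byte 1
    print('heart rate:', data[1])

async def main():
    async with BleakClient(ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT, handle_hr)
        await asyncio.sleep(30)  # receive notifications for 30 seconds
        await client.stop_notify(HR_MEASUREMENT)

asyncio.run(main())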
I have a device that presents the same behavior. In my case, the problem was that it does not have a channel of type public, I should use random instead (like in gatttool -b BE:BA:CA:FE:BA:BE -I -t random).
Just calling the connect() method with the parameter channel_type set to "random" should fix it:
requester.connect(True, channel_type="random")
PS: Sorry for the late response (maybe it will be helpful to others).

JMeter: how to disable a listener by code (Groovy)

I've tried to disable a View Results Tree listener via Groovy code. The code runs, and correctly reports and changes the name and enabled property (as shown in the log), but the listener neither stops showing info in the GUI nor stops writing to file (in both GUI and non-GUI mode). Listeners are processed at the end, so IMHO code executed in the setUp thread should have an effect on the logging of the other threads. What does the enabled property do?
I've seen a workaround by editing jmeter plan file in place (JMeter: how can I disable a View Results Tree element from the command line?), but I'd like internal jmeter/groovy solution.
The code (interestingly, each listener is processed twice; first it prints View Results Tree, then the already-renamed foo):
import org.apache.jmeter.engine.StandardJMeterEngine
import org.apache.jorphan.collections.HashTree
import org.apache.jorphan.collections.SearchByClass
import org.apache.jmeter.reporters.ResultCollector

def engine = ctx.getEngine()
def test = engine.getClass().getDeclaredField("test")
test.setAccessible(true)
def testPlanTreeRC = (HashTree) test.get(engine)
def rcSearch = new SearchByClass<>(ResultCollector.class)
testPlanTreeRC.traverse(rcSearch)
def rcTrees = rcSearch.getSearchResults()

for (rc in rcTrees) {
    log.error(rc.getName())
    if (rc.isEnabled()) { log.error("yes") } else { log.error("no") }
    rc.setEnabled(false)
    if (rc.isEnabled()) { log.error("yes") } else { log.error("no") }
    if (rc.getName() == "View Results Tree") { rc.setName("foo") }
}
ADDED: when a listener is disabled in the test plan in the GUI, it's not found by the tree-traversal code above.
The disabled property is used/checked by JMeter on startup, so a change in the JMeter code would be needed.
I opened an enhancement request, "Add option to disable View Results Tree/Listeners in non GUI", which you can vote on.
There are other options: executing JMeter externally using the Taurus tool, or executing JMeter from Java and disabling the listeners there:
// assumes the usual programmatic-JMeter imports (SaveService, SearchByClass, ResultCollector, TestElement, ...)
StandardJMeterEngine jmeter = new StandardJMeterEngine();
HashTree testPlanTree = SaveService.loadTree(new File("/path/to/your/testplan"));
SearchByClass<ResultCollector> listenersSearch = new SearchByClass<>(ResultCollector.class);
testPlanTree.traverse(listenersSearch);
Collection<ResultCollector> listeners = listenersSearch.getSearchResults();
listeners.forEach(listener -> listener.setProperty(TestElement.ENABLED, false));
jmeter.configure(testPlanTree);
jmeter.run();
