Unable to simulate OpenModelica ThermoSysPro FMU in PyFMI

Running 64-bit Windows 10. Anaconda installation of PyFMI and pip OMPython.
I want to automate the running of a Rankine Cycle model (using ThermoSysPro Components) in Python using OMPython to create the FMU and then PyFMI to simulate it.
I have managed to use this method to simulate a Brayton Cycle model from the ThermoPower library, and part of the Rankine Cycle model also simulates correctly. However, as soon as I add the Exchanger component (dynamicExchangerWaterSteamFlueGases), it stops working.
"""
Setting Condenser to Pump to Bolier (TSP) model values using PyFMI
Sophie Gribben
29/07/19
"""
def createfmu():
    # Flattening, compiling and exporting the model as an FMU
    from OMPython import OMCSessionZMQ
    omc = OMCSessionZMQ()
    omc.sendExpression('loadFile("H:/OMModels/CondensertoPumptoBolier.mo")')
    model_fmu = omc.sendExpression("translateModelFMU(CondensertoPumptoBolier)")
    return model_fmu

# Load model
from pyfmi import load_fmu
model = load_fmu(createfmu())

# Simulating the model, which returns the error below
res = model.simulate()
Without setting a log_level, the error message is:
FMUException: Exit Initialize returned with an error. Enable logging for more information, (load_fmu(..., log_level=4)).
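As the message itself suggests, reloading the FMU with a higher log level captures the initialization failure. A minimal sketch (the FMU filename is whatever translateModelFMU produced, assumed here to be CondensertoPumptoBolier.fmu):
from pyfmi import load_fmu

# Reload with verbose logging (log_level ranges from 0 to 7 in PyFMI)
model = load_fmu('CondensertoPumptoBolier.fmu', log_level=4)
res = model.simulate()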
Update
RankineCycle_log.txt is giving me an FMU status Error, but I am unsure how to fix it.
FMIL: module = b'Model', log level = 2: b'[logStatusError][FMU status:Error] C:/OpenModelica1.13.264bit/lib/omlibrary/ThermoSysPro 3.1/Properties/WaterSteam/IF97_packages.mo:123: Water_Ph: Incorrect region number (-1)'
FMIL: module = b'Model', log level = 2: b'[logFmi2Call][FMU status:Error] fmi2EnterInitializationMode: terminated by an assertion.'

Related

How to lock VirtualBox to get a screenshot through the SOAP API

I'm trying to use the SOAP interface of Virtualbox 6.1 from Python to get a screenshot of a machine. I can start the machine but get locking errors whenever I try to retrieve the screen layout.
This is the code:
import zeep

# Helper to show the session lock status
def show_lock_state(session_id):
    session_state = service.ISession_getState(session_id)
    print('current session state:', session_state)

# Connect
client = zeep.Client('http://127.0.0.1:18083?wsdl')
service = client.create_service("{http://www.virtualbox.org/}vboxBinding", 'http://127.0.0.1:18083?wsdl')
manager_id = service.IWebsessionManager_logon('fakeuser', 'fakepassword')
session_id = service.IWebsessionManager_getSessionObject(manager_id)

# Get the machine id and start it
machine_id = service.IVirtualBox_findMachine(manager_id, 'Debian')
progress_id = service.IMachine_launchVMProcess(machine_id, session_id, 'gui')
service.IProgress_waitForCompletion(progress_id, -1)
print('Machine has been started!')
show_lock_state(session_id)

# Unlock and then lock again to be sure; apparently this has no effect
service.ISession_unlockMachine(session_id)
service.IMachine_lockMachine(machine_id, session_id, 'Shared')
show_lock_state(session_id)

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
print(service.IDisplay_getGuestScreenLayout(display_id))
The machine starts properly, but the last line fails with VirtualBox error: rc=0x80004001, which from what I have read means a locked session.
I tried releasing and re-acquiring the lock, but even though that succeeds the error remains. I went through the documentation but cannot find any other lock type I am supposed to use, apart from the Write lock, which is not usable here since the machine is already running. I could not find an example in any language.
I found an Android app called VBoxManager with this SOAP screenshot capability.
By running it through a MITM proxy, I reconstructed the calls it performs and rewrote them as their Zeep equivalents. In case anyone is interested in the future, the last lines of the above script are now:
import base64  # needed to decode the screenshot data

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
resolution = service.IDisplay_getScreenResolution(display_id, 0)
print(f'display data: {resolution}')

# Take the screenshot and write out the base64-encoded PNG data
image_data = service.IDisplay_takeScreenShotToArray(
    display_id,
    0,
    resolution['width'],
    resolution['height'],
    'PNG')
with open('screenshot.png', 'wb') as f:
    f.write(base64.b64decode(image_data))

How to use pystemd to control systemd timedated ntp service?

I'm working on a Python app that needs to get the NTPSynchronized parameter from systemd-timedated. I'd also like to be able to start and stop the NTP service by using the SetNTP method.
To communicate with timedated over D-Bus I have been using this as reference: https://www.freedesktop.org/wiki/Software/systemd/timedated/
I previously got this working with dbus-python, but have since learned that this library has been deprecated. I tried the dbus_next package, but it does not support Python 3.5, which I need.
I came across the pystemd package, but I am unsure whether it can do what I want. The only documentation I have been able to find is this example (https://github.com/facebookincubator/pystemd), and I cannot figure out how to make it work with systemd-timedated.
Here is the code I have that works with dbus-python:
import dbus

BUS_NAME = 'org.freedesktop.timedate1'
IFACE = 'org.freedesktop.timedate1'

bus = dbus.SystemBus()
timedate_obj = bus.get_object(BUS_NAME, '/org/freedesktop/timedate1')

# Get synchronization value
is_sync = timedate_obj.Get(BUS_NAME, 'NTPSynchronized', dbus_interface=dbus.PROPERTIES_IFACE)

# Turn off NTP
timedate_obj.SetNTP(False, False, dbus_interface=IFACE)
Here's what I have so far with pystemd, but I don't think I'm accessing it in the right way:
from pystemd.systemd1 import Unit
unit = Unit(b'systemd-timesyncd.service')
unit.load()
# Try to access properties
prop = unit.Properties
prop.NTPSynchronized
Running that I get:
AttributeError: 'SDInterface' object has no attribute 'NTPSynchronized'
I have a feeling that either the service I entered is wrong, or the way I'm accessing properties is wrong, or even both are wrong.
Any help or advice is appreciated.
Looking at the source code, it appears that the pystemd.systemd1 Unit object uses a default destination of "org.freedesktop.systemd1" plus the service name (https://github.com/facebookincubator/pystemd/blob/master/pystemd/systemd1/unit.py).
That is not what I want, because I am trying to access "org.freedesktop.timedate1".
So instead I instantiated its base class SDObject from pystemd.base (https://github.com/facebookincubator/pystemd/blob/master/pystemd/base.py).
The following code allowed me to get the sync status of NTP:
from pystemd.base import SDObject

obj = SDObject(
    destination=b'org.freedesktop.timedate1',
    path=b'/org/freedesktop/timedate1',
    bus=None,
    _autoload=False
)
obj.load()

is_sync = obj.Properties.Get('org.freedesktop.timedate1', 'NTPSynchronized')
print(is_sync)
Not sure if this is what the library author intended, but hey it works!
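For the SetNTP half of the question, the method presumably hangs off the same proxy object. This is an untested sketch reusing the obj from above; the timedate1 attribute name is an assumption based on how the Properties interface is exposed (pystemd appears to use the last dot-separated component of the interface name):
# Sketch only: the attribute path for the method is an assumption;
# SetNTP(use_ntp, interactive) is the documented timedated D-Bus signature.
obj.timedate1.SetNTP(False, False)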

Is there a way to set a global variable to be used with aiortc?

I'm trying to have a Python RTC client use a global variable so that I can reuse it in multiple functions.
I'm using this for an RTC project I've been working on. I have a functioning JS client, but the functions work differently in Python.
The functions on the server and JS client side are my own, do not take parameters, and I hope to avoid adding them on the Python client I'm making.
I've been using the aiortc cli.py from their GitHub as a basis for how my Python client should work, but I don't run it asynchronously, because I am trying to learn and control when events happen.
The source code can be found here; I am referring to the code at lines 71-72:
https://github.com/aiortc/aiortc/blob/master/examples/datachannel-cli/cli.py
This is the code I'm trying to run properly; I've only included the code relevant to my current issue:
import argparse
import asyncio
import logging
import time

from aiortc import RTCIceCandidate, RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.signaling import add_signaling_arguments, create_signaling

pc = None
channel = None

def createRTCPeer():
    print("starting RTC Peer")
    pc = RTCPeerConnection()
    print("created Peer", pc)
    return pc

def pythonCreateDataChannel():
    print("creating datachannel")
    channel = pc.CreateDataChannel("chat")
The createRTCPeer function works as intended, creating an RTC object, but my pythonCreateDataChannel reports an error if I have pc set to None beforehand:
AttributeError: 'NoneType' object has no attribute 'CreateDataChannel'
and it will report
NameError: name 'channel' is not defined
if I don't set channel in the global scope beforehand; the same goes for pc.
Have you tried declaring the names global inside the functions? In Python, assigning to a name inside a function creates a local variable unless the name is declared global, which is why the module-level pc stays None:
import argparse
import asyncio
import logging
import time

from aiortc import RTCIceCandidate, RTCPeerConnection, RTCSessionDescription
from aiortc.contrib.signaling import add_signaling_arguments, create_signaling

pc = None
channel = None

def createRTCPeer():
    print("starting RTC Peer")
    global pc
    pc = RTCPeerConnection()
    print("created Peer", pc)

def pythonCreateDataChannel():
    print("creating datachannel")
    global channel
    channel = pc.createDataChannel("chat")  # note: aiortc's method is createDataChannel
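Note also that aiortc follows the JavaScript WebRTC API naming, so the method is createDataChannel with a lower-case c (as in the snippet above); CreateDataChannel would raise an AttributeError even once pc is assigned.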

How to connect a sink to an external waveform port in REDHAWK?

I'm trying to write a unit test for a REDHAWK waveform. I would like to use stream sources to input data and stream/message sinks to store the output. I have written unit tests for components this way, but wanted to create a test for a waveform as well. I found a solution for connecting a StreamSource to a waveform's port, but have not been able to determine how to connect a sink to a waveform port.
For a source and a component (where self.comp is the component), normally one can use the following to connect them:
src = StreamSource(streamId='strm1', format='short')
src.connect(providesComponent=self.comp,
            providesPortName='dataShort_in',
            connectionId='testConn')
For a source and a waveform (where self.app is the waveform), I was able to get the following to work:
src = StreamSource(streamId='strm1', format='short')
src.connect(providesComponent=CorbaObject(self.app.getPort('dataShort_in')),
            connectionId='testConn')
However, for a sink I would normally call connect on the component:
sink = StreamSink('short')
self.comp.connect(sink, usesPortName='dataShort_out')
I tried to use a similar approach as for the source case by getting the port from the waveform as below:
sink = StreamSink('short')
self.app.getPort('dataShort_out').connectPort(sink, 'outputConn')
However, this gives the error:
File "/usr/local/redhawk/core/lib/python/ossie/cf/Port_idl.py", line 86, in connectPort
return self._obj.invoke("connectPort", _0_CF.Port._d_connectPort, args)
BAD_PARAM: CORBA.BAD_PARAM(omniORB.BAD_PARAM_WrongPythonType, CORBA.COMPLETED_NO, ["Expecting object reference, got <class 'bulkio.sandbox.streamsink.StreamSink'>", "Operation 'connectPort' parameter 0"])
I am not sure how I can get a CORBA obj ref for the sink to use here. Or is there another approach I can use here to connect the port to the sink?
I am using REDHAWK 2.2.2 on Centos 7.
I think I have found a solution to my own question. I ended up creating a new class that manages port connections and works for both sinks and sources. I called it ConnectionManager (hopefully it won't be confused with the ossie.utils.model.connection.ConnectionManager class).
class ConnectionManager:
    def __init__(self):
        self.connections = list()

    def clear(self):
        del self.connections[:]

    def connect(self, usesPort, providesPort, id):
        usesPort.connectPort(providesPort, id)
        self.connections.append((usesPort, id))

    def disconnectAll(self):
        for port, id in self.connections:
            port.disconnectPort(id)
        self.clear()
Here's an example using a StreamSource (self.cm is a ConnectionManager):
strm = sb.StreamSource(streamID='strm1', format='short')
self.cm.connect(strm.getPort('shortOut'),
                self.app.getPort('dataShort_in'),
                'connID')
And an example using a StreamSink:
sink = sb.StreamSink('short')
self.cm.connect(self.app.getPort('dataShort_out'),
                sink.getPort('shortIn'),
                'conn2ID')
My unit test setUp method has a call to self.cm.clear() and the tearDown method a call to self.cm.disconnectAll() to clean up the connections after each test.
The only thing I don't understand is the names of the ports on the sink and source classes. Using the {format}{In|Out} names works, but I don't know why.
The same process that you applied for connecting a component to a sink applies to an application, as long as the application is a sandbox object rather than a CORBA one:
dom = redhawk.attach()
app = dom.apps[0]
sink = sb.StreamSink('short')
app.connect(sink)
The next code shows the names of the ports. In this case, there is just one of type short.
from pprint import pprint
pprint(sink._providesPortDict)
The code below shows the syntax for using a CORBA reference instead of a sandbox object.
sink_port = sink.getPort('shortIn')
ref = app.ref
ref.getPort('dataShort_out').connectPort(sink_port, 'outputConn')
You can run a waveform in the sandbox. Note that the waveform's components need to run on the local host.
Use the nodeBooter shell command or kickDomain from the redhawk Python package to start a domain manager and a device manager.
Sample code to run a waveform in the sandbox:
import os
from ossie.utils import redhawk, sb
dom = redhawk.attach()
SDRROOT = os.getenv('SDRROOT')
waveform_dir = os.path.join(SDRROOT, 'dom', 'waveforms')
waveform_name = os.listdir(waveform_dir)[0]
app = dom.createApplication(waveform_name)
sink = sb.StreamSink()
app.connect(sink)
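The question also mentions message sinks; the sandbox provides sb.MessageSink as well, and presumably the same connect pattern applies (an assumption, not verified against a waveform with a message output port):
# Assumption: a waveform with a message output port can be connected
# to a sandbox MessageSink the same way as the StreamSink above.
msg_sink = sb.MessageSink()
app.connect(msg_sink)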

How to use the dask.distributed API to specify the options for starting the Bokeh web interface?

I'm trying to use the dask.distributed Python API to start a scheduler. The example provided in http://distributed.dask.org/en/latest/setup.html#using-the-python-api works as expected, but it does not explain how to supply the options needed to start the Bokeh web interface.
Upon inspecting the dask.distributed source code, I understood that I need to provide the Bokeh options using Scheduler(services={}). Unfortunately, I have failed to find the correct dictionary format for services={}.
Below is the code for the dask scheduler function.
import collections
import threading

import tornado.ioloop
import dask.distributed as daskd

def create_dask_scheduler(scheduler_options_dict):
    # Define and start the tornado IO loop in a background thread
    tornado_loop = tornado.ioloop.IOLoop.current()
    tornado_thread = threading.Thread(target=tornado_loop.start, daemon=True)
    tornado_thread.start()
    # Define and start the scheduler
    dask_scheduler = daskd.Scheduler(
        loop=tornado_loop,
        synchronize_worker_interval=scheduler_options_dict['synchronize_worker_interval'],
        allowed_failures=scheduler_options_dict['allowed_failures'],
        services=scheduler_options_dict['services'])
    dask_scheduler.start('tcp://:8786')
    return dask_scheduler

scheduler_options_dict = collections.OrderedDict()
scheduler_options_dict = {
    'synchronize_worker_interval': 60,
    'allowed_failures': 3,
    'services': {('http://hpcsrv', 8787): 8787}}
dask_scheduler = create_dask_scheduler(scheduler_options_dict)
The error I get is:
Exception in thread Thread-4: Traceback (most recent call last):
/uf5a/nbobolea/bin/anaconda2019.03_python3.7/envs/optimization/lib/python3.7/site-packages/ipykernel_launcher.py:18:
UserWarning: Could not launch service 'http://hpcsrv' on port 8787.
Got the following message: 'int' object is not callable
distributed.scheduler - INFO - Scheduler at: tcp://xxx.xxx.xxx.xxx:8786
Help and insight is very appreciated.
You want
'services': {('bokeh', dashboard_address): (BokehScheduler, {})}
where dashboard_address is something like "localhost:8787" and BokehScheduler lives in distributed.bokeh.scheduler. You will need to read up on the Bokeh server to see what additional kwargs could be passed in that empty dictionary.
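Putting that together with the code in the question, a minimal sketch might look like this (assuming a distributed version from around when this question was asked, which still ships distributed.bokeh.scheduler; the address is illustrative):
from distributed import Scheduler
from distributed.bokeh.scheduler import BokehScheduler

# The services dict maps ('bokeh', address) to (service class, kwargs for it)
services = {('bokeh', 'localhost:8787'): (BokehScheduler, {})}
scheduler = Scheduler(services=services)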
