OpenVINO node output

Environment: python=3.8.13, openvino=2022.3.0
Hi,
I intended to get the output of a node in my model. [image: model structure]
The method I tried was: openvino.runtime.Output
However, I found that openvino.runtime imports from _pyopenvino, which was an empty __init__ file, so the Output class was missing.
Is this a bug in openvino=2022.3.0?
Or is there another way to get the intermediate output of an IR model under OpenVINO?
I appreciate your help!
Tried: reinstalling openvino=2022.2.0 and openvino=2022.1.0.

You can try this method:
from openvino.runtime import Core
ie = Core()
classification_model_xml = "model/classification.xml"
model = ie.read_model(model=classification_model_xml)
model.output(0).any_name
Refer here for more info.
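If what you actually need is an intermediate node's output rather than the model's declared outputs, one option is to add that node as an extra output before compiling. A minimal sketch, assuming the tensor name below is replaced with a real name from your own model:
from openvino.runtime import Core

ie = Core()
model = ie.read_model(model="model/classification.xml")

# Expose an intermediate tensor as an extra model output; the tensor name
# here is a placeholder for one taken from your own model.
model.add_outputs("some_intermediate_tensor")

compiled_model = ie.compile_model(model=model, device_name="CPU")
print([out.any_name for out in compiled_model.outputs])  # now includes the added output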

Below is the code I tried for calling the read_model method.
[image 1: screenshot of the read_model call]
from openvino.runtime import Core
ie = Core()
model_path = 'openvino_folder/testSimplifiedModel_3.simplified.xml'
model = ie.read_model(model=model_path)
Then I went to the runtime Core file and found this:
[image 2: the Core class]
Maybe I installed OpenVINO incorrectly? Current openvino version = 2022.3.0.

Related

"Error: Undeclared identifier" when trying to return a <Node> from a procedure

I'm posting a specific problem with a library (grim) in Nim, but the underlying concepts are still not entirely clear to me, so I'd appreciate a solution that comes with an explanation.
I would like to write a procedure that returns a node. The example below is not really useful, but it makes the point: I want to return a node, yet I apparently don't know what type it is.
import grim
import sequtils

proc get_a_node_with_label(graph: Graph, label: string): Node =
  for node in graph.nodes:
    if node.label == label:
      return node

var g = newGraph("graph")
let n1 = g.addNode("n1", %(Name: "first"))
let n2 = g.addNode("n2", %(Name: "second"))
var aNode = get_a_node_with_label(g, "n2")
I get Error: undeclared identifier: 'Node', but the type of "node" inside the loop is "Node" if I echo node.type.
How should I deal with types in this situation? What return type should I declare for the procedure?
Thanks
Andrea
PS: I apologize if the question is not well asked, and I'm happy to improve it with your guidance.
You probably installed the grim library through nimble install grim. That gave you grim-0.2.0, released early this year. The point is that Node was private in that release, so your code cannot access it.
You can opt to install the latest code, which at some point this year made Node and others public, with:
$ nimble uninstall grim
$ nimble install grim@#devel
Or you can make the object public on your machine by editing (probably) ~/.nimble/pkgs/grim-0.2.0/grim/graph.nim and changing line 30 from
Node = ref object
to
Node* = ref object
The former option pulls in the latest code, some 40 commits ahead of 0.2.0. On the downside, your build will be hard to reproduce, because you cannot pin the grim version.
The latter should let you compile locally, but you will run into problems if you intend to distribute your code (i.e. it forces you or your users to patch the grim source).
You could also open an issue at the github repo, asking the author to tag a new version.
You can get objects whose type is Node (i.e. Node objects), but you cannot write "Node" in your code, and the only way to create a Node object is through code that has access to the private Node type (i.e. code in the same module), usually some kind of newNode or getNode proc.
So you can obtain a Node inside your code and pass it around, but you cannot write "Node". E.g.
import grim
var g = newGraph("graph")
let n1 = g.addNode("n1", %(Name: "first"))
# This works happily
let node = g.node(n1) # This assigns a Node object to "node"
echo node # This passes the Node object to a $ proc.
# This fails to compile, albeit being functionally the same code,
# because your program doesn't know what "Node" is.
let node1: Node = g.node(n1)

Pytorch: Recover network with customized VGG model that was saved improperly

I am currently doing work with customizing the forward method for models. I was using some tutorial code that ran VGG. I did a few runs with the baseline model and it seemed to work fine. Afterwards, I replaced the forward method for the VGG using:
net.forward = types.MethodType(forward_vgg_new, net)
Unfortunately, the way that the tutorial code saves the models is:
state = {
    'net': net,
    'acc': acc,
    'epoch': epoch,
}
...
torch.save(state, ...)
While this worked for the original tutorial code, loading no longer works for my custom models, as I get:
AttributeError: 'VGG' object has no attribute 'forward_vgg_new'
I have since read from the documentation that it is better for me to save the model's state_dict:
state = {
    'net': net.state_dict(),
    'acc': acc,
    'epoch': epoch,
}
...
torch.save(state, ...)
While I will change the code for future runs, I was wondering if it was possible to salvage the models I have already trained. I naively already tried to import the VGG class and add my forward_vgg_new method to it:
setattr(VGG, 'forward_vgg_new', forward_vgg_new)
before calling torch.load, but it doesn't work.
To solve the problem, I went directly into the VGG library and temporarily added my function so that I could load the saved models and save only their state dicts. I reverted the changes to the VGG library after I recovered the saves. Not the most graceful way of fixing the problem, but it worked.
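Concretely, the salvage step might look roughly like this (a sketch, not the exact code; the checkpoint path is a placeholder, and it assumes forward_vgg_new has been temporarily added back to the module that defines the tutorial's VGG class so that unpickling can resolve it):
import torch

# Load the full-model checkpoint; this only works while the patched VGG
# module is in place, because the pickle references forward_vgg_new.
state = torch.load("checkpoint.pth", map_location="cpu")
net = state['net']

# Re-save only the weights; the new file no longer depends on the patched class.
torch.save(
    {'net': net.state_dict(), 'acc': state['acc'], 'epoch': state['epoch']},
    "checkpoint_state_dict.pth",
)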

'TensorBoard' object has no attribute 'writer' error when using Callback.on_epoch_end()

Since Model.train_on_batch() doesn't take a callback argument, I tried using Callback.on_epoch_end() to write my loss to TensorBoard.
However, running the on_epoch_end() method results in the titular error, 'TensorBoard' object has no attribute 'writer'. Other solutions to my original problem of writing to TensorBoard involved calling the Callback.writer attribute, and running those gave the same error. Also, the TensorFlow documentation for the TensorBoard class doesn't mention a writer attribute.
I'm somewhat of a novice programmer, but it seems to me that on_epoch_end() is also calling the writer attribute at some point, and I'm confused as to why the function would use an attribute that doesn't exist.
Here's the code I'm using to create the callback:
logdir = "./logs/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
and this is the callback code that I try to run in my training loop:
logs = {
    'encoder': encoder_loss[0],
    'discriminator': d_loss,
    'generator': g_loss,
}
tensorboard_callback.on_epoch_end(i, logs)
where encoder_loss, d_loss, and g_loss are my scalars, and i is the batch number.
Is the error a result of some improper code on my part, or is TensorFlow trying to reference something that doesn't exist?
Also, if anyone knows another way to write to TensorBoard using Model.train_on_batch, that would also solve my problem.
Since you are using a callback without the fit method, you also need to pass your model to the TensorBoard object:
logdir = "./logs/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
tensorboard_callback.set_model(model=model)
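As an alternative, since the question also asked for another way to log from a Model.train_on_batch loop, you could skip the callback entirely and write the scalars with tf.summary. A sketch, assuming encoder_loss, d_loss, g_loss and the batch index i come from your own training loop:
import tensorflow as tf

writer = tf.summary.create_file_writer("./logs/")

# inside the training loop, after the train_on_batch calls:
with writer.as_default():
    tf.summary.scalar('encoder', encoder_loss[0], step=i)
    tf.summary.scalar('discriminator', d_loss, step=i)
    tf.summary.scalar('generator', g_loss, step=i)
writer.flush()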

How to pass arguments to openai-gym environments upon init

Following this (unreadable) forum post, I thought it was fitting to post it on Stack Overflow for future generations who search for it:
How do you pass arguments to gym environments on init?
In the meantime, support for arguments in gym.make has been implemented, so you can pass keyword arguments to make right after the environment name:
your_env = gym.make('YourEnv', some_kwarg=your_vars)
The gym version that I'm running is 0.12.4.
UPDATE: This is supported from version 0.10.10. Reference. Thanks @Wojciech.
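As a concrete illustration (my example, not from the original post), the built-in FrozenLake toy-text environment accepts an is_slippery keyword, so with a gym version that supports kwargs in make you can write:
import gym

# Keyword arguments after the id are forwarded to the environment's __init__.
env = gym.make('FrozenLake-v0', is_slippery=False)
obs = env.reset()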
Method 1 - Use the built-in register functionality:
Re-register the environment with a new name, for example 'Blackjack-natural-v0' instead of the original 'Blackjack-v0'.
First you need to import the register function:
from gym.envs.registration import register
Then you use the register function like this:
register(
    id='Blackjack-natural-v0',
    entry_point='gym.envs.toy_text:BlackjackEnv',
    kwargs={'natural': True},
)
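Once registered, the new id is used like any other environment (a short usage sketch):
import gym

env = gym.make('Blackjack-natural-v0')  # constructed as BlackjackEnv(natural=True)
obs = env.reset()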
Method 2 - Add an extra method to your env:
If calling another init method after gym.make is acceptable, you can simply do:
your_env = gym.make("YourEnv")
your_env.env.your_init(your_vars)
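For completeness, a sketch of what the env side of Method 2 might look like (the class name, id, and your_init method here are illustrative, not from an existing package):
import gym
from gym.envs.registration import register

class YourEnv(gym.Env):
    def __init__(self):
        super().__init__()
        self.action_space = gym.spaces.Discrete(2)
        self.observation_space = gym.spaces.Discrete(2)
        self.some_setting = None

    def your_init(self, some_setting):
        # extra initialisation, called manually after gym.make
        self.some_setting = some_setting

    def reset(self):
        return 0

    def step(self, action):
        return 0, 0.0, True, {}

register(id='YourEnv-v0', entry_point=YourEnv)

env = gym.make('YourEnv-v0')
env.unwrapped.your_init(some_setting=42)  # equivalent to the .env access shown above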

frontend_tuner_status doesn't work in Python FEI

I'm using the Redhawk IDE 2.0.1 on CentOS 6.5.
If I generate a Python-based FEI device, install it, run it, allocate, and then try to change the center_frequency via the Properties tab in the IDE, I get the error:
Failed to update device property: Center Frequency
Error while executing callable. Caused by org.omg.CORBA.NO_IMPLEMENT: Server-side Exception: null vmcid: 0x41540000 minor code: 99 completed: No
Server-side Exception: null
I've tried two totally different systems and I get the same behavior.
If I do the same thing with the C++ project, it works fine. It seems to me the auto-generated Python code in 2.0.1 is broken, like maybe it's not registering the listener? Any ideas are appreciated, as this app will be much easier for me to implement in Python. Thanks.
The error org.omg.CORBA.NO_IMPLEMENT: Server-side Exception is a CORBA exception indicating that the FEI device does not implement the setTunerCenterFrequency method, despite the FEI device having a DigitalTuner port. The DigitalTuner IDL inherits from the AnalogTuner IDL, which provides the setTunerCenterFrequency method, so there must be a bug in the implementation of the FEI DigitalTuner port. In ${OSSIEHOME}/lib/python/frontend/input_ports.py, InDigitalTunerPort does not inherit from InAnalogTunerPort, which is where setTunerCenterFrequency lives. Changing it to the following should fix this issue:
class InDigitalTunerPort(FRONTEND__POA.DigitalTuner, InAnalogTunerPort):
    def __init__(self, name, parent=digital_tuner_delegation()):
        InAnalogTunerPort.__init__(self, name, parent)
There's a second issue as well. The generated base class instantiates the DigitalTuner port without passing in a reference to itself, the parent. The generated base class of your FEI Device should change from this:
self.port_DigitalTuner_in = frontend.InDigitalTunerPort("DigitalTuner_in")
to this:
self.port_DigitalTuner_in = frontend.InDigitalTunerPort("DigitalTuner_in", self)
