OpenModelica with Python

I created two models in OpenModelica 1.12. One model contains the main part and the other contains the math-related calculations. The main model uses the other model for its calculations, and this works fine in OpenModelica.
When I launch the main model from Python with OMPython, it fails with the following error:
Error: Error occurred while flattening model Sim_FS_SingleEx.
How do I include the dependent models (.mo files) in Python?
Any suggestions to resolve the issue?
Thanks

I guess you can send two expressions to omc:
loadFile("file1.mo");
loadFile("file2.mo");
Better still would be to make your models into a library; then you can just load Library/package.mo, which will load all the files from the library automatically.
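A minimal sketch of the first suggestion with OMPython; the file names below are placeholders for your own models, and the live-session part is shown in comments since it needs an OpenModelica installation:

```python
# Sketch: load each dependent .mo file before the main model via OMPython.
# File names here are placeholders for your own models.

def load_commands(mo_files):
    """Build the loadFile() expressions to send to omc, one per file."""
    return ['loadFile("{}")'.format(f) for f in mo_files]

# With OpenModelica installed, you would send them like this:
#   from OMPython import OMCSessionZMQ
#   omc = OMCSessionZMQ()
#   for cmd in load_commands(["MathCalcs.mo", "MainModel.mo"]):
#       if not omc.sendExpression(cmd):
#           print(omc.sendExpression("getErrorString()"))

print(load_commands(["MathCalcs.mo", "MainModel.mo"]))
```

Loading the dependency before the main model matters: the flattening error above is what you get when the main model references classes omc has not seen yet.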

I have fixed the ticket. You can now specify the dependent .mo files in the third parameter.
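If this refers to OMPython's ModelicaSystem wrapper, the dependent files would be passed as a list in the third argument. A hedged sketch (file and model names are placeholders, and the actual call is shown in comments since it needs OpenModelica installed):

```python
# Sketch: assemble the arguments ModelicaSystem expects, with dependent
# .mo files as the third element. All names below are placeholders.

def modelica_system_args(main_file, model_name, dependencies):
    """Assemble the (fileName, modelName, dependencies) argument triple."""
    return (main_file, model_name, list(dependencies))

args = modelica_system_args("MainModel.mo", "Sim_FS_SingleEx", ["MathCalcs.mo"])
# With OpenModelica installed:
#   from OMPython import ModelicaSystem
#   mod = ModelicaSystem(*args)
#   mod.simulate()
print(args)
```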

Related

Apply a different dataset on REBEL module

I am trying to apply a different training/testing dataset to the REBEL module; please see the following page: https://github.com/Babelscape/rebel
If I understand correctly, the following module should be able to provide a specific dataset for it: https://github.com/Babelscape/crocodile. Does anyone know how this can effectively be applied?
Kind regards

How to use generators with kedro?

Thanks to David Beazley's slides on Generators I'm quite taken with using generators for data processing in order to keep memory consumption minimal. Now I'm working on my first kedro project, and my question is how I can use generators in kedro. When I have a node that yields a generator, and then run it with kedro run --node=example_node, I get the following error:
DataSetError: Failed while saving data to data set MemoryDataSet().
can't pickle generator objects
Am I supposed to always load all my data into memory when working with kedro?
Hi @ilja, to do this you may need to change the copy mode that MemoryDataSet applies.
In your catalog, declare your datasets explicitly and change copy_mode to either copy or assign. I think assign may be your best bet here...
https://kedro.readthedocs.io/en/stable/kedro.io.MemoryDataSet.html
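A sketch of what that catalog entry could look like; the dataset name example_node_output is a placeholder for your own node's output:

```yaml
# catalog.yml -- declare the intermediate dataset explicitly so you can
# set copy_mode; "assign" hands the object through without copying, which
# avoids the pickling step that fails on generator objects.
example_node_output:
  type: MemoryDataSet
  copy_mode: assign
```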
I hope this works, but am not 100% sure.

How to save keras model in different formats

I am new to this field, so this question might look dumb to some of you, but please bear with me.
I have created a Keras model which works well. I can save it in .hdf5 format using model.save("model_name.hdf5"), which is good.
My main question is whether there is any other format I can save my model in so that it can be used from C++, Android, or JavaScript. Is this even possible? If you are wondering why I am asking about all three, it's because I have three projects, each using one of those languages.
Thanks for any help in advance.
The answer depends on what you are going to save and use in another language.
If you just need the architecture of the model to be saved, you may save it as JSON, which can later be used in any other platform and language you are going to use.
model_json = model.to_json()
If you also need the weights and biases, I do not know of a specific tool, but you can simply read the stored data in Python, build a multidimensional array, and then save it in a file format appropriate for whichever language you need. For example, the weights of the layer at index 2 can be found with model.layers[2].get_weights().
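As a hedged sketch of that idea using only the standard library: convert each array from get_weights() to plain lists with .tolist() and dump them as JSON, which C++, Java/Android, and JavaScript can all parse. The weight values below are made-up stand-ins, not from a real model:

```python
import json

# Hypothetical weights, standing in for what
# model.layers[2].get_weights() would return after calling .tolist()
# on each array (a kernel matrix and a bias vector):
weights = {"kernel": [[0.1, 0.2], [0.3, 0.4]], "bias": [0.0, 0.0]}

# Dump to a portable JSON file...
with open("layer_weights.json", "w") as f:
    json.dump(weights, f)

# ...which any of the three target languages can read back.
with open("layer_weights.json") as f:
    restored = json.load(f)
print(restored["bias"])
```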
Finally, if you want to run the model in another language, you need to implement the for-loops that do the processing yourself. You might also find conversion tools for your target language; for example, for C, this tool can help.

Alloy API: Decompile into .als

BLUF: Can I export a .als file corresponding to a model I have created with the Alloy API?
Example: I have a module that I read in using edu.mit.csail.sdg.alloy4compiler.parser.CompUtil. I then add signatures and facts to create a modified model in memory. Can I "de-parse" that and basically invert the lexer (edu.mit.csail.sdg.alloy4compiler.parser.CompLexer) to get a .als file somehow?
It seems like there ought to be a way to decompile the model in memory and save that as code to be later altered, but I'm having trouble identifying a path to that in the Alloy API Javadocs. I'm building a translator from select behavioral aspects of UML/SysML as part of some research, so I'm trying to figure out if there is something extant I can take advantage of or if I need to create it.
It seems a similar question has been asked before: Generating .als files corresponding to model instances with Alloy API
In the linked post, Loïc Gammaitoni (https://stackoverflow.com/users/2270610/lo%c3%afc-gammaitoni) stated that he has written a solution for this in his Lightning application. He said that he may include the source code for completing this task; I'm unsure if he has uploaded it yet.

Nullpointer Exception with OpenNLP in NameFinderME class

I am using OpenNLP to extract named entities from a given text.
It throws the following error when running on large data; on small data it works fine.
java.lang.NullPointerException
at opennlp.tools.util.Cache.put(Cache.java:134)
at opennlp.tools.util.featuregen.CachedFeatureGenerator.createFeatures(CachedFeatureGenerator.java:71)
at opennlp.tools.namefind.DefaultNameContextGenerator.getContext(DefaultNameContextGenerator.java:116)
at opennlp.tools.namefind.DefaultNameContextGenerator.getContext(DefaultNameContextGenerator.java:39)
at opennlp.tools.util.BeamSearch.bestSequences(BeamSearch.java:125)
at opennlp.tools.util.BeamSearch.bestSequence(BeamSearch.java:198)
at opennlp.tools.namefind.NameFinderME.find(NameFinderME.java:214)
at opennlp.tools.namefind.NameFinderME.find(NameFinderME.java:198)
Please help me out with this.
I had the same issue with POSTaggerME, and the cause is almost certainly that you're sharing a NameFinderME instance between threads. The fix is to give each thread its own NameFinderME instance (or synchronize access to a shared one).
Per the OpenNLP documentation, most of the exposed library classes are not thread-safe:
http://incubator.apache.org/opennlp/documentation/manual/opennlp.html#tools.namefind.recognition.api
