Apply a different dataset on REBEL module - nlp

I am trying to apply a different training/testing dataset to the REBEL model; please see the following page: https://github.com/Babelscape/rebel
If I understand it correctly, the following module should be able to produce such a dataset for REBEL: https://github.com/Babelscape/crocodile. Does anyone know how this can effectively be applied?
Kind regards

Related

How to use generators with kedro?

Thanks to David Beazley's slides on Generators, I'm quite taken with using generators for data processing in order to keep memory consumption minimal. Now I'm working on my first kedro project, and my question is how I can use generators in kedro. When I have a node that yields a generator and then run it with kedro run --node=example_node, I get the following error:
DataSetError: Failed while saving data to data set MemoryDataSet().
can't pickle generator objects
Am I supposed to always load all my data into memory when working with kedro?
Hi @ilja, to do this you may need to change the type of copy operation that MemoryDataSet applies when saving.
In your catalog, declare your datasets explicitly and change copy_mode to either copy or assign. I think assign may be your best bet here, since a generator cannot be deep-copied or pickled.
https://kedro.readthedocs.io/en/stable/kedro.io.MemoryDataSet.html
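A minimal sketch of that idea using the Python API (the dataset name example_node_output is hypothetical; in catalog.yml the equivalent entry would set copy_mode: assign):

from kedro.io import DataCatalog, MemoryDataSet

# "assign" stores the node output as-is, so the generator is never
# deep-copied or pickled when it is saved to the catalog.
catalog = DataCatalog({
    "example_node_output": MemoryDataSet(copy_mode="assign"),
})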
I hope this works, but am not 100% sure.

How to save keras model in different formats

I am new to this field, so this question might look dumb to some of you, but please bear with me.
I have created a Keras model which works well. I can save it in .hdf5 format using model.save("model_name.hdf5"), which is good.
My main question is whether there is any other format I can save my model in so that it can be used from C++, Android, or JavaScript. Is this even possible? If you are wondering why I am asking about all three languages, it's because I have three projects, each using the respective language.
Thanks for any help in advance.
The answer depends on what you are going to save and use in another language.
If you just need the architecture of the model to be saved, you may save it as JSON, which can later be used in any other platform and language you are going to use.
model_json = model.to_json()
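To round that out, a short sketch (the file name is arbitrary):

# Persist the architecture-only JSON; it can later be restored in
# Python with keras.models.model_from_json, or parsed elsewhere.
with open("model_architecture.json", "w") as f:
    f.write(model_json)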
If you also need the weights and biases, I do not know any specific tool, but you can simply read the stored data in Python, create a multidimensional array, and then save it in a file format appropriate for whichever language you need. For example, model.layers[1].get_weights() returns the weights of the second layer (layer indices are 0-based).
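A sketch of that idea, continuing from the model above (the output file name is an assumption; nested JSON lists are portable but verbose, so a binary format may be preferable for large models):

import json

# Dump every layer's weights and biases as nested lists so they can
# be parsed from C++, Java, or JavaScript.
weights = [[w.tolist() for w in layer.get_weights()] for layer in model.layers]
with open("weights.json", "w") as f:
    json.dump(weights, f)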
Finally, if you want to run the model in another language, you need to implement the loops that perform the processing yourself. You might also find conversion tools for your target language; for example, for C, this tool can help.

Alloy API: Decompile into .als

BLUF: Can I export a .als file corresponding to a model I have created with the Alloy API?
Example: I have a module that I read in using edu.mit.csail.sdg.alloy4compiler.parser.CompUtil. I then add signatures and facts to create a modified model in memory. Can I "de-parse" that and basically invert the lexer (edu.mit.csail.sdg.alloy4compiler.parser.CompLexer) to get a .als file somehow?
It seems like there ought to be a way to decompile the model in memory and save that as code to be later altered, but I'm having trouble identifying a path to that in the Alloy API Javadocs. I'm building a translator from select behavioral aspects of UML/SysML as part of some research, so I'm trying to figure out if there is something extant I can take advantage of or if I need to create it.
It seems a similar question has been asked before: Generating .als files corresponding to model instances with Alloy API
In the linked post, Loïc Gammaitoni (https://stackoverflow.com/users/2270610/lo%c3%afc-gammaitoni) stated that he has written a solution for this in his Lightning application and that he may include the source code for completing this task. I'm unsure whether he has uploaded the solution yet.

OpenModelica with Python

I created two models in OpenModelica v1.12. One model has the main part and the other has the math-related calculations; the main model uses the other model for the calculations. This works fine in OpenModelica.
When I launch the main model from Python with OMPython, it shows the error below:
Error: Error occurred while flattening model Sim_FS_SingleEx.
How do I include the dependent models (.mo files) in Python?
Any suggestions to resolve the issue?
Thanks
I guess you can send two expressions to omc:
loadFile("file1.mo");
loadFile("file2.mo");
Better still would be to make your models into a library; then you can just load Library/package.mo, which will load all the files in the library automatically.
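A minimal OMPython sketch of the loadFile approach (the dependent-model file name Calculations.mo is an assumption; Sim_FS_SingleEx comes from the error message above):

from OMPython import OMCSessionZMQ

omc = OMCSessionZMQ()
# Load the dependent model first, then the main model, then simulate.
omc.sendExpression('loadFile("Calculations.mo")')
omc.sendExpression('loadFile("Sim_FS_SingleEx.mo")')
print(omc.sendExpression('simulate(Sim_FS_SingleEx)'))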
I have fixed the ticket. You can specify .mo files in the third parameter.
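If that refers to OMPython's ModelicaSystem constructor, a sketch might look like this (all file and model names here are assumptions):

from OMPython import ModelicaSystem

# The third argument lists dependent libraries/files to load
# alongside the main model.
mod = ModelicaSystem("Sim_FS_SingleEx.mo", "Sim_FS_SingleEx", ["Calculations.mo"])
mod.simulate()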

Best and optimized way to create web based Interactive Choropleth Map

I am going to build an interactive choropleth map of Bangladesh. The goal of this project is to build a map system and populate it with different types of data. I have read the documentation for OpenLayers, Leaflet, and D3, and I need some advice on finding the right path. The solution must be well optimized.
The map I am going to create will be something like the following: http://nasirkhan.github.io/bangladesh-gis/asset/base_files/bd_admin_3.html. It is prepared with Leaflet, but it is not mandatory to work with that library. I tried Leaflet because it is easy to use, and I found the expected solution within a very short time.
The requirement of the project is to prepare a choropleth map on which I can display related data. For example, I have to show the population of all the divisions of Bangladesh, and at the same time there should be options to show the literacy rate, male-female ratio, and so on.
The solution I am working on now has some issues: the load time is huge, and if I want to load a second dataset I have to reload the same huge geolocation data. How can I optimize or avoid this situation?
Leaflet has a layers control feature. If you cut your data down to just what is required, split it into different layers, and let the user select the layers they are interested in viewing, that might cut down on the data loading. Another option is to simplify the shapes of the polygons.
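Leaflet itself is JavaScript, but here is a Python sketch of the same layered idea using folium, a Python wrapper around Leaflet (the file, column, and key names are all assumptions):

import folium
import pandas as pd

# Hypothetical indicator table: one row per division.
df = pd.read_csv("division_indicators.csv")

m = folium.Map(location=[23.7, 90.4], zoom_start=7)

# One overlay per indicator; the user toggles indicators via the
# layers control instead of reloading the boundary data each time.
for label, column in [("Population", "population"), ("Literacy rate", "literacy_rate")]:
    folium.Choropleth(
        geo_data="bd_divisions.geojson",  # pre-simplified polygons
        data=df,
        columns=["division", column],
        key_on="feature.properties.NAME",
        name=label,
    ).add_to(m)

folium.LayerControl().add_to(m)
m.save("choropleth.html")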