ML.NET export to ONNX

I used ML.NET's GUI tools to train a model. The model works great with a C# application, but I want to have it in ONNX format. I have found tools that convert between model formats, but couldn't find anything for the ML.NET generated format. Apparently it's some kind of zip file, and that is all I know about it. Does anyone know a tool to do the conversion to ONNX?
Thanks
Microsoft's ML.NET Model Builder generates this code:
// Load Data
IDataView trainingDataView = mlContext.Data.LoadFromTextFile<ModelInput>(
    path: TRAIN_DATA_FILEPATH,
    hasHeader: true,
    separatorChar: ',',
    allowQuoting: true,
    allowSparse: false);

// Build training pipeline
IEstimator<ITransformer> trainingPipeline = BuildTrainingPipeline(mlContext);

// Train Model
ITransformer mlModel = TrainModel(mlContext, trainingDataView, trainingPipeline);

// Evaluate quality of Model
Evaluate(mlContext, trainingDataView, trainingPipeline);

// Save model
SaveModel(mlContext, mlModel, MODEL_FILE, trainingDataView.Schema);

var path = GetAbsolutePath(MODEL_FILE);
path = new FileInfo(path).Directory.FullName;
path = Path.Combine(path, "mymodel.onnx");

using (var onnx = File.Open(path, FileMode.OpenOrCreate))
{
    mlContext.Model.ConvertToOnnx(mlModel, trainingDataView, onnx);
}
I modified the generated code by adding the final block, which writes mymodel.onnx.
I am getting an ONNX file, but I am not able to run it (inference). Likewise, I have tried to open it with WinML Dashboard, but it is also unable to run the generated ONNX file. I wonder whether it's the version of ONNX it generates?
The model is a simple regression: all inputs are floats, and the output is a single float as well.
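Before suspecting the ONNX version, it may help to try running the exported file directly with the Microsoft.ML.OnnxRuntime package. A minimal smoke-test sketch; the input name "Features" and the 1x4 shape are assumptions, so check session.InputMetadata for the real names and shapes your export produced:

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;

class OnnxSmokeTest
{
    static void Main()
    {
        using var session = new InferenceSession("mymodel.onnx");

        // Print the actual input names/shapes the exporter produced.
        foreach (var kv in session.InputMetadata)
            Console.WriteLine($"{kv.Key}: {string.Join(",", kv.Value.Dimensions)}");

        // "Features" and the 1x4 shape are assumptions; substitute your own.
        var tensor = new DenseTensor<float>(new float[] { 1f, 2f, 3f, 4f }, new[] { 1, 4 });
        var inputs = new List<NamedOnnxValue>
        {
            NamedOnnxValue.CreateFromTensor("Features", tensor)
        };

        using var results = session.Run(inputs);
        Console.WriteLine(results.First().AsEnumerable<float>().First());
    }
}

If this runs but WinML Dashboard still refuses the file, the opset version is the next thing to compare.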

Use the Microsoft.ML.OnnxConverter NuGet Package. Something like this:
var mlContext = new MLContext();
...
IDataView trainingData = mlContext.Data.LoadFromEnumerable(inputData);

var pipeline = mlContext.Transforms.Concatenate("Features", ...)
    .Append(...);

var model = pipeline.Fit(trainingData);

using (var onnx = File.Open("mymodel.onnx", FileMode.OpenOrCreate))
{
    mlContext.Model.ConvertToOnnx(model, trainingData, onnx);
}

Related

Importing obj file from 3DS Max to Phaser3

In my Phaser3 game, I am trying to import a 3D model from an .obj file to be the main character of my game.
My code is:
class Scene1 extends Phaser.Scene {
    constructor() {
        super("startScene");
    }

    preload() {
        this.load.image("texture", "assets/images/<filename>.png");
        this.load.obj({
            key: "keymodel",
            url: "assets/models/<filename>.obj",
            matURL: "assets/models/<filename>.mtl",
        });
    }

    create() {
        const mesh = this.add.mesh(50, 50, "texture");
        mesh.addVerticesFromObj('keymodel', 0.1);
        mesh.panZ(7);
        mesh.modelRotation.y += 0.5;
        this.debug = this.add.graphics();
        mesh.setDebug();
        console.log(mesh);
    }
}
The game works fine when I use the base model offered in the tutorial (the skull from https://phaser.io/examples/v3/view/game-objects/mesh/mesh-from-obj ), but when I try to import my model from 3DS Max, errors start popping up.
In particular, I receive different errors from the addVerticesFromObj function depending on the model.
When I try to load the full model (composed of several objects), I receive the error:
Uncaught TypeError: Cannot read properties of undefined (reading 'u')
at t.exports (phaser.min.js:1:580548)
at initialize.addVerticesFromObj (phaser.min.js:1:298100)
at Scene1.create (Scene1.js:54:10)
at initialize.create (phaser.min.js:1:491015)
at initialize.loadComplete (phaser.min.js:1:490466)
at o.emit (phaser.min.js:1:7809)
at initialize.loadComplete (phaser.min.js:1:944358)
at initialize.fileProcessComplete (phaser.min.js:1:944058)
at initialize.onProcessComplete (phaser.min.js:1:21253)
at initialize.onProcess (phaser.min.js:1:303819)
When I try to load the model of a single object, I receive the error:
Uncaught TypeError: Cannot read properties of undefined (reading 'x')
at t.exports (phaser.min.js:1:580422)
at initialize.addVerticesFromObj (phaser.min.js:1:298100)
at Scene1.create (Scene1.js:54:10)
at initialize.create (phaser.min.js:1:491015)
at initialize.loadComplete (phaser.min.js:1:490466)
at o.emit (phaser.min.js:1:7809)
at initialize.loadComplete (phaser.min.js:1:944358)
at initialize.fileProcessComplete (phaser.min.js:1:944058)
at initialize.onProcessComplete (phaser.min.js:1:21253)
at initialize.onProcess (phaser.min.js:1:303819)
This happens just by changing the filename, without modifying the rest of the code.
Could it be that there are special export parameters from 3DS Max I need to set? Or is there another error in my code that I cannot see?
Thank you all for your help!
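One way to narrow this down is to load the OBJ alone and compare it against the known-good skull model. A debugging sketch only; the triangulation hypothesis and the skull path are mine, not from the thread:

preload() {
    // Debugging sketch: load the OBJ alone (no matURL) to rule out
    // .mtl parsing, and load the tutorial skull as a known-good control.
    this.load.obj("keymodel", "assets/models/<filename>.obj");
    this.load.obj("skull", "assets/obj/skull.obj"); // path is hypothetical
}

If the bare OBJ still fails while the skull loads fine, try re-exporting from 3DS Max with faces set to Triangles and texture coordinates enabled; parse errors like the ones above often come from quad or n-gon faces, or from faces that reference missing UVs.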

How to make a ES6 class visible for both the server and client scripts in Node JS?

So, I have 2 classes for shapes in a script called shapes.js:
class Shape {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
}

class Cube extends Shape {
    constructor(x, y, t) {
        super(x, y);
        this.t = t;
    }
}
How do I import both of these in my server.js or other js files? I know Shape is nothing more than an abstract class right now, but I want to test the functionality of importing multiple classes. I have tried doing it in the following ways:
var shape = require('/shapes');
var Shape = shape.Shape, Cube = shape.Cube;
or
import {Shape, Cube} from 'shapes'
import {Shape, Cube} from '/shapes'
I have also tried exporting them in shapes.js at the end of the file like this :
module.exports = {Shape, Cube}
or
export {Shape, Cube}
I've tried all the possibilities the basic tutorials provide; the result is either an error or a blank white screen with no errors. I have been really stuck on this one and would appreciate some help. Thank you!
I encourage you to use the ES Module syntax:
import {Shape, Cube} from 'shapes' to import a module
export {Shape, Cube} to export a module
Most browsers support this. Unfortunately, Node.js supports ES6 but doesn't support the ES Module syntax (or only in an experimental way), so you need to transpile your code with Babel and this plugin.
Here is the .babelrc file to configure Babel:
{
"plugins": ["transform-es2015-modules-commonjs"]
}
If you use babel-register, the transformation occurs when the file is required (imported).
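For instance, a minimal entry file using babel-register (assuming Babel 6, to match the plugin above; the file names are illustrative) might look like:

// entry.js - register Babel first, so every subsequent require()
// is transpiled and server.js can use import/export syntax
require('babel-register');
require('./server');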
The module.exports syntax is the best way to export your code. However, the best way to import them differs between Node and browsers.
module.exports = {Shape, Cube}; // in file with classes defined
// below are different ways to import
const { Shape, Cube } = require('./shapes'); // Node
import { Shape, Cube } from './shapes'; // modern browsers
The Node line uses something called destructuring assignment. It saves you from defining an intermediate shape variable and then reading Shape and Cube off of it.
You said in your question that you tried both module.exports and a require statement. My guess is that you either a) didn't try both at the same time, or b) without the ./ in your require statement, the program couldn't find your file.
Note that the export/import keywords do not work natively in Node; they work only in modern browsers. As other answers have noted, there are ways to make them work. However, I would generally recommend against them for smaller-scale projects, particularly if you're just getting familiar with this stuff.
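Putting that together, a minimal runnable sketch of the CommonJS variant (file names follow the question):

// shapes.js
class Shape {
    constructor(x, y) {
        this.x = x;
        this.y = y;
    }
}

class Cube extends Shape {
    constructor(x, y, t) {
        super(x, y);
        this.t = t;
    }
}

module.exports = { Shape, Cube };

// server.js - note the ./ so Node resolves the local file
const { Shape, Cube } = require('./shapes');

const cube = new Cube(1, 2, 3);
console.log(cube.x, cube.y, cube.t); // 1 2 3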
In my last projects I used TypeScript with Node.js; it's very powerful, so I use the namespace approach:
shapes.ts :
export namespace Shapes {
    export class A {
        x: number;
        y: number;
        constructor(x: number, y: number) {
            this.x = x;
            this.y = y;
        }
    }

    export class B {
        x: number;
        y: number;
        constructor(x: number, y: number) {
            this.x = x;
            this.y = y;
        }
    }
}
usage:
import { Shapes } from './shapes';
let shapeA = new Shapes.A(1, 2);

How to convert ML model to MLlib model?

I trained machine learning models using the Spark RDD-based API (the mllib package) in 1.5.2, say "Mymodel123":
org.apache.spark.mllib.tree.model.RandomForestModel Mymodel123 = ....;
Mymodel123.save("sparkcontext","path");
Now I'm using the Spark Dataset-based API (the ml package) in 2.2.0. Is there any way to load the model (Mymodel123) using the Dataset-based API?
org.apache.spark.ml.classification.RandomForestClassificationModel newModel = org.apache.spark.ml.classification.RandomForestClassificationModel.load("sparkcontext","path");
There is no public API that can do that. However, the new RandomForest models wrap the old mllib API and provide private methods which can be used to convert mllib models to ml models:
/** Convert a model from the old API */
private[ml] def fromOld(
    oldModel: OldRandomForestModel,
    parent: RandomForestClassifier,
    categoricalFeatures: Map[Int, Int],
    numClasses: Int,
    numFeatures: Int = -1): RandomForestClassificationModel = {
  require(oldModel.algo == OldAlgo.Classification, "Cannot convert RandomForestModel" +
    s" with algo=${oldModel.algo} (old API) to RandomForestClassificationModel (new API).")
  val newTrees = oldModel.trees.map { tree =>
    // parent for each tree is null since there is no good way to set this.
    DecisionTreeClassificationModel.fromOld(tree, null, categoricalFeatures)
  }
  val uid = if (parent != null) parent.uid else Identifiable.randomUID("rfc")
  new RandomForestClassificationModel(uid, newTrees, numFeatures, numClasses)
}
So it is not impossible. In Java you can use it directly (Java doesn't respect package-private modifiers); in Scala you'll have to put adapter code in the org.apache.spark.ml package.
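A sketch of such a Scala adapter, assuming Spark 2.2 (matching the fromOld signature above); the object and method names are mine:

// Must be compiled into the org.apache.spark.ml.classification package
// so it can see the private[ml] fromOld method. Sketch only - verify
// against the Spark version you actually build with.
package org.apache.spark.ml.classification

import org.apache.spark.mllib.tree.model.{RandomForestModel => OldRandomForestModel}

object MLlibModelConverter {
  // parent = null is fine: fromOld generates a fresh uid in that case.
  def convert(
      oldModel: OldRandomForestModel,
      categoricalFeatures: Map[Int, Int],
      numClasses: Int,
      numFeatures: Int = -1): RandomForestClassificationModel = {
    RandomForestClassificationModel.fromOld(
      oldModel, null, categoricalFeatures, numClasses, numFeatures)
  }
}

Load the old model with RandomForestModel.load(sc, "path"), then call MLlibModelConverter.convert(oldModel, Map.empty, numClasses) with your model's class count.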

How to use Apache OpenNLP in a node.js application

What is the best way to use Apache OpenNLP with node.js?
Specifically, I want to use the Named Entity Extraction API. Here is what it says about it - the documentation is terrible (new project, I think):
http://opennlp.apache.org/documentation/manual/opennlp.html#tools.namefind
From the docs:
To use the Name Finder in a production system it's strongly recommended to embed it directly into the application instead of using the command line interface. First the name finder model must be loaded into memory from disk or another source. In the sample below it's loaded from disk.
// model is declared outside the try so it can be used below
TokenNameFinderModel model = null;
InputStream modelIn = new FileInputStream("en-ner-person.bin");
try {
    model = new TokenNameFinderModel(modelIn);
}
catch (IOException e) {
    e.printStackTrace();
}
finally {
    if (modelIn != null) {
        try {
            modelIn.close();
        }
        catch (IOException e) {
        }
    }
}
There are a number of reasons why the model loading can fail:
Issues with the underlying I/O
The version of the model is not compatible with the OpenNLP version
The model is loaded into the wrong component, for example a tokenizer model is loaded with the TokenNameFinderModel class
The model content is not valid for some other reason
After the model is loaded the NameFinderME can be instantiated.
NameFinderME nameFinder = new NameFinderME(model);
The initialization is now finished and the Name Finder can be used. The NameFinderME class is not thread safe; it must only be called from one thread. To use multiple threads, multiple NameFinderME instances sharing the same model instance can be created. The input text should be segmented into documents, sentences and tokens. To perform entity detection, an application calls the find method for every sentence in the document. After every document, clearAdaptiveData must be called to clear the adaptive data in the feature generators. Not calling clearAdaptiveData can lead to a sharp drop in the detection rate after a few documents. The following code illustrates that:
for (String document[][] : documents) {
    for (String[] sentence : document) {
        Span nameSpans[] = nameFinder.find(sentence);
        // do something with the names
    }
    nameFinder.clearAdaptiveData();
}
The following snippet shows a call to find:
String[] sentence = new String[]{
    "Pierre",
    "Vinken",
    "is",
    "61",
    "years",
    "old",
    "."
};
Span nameSpans[] = nameFinder.find(sentence);
The nameSpans array now contains exactly one Span which marks the name Pierre Vinken. The elements between the begin and end offsets are the name tokens. In this case the begin offset is 0 and the end offset is 2. The Span object also knows the type of the entity; in this case it's person (defined by the model). It can be retrieved with a call to Span.getType(). In addition to the statistical Name Finder, OpenNLP also offers dictionary and regular expression name finder implementations.
Check out this Node.js library:
https://github.com/mbejda/Node-OpenNLP
https://www.npmjs.com/package/opennlp
Just do npm install opennlp and look at the examples on GitHub:
var openNLP = require("opennlp"); // added: the require the snippet assumes
var nameFinder = new openNLP().nameFinder;
var sentence = "Pierre Vinken is 61 years old.";
nameFinder.find(sentence, function(err, results) {
    console.log(results);
});

Xtext/EMF how to do model-to-model transform?

I have a DSL in Xtext, and I would like to reuse the rules, terminals, etc. defined in my .xtext file to generate a configuration file for another tool involved in the project. The config file uses a syntax similar to BNF, so it is very similar to the actual Xtext content and requires minimal transformations. In theory I could easily write a script that would parse the Xtext and spit out my config...
The question is, how do I go about implementing it so that it fits with the whole ecosystem? In other words: how do I do a model-to-model transform in Xtext/EMF?
If you have both metamodels (ecore, xsd, ...), your best shot is to use ATL ( http://www.eclipse.org/atl/ ).
If I understand you correctly, you want to go from an Xtext model to its EMF model. Here is a code example that achieves this; substitute your model specifics where necessary.
public static BeachScript loadScript(String file) throws BeachScriptLoaderException {
    try {
        Injector injector = new BeachStandaloneSetup().createInjectorAndDoEMFRegistration();
        XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);
        resourceSet.addLoadOption(XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);
        Resource resource = resourceSet.createResource(URI.createURI("test.beach"));

        InputStream in = new ByteArrayInputStream(file.getBytes());
        resource.load(in, resourceSet.getLoadOptions());
        BeachScript model = (BeachScript) resource.getContents().get(0);
        return model;
    } catch (Exception e) {
        throw new BeachScriptLoaderException("Exception Loading Beach Script " + e.toString(), e);
    }
}
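Once the model is loaded, the minimal-transformation case from the question can often be handled by simply walking the EObject tree and emitting text. A sketch; the traversal is standard EMF, but the one-line-per-element output is invented for illustration, with BeachScript standing in for your own root type:

import org.eclipse.emf.common.util.TreeIterator;
import org.eclipse.emf.ecore.EObject;

public static void emitConfig(EObject root) {
    // Walk every element contained in the model, depth-first.
    TreeIterator<EObject> it = root.eAllContents();
    while (it.hasNext()) {
        EObject element = it.next();
        // Replace this with whatever your BNF-like config needs per element.
        System.out.println(element.eClass().getName());
    }
}

For anything beyond trivial output, Xtend templates or ATL (as suggested above) scale better.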
