Loading a .onnx file into JavaScript - onnx

I am trying to load a .onnx file into a JavaScript session. The error I am receiving is TypeError: unrecognized operator 'ReduceL2', but this link https://github.com/onnx/onnx/blob/master/docs/Operators.md says that 'ReduceL2' is supported by ONNX. I am guessing that it could have something to do with WebGL not supporting it. Are there any workarounds, or a better way to approach running a model in the browser? I'm very new to JavaScript.
Javascript Code:
async function runExample() {
  // Create an ONNX inference session with WebGL backend.
  const session = new onnx.InferenceSession({ backendHint: 'webgl' });
  // Load an ONNX model. This model is ResNet50, which takes a 1*3*224*224 image and classifies it.
  await session.loadModel("./pathtomodel");
}
Error thrown:
Uncaught (in promise) TypeError: unrecognized operator 'ReduceL2'
at t.createOperator (session-handler.ts:222)
at t.resolve (session-handler.ts:86)
at e.initializeOps (session.ts:252)
at session.ts:92
at t.event (instrument.ts:294)
at e.initialize (session.ts:81)
at e.<anonymous> (session.ts:63)
at inference-session-impl.ts:16
at Object.next (inference-session-impl.ts:16)
at a (inference-session-impl.ts:16)

Refer to ONNX.js's list of supported operators, which is only a subset of all ONNX operators. ReduceL2 is not on the list, but ReduceSumSquare is, and the two are closely related: ReduceL2 is the square root of ReduceSumSquare.
Rather than waiting for ReduceL2 to be implemented in ONNX.js, try ReduceSumSquare followed by Sqrt, or square the input pointwise and then apply ReduceSum followed by Sqrt.
Alternatively, you could define your own ReduceL2() function that composes either of the above two operations.
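The equivalence is easy to check outside the browser. A minimal numpy sketch (the onnx package itself is not needed) showing that ReduceL2 is the square root of ReduceSumSquare, which in turn equals ReduceSum applied to the squared input:

```python
import numpy as np

x = np.array([[3.0, 4.0],
              [6.0, 8.0]])

# What the relevant ONNX reduce operators compute along axis 1:
reduce_sum_square = np.sum(np.square(x), axis=1)  # ReduceSumSquare -> [25., 100.]
reduce_sum_of_sq = np.sum(x * x, axis=1)          # Square + ReduceSum -> [25., 100.]
reduce_l2 = np.sqrt(reduce_sum_square)            # ReduceL2 -> [5., 10.]

# So in the graph, a ReduceL2 node can be replaced by ReduceSumSquare
# (or Square + ReduceSum) followed by a Sqrt node.
print(reduce_l2)
```

The same substitution can be made once in the .onnx graph before loading it, so the JavaScript side needs no changes.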

Related

TypeScript best way to restore prototype chain? (Node.js)

I have questions about the Object.setPrototypeOf(this, new.target.prototype) function because of this MDN warning:
Warning: Changing the [[Prototype]] of an object is, by the nature of how modern JavaScript engines optimize property accesses, currently a very slow operation in every browser and JavaScript engine. In addition, the effects of altering inheritance are subtle and far-flung, and are not limited to simply the time spent in the Object.setPrototypeOf(...) statement, but may extend to any code that has access to any object whose [[Prototype]] has been altered.
Because this feature is a part of the language, it is still the burden on engine developers to implement that feature performantly (ideally). Until engine developers address this issue, if you are concerned about performance, you should avoid setting the [[Prototype]] of an object. Instead, create a new object with the desired [[Prototype]] using Object.create().
So what would be the best way to restore the prototype chain in TypeScript (Node.js) without using Object.setPrototypeOf(this, new.target.prototype) (but still using classes)? This all stems from an error-handling middleware in Express, where I need instanceof to determine the origin of the error and return a proper response. However, whenever I make a class that extends Error, error instanceof Error returns true but error instanceof CustomError returns false. Doing a little research, I found this in the official TypeScript documentation:
The new.target meta-property is new syntax introduced in ES2015. When an instance of a constructor is created via new, the value of new.target is set to be a reference to the constructor function initially used to allocate the instance. If a function is called rather than constructed via new, new.target is set to undefined.
new.target comes in handy when Object.setPrototypeOf or __proto__ needs to be set in a class constructor. One such use case is inheriting from Error in NodeJS v4 and higher:
// Example
// Example
class CustomError extends Error {
  constructor(message?: string) {
    super(message); // 'Error' breaks prototype chain here
    Object.setPrototypeOf(this, new.target.prototype); // restore prototype chain
  }
}
// Results
var CustomError = (function (_super) {
  __extends(CustomError, _super);
  function CustomError() {
    var _newTarget = this.constructor;
    var _this = _super.apply(this, arguments); // 'Error' breaks prototype chain here
    _this.__proto__ = _newTarget.prototype; // restore prototype chain
    return _this;
  }
  return CustomError;
})(Error);
And I thought everything would be fine, since I assumed the compiler would replace this slow function with a more efficient procedure when emitting the JavaScript, but to my surprise it doesn't, and I'm a little worried about performance (I'm working on a big project).
I'm using TypeScript version 3.9.6 in Node 14.5.0; these are screenshots of the tests I did: TypeScript with Node, TypeScript Playground, and the TypeScript compiler results.
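The chain-breaking behaviour in the emitted ES5 code can be reproduced directly in Node. Below is a minimal sketch (BrokenError and FixedError are hypothetical names mimicking the compiler output); it also shows that a native ES2015 class keeps the chain intact without any fix, so raising the tsconfig compilation target to ES2015 or later avoids the setPrototypeOf call entirely:

```javascript
// ES5-style emulation of `class X extends Error`, as the compiler emits it.
function BrokenError(message) {
  // Error called as a plain function ignores `this` and returns a NEW Error,
  // so the instance ends up with Error.prototype, not BrokenError.prototype.
  var _this = Error.call(this, message) || this;
  return _this;
}
BrokenError.prototype = Object.create(Error.prototype);

// Same emission, plus the prototype-chain restore from the question.
function FixedError(message) {
  var _this = Error.call(this, message) || this;
  Object.setPrototypeOf(_this, FixedError.prototype); // restore prototype chain
  return _this;
}
FixedError.prototype = Object.create(Error.prototype);

// Native ES2015 class: `super()` preserves the chain, no fix needed.
class NativeError extends Error {}

console.log(new BrokenError('x') instanceof BrokenError); // false
console.log(new FixedError('x') instanceof FixedError);   // true
console.log(new NativeError('x') instanceof NativeError); // true
```

In other words, the slow Object.setPrototypeOf call only exists to patch up the ES5 downlevel emit; on an ES2015+ target the compiler leaves the class as-is and instanceof works natively.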

'TensorBoard' object has no attribute 'writer' error when using Callback.on_epoch_end()

Since Model.train_on_batch() doesn't take a callback argument, I tried using Callback.on_epoch_end() in order to write my loss to TensorBoard.
However, trying to run the on_epoch_end() method results in the titular error: 'TensorBoard' object has no attribute 'writer'. Other solutions to my original problem of writing to TensorBoard involved calling the Callback.writer attribute, and running those gave the same error. Also, the TensorFlow documentation for the TensorBoard class doesn't mention a writer attribute.
I'm somewhat of a novice programmer, but it seems to me that the on_epoch_end() method is also at some point calling the writer attribute, and I'm confused as to why the function would use an attribute that doesn't exist.
Here's the code I'm using to create the callback:
logdir = "./logs/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
and this is the callback code that I try to run in my training loop:
logs = {
    'encoder': encoder_loss[0],
    'discriminator': d_loss,
    'generator': g_loss,
}
tensorboard_callback.on_epoch_end(i, logs)
where encoder_loss, d_loss, and g_loss are my scalars, and i is the batch number
Is the error a result of some improper code on my part, or is TensorFlow trying to reference something that doesn't exist?
Also, if anyone knows another way to write to TensorBoard when using Model.train_on_batch, that would also solve my problem.
Since you are using a callback without the fit method, you also need to pass your model to the TensorBoard object:
logdir = "./logs/"
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=logdir)
tensorboard_callback.set_model(model=model)
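To answer the last part of the question: in a custom train_on_batch loop you can also skip the callback entirely and write scalars with a tf.summary file writer. A minimal sketch (the literal floats are placeholders standing in for encoder_loss[0], d_loss, and g_loss):

```python
import tensorflow as tf

logdir = "./logs/"
writer = tf.summary.create_file_writer(logdir)

# Inside the training loop, after each train_on_batch call:
i = 0  # batch number
with writer.as_default():
    tf.summary.scalar('encoder', 0.5, step=i)        # encoder_loss[0]
    tf.summary.scalar('discriminator', 0.3, step=i)  # d_loss
    tf.summary.scalar('generator', 0.7, step=i)      # g_loss
writer.flush()
```

This writes event files that TensorBoard picks up from logdir, with no Keras callback machinery involved.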

NodeJS / RequireJS: models loading late

I'm working on a refactor of my application. I'm using require.js at the top of a service class to get my sequelize models -- I have about 15 models.
For some reason, the models are an empty object unless I require them further down in my flow (moving the require statement inside a function call works, for example).
So for example, when the require is at the top, a statement like models.Foo.findOne() throws Cannot read property Foo of undefined.
Figured out I had a circular dependency -- essentially a model ended up dependent on a file that required models.
Sometimes I use similar code to avoid this problem:
(() => models.Foo)().findOne()
or
(() => Bar.sequelize.models.Foo)().findOne()
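What's happening can be simulated without Sequelize at all: with a circular require, the early importer receives the models module's still-incomplete exports object, so only properties read lazily (after loading has finished) are visible. A minimal sketch, with plain objects standing in for the real modules:

```javascript
// Stand-in for what require('./models') returns while models/index.js is
// still executing because of the circular dependency: an empty exports object.
const models = {};

// service.js equivalent -- eager capture at the top of the file:
const EagerFoo = models.Foo; // undefined: models hasn't finished loading yet

// ...later, models/index.js finishes and fills in its exports:
models.Foo = { findOne: () => 'a row' };

// Lazy access resolves at call time, after initialization, so it works:
const lazyFoo = () => models.Foo;

console.log(EagerFoo);            // undefined
console.log(lazyFoo().findOne()); // 'a row'
```

This is why moving the require (or the property access) inside a function call "fixes" it; the cleaner fix is to break the cycle itself.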

Subclassing Bluebird's OperationalError

I'm promisifying a 3rd party library that doesn't throw typed errors, but instead uses the "err" function callback to notify the caller of the error. In this case, the "error" that the library is reporting is just an anonymous JS object with a few well-defined properties.
Bluebird is wrapping this in an OperationalError, which is great - but it'd be even more handy if I could subclass OperationalError and provide my own well-defined Error type that I could define. For instance LibraryXOperationalError - in order to distinguish the errors from this library, from some other error, in some sort of global express error handling middleware.
Is this possible? I tried to come up with a solution using the "promisifier" concept, but haven't had success.
OperationalError is subclassable just like Error is; you can get a reference to it from Promise.OperationalError.
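A sketch of how that can look. With bluebird installed you would write `const Promise = require('bluebird');` and extend `Promise.OperationalError`; since bluebird may not be available here, BaseOperationalError below is a hypothetical stand-in with the same Error-like shape:

```javascript
// Stand-in for bluebird's Promise.OperationalError (a normal Error subclass).
class BaseOperationalError extends Error {}

class LibraryXOperationalError extends BaseOperationalError {
  constructor(cause) {
    super(cause && cause.message);
    this.name = 'LibraryXOperationalError';
    this.cause = cause; // keep the library's anonymous error object around
  }
}

// A global Express error-handling middleware can now distinguish the source:
const err = new LibraryXOperationalError({ message: 'boom', code: 17 });
console.log(err instanceof BaseOperationalError);     // true
console.log(err instanceof LibraryXOperationalError); // true
console.log(err.cause.code);                          // 17
```

Whatever wraps the library callback would construct LibraryXOperationalError from the raw error object before rejecting, so instanceof checks in the middleware can route it.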

ASM is not loading object in class retransform but working fine with usual transform

I am trying to load some object through bytecode modification using asm bytecode instrumentation library.
I am retransforming the classes using retransformClasses() method.
I am loading the objects in this way :
super.visitVarInsn(Opcodes.ALOAD, 0);
super.visitFieldInsn(Opcodes.GETFIELD, owner, name, desc);
super.visitMethodInsn(org.objectweb.asm.Opcodes.INVOKESTATIC,
        "com/coolcoder/MyClass",
        "objectCheckTest",
        "(Ljava/lang/Object;)V");
The problem is that the objects get loaded fine using the usual transform() of my ClassTransformer, but when I use the Attach API's retransformClasses(), these objects are not loaded. The strange thing is that I am not getting any bytecode error either.
Am I doing something wrong, or am I missing some intricate part of retransformation?
I was able to solve the issue, though I do not know why it happened in the first place.
(1) I was loading the object only on the PUTFIELD or PUTSTATIC opcodes, i.e., when the object's value is being set.
(2) Just after bytecode modification the class gets redefined, as a result of which the code snippet did not work.
(3) The next time the class is loaded due to redefinition, the object has already been set, so the code is not called again.
(4) When I instead checked for the GETFIELD or GETSTATIC opcodes, i.e., when the object's value is being retrieved, and loaded it there, I was able to view its details.
The same approach I described above works fine in a usual transformation (i.e., running with premain).
Any idea about this erratic redefinition behaviour?
First, in the manifest.mf file, the Can-Retransform-Classes attribute should be true.
Your ClassFileTransformer instance should also be registered with the canRetransform parameter set to true in addTransformer.
Here is an additional tip, from the Java API docs:
If the transformer throws an exception (which it doesn't catch), subsequent transformers will still be called and the load, redefine or retransform will still be attempted. Thus, throwing an exception has the same effect as returning null. To prevent unexpected behavior when unchecked exceptions are generated in transformer code, a transformer can catch Throwable. If the transformer believes the classFileBuffer does not represent a validly formatted class file, it should throw an IllegalClassFormatException; while this has the same effect as returning null, it facilitates the logging or debugging of format corruptions.
So there may be an uncaught exception being raised that the JVM simply ignores. You can wrap your transformer code in a try..catch to surface it.
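The try..catch advice can be sketched like this. SafeTransformer is a hypothetical name, and the actual ASM rewriting is elided, since the point is only the exception handling around it:

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

// A transformer that surfaces exceptions instead of letting the JVM
// silently discard them during load/redefine/retransform.
public class SafeTransformer implements ClassFileTransformer {
    @Override
    public byte[] transform(ClassLoader loader, String className,
                            Class<?> classBeingRedefined, ProtectionDomain pd,
                            byte[] classfileBuffer) {
        try {
            // ... run the ASM ClassReader/ClassWriter rewriting here ...
            return null; // null means "keep the bytecode unchanged"
        } catch (Throwable t) {
            t.printStackTrace(); // otherwise the failure is invisible
            return null;
        }
    }
}
```

It would then be registered with retransformation enabled, e.g. `instrumentation.addTransformer(new SafeTransformer(), true);`, alongside the Can-Retransform-Classes: true manifest attribute.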
