ag-grid-community vs ag-grid-enterprise new Grid - node.js

I have a Node client-side application with the latest ag-grid version.
I was using ag-grid-community without any issues with this require line
const {Grid} = require('ag-grid-community');
and this constructor call
new Grid(agGridDiv, agGridOptions);
but if I change the require to
const {Grid} = require('ag-grid-enterprise');
the constructor call fails with the exception 'Grid is not a constructor'.
How can I fix this? I have tried various changes such as new Grid.Grid, etc., but nothing seems to work.

For the latest version (23.1.1), this page shows:
// ECMA 5 - using node's require() method
const AgGrid = require('ag-grid-enterprise');
Another way is to follow this guide; it all depends on which repository you download the dependencies from.
import {Grid, GridOptions} from '@ag-grid-community/core';
import {LicenseManager} from '@ag-grid-enterprise/core';
// or
const {Grid, GridOptions} = require('@ag-grid-community/core');
I used the core packages and the import worked.
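To tie the modular packages together, here is a minimal sketch (my own, assuming the modular ~v23 packages and the client-side row model module; the license key and the agGridDiv/agGridOptions variables from the question are placeholders):

// Minimal sketch for the modular packages; the module list and license key are assumptions.
const { Grid, ModuleRegistry } = require('@ag-grid-community/core');
const { ClientSideRowModelModule } = require('@ag-grid-community/client-side-row-model');
const { LicenseManager } = require('@ag-grid-enterprise/core');

ModuleRegistry.registerModules([ClientSideRowModelModule]);
LicenseManager.setLicenseKey('YOUR_LICENSE_KEY');

// Grid still comes from the community core package.
new Grid(agGridDiv, agGridOptions);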
For the old version:
Grid, like everything else, needs to be imported from ag-grid-community.
1) ag-grid-enterprise is purely additive functionality on top of ag-grid-community.
2) You use ag-grid-enterprise through the ag-grid-community API rather than explicitly; import from ag-grid-enterprise only for the LicenseManager (see the sketch below).
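A minimal sketch of that legacy-package setup (my own; the license key is a placeholder and agGridDiv/agGridOptions are the variables from the question):

// Grid comes from ag-grid-community; requiring ag-grid-enterprise registers the
// enterprise features as a side effect and exposes extras such as LicenseManager.
const { Grid } = require('ag-grid-community');
const { LicenseManager } = require('ag-grid-enterprise');

LicenseManager.setLicenseKey('YOUR_LICENSE_KEY');
new Grid(agGridDiv, agGridOptions);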
Off-topic:
I would recommend starting with the old version, since the source code of the new version is minified and it will be more difficult for you to understand many nontrivial nuances.

Can't load Features.Diagnostics

I'm creating a web client for joining Teams meetings with the ACS Calling SDK.
I'm having trouble loading the diagnostics API. Microsoft provides this page:
https://learn.microsoft.com/en-us/azure/communication-services/concepts/voice-video-calling/call-diagnostics
You are supposed to get the diagnostics this way:
const callDiagnostics = call.api(Features.Diagnostics);
This does not work.
I am loading the Features like this:
import { Features } from '@azure/communication-calling'
A statement console.log(Features) shows only these four features:
DominantSpeakers: (...)
Recording: (...)
Transcription: (...)
Transfer: (...)
Where are the Diagnostics??
User Facing Diagnostics
For anyone, like me, looking now...
At the time of writing, using the latest version of the @azure/communication-calling SDK, the documented solution still doesn't work:
const callDiagnostics = call.api(Features.Diagnostics);
call.api is undefined.
TL;DR
However, once the call is instantiated, this allows you to subscribe to changes:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
userFacingDiagnostics.media.on("diagnosticChanged", (diagnosticInfo) => {
    console.log(diagnosticInfo);
});
userFacingDiagnostics.network.on("diagnosticChanged", (diagnosticInfo) => {
    console.log(diagnosticInfo);
});
This isn't documented in the latest version, but is under this alpha version.
Whether this will continue to work is anyone's guess ¯\_(ツ)_/¯
Accessing Pre-Call APIs
Confusingly, this doesn't currently work using the specified version, despite the docs saying it will...
Features.PreCallDiagnostics is undefined.
This is actually what I was looking for, but I can get what I want by setting up a test call and asking for the latest values, like this:
const call = callAgent.join(/** your settings **/);
const userFacingDiagnostics = call.feature(Features.UserFacingDiagnostics);
console.log(userFacingDiagnostics.media.getLatest())
console.log(userFacingDiagnostics.network.getLatest())
Hope this helps :)
The User Facing Diagnostics API is currently only available in the Public Preview and npm beta packages. I confirmed this with a quick test comparing the 1.1.0 and beta packages.
Check the following link:
https://github.com/Azure-Samples/communication-services-web-calling-tutorial/
Features are imported from the @azure/communication-calling package,
for example:
const {
    Features
} = require('@azure/communication-calling');
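For context, a minimal end-to-end sketch of my own (not from the sample repo) showing where the feature call fits; the access token and meeting link are placeholders you must supply:

const { CallClient, Features } = require('@azure/communication-calling');
const { AzureCommunicationTokenCredential } = require('@azure/communication-common');

async function joinWithDiagnostics(userAccessToken, meetingLink) {
    const callClient = new CallClient();
    const tokenCredential = new AzureCommunicationTokenCredential(userAccessToken);
    const callAgent = await callClient.createCallAgent(tokenCredential);

    // Join the Teams meeting, then attach User Facing Diagnostics to the call.
    const call = callAgent.join({ meetingLink });
    const diagnostics = call.feature(Features.UserFacingDiagnostics);
    diagnostics.media.on('diagnosticChanged', (info) => console.log(info));
    diagnostics.network.on('diagnosticChanged', (info) => console.log(info));
    return call;
}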

Google Translate V2

I've used Google Translate for years with excellent results. Now Google has deprecated my version and uses Google.Cloud.Translation.V2 instead.
The NuGet Install-Package Google.Cloud.Translation.V2 -Version 2.0.0 installs, but it causes the dreaded yellow screen saying it can't find one thing after another, plus System.Net.Http version conflicts, manual edits to the Web and Machine configs, and on and on.
Also, Google samples reference things with undefined namespaces. I have the Google credentials JSON file.
I really don't want to change Environment vars in a production environment.
Bottom line? I'm in agreement with hundreds of folks saying that this is a nightmare.
With NuGet, it's usually easy to install, reference, and run. Not so here. Google's primers on this are unnecessarily verbose and impossible, at least for me, to follow. This should be NuGet, reference, code.
I think I'm up to about 100 folks with the same problem. Aaaargh!
Ideas?
OK, figured this out and while the instructions are all over the board, the process is not.
This is for C#.
You need an account at Google that enables Google.Cloud.Translation.V2.
Once enabled, you'll need a json credentials file. Download it and save it.
The instructions show how to use the Package Manager Console to allow these to work.
In your translate class, add these:
using Google.Cloud.Translation.V2; //PM> Install-Package Google.Cloud.Translation.V2 -Version 2.0.0
using Google.Apis.Auth.OAuth2; //PM> Install-Package Google.Apis.Oauth2.v2 -Version 1.50.0.1869
Then this (the encoding can be whatever you need):
//usage:
var translated = TranslateText("en", "ar", "Happy Translating");

public string TranslateText(string srclang, string destlang, string trns)
{
    // Requires: using System.Text; for UTF8Encoding
    var utf8 = new UTF8Encoding(false);

    // Point this at the JSON credentials file you downloaded (placeholder path)
    var client = TranslationClient.Create(GoogleCredential.FromFile("path/to/credentials.json"));

    var result = client.TranslateText(
        text: trns,
        targetLanguage: destlang, // e.g. "ar"
        sourceLanguage: srclang,  // e.g. "en"
        model: TranslationModel.Base);
        // or model: TranslationModel.NeuralMachineTranslation

    return utf8.GetString(utf8.GetBytes(result.TranslatedText));
}
Result: سعيد الترجمة

Tensorflow-node not recognized with cocoSsd on node.js

I'm using @tensorflow-models/coco-ssd and @tensorflow/tfjs-node to do some object detection. It's working, but apparently it could be faster. It's honestly not even that slow; it bangs through an image in about a second or two, but it just bugs me when something isn't working as well as it could.
You can find a live version of this at https://01014.org/wall-of-cats
Most current code at https://github.com/qozle/wall-of-cats
I get this on the first call to model.detect():
============================
Hi there 👋. Looks like you are running TensorFlow.js in Node.js. To speed things up
dramatically, install our node backend, which binds to TensorFlow C++, by running
npm i @tensorflow/tfjs-node, or npm i @tensorflow/tfjs-node-gpu if you have CUDA.
Then call require('@tensorflow/tfjs-node'); (-gpu suffix for CUDA) at the start of
your program. Visit https://github.com/tensorflow/tfjs-node for more details.
============================
I'm on a Linux Ubuntu 20 LTS server. I tried downgrading tfjs-node; I saw some folks had a problem with the versions not matching for the faceAPI example, so I tried that:
"#tensorflow-models/coco-ssd": "^2.1.0",
"#tensorflow/tfjs-node": "^2.1.0",
I tried deleting node_modules and doing
npm install
so that it would rebuild the bindings. No beans. I tried making sure I have Python installed (I'm running Python 3). EDIT: I tried making sure that I have 2.7 installed instead and that I'm using it as the default. No beans.
EDIT: I've also tried adding @tensorflow/tfjs-backend-cpu to the mix, and rebuilding the bindings again by deleting node_modules and doing npm install. No beans.
Here's some of the code:
const tf = require("@tensorflow/tfjs-node");
const cocoSsd = require("@tensorflow-models/coco-ssd");
tf.enableProdMode();
preloading the model:
catModel = await cocoSsd.load();
Then later on, when I get some data:
const image = await tf.node.decodeImage(resp.body, 3);
const predictions = await nsfwModel.classify(image);
const catObjects = await catModel.detect(image);
image.dispose();
This is for a project that interfaces with the twitter API, pulls filtered data of all posts with images that have #cat or cat or kitten in the post, checks it against a NSFW model, and then does object detection to make sure there are cats in the pictures (I got a lot of random images and couldn't really refine the twitter API filter rules).
I'm out of beans and out of ideas.
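One sanity check worth adding (a sketch of my own, not part of the original code): ask TensorFlow.js which backend it actually registered. With the native binding loaded it reports 'tensorflow'; the warning above means it fell back to the pure-JS backend.

const tf = require('@tensorflow/tfjs-node');

(async () => {
    await tf.ready();                          // wait until a backend is registered
    console.log('Backend:', tf.getBackend());  // 'tensorflow' means the native binding is active
})();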

Is it possible to use paperjs in nodejs without using Cairo?

I need to "read" an array of some paperjs paths in nodejs and get their dimensions. I wanted to use the paper npm module but saw that it has a dependency on Cairo.
As I'm deploying to Heroku, it is a little difficult to use Cairo. I know it's possible, but I want to know if it's really necessary just for "reading" the dimensions of a path group.
A present-day answer: yes, it's possible.
———
To quote from the docs:
Paper.js comes in three different versions on NPM: paper, paper-jsdom and paper-jsdom-canvas. Depending on your use case, you need to require a different one:
paper is the main library, and can be used directly in a browser context, e.g. a web browser or worker.
paper-jsdom is a shim module for Node.js, offering headless use with SVG importing and exporting through jsdom.
paper-jsdom-canvas is a shim module for Node.js, offering canvas rendering through Node-Canvas as well as SVG importing and exporting through jsdom.
→ If you don't require rendering to canvas, paper-jsdom should do the trick (which doesn't need Cairo)
———
Important note:
If I understood correctly this means you can't use PaperScript, but have to use JavaScript directly. This will affect how you have to write code:
new paper.Path() instead of new Path()
Create project manually using paper.setup()
No mathematical operators (+ - * / %); see the small snippet after this list
... (More info in the docs)
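For instance (a small snippet of my own, using the same paper-jsdom setup as the example below), point arithmetic goes through methods instead of operators:

const paper = require('paper');
paper.setup(new paper.Size(100, 100));

const pointA = new paper.Point(10, 20);
const pointB = new paper.Point(5, 5);

// PaperScript would allow pointA + pointB; plain JavaScript needs the methods:
console.log('sum:', pointA.add(pointB).toString());     // x: 15, y: 25
console.log('scaled:', pointA.multiply(2).toString());  // x: 20, y: 40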
———
A simple example to create a line, log its bounding box size (and also export the SVG):
Note: I used this with the current Node LTS (v14.16.1)
package.json
{
  "name": "paper-jsdom-example",
  "main": "index.js",
  "dependencies": {
    "paper-jsdom": "^0.12.15"
  }
}
index.js
const paper = require('paper');
const fs = require('fs');
var size = new paper.Size(300, 300)
paper.setup(size);
var path = new paper.Path();
path.strokeColor = '#348BF0';
var start = new paper.Point(100, 100);
var end = new paper.Point(200, 200);
path.moveTo(start);
path.lineTo(end);
console.log('width', path.bounds.width, 'height', path.bounds.height);
var svg = paper.project.exportSVG({asString:true});
fs.writeFileSync('punchline.svg', svg);

Parse.com adding node.js modules to Cloud

I'm trying to add the gm module to my Cloud Code. Since Parse is not a Node.js environment, I made small changes to the other modules I used, but the gm module requires a lot of Node core modules. Do I have to push all of the sub-modules to Parse? Also, how can I add the core modules? Changing require('xxx') to require('xxx/index.js') or require('xxx/xxx.js') failed.
These are the modules I could find that gm depends on, and I changed these files. I included all the files in the modules but only changed the following ones:
- gm/index.js
- events/events.js
- util/util.js
- stream/index.js
- emitter/index.js (also depends on util)
- asynclist/index.js
- eventproxy/index.js
Changing all of these gives this error:
Result: Error: Module ./lib/eventproxy not found
at libs/eventproxy/index.js:1:18
at libs/asynclist/index.js:1:18
at libs/emitter/index.js:1:17
at libs/stream/index.js:22:15
at libs/gm/index.js:6:14
at main.js:118:14
// This error is caused by ./lib/eventproxy.
// It must be cloud/libs/asynclist/node_modules/eventproxy/lib/eventproxy.js
// Parse doesn't recognize './' as this folder in cloud code
where the require part of gm/index.js is:
var Stream = require('cloud/libs/stream/index.js').Stream;
var EventEmitter = require('cloud/libs/events/events.js').EventEmitter;
var util = require('cloud/libs/util/util.js');
My cloud folder structure is
cloud/libs/gm
cloud/libs/events
cloud/libs/util
cloud/libs/stream
cloud/libs/emitter
cloud/libs/asynclist
cloud/libs/eventproxy
EDIT: I found more dependencies. According to this, gm has 36 dependent libraries, and I clearly need a simpler solution.
EDIT: for the relative-path problem with Parse, this is a solution, but as I said, I need a simple way to handle the whole problem (the rewrite is illustrated below).
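To illustrate that path rewrite (my own example of the fix the error comment above describes, assuming the usual one-line re-export in eventproxy's index.js), the require would change from the relative form to a cloud-rooted path:

// Before: breaks on Parse, which does not resolve './' in Cloud Code
module.exports = require('./lib/eventproxy');

// After: rooted at the cloud/ folder, pointing at the copy under cloud/libs/eventproxy listed above
module.exports = require('cloud/libs/eventproxy/lib/eventproxy.js');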
