Gremlin Javascript order by function - node.js

I use gremlin-javascript (with AWS Neptune) to traverse the remote graph and get a list of vertices. I want to order the vertices by their createdAt date property, but since I have multiple order().by() steps, I want to group them by week.
const gremlin = require('gremlin')
const moment = require('moment')
const { Graph } = gremlin.structure
const { DriverRemoteConnection } = gremlin.driver
const __ = gremlin.process.statics
const { order } = gremlin.process
const getWeek = date => parseInt(moment(date).format('YYYYWW'), 10)
const graph = new Graph()
const dc = new DriverRemoteConnection(endpointNeptune)
const g = graph.traversal().withRemote(dc)
g.V().order().by(getWeek(__.values('createdAt')), order.decr)
But this throws an error: "Could not locate method: NeptuneGraphTraversal.by([202029, decr])"
Thank you in advance

Copying my earlier comment to an answer:
The by modulator is expecting a key name, not a literal value. Something like
g.V().group().by('createdAt').order(local).by(keys,desc)
or perhaps depending on your data model
g.V().order().by('createdAt', order.desc)
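If you also need the week-level grouping, one option is to let Gremlin do the ordering and bucket the results by week on the client, reusing the getWeek helper from the question. This is only a sketch and has not been run against Neptune:
// assumes this runs inside an async function
const { order } = gremlin.process
// Fetch vertices ordered by createdAt (newest first), then group them by week in JS.
const vertices = await g.V()
  .order().by('createdAt', order.desc)
  .valueMap(true)
  .toList()
const byWeek = vertices.reduce((acc, v) => {
  // valueMap() returns property values as arrays, so take the first entry.
  const week = getWeek(v.get('createdAt')[0])
  ;(acc[week] = acc[week] || []).push(v)
  return acc
}, {})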

Related

can not get cascade DDS value for SharedObjectSequence

I have a test like this, but I cannot get the 'sharedMap' value in 'sharedSeq1', and I don't know how to get the 'remoteFluidObjectHandle' value.
import {MockContainerRuntimeFactory, MockFluidDataStoreRuntime, MockStorage} from "@fluidframework/test-runtime-utils";
import {SharedObjectSequence, SharedObjectSequenceFactory} from "@fluidframework/sequence";
import * as mocks from "@fluidframework/test-runtime-utils";
import {SharedMap} from "@fluidframework/map";
import {IFluidHandle} from "@fluidframework/core-interfaces";
const mockRuntime: mocks.MockFluidDataStoreRuntime = new mocks.MockFluidDataStoreRuntime();
describe('SharedObjectSequence', function () {
  it('should get synchronization data from another shared object', async function () {
    const dataStoreRuntime1 = new MockFluidDataStoreRuntime();
    const sharedSeq1: SharedObjectSequence<IFluidHandle<SharedMap>> = new SharedObjectSequence(mockRuntime, 'shareObjectSeq1', SharedObjectSequenceFactory.Attributes);
    const containerRuntimeFactory = new MockContainerRuntimeFactory();
    dataStoreRuntime1.local = false;
    const containerRuntime1 = containerRuntimeFactory.createContainerRuntime(
      dataStoreRuntime1,
    );
    const services1 = {
      deltaConnection: containerRuntime1.createDeltaConnection(),
      objectStorage: new MockStorage(),
    };
    sharedSeq1.initializeLocal();
    sharedSeq1.connect(services1);
    const dataStoreRuntime2 = new MockFluidDataStoreRuntime();
    const containerRuntime2 = containerRuntimeFactory.createContainerRuntime(
      dataStoreRuntime2,
    );
    const services2 = {
      deltaConnection: containerRuntime2.createDeltaConnection(),
      objectStorage: new MockStorage(),
    };
    const sharedSeq2: SharedObjectSequence<IFluidHandle<SharedMap>> = new SharedObjectSequence(mockRuntime, 'shareObjectSeq2', SharedObjectSequenceFactory.Attributes);
    sharedSeq2.initializeLocal();
    sharedSeq2.connect(services2);
    // insert a node into sharedSeq2; it will sync to sharedSeq1
    sharedSeq2.insert(0, [<IFluidHandle<SharedMap>>new SharedMap('sharedMapId', mockRuntime, SharedMap.getFactory().attributes).handle]);
    containerRuntimeFactory.processAllMessages();
    // the next assertion passes, which shows that the change from sharedSeq2 arrived
    expect(sharedSeq1.getLength()).toBe(1);
    const remoteFluidObjectHandle = await sharedSeq1.getRange(0, 1)[0];
    // here I get the error "Cannot read property 'mimeType' of null", caused by remoteFluidObjectHandle.ts:51:30
    const sharedMap = await remoteFluidObjectHandle.get();
    expect(sharedMap).not.toBeUndefined();
  });
});
Running this test produces the error "Cannot read property 'mimeType' of null", caused by remoteFluidObjectHandle.ts:51:30.
The fluid mocks have very limited and specific behaviors, and it looks like you are hitting their limits. You'll have better luck with an end-to-end test; see packages\test\end-to-end-tests. These use the same in-memory server as the playground on fluidframework.com. The in-memory server uses the same code as tinylicious, our single-process server, and routerlicious, our Docker-based reference implementation.

how to parse a value to a 2Dtext in spark ar

I have an X value representing how much I open my mouth in Spark AR. How can I show that X value in a 2D text object? I expect the text to show the number value while the effect is running.
You should use this script:
const Scene = require('Scene');
// Use export keyword to make a symbol available in scripting debug console
export const Diagnostics = require('Diagnostics');
// To use variables and functions across files, use export/import keyword
// export const animationDuration = 10;
// Use import keyword to import a symbol from another file
// import { animationDuration } from './script.js'
// To access scene objects
// const directionalLight = Scene.root.find('directionalLight0');
// To access class properties
// const directionalLightIntensity = directionalLight.intensity;
// To log messages to the console
Diagnostics.log('Console message logged from the script.');
const Patches = require('Patches');
const numberFormat = '{0}';
const number = Patches.getScalarValue('number');
Patches.setStringValue('value', number.format(numberFormat));

How to increase Excel sheet column width while downloading in React? I am using a package called "export-from-json"

downloadPropertiesInXl = async () => {
  let API_URL = "something....";
  const property = await axios.get(API_URL);
  const data = property.data;
  const fileName = "download";
  const exportType = "xls";
  exportFromJSON({ data, fileName, exportType });
};
Is there any other package to change the column width?
Use ExcelJS; it has many options for customization:
https://www.npmjs.com/package/exceljs#columnsex
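For example, here is a rough sketch of generating the file with explicit column widths and triggering a browser download. The column names ('Name', 'Price') and the shape of the row objects are placeholders for your own data:
const ExcelJS = require('exceljs');

const downloadWithColumnWidths = async (rows) => {
  const workbook = new ExcelJS.Workbook();
  const sheet = workbook.addWorksheet('Properties');
  // Column widths are set per column; the keys must match the fields in your row objects.
  sheet.columns = [
    { header: 'Name', key: 'name', width: 40 },
    { header: 'Price', key: 'price', width: 15 },
  ];
  sheet.addRows(rows);
  // Build the file in memory and trigger a browser download.
  const buffer = await workbook.xlsx.writeBuffer();
  const blob = new Blob([buffer], { type: 'application/vnd.openxmlformats-officedocument.spreadsheetml.sheet' });
  const link = document.createElement('a');
  link.href = URL.createObjectURL(blob);
  link.download = 'download.xlsx';
  link.click();
};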

Organizing requires and moving them to document Top

I am organizing code in an app. The require statements are unorganized, so I made this codemod to sort them and move them to the top of the file.
The codemod works almost perfectly, but I have some doubts:
Is this an OK approach, or is there a more correct way to use the API?
How can I keep the empty line between sourceStart (all the requires) and the rest of the source code?
Can a similar approach be used for ES6 imports (i.e. to sort them with jscodeshift)?
My initial code:
var path = require('path');
var stylus = require('stylus');
var express = require('express');
var router = express.Router();
var async = require('async');
let restOfCode = 'foo';
My codemod:
let requires = j(file.source).find(j.CallExpression, {
  "callee": {
    "name": "require"
  }
}).closest(j.VariableDeclarator);
let sortedNames = requires.__paths.map(node => node.node.id.name).sort(sort); // ["async", "express", "path", "stylus"]
let sortedRequires = [];
requires.forEach(r => {
  let index = sortedNames.indexOf(r.node.id.name);
  sortedRequires[index] = j(r).closest(j.VariableDeclaration).__paths[0]; // <- feels like a hack
});
let sourceStart = j(sortedRequires).toSource();
let sourceRest = j(file.source).find(j.CallExpression, {
  "callee": {
    "name": "require"
  }
}).closest(j.VariableDeclaration)
  .replaceWith((vD, i) => {
    // return nothing, it will be replaced on top of document
  })
  .toSource();
return sourceStart.concat(sourceRest).join('\n'); // is there a better way than [].concat(string).join(newLine) ?
And the result I got:
var async = require('async');
var express = require('express');
var path = require('path');
var stylus = require('stylus');
var router = express.Router(); // <- I would expect an empty line before this one
let restOfCode = 'foo';
Is this an OK approach, or is there a more correct way to use the API?
You shouldn't be accessing __paths directly. If you need to access all NodePaths, you can use the .paths() method. If you want to access the AST nodes, use .nodes().
E.g., the mapping would just be:
let sortedNames = requires.nodes().map(node => node.id.name).sort(sort);
How can I keep the empty line between sourceStart (all the requires) and the rest of the source code?
There isn't really a good way to do this. See this related recast issue. Hopefully this will become easier one day with CSTs.
Can a similar approach be used for ES6 imports (i.e. to sort them with jscodeshift)?
Certainly.
FWIW, here is my version (based on your first version):
export default function transformer(file, api) {
  const j = api.jscodeshift;
  const sort = (a, b) => a.declarations[0].id.name.localeCompare(
    b.declarations[0].id.name
  );
  const root = j(file.source);
  const requires = root
    .find(j.CallExpression, {"callee": {"name": "require"}})
    .closest(j.VariableDeclaration);
  const sortedRequires = requires.nodes().sort(sort);
  requires.remove();
  return root
    .find(j.Statement)
    .at(0)
    .insertBefore(sortedRequires)
    .toSource();
}
https://astexplorer.net/#/i8v3GBENZ7
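For the ES6 import question, the same pattern should work with ImportDeclaration nodes. A rough, untested sketch that sorts by module path rather than by local name:
export default function transformer(file, api) {
  const j = api.jscodeshift;
  const root = j(file.source);
  const imports = root.find(j.ImportDeclaration);
  // Sort by the module specifier ('./foo', 'express', ...).
  const sortedImports = imports.nodes().sort(
    (a, b) => a.source.value.localeCompare(b.source.value)
  );
  imports.remove();
  return root
    .find(j.Statement)
    .at(0)
    .insertBefore(sortedImports)
    .toSource();
}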

PDF to Text extractor in nodejs without OS dependencies

Is there a way to extract text from PDFs in Node.js without any OS dependencies (like pdf2text or xpdf on Windows)? I wasn't able to find any 'native' PDF packages for Node.js; they are always a wrapper/utility on top of an existing OS command.
Thanks
Have you checked PDF2Json? It is built on top of PDF.js. It does not provide the text output as a single line, but I believe you can just reconstruct the final text based on the generated JSON output:
'Texts': an array of text blocks with position, actual text and styling information:
'x' and 'y': relative coordinates for positioning
'clr': a color index in the color dictionary, same 'clr' field as in the 'Fill' object. If a color can be found in the color dictionary, an 'oc' field will be added to the field as the 'original color' value.
'A': text alignment, including:
left
center
right
'R': an array of text run, each text run object has two main fields:
'T': actual text
'S': style index from style dictionary. More info about 'Style Dictionary' can be found at 'Dictionary Reference' section
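As a rough sketch (untested, and the exact output shape varies between pdf2json versions), the reconstruction could look like this:
const PDFParser = require('pdf2json');

const pdfParser = new PDFParser();
pdfParser.on('pdfParser_dataError', errData => console.error(errData.parserError));
pdfParser.on('pdfParser_dataReady', pdfData => {
  // Depending on the pdf2json version, pages may be under pdfData.Pages or pdfData.formImage.Pages.
  const pages = pdfData.Pages || pdfData.formImage.Pages;
  const text = pages
    .map(page => page.Texts
      // Each text block's 'R' runs hold the URI-encoded text in 'T'.
      .map(t => decodeURIComponent(t.R.map(r => r.T).join('')))
      .join(' '))
    .join('\n');
  console.log(text);
});
pdfParser.loadPDF('./sample.pdf');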
After some work, I finally got a reliable function for reading text from PDF using https://github.com/mozilla/pdfjs-dist
To get this to work, first npm install on the command line:
npm i pdfjs-dist
Then create a file with this code (I named the file "pdfExport.js" in this example):
const pdfjsLib = require("pdfjs-dist");

async function GetTextFromPDF(path) {
  let doc = await pdfjsLib.getDocument(path).promise;
  let page1 = await doc.getPage(1);
  let content = await page1.getTextContent();
  let strings = content.items.map(function(item) {
    return item.str;
  });
  return strings;
}

module.exports = { GetTextFromPDF }
Then it can simply be used in any other js file you have like so:
const pdfExport = require('./pdfExport');
pdfExport.GetTextFromPDF('./sample.pdf').then(data => console.log(data));
Thought I'd chime in here for anyone who comes across this question in the future.
I had this problem and spent hours going over literally all the PDF libraries on NPM. My requirement was that it needed to run on AWS Lambda, so it could not rely on OS dependencies.
The code below is adapted from another Stack Overflow answer (which I cannot currently find). The only difference is that we import the ES5 version, which works with Node >= 12. If you just import pdfjs-dist, there will be an error of "Readable Stream is not defined". Hope it helps!
import * as pdfjslib from 'pdfjs-dist/es5/build/pdf.js';

export default class Pdf {
  public static async getPageText(pdf: any, pageNo: number) {
    const page = await pdf.getPage(pageNo);
    const tokenizedText = await page.getTextContent();
    const pageText = tokenizedText.items.map((token: any) => token.str).join('');
    return pageText;
  }

  public static async getPDFText(source: any): Promise<string> {
    const pdf = await pdfjslib.getDocument(source).promise;
    const maxPages = pdf.numPages;
    const pageTextPromises = [];
    for (let pageNo = 1; pageNo <= maxPages; pageNo += 1) {
      pageTextPromises.push(Pdf.getPageText(pdf, pageNo));
    }
    const pageTexts = await Promise.all(pageTextPromises);
    return pageTexts.join(' ');
  }
}
Usage
const fileBuffer = fs.readFileSync('sample.pdf');
const pdfText = await Pdf.getPDFText(fileBuffer);
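Note that the ES5 build path may differ between pdfjs-dist versions; newer releases ship it as 'pdfjs-dist/legacy/build/pdf.js' instead of 'pdfjs-dist/es5/build/pdf.js', so adjust the import if the es5 path is not found.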
This solution worked for me using Node 14.20.1 with "pdf-parse": "^1.1.1".
You can install it with:
yarn add pdf-parse
This is the main function which converts the PDF file to text.
const path = require('path');
const fs = require('fs');
const pdf = require('pdf-parse');
const assert = require('assert');

const extractText = async (pathStr) => {
  assert(fs.existsSync(pathStr), `Path does not exist ${pathStr}`);
  const pdfFile = path.resolve(pathStr);
  const dataBuffer = fs.readFileSync(pdfFile);
  const data = await pdf(dataBuffer);
  return data.text;
};

module.exports = {
  extractText
}
Then you can use the function like this:
const { extractText } = require('../api/lighthouse/lib/pdfExtraction')
extractText('./data/CoreDeveloper-v5.1.4.pdf').then(t => console.log(t))
Instead of using the proposed PDF2Json, you can also use PDF.js directly (https://github.com/mozilla/pdfjs-dist). This has the advantage that you are not depending on modesty, the owner of PDF2Json, to keep it updated against the PDF.js base.
