How can I get the label of an equation? I'm attempting to reprocess an equation with a label, but I have to delete the label from MathJax.Extension["TeX/AMSmath"].labels first, for which the label must be known...
I know I can scan through the source text for the label, e.g. MathJax.Hub.getAllJax("mathDiv")[0].SourceElement().find("\label{") (...), but this seems needlessly complicated. Is there a better way?
There's no built-in API for this.
If you don't need to keep labels, then the reset in the comment above is probably the best way to go about it:
MathJax.Extension["TeX/AMSmath"].labels = {}
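For completeness, a hedged usage sketch (MathJax v2 API; "mathDiv" is the container from the question): clear the label map, then queue a reprocess so the labels still present in the source get re-registered.
// Hedged sketch: wipe all registered labels, then reprocess the container.
// Reprocess is part of the MathJax v2 Hub API; depending on configuration,
// eqlabels may need the same treatment (an assumption, not verified here).
MathJax.Extension["TeX/AMSmath"].labels = {};
MathJax.Hub.Queue(["Reprocess", MathJax.Hub, document.getElementById("mathDiv")]);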
A quick and dirty way to get the IDs is to leverage the fact that they end up in the output. So you can just get all the IDs in the output, e.g.,
const math = MathJax.Hub.getAllJax()[0];
// The typeset output sits just before the script element that holds the jax
const nodesWithIds = document.getElementById(math.root.inputID).previousSibling.querySelectorAll('[id]');
const ids = [];
for (const node of nodesWithIds) ids.push(node.id);
A cleaner and perhaps conceptually easier way would be to leverage MathML (which is essentially the internal format): the \label{} always ends up on an mlabeledtr. The trouble is that you'd have to re-parse that, e.g.,
const temp = document.createElement('span');
temp.innerHTML = math.root.toMathML();
// \label{}s always end up on an mlabeledtr, so only those IDs are collected
const nodesWithIds = temp.querySelectorAll('mlabeledtr [id]');
const ids = [];
for (const node of nodesWithIds) ids.push(node.id);
This makes sure the array only contains relevant IDs (and the contents of the nodes should correspond to the \label{} values).
I suppose with helper libraries it might be easier to dive into the math.root object directly and look for IDs recursively (in its data key).
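For illustration, a minimal recursive sketch of that idea (the data key holding child nodes is the only structural assumption here; none of this is a public API):
function collectIds(node, ids = []) {
  // Walk the internal jax tree, collecting every id we encounter
  if (node == null || typeof node !== 'object') return ids;
  if (node.id) ids.push(node.id);
  if (Array.isArray(node.data)) node.data.forEach(child => collectIds(child, ids));
  return ids;
}
const allIds = collectIds(math.root);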
The quickest method I have found is to just convert the ItemPaged object to a list using list() and then I'm able to manipulate/extract using a Pandas DataFrame. However, if I have millions of results, the process can be quite time-consuming, especially if I only want every nth result over a certain time-frame, for instance. Typically, I would have to query the entire time-frame and then re-loop to only obtain every nth element. Does anyone know a more efficient way to use query_entities OR how to more efficiently return every nth item from ItemPaged or more explicitly from table.query_entities? Portion of my code below:
connection_string = "connection string here"
service = TableServiceClient.from_connection_string(conn_str=connection_string)
table_string = ""
table = service.get_table_client(table_string)
entities = table.query_entities(filter, select, etc.)
results = pd.DataFrame(list(entities))
After reproducing from my end, one way to achieve your requirement is to use get_entity() instead of query_entities(). Below is the complete code that worked for me.
entity = tableClient.get_entity(partition_key='<PARTITION_KEY>', row_key='<ROW_KEY>')
print("Results using get_entity :")
print(format(entity))
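For the every-nth part of the question, a hedged sketch reusing the entities iterator from the question's code: ItemPaged is a plain iterator, so itertools.islice can step through it without first materializing the whole result set as a list. This still downloads every page, but only keeps the sampled rows in memory.
import itertools

# Keep only every nth entity while streaming the pages
n = 10  # hypothetical sampling step
results = pd.DataFrame(list(itertools.islice(entities, 0, None, n)))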
I need to transform a large array of JSON (that can have over 100k positions) into a CSV.
This array is created directly in the application, it's not the result of an uploaded file.
Looking at the documentation, I've thought of using parser, but it says that:
For that reason, it's rarely a good choice unless your data is very small or your application doesn't do anything else.
Because the data is not small and my app will do other things besides creating the CSV, I don't think it's the best approach, but I may be misunderstanding the documentation.
Is it possible to use the others options (async parser or transform) with an already created data (and not a stream of data)?
FYI: It's a NestJS application, but I'm using this Node.js lib.
Update: I've tried it with an array of over 300k positions, and it went smoothly.
Why do you need any external modules?
Converting JSON into a JavaScript array of objects is a piece of cake with the native JSON.parse() function.
import fs from 'fs/promises'; // top-level await needs an ES module context

let jsontxt = await fs.readFile('mythings.json', 'utf8');
let mythings = JSON.parse(jsontxt);
if (!Array.isArray(mythings)) throw new Error("Oooops, stranger things happen!");
And, then, converting a javascript array into a CSV is very straightforward.
The most obvious and absurd case is just mapping every element of the array into a string that is the JSON representation of that element, then joining the resulting strings into a single string separated by newlines (\n). You end up with a useless CSV with a single column containing every element of your original array. It's good for nothing but, heck, it's a CSV!
let csvtxt = mythings.map(JSON.stringify).join("\n");
await fs.writeFile("mythings.csv",csvtxt,"utf8");
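A tiny illustration with hypothetical data (not from the original post):
let mythings = [{ id: 1, name: "a" }, { id: 2, name: "b" }];
let csvtxt = mythings.map(JSON.stringify).join("\n");
// csvtxt is now '{"id":1,"name":"a"}\n{"id":2,"name":"b"}'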
Now, you can feel that you are almost there. Replace the useless mapping function with your own
let csvtxt = mythings.map(mapElementToColumns).join("\n");
and choose a good mapping between the fields of the objects of your array, and the columns of your csv.
function mapElementToColumns(element) {
return `${JSON.stringify(element.id)},${JSON.stringify(element.name)},${JSON.stringify(element.value)}`;
}
or, in a more thorough way
function mapElementToColumns(fieldNames) {
  return function (element) {
    // Note: falsy field values (0, false, "") also fall back to '""' here
    let fields = fieldNames.map(n => element[n] ? JSON.stringify(element[n]) : '""');
    return fields.join(',');
  };
}
that you may invoke in your map
mythings.map(mapElementToColumns(["id","name","value"])).join("\n");
Finally, you might decide to use an automated for "all fields in all objects" approach; which requires that all the objects in the original array maintain a similar fields schema.
You extract all the fields of the first object of the array, and use them as the header row of the csv and as the template for extracting the rest of the elements.
let fieldnames = Object.keys(mythings[0]);
and then use this field names array as parameter of your map function
let csvtxt = mythings.map(mapElementToColumns(fieldnames)).join("\n");
and, also, prepending them as the CSV header (csvtxt is already a single string at this point, so this is plain concatenation rather than an array unshift):
csvtxt = fieldnames.join(',') + "\n" + csvtxt;
Putting all the pieces together...
function mapElementToColumns(fieldNames) {
  return function (element) {
    let fields = fieldNames.map(n => element[n] ? JSON.stringify(element[n]) : '""');
    return fields.join(',');
  };
}

import fs from 'fs/promises';

let jsontxt = await fs.readFile('mythings.json', 'utf8');
let mythings = JSON.parse(jsontxt);
if (!Array.isArray(mythings)) throw new Error("Oooops, stranger things happen!");
let fieldnames = Object.keys(mythings[0]);
let csvtxt = fieldnames.join(',') + "\n" + mythings.map(mapElementToColumns(fieldnames)).join("\n");
await fs.writeFile("mythings.csv", csvtxt, "utf8");
And that's it. Pretty neat, huh?
Take a windowed virtual list with the capability of loading an arbitrary range of rows at any point in the list, such as in the following example.
The virtual list provides a callback that is called anytime the user scrolls to some rows that have not been fetched from the backend yet, and provides the start and stop indexes, so that, in an offset based pagination endpoint, I can fetch the required items without fetching any unnecessary data.
const loadMoreItems = (startIndex, stopIndex) => {
fetch(`/items?offset=${startIndex}&limit=${stopIndex - startIndex}`);
}
I'd like to replace my offset based pagination with a cursor based one, but I can't figure out how to reproduce the above logic with it.
The main issue is that I feel like I will need to download all the items before startIndex in order to receive the cursor needed to fetch the items between startIndex and stopIndex.
What's the correct way to approach this?
After some investigation I found what seems to be the way MongoDB approaches the problem:
https://docs.mongodb.com/manual/reference/method/cursor.skip/#mongodb-method-cursor.skip
Obviously, the same approach can be adopted by any other backend implementation.
They provide a skip method that allows skipping an arbitrary number of items after the provided cursor.
This means my sample endpoint would look like the following:
/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}
I then need to figure out the cursor and the skip values.
The following code could work to find the closest available cursor, given I store them together with the items:
// Limit our search only to items before startIndex
const fragment = items.slice(0, startIndex);
// Find the closest item that carries a cursor; findIndex runs on the
// reversed copy, so the offset counts from the end of the fragment
// (it is -1 if no cursor-bearing item has been loaded yet)
const offsetFromEnd = fragment.reverse().findIndex(item => item.cursor != null);
const cursorIndex = fragment.length - 1 - offsetFromEnd;
// Get the cursor stored on that item
const cursor = items[cursorIndex].cursor;
And of course, I also have a way to know the skip value:
const skip = items.length - 1 - cursorIndex;
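Putting it together, a hedged sketch of what the cursor-based loadMoreItems could look like (findClosestCursor is a hypothetical helper wrapping the two snippets above; the endpoint shape is the one proposed earlier, not a published API):
const loadMoreItems = (startIndex, stopIndex) => {
  // Hypothetical helper returning { cursor, skip } computed as shown above
  const { cursor, skip } = findClosestCursor(items, startIndex);
  return fetch(`/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}`);
};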
I have a list of valid values that I am storing in a data store. This list is about 20 items long now and will likely grow to around 100, maybe more.
I feel there are a variety of reasons it makes sense to store this in a data store rather than just storing in code. I want to be able to maintain the list and its metadata and make it accessible to other services, so it seems like a micro-service data store.
But in code, we want to make sure only values from the list are passed, and they can typically be hardcoded. So we would like to create an enum that can be used in code to ensure that valid values are passed.
I have created a simple Node.js script that can generate a JS file with the enum right from the data store. This could be regenerated anytime the data changes, or maybe on a schedule. But sharing the enum file with any Node.js applications that use it would not be trivial.
Has anyone done anything like this? Any reason why this would be a bad approach? Any feedback is welcome.
Piggy-backing off of this answer, which describes a way of creating an "enum" in JavaScript: you can grab the list of constants from your server (via an HTTP call) and then generate the enum in code, without the need for creating and loading a JavaScript source file.
Given that you have loaded your enumConstants from the back-end (here I hard-coded them):
const enumConstants = [
'FIRST',
'SECOND',
'THIRD'
];
const temp = {};
for (const constant of enumConstants) {
temp[constant] = constant;
}
const PlaceEnum = Object.freeze(temp);
console.log(PlaceEnum.FIRST);
// Or, in one line
const PlaceEnum2 = Object.freeze(enumConstants.reduce((o, c) => { o[c] = c; return o; }, {}));
console.log(PlaceEnum2.FIRST);
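To make the "grab the list via an HTTP call" part concrete, a hedged sketch (the /api/enum-values endpoint is a placeholder for your own service):
// Hedged sketch: fetch the constants instead of hard-coding them.
async function loadEnum() {
  const res = await fetch('/api/enum-values'); // hypothetical endpoint
  const constants = await res.json();          // e.g. ['FIRST', 'SECOND', 'THIRD']
  return Object.freeze(constants.reduce((o, c) => { o[c] = c; return o; }, {}));
}
const PlaceEnum3 = await loadEnum(); // top-level await needs an ES module context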
It is not ideal for code analysis or when using a smart editor, because the object is not explicitly defined and the editor will complain, but it will work.
Another approach is just to use an array and look for its members.
const members = ['first', 'second', 'third'...]
// then test for the members
members.indexOf('first') // 0
members.indexOf('third') // 2
members.indexOf('zero') // -1
members.indexOf('your_variable_to_test') // does it exist in the "enum"?
Any index that is >= 0 means the value is a member of the list; -1 means it is not. This doesn't "lock" the object like Object.freeze (above), but I find it suffices for most of my similar scenarios.
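A small aside: if you only need a yes/no answer rather than the index, Array.prototype.includes reads more directly:
members.includes('your_variable_to_test') // true if it is in the "enum"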
I found this package and I tried to use it because I would like to see the differences between two JSON objects: https://www.npmjs.com/package/json-multilevel-delta
This is what I tried:
// row.old = "{\"current_page\":1,\"data\":[{\"id\":6430,\"name\":\"A random name\",\"code\":\"rname13\",\"description\":\"rname13test ...
// row.new = "{\"current_page\":1,\"data\":[{\"id\":6430,\"name\":\"A random name 2\",\"code\":\"rname13\",\"description\":\"rname13test ...
const jsonMultilevelDelta = require('json-multilevel-delta'); // the package linked above

const oldData = JSON.parse(row.old);
const newData = JSON.parse(row.new);
const difference = jsonMultilevelDelta.json(oldData, newData);
console.log(difference);
However, for some reason I am not getting any result. Am I using it wrong?
From looking at it, the package only finds differences by looking for missing properties, not by comparing values, so I don't know if it was designed to meet your requirements.
It also has low weekly downloads and little activity, so it's probably not something you want to depend on in a project, imo.
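To illustrate the distinction, a minimal hand-rolled sketch (not part of json-multilevel-delta) that recursively reports keys whose values differ:
function valueDiff(oldObj, newObj) {
  const diff = {};
  for (const key of new Set([...Object.keys(oldObj), ...Object.keys(newObj)])) {
    const a = oldObj[key], b = newObj[key];
    if (a && b && typeof a === 'object' && typeof b === 'object') {
      // Recurse into nested objects/arrays, keeping only non-empty results
      const nested = valueDiff(a, b);
      if (Object.keys(nested).length) diff[key] = nested;
    } else if (a !== b) {
      diff[key] = { old: a, new: b };
    }
  }
  return diff;
}
console.log(valueDiff(oldData, newData));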