TypeScript Multi-Dimensional Array Values Not Updating (to null) - node.js

What I am Doing
I am creating a Sudoku solver and generator in Vue. Right now, I have the solving algorithm set up and just need to generate new problems. I generate problems by creating a completed Sudoku grid (this part works, with no bugs), then removing nodes so that exactly one solution to the problem remains.
The Problem
When I try to access a node in the multi-dimensional array that represents the board and change it to null (what I am using to display a blank node), the board does not update that value. I am changing it with the following code: newGrid[pos[0]][pos[1]] = null; (where pos[0] is the row, pos[1] is the column, and newGrid is the grid we want to mutate). Note that the array contains 9 arrays, and each of those arrays holds 9 numbers (or nulls) which represent the values for the positions in the grid. To elaborate on the bug: if I put a console.log(newGrid), I see normal-looking values and no nulls.
What I Know and Have Tried
I know it has to do with this specific line, and with the fact that I am setting the value to null, because changing null to another value (e.g. newGrid[pos[0]][pos[1]] = 0;) works and changes the array. The reasons I don't just use a value other than null: null renders as nothing while other values (0) render as something (blank nodes should be blank); null is simple to understand in this situation (the node has null, the node has nothing, the node is blank); and null is already implemented throughout my codebase.
Additionally, if I use console.log(newGrid[pos[0]][pos[1]]), null (the correct output) is printed, even though console.log(newGrid) shows a number there, not null. Also, oddly enough, this works for one specific node: in row 1 (indexing starts at 0), column 8, null is set, even though the input (completed) grid is always different. Edit: this had to do with the input grid already having null there, so the code actually doesn't let any nulls be set.
To summarize: I expect an array with a null in the few positions I update, but I get a number instead. Also, there are no errors when the TypeScript compiles to JavaScript or during runtime.
Code
Given that I am not exactly sure where the problem is (e.g. maybe I create the array wrong), I am including the minimum code here, with a pastebin link to the whole file. To restate, the goal of this function is to remove nodes from the grid (by replacing them with null) in order to create a Sudoku puzzle with one solution. The code on Stack Overflow only includes part of the file; the pastebin link includes the rest.
// global.d.ts
type Nullable<T> = T | null;
type Grid = Array<Array<number | null>>;

import { Solver } from './Solve';

// Inside the function that does the main work; grid is an argument of this function
const rowLen: number = grid.length;
const colLen: number = grid[0].length;
let newGrid: Grid = grid;
let fullNodes = GetFirstFull(grid, colLen, rowLen);
let fullNodesLen: number = fullNodes.length;
// Some stuff that figures out how many solutions there are (we only want 1) is excluded
if (solutions != 1) {
  fullNodesLen++;
  rounds--;
} else {
  newGrid[pos[0]][pos[1]] = null;
}
Note that if anything seems confusing, check out the pastebin or ask. Thank you so much for taking the time to look at my problem!
Also, it isn't just 0 that works; undefined is also set correctly. So this problem seems to be something specific to the null keyword...
EDIT:
Given that no one has responded yet, I assume my problem is a bit hard, there isn't enough information, my post isn't good quality, or not enough people have seen it. To address the possibility that there isn't enough information, I am including the function that calls this one (just to see if that might be related).
generate(context: ActionContext<State, any>) {
  let emptyArray = new Array(9);
  for (let i = 0; i < 9; ++i)
    emptyArray[i] = [null, null, null, null, null, null, null, null, null];
  const fullGrid = Solver(emptyArray);
  // fall back to the current layout if the solver fails
  const puzzle = fullGrid ? Remover(fullGrid, 6) : context.state.gridLayout;
  context.commit('resetBoard', puzzle);
},
Note: if you aren't familiar with Vuex, context.commit changes the state (except it is changing a global store state rather than a component state). Given that this function isn't refactored or very easy to read in the first place, if you have any questions, please ask.
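To show the other side of that commit, here is a hypothetical sketch of what a resetBoard mutation can look like (my real mutation isn't shown here, so treat the shape as illustrative):

// Hypothetical sketch of a Vuex mutation matching the commit above;
// assumes the store's State has a gridLayout: Grid field.
const mutations = {
  resetBoard(state: State, puzzle: Grid): void {
    // swap the whole board in the global store state
    state.gridLayout = puzzle;
  },
};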
To address other potential problems: I have been working on this for a while. I have tried a lot of console.log()ing, changing the reference (newGrid) to a deep copy, moving code out of the if statements, verifying that the line executes, and changing the way the point on newGrid is set (e.g. by using newGrid.map() with logic to return that point as null, as sketched below). If you have any questions or I can help at all, please ask.
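For concreteness, a minimal sketch of that map-based variant (illustrative, not my exact code; grid and pos mean the same as in the snippet above):

// Rebuild the grid instead of mutating in place, returning the target
// cell as null and every other cell unchanged.
const withNull: Grid = grid.map((row, r) =>
  row.map((value, c) => (r === pos[0] && c === pos[1] ? null : value))
);

// Log a frozen snapshot rather than the live array, so later mutations
// can't change what the console displays.
console.log(JSON.stringify(withNull));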

Related

StormCrawler: setting "maxDepth": 0 prevents ES seed injection

With StormCrawler 2.3-SNAPSHOT, setting "maxDepth": 0 in urlfilters.json prevents the seed injection into the ES index. Is that the expected behaviour? Or should it inject the seeds and do a closed crawl on the injected seeds only, with no link following at all (what I was expecting)?
The launch looks fine, but the ES status index is empty.
See MaxDepthFilter: with a value of 0, everything gets filtered. Setting the filter to a value of 1 should do the trick; the seeds will be injected, but their links won't be followed.
In MaxDepthFilter:

private String filter(final int depth, final int max, final String url) {
  // deactivate the outlink no matter what the depth is
  if (max == 0) {
    return null;
  }
  if (depth >= max) {
    LOG.debug("filtered out {} - depth {} >= {}", url, depth, max);
    return null;
  }
  return url;
}
It turns out that URLs need to have a depth of max-1 to be kept; to put it differently, the actual maximum depth is max-1.
This doesn't feel right and is slightly confusing, I agree.
I think this is due to the sequence in which the outlinks get filtered; often, this is done in the StatusEmitterBolt.
At the moment they first get filtered, then inherit their metadata from the parent metadata. It is during that later step that their depth value gets incremented, and I suspect this is why we are doing the max-1 trick.
There probably was a reason why the filtering was done first and the metadata inheritance second, but it has been a while and I can't remember it. I would be happy to change the order (get the metadata, then filter) and make the depth filtering more intuitive. Could you please open an issue on GitHub so that we can discuss it there?
Thanks!

Why would you use the spread operator to spread a variable onto itself?

In the Google Getting started with Node.js tutorial, they perform the following operation:
data = {...data};
in the code for sending data to Firestore.
You can see it on their GitHub, line 63.
As far as I can tell this doesn't do anything.
Is there a good reason for doing this?
Is it potentially future proofing, so that if you added your own data you'd be less likely to do something like data = {data, moreData}?
Manu's answer details what the line of code is doing, but not why it's there.
I don't know exactly why the Google code example uses this approach, but I would guess at the following reason (and would do the same myself in this situation):
Because objects in JavaScript are passed by reference, it becomes necessary to rebuild the data object from its constituent parts to avoid the original data object being further modified by the ref.set(data) call on line 64 of the example code:
await ref.set(data);
For example, in MongoDB, when you pass an object into a write or update method, Mongo will actually modify the object to add extra properties such as the datetime it was inserted into a collection or its ID within the collection. I don't know for sure whether Firestore does the same, but if it doesn't now, it may in future. If it does, and if the original code that calls the update method from Google's example goes on to further manipulate the data object it passed in, that object would now have extra properties on it that may cause unexpected problems. Therefore, it's prudent to rebuild the data object from the original object's properties to avoid contamination of the original object elsewhere in the code.
I hope that makes sense - the more I think about it, the more I'm convinced that this must be the reason and it's actually a great learning point.
I include the full original function from Google's code here in case others come across this in future, since the code is subject to change (copied from https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/bookshelf/books/firestore.js at the time of writing this answer):
// Creates a new book or updates an existing book with new data.
async function update(id, data) {
  let ref;
  if (id === null) {
    ref = db.collection(collection).doc();
  } else {
    ref = db.collection(collection).doc(id);
  }
  data.id = ref.id;
  data = {...data};
  await ref.set(data);
  return data;
}
It's making a shallow copy of data; let's say you have a third-party function that mutates the input:
const foo = input => {
  input['changed'] = true;
}
And you need to call it, but don't want to get your object modified, so instead of:
data = {life: 42}
foo(data)
// > data
// { life: 42, changed: true }
You may use the Spread Syntax:
data = {life: 42}
foo({...data})
// > data
// { life: 42 }
Not sure if this is the particular case with Firestore, but the thing is: spreading an object gives you a shallow copy of that object.
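To spell out the shallow part, here is a minimal sketch (a made-up object, not from the Google example) showing that nested objects are still shared after spreading:

const data = { life: 42, nested: { count: 0 } };
const copy = { ...data };

copy.nested.count = 1; // mutates the object shared with `data`
console.log(data.nested.count); // 1, the original sees the change

copy.life = 0; // top-level properties are independent
console.log(data.life); // still 42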

How to prevent loops in jointjs / rappid

I'm building an application which uses JointJS / Rappid, and I want to be able to prevent loops from occurring across multiple cells.
JointJS already has some examples of how to avoid this within a single cell (connecting an "out" port to an "in" port of the same cell) but has nothing on how to detect and prevent loops occurring further up the chain.
To help understand, imagine each cell in the paper is a step to be completed. Each step should only ever be run once. If the last step has an "out" port that connects to the "in" port of the first cell, it will just loop forever. This is what I want to avoid.
Any help is greatly appreciated.
I actually found a really easy way to do this, for anyone else who wishes to achieve the same thing. Simply include the graphlib dependency and use the following:
paper.on("link:connect", function(linkView) {
  if (graphlib.alg.findCycles(graph.toGraphLib()).length > 0) {
    linkView.model.remove();
    // show some error message here
  }
});
This line:
graphlib.alg.findCycles(graph.toGraphLib())
returns an array containing any cycles, so by checking its length we can determine whether the paper contains any loops and, if so, remove the link the user is trying to create.
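For context, a minimal sketch of the shape findCycles returns, based on graphlib's documented behaviour (the node ids here are made up):

// A graphlib graph with a single cycle a -> b -> c -> a
var g = new graphlib.Graph();
g.setEdge("a", "b");
g.setEdge("b", "c");
g.setEdge("c", "a");

graphlib.alg.findCycles(g); // => one array of node ids per cycle, e.g. [["a", "b", "c"]]
graphlib.alg.findCycles(new graphlib.Graph()); // => [], no cycles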
Note: this isn't completely foolproof, because if the paper already contains a loop (before the user adds a link), simply removing the link the user is creating won't remove the existing loop. For me this is fine, because all of my papers will be created from scratch, so as long as this logic is always in place, no loops can ever be created.
Solution through graphlib
Based on Adam's graphlib solution: instead of using findCycles to test for loops, the graphlib docs suggest the isAcyclic function, which:
returns true if the graph has no cycles and returns false if it does. This algorithm returns as soon as it detects the first cycle.
Therefore this condition:
if(graphlib.alg.findCycles(graph.toGraphLib()).length > 0)
Can be shortened to:
if (!graphlib.alg.isAcyclic(graph.toGraphLib()))
JointJS functions solution
Look up the arrays of ancestors and successors of a newly connected element and intersect them:
// invoke inside an event which tests if a specific `connectedElement` is part of a loop
function isElementPartOfLoop(graph, connectedElement) {
  var elemSuccessors = graph.getSuccessors(connectedElement, { deep: true });
  var elemAncestors = connectedElement.getAncestors();
  // *** OR *** graph.getPredecessors(connectedElement, { deep: true });
  var commonElements = _.intersection(elemSuccessors, elemAncestors);
  // if an element is repeated (non-empty intersection), then it's part of a loop
  return !_.isEmpty(commonElements);
}
I haven't tested this, but the theory behind the test you are trying to accomplish should be similar.
This solution is not as efficient as using the graphlib functions directly.
Prevention
One way you could prevent the link from being added to the graph is by dealing with it in an event:
graph.on('add', _.bind(addCellOps, graph));

function addCellOps(cell, collection, opt) {
  if (cell.isLink()) {
    // test the link's target element: if it is part of a loop, remove the link
    var linkTarget = cell.getTargetElement();
    // `this` is the graph
    if (linkTarget && isElementPartOfLoop(this, linkTarget)) {
      cell.remove();
    }
  }
  // other operations ....
}

How to maintain counters with LinqToObjects?

I have the following c# code:
private XElement BuildXmlBlob(string id, Part part, out int counter)
{
    // return some unique xml particular to the parameters passed
    // remember to increment the counter also before returning
}
Which is called by:
var counter = 0;
result.AddRange(from rec in listOfRecordings
                from par in rec.Parts
                let id = GetId("mods", rec.CKey + par.UniqueId)
                select BuildXmlBlob(id, par, counter));
Above code samples are symbolic of what I am trying to achieve.
According to Eric Lippert, the out keyword and LINQ do not mix. OK, fair enough, but can someone help me refactor the above so it does work? A colleague at work mentioned accumulators and aggregate functions, but I am a novice at LINQ and my Google searches weren't bearing any real fruit, so I thought I would ask here :).
To Clarify:
I am counting the number of parts, and there could be any number of them each time the code is called. Every time the BuildXmlBlob() method is called, the resulting XML will have a unique element in it denoting the 'partNumber'.
So if the counter is currently on 7, that means we are processing the 7th part so far. The XML returned from BuildXmlBlob() will have the counter value embedded in it somewhere. That's why I need the counter to be passed in and incremented every time BuildXmlBlob() is called.
If you want to keep this purely in LINQ and you need to maintain a running count for use within your queries, the cleanest way is to use the Select() overload that includes the index in the query to get the current index.
In this case, it is cleaner to write a query which collects the inputs first, then use the overload to do the projection.
var inputs =
    from recording in listOfRecordings
    from part in recording.Parts
    select new
    {
        Id = GetId("mods", recording.CKey + part.UniqueId),
        Part = part,
    };

result.AddRange(inputs.Select((x, i) => BuildXmlBlob(x.Id, x.Part, i)));
Then you wouldn't need to use the out/ref parameter.
XElement BuildXmlBlob(string id, Part part, int counter)
{
    // implementation
}
Below is what I managed to figure out on my own:
result.AddRange(listOfRecordings.SelectMany(rec => rec.Parts, (rec, par) => new { rec, par })
    .Select(@t => new
    {
        @t,
        Id = GetStructMapItemId("mods", @t.rec.CKey + @t.par.UniqueId)
    })
    .Select((@t, i) => BuildPartsDmdSec(@t.Id, @t.@t.par, i)));
I used ReSharper to convert it into a method chain, which constructed the basics of what I needed, and then I simply tacked the select statement on at the end.

Increase Hashmap Index Without Looping

I have been working on a clustering algorithm. I decided to use a HashMap to store the points, thinking that I can use the key as the cluster ID and the value as the point. I do a DFS-style search to identify the nearest point; my calculation-related work and all the looping over the data take place outside of the method in which I identify the clusters.
The intention of this clustering is that if a point belongs to the same cluster, its ID remains the same. What I want to find out is: once I enter a value in the HashMap, how can I increase the index for the next value (the key would be the same) without using a loop?
Here is what my method looks like. I took some of the algorithm's content out, since it is not really relevant to the question.
public void dfsNearest(double point) {
    double aPointInCluster = point;
    if (!cluster.contains(aPointInCluster)) {
        ...
        this.setNumOfClusters(this.getNumOfClusters() + 1);
        mapOfCluster.put(this.getNumOfClusters(), aPointInCluster);
        // after this I want to increase the index so no override happens
    }
    ...
    if (newNeighbor != 0.0) {
        cluster.add(newNeighbor);
        mapOfCluster.put(this.getNumOfClusters(), newNeighbor);
        // want to increase the index....
        ...
        if (!visitedMap.containsKey(newNeighbor)) {
            dfsNearest(newNeighbor);
        }
    }
    ...
}
Thanks for any suggestions. Also, please let me know if the rest of the code is necessary to make a good decision; I just wanted to keep it simple.
