JointJS: joint.dia.Graph.prototype.clear doesn't work

joint.dia.Graph.prototype.clear is not working for me. When I call clear() and log the graph, it still seems to contain all the elements. Is clear an async function?
this.graph.clear();
appLogger.info({graph: this.graph}, "Graph with zero elem");
gives
[2019-11-28T21:03:46+05:30] info: Graph with zero elem
graph: child
attributes:
cells: child
cellNamespace: null
graph: child {cid: "c1", attributes: {…}, _changing: false, _previousAttributes: {…}, changed: {…}, …}
length: 3
models: (3) [child, child, child]
_byId: {c2: child, 5d55f246-c572-4462-b6a0-1325040b2d1a: child, c3: child, 143b2670-c8b4-43d0-8007-c7860405f530: child, c4: child, …}
_events: {all: Array(1), add: Array(1), remove: Array(2), reset: Array(1), change:source: Array(1), …}
__proto__: Backbone.Collection
__proto__: Object
changed:
cells: child
cellNamespace: null
graph: child {cid: "c1", attributes: {…}, _changing: false, _previousAttributes: {…}, changed: {…}, …}
length: 3
models: (3) [child, child, child]
_byId: {c2: child, 5d55f246-c572-4462-b6a0-1325040b2d1a: child, c3: child, 143b2670-c8b4-43d0-8007-c7860405f530: child, c4: child, …}
_events: {all: Array(1), add: Array(1), remove: Array(2), reset: Array(1), change:source: Array(1), …}
__proto__: Backbone.Collection
__proto__: Object
cid: "c1"
_changing: false
Could you please help?

This method is pretty straightforward, no catches. It removes all cells one by one by calling remove() on each element/link. Have you overridden remove on any of the cells, especially the 3 that remained in the graph?
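If it helps to narrow it down, one thing worth trying (a minimal sketch using the standard joint.dia.Graph API) is to log a plain snapshot of the cell count instead of the live graph object, since some consoles and loggers expand object references lazily and can show later state:
this.graph.clear();
// Log a primitive snapshot; a lazily expanded view of the live graph can't affect this number.
appLogger.info({ cellCount: this.graph.getCells().length }, "Graph after clear");
If the count is 0 here but the graph later shows 3 models, something is re-adding cells after the clear rather than clear failing.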

Related

typeorm update relations: INSERT before DELETE leads to a duplicate violation

I have a manytomany relation.
A Coordination can have several countries.
It ends up with 3 tables: coordination, country and coordination_country_join
@Entity('coordination')
...
@ManyToMany(() => CountryEntity)
@JoinTable({
  joinColumn: {
    name: 'event_id_w',
    referencedColumnName: 'event_id_w',
  },
  inverseJoinColumn: {
    name: 'CountryUnCode',
    referencedColumnName: 'UN_Code',
  },
})
countries: string[];
When I save my Coordination with an array of countries, it works fine, but I noticed a weird sequence in the SQL statements.
To update the relations (the countries, i.e. the content of coordination_country_join), it does
INSERT INTO coordination_country_join ... all the new countries of the given coordination
then DELETE FROM coordination_country_join ... all the old countries of the relation.
This does not work when I save the coordination while no country has changed, because it tries to insert a pair (countryId, coordinationId) that already exists in the coordination_country_join table.
How can I fix this issue?
Thanks
query: SELECT "CoordinationEntity"."gid" AS "CoordinationEntity_gid", "CoordinationEntity"."objectid" AS "CoordinationEntity_objectid", "CoordinationEntity"."gdacsid" AS "CoordinationEntity_gdacsid", "CoordinationEntity"."type" AS "CoordinationEntity_type", "CoordinationEntity"."name" AS "CoordinationEntity_name", "CoordinationEntity"."coordinato" AS "CoordinationEntity_coordinato", "CoordinationEntity"."requestor" AS "CoordinationEntity_requestor", "CoordinationEntity"."activation" AS "CoordinationEntity_activation", "CoordinationEntity"."spacechart" AS "CoordinationEntity_spacechart", "CoordinationEntity"."glidenumbe" AS "CoordinationEntity_glidenumbe", "CoordinationEntity"."url" AS "CoordinationEntity_url", "CoordinationEntity"."date_creat" AS "CoordinationEntity_date_creat", "CoordinationEntity"."date_close" AS "CoordinationEntity_date_close", "CoordinationEntity"."status" AS "CoordinationEntity_status", "CoordinationEntity"."comment" AS "CoordinationEntity_comment", "CoordinationEntity"."event_id_w" AS "CoordinationEntity_event_id_w", ST_AsGeoJSON("CoordinationEntity"."the_geom")::json AS "CoordinationEntity_the_geom" FROM "coordination" "CoordinationEntity" WHERE "CoordinationEntity"."gid" IN ($1) -- PARAMETERS: [36]
query: SELECT "CoordinationEntity_countries_rid"."event_id_w" AS "event_id_w", "CoordinationEntity_countries_rid"."CountryUnCode" AS "CountryUnCode" FROM "country" "country" INNER JOIN "coordination_countries_country" "CoordinationEntity_countries_rid" ON ("CoordinationEntity_countries_rid"."event_id_w" = $1 AND "CoordinationEntity_countries_rid"."CountryUnCode" = "country"."UN_Code") ORDER BY "CoordinationEntity_countries_rid"."CountryUnCode" ASC, "CoordinationEntity_countries_rid"."event_id_w" ASC -- PARAMETERS: [50]
query: START TRANSACTION
query: INSERT INTO "coordination_countries_country"("event_id_w", "CountryUnCode") VALUES ($1, $2) -- PARAMETERS: [50,4]
query failed: INSERT INTO "coordination_countries_country"("event_id_w", "CountryUnCode") VALUES ($1, $2) -- PARAMETERS: [50,4]
error: error: duplicate key value violates unique constraint "PK_622c3d328cd639f1f6deb8f3874"
at Parser.parseErrorMessage (/home/florent/dev/smcs/api/node_modules/pg-protocol/src/parser.ts:369:69)
at Parser.handlePacket (/home/florent/dev/smcs/api/node_modules/pg-protocol/src/parser.ts:188:21)
at Parser.parse (/home/florent/dev/smcs/api/node_modules/pg-protocol/src/parser.ts:103:30)
at Socket.<anonymous> (/home/florent/dev/smcs/api/node_modules/pg-protocol/src/index.ts:7:48)
at Socket.emit (events.js:375:28)
at addChunk (internal/streams/readable.js:290:12)
at readableAddChunk (internal/streams/readable.js:265:9)
at Socket.Readable.push (internal/streams/readable.js:204:10)
at TCP.onStreamRead (internal/stream_base_commons.js:188:23) {
length: 274,
severity: 'ERROR',
code: '23505',
detail: 'Key (event_id_w, "CountryUnCode")=(50, 4) already exists.',
hint: undefined,
position: undefined,
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: 'public',
table: 'coordination_countries_country',
column: undefined,
dataType: undefined,
constraint: 'PK_622c3d328cd639f1f6deb8f3874',
file: 'nbtinsert.c',
line: '434',
routine: '_bt_check_unique'
}
query: ROLLBACK
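Not from the original thread, but one hedged workaround sketch: compute the country delta yourself and drive the join table through TypeORM's relation query builder, so rows that already exist in coordination_countries_country are never re-inserted. dataSource, coordination and newCountryIds are hypothetical names, and I'm assuming the RelationQueryBuilder loadMany/addAndRemove API:
// Hedged sketch: update only the countries that actually changed. (Inside an async function.)
const current = await dataSource
  .createQueryBuilder()
  .relation(CoordinationEntity, 'countries')
  .of(coordination)
  .loadMany();
const currentIds = current.map((c) => c.UN_Code);
const toAdd = newCountryIds.filter((id) => !currentIds.includes(id));
const toRemove = currentIds.filter((id) => !newCountryIds.includes(id));
await dataSource
  .createQueryBuilder()
  .relation(CoordinationEntity, 'countries')
  .of(coordination)
  .addAndRemove(toAdd, toRemove);
With this approach a save where nothing changed issues no INSERT at all, so the duplicate-key violation cannot occur.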

What are the succeed, fail, and done Context Methods For?

In Node.js, AWS Lambda functions have a signature that looks like this:
exports.handler = async function (event, context) {
  // TODO implement
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda July 20!'),
  };
  return response;
};
The event parameter contains information about the AWS Service Event that triggered the Lambda. The context parameter contains information about the Lambda environment itself.
The context object is documented here. Per those docs, it has a single method named getRemainingTimeInMillis and a number of properties.
However, if I log this object, I see the following
INFO {
callbackWaitsForEmptyEventLoop: [Getter/Setter],
succeed: [Function (anonymous)],
fail: [Function (anonymous)],
done: [Function (anonymous)],
functionVersion: '$LATEST',
functionName: 'july-2021-delete-after-july-31',
memoryLimitInMB: '128',
logGroupName: '/aws/lambda/july-2021-delete-after-july-31',
logStreamName: '2021/07/15/[$LATEST]e05ac24e44d6489b9f8124791b3d5513',
clientContext: undefined,
identity: undefined,
invokedFunctionArn: '...',
awsRequestId: '7852dc1a-8283-46a1-b445-e6d6187553b6',
getRemainingTimeInMillis: [Function: getRemainingTimeInMillis]
}
That is, there are three additional methods named succeed, fail, and done.
succeed: [Function (anonymous)],
fail: [Function (anonymous)],
done: [Function (anonymous)],
What are these methods for, exactly? I can take some guesses and Googling around leads to some circumstantial evidence that they're deprecated methods, but I can't seem to find any documentation on what they're meant to do or how they work.
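From what I understand (hedging here, since the current docs barely mention them), succeed, fail and done are the completion methods from the original, pre-async Node.js Lambda programming model, kept on the context object for backwards compatibility. A rough sketch of how they were used, with illustrative values only:
// Legacy, non-async handler style.
exports.handler = function (event, context) {
  if (!event) {
    // context.fail(error) ends the invocation with an error,
    // roughly like callback(error) or throwing from an async handler.
    return context.fail(new Error('no event received'));
  }
  // context.succeed(result) ends the invocation successfully,
  // roughly like callback(null, result) or returning from an async handler.
  context.succeed({ statusCode: 200, body: 'ok' });
  // context.done(error, result) combines the two: it behaves like fail
  // when error is set and like succeed otherwise.
};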

Sequelize, how to use Op.contains to find models that have certain values

I'm working on a project about renting houses and apartments, and I've reached the point where I need to implement filtering houses based on the features they have (wifi, security and other stuff). At the start I decided to try the Sequelize ORM for the first time. Stuff like adding, creating and editing works fine, but the filtering part is where I have some problems.
I'm working with Node.js, Express and PostgreSQL.
I need to find all houses that have the features listed in an array of feature IDs. Here is what I've tried; in this example I'm trying to get houses which have features with ids 1, 2 and 4.
db.House.findAll({
  include: [{
    model: db.HouseFeature,
    as: 'HouseFeatures',
    where: {
      featureId: {
        [Op.contains]: [1, 2, 4] // <- array of feature ids
      }
    }
  }]
})
Fetching houses by a single feature id works fine because I don't use Op.contains there.
Here are some relations related to this case:
House.hasMany(models.HouseFeature, { onDelete: 'CASCADE' });
HouseFeature.belongsTo(models.House);
HouseFeature contains featureId field.
Here is the error I get:
error: оператор не существует: integer #> unknown
at Connection.parseE (C:\***\server\node_modules\pg\lib\connection.js:601:11)
at Connection.parseMessage (C:\***\server\node_modules\pg\lib\connection.js:398:19)
at Socket.<anonymous> (C:\***\server\node_modules\pg\lib\connection.js:120:22)
at Socket.emit (events.js:182:13)
at addChunk (_stream_readable.js:283:12)
at readableAddChunk (_stream_readable.js:264:11)
at Socket.Readable.push (_stream_readable.js:219:10)
at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
name: 'error',
length: 397,
severity: 'ОШИБКА',
code: '42883',
detail: undefined,
hint:
'Оператор с данными именем и типами аргументов не найден. Возможно, вам следует добавить явные приведения типов.',
position: '851',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file:
'd:\\pginstaller.auto\\postgres.windows-x64\\src\\backend\\parser\\parse_oper.c',
line: '731',
routine: 'op_error',
sql:
'SELECT "House"."id", "House"."title", "House"."description", "House"."price", "House"."address", "House"."lat", "House"."lon", "House"."kitchen", "House"."bathrooms", "House"."floor", "House"."totalFloors", "House"."people", "House"."area", "House"."bedrooms", "House"."trusted", "House"."createdAt", "House"."updatedAt", "House"."CityId", "House"."ComplexId", "House"."OwnerProfileId", "House"."HouseTypeId", "House"."RentTypeId", "HouseFeatures"."id" AS "HouseFeatures.id", "HouseFeatures"."featureId" AS "HouseFeatures.featureId", "HouseFeatures"."createdAt" AS "HouseFeatures.createdAt", "HouseFeatures"."updatedAt" AS "HouseFeatures.updatedAt", "HouseFeatures"."HouseId" AS "HouseFeatures.HouseId" FROM "Houses" AS "House" INNER JOIN "HouseFeatures" AS "HouseFeatures" ON "House"."id" = "HouseFeatures"."HouseId" AND "HouseFeatures"."featureId" #> \'1,2\';'
Sorry for some Russian in there; the message says "operator does not exist: integer #> unknown", and the hint is that no operator matches the given name and argument types, so explicit type casts might be needed.
UPDATE:
I've managed to do what I needed by relating each House to only one HouseFeature, and by changing that HouseFeature model to store an array of featureIds. Op.contains works fine now.
db.House.findAll({
  include: [{
    model: db.HouseFeature,
    as: 'HouseFeature',
    where: {
      features: {
        [Op.contains]: req.body.features
      }
    },
  }]
})
// Associations
HouseFeature.belongsTo(models.House);
House.hasOne(models.HouseFeature, { onDelete: 'CASCADE' });
const HouseFeature = sequelize.define('HouseFeature', {
  features: {
    type: DataTypes.ARRAY(DataTypes.INTEGER)
  }
}, {});
Now I have one little issue: can I somehow link the HouseFeature model with the Feature model, to fetch the feature icon images and names later on, given that the Feature ids are stored inside the HouseFeature array?
Please check the difference between Op.in and Op.contains:
[Op.in]: [1, 2], // IN [1, 2]
[Op.contains]: [1, 2] // #> [1, 2] (PG array contains operator)
It looks like HouseFeatures.featureId is a PK of type integer, not a Postgres array.
Please try:
db.House.findAll({
  include: [{
    model: db.HouseFeature,
    as: 'HouseFeatures',
    where: {
      featureId: {
        [Op.in]: [1, 2, 3]
      }
    }
  }]
})
or even
db.House.findAll({
  include: [{
    model: db.HouseFeature,
    as: 'HouseFeatures',
    where: {
      featureId: [1, 2, 3]
    }
  }]
})
instead
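On the follow-up about linking back to the Feature model while the ids live in an array column: one hedged option (assuming a Feature model with id, name and icon attributes, and a houseId variable, neither of which appears in the question) is to resolve the ids in a second query:
// Hedged sketch: resolve the feature ids stored on HouseFeature into Feature rows. (Inside an async function.)
const house = await db.House.findByPk(houseId, {
  include: [{ model: db.HouseFeature, as: 'HouseFeature' }]
});
const features = await db.Feature.findAll({
  where: { id: house.HouseFeature.features } // a plain array here is shorthand for Op.in
});
// `features` now carries the name/icon for each id in HouseFeature.features.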

Get all network interfaces with node

How would I get all network interfaces and their IP addresses, MAC addresses, state, and master interface?
os.networkInterfaces() won't work, because it doesn't report interfaces that are down or don't have IP addresses, and it doesn't return their state (UP/DOWN/etc.) or their master interface.
var shell = require('shelljs');
var interfaceCard = shell.ls('/sys/class/net');
This interfaceCard object holds the list of all network interfaces.
The output will be:
[ 'eth0',
'eth1',
'lo',
stdout: 'eth0\neth1\nlo\n',
stderr: null,
code: 0,
cat: [Function: bound ],
exec: [Function: bound ],
grep: [Function: bound ],
head: [Function: bound ],
sed: [Function: bound ],
sort: [Function: bound ],
tail: [Function: bound ],
to: [Function: bound ],
toEnd: [Function: bound ],
uniq: [Function: bound ] ]
interfaceCard = interfaceCard.stdout.split('\n');
Now interfaceCard contains eth0, eth1 and lo (plus a trailing empty string from the final newline).
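To also pick up the state, MAC address and master interface, one option is to read the sysfs entries directly. This is a hedged sketch assuming a Linux sysfs layout (/sys/class/net/<iface>/operstate, address, and a master symlink on enslaved interfaces); none of it comes from the original answer:
var fs = require('fs');
var path = require('path');
// Hedged sketch: enrich each interface name with sysfs data (Linux only).
var interfaces = fs.readdirSync('/sys/class/net').map(function (name) {
  var base = path.join('/sys/class/net', name);
  var read = function (file) {
    try { return fs.readFileSync(path.join(base, file), 'utf8').trim(); }
    catch (e) { return null; }
  };
  var master = null;
  try { master = path.basename(fs.readlinkSync(path.join(base, 'master'))); }
  catch (e) { /* no master: interface is not enslaved to a bridge/bond */ }
  return {
    name: name,
    mac: read('address'),     // MAC address
    state: read('operstate'), // e.g. 'up', 'down', 'unknown'
    master: master            // bridge/bond the interface belongs to, if any
  };
});
console.log(interfaces);
IP addresses, where present, can then be merged in from os.networkInterfaces() keyed by interface name.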

write in stdin for a spawned child_process doesn't work

I am spawning a Java app (a REPL for querying a local DB) using:
repl = require('child_process').spawn('java', ['-cp', '...list of libs...'], { cwd: '...path to env...', env: process.env, customFds: [-1, -1, -1] });
The REPL loads fine because I can see its output on stdout, but stdin.write commands don't go through. I can, however, type them directly into the console window of the node process itself (which is weird, since I didn't .resume() it).
I have printed out the stdin of the spawned process; it looks like this:
{ _handle:
{ writeQueueSize: 0,
socket: [Circular],
onread: [Function: onread] },
_pendingWriteReqs: 0,
_flags: 0,
_connectQueueSize: 0,
destroyed: false,
bytesRead: 0,
bytesWritten: 0,
allowHalfOpen: undefined,
writable: true,
readable: false }
It seems there is no 'fd' defined, and .readable returns false. How can this be resolved?
(This is all on a Windows machine, node v0.6.6.)
Thanks
The documentation states that the customFds option was deprecated specifically because they couldn't get it to work on Windows.
While an array of -1's implies that it shouldn't be used, since the entire option is deprecated, try removing it entirely and see if that solves your problem.
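For what it's worth, a minimal sketch of the same spawn without customFds, writing a command to the child's stdin (the 'ReplMain' class name and the query string are placeholders I made up, not from the question):
var spawn = require('child_process').spawn;
// Hedged sketch: let stdio default to pipes instead of passing customFds.
var repl = spawn('java', ['-cp', '...list of libs...', 'ReplMain'], {
  cwd: '...path to env...',
  env: process.env
});
repl.stdout.on('data', function (data) {
  console.log('repl out: ' + data);
});
// End the line with \n so the REPL reads a complete command.
repl.stdin.write('some query\n');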
