Converting an OData URL to a Sequelize selector - node.js

My node.js application (which receives requests with Express, stores data with Sequelize, and proxies to a SAP OData API) gets OData requests. As Sequelize can't handle the OData scheme, I have to write a parser myself.
For example, the OData URI could be:
$expand=Car($expand=Brand($expand=Type)),Person($expand=Gender)&$filter=id eq 1
As you can see, there are some nested expands.
The parsed data should look like this:
{
  include: [
    {
      association: "Car",
      include: [
        {
          association: "Brand",
          include: [
            {
              association: "Type"
            }
          ]
        }
      ]
    },
    {
      association: "Person",
      include: [
        {
          association: "Gender"
        }
      ]
    }
  ],
  where: {
    id: 1
  }
}
I already had a look at some npm packages, but none of them supports expanding entities/entity sets.
Any idea how to recursively convert that OData scheme to the Sequelize selectors?
I'm pretty sure that would be interesting for many API developers using Node.
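A minimal sketch of one way to do the recursion, assuming only $expand (in the parenthesised nested form above) and simple "<field> eq <value>" filters; parseExpand and parseODataQuery are illustrative names, not from any npm package:

function parseExpand(expand) {
  const includes = [];
  let depth = 0;
  let start = 0;
  // split on top-level commas only; commas inside parentheses belong to nested expands
  for (let i = 0; i <= expand.length; i++) {
    const ch = expand[i];
    if (ch === '(') depth++;
    else if (ch === ')') depth--;
    else if (depth === 0 && (ch === ',' || i === expand.length)) {
      const part = expand.slice(start, i);
      start = i + 1;
      const paren = part.indexOf('(');
      if (paren === -1) {
        includes.push({ association: part });
      } else {
        // "Brand($expand=Type)" -> association "Brand", then recurse on "Type"
        const inner = part.slice(paren + 1, -1).replace(/^\$expand=/, '');
        includes.push({ association: part.slice(0, paren), include: parseExpand(inner) });
      }
    }
  }
  return includes;
}

function parseODataQuery(query) {
  // query is the parsed query string, e.g. req.query in Express
  const options = {};
  if (query.$expand) options.include = parseExpand(query.$expand);
  if (query.$filter) {
    // only covers the simple "<field> eq <value>" case from the example
    const match = query.$filter.match(/^(\w+) eq (.+)$/);
    if (match) options.where = { [match[1]]: isNaN(match[2]) ? match[2] : Number(match[2]) };
  }
  return options;
}

Calling parseODataQuery with the example query string above yields exactly the include/where structure shown.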

Related

How to use scope in loopback filter in json format

I am trying to make a call from my Angular service to a LoopBack API. I have a parcelStatuses collection that contains a parcelId, so I am able to include the parcel collection too, but I also need to check against a particular vendorId, and that vendorId exists in the parcel collection. I am trying to make use of scope to check against the particular vendorId, but I think I am not writing the correct JSON syntax/call. Here is my function inside the service:
private getParcelsByFilter(
  limit: number,
  skip: number,
  vendorId: string,
  filter: string
) {
  const checkFilter = {
    "where": {
      "and": [{ "statusRepositoryId": filter }]
    },
    "include": [
      {
        "parcel": [
          {
            "scope": { "vendorId": vendorId }
          },
          "parcelStatuses",
          { "customerData": "customer" }
        ]
      }
    ],
    "limit": limit,
    "skip": skip,
  }
  return this._http.get<IParcel[]>(
    `${environment.url}/ParcelStatuses?filter=${encodeURIComponent(JSON.stringify(checkFilter))}`
  );
}
Here is a demo of a parcelStatuses collection object:
[{
  "id": "lbh24214",
  "statusRepositoryId": "3214fsad",
  "parcelId": "LH21421"
}]
Demo JSON of parcel:
[{
  "id": "LHE21421",
  "customerDataId": "214fdsas",
  "customerId": "412dsf",
  "vendorId": "123421"
}]
Please help me write the correct call.
Formatting aside, there are several issues with the query:
Unnecessary and
This line:
where: {
  and: [{ statusRepositoryId: filter }]
}
Can be simplified to:
where: {
  statusRepositoryId: filter
}
Since there is only one where condition, and is redundant.
Misuse of include and scope
include is used to include relations while scope applies filters to those relations. They can work in tandem to create a comprehensive query:
include: [
  {
    relation: "parcels",
    scope: {
      where: { vendorId: vendorId },
    }
  }
],
This will include the parcels relation as part of the response, while filtering the parcels relation with a where filter.
That means the final code should look similar to the following:
private getParcelsByFilter(
  limit: number,
  skip: number,
  vendorId: string,
  filter: string
) {
  const checkFilter = {
    where: { statusRepositoryId: filter },
    include: [
      {
        relation: "parcels",
        scope: {
          where: { vendorId: vendorId },
        }
      }
    ],
    limit: limit,
    skip: skip,
  }
  return this._http.get<IParcel[]>(
    `${environment.url}/ParcelStatuses?filter=${encodeURIComponent(JSON.stringify(checkFilter))}`
  );
}
Further reading
Please review these resources to get a better understanding of how to use filters.
https://loopback.io/doc/en/lb4/Include-filter.html

Using Azure CosmosDB DocumentDB API with Graph API

What I would like to be able to do:
Save schemaless JSON to documents
Connect those documents arbitrarily
Get recursive tree of documents based on aforementioned connections, like for example:
{
  "name": "Document 1",
  "includes": [
    {
      "name": "Document 2.1",
      "includes": [
        {
          "name": "Document 3",
          "includes": []
        }
      ]
    },
    {
      "name": "Document 2.2",
      "includes": []
    }
  ]
}
Current status of my setup:
CosmosDB instance configured with Graph (Gremlin) API
Possible to create (JSON) documents through DocumentDB API
Possible to create edges to documents through Graph API
Using Node.js SDKs
Questions:
Is it possible to save JSON objects as vertices through the Graph API? It allows creating vertices with g.addV('person').property('id', 'thomas').property('firstName', 'Thomas').property('age', 44).property('userid', 1), but something like g.addV({ firstName: 'Thomas' }) does not seem to work.
If I add documents through the DocumentDB API and edges between them through the Graph API and traverse the graph, the results only include the IDs of the documents, not their other properties. Is it possible to populate the documents somehow?
Example traversal query:
g.V('03e0576f-2ff7-6109-5ed5-237b43191354').repeat(out('includes')).until(not(out('includes'))).simplePath().dedup().tree().by('id')
Result from this query:
[{
  "03e0576f-2ff7-6109-5ed5-237b43191354": {
    "key": "03e0576f-2ff7-6109-5ed5-237b43191354",
    "value": {
      "7fab4007-c80c-ba21-f5d3-8eb353ea3279": {
        "key": "7fab4007-c80c-ba21-f5d3-8eb353ea3279",
        "value": {
          "eec55fbd-6900-130d-247f-fb437b093711": {
            "key": "eec55fbd-6900-130d-247f-fb437b093711",
            "value": {}
          },
          "cfd14a8c-1ac3-6cc3-e2a4-ac3f250478e1": {
            "key": "cfd14a8c-1ac3-6cc3-e2a4-ac3f250478e1",
            "value": {
              "acac136a-3bd4-831c-df6e-e5b95e593b9a": {
                "key": "acac136a-3bd4-831c-df6e-e5b95e593b9a",
                "value": {}
              }
            }
          }
        }
      }
    }
  }
}]
Yes, it is possible to insert documents both through the Graph API and through the Document API. However, Cosmos expects a specific GraphSON format for the documents in order for all of their properties to be picked up during graph traversal.
I'd recommend taking a look at both Vertex Properties and GraphSON from the Tinkerpop documentation to get a better idea about these topics.
When adding a document through Gremlin, the syntax is comma-separated name/value pairs for all the properties you want to add. Try this:
g.addV('label', 'human', 'name', 'jesse', 'age', 27)
Now if you go to the Azure portal and execute the SQL query SELECT * FROM c, you'll be able to see the format that Cosmos has translated your document into.
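For illustration only (the exact shape can vary between API versions), a vertex created through Gremlin typically comes back from that SQL query with its properties wrapped as arrays of value objects rather than plain scalars, something like:

{
  "label": "human",
  "id": "<generated-guid>",
  "name": [
    { "id": "<generated-guid>", "_value": "jesse" }
  ],
  "age": [
    { "id": "<generated-guid>", "_value": 27 }
  ]
}

Documents inserted through the plain DocumentDB API keep scalar properties instead, which is why a Gremlin traversal only sees their IDs; matching the vertex format above is what makes the other properties visible to the Graph API.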

Mongoose populate either ObjectId reference or String

Is there a way to specify a heterogeneous array as a schema property where it can contain both ObjectIds and strings? I'd like to have something like the following:
var GameSchema = new mongoose.Schema({
  players: {
    type: [<UserModel reference|IP address/socket ID/what have you>]
  }
});
Is the only option a Mixed type that I manage myself? I've run across discriminators, which look somewhat promising, but it appears they only work for subdocuments, not references to other schemas. Of course, I could just have a UserModel reference and create a UserModel that only stores the IP address or whatever I'm using to identify anonymous players, but that seems like it could quickly get hugely out of control in terms of space (having a model for every IP I come across sounds bad).
EDIT:
Example:
A game has one logged-in user and three anonymous users, so the document should look something like this:
{ players: [ ObjectId("5fd88ea85...."), "192.0.0.1", "192.1.1.1", "192.2.2.1"] }
Ideally this would be populated to:
{ players: [ UserModel(id: ..., name: ...), "192.0.0.1", "192.1.1.1", "192.2.2.1"] }
EDIT:
I've decided to go a different route: instead of mixing types, I'm differentiating with different properties. Something like this:
players: [
  {
    user: <object reference>,
    sessionID: <string>,
    color: {
      type: String
    },
    ...other properties...
  }
]
I have a validator that ensures only one of user or sessionID is populated for a given entry. In some ways this is more complex, but it obviates the need for this kind of conditional populating and for figuring out what type each entry is when iterating over them. I haven't tried any of the answers, but they look promising.
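A rough sketch of what that could look like, with playerSchema as an illustrative name and the validator reduced to the user/sessionID exclusivity check:

const mongoose = require('mongoose');

const playerSchema = new mongoose.Schema({
  user: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  sessionID: { type: String },
  color: { type: String }
  // ...other properties...
});

playerSchema.pre('validate', function (next) {
  // exactly one of user / sessionID must be set on each entry
  if (!!this.user === !!this.sessionID) {
    return next(new Error('Set exactly one of user or sessionID'));
  }
  next();
});

var GameSchema = new mongoose.Schema({ players: [playerSchema] });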
If you are content to go with using Mixed, or at least some scheme that will not work with .populate(), then you can shift the "join" responsibility to the server instead, using the $lookup functionality of MongoDB and a little fancy matching.
For me, if I have a "games" collection document like this:
{
  "_id" : ObjectId("5933723c886d193061b99459"),
  "players" : [
    ObjectId("5933723c886d193061b99458"),
    "10.1.1.1",
    "10.1.1.2"
  ],
  "__v" : 0
}
Then I send the statement to the server to "join" with the "users" collection data where an ObjectId is present like this:
Game.aggregate([
  { "$addFields": {
    "users": {
      "$filter": {
        "input": "$players",
        "as": "p",
        "cond": { "$gt": [ "$$p", {} ] }
      }
    }
  }},
  { "$lookup": {
    "from": "users",
    "localField": "users",
    "foreignField": "_id",
    "as": "users"
  }},
  { "$project": {
    "players": {
      "$map": {
        "input": "$players",
        "as": "p",
        "in": {
          "$cond": {
            "if": { "$gt": [ "$$p", {} ] },
            "then": {
              "$arrayElemAt": [
                { "$filter": {
                  "input": "$users",
                  "as": "u",
                  "cond": { "$eq": [ "$$u._id", "$$p" ] }
                }},
                0
              ]
            },
            "else": "$$p"
          }
        }
      }
    }
  }}
])
Which gives the result when joined to the users object as:
{
  "_id" : ObjectId("5933723c886d193061b99459"),
  "players" : [
    {
      "_id" : ObjectId("5933723c886d193061b99458"),
      "name" : "Bill",
      "__v" : 0
    },
    "10.1.1.1",
    "10.1.1.2"
  ]
}
So the "fancy" part really relies on this logical statement when considering the entries in the "players" array:
"$filter": {
"input": "$players",
"as": "p",
"cond": { "$gt": [ "$$p", {} ] }
}
How this works is that, to MongoDB, an ObjectId and indeed all BSON types have a specific sort precedence. In this case, where the data is "Mixed" between ObjectId and String, the "string" values are considered "less than" a "BSON Object", and the ObjectId values are "greater than".
This allows you to separate the ObjectId values from the source array into their own list. Given that list, you $lookup to perform the "join" and get the objects from the other collection.
In order to put them back, I'm using $map to "transpose" each element of the original "players" array, replacing each matched ObjectId with its related object. An alternate approach would be to "split" the two types, do the $lookup, and $concatArrays the users with the "strings". But that would not maintain the original array order, so $map may be a better fit.
I will note that the same basic process can be applied in a "client" operation by similarly filtering the content of the "players" array down to just the ObjectId values and then calling the "model" form of .populate() from "inside" the response of the initial query. The documentation shows an example of that form of usage, as do some answers on this site from before it was possible to do a "nested populate" with mongoose.
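As a rough sketch of that client-side idea (shown here as a manual join rather than .populate() itself; Game and User are the assumed models):

const mongoose = require('mongoose');

Game.findById(gameId).lean().then(game => {
  // keep only the ObjectId entries; the plain strings stay behind
  const ids = game.players.filter(p => p instanceof mongoose.Types.ObjectId);
  return User.find({ _id: { $in: ids } }).then(users => {
    const byId = new Map(users.map(u => [String(u._id), u]));
    // transpose matched ObjectIds to their user documents, preserving order
    game.players = game.players.map(p => byId.get(String(p)) || p);
    return game;
  });
});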
The other point to note here is that .populate() itself existed as a mongoose method long before the $lookup aggregation pipeline operator came about, and was a solution for a time when MongoDB itself was incapable of performing a "join" of any sort. So those operations are indeed a "client"-side emulation, and really just perform additional queries that you do not need to be aware of when issuing the statements yourself.
Therefore it should generally be desirable in a modern scenario to use the "server" features, and avoid the overhead involved with multiple queries in order to get the result.

Feathers - restrict service response to data owned by current user only

In Feathers the goal is to restrict the data accessible on a certain service to the data owned by the currently logged-in user only.
Assuming that I am using Feathers authentication, that the data available on this service is stored in a database table, and that the table column containing the user ID is called user_id, will this hook achieve the goal?
If not, what needs to change?
In case it is important for answering the question: I am using Sequelize and Postgres.
const { authenticate } = require('feathers-authentication').hooks;
const { queryWithCurrentUser } = require('feathers-authentication-hooks');
const { associateCurrentUser } = require('feathers-authentication-hooks');

const readRestrict = [
  queryWithCurrentUser({
    idField: 'id',
    as: 'user_id'
  })
];

const modRestrict = [
  associateCurrentUser({
    idField: 'id',
    as: 'user_id'
  })
];

module.exports = {
  before: {
    all: [ authenticate('jwt') ],
    find: [ ...readRestrict ],
    get: [ ...readRestrict ],
    create: [ ...modRestrict ],
    update: [ ...modRestrict ],
    patch: [ ...modRestrict ],
    remove: [ ...modRestrict ]
  },
  after: {
    all: [],
    find: [],
    get: [],
    create: [],
    update: [],
    patch: [],
    remove: []
  },
  error: {
    all: [],
    find: [],
    get: [],
    create: [],
    update: [],
    patch: [],
    remove: []
  }
};
It seems to work, but since I'm a Feathers noob I thought I'd better check before this is put out into the wild, to make sure there are no cases I am unaware of that will cause leaks.
As a total beginner to Feathers and Express, I am unsure. For now, all works as stated above.
Old Answer
For remove, I used restrictToOwner, as sketched below. (I also think patch and update need it, because they operate on the existing data. I did not test that, though.)
Otherwise I was able to cross-delete data by specifying the id. Maybe you can check whether this is the case for you, too.
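For reference, roughly what that looks like, using restrictToOwner from feathers-authentication-hooks with field names matching the question:

const { restrictToOwner } = require('feathers-authentication-hooks');

// only allow the calling user to modify or remove records they own
const ownerRestrict = [
  restrictToOwner({
    idField: 'id',        // field on the authenticated user
    ownerField: 'user_id' // field on the record holding the owner's id
  })
];

// then in the hooks object:
// update: [ ...ownerRestrict ],
// patch: [ ...ownerRestrict ],
// remove: [ ...ownerRestrict ]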
This is the test case:
user 1 creates a model object with
  user id to check authorization
  object id to identify the object
user 2 deletes the object with object id
  test ok: 404 expected
  test fail: 204 or 200 worked
user 1 tries to get the object
  test ok: object is there, 200
  test fail: object is absent, 404
Test code:
test_cannot_cross_delete
Thank you very much, your post was really helpful to me!

Most efficient way to check if element exists in a set

So, in my MongoDB database I have a collection holding user posts.
Within that collection I have a set called "likes", which holds an array of the IDs of the users that have liked that post. When querying, I would like to pass a user ID and get a boolean in the result telling me whether that ID exists in the array, to see whether the user has already liked the post. I understand this would be easy to do with two queries (one to get the post and one to check if the user has liked it), but I would like to find the most efficient way to do this.
For example, one of my documents looks like this
{
  _id: 24jef247jos991,
  post: "Test Post",
  likes: ["userid1", "userid2"]
}
When I query from "userid1" I would like the result to be:
{
  _id: 24jef247jos991,
  post: "Test Post",
  likes: ["userid1", "userid2"],
  userLiked: true
}
But when I query from, let's say, "userid3" I would like:
{
  _id: 24jef247jos991,
  post: "Test Post",
  likes: ["userid1", "userid2"],
  userLiked: false
}
You can add an $addFields stage checking each document's likes array against the input user.
db.collection.aggregate([
  {
    $addFields: {
      "userLiked": { $in: [ "userid1", "$likes" ] }
    }
  }
])
Starting from MongoDB 3.4 you can use the $in aggregation operator to check if an array contains a given element, and the $addFields aggregation operator to add the newly computed value to your document without explicitly including the other fields.
db.collection.aggregate([
  { "$addFields": { "userLiked": { "$in": [ "userid1", "$likes" ] } } }
])
In MongoDB 3.2, you can use the $setIsSubset operator together with a square bracket [] array literal to do this. The downside of this approach is that you need to manually $project all the fields in your document. Also, the $setIsSubset operator will de-duplicate your array, which may not be what you want.
db.collection.aggregate([
  { "$project": {
    "post": 1, "likes": 1,
    "userLiked": { "$setIsSubset": [ [ "userid3" ], "$likes" ] }
  }}
])
Finally, if your mongod version is 3.0 or older, you need to use the $literal operator instead of the [] operator, as shown below.
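A sketch of that older variant, assuming the same document shape; $literal emits the one-element array as a value instead of evaluating it:

db.collection.aggregate([
  { "$project": {
    "post": 1, "likes": 1,
    "userLiked": { "$setIsSubset": [ { "$literal": [ "userid3" ] }, "$likes" ] }
  }}
])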
