Aliasing is very handy and works great when aliasing a specific resolver. For instance:
{
  admins: users(type: "admin") {
    username
  }
  moderators: users(type: "moderators") {
    username
  }
}
I'm not sure how to handle aliasing of the fields themselves though. For example:
{
  site_stats {
    hits: sum(field: "hits")
    bounces: sum(field: "bounces")
  }
}
If the resolver returns any sum value, the same value is aliased to both hits and bounces (which makes sense, since only a single sum value could even be returned). If I make the resolver use the alias names as the field names when returning the results, hits and bounces both become null.
I could simply break those fields out into separate resolvers, but that complicates integration for the front end devs. We would also lose a ton of efficiency benefits, since I can aggregate all the data needed in a single query to our data source (we're using ElasticSearch).
Any help from you geniuses would be greatly appreciated!
Using aliases on single fields has very limited usability. You can instead use complex filters (input params), e.g. a list of the keys to be returned along with their associated params:
[{name: "hits", range: "month"},
 {name: "bounces", range: "year"}]
With a query for the expected structure:
{
  stats {
    name
    sum
    average
  }
}
The required fields may vary, e.g. only name and sum.
You can then return an array of objects, e.g.
{ stats: [
    { name: "hits",
      sum: 12345,
      average: 456 }
  ]
}
Aliases can be useful here to choose different data sets, e.g. only name and sum for hits, but additionally average for bounces.
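For example, a query in this style could look like the following (the metrics argument name is just an illustration, not an existing API):
{
  hits: stats(metrics: [{name: "hits", range: "month"}]) {
    name
    sum
  }
  bounces: stats(metrics: [{name: "bounces", range: "year"}]) {
    name
    sum
    average
  }
}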
... more declarative?
PS. There is nothing here that "complicates integration for the front end devs". The result is JSON; it can be converted/transformed/adapted after fetching (client side) when needed.
It sounds like you're putting all your logic inside the root-level resolver (site_stats) instead of providing a resolver for the sum field. In other words, if your resolvers look like this:
const resolvers = {
  Query: {
    site_stats: () => {
      ...
      return { sum: someValue }
    },
  },
}
you should instead do something like:
const resolvers = {
  Query: {
    site_stats: () => {
      return {} // empty object
    },
  },
  SiteStats: {
    sum: () => {
      ...
      return someValue
    },
  },
}
This way you're not passing down the value for sum from the parent and relying on the default resolver -- you're explicitly providing the value for sum inside its resolver. Since the sum resolver will be called separately for each alias with the arguments specific to that alias, each alias will resolve accordingly.
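As a minimal sketch of that pattern (fetchSum here is a hypothetical data-source call, e.g. an ElasticSearch aggregation, not part of any library):
const resolvers = {
  Query: {
    site_stats: () => ({}), // defer all work to the field resolvers
  },
  SiteStats: {
    // Called once per alias: once with field = "hits", once with field = "bounces"
    sum: (parent, args) => fetchSum(args.field), // fetchSum is hypothetical
  },
}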
I saw that Realtime Database keys must be strings.
I want to use numbers as keys and use startAt and endAt to fetch chunks of data, e.g.
admin.database().ref('PARENT').orderByKey().startAt('107').endAt('1032').once('value');
Will this return results in numerical order, e.g. 107,108,109,110,111,112... or in string order, e.g. 1,11,111,2,22,222?
Thank you
Firebase Realtime Database keys are strings and should always be treated as such.
The exception to this happens when fetching a parent node that contains child nodes whose keys look like mostly sequential numbers. In that case, the client receives a list or array of results where the string keys actually become the numeric indexes in the array. However, it's generally not recommended to deal with lists of data this way (and I strongly recommend reading the linked blog post). In particular, this part:
If all of the keys are integers, and more than half of the keys between 0 and the maximum key in the object have non-empty values, then Firebase will render it as an array.
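For example, a node stored as { "0": "a", "1": "b", "2": "c" } satisfies that rule, so the client would receive it as the array ["a", "b", "c"] rather than as an object.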
Keys are returned in lexicographical order, so you'll get 1,11,111,2,22,222.
If you want to get them in numerical order, use keys that sort the same lexicographically as they do numerically, by padding with 0's at the start:
{
  "001": { ... },
  "002": { ... },
  "011": { ... },
  "022": { ... },
  "111": { ... },
  "222": { ... }
}
The number of characters you use should be picked to allow for the maximum numeric value you ever want to support. In the above, that maximum value would be 999, but you may have a different requirement and thus pick a different number of characters.
You can then also prefix the keys with a short non-numeric string, to ensure you never hit the array coercion that Doug mentions in his answer:
{
  "key001": { ... },
  "key002": { ... },
  "key011": { ... },
  "key022": { ... },
  "key111": { ... },
  "key222": { ... }
}
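A minimal sketch of generating such keys in JavaScript (the key prefix and the 3-digit width are just the values from the examples above):
function toKey(n) {
  // pad to 3 digits and add a non-numeric prefix to avoid array coercion
  return 'key' + String(n).padStart(3, '0');
}
toKey(7);   // "key007"
toKey(222); // "key222"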
Let's say my GraphQL server wants to fetch the following data as JSON, where person3 and person5 are some IDs:
"persons": {
"person3": {
"id": "person3",
"name": "Mike"
},
"person5": {
"id": "person5",
"name": "Lisa"
}
}
Question: How to create the schema type definition with apollo?
The keys person3 and person5 here are dynamically generated depending on my query (i.e. the area used in the query). So at another time I might get person1, person2, person3 returned.
As you can see, persons is not iterable, so the following GraphQL type definition I did with apollo won't work:
type Person {
  id: String
  name: String
}

type Query {
  persons(area: String): [Person]
}
The keys in the persons object may always be different.
One solution of course would be to transform the incoming JSON data to use an array for persons, but is there no way to work with the data as such?
GraphQL relies on both the server and the client knowing ahead of time what fields are available for each type. In some cases, the client can discover those fields (via introspection), but the server always needs to know them ahead of time. So dynamically generating those fields based on the returned data is not really possible.
You could utilize a custom JSON scalar (graphql-type-json module) and return that for your query:
type Query {
  persons(area: String): JSON
}
By utilizing JSON, you bypass the requirement for the returned data to fit any specific structure, so you can send back whatever you want as long as it's properly formatted JSON.
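A minimal sketch of wiring that up, assuming apollo-server and the graphql-type-json module (getPersonsByArea is a hypothetical data-source call):
const { ApolloServer, gql } = require('apollo-server');
const { GraphQLJSON } = require('graphql-type-json');

const typeDefs = gql`
  scalar JSON

  type Query {
    persons(area: String): JSON
  }
`;

const resolvers = {
  JSON: GraphQLJSON, // map the scalar to its implementation
  Query: {
    persons: (root, { area }) => getPersonsByArea(area), // hypothetical fetch
  },
};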
Of course, there are significant disadvantages in doing this. For example, you lose the safety net provided by the type(s) you would have previously used (literally any structure could be returned, and if you're returning the wrong one, you won't find out about it until the client tries to use it and fails). You also lose the ability to use resolvers for any fields within the returned data.
But... your funeral :)
As an aside, I would consider flattening out the data into an array (like you suggested in your question) before sending it back to the client. If you're writing the client code and working with a dynamically-sized list of customers, chances are an array will be much easier to work with than an object keyed by id. If you're using React, for example, and displaying a component for each customer, you'll end up converting that object to an array to map over it anyway. In designing your API, I would make client usability a higher consideration than avoiding additional processing of your data.
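That flattening step is a one-liner, assuming the shape shown in the question (each value already carries its own id):
const personsArray = Object.values(persons);
// [{ id: "person3", name: "Mike" }, { id: "person5", name: "Lisa" }]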
You can write your own GraphQLScalarType and precisely describe your object and your dynamic keys, what you allow and what you do not allow or transform.
See https://graphql.org/graphql-js/type/#graphqlscalartype
You can have a look at taion/graphql-type-json where he creates a Scalar that allows and transforms any kind of content:
https://github.com/taion/graphql-type-json/blob/master/src/index.js
I had a similar problem with dynamic keys in a schema, and ended up going with a solution like this:
query lookupPersons {
  persons {
    personKeys
    person3: personValue(key: "person3") {
      id
      name
    }
  }
}
returns:
{
  "data": {
    "persons": {
      "personKeys": ["person1", "person2", "person3"],
      "person3": {
        "id": "person3",
        "name": "Mike"
      }
    }
  }
}
By shifting the complexity to the query, it simplifies the response shape.
The advantage compared to the JSON approach is that it doesn't need any deserialisation on the client.
Additional info for Venryx: a possible schema to fit my query looks like this:
type Person {
  id: String
  name: String
}

type PersonsResult {
  personKeys: [String]
  personValue(key: String): Person
}

type Query {
  persons(area: String): PersonsResult
}
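A sketch of matching resolvers, assuming persons resolves to the keyed object from the question (getPersons is a hypothetical fetch):
const resolvers = {
  Query: {
    persons: (root, { area }) => getPersons(area), // e.g. { person3: {...}, person5: {...} }
  },
  PersonsResult: {
    personKeys: (parent) => Object.keys(parent),
    personValue: (parent, { key }) => parent[key],
  },
};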
As an aside, if your data set for persons gets large enough, you're probably going to want pagination on personKeys as well, at which point you should look into https://relay.dev/graphql/connections.htm
I have GraphQL resolvers that resolve nested data.
For example, these are my type definitions:
type Users {
  _id: String
  company: Company
}
For Users I have a resolver which resolves company as
Users: {
  company: (instance, arguments, context, info) => {
    return instance.company && Company.find({_id: instance.company});
  }
}
The above example works perfectly fine when I query for
query {
  Users {
    _id
    name
    username
    company {
      _id
      PAN
      address
    }
  }
}
But the problem is that sometimes I don't need to use the company resolver inside Users, because the company data comes along with the user, so I can just pass through what's in the user object (no need for a database call here).
I can achieve this just by checking whether instance.company is an _id or an Object: if it's an _id, fetch it from the database, otherwise resolve whatever is coming in.
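That check could look something like this (mirroring the resolver above):
Users: {
  company: (instance) =>
    typeof instance.company === 'object'
      ? instance.company // already populated, pass it through
      : Company.find({_id: instance.company}), // only an id, fetch it
}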
But I have these kinds of resolvers in many places, so I don't think it's a good idea to have this check everywhere I have a resolver.
Is there a better way, where I can define a configuration just to skip this resolver check?
Any feedback or suggestions would be highly appreciated.
Thanks
The following schema is intended to record total views and views for a very specific day only.
const usersSchema = new Schema({
  totalProductsViews: {type: Number, default: 0},
  productsViewsStatistics: [{
    day: {type: String, default: new Date().toISOString().slice(0, 10), unique: true},
    count: {type: Number, default: 0}
  }],
});
So today's views will be stored in a subdocument different from yesterday's. To implement this I tried to use upsert, so a subdocument will be created each day when a product is viewed, and counts will be incremented and recorded based on the particular day. I tried to use the following function, but it does not work the way I intended.
usersSchema.statics.increaseProductsViews = async function (id) {
  // Based on day only.
  const todayDate = new Date().toISOString().slice(0, 10);
  const result = await this.findByIdAndUpdate(id, {
    $inc: {
      totalProductsViews: 1,
      'productsViewsStatistics.$[sub].count': 1
    },
  },
  {
    upsert: true,
    arrayFilters: [{'sub.day': todayDate}],
    new: true
  });
  console.log(result);
  return result;
};
What do I miss to get the functionality I want? Any help will be appreciated.
What you are trying to do here actually requires you to understand some concepts you may not have grasped yet. The two primary ones being:
You cannot use any positional update as part of an upsert since it requires data to be present
Adding items into arrays mixed with "upsert" is generally a problem that you cannot do in a single statement.
It's a little unclear if "upsert" is your actual intention anyway, or if you just presumed that was what you had to add in order to get your statement to work. It does complicate things if that is your intent, even if it's unlikely given the findByIdAndUpdate() usage, which would imply you were actually expecting the "document" to always be present.
At any rate, it's clear you actually expect to "update the array element when found, OR insert a new array element where not found". This is actually a two-write process, and three writes when you consider the "upsert" case as well.
For this, you actually need to invoke the statements via bulkWrite():
usersSchema.statics.increaseProductsViews = async function (_id) {
  // Based on day only.
  const todayDate = new Date().toISOString().slice(0, 10);
  await this.bulkWrite([
    // Try to match an existing element and update it ( do NOT upsert )
    {
      "updateOne": {
        "filter": { _id, "productsViewsStatistics.day": todayDate },
        "update": {
          "$inc": {
            "totalProductsViews": 1,
            "productsViewsStatistics.$.count": 1
          }
        }
      }
    },
    // Try to $push where the element is not there but the document is ( do NOT upsert )
    {
      "updateOne": {
        "filter": { _id, "productsViewsStatistics.day": { "$ne": todayDate } },
        "update": {
          "$inc": { "totalProductsViews": 1 },
          "$push": { "productsViewsStatistics": { "day": todayDate, "count": 1 } }
        }
      }
    },
    // Finally attempt the upsert where the "document" was not there at all,
    // only if you actually mean it - so optional
    {
      "updateOne": {
        "filter": { _id },
        "update": {
          "$setOnInsert": {
            "totalProductsViews": 1,
            "productsViewsStatistics": [{ "day": todayDate, "count": 1 }]
          }
        },
        "upsert": true
      }
    }
  ])
  // return the modified document if you really must
  return this.findById(_id); // Not atomic, but the lesser of all evils
}
There is a good reason why the positional filtered [<identifier>] operator does not apply here: its intended purpose is to update multiple matching array elements, and you only ever want to update one. There is a specific operator for that, the positional $ operator, which does exactly that. Its condition, however, must be included within the query predicate ( the "filter" property in updateOne statements ), just as demonstrated in the first two statements of the bulkWrite() above.
The main problem with the positional filtered [<identifier>] is that, as the first two statements show, you cannot alternate between $inc and $push depending on whether the document actually contains an array entry for the day. At best, no update will be applied when the current day is not matched by the expression in arrayFilters.
At worst, an actual "upsert" will throw an error because MongoDB cannot decipher the "path name" from the statement, and of course you simply cannot $inc something that does not exist as a "new" array element. That needs a $push.
That leaves you with the fact that you also cannot do both the $inc and the $push within a single statement: MongoDB will error that you are attempting to "modify the same path", as an illegal operation. Much the same applies to $setOnInsert, since whilst that operator only applies to "upsert" operations, it does not preclude the other operations from happening.
Thus the logical steps fall back to what the comments in the code also describe:
Attempt to match where the document contains an existing array element, then update that element. Using $inc in this case
Attempt to match where the document exists but the array element is not present and then $push a new element for the given day with the default count, updating other elements appropriately
IF you actually did intend to upsert documents ( not array elements, because that's what the above steps do ), then finally attempt an upsert creating new properties, including a new array.
Finally there is the issue of the bulkWrite(). Whilst this is a single request to the server with a single response, it still is effectively three ( or two if that's all you need ) operations. There is no way around that and it is better than issuing chained separate requests using findByIdAndUpdate() or even updateOne().
Of course, the main operational difference from the code you attempted to implement is that this method does not return the modified document. There is no way to get a "document response" from any "Bulk" operation at all.
As such, the actual "bulk" process will only ever modify the document with one of the three statements submitted, based on the presented logic and, most importantly, the order of those statements. But if you actually wanted to "return the document" after modification, then the only way to do that is with a separate request to fetch the document.
The only caveat here is that there is the small possibility that other modifications could have occurred to the document other than the "array upsert" since the read and update are separated. There really is no way around that, without possibly "chaining" three separate requests to the server and then deciding which "response document" actually applied the update you wanted to achieve.
So with that context it's generally considered the lesser of evils to do the read separately. It's not ideal, but it's the best option available from a bad bunch.
As a final note, I would strongly suggest storing the day property as a BSON Date instead of as a string. It actually takes fewer bytes to store and is far more useful in that form. As such, the following constructor is probably the clearest and least hacky:
const todayDate = new Date(new Date().setUTCHours(0,0,0,0))
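A sketch of how the schema field and the daily filter would change under that approach (same field names as above):
// In the schema: store a real Date, using a function so the default is evaluated per document
day: { type: Date, default: () => new Date(new Date().setUTCHours(0, 0, 0, 0)) }

// In the statics: match on the same normalized Date in the bulkWrite() filters
const todayDate = new Date(new Date().setUTCHours(0, 0, 0, 0));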
The best way I can describe what I'm after is by starting with a simplified demo page document:
{
  name: "Page Name",
  blocks: {
    "top-content": 50f0ded99561a3b211000001,
    "bottom-content": 50f0ded99561a3b211000002
  }
}
Is there a way I can define this kind of many-to-many relation with mongoose so that I can look them up by string keys like top-content and bottom-content? The relational equivalent to this would be something like including a string on the many to many join table.
Important:
I don't want to embed the blocks I'm referencing as they could be referenced by multiple pages.
Mongoose's query population feature supports this, but top-content and bottom-content would need to be ObjectIds instead of strings (which is more efficient anyway).
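A sketch of what that could look like, keeping the string key on each entry instead of using it as a property name (schema and model names here are illustrative):
const pageSchema = new Schema({
  name: String,
  blocks: [{
    key: String, // e.g. "top-content"
    block: { type: Schema.Types.ObjectId, ref: 'Block' }
  }]
});

Page.findOne({ name: 'Page Name' })
  .populate('blocks.block') // pull in the referenced Block documents
  .exec(function (e, page) {
    // page.blocks[i].key is the string key, page.blocks[i].block the populated doc
  });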
After some chatting in #node.js on Freenode, it seems like I can just set this up as an array and put arbitrary key/value pairs containing the references I want.
At which point, I'll be dereferencing things myself.
{ // PAGE 1
  name: "Sticks",
  blocks: {
    top: [OID(5), OID(6)],
    bottom: [OID(7), OID(8)]
  }
};

{ // PAGE 2
  name: "Poopsicles",
  blocks: { top: [OID(7)] }
};
// BLOCKS
{
  OID: 5,
  name: "top"
};

{
  OID: 7,
  name: "bottom rocker"
};
// QUERYING
page.findOne({ name: "Sticks" }, function (e, d) {
  var a = [];
  for (var l in d.blocks) {
    a = a.concat(d.blocks[l]); // each key holds an array of block ids
  }
  blocks.find({ OID: { $in: a } }, function (e, b) {
    // do your stuff with the blocks and page here
  });
});