Let's say my GraphQL server wants to fetch the following data as JSON, where person3 and person5 are some IDs:
"persons": {
"person3": {
"id": "person3",
"name": "Mike"
},
"person5": {
"id": "person5",
"name": "Lisa"
}
}
Question: How to create the schema type definition with apollo?
The keys person3 and person5 here are dynamically generated depending on my query (i.e. the area used in the query). So at another time I might get person1, person2, person3 returned.
As you can see, persons is not an iterable, so the following GraphQL type definition I did with Apollo won't work:
type Person {
  id: String
  name: String
}

type Query {
  persons(area: String): [Person]
}
The keys in the persons object may always be different.
One solution of course would be to transform the incoming JSON data to use an array for persons, but is there no way to work with the data as such?
GraphQL relies on both the server and the client knowing ahead of time what fields are available for each type. In some cases, the client can discover those fields (via introspection), but for the server, they always need to be known ahead of time. So dynamically generating those fields based on the returned data is not really possible.
You could utilize a custom JSON scalar (graphql-type-json module) and return that for your query:
type Query {
  persons(area: String): JSON
}
By utilizing JSON, you bypass the requirement for the returned data to fit any specific structure, so you can send back whatever you want as long as it's properly formatted JSON.
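For illustration, a minimal sketch of wiring this up with apollo-server and graphql-type-json (fetchPersons is a hypothetical data-access function):

const { ApolloServer, gql } = require('apollo-server');
const { GraphQLJSON } = require('graphql-type-json');

const typeDefs = gql`
  scalar JSON
  type Query {
    persons(area: String): JSON
  }
`;

const resolvers = {
  // map the JSON scalar in the schema to the graphql-type-json implementation
  JSON: GraphQLJSON,
  Query: {
    // return the keyed object as-is; no shape is enforced
    persons: (_, { area }) => fetchPersons(area),
  },
};

const server = new ApolloServer({ typeDefs, resolvers });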
Of course, there are significant disadvantages to doing this. For example, you lose the safety net provided by the type(s) you would have previously used (literally any structure could be returned, and if you're returning the wrong one, you won't find out about it until the client tries to use it and fails). You also lose the ability to use resolvers for any fields within the returned data.
But... your funeral :)
As an aside, I would consider flattening out the data into an array (like you suggested in your question) before sending it back to the client. If you're writing the client code and working with a dynamically-sized list of persons, chances are an array will be much easier to work with than an object keyed by id. If you're using React, for example, and displaying a component for each person, you'll end up converting that object to an array to map over it anyway. In designing your API, I would make client usability a higher consideration than avoiding additional processing of your data.
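For instance, the flattening is a one-liner on the server (assuming personsById is the keyed object from the question):

// Inside a resolver, assuming personsById is the keyed object from the question
const persons = Object.values(personsById);
// => [{ id: 'person3', name: 'Mike' }, { id: 'person5', name: 'Lisa' }]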
You can write your own GraphQLScalarType and describe precisely your object and its dynamic keys: what you allow, what you disallow, and what you transform.
See https://graphql.org/graphql-js/type/#graphqlscalartype
You can have a look at taion/graphql-type-json, where the author creates a scalar that allows and transforms any kind of content:
https://github.com/taion/graphql-type-json/blob/master/src/index.js
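To sketch the idea, here is a pass-through scalar (PersonsMap is a hypothetical name; parseLiteral is elided for brevity, and you could add validation or transformation of your dynamic keys in serialize/parseValue):

const { GraphQLScalarType } = require('graphql');

// A pass-through scalar; add validation of your dynamic keys here as needed
const PersonsMap = new GraphQLScalarType({
  name: 'PersonsMap',
  description: 'An object keyed by dynamically generated person ids',
  serialize: (value) => value,   // outgoing values are sent as-is
  parseValue: (value) => value,  // incoming variable values are accepted as-is
});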
I had a similar problem with dynamic keys in a schema, and ended up going with a solution like this:
query lookupPersons {
  persons {
    personKeys
    person3: personValue(key: "person3") {
      id
      name
    }
  }
}
returns:
{
  data: {
    persons: {
      personKeys: ["person1", "person2", "person3"],
      person3: {
        id: "person3",
        name: "Mike"
      }
    }
  }
}
By shifting the complexity into the query, the response shape stays simple.
The advantage over the JSON approach is that the client doesn't need any extra deserialization.
Additional info for Venryx: a possible schema to fit my query looks like this:
type Person {
  id: String
  name: String
}

type PersonsResult {
  personKeys: [String]
  personValue(key: String): Person
}

type Query {
  persons(area: String): PersonsResult
}
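A possible set of resolvers to go with it (fetchPersons is again a hypothetical function returning the keyed object from the question):

const resolvers = {
  Query: {
    persons: (_, { area }) => fetchPersons(area),
  },
  PersonsResult: {
    // the parent value is the keyed object, e.g. { person3: {...}, person5: {...} }
    personKeys: (personsObj) => Object.keys(personsObj),
    personValue: (personsObj, { key }) => personsObj[key],
  },
};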
As an aside, if your data set for persons gets large enough, you're probably going to want pagination on personKeys as well, at which point you should look into https://relay.dev/graphql/connections.htm
I have:
let foundConversation = await req.db.models.Conversation.findOne({
  where: {
    id: ConversationId
  },
  include: [{
    model: req.db.models.Audio
  }]
})
This returns a Conversation and an Audio attached to that conversation.
When I do:
foundConversation.Audio.processingData.stuff = requestObj
await foundConversation.Audio.save()
it doesn't save the Audio data. Is there any way for me to do this?
I'd rather not do a separate class .update since I want foundConversation to continue to have the updated Audio data for further processing.
Update, my associations:
Models.audio.belongsTo(Models.conversation, {
  allowNull: true,
  onDelete: 'CASCADE'
})
Models.conversation.hasOne(Models.audio)
I could tell better if you shared the model definitions too.
It could be one of two issues:
1. There is no foundConversation.Audio, perhaps because of an incorrect name: you used Models.audio in one place and models.Conversation in another. Notice the difference in capitalisation.
2. foundConversation.Audio.processingData.stuff indicates you are trying to change a property within a property of a model. Are you using JSON columns? If so, note that Sequelize does not detect in-place mutations of a JSON value, so the field is never marked as changed when you save; see the sketch below.
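A sketch of a possible fix, assuming processingData is a JSON(B) column: reassign the whole value, or explicitly mark the field as changed, so Sequelize sees it as dirty.

// Reassigning a new object marks the field as changed...
foundConversation.Audio.processingData = {
  ...foundConversation.Audio.processingData,
  stuff: requestObj,
};
// ...or keep the in-place mutation and flag the field manually:
// foundConversation.Audio.changed('processingData', true);
await foundConversation.Audio.save();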
Also, I noticed you are injecting the db into the Express request (via a middleware) as req.db.models. Ideally it should not be a property of the request and should be accessed via a service: dbService.models
The two types of objects seem to be so close to one another that having both feels redundant. What is the point of having both schemas and models?
EDIT: Although this has been useful for many people, as mentioned in the comments it answers the "how" rather than the why. Thankfully, the why of the question has been answered elsewhere also, with this answer to another question. This has been linked in the comments for some time but I realise that many may not get that far when reading.
Often the easiest way to answer this type of question is with an example. In this case, someone has already done it for me :)
Take a look here:
http://rawberg.com/blog/nodejs/mongoose-orm-nested-models/
EDIT: The original post (as mentioned in the comments) seems to no longer exist, so I am reproducing it below. Should it ever return, or if it has just moved, please let me know.
It gives a decent description of using schemas within models in Mongoose and why you would want to do it, and also shows how to push tasks via the model, while the schema handles the structure and so on.
Original Post:
Let’s start with a simple example of embedding a schema inside a model.
var TaskSchema = new Schema({
  name: String,
  priority: Number
});

TaskSchema.virtual('nameandpriority')
  .get(function () {
    return this.name + ' (' + this.priority + ')';
  });

TaskSchema.method('isHighPriority', function() {
  if (this.priority === 1) {
    return true;
  } else {
    return false;
  }
});

var ListSchema = new Schema({
  name: String,
  tasks: [TaskSchema]
});

mongoose.model('List', ListSchema);
I created a new TaskSchema object with the basic info a task might have. A Mongoose virtual attribute is set up to conveniently combine the name and priority of the Task. I only specified a getter here, but virtual setters are supported as well.
I also defined a simple task method called isHighPriority to demonstrate how methods work with this setup.
In the ListSchema definition you'll notice how the tasks key is configured to hold an array of TaskSchema objects. The tasks key will become an instance of DocumentArray, which provides special methods for dealing with embedded Mongo documents.
For now I only passed the ListSchema object into mongoose.model and left the TaskSchema out. Technically it's not necessary to turn the TaskSchema into a formal model since we won't be saving it in its own collection. Later on I'll show you how it doesn't harm anything if you do, and how it can help to organize all your models in the same way, especially when they start spanning multiple files.
With the List model setup let’s add a couple tasks to it and save them to Mongo.
var List = mongoose.model('List');
var sampleList = new List({name: 'Sample List'});

sampleList.tasks.push(
  {name: 'task one', priority: 1},
  {name: 'task two', priority: 5}
);

sampleList.save(function(err) {
  if (err) {
    console.log('error adding new list');
    console.log(err);
  } else {
    console.log('new list successfully saved');
  }
});
The tasks attribute on the instance of our List model (sampleList) works like a regular JavaScript array, and we can add new tasks to it using push. The important thing to notice is that the tasks are added as regular JavaScript objects. It's a subtle distinction that may not be immediately intuitive.
You can verify from the Mongo shell that the new list and tasks were saved to mongo.
db.lists.find()
{ "tasks" : [
{
"_id" : ObjectId("4dd1cbeed77909f507000002"),
"priority" : 1,
"name" : "task one"
},
{
"_id" : ObjectId("4dd1cbeed77909f507000003"),
"priority" : 5,
"name" : "task two"
}
], "_id" : ObjectId("4dd1cbeed77909f507000001"), "name" : "Sample List" }
Now we can use the ObjectId to pull up the Sample List and iterate through its tasks.
List.findById('4dd1cbeed77909f507000001', function(err, list) {
  console.log(list.name + ' retrieved');
  list.tasks.forEach(function(task, index, array) {
    console.log(task.name);
    console.log(task.nameandpriority);
    console.log(task.isHighPriority());
  });
});
If you run that last bit of code you’ll get an error saying the embedded document doesn’t have a method isHighPriority. In the current version of Mongoose you can’t access methods on embedded schemas directly. There’s an open ticket to fix it and after posing the question to the Mongoose Google Group, manimal45 posted a helpful work-around to use for now.
List.findById('4dd1cbeed77909f507000001', function(err, list) {
  console.log(list.name + ' retrieved');
  list.tasks.forEach(function(task, index, array) {
    console.log(task.name);
    console.log(task.nameandpriority);
    console.log(task._schema.methods.isHighPriority.apply(task));
  });
});
If you run that code you should see the following output on the command line.
Sample List retrieved
task one
task one (1)
true
task two
task two (5)
false
With that work-around in mind let’s turn the TaskSchema into a Mongoose model.
mongoose.model('Task', TaskSchema);
var Task = mongoose.model('Task');

var ListSchema = new Schema({
  name: String,
  tasks: [Task.schema]
});

mongoose.model('List', ListSchema);
var List = mongoose.model('List');
The TaskSchema definition is the same as before, so I left it out. Once it's turned into a model, we can still access its underlying Schema object using dot notation.
Let’s create a new list and embed two Task model instances within it.
var demoList = new List({name: 'Demo List'});

var taskThree = new Task({name: 'task three', priority: 10});
var taskFour = new Task({name: 'task four', priority: 11});

demoList.tasks.push(taskThree.toObject(), taskFour.toObject());

demoList.save(function(err) {
  if (err) {
    console.log('error adding new list');
    console.log(err);
  } else {
    console.log('new list successfully saved');
  }
});
As we’re embedding the Task model instances into the List we’re calling toObject on them to convert their data into plain JavaScript objects that the List.tasks DocumentArray is expecting. When you save model instances this way your embedded documents will contain ObjectIds.
The complete code example is available as a gist. Hopefully these work-arounds help smooth things over as Mongoose continues to develop. I’m still pretty new to Mongoose and MongoDB so please feel free to share better solutions and tips in the comments. Happy data modeling!
Schema is an object that defines the structure of any documents that will be stored in your MongoDB collection; it enables you to define types and validators for all of your data items.
Model is an object that gives you easy access to a named collection, allowing you to query the collection and use the Schema to validate any documents you save to that collection. It is created by combining a Schema, a Connection, and a collection name.
Originally phrased by Valeri Karpov, MongoDB Blog
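As a minimal sketch of that relationship (the names here are illustrative):

const mongoose = require('mongoose');

// Schema: structure, types, and validators for documents in the collection
const userSchema = new mongoose.Schema({
  name: { type: String, required: true },
});

// Model: ties the schema to a connection and a named collection ('users')
const User = mongoose.model('User', userSchema);

// Saving goes through the model; the schema validates the document
User.create({ name: 'Alice' }).then((doc) => console.log(doc._id));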
I don't think the accepted answer actually answers the question that was posed. The answer doesn't explain why Mongoose has decided to require a developer to provide both a Schema and a Model variable. An example of a framework that has eliminated the need for the developer to define the data schema is Django: a developer writes up their models in the models.py file and leaves it to the framework to manage the schema. The first reason that comes to mind for why they do this, given my experience with Django, is ease of use. Perhaps more important is the DRY (don't repeat yourself) principle: you don't have to remember to update the schema when you change the model, because Django will do it for you. Rails also manages the schema of the data for you: a developer doesn't edit the schema directly, but changes it by defining migrations that manipulate the schema.
One reason I could see for Mongoose separating the schema and the model is cases where you want to build one model from two schemas. But such a scenario would probably introduce more complexity than it's worth managing: if you have two schemas that are managed by one model, why aren't they one schema?
Perhaps the original question is more a relic of the traditional relational database world. In the NoSQL/Mongo world, perhaps the schema is a little more flexible than in MySQL/PostgreSQL, and thus changing the schema is a more common practice.
To understand why, you first have to understand what Mongoose actually is.
Well, Mongoose is an object data modeling library for MongoDB and Node.js, providing a higher level of abstraction. It's a bit like the relationship between Express and Node: Express is a layer of abstraction over regular Node, while Mongoose is a layer of abstraction over the regular MongoDB driver.
An object data modeling library is just a way for us to write JavaScript code that will then interact with a database. We could just use the regular MongoDB driver to access our database; it would work just fine.
But instead we use Mongoose because it gives us a lot more functionality out of the box, allowing for faster and simpler development of our applications.
Some of the features Mongoose gives us: schemas to model our data and relationships, easy data validation, a simple query API, middleware, and much more.
In Mongoose, a schema is where we model our data: we describe the structure of the data, default values, and validation. We then take that schema and create a model out of it. A model is basically a wrapper around the schema which allows us to actually interface with the database in order to create, delete, update, and read documents.
Let's create a model from a schema.
const tourSchema = new mongoose.Schema({
  name: {
    type: String,
    required: [true, 'A tour must have a name'],
    unique: true,
  },
  rating: {
    type: Number,
    default: 4.5,
  },
  price: {
    type: Number,
    required: [true, 'A tour must have a price'],
  },
});

// tour model
const Tour = mongoose.model('Tour', tourSchema);
By convention, the first letter of a model name must be capitalized.
Let's create an instance of the model we built using Mongoose and the schema, and interact with our database.
const testTour = new Tour({ // instance of our model
  name: 'The Forest Hiker',
  rating: 4.7,
  price: 497,
});

// saving the testTour document into the database
testTour
  .save()
  .then((doc) => {
    console.log(doc);
  })
  .catch((err) => {
    console.log(err);
  });
So by having both a schema and a model, Mongoose makes our lives easier.
Think of a Model as a wrapper around a Schema. Schemas define the structure of your document: what kind of properties you can expect and what their data types will be (String, Number, etc.). Models provide a kind of interface to perform CRUD operations on a schema, as sketched below. See this post on FCC.
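For example, a rough sketch of that CRUD interface (model and field names are illustrative; run inside an async function):

const Task = mongoose.model('Task', new mongoose.Schema({ title: String }));

const task = await Task.create({ title: 'write docs' });         // Create
const found = await Task.findById(task._id);                     // Read
await Task.updateOne({ _id: task._id }, { title: 'edit docs' }); // Update
await Task.deleteOne({ _id: task._id });                         // Delete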
Schema basically models your data (where you provide datatypes for your fields) and can do some validations on your data. It mainly deals with the structure of your collection.
Whereas the model is a wrapper around your schema to provide you with CRUD methods on collections. It mainly deals with adding/querying the database.
Having both a schema and a model can appear redundant when compared to other frameworks like Django (which provides only a model) or SQL (where we create only schemas and write SQL queries, and there is no concept of a model). But this is just the way Mongoose implements it.
I'm using Sequelize with Postgres and Angular.js in the front-end.
I'm setting up some routes to expect arrays in the response:
'getData': {
  method: 'GET',
  // isArray: true,
  url: 'stuff/:id',
  params: {id: '#id'}
}
However, when only one record is retrieved, Sequelize seems to return an object directly rather than an array with one object in it, which breaks the resource:
Error in resource configuration. Expected response to contain an array but got an object
Is there a way of setting up Sequelize to always return arrays even if there's only one record retrieved?
Or, a clean way of wrapping the data when it gets to ng-resource?
Thanks!
Angular should support object responses, but in any case:
Model.find() will return an object; Model.findAll() will return an array. You can just swap to using findAll; if you're filtering on the primary key it won't make much of a difference.
Model.find() also takes query options as its second parameter, so you should also be able to do Model.find({where: ..}, {plain: false}), but that hasn't been tested and isn't exactly public API.
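For illustration, a sketch of the findAll approach (route and model names are hypothetical):

// Always responds with an array, even when a single row matches the id
app.get('/stuff/:id', function(req, res) {
  Stuff.findAll({ where: { id: req.params.id } }).then(function(rows) {
    res.json(rows); // [] when nothing matches, [row] for one record
  });
});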
I'm using SailsJS (which is Express-based) to send a JSON object with an array. For some reason, when I load the API in my browser, the array is not sent.
The code that sends the object is here:
exports.RESTifySend = function(res, objects) {
  return RESTService.RESTify(objects).then(function(RESTedObjects) {
    console.log("SENDING: ", RESTedObjects);
    return res.json(RESTedObjects, 200); // I've also tried res.send()
  }, function() {
    res.send(500);
  });
};
The logging statement SENDING: ... outputs:
SENDING: {
  id: 'IKIlrgXhp6',
  messages: [{
    user: null,
    text: 'trest',
    sentAt: undefined
  }]
}
The RESTifyService is just a small framework I built to remove object attributes that shouldn't be exposed in the API (passwords, emails, etc.).
Somewhere in the framework I built, I replaced toObject() with lodash.clone([object]), which solved all the problems. For others experiencing a similar issue, I suggest trying the same thing in your toJSON method. The toObject() method of Waterline objects apparently has some odd side effects when you try to populate or edit an attribute that matches the name of an association.
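For illustration, a rough before/after of that change inside a model's toJSON (the stripped attribute is hypothetical):

attributes: {
  toJSON: function() {
    // var obj = this.toObject(); // the call that caused the odd side effects
    var obj = _.clone(this);      // lodash shallow copy, as described above
    delete obj.password;          // still strip attributes that shouldn't be exposed
    return obj;
  }
}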
I also ran into this issue when serializing a model with a collection attribute (like something with many Comments).
In my case I call .toJSON during serialization on the record that has the collection; its model definition looks like:
attributes: {
  ...
  toJSON: function() {
    var self = this.toObject()
    // could pick a subset of attrs here
    return _.pick(self, _.keys(self))
  }
}
This leaves me free to populate the comments attribute and not have surprises about the data on the wire.