Skip GraphQL Resolvers to return data - node.js

I have GraphQL resolvers that resolve nested data.
For example, these are my type definitions:
type Users {
  _id: String
  company: Company
}
For the Users type, I have a resolver that resolves the company field:
Users: {
  company: (instance, args, context, info) => {
    // If company holds an _id, look the document up in the database.
    return instance.company && Company.findOne({ _id: instance.company });
  }
}
The above example works perfectly fine when I query:
query {
  Users {
    _id
    name
    username
    company {
      _id
      PAN
      address
    }
  }
}
But the problem is that sometimes I don't need the company resolver inside Users at all, because the company data already comes along with the user, so I just need to pass through what's in the user object (no database call needed).
I can achieve this by checking whether instance.company is an _id or an object: if it's an _id, fetch it from the database; otherwise, resolve whatever is already there.
But I have this kind of resolver in many places, so I don't think it's a good idea to repeat that check everywhere I have a resolver.
Is there a better way, such as a configuration, to skip this resolver check?
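For illustration, the kind of reusable wrapper I have in mind might look like this (just a sketch; resolveRef is a hypothetical helper, and I'm assuming Mongoose-style models):
  const mongoose = require('mongoose');

  // Hypothetical helper: skip the database call when the field
  // already holds a populated object rather than an _id reference.
  const resolveRef = (Model, field) => (instance) => {
    const value = instance[field];
    if (!value) return null;
    // Already an object (not an ObjectId or an id string): pass it through.
    if (typeof value === 'object' && !(value instanceof mongoose.Types.ObjectId)) {
      return value;
    }
    return Model.findOne({ _id: value });
  };

  // Usage: the check lives in one place instead of in every resolver.
  const resolvers = {
    Users: {
      company: resolveRef(Company, 'company'),
    },
  };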
Any feedback or suggestions would be highly appreciated.
Thanks

Related

GraphQL/Mongoose: How to prevent calling the database once per requested field

I'm new to GraphQL, but the way I understand it, if I have a User type like:
type User {
  email: String
  userId: String
  firstName: String
  lastName: String
}
and a query such as this:
type Query {
  currentUser: User
}
implementing the resolver like this:
Query: {
  currentUser: {
    email: async (_: any, __: any, ctx: any, ___: any) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      const { email } = await UserService.getUserByFirebaseId(userId)
      return email;
    },
    firstName: async (_: any, __: any, ctx: any, ___: any) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      const { firstName } = await UserService.getUserByFirebaseId(userId)
      return firstName;
    }
    // same for other fields
  }
},
It's clear that something's wrong, since I'm duplicating code and the database is queried once per requested field. Is there a way to prevent code duplication and/or cache the database call?
How about the case where I need to populate a MongoDB field? Thanks!
I would rewrite your resolver like this:
The schema stays the same:
  type User {
    email: String
    userId: String
    firstName: String
    lastName: String
  }

  type Query {
    currentUser: User
  }
And the resolver becomes:
  // import ...;

  const resolvers = {
    Query: {
      currentUser: async (parent, args, ctx, info) => {
        const provider = getAuthenticationProvider()
        const userId = await provider.getUserId(ctx.req.headers.authorization)
        return UserService.getUserByFirebaseId(userId);
      }
    }
  };
This should work, but... With more information the code could be better as well (see my comment).
You can read more about resolvers here: https://www.apollographql.com/docs/apollo-server/data/resolvers/
A few things:
1a. Parent Resolvers
As a general rule, any given resolver should return either the resolved value of the child or enough information for the child to resolve it on its own. Take the answer by Ruslan Zhomir: it does the database lookup once and returns those values for the children. The upside is that you don't have to replicate any code. The downside is that the database has to fetch all of the fields and return them. There's a balancing act there with trade-offs. Most of the time, you're better off using one resolver per object. If you start having to massage data or pull fields from other locations, that's when I generally start adding field-level resolvers like you have.
1b. Field Level Resolvers
The pattern you're showing of ONLY field-level resolvers (no parent object resolvers) can be awkward. Take your example: what is expected to happen if the user isn't logged in?
I would expect a result of:
{
  currentUser: null
}
However, if you build ONLY field-level resolvers (no parent resolver that actually looks in the database), your response will look like this:
{
  currentUser: {
    email: null,
    userId: null,
    firstName: null,
    lastName: null
  }
}
If, on the other hand, you actually look in the database far enough to verify that the user exists, why not return that object? It's another reason why I recommend a single parent resolver. Again, once you start dealing with OTHER data sources or expensive actions for other properties, that's where you want to start adding child resolvers:
const resolvers = {
  Query: {
    currentUser: async (parent, args, ctx, info) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      return UserService.getUserByFirebaseId(userId);
    }
  },
  User: {
    avatarUrl(parent) {
      const hash = md5(parent.email)
      return `https://www.gravatar.com/avatar/${hash}`;
    },
    friends(parent, args, ctx) {
      return UsersService.findFriends(parent.id);
    }
  }
}
2a. DataLoaders
If you really like the child-property-resolver pattern (there's a director at PayPal who EATS IT UP), the DataLoader pattern (and library) uses memoization with cache keys so the database lookup happens once and the result is cached. Each resolver asks the service to fetch the user ("here's the firebaseId"), and that service caches the response. Your resolver code would stay the same, but the backend database lookup would only happen once, with the other calls served from cache. The pattern you're showing here is one that I've seen people use, and while it's often a premature optimization, it may be what you want; if so, DataLoaders are an answer. If you don't want to go the route of duplicated code or "magic resolver objects", you're probably better off using just a single resolver.
Also, make sure you're not falling victim to the "object of nulls" problem described above. If the parent doesn't exist, the parent should be null, not just all of the children.
2b. DataLoaders and Context
Be careful with DataLoaders: that cache might live too long or return values to people who shouldn't have access. It is therefore generally recommended that the DataLoaders get created for every request. If you look at DataSources (Apollo), it follows this same pattern: the class is instantiated on each request and the object is added to the context (ctx in your example). There are other DataLoaders that you would create outside the scope of the request, but then you have to solve least-recently-used eviction, expiration, and all of that. That's also an optimization you need much further down the road. A per-request setup is sketched below.
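Here is a minimal sketch of the per-request pattern just described, using the dataloader library (UserService.getUsersByFirebaseIds is a hypothetical batch function):
  const DataLoader = require('dataloader');
  const { ApolloServer } = require('apollo-server');

  // Build fresh loaders per request so the cache can't leak across users.
  const buildLoaders = () => ({
    userByFirebaseId: new DataLoader(async (firebaseIds) => {
      // One batched lookup instead of one query per field resolver.
      const users = await UserService.getUsersByFirebaseIds(firebaseIds);
      // DataLoader expects results in the same order as the keys.
      return firebaseIds.map(
        (id) => users.find((u) => u.firebaseId === id) || null
      );
    }),
  });

  const server = new ApolloServer({
    typeDefs,
    resolvers,
    context: ({ req }) => ({ req, loaders: buildLoaders() }),
  });

  // In a resolver: return ctx.loaders.userByFirebaseId.load(userId)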
Is there a way to prevent code duplication and/or cache the database call?
So first, this:
  const provider = getAuthenticationProvider()
should actually be injected into the GraphQL server request's context, so that in a resolver you would use it as, for example:
  ctx.authProvider
The rest follows Dan Crews' answer: parent resolvers, preferably with DataLoaders. In that case you actually won't need authProvider and will use only DataLoaders, depending on the entity type, passing extra variables (like the user id) from the context.
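For example, injecting the provider into the context might look like this (a sketch using Apollo Server's context option; names follow the question):
  const { ApolloServer } = require('apollo-server');

  const server = new ApolloServer({
    typeDefs,
    resolvers,
    context: ({ req }) => ({
      req,
      // Created once per request and shared by every resolver.
      authProvider: getAuthenticationProvider(),
    }),
  });

  // In a resolver:
  // const userId = await ctx.authProvider.getUserId(ctx.req.headers.authorization)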

What is a named Apollo Server field within a resolver? How does it work?

In a server I'm working on, I've noticed that it has several differently named resolvers and I'm unsure how they work:
export const resolver = {
  Query: {
    getUsersById(...
  },
  Mutation: {
    updateUserById(...
  },
  User: {
    accounts(...
  },
I understand that the Query field means the getUsersById resolver will be a query, and the same goes for resolvers within the Mutation field. I can query those by doing:
query {
  getUsersById(...)
}
I don't understand how this works with named fields since obviously I can't do:
user {
  accounts(...)
}
I can't find any documentation on this either, so any clarification would be appreciated!
Those may be field resolvers. When Apollo finishes a query or mutation resolver and is about to return an object (in your example, a User), it runs the object through any field resolvers present for that type. A field resolver can modify the object or add fields to it.
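For example, you never query User directly; the User.accounts field resolver runs whenever some query or mutation returns User objects and the selection includes accounts (a sketch; the ids argument is assumed):
  query {
    getUsersById(ids: ["1", "2"]) {
      accounts {   # handled by the User.accounts field resolver
        id
      }
    }
  }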

Prisma client type system creates strong coupling with service methods

I am using Prisma ORM with NestJS and it is awesome. Can you please help me understand how I can separate my database layer from my service methods, since the results produced by Prisma Client queries are of types generated by Prisma Client itself (so I won't have those types when I shift to, let's say, TypeORM)? How can I prevent this coupling of my service methods to result types generated by Prisma Client rather than my custom entities? Hope it makes sense.
The generated @prisma/client library is responsible for generating both the types and the custom entity classes. As a result, if you replace Prisma you end up losing both.
Here are two possible workarounds that can decouple the types of your service methods from the Prisma ORM.
Workaround 1: Generate types independently of Prisma
With this approach you can get rid of Prisma altogether in the future by manually defining the types for your functions. You can use the types generated by Prisma as a reference (or just copy-paste them directly). Let me show you an example.
Imagine this is your Prisma schema:
model Post {
  id        Int      @default(autoincrement()) @id
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  title     String   @db.VarChar(255)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
}

model User {
  id    Int     @default(autoincrement()) @id
  name  String?
  posts Post[]
}
You could define a getUserWithPosts function as follows:
// Copied over from '@prisma/client'. Modify as necessary.
type User = {
  id: number
  name: string | null
}

// Copied over from '@prisma/client'. Modify as necessary.
type Post = {
  id: number
  createdAt: Date
  updatedAt: Date
  title: string
  authorId: number
}

type UserWithPosts = User & { posts: Post[] }

const prisma = new PrismaClient()

// findUnique can return null, so the return type reflects that.
async function getUserWithPosts(userId: number): Promise<UserWithPosts | null> {
  const user = await prisma.user.findUnique({
    where: {
      id: userId,
    },
    include: {
      posts: true
    }
  })
  return user;
}
This way, you should be able to get rid of Prisma altogether and replace it with an ORM of your choice. One notable drawback is that this approach increases the maintenance burden: whenever the Prisma schema changes, you need to update the types manually.
Workaround 2: Generate types using Prisma
You could keep Prisma in your codebase simply to generate @prisma/client and use it for your types. This is possible with the Prisma.validator type exposed by @prisma/client. Here is a code snippet demonstrating this for the exact same function:
// 1: Define the validator
const userWithPosts = Prisma.validator<Prisma.UserArgs>()({
  include: { posts: true },
})

// 2: This type will include a user and all their posts
type UserWithPosts = Prisma.UserGetPayload<typeof userWithPosts>

// The function is the same as before
async function getUserWithPosts(userId: number): Promise<UserWithPosts | null> {
  const user = await prisma.user.findUnique({
    where: {
      id: userId,
    },
    include: {
      posts: true
    }
  })
  return user;
}
Additionally, you can always keep the Prisma types in sync with your current database state using the introspection feature. This works even for changes you have made with other ORMs, query builders, or raw SQL.
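For reference, introspection is run from the Prisma CLI; it rebuilds the schema from the live database, after which the client types can be regenerated:
  npx prisma db pull    # formerly: npx prisma introspect
  npx prisma generate   # regenerates @prisma/client and its types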
If you want more details, a lot of what I've mentioned here is touched upon in the Operating against partial structures of your model types concept guide in the Prisma docs.
Finally, if this doesn't solve your problem, I would request that you open a new issue with the problem and your use case. This really helps us track and prioritize problems that people are facing.

Apollo Cache ignoring Field Alias as key (MockedProvider)

I am seeing some differences in behaviour between ApolloProvider and MockedProvider and it's throwing an error in testing.
Assuming I have the following query:
query {
  Author {
    id: authorID
    name
  }
}
In ApolloProvider this query creates entries in the Apollo cache using the field alias as the key; each Author in the cache has an id, so Apollo can automatically merge entities.
When using MockedProvider, this is not the case. When I mock the following response:
const mockResponse = {
  data: {
    Author: {
      id: 'test!!',
      name: 'test'
    },
  },
}
I get the following error:
console.warn
  Cache data may be lost when replacing the Author field of a Query object.
  To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.Author field, so InMemoryCache can safely merge these objects:
  existing: {"authorID":"test!!"...
So the exact same query in ApolloProvider uses id (the field alias) as the key, while in MockedProvider it just adds authorID as another field entry; it ignores the field alias and has no key.
Obviously now nothing is able to merge. My first guess is that it's because MockedProvider does not have access to the schema, so it doesn't know that authorID is of type ID? Or am I way off?
One thing that's really weird to me is that my mockResponse doesn't even provide an authorID. My mockResponse is { id: "test!!" }, but the cache shows an entry for {"authorID":"test!!"}, so it has somehow 'unaliased' itself.
I'm really struggling to understand what is happening here. Any insight at all would be enormously useful.

How to combine a mutation and a query in a single query?

I have an operation getFoo that requires that the user is authenticated in order to access the resource.
The user authenticates using an authenticate mutation, e.g.:
mutation {
  authenticate (email: "foo", password: "bar") {
    id
  }
}
When the user is authenticated, two things happen:
The request context is enriched with the authentication details
A cookie is created
However, I would like to combine the authentication and the getFoo invocation into a single request, e.g.:
mutation {
  authenticate (email: "foo", password: "bar") {
    id
  }
}

query {
  getFoo {
    id
  }
}
The latter produces a syntax error.
Is there a way to combine a mutation with a query?
There's no way to send a mutation and a query in one request according to the GraphQL specification.
However, you can add any fields to the mutation payload. So if there are only a handful of queries that you need to support for the authenticate mutation, you could do this, for example:
mutation {
  authenticate (email: "foo", password: "bar") {
    id
    getFoo {
      id
    }
  }
}
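For this to work, the mutation's payload type has to expose a getFoo field in the schema, along these lines (a sketch; the type names are assumed):
  type AuthenticatePayload {
    id: ID
    getFoo: Foo
  }

  type Mutation {
    authenticate(email: String!, password: String!): AuthenticatePayload
  }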
At the end of the day, it might be better to keep the mutation and query separate, though. It gets hairy very quickly if you want to include many queries in many mutations like this. I don't see a problem with the overhead of an additional request here.
This is not possible without support by the server. However:
Some GraphQL APIs support batching of operations where you can send an array of queries and/or mutations in a single request.
Other APIs support HTTP/2+, where you can easily send multiple requests on the same connection with very little overhead.
If you control the API, you can also apply a little trick where you simply expose the getFoo field not just on the Query type but also on the Mutation type. There is not that much of a difference, except that mutation { … } fields are resolved sequentially. You can go even further and expose the whole Query type on mutations, allowing you to query arbitrary data and not just start querying the graph at the mutation result. It would look like this:
type Mutation {
  query: Query!
  … # authenticate() etc.
}

mutation authAndFoo {
  authenticate (email: "foo", password: "bar") {
    id
  }
  query {
    getFoo {
      id
    }
  }
}
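On the server side, the resolver for that extra query field can simply return an empty object, so the Query type's own field resolvers take over (a sketch):
  const resolvers = {
    Mutation: {
      // Returning an empty object is enough: everything selected
      // under `query` is resolved by the Query type's field resolvers.
      query: () => ({}),
      // ... authenticate, etc.
    },
  };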
