I am seeing some differences in behaviour between ApolloProvider and MockedProvider and it's throwing an error in testing.
Assuming I have the following query:
query {
  Author {
    id: authorID
    name
  }
}
In ApolloProvider, this query creates entries in the Apollo cache using the field alias as the key, so each Author in the cache has an id. Therefore, Apollo can automatically merge entities.
When using MockedProvider, this is not the case. When I mock the following response:
const mockResponse = {
  data: {
    Author: {
      id: 'test!!',
      name: 'test',
    },
  },
}
I get the following error:
console.warn
Cache data may be lost when replacing the Author field of a Query object.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.Author field, so InMemoryCache can safely merge these objects:
existing: {"authorID":"test!!"...
So the exact same query in ApolloProvider uses id (the field alias) as the key, while in MockedProvider it just adds authorID as another field entry, ignoring the field alias and leaving the object without a key.
Obviously now nothing is able to merge. My first guess is that it's because MockedProvider does not have access to the schema, so it doesn't know that authorID is of type ID? Or am I way off?
One thing that's really weird to me is that my mockResponse doesn't even provide an authorID. My mockResponse is { id: "test!!" }, but the cache shows an entry for {"authorID":"test!!"}, so it has somehow 'unaliased' itself.
I'm really struggling to understand what is happening here. Any insight at all would be enormously useful.
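For reference, one thing I've been experimenting with is passing a pre-configured cache to MockedProvider; a sketch, assuming the typename is Author and that keying on the real (un-aliased) field name is what the cache needs:

import { InMemoryCache } from '@apollo/client';
import { MockedProvider } from '@apollo/client/testing';

// Sketch: key Author entities on authorID, the real field name,
// since the cache appears to store fields under their schema names.
const cache = new InMemoryCache({
  typePolicies: {
    Author: {
      keyFields: ['authorID'],
    },
  },
});

// In the test:
// <MockedProvider mocks={mocks} cache={cache}>
//   <ComponentUnderTest />
// </MockedProvider>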
Related
In a server I'm working on, I've noticed it has differently named resolvers and I'm unsure how they work:
export const resolver = {
  Query: {
    getUsersById(...
  },
  Mutation: {
    updateUserById(...
  },
  User: {
    accounts(...
  },
}
I understand that the Query field means the getUsersById resolver will be a query, and the same goes for resolvers within the Mutation field. I can query those by doing:
query {
  getUsersById(...)
}
I don't understand how this works with the resolvers named after types, since obviously I can't do:
user {
  accounts(...)
}
I can't find any documentation on this either, so any clarification would be appreciated!
Those are field resolvers. When Apollo Server finishes running a query or mutation resolver and is about to return an object (a User, in your example), it runs that object through any field resolvers defined for its type. A field resolver can modify the object or add fields to it.
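To make this concrete, here is a minimal sketch (the db calls are hypothetical stand-ins for your data layer):

const resolvers = {
  Query: {
    // Returns a user object; it does not need to include `accounts`
    getUsersById: (parent, { id }, context) => db.users.findById(id),
  },
  User: {
    // Field resolver: runs whenever a query selects `accounts` on a User.
    // `user` here is whatever object the parent resolver returned above.
    accounts: (user, args, context) => db.accounts.findByUserId(user._id),
  },
};

// The client never queries `user { accounts }` at the top level; instead it
// selects `accounts` inside any query that returns a User:
//
// query {
//   getUsersById(id: "123") {
//     _id
//     accounts { ... }
//   }
// }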
I am using Prisma ORM with NestJS and it is awesome. Can you please help me understand how I can separate my database layer from my service methods? Results produced by Prisma Client queries are of types generated by Prisma Client itself, so I won't have those types when I shift to, let's say, TypeORM. How can I prevent this coupling, where my service methods return results of types generated by Prisma Client rather than my custom entities? Hope it makes sense.
The generated @prisma/client library is responsible for generating both the types as well as the custom entity classes. As a result, if you replace Prisma, you end up losing both.
Here are two possible workarounds that can decouple the types of your service methods from the Prisma ORM.
Workaround 1: Generate types independently of Prisma
With this approach you can get rid of Prisma altogether in the future by manually defining the types for your functions. You can use the types generated by Prisma as a reference (or just copy-paste them directly). Let me show you an example.
Imagine this is your Prisma Schema.
model Post {
  id        Int      @default(autoincrement()) @id
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  title     String   @db.VarChar(255)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
}

model User {
  id    Int     @default(autoincrement()) @id
  name  String?
  posts Post[]
}
You could define a getUserWithPosts function as follows:
// Copied over from '@prisma/client'. Modify as necessary.
type User = {
  id: number
  name: string | null
}

// Copied over from '@prisma/client'. Modify as necessary.
type Post = {
  id: number
  createdAt: Date
  updatedAt: Date
  title: string
  authorId: number
}

type UserWithPosts = User & { posts: Post[] }
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

async function getUserWithPosts(userId: number): Promise<UserWithPosts> {
  const user = await prisma.user.findUnique({
    where: {
      id: userId,
    },
    include: {
      posts: true,
    },
  })
  // findUnique returns null when no record matches, so narrow the type
  if (!user) throw new Error(`No user found with id ${userId}`)
  return user
}
This way, you should be able to get rid of Prisma altogether and replace it with an ORM of your choice. One notable drawback is that this approach increases the maintenance burden: whenever the Prisma schema changes, you need to update the types manually as well.
Workaround 2: Generate types using Prisma
You could keep Prisma in your codebase simply to generate @prisma/client and use it for your types. This is possible with the Prisma.validator helper that @prisma/client exposes. Here is a code snippet demonstrating this for the exact same function:
import { Prisma } from '@prisma/client'

// 1: Define the validator
const userWithPosts = Prisma.validator<Prisma.UserArgs>()({
  include: { posts: true },
})

// 2: This type will include a user and all their posts
type UserWithPosts = Prisma.UserGetPayload<typeof userWithPosts>
// Function is the same as before
async function getUserWithPosts(userId: number): Promise<UserWithPosts> {
  const user = await prisma.user.findUnique({
    where: {
      id: userId,
    },
    include: {
      posts: true,
    },
  })
  if (!user) throw new Error(`No user found with id ${userId}`)
  return user
}
Additionally, you can always keep the Prisma types updated to your current database state using introspection. This works even for changes you have made with other ORMs, query builders, or plain SQL.
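For example, assuming a standard Prisma setup:

npx prisma db pull    # update schema.prisma from the current database state
npx prisma generate   # regenerate @prisma/client, including the types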
If you want more details, a lot of what I've mentioned here is touched upon in the Operating against partial structures of your model types concept guide in the Prisma docs.
Finally, if this doesn't solve your problem, I would request that you open a new issue with the problem and your use case. This really helps us to track and prioritize problems that people are facing.
Let's say my GraphQL server wants to fetch the following data as JSON, where person3 and person5 are some ids:
"persons": {
"person3": {
"id": "person3",
"name": "Mike"
},
"person5": {
"id": "person5",
"name": "Lisa"
}
}
Question: how do I create the schema type definition for this with Apollo?
The keys person3 and person5 here are dynamically generated depending on my query (i.e. the area used in the query). So at another time I might get person1, person2, person3 returned.
As you can see, persons is not an iterable, so the following GraphQL type definition I wrote with Apollo won't work:
type Person {
  id: String
  name: String
}

type Query {
  persons(area: String): [Person]
}
The keys in the persons object may always be different.
One solution of course would be to transform the incoming JSON data to use an array for persons, but is there no way to work with the data as such?
GraphQL relies on both the server and the client knowing ahead of time what fields are available for each type. In some cases, the client can discover those fields (via introspection), but the server always needs to know them ahead of time. So dynamically generating those fields based on the returned data is not really possible.
You could utilize a custom JSON scalar (the graphql-type-json module) and return that for your query:
type Query {
  persons(area: String): JSON
}
By utilizing JSON, you bypass the requirement for the returned data to fit any specific structure, so you can send back whatever you want as long as it's properly formatted JSON.
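Wiring it up might look roughly like this (a sketch; fetchPersonsByArea is a hypothetical data source):

const { ApolloServer, gql } = require('apollo-server');
const { GraphQLJSON } = require('graphql-type-json');

const typeDefs = gql`
  scalar JSON

  type Query {
    persons(area: String): JSON
  }
`;

const resolvers = {
  // Attach the scalar implementation to the schema
  JSON: GraphQLJSON,
  Query: {
    // Hypothetical data source; the keyed object is returned as-is
    persons: (parent, { area }) => fetchPersonsByArea(area),
  },
};

new ApolloServer({ typeDefs, resolvers }).listen();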
Of course, there are significant disadvantages to doing this. For example, you lose the safety net provided by the type(s) you would have previously used (literally any structure could be returned, and if you return the wrong one, you won't find out about it until the client tries to use it and fails). You also lose the ability to use resolvers for any fields within the returned data.
But... your funeral :)
As an aside, I would consider flattening out the data into an array (like you suggested in your question) before sending it back to the client. If you're writing the client code and working with a dynamically-sized list of customers, chances are an array will be much easier to work with than an object keyed by id. If you're using React, for example, and displaying a component for each customer, you'll end up converting that object to an array to map it anyway. In designing your API, I would make client usability a higher consideration than avoiding additional processing of your data.
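The transform itself is a one-liner on the server; a sketch, assuming the keyed object from the question is in personsById:

// Flatten { person3: {...}, person5: {...} } into [{...}, {...}]
const personsArray = Object.values(personsById);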
You can write your own GraphQLScalarType to precisely describe your object and its dynamic keys: what you allow, what you disallow, and what you transform.
See https://graphql.org/graphql-js/type/#graphqlscalartype
You can have a look at taion/graphql-type-json, where he creates a scalar that allows and transforms any kind of content:
https://github.com/taion/graphql-type-json/blob/master/src/index.js
I had a similar problem with dynamic keys in a schema, and ended up going with a solution like this:
query lookupPersons {
  persons {
    personKeys
    person3: personValue(key: "person3") {
      id
      name
    }
  }
}
returns:
{
  "data": {
    "persons": {
      "personKeys": ["person1", "person2", "person3"],
      "person3": {
        "id": "person3",
        "name": "Mike"
      }
    }
  }
}
By shifting the complexity into the query, it simplifies the response shape.
The advantage over the JSON approach is that the client doesn't need to do any deserialisation.
Additional info for Venryx: a possible schema to fit my query looks like this:
type Person {
  id: String
  name: String
}

type PersonsResult {
  personKeys: [String]
  personValue(key: String): Person
}

type Query {
  persons(area: String): PersonsResult
}
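The matching resolvers might look something like this (fetchPersonsByArea is a hypothetical data source):

const resolvers = {
  Query: {
    // Returns the raw keyed object, e.g. { person1: {...}, person3: {...} }
    persons: (parent, { area }) => fetchPersonsByArea(area),
  },
  PersonsResult: {
    personKeys: (personsById) => Object.keys(personsById),
    personValue: (personsById, { key }) => personsById[key] || null,
  },
};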
As an aside, if your data set for persons gets large enough, you're probably going to want pagination on personKeys as well, at which point you should look into https://relay.dev/graphql/connections.htm
I have GraphQL resolvers that resolve nested data.
For example, these are my type definitions:
type Users {
  _id: String
  company: Company
}
For Users I have a resolver that resolves company as:
Users: {
  company: (instance, args, context, info) => {
    return instance.company && Company.find({ _id: instance.company });
  },
}
The above example works perfectly fine when I query for:
query {
  Users {
    _id
    name
    username
    company {
      _id
      PAN
      address
    }
  }
}
But sometimes I don't need to use the company resolver inside Users, because the company data already comes along with the user, so I just need to pass through what's in the user object (no database call needed here).
I can achieve this by checking whether instance.company is an _id or an object: if it's an _id, fetch the company from the database; otherwise, resolve whatever is coming in.
But the problem is I have this type of resolver in many places, so I don't think it's a good idea to repeat this check everywhere.
Is there a better way, perhaps some configuration I can define, to skip this resolver check?
Any feedback or suggestions would be highly appreciated.
Thanks
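For context, the check I keep duplicating looks roughly like this when factored into a helper (resolveRef is my own placeholder name):

// Placeholder helper: resolve a reference field that may already be populated.
const resolveRef = (Model, field) => (instance) => {
  const value = instance[field];
  if (!value) return null;
  // If the parent already carries the full object, pass it through;
  // otherwise treat it as an _id and look it up in the database.
  return typeof value === 'object' ? value : Model.find({ _id: value });
};

const resolvers = {
  Users: {
    company: resolveRef(Company, 'company'),
  },
};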
I am trying to add documents to Elasticsearch. According to the Elasticsearch documentation, we can add a document even if we don't provide an id... See Here
How can I add a document to Elasticsearch even when it doesn't have any ID?
My current code looks like this
var params = _.defaults({}, {
  index: index,
  type: type, // 'customer'
  id: data.id || null,
  body: data
})

debug(params)

return this.client.create(params);
The above code gives this error
{
  "error": "Unable to build a path with those params. Supply at least index, type, id"
}
Any hint would help, thanks
With the create call you MUST provide an id.
If you are not sure whether an ID will be present in your data, you can use the client.index() function instead. With that function, Elasticsearch will auto-generate an ID if none is provided.
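A minimal sketch using the params from the question:

// client.index() accepts an optional id; Elasticsearch auto-generates one
// when it is omitted, which client.create() does not allow.
var params = {
  index: index,
  type: type, // 'customer'
  body: data
};

if (data.id) {
  params.id = data.id; // only set an id when the data actually has one
}

return this.client.index(params);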