For some reason I'm getting the error Unique constraint failed on the fields: (id) when trying to create a new Artist document.
Below is the function I'm calling.
async create(createArtistInput: CreateArtistInput): Promise<Artist> {
  console.log(createArtistInput, 'create artist input')
  const slug = slugify(createArtistInput.name, {
    replacement: '-',
    strict: true,
  })
  return this.db.artist.create({
    data: {
      name: createArtistInput.name,
      spotifyArtistId: createArtistInput.spotifyArtistId,
      spotifyArtistName: createArtistInput.spotifyArtistName,
      slug,
    },
  })
}
The console log prints the following, so I don't understand why the unique constraint on id is failing, as I'm not passing one in; I'm letting the Prisma schema handle that.
{
  name: 'twofiveone',
  spotifyArtistId: '5Fex9xz9rkPqQqMBVtuIrE',
  spotifyArtistName: 'twofiveone'
} create artist input
Here is the Prisma schema, if needed:
model Artist {
  id                Int      @id @default(autoincrement())
  name              String
  slug              String?
  createdAt         DateTime @default(now())
  updatedAt         DateTime @updatedAt
  spotifyArtistId   String?
  spotifyArtistName String?
}
Does anyone have any idea what is happening? It's as if I can't create any new artists for some reason.
Autoincrement creates its own sequence starting from 1, which you can read more about here.
If you insert records with explicit ids, Postgres doesn't know about them, and the sequence still tries to start from 1. If an id of 1 is already present, the insert will throw an error.
So the best practice in this case is to always let the database generate the id for you instead of adding it manually. Hope this helps.
The most likely reason for this is that at some point rows were manually added with explicit ids (starting from 0), so the sequence never advanced past them.
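If the sequence really has fallen behind manually inserted rows, one way to realign it is to bump the sequence past the current maximum id. A minimal sketch, assuming the default table name "Artist" that Prisma derives from the schema above:

-- Move the sequence past the current maximum id so the next
-- autoincrement value no longer collides with an existing row.
SELECT setval(
  pg_get_serial_sequence('"Artist"', 'id'),
  COALESCE((SELECT MAX(id) FROM "Artist"), 1)
);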
I'm creating the backend for a simple app which allows users to create, update, and delete products. I'm using Express as my framework, with Postgres as my DB and Prisma (which I'm new to) as my ORM. Users and products have a one-to-many relationship. Prisma's documentation states that when updating a record, you should use the update method - so to update the name of a product with a given ID, your code would look something like this:
export const updateProduct = async (req, res) => {
  const [productID, newProductName, userID] = [req.params.id, req.body.name, res.locals.user.id];
  const product = await prisma.product.update({
    where: {
      id: productID,
    },
    data: {
      name: newProductName
    }
  });
  res.status(200);
  res.json(product);
};
However, there's a problem here - I'm not checking to see that the product with the provided ID belongs to the user that has sent the request to update it. I have the ID of the user who has sent the request in the variable userID, and each product in the DB has a field belongsToID which is set to the ID of the user that the product belongs to. I should theoretically therefore be able to modify my query to get the product with the specified ID and a matching belongsToID like so:
export const updateProduct = async (req, res) => {
  const [productID, newProductName, userID] = [req.params.id, req.body.name, res.locals.user.id];
  const product = await prisma.product.update({
    where: {
      id: productID,
      belongsToID: userID
    },
    data: {
      name: newProductName
    }
  });
  res.status(200);
  res.json(product);
};
That, however, does not work - I get the following error: Type '{ id: any; belongsToID: any; }' is not assignable to type 'ProductWhereUniqueInput'. Object literal may only specify known properties, and 'belongsToId' does not exist in type 'ProductWhereUniqueInput'.ts(2322).
It appears that when trying to do a 'findUnique', Prisma doesn't allow non-unique fields to be used in the query (even if the combination of both fields is unique, as is the case here). I do get that logically, my query doesn't make much sense - the ID alone is already enough to find a unique entry without the second field, so in that sense, the second field is totally redundant. Yet how else am I meant to check that the belongsToID is what it should be before updating the record? Is there somewhere else within the object passed to .update where I can provide a check to be performed on the retrieved record before performing the update?
I believe that creating an index would allow me to query for both fields at once - but why should I have to create an index when the ID (which is already indexed) alone is all I need to retrieve the record I need? What I really need is a way to perform a check on a retrieved record before performing the update when using Prisma.table_name.update(), not a way to query for something with a unique combination of fields.
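For what it's worth, a common way around this (not from the original post) is Prisma's updateMany, which does accept non-unique fields in its where clause and folds the ownership check into the same query. A minimal sketch against the same hypothetical product model; note that updateMany returns a { count } batch payload rather than the updated record:

export const updateProduct = async (req, res) => {
  const [productID, newProductName, userID] = [req.params.id, req.body.name, res.locals.user.id];
  // The ownership check happens inside the query itself: a row is
  // only updated when both id and belongsToID match.
  const result = await prisma.product.updateMany({
    where: {
      id: productID,
      belongsToID: userID
    },
    data: {
      name: newProductName
    }
  });
  if (result.count === 0) {
    // Either the product doesn't exist or it belongs to another user.
    return res.status(404).json({ error: 'Product not found' });
  }
  res.status(200);
  res.json({ updated: result.count });
};

If you need the updated record back, a follow-up findUnique by id after the guarded updateMany is one option.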
I'm learning MongoDB and I have the following question: in one schema, I have a reference to another model - I'm storing the IDs of books. I have a books model where I have a reference to other books, saving their IDs.
The IDs of 'similarBooks' I will insert manually, but the IDs of the books will always be in the format
ObjectId("1234").
If a user clicks on the name of a book, a query will be made - findById. However, the IDs I inserted manually are just strings, not ObjectId("id"), so it wouldn't find the book. What is the best way to handle this? Do I take the ID (the one that's just a string) in my query and convert it to ObjectId("id"), or should I not insert the ID as a string at all and convert it to an ObjectId up front? If so, how? So far I have just been adding data for these models in Studio 3T.
The same question applies to writing tests: if I have IDs stored as strings, do I convert them to ObjectId?
Thank you!
const bookSchema = new mongoose.Schema({
  title: {
    type: String,
    required: true
  },
  similarBooks: {
    name: {
      type: [String] // would be only 2
    },
    id: {
      type: [String] // would be only 2
    }
  }
  ...
})
There is an answer on the advantage of saving an ID as an ObjectId instead of a string here; mainly, it saves on space.
MongoDb: Benefit of using ObjectID vs a string containing an Id?
So to answer your question, I would always convert that string ID to an ObjectId before adding it to your similarBooks array.
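A minimal sketch of the conversion, assuming Mongoose (the two hex strings are placeholder IDs for illustration):

const mongoose = require('mongoose');

// Convert a plain string ID to an ObjectId before storing it.
// This throws if the string is not a valid 24-character hex
// ObjectId, so validate or wrap in try/catch for untrusted input.
const toObjectId = (idString) => new mongoose.Types.ObjectId(idString);

const similarIds = [
  '64a1f0c2e5b4a92d3c8f1a77',
  '64a1f0c2e5b4a92d3c8f1a78',
].map(toObjectId);

Alternatively, declare the field as an ObjectId in the schema (id: { type: [mongoose.Schema.Types.ObjectId], ref: 'Book' }) and Mongoose will cast valid string IDs for you on save and in queries like findById.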
I am using Sequelize in my Node.js server. I am ending up with validation errors because my code tries to write the record twice instead of creating it once and then updating it, since it is already in the DB (PostgreSQL).
This is the flow I use when the request runs:
const latitude = req.body.latitude;
var metrics = await models.user_car_metrics.findOne({ where: { user_id: userId, car_id: carId } });
if (metrics) {
  metrics.latitude = latitude;
  .....
} else {
  metrics = models.user_car_metrics.build({
    user_id: userId,
    car_id: carId,
    latitude: latitude
    ....
  });
}
var savedMetrics = await metrics.save();
return res.status(201).json(savedMetrics);
At times, if the client calls the endpoint twice or more in quick succession, the code above tries to save two new rows in user_car_metrics with the same user_id and car_id, both foreign keys to the user and car tables.
I have a constraint:
ALTER TABLE user_car_metrics DROP CONSTRAINT IF EXISTS user_id_car_id_unique, ADD CONSTRAINT user_id_car_id_unique UNIQUE (car_id, user_id);
Point is, there can only be one entry for a given user_id and car_id pair.
Because of that, I started seeing validation issues, and after looking into it and adding logs I realized the code above adds duplicates to the table (without the constraint). With the constraint in place, I get validation errors when the code tries to insert the duplicate record.
The question is: how do I avoid this problem? How do I structure the code so that it won't try to create duplicate records? Is there a way to serialize this?
If you have a unique constraint then you can use upsert to either insert or update the record depending on whether you have a record with the same primary key value or column values that are in the unique constraint.
await models.user_car_metrics.upsert({
  user_id: userId,
  car_id: carId,
  latitude: latitude
  ....
})
See upsert
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If update data contains PK field, then PK is selected as the default conflict key. Otherwise, first unique constraint/index will be selected, which can satisfy conflict key requirements.
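To make that concrete, the statement Sequelize ends up issuing for this upsert is shaped roughly like the following (placeholder values; a sketch rather than the exact generated SQL):

INSERT INTO user_car_metrics (user_id, car_id, latitude)
VALUES (42, 7, 52.52)
ON CONFLICT (car_id, user_id)  -- the unique constraint defined above
DO UPDATE SET latitude = EXCLUDED.latitude;

Because the conflict target matches the user_id_car_id_unique constraint, two racing requests can no longer produce duplicate rows: the second one simply updates the row the first one inserted.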
I am using Prisma ORM with NestJS and it is awesome. Can you please help me understand how I can separate my database layer from my service methods, since the results produced by Prisma client queries are of types generated by the Prisma client itself (so I won't have those types when I shift to, let's say, TypeORM)? How can I prevent my service methods from being coupled to types generated by the Prisma client rather than my custom entities? Hope that makes sense.
The generated @prisma/client library is responsible for generating both the types as well as the custom entity classes. As a result, if you replace Prisma you end up losing both.
Here are two possible workarounds that can decouple the types of your service methods from the Prisma ORM.
Workaround 1: Generate types independently of Prisma
With this approach you can get rid of Prisma altogether in the future by manually defining the types for your functions. You can use the types generated by Prisma as reference (or just copy paste them directly). Let me show you an example.
Imagine this is your Prisma Schema.
model Post {
  id        Int      @default(autoincrement()) @id
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
  title     String   @db.VarChar(255)
  author    User     @relation(fields: [authorId], references: [id])
  authorId  Int
}

model User {
  id    Int     @default(autoincrement()) @id
  name  String?
  posts Post[]
}
You could define a getUserWithPosts function as follows:
// Copied over from '@prisma/client'. Modify as necessary.
type User = {
  id: number
  name: string | null
}

// Copied over from '@prisma/client'. Modify as necessary.
type Post = {
  id: number
  createdAt: Date
  updatedAt: Date
  title: string
  authorId: number
}

type UserWithPosts = User & { posts: Post[] }

const prisma = new PrismaClient()

async function getUserWithPosts(userId: number): Promise<UserWithPosts> {
  let user = await prisma.user.findUnique({
    where: {
      id: userId,
    },
    include: {
      posts: true
    }
  })
  // findUnique returns null when no user matches, so guard before
  // returning to satisfy the declared return type.
  if (!user) throw new Error(`No user found with id ${userId}`)
  return user;
}
This way, you should be able to get rid of Prisma altogether and replace it with an ORM of your choice. One notable drawback is that this approach increases the maintenance burden: upon changes to the Prisma schema, you need to update the types manually.
Workaround 2: Generate types using Prisma
You could keep Prisma in your codebase simply to generate @prisma/client and use it for your types. This is possible with the Prisma.validator type that is exposed by @prisma/client. A code snippet to demonstrate this for the exact same function:
// 1: Define the validator
const userWithPosts = Prisma.validator<Prisma.UserArgs>()({
  include: { posts: true },
})

// 2: This type will include a user and all their posts
type UserWithPosts = Prisma.UserGetPayload<typeof userWithPosts>

// The function is the same as before
async function getUserWithPosts(userId: number): Promise<UserWithPosts> {
  let user = await prisma.user.findUnique({
    where: {
      id: userId,
    },
    include: {
      posts: true
    }
  })
  // Same guard as above: findUnique may return null.
  if (!user) throw new Error(`No user found with id ${userId}`)
  return user;
}
Additionally, you can always keep the Prisma types in sync with your current database state using the Introspect feature. This will work even for changes you have made with other ORMs, query builders, or raw SQL.
If you want more details, a lot of what I've mentioned here is touched upon in the Operating against partial structures of your model types concept guide in the Prisma docs.
Finally, if this doesn't solve your problem, I would request that you open a new issue with the problem and your use case. This really helps us to track and prioritize problems that people are facing.
I am seeing some differences in behaviour between ApolloProvider and MockedProvider and it's throwing an error in testing.
Assuming I have the following query:
query {
  Author {
    id: authorID
    name
  }
}
In ApolloProvider this query creates entries in the Apollo cache using the field alias as the key; each Author in the cache has an id. Therefore, Apollo can automatically merge entities.
When using MockedProvider, this is not the case. When I mock the following response:
const mockResponse = {
  data: {
    Author: {
      id: 'test!!',
      name: 'test'
    },
  },
}
I get the following error:
console.warn
Cache data may be lost when replacing the Author field of a Query object.
To address this problem (which is not a bug in Apollo Client), define a custom merge function for the Query.Author field, so InMemoryCache can safely merge these objects:
existing: {"authorID":"test!!"...
So the exact same query in ApolloProvider uses id (field alias) as the key and in MockedProvider it just adds authorID as another field entry. It ignores the field alias and has no key.
Obviously now nothing is able to merge. My first guess is that it's because the MockedProvider does not have access to the schema so it doesn't know that authorID is of type ID? Or am I way off?
One thing that's really weird to me is that my mockResponse doesn't even provide an authorID. My mockResponse is { id: "test!!" } but the cache shows an entry for {"authorID":"test!!"}, so it's somehow 'unaliased' itself.
I'm really struggling to understand what is happening here. Any insight at all would be enormously useful.
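One way to probe that schema hypothesis is to hand MockedProvider an explicitly configured cache; it accepts a cache prop. A sketch, assuming @apollo/client v3 and @testing-library/react, where AuthorView is a hypothetical stand-in for the component under test:

import { gql, InMemoryCache } from '@apollo/client';
import { MockedProvider } from '@apollo/client/testing';
import { render } from '@testing-library/react';

const AUTHOR_QUERY = gql`
  query {
    Author {
      id: authorID
      name
    }
  }
`;

// Without a schema, the cache cannot know that authorID is the ID
// field, so spell it out. keyFields refers to the canonical
// (unaliased) field name, i.e. authorID rather than the alias id.
const cache = new InMemoryCache({
  typePolicies: {
    Author: {
      keyFields: ['authorID'],
    },
  },
});

const mocks = [
  {
    request: { query: AUTHOR_QUERY },
    // __typename is required for the cache to normalize the object.
    result: {
      data: {
        Author: { __typename: 'Author', id: 'test!!', name: 'test' },
      },
    },
  },
];

render(
  <MockedProvider mocks={mocks} cache={cache}>
    <AuthorView />
  </MockedProvider>
);

If Author objects then normalize properly in the test cache, that would suggest the difference really is the missing schema knowledge rather than anything alias-specific.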