How to use ramda mergeRight in TypeScript and respect interface definitions - node.js

I'm trying to make a copy of an object and change properties using Ramda's mergeRight function. The problem is that it allows me to merge in properties that do not exist in the interface definition.
import {mergeRight} from "ramda";

export interface User {
  readonly userId: string
  readonly username: string
}

const user: User = {
  userId: "12345",
  username: "SomeUser"
}

// I want this to be a compile-time error, because "something" is not a property of the User interface
const updatedUser: User = mergeRight(user, {something: "3"})
Is there any way I can ensure that the properties I am merging are part of the User type, without having to specify an entire new User object (thus defeating the advantage of mergeRight)? This would prevent a simple typo from causing a runtime error that is difficult to debug.
Ideally I would like Typescript to detect this at compile time

To filter out keys that are not part of user, use R.pick to take just the keys that exist in User from the new object.
Note that this only affects the root level of the object, not deeper mismatches.
const { pick, keys, mergeDeepRight } = R

const user = {
  userId: "12345",
  username: "SomeUser"
}

const getUserKeys = pick(keys(user))

// "something" is not a property of the User interface, so pick drops it before the merge
const updatedUser = mergeDeepRight(user, getUserKeys({
  something: "3"
}))

console.log(updatedUser)

It seems that simply casting the object literal to User gives the error I want. That's good enough for my use case.
//This causes a compile time error
const updatedUser: User = mergeRight(user, {something: "3"} as User)
//This does not
const updatedUser2: User = mergeRight(user, {userId: "3"} as User)
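Another way to get the compile-time check without a cast is to constrain the update with Partial<User>, which rejects unknown keys via excess property checking. A minimal sketch (patchUser is an illustrative helper, and object spread stands in for mergeRight so the sketch is dependency-free):

```typescript
interface User {
  readonly userId: string
  readonly username: string
}

// Partial<User> only admits keys declared on User, so passing an object
// literal with an unknown key fails to compile; the spread below stands in
// for Ramda's mergeRight to keep this sketch self-contained.
function patchUser(user: User, changes: Partial<User>): User {
  return { ...user, ...changes }
}

const user: User = { userId: "12345", username: "SomeUser" }
const updated = patchUser(user, { username: "NewUser" })
// patchUser(user, { something: "3" })  // compile-time error, as desired
console.log(updated.username)  // NewUser
```

Note that, like the cast, this still will not catch a wrong value for a known key such as { userId: "3" }; it only rejects keys that are not part of User.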

Related

How do you omit fields in TypeGraphQL

Let's say we have the following User model.
{ id: ID, email: string, username: string }
Then I want to define two queries:
Used by the owner to get their settings page, so it contains sensitive information such as the email (and perhaps a SIN number)
Used by other users to search for a user (by username), where we do NOT want to expose the email or SIN number
I have been researching the documentation and cannot find how to accomplish this. I was thinking of grabbing the info for the fields manually and parsing it per query, but that seems like an oversight.
UPDATE:
Here is sort of what I am trying to do:
class User {
  @Field(() => ID)
  id: string;

  @Authorized("CURRENT_USER")
  @Field({ nullable: true })
  email: string;

  @Field()
  username: string;
}
Resolver:
export default class UserResolver {
  @Authorized("CURRENT_USER")
  @Query(() => User)
  async user(@Arg('username', () => String) username: string) {
    // TODO: if username is current user then allow email
    // else do not allow email (I need the auth checker in here)
  }
}
If I understand your question correctly, you should be able to use the @Authorized decorator in TypeGraphQL. You can add it to the email field in your User model, and the same approach should work for the SIN field as well.
Have a look here: https://typegraphql.com/docs/authorization.html
For example your User model could look something like this:
class User {
  @Field()
  id: ID;

  @Authorized("LOGGEDINUSER")
  @Field({ nullable: true })
  email: string;

  @Field()
  username: string;
}
You will have to allow the email field to be nullable.
You will also need to define an authChecker; in it you can run your logic to check whether the user is the owner of the data and, if so, grant them access.
An authChecker can look something like this:
export const customAuthChecker: AuthChecker<Context> = (
  { args, context, info, root },
  roles
) => {
  // roles is an array of strings containing the authorization required to
  // access the current resource, as specified in the @Authorized decorator
  if (roles.includes("LOGGEDINUSER")) {
    // here, check whether the user is actually logged in and allowed to
    // access the resource; return true if they are
  }
  return false;
};
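As a sketch of what the ownership check inside that branch could look like: assuming your auth middleware puts the decoded caller's id on the context, and that root holds the resolved User row (both assumptions are mine, not from the question), the checker can simply compare the two ids:

```typescript
interface Context {
  userId?: string  // assumed: set by your auth middleware after verifying the token
}

// Standalone sketch of the ownership logic: grant access to a field guarded
// by the "LOGGEDINUSER" role only when the resolved user IS the caller.
const customAuthChecker = (
  { root, context }: { root: { id: string }; context: Context },
  roles: string[]
): boolean => {
  if (roles.includes("LOGGEDINUSER")) {
    return context.userId !== undefined && root.id === context.userId
  }
  return false
}

console.log(customAuthChecker({ root: { id: "1" }, context: { userId: "1" } }, ["LOGGEDINUSER"]))  // true
console.log(customAuthChecker({ root: { id: "2" }, context: { userId: "1" } }, ["LOGGEDINUSER"]))  // false
```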
You will also need to change your call to buildSchema to include the custom authChecker and the authMode.
For example:
const schema = await buildSchema({
  resolvers: [UserResolver],
  authChecker: customAuthChecker,
  authMode: "null",
});
Note that this will still return an email field, but when the user does not meet the authorization requirements it will return null instead of the actual email.

GraphQL/Mongoose: How to prevent calling the database once per requested field

I'm new to GraphQL, but the way I understand it is: if I have a User type like:
type User {
  email: String
  userId: String
  firstName: String
  lastName: String
}
and a query such as this:
type Query {
  currentUser: User
}
implementing the resolver like this:
Query: {
  currentUser: {
    email: async (_: any, __: any, ctx: any, ___: any) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      const { email } = await UserService.getUserByFirebaseId(userId)
      return email;
    },
    firstName: async (_: any, __: any, ctx: any, ___: any) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      const { firstName } = await UserService.getUserByFirebaseId(userId)
      return firstName;
    }
    // same for other fields
  }
},
It's clear that something's wrong, since I'm duplicating the code and the database is also being queried once per requested field. Is there a way to prevent the code duplication and/or cache the database call?
And what about the case where I need to populate a MongoDB field? Thanks!
I would rewrite your resolver like this:
// import ...;
type User {
  email: String
  userId: String
  firstName: String
  lastName: String
}

type Query {
  currentUser: User
}
const resolvers = {
  Query: {
    currentUser: async (parent, args, ctx, info) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      return UserService.getUserByFirebaseId(userId);
    }
  }
};
This should work, but with more information the code could be improved further (see my comment).
You can read more about resolvers here: https://www.apollographql.com/docs/apollo-server/data/resolvers/
A few things:
1a. Parent Resolvers
As a general rule, any given resolver should return enough information to either resolve the value of the child or enough information for the child to resolve it on their own. Take the answer by Ruslan Zhomir. This does the database lookup once and returns those values for the children. The upside is that you don't have to replicate any code. The downside is that the database has to fetch all of the fields and return those. There's a balance act there with trade-offs. Most of the time, you're better off using one resolver per object. If you start having to massage data or pull fields from other locations, that's when I generally start adding field-level resolvers like you have.
1b. Field Level Resolvers
The pattern you're showing of ONLY field-level resolvers (no parent object resolvers) can be awkward. Take your example. What "is expected" to happen if the user isn't logged in?
I would expect a result of:
{
  currentUser: null
}
However, if you build ONLY field-level resolvers (no parent resolver that actually looks in the database), your response will look like this:
{
  currentUser: {
    email: null,
    userId: null,
    firstName: null,
    lastName: null
  }
}
If, on the other hand, you actually look in the database far enough to verify that the user exists, why not return that object? It's another reason why I recommend a single parent resolver. Again, once you start dealing with OTHER data sources or expensive actions for other properties, that's where you want to start adding child resolvers:
const resolvers = {
  Query: {
    currentUser: async (parent, args, ctx, info) => {
      const provider = getAuthenticationProvider()
      const userId = await provider.getUserId(ctx.req.headers.authorization)
      return UserService.getUserByFirebaseId(userId);
    }
  },
  User: {
    avatarUrl(parent) {
      const hash = md5(parent.email)
      return `https://www.gravatar.com/avatar/${hash}`;
    },
    friends(parent, args, ctx) {
      return UsersService.findFriends(parent.id);
    }
  }
}
2a. DataLoaders
If you really like the child-property-resolver pattern (there's a director at PayPal who EATS IT UP), the DataLoader pattern (and library) uses memoization with cache keys to look the user up in the database once and cache that result. Each resolver asks the service to fetch the user ("here's the firebaseId"), and that service caches the response. Your resolver code would stay the same, but the backend lookup would only hit the database once, with the other calls returning from cache. The pattern you're showing here is one I've seen people use, and while it's often a premature optimization, it may be what you want; if so, DataLoaders are an answer. If you don't want to go the route of duplicated code or "magic resolver objects", you're probably better off using just a single resolver.
Also, make sure you're not falling victim to the "object of nulls" problem described above. If the parent doesn't exist, the parent should be null, not just all of the children.
2b. DataLoaders and Context
Be careful with DataLoaders. That cache might live too long or return values for people who didn't have access. It is generally, therefore, recommended that the dataLoaders get created for every request. If you look at DataSources (Apollo), it follows this same pattern. The class is instantiated on each request and the object is added to the Context (ctx in your example). There are other dataLoaders that you would create outside of the scope of the request, but you have to solve Least-Used and Expiration and all of that if you go that route. That's also an optimization you need much further down the road.
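To make that caching behavior concrete, here is a minimal per-request memoizing loader in the spirit of DataLoader (the real library also batches; createUserLoader, demo, and the User shape are illustrative names of mine, not from the question):

```typescript
type User = { id: string; email: string; firstName: string }

// Memoize by id: the first load() for an id calls fetchUser and caches the
// promise; later calls for the same id reuse it, so the database is hit once
// no matter how many field resolvers ask for the same user.
function createUserLoader(fetchUser: (id: string) => Promise<User>) {
  const cache = new Map<string, Promise<User>>()
  return function load(id: string): Promise<User> {
    let pending = cache.get(id)
    if (!pending) {
      pending = fetchUser(id)
      cache.set(id, pending)
    }
    return pending
  }
}

// Create one loader per request (e.g. on the GraphQL context) so the cache
// cannot leak data across users or live too long.
async function demo(): Promise<number> {
  let databaseCalls = 0
  const load = createUserLoader(async (id) => {
    databaseCalls++
    return { id, email: "user@example.com", firstName: "Ada" }
  })
  await Promise.all([load("42"), load("42"), load("42")])  // three field resolvers, one fetch
  return databaseCalls
}

demo().then((n) => console.log(n))  // 1
```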
Is there a way to prevent code-duplication and/or caching the database call?
So first, this:
const provider = getAuthenticationProvider()
should actually be injected into the GraphQL server's request context, so that in a resolver you would use it as, for example:
ctx.authProvider
The rest follows Dan Crews' answer: parent resolvers, preferably with DataLoaders. In that case you actually won't need authProvider in the resolvers and will use only DataLoaders, passing extra values from the context (like the user id) depending on the entity type.

Exclude user's password from query with Prisma 2

Recently I started working on a new project to learn some new technologies (Prisma 2, a REST API with Express, etc.). Though, I faced a problem.
My app has a user authentication system, and the user model has a password column. So, when the client requests a user, the backend selects all the columns from the database, including the password (which is hashed, by the way).
I tried not to select the password column in the Prisma query, like this:
await prisma.user.findUnique({
  where: {
    ...
  },
  select: {
    password: false
  }
});
But I got an error from Prisma saying that the select should contain at least one truthy value. Thus, I added id: true to the select, made an API request, and saw that only the id was returned for the user.
As I understand it, Prisma expects me to add every column I care about to the select object. But I need a lot of columns from the user, I make a lot of queries that fetch users, and I cannot just write out all the fields I need every time.
So, I wanted to ask you if there is a legit way to do that.
PS: I don't take "use rawQuery instead" as a solution.
The only legit way is adding column: true for each column you want to include. There are feature requests for excluding columns here, so it would be great if you could add a 👍 to the request relevant to you so that we can look at the priority:
https://github.com/prisma/prisma/issues/5042
https://github.com/prisma/prisma/issues/7380
https://github.com/prisma/prisma/issues/3796
I've been wondering how to implement this as well, and bafflingly, the issues linked in @Ryan's post are over two years old and still unresolved. I came up with a temporary workaround: a middleware function for the Prisma client that removes the password field manually after each call.
import { PrismaClient } from '@prisma/client'

async function excludePasswordMiddleware(params, next) {
  const result = await next(params)
  if (params?.model === 'User' && params?.args?.select?.password !== true) {
    delete result.password
  }
  return result
}

const prisma = new PrismaClient()
prisma.$use(excludePasswordMiddleware)
This checks whether the model being queried is User, and it will not delete the field if you explicitly include the password via a select. That way you can still get the password when you need it, such as when authenticating a user who is signing in:
async validateUser(email: string, password: string) {
  const user = await this.prisma.user.findUnique({
    where: { email },
    select: {
      emailVerified: true,
      password: true,
    },
  })
  // Continue to validate user, compare passwords, etc.
  return isValid
}
Check out the following code to exclude keys from the user:
function exclude(user, ...keys) {
  for (let key of keys) {
    delete user[key]
  }
  return user
}

async function main() {
  const user = await prisma.user.findUnique({ where: { id: 1 } })
  const userWithoutPassword = exclude(user, 'password')
}
Reference: the official Prisma documentation.

Automatically manipulating argument for Mongoose Document constructor

Let's say I have this model:
const employeeSchema = new Schema({
  name: String,
  age: Number,
  employeeData: {
    department: String,
    position: String,
    lastTraining: Date
  }
});

const Employee = mongoose.model('employee', employeeSchema);
In the database, the only thing that is going to be saved is something that looks like this:
{
  _id: ...,
  name: 'John Smith',
  age: 40,
  employeeCode: '.... '
}
What's going on is that, by some business rules, the employeeData info coming from the request body goes through a function that compiles the employeeCode out of it, and when saving to the database I just use the employeeCode.
Right now, I am implementing this using statics. So, I have the following in the model:
employeeSchema.statics.compileEmployeeCode = (doc) => {
  if (doc.employeeData) {
    doc.employeeCode = compileCode(doc.employeeData);
    delete doc.employeeData;
  }
  return doc;
}
And then, for each call that receives info from the client, I need to remember to call this function before creating the document (an instance of the model):
const compiledDoc = Employee.compileEmployeeCode(req.body);
const employee = new Employee(compiledDoc);
My question is: is there a way to automatically invoke some function that compiles the code out of the data any time I create such a document, so that I won't need to remember to always call the static method beforehand?
Middleware is what you are looking for. You need to create a function that sets a pre-save hook on the schema (triggered every time before a new document is saved) and plug that function into the schema.
function compileEmployeeCode (schema) {
  // a regular function is required here (not an arrow function), so that
  // `this` is bound to the document being saved
  schema.pre('save', function (next) {
    if (this.employeeData) {
      this.employeeCode = compileCode(this.employeeData);
      delete this.employeeData;
    }
    next();
  });
}

employeeSchema.plugin(compileEmployeeCode);
OK. It was really hard, but I finally managed to find the solution. The trick is to use a setter on a specific path. Each field in the schema is of type SchemaType, which can have a setter applied to it:
https://mongoosejs.com/docs/api.html#schematype_SchemaType-set
Anyway, if I want to make it possible for the request to contain an object that will be converted to some other format, say a string, I would need to define the schema like this:
const employeeSchema = new Schema({
  name: String,
  age: Number,
  employeeCode: {
    type: String,
    set: setCodeFromObj,
    alias: 'employeeData'
  }
});
The setter function I'm using here looks something like this (I'm omitting all the error handling and the like to keep this short):
function setCodeFromObj(v) {
  const obj = {};
  obj.department = v.department;
  obj.position = v.position;
  obj.lastTraining = v.lastTraining;
  // breaking the object into properties just to show that v actually includes them
  return compileEmployeeCode(obj);
}
I used an alias to make the name visible to the user different from what is actually saved in the database. I could also have done that using virtuals, or just designed the system a bit differently to use the same name.

NodeJS+TypeScript, mongoose custom interface doesn't extend mongoose.Document properly

I'm relatively new to TypeScript, so I partially followed this guide:
http://brianflove.com/2016/11/11/typescript-2-express-mongoose-mocha-chai/
And I ended up with the following code (only the relevant parts):
import { Document } from "mongoose";
import { IUser } from "../interfaces/user";
export interface IUserModel extends IUser, Document {
// custom methods for your model would be defined here
}
and:
import { IUserModel } from "./models/user";
let connection: mongoose.Connection = mongoose.createConnection(MONGODB_CONNECTION);
this.model.user = connection.model<IUserModel>('User', userSchema);
var newUser: IUserModel = <IUserModel>{username:'asd',password:'bsd',email:'lol',admin:false};
newUser.save();
And according to the editor it should work; however, after compiling, newUser only has the properties I gave it.
My setup is pretty much identical with the one in the tutorial.
Could anyone tell me what am I doing wrong?
Apparently, my entire approach was wrong.
Creating newUser with <IUserModel>{...} simply creates an object that matches the interface's shape and nothing else, so in this case an object holding username, password, etc., and not an actual model instance.
It also has no relation to the connection above it, which is something I completely missed.
So instead I just had to do the following:
var newUser = new this.model.user({ username: 'asd', password: 'bsd', email: 'lol', admin: false });
newUser.save();
This creates the proper model instance I wanted.
