I would like to prevent registration with an email address that already exists. Is it possible to use express-validator's new syntax for this? For example:
router.post('/register', [
  check('email').custom((value, { req }) => {
    return new Promise((resolve, reject) => {
      Users.findOne({ email: req.body.email }, function (err, user) {
        if (err) {
          return reject(new Error('Server Error'));
        }
        if (user) {
          return reject(new Error('E-mail already in use'));
        }
        resolve(true);
      });
    });
  })
]
....
How would I pass Users?
express-validator is only aware of the request object itself, which keeps its complexity pretty low for the end user.
More importantly, it only truly knows about the request's input locations -- body, cookies, headers, query and params.
Your custom validator is completely correct. That being said, it might not be testable, as you seem to be depending on global context.
In order to make it testable, the two options I see are:
1. Inject req.Users:
This one would involve using some middleware that sets your store objects onto req:
// Validator definition
const emailValidator = (value, { req }) => {
  return req.Users.findOne({ email: value }).then(...);
};
// In production code
// Sets req.Users, req.ToDo, req.YourOtherBusinessNeed
app.use(myObjectsStore.middleware);
app.post('/users', check('email').custom(emailValidator));
// In tests
req = { Users: MockedUsersObject };
expect(emailValidator('foo@bar.com', { req })).rejects.toThrow('email exists');
2. Write a factory function that returns an instance of your validator:
This is my preferred solution, as it doesn't involve using the request object anymore.
// Validator definition
const createEmailValidator = Users => value => {
  return Users.findOne({ email: value }).then(...);
};
// In production code
app.post('/users', [
  check('email').custom(createEmailValidator(myObjectsStore.Users)),
]);
// Or in tests
expect(createEmailValidator(MockedUsersObject)('foo@bar.com')).rejects.toThrow('email exists');
Hope this helps!
Converting my comments into a final, conclusive answer here:
A validator is simply supposed to validate the fields of request entities against the given criteria of data type / length / pattern.
You would need to write that method yourself, to determine whether the user already exists. express-validator (or rather, any validator) will not do the job of cherry-picking whether an item exists in your list of items (or your data source), nor should it interact with the data source concerned.
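As a hedged illustration of that separation, the duplicate check can live in the route handler rather than in a validator. A minimal sketch, reusing the Users model from the question (the status codes and route shape are assumptions):
router.post('/register', [
  check('email').isEmail() // the validator only checks the format
], (req, res, next) => {
  Users.findOne({ email: req.body.email }, (err, user) => {
    if (err) return next(err); // database failure
    if (user) return res.status(409).send('E-mail already in use');
    // ...create the user here...
    res.status(201).end();
  });
});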
Currently I'm working on a legacy application that uses pug.js as the view engine in a Node.js Express app.
I want to implement a generic way to display feedback messages: I want to be able to display messages (successes, errors) even if the handler replies with a redirect.
This is what I want:
handlePostRequest(req, res) {
  // do stuff with the post request
  doStuff(req.body);
  // This should of course be done somewhere else.
  req.session.successes = req.session.successes || [];
  // save a success message for the user
  req.session.successes.push("Your post has been saved. Thank you!");
  // but reply with a 302
  res.redirect(req.headers.referer);
}

// a GET request; maybe the handler above redirected here
handleGetRequest(req, res) {
  // we do NOT get the successes here, just the 'pure' data
  const renderData = getRenderData();
  res.render('fancy-pug-template', renderData);
}

fancyMiddlewareForMessages(req, res, next) {
  // how to implement getRenderDataByBlackMagic()????
  const renderData = getRenderDataByBlackMagic();
  // set the messages
  renderData.successes = req.session.successes;
  // empty the saved messages
  req.session.successes = [];
  next();
}
Obviously, I do not want to pollute every handler that renders a template with logic that retrieves the messages and adds them to the parameter object. I would like to move this cross-cutting concern into a middleware callback or something like that.
So, the question is: Can this be achieved? How? I'm fairly new to pug.js, maybe I'm overlooking something obvious.
Ok, I found a way. This is what I did:
import { AsyncLocalStorage } from 'async_hooks';
import { Express, NextFunction, Request, Response } from 'express';

const requestStorage = new AsyncLocalStorage<Request>();

function patchRenderFunction(req: Request, res: Response, next: NextFunction) {
  const render = res.render;
  res.render = function (view: string, options?: any, callback?: (err: Error, html: string) => void) {
    options = options || {};
    const messages = new MessageManager(req);
    // merge errorMessages
    options.errorMessages = mergeMessageArrays(options.errorMessages, messages.errors);
    // same for successMessages
    options.successMessages = mergeMessageArrays(options.successMessages, messages.successes);
    render.call(this, view, options, callback);
  };
  requestStorage.run(req, () => {
    next();
  });
}

export function applyAutomaticRenderAttributes(app: Express): void {
  app.use(patchRenderFunction);
}

export function successMessage(message: string, req?: Request) {
  if (!req) {
    req = requestStorage.getStore();
  }
  if (!req) {
    console.error('No request found in async storage. This should not happen. Please report this issue. (successMessage)');
    return;
  }
  new MessageManager(req).addSuccessMessage(message);
}
//export function errorMessage(...) omitted
The MessageManager uses the request's session to store messages. It also does some filtering. I'm using the session because the application runs clustered (thank you, pm2). Since the session is stored in the DB via express-mysql-session, I avoid problems with non-sticky sessions.
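MessageManager itself isn't shown above. A minimal sketch of what it might look like, assuming express-session and typing the custom session fields loosely; only the members used in the code above are taken from the answer, the internals are assumptions:
// Hypothetical sketch of MessageManager, matching the calls above.
class MessageManager {
  constructor(private req: Request) {
    const sess = req.session as any; // custom fields, typed loosely for the sketch
    sess.successMessages = sess.successMessages || [];
    sess.errorMessages = sess.errorMessages || [];
  }

  addSuccessMessage(message: string) {
    (this.req.session as any).successMessages.push(message);
  }

  // Reading drains the stored messages so each one renders only once.
  get successes(): string[] {
    const sess = this.req.session as any;
    const messages = sess.successMessages;
    sess.successMessages = [];
    return messages;
  }

  get errors(): string[] {
    const sess = this.req.session as any;
    const messages = sess.errorMessages;
    sess.errorMessages = [];
    return messages;
  }
}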
So I have a default model set up for viewing my data, and a form for inputting the data. I want to know the best practice for retrieving a single selected item. It's for a MERN stack.
Currently I am using the window hash, adding the id onto the URL and retrieving the record from the database that way. This feels janky, though, and as I try to add update functionality it seems like it might get confusing.
I've thought about adding a currentID to Redux, but I can see problems occurring when that value is persisted: you go to create a recipe after viewing one and end up editing instead of creating.
Retrieving the id from the URL:
const recipeId = window.location.hash.substr(1);
const recipe = useSelector((state) =>
  state.recipes.find((r) => r._id === recipeId)
);
I get my recipes from Mongo:
export const recipeList = async (req, res) => {
  try {
    const recipes = await recipeSheet.find();
    res.status(200).json(recipes);
  } catch (error) {
    res.status(404).json({ message: error.message });
  }
};
and store them in Redux:
export const getRecipes = () => async (dispatch) => {
  try {
    const { data } = await api.fetchRecipes();
    dispatch({ type: "FETCH_ALL_RECIPES", payload: data });
  } catch (error) {
    console.log(error.message);
  }
};
It depends on how large your data is. It'd be better to define a new GET path to retrieve a single record, like BASE_URL/api/recipes/123, or you could make the current endpoint accept a query parameter to find a specific id in the DB and return it, like BASE_URL/api/recipes?id=123. The reason is that, besides the optimization (for large data sets), the record may change after you store all records in the Redux store, and with your current solution you would show stale data to the user. Best practice says to choose the first option; the second is usually for filtering data. Then, when the user navigates to the new URL, trigger a new API call to the new endpoint and fetch the single record.
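A minimal sketch of that first option, reusing the recipeSheet model from the question (the route path, handler name, and status codes are assumptions):
// Hypothetical single-record endpoint: GET /api/recipes/:id
export const getRecipe = async (req, res) => {
  try {
    const recipe = await recipeSheet.findById(req.params.id);
    if (!recipe) {
      return res.status(404).json({ message: "Recipe not found" });
    }
    res.status(200).json(recipe);
  } catch (error) {
    res.status(500).json({ message: error.message });
  }
};

// Wired up with something like: router.get('/api/recipes/:id', getRecipe);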
I am working on a project that requires the use of a few RabbitMQ queues. One of the queues requires that messages are delayed for processing at a time in the future. I noticed in the RabbitMQ documentation that there is a new plugin called RabbitMQ Delayed Message Plugin that seems to allow this functionality. In the past, for RabbitMQ-based projects, I used seneca-amqp-transport for adding and processing messages. The issue is that I have not seen any documentation for seneca, or been able to find any examples, outlining how to add header properties.
It seems as if I need to initially make sure the queue is created with x-delayed-type. Additionally, as each message is added to the queue, I need to make sure the x-delay header parameter is added to the message before it is sent to RabbitMQ. Is there a way to pass this parameter, x-delay, with seneca-amqp-transport?
Here is my current code for adding a message to the queue:
return new Promise((resolve, reject) => {
  const client = require('seneca')()
    .use('seneca-amqp-transport')
    .client({
      type: 'amqp',
      pin: 'action:perform_time_consuming_act',
      url: process.env.AMQP_SEND_URL
    }).ready(() => {
      client.act('action:perform_time_consuming_act', {
        message: { data: 'this is a test' }
      }, (err, res) => {
        if (err) {
          return reject(err);
        }
        resolve(true);
      });
    });
});
In the code above, where would header-related data go?
I just looked up the code of the library, and under lib/client/publisher.js this should do the trick:
function publish(message, exchange, rk, options) {
  const opts = Object.assign({}, options, {
    replyTo: replyQueue,
    contentType: JSON_CONTENT_TYPE,
    // the delayed-message plugin reads x-delay from the message headers
    headers: { 'x-delay': 5000 },
    correlationId: correlationId
  });
  return ch.publish(exchange, rk, Buffer.from(message), opts);
}
Give it a try; it should work. Here the delay value is set to 5000 milliseconds. You could also overload the publish method to take the delay value as a parameter.
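Note that, as the question says, the exchange also has to be declared with the delayed-message type for the plugin to honour the header. A hedged amqplib sketch (the exchange name and routing type are assumptions):
// Assumed setup with amqplib; 'jobs-exchange' is a placeholder name.
ch.assertExchange('jobs-exchange', 'x-delayed-message', {
  durable: true,
  arguments: { 'x-delayed-type': 'direct' } // how messages route after the delay
});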
What's the best practice for sharing DB query code between multiple Node.js Express controller methods? I’ve searched but the samples I’ve found don’t really get into this.
For example, I have this getUser method (using Knex for MySQL) that makes a call to get user info. I want to use it in other methods but I don't need all the surrounding stuff like the response object.
export let getUser = (req: Request, res: Response, next: NextFunction) => {
  try {
    knex.select().where('email', req.params.email)
      .table('users')
      .then((dbResults) => {
        const results: IUser = dbResults[0];
        res
          .status(200)
          .set({ 'Content-Type': 'application/json', 'Connection': 'close' })
          .send(results);
      });
  } catch (err) {
    res.send({ error: "Error getting person " + req.params.email });
    return next(err);
  }
};
It seems wrong to repeat the query code somewhere else where I need to get the user. Should I turn my DB query code into async functions like this example and then call them from within the controller methods that use the query? Is there a simpler way?
/**
 * @param {string} email
 */
async function getUserId(email: string) {
  try {
    return await knex.select('id')
      .where('email', email)
      .table('users');
  } catch (err) {
    return err;
  }
}
You can, for example, create "service" modules which contain helpers for certain types of queries. Or you could use an ORM and implement special queries in each model, the so-called "fat model" design. Pretty much anything goes, as long as you remember not to create a new knex instance in every helper module: pass knex (which contains its connection pool) to the helper methods, so that all queries share the same connection pool. A sketch of the service-module approach follows below.
ORMs like objection.js also provide a way to extend the query builder API, so you can inherit a custom query builder with any special query helper you need.
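A minimal sketch of such a service module, reusing the users table from the question (the file and function names are placeholders):
// userService.js -- hypothetical service module. knex is passed in
// rather than instantiated here, so every helper shares the same
// connection pool.
export const makeUserService = (knex) => ({
  getByEmail: (email) =>
    knex.select().from('users').where('email', email).first(),

  getUserId: (email) =>
    knex.select('id').from('users').where('email', email).first(),
});

// In a controller:
// const users = makeUserService(knex);
// const user = await users.getByEmail(req.params.email);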
I wonder how I should organize my routing in Express:
Params parsing in Controller
router.get('/users/:id', UserController.get);

class UserController {
  get(req, res) {
    var id = req.params.id;
    UserModel.get(id, function(user) {
      res.send(user);
    });
  }
}
Params parsing in Route
router.get('/users/:id', function(req, res) {
  var id = req.params.id;
  UserController.get(id, function(user) {
    res.json(user);
  });
});

class UserController {
  get(id, fn) {
    UserModel.get(id, fn);
  }
}
I find the second version (params parsing in the route) easier for:
unit testing
handling changes in the URL params or the request body
But most of the examples I've found use the first version. Why?
If you consider a much larger, messier real-world application, with route names that no longer match controller names and so on, it might be beneficial to place the full routing table (all of the router.xxx calls) in one place, such as a routes.js. For a given URL, this makes it much simpler for a new developer to figure out which code handles which URL.
If you included all of the parameter parsing in your routes.js, it would become really messy and you'd likely lose some of the benefit of having collected all that into one file in the first place.
That said, there's no reason why you can't have the best of both worlds by separating the routing, the parameter parsing/response formatting, and the controller logic into their own modules, as sketched below.
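A hedged sketch of that three-way split, reusing the names from the question (the file layout and the promise-based model are assumptions):
// routes.js -- the full routing table lives in one place
router.get('/users/:id', UserHandlers.get);

// handlers.js -- parameter parsing and response formatting only
const UserHandlers = {
  get(req, res, next) {
    UserController.get(req.params.id)
      .then((user) => res.json(user))
      .catch(next);
  }
};

// controller.js -- business logic, unit-testable without req/res
const UserController = {
  get(id) {
    return UserModel.get(id); // assumes UserModel.get returns a promise
  }
};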