NestJS + TypeORM design patterns: resolver vs service

I've found numerous examples of nest 'example' apps, but each one seems to have slightly different opinions on design patterns.
I'm currently interested in where the object preparation work should go between a resolver and service when coupled with TypeORM.
for example:
comment.resolver.ts:
/********************
 * @MUTATION
 *********************/
/**
 *
 * @param payload
 * @param args
 */
@Mutation('createComment')
async create(@CurrentUser() payload: JwtPayload, @Args('input') args: CreateCommentInputDto): Promise<CommentEntity> {
  const currentUser = await this.userService.getCurrentUser(payload);
  const initComment = new CommentEntity();
  const newComment: CommentEntity = {
    ...args,
    createdBy: currentUser,
    createdDate: new Date(),
    modifiedDate: new Date(),
    ...initComment,
  };
  const createdComment = await this.commentService.create(newComment);
  pubSub.publish('CommentCreated', {
    CommentCreated: createdComment,
  });
  return createdComment;
}
comment.service.ts:
/**
 *
 * @param comment
 */
async create(comment: CommentEntity): Promise<CommentEntity> {
  return await this.CommentsRepository.save(comment);
}
i.e.:
1. Create a new empty comment entity.
2. Add field values that are not supplied by the query.
3. Use the spread operator to combine them all together.
4. Pass the result to the comment service to save via the TypeORM repository.
The reasoning is that the comment service just accepts and saves a well-formatted entity. Maybe in the future I will need to prepare the comment to be created in a different way, and that would then be defined in a new mutation.
Is this an anti-pattern? Should I be moving that object create / combine / formatting into the service, and keep the resolver method as light as possible?
If so, what's the logic behind that?

You should check out the preload method provided by TypeORM's Repository. It lets you batch changes onto an existing entity (or a new one), which should be what you want.
TypeORM is very unopinionated; you are free to choose how you organise your mutations on entities. Still, I think the 'preload' repository pattern is a safe one: you first fetch the database value corresponding to your proposed changes, then batch the changes into the entity and save it afterwards. It should lower your chances of ending up with conflicting or duplicate values on an entity.
You can think of the database as a git repository: fetch first, rebase your local changes on the remote head, then commit and push.
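For illustration, here is a minimal sketch of what the preload flow could look like in a NestJS service; the update method, its signature, the import path for CommentEntity, and the NotFoundException handling are assumptions, not something from the question:
// Hedged sketch only: an update flow built around Repository.preload().
import { Injectable, NotFoundException } from '@nestjs/common';
import { InjectRepository } from '@nestjs/typeorm';
import { Repository } from 'typeorm';
import { CommentEntity } from './comment.entity'; // path assumed

@Injectable()
export class CommentService {
  constructor(
    @InjectRepository(CommentEntity)
    private readonly commentsRepository: Repository<CommentEntity>,
  ) {}

  async update(id: number, changes: Partial<CommentEntity>): Promise<CommentEntity> {
    // preload() fetches the current row by id and merges the supplied changes
    // into it, returning undefined when no entity with that id exists.
    const comment = await this.commentsRepository.preload({ id, ...changes });
    if (!comment) {
      throw new NotFoundException(`Comment ${id} not found`);
    }
    comment.modifiedDate = new Date();
    return this.commentsRepository.save(comment);
  }
}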

How to lock table with pg-promise

I have
db.result('DELETE FROM categories WHERE id = ${id}', category).then(function (data) { ...
and
db.many('SELECT * FROM categories').then(function (data) { ...
Initially the DELETE is called from one API call and then the SELECT from a following API call, but the callbacks for the db requests happen in reverse order, so I get a list of categories that still contains the removed category.
Is there a way to lock the categories table with pg-promise?
If you want the result of the SELECT to always reflect the result of the previous DELETE, then you have two approaches to consider...
The standard approach is to unify the operations into one, so you end up executing all your dependent queries against the same connection:
db.task(function * (t) {
    yield t.none('DELETE FROM categories WHERE id = ${id}', category);
    return yield t.any('SELECT * FROM categories');
})
    .then(data => {
        // data = only the categories that weren't deleted
    });
You can, of course, also use either the standard promise syntax or even ES7 async/await.
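For example, the same task written with async/await (just a sketch of the equivalent call):
// Same dependent queries on one connection, written with async/await:
db.task(async t => {
    await t.none('DELETE FROM categories WHERE id = ${id}', category);
    return t.any('SELECT * FROM categories');
})
    .then(data => {
        // data = only the categories that weren't deleted
    });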
The second approach would be to organize an artificial lock inside your service that would hold off on executing any corresponding SELECT until the DELETE requests are all done.
However, this is a very awkward solution, and it typically points at a flaw in the architecture. Also, as the author of pg-promise, I won't even get into that solution, as it would be well outside of my library anyway.

BreezeJS SaveChanges() security issue

I'm using BreezeJS and have a question regarding how data is saved. Here's my code and comments
[Authorize]
/*
* I want to point out the security hole here. Any Authorized user is able to pass to this method
* a saveBundle which will be saved to the DB. This saveBundle can contain anything, for any user,
* or any table.
*
* This cannot be stopped at the client level as this method can be called from Postman, curl, or whatever.
*
* The only way I can see to subvert this attack would be to examine the saveBundle and verify
* no data is being impacted that is not owned or related directly to the calling user.
*
* Brute force could be applied here because SaveResult contains Errors and impacted Entities.
*
*/
[HttpPost]
public SaveResult SaveChanges(JObject saveBundle)
{
    return _efContext.SaveChanges(saveBundle);
}
To limit a caller's ability to retrieve data, I first extract the user_id from the access_token and include it in a where clause on all my queries, making it practically impossible for a user to retrieve another user's data.
But that would not stop a rogue user who had a valid access_token from calling SaveChanges() in a brute force loop with incremental object ids.
Am I way off on this one? Maybe I'm missing something.
Thanks for any help.
Mike
The JObject saveBundle that the client passes to the SaveChanges method is opaque and hard to use. The Breeze ContextProvider converts that to a map of entities and passes it to the BeforeSaveEntities method. BeforeSaveEntities is a method you would implement on your ContextProvider subclass, or in a delegate that you attach to the ContextProvider, e.g.:
var cp = new MyContextProvider();
cp.BeforeSaveEntitiesDelegate += MySaveValidator;
In your BeforeSaveEntities or delegate method, you would check to see if the entities can be saved by the current user. If you find an entity that shouldn't be saved, you can either remove it from the change set, or throw an error and abort the save:
protected override Dictionary<Type, List<EntityInfo>> BeforeSaveEntities(
    Dictionary<Type, List<EntityInfo>> saveMap)
{
    var user = GetCurrentUser();
    var entityErrors = new List<EFEntityError>();
    foreach (Type type in saveMap.Keys)
    {
        foreach (EntityInfo entityInfo in saveMap[type])
        {
            if (!UserCanSave(entityInfo, user))
            {
                throw new HttpResponseException(new HttpResponseMessage(HttpStatusCode.Forbidden)
                    { ReasonPhrase = "Not authorized to make these changes" });
            }
        }
    }
    return saveMap;
}
You will need to determine whether the user should be allowed to save a particular entity. This could be based on the role of the user and/or some other attribute, e.g. users in the Sales role can only save Client records that belong to their own SalesRegion.

MongoDB update object and remove properties?

I have been searching for hours, but I cannot find anything about this.
Situation:
Backend consisting of NodeJS + Express + Mongoose (+ MongoDB of course).
Frontend retrieves object from the Backend.
Frontend makes some changes (adds/updates/removes some attributes).
Now I use mongoose: PersonModel.findByIdAndUpdate(id, updatedPersonObject);
Result: added properties are added. Updated properties are updated. Removed properties... are still there!
Now I've been searching for an elegant way to solve this, but the best I could come up with is something like:
var properties = Object.keys(PersonModel.schema.paths);
for (var i = 0, len = properties.length; i < len; i++) {
    // explicitly remove values that are not in the update
    var property = properties[i];
    if (typeof(updatedPersonObject[property]) === 'undefined') {
        // Mongoose does not like it if I remove the _id property
        if (property !== '_id') {
            oldPersonDocument[property] = undefined;
        }
    }
}
oldPersonDocument.save(function() {
    PersonModel.findByIdAndUpdate(id, updatedPersonObject);
});
(I did not even include trivial code to fetch the old document).
I have to write this for every Object I want to update. I find it hard to believe that this is the best way to handle this. Any suggestions anyone?
Edit:
Another workaround I found: to unset a value in MongoDB you have to set it to undefined.
If I set this value in the frontend, it is lost in the REST-call. So I set it to null in the frontend, and then in the backend I convert all null-values to undefined.
Still ugly though. There must be a better way.
You could use replaceOne() if you want to know how many documents matched your filter condition and how many were changed (I believe it only changes one document, so this may not be useful to know). Docs: https://mongoosejs.com/docs/api/model.html#model_Model.replaceOne
Or you could use findOneAndReplace if you want to see the document. I don't know if it is the old doc or the new doc that is passed to the callback; the docs say "Finds a matching document, replaces it with the provided doc, and passes the returned doc to the callback", but you could test that on your own. Docs: https://mongoosejs.com/docs/api.html#model_Model.findOneAndReplace
So, instead of:
PersonModel.findByIdAndUpdate(id, updatedPersonObject);, you could do:
PersonModel.replaceOne({ _id: id }, updatedPersonObject);
As long as you have all the properties you want on the object you will use to replace the old doc, you should be good to go.
Also really struggling with this but I don't think your solution is too bad. Our setup is frontend -> update function backend -> sanitize users input -> save in db. For the sanitization part, we use a helper function where we integrate your approach.
private static patchModel(dbDocToUpdate: IModel, dataFromUser: Record<string, any>): IModel {
    const sanitized = {};
    const properties = Object.keys(PersonModel.schema.paths);
    for (const key of properties) {
        if (key in dbDocToUpdate) {
            sanitized[key] = dataFromUser[key];
        }
    }
    Object.assign(dbDocToUpdate, sanitized);
    return dbDocToUpdate;
}
That works smoothly and sets the values to undefined. Hence, they get removed from the document in the db.
The only problem that remains for us is that we wanted to allow partial updates. With that solution that's not possible and you always have to send everything to the backend.
EDIT
Another workaround we found is setting the property to an empty string in the frontend. Mongo then also removes the property in the database.
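For what it's worth, a hedged sketch of a middle ground: build a single $set/$unset update from the schema paths, so missing top-level properties are removed without fetching and re-saving the old document. The helper name is illustrative and it only handles flat (non-nested) paths:
// Sketch: derive $set/$unset from the schema, so one findByIdAndUpdate call
// both writes the supplied fields and removes the ones the frontend dropped.
// Note: only handles flat (non-nested) schema paths.
function buildReplacementUpdate(Model, updatedObject) {
    const $set = {};
    const $unset = {};
    for (const path of Object.keys(Model.schema.paths)) {
        if (path === '_id' || path === '__v') continue;
        if (typeof updatedObject[path] === 'undefined') {
            $unset[path] = '';                      // property was removed on the frontend
        } else {
            $set[path] = updatedObject[path];
        }
    }
    const update = {};
    if (Object.keys($set).length) update.$set = $set;
    if (Object.keys($unset).length) update.$unset = $unset;
    return update;
}

// Usage:
// PersonModel.findByIdAndUpdate(id, buildReplacementUpdate(PersonModel, updatedPersonObject));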

How to read/write a document in parallel execution with mongoDB/mongoose

I'm using MongoDB with NodeJS. Therefore I use mongoose.
I'm developing a multi player real time game. So I receive many requests from many players sometimes at the very same time.
I can simplify it by saying that I have a house collection, that looks like this:
{
    "_id" : 1,
    "items": [item1, item2, item3]
}
I have a static function, called after each request is received:
house.statics.addItem = function(id, item, callback){
    var HouseModel = this;
    HouseModel.findById(id, function(err, house){
        if (err) throw err;
        // make some calculations such as:
        if (house.items.length < 4){
            HouseModel.findByIdAndUpdate(id, {$push: {items: item}}, callback);
        }
    });
}
In this example, I coded so that the house document can never have more than 4 items. But what happens is that when I receive several request at the very same time, this function is executed twice by both requests and since it is asynchronous, they both push a new item to the items field and then my house has 5 items.
Am I doing something wrong? How can I avoid that behavior in the future?
Yes, you need better locking on the houseModel, to indicate that an addItem is in progress.
The problem is that multiple requests can call findById and see the same house.items.length, then each determine based on that (outdated) snapshot that it is ok to add one more item. The nodejs boundary of atomicity is the callback; between an async call and its callback, other requests can run.
One easy fix is to track not just the number of items in the house but the number of intended addItems as well. On entry into addItem, bump the "want to add more" count, and test that.
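A rough sketch of that idea, assuming a pendingItems counter field (with a default of 0) is added to the house schema; the field name and error handling are illustrative only:
// Sketch: reserve a slot atomically before doing the real work, so two
// concurrent addItem calls cannot both pass the length check.
house.statics.addItem = function (id, item, callback) {
    var HouseModel = this;
    HouseModel.findOneAndUpdate(
        { _id: id, pendingItems: { $lt: 4 } },   // only matches while fewer than 4 slots are reserved
        { $inc: { pendingItems: 1 } },           // pendingItems needs a default of 0 in the schema
        { new: true },
        function (err, house) {
            if (err || !house) return callback(err || new Error('house is full'));
            HouseModel.findByIdAndUpdate(id, { $push: { items: item } }, callback);
        }
    );
};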
One possible approach since the release of Mongoose 4.10.8 is writing a plugin which makes save() fail if the document has been modified since you loaded it. A partial example is referenced in #4004:
@vkarpov15 said:
8b4870c should give you the general direction of how one would write a plugin for this
Since Mongoose 4.10.8, plugins now have access to this.$where. For documents which have been loaded from the database (i.e., are not this.isNew), the plugin can add conditions which will be evaluated by MongoDB during the update which can prevent the update from actually happening. Also, if a schema’s saveErrorIfNotFound option is enabled, the save() will return an error instead of succeeding if the document failed to save.
By writing such a plugin and changing some property (such as a version number) on every update to the document, you can implement “optimistic concurrency” (as #4004 is titled). I.e., you can write code that roughly does findOne(), do some modification logic, save(), if (ex) retry(). If all you care about is a document remaining self-consistent and ensuring that Mongoose’s validators run and your document is not highly contentious, this lets you write code that is simple (no need to use something which bypasses Mongoose’s validators like .update()) without sacrificing safety (i.e., you can reject save()s if the document was modified in the meantime and avoid overwriting committed changes).
Sorry, I do not have a code example yet nor do I know if there is a package on npm which implements this pattern as a plugin yet.
I am also building a multiplayer game and ran into the same issue. I believe I have solved it my implementing a queue-like structure:
class NpcSaveQueue {
    constructor() {
        this.queue = new Map();
        this.runQueue();
    }
    addToQueue(unitId, obj) {
        // always key by the string form of the id, so merges work for ObjectIds too
        const key = String(unitId);
        if (!this.queue.has(key)) {
            this.queue.set(key, obj);
        } else {
            this.queue.set(key, {
                ...this.queue.get(key),
                ...obj,
            });
        }
    }
    emptyUnitQueue(unitId) {
        this.queue.delete(unitId);
    }
    async executeUnitQueue(unitId) {
        await NPC.findByIdAndUpdate(unitId, this.queue.get(unitId));
        this.emptyUnitQueue(unitId);
    }
    runQueue() {
        setInterval(() => {
            this.queue.forEach((value, key) => {
                this.executeUnitQueue(key);
            });
        }, 1000);
    }
}
Then when I want to update an NPC, instead of interacting with Mongoose directly, I run:
npcSaveQueue.addToQueue(unit._id, {
    "location.x": newLocation.x,
    "location.y": newLocation.y,
});
That way, every second, the SaveQueue just executes all code for every NPC that requires updating.
This function never executes twice, because the update operation is atomic at the level of a single document.
More info in official manual: http://docs.mongodb.org/manual/core/write-operations-atomicity/#atomicity-and-transactions
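Building on that atomicity, the length check can be folded into the update itself so the read and the push can no longer be interleaved; a minimal sketch (the 'items.3' existence check is an illustration, not from the answer):
// Sketch: fold the "fewer than 4 items" check into the update condition.
house.statics.addItem = function (id, item, callback) {
    // 'items.3': { $exists: false } only matches houses that currently
    // have fewer than 4 items, so a 5th item can never be pushed.
    this.findOneAndUpdate(
        { _id: id, 'items.3': { $exists: false } },
        { $push: { items: item } },
        { new: true },
        callback
    );
};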

Subclass QueryReadStore or ItemFileWriteStore to include write api and server side paging and sorting.

I am using Struts 2 and want to include an editable server side paging and sorting grid.
I need to subclass the QueryReadStore to implement the write and notification APIs. I do not want to include server-side REST services, so I do not want to use the JsonRest store. Any idea how this can be done? What methods do I have to override, and exactly how? I have gone through many examples but am still not getting how this can be done.
Also, is it possible to just extend the ItemFileWriteStore and override its methods to include server-side pagination? If so, which methods do I need to override? Can I get an example of how this can be done?
Answer is ofc yes :)
But do you really need to subclass ItemFileWriteStore, does it not fit your needs? A short explanation of .save() follows.
The client side does modify / new / delete in the store, and in turn those items are marked as dirty. While it has dirty items, the store keeps references to them in a hash, like so:
store._pending = { _deletedItems: [], _modifiedItems: [], _newItems: [] };
On calling save(), each of these should be looped over, sending requests to the server. BUT this does not happen if neither _saveEverything nor _saveCustom is defined: the WriteStore simply resets its client-side revert feature and saves in client memory.
See the source and search for "save: function".
Here is my implementation of a simple write API; it must be modified to be used without its inbuilt validation:
OoCmS._storeAPI
In short, follow this boilerplate, given that you have a CRUD pattern on the server:
new ItemFileWriteStore({
    url: 'path/to/cRud',   // R = the read endpoint the store fetches from
    _saveCustom: function() {
        var item;
        for (var i in this._pending._newItems) if (this._pending._newItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/Crud', contents: { id: i } });   // C = create
        }
        for (i in this._pending._modifiedItems) if (this._pending._modifiedItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/crUd', contents: { id: i } });   // U = update
        }
        for (i in this._pending._deletedItems) if (this._pending._deletedItems.hasOwnProperty(i)) {
            item = this._getItemByIdentity(i);
            dxhr.post({ url: 'path/to/cruD', contents: { id: i } });   // D = delete
        }
    }
});
Now, as for paging: ItemFileWriteStore has pagination built in from its superclass mixins. You just need to call it with one of two setups: either directly on the store, meaning the server should only return a subset, or on a model with query capabilities where the server returns a full set.
var pageSize = 5,        // let's say 5 items per request
    currentPage = 2;     // note, starting on the second page (with one being the offset)
store.fetch({
    onComplete: function(itemsReceived) { },
    query: { foo: 'bar*' },                   // optional filtering, server gets it json urlencoded
    count: pageSize,                          // server gets &count=pageSize
    start: currentPage*pageSize - pageSize    // server gets &start=offsetCalculation
});
quod erat demonstrandum
