How to prevent duplicate requests in NodeJS without using any cache?

Scenario:
The user sends two simultaneous requests to create a resource, and we check whether the resource already exists in the database. How can duplicate requests be prevented in NodeJS without using any cache?
I have an idea to make two database lookups one after another, so that the second call returns the correct state of the resource. I don't think this solution will work, because the order of requests/responses to the database matters. The second problem is the redundant call (the second lookup) to the database. The other approach we can think about is maintaining versions to use during updates.
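To make that second idea concrete, here is a rough sketch of what I mean by version-based updates (optimistic concurrency); db.query, the resources table, and its version column are placeholders for illustration, not my real schema:

type QueryResult = { rowCount: number };
type Db = { query: (sql: string, params: unknown[]) => Promise<QueryResult> };

// Apply the update only if the row still has the version we read earlier.
// If two simultaneous requests race, only one UPDATE matches; the other
// caller sees rowCount === 0 and can reject or retry instead of duplicating work.
async function updateResource(db: Db, id: string, data: string, expectedVersion: number): Promise<boolean> {
  const result = await db.query(
    'UPDATE resources SET data = $1, version = version + 1 WHERE id = $2 AND version = $3',
    [data, id, expectedVersion]
  );
  return result.rowCount === 1;
}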

Related

How to block duplicate orders?

I am using Shopware 6.4.17 and I have a problem with duplicated orders. The order process is a little long, so it is possible to send the request once more and place a new order from the same cart. Is there any way to prevent the same request from being sent again?
Does anyone have the same problem or know the solution?
I am using Varnish, so any solution based on keys generated in a template is impossible to implement.
I'm assuming you are using multiple app servers. With that in mind, this might be related to missing session locking, which allows multiple requests from the same user to reach the same OrderController endpoint. With session locking, only the first request is allowed to proceed to the controller. Any other request waits for the first request to finish, thereby preventing duplicate orders.
PHP uses session locking by default when you use the default session handlers. Internally it's just a flock that is used for file-based sessions.
Also see https://symfony.com/doc/current/session.html for more information.
For Redis this php.ini config should work:
session.save_handler = redis
session.save_path = "tcp://redis:6379"
redis.session.locking_enabled = 1

What is the best way to combine a GET and POST method in a middleware api?

I have to create a middleware API with functionality to check whether a key is present in my database. If the key exists, the API should simply fetch it (GET method). If not, the API should create the key and its value in the database and return them (POST method). Since we have two fundamentally different methods being combined in this API, is it correct to do so? What would be the best way to design such an API?
Don't combine them.
Return zero results from your GET method if the record doesn't exist. Then, in the client, if you receive zero results, POST the needed information to another API endpoint.
Combining the two ideas into one will create a hard-to-understand system. Your system should be deterministic, i.e. you should always know the result of a call before you make it.
One way to look at your API is to forget about the underlying database, but think about how an API client uses it.
If an API client does a GET request, one of two things happens:
1. The existing record is returned.
2. A new record is created and returned.
A client might not actually care whether 1 or 2 happened. From the perspective of the client, it might look like the resource always existed (even if it was technically just created).
So as long as there's no extra information that must be sent along with a POST request, it might be fine to use a GET request for both cases.
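A minimal sketch of that get-or-create GET, assuming an Express app; the in-memory store and the /keys/:key route are placeholders for illustration only:

import express from 'express';

type KeyRecord = { key: string; value: string };

// Hypothetical in-memory store standing in for the real database.
const store = new Map<string, KeyRecord>();

const app = express();

app.get('/keys/:key', (req, res) => {
  const existing = store.get(req.params.key);
  if (existing) {
    return res.json(existing); // case 1: the existing record is returned
  }
  const created: KeyRecord = { key: req.params.key, value: 'default' };
  store.set(created.key, created);
  return res.json(created); // case 2: created on the fly; the client cannot tell the difference
});

app.listen(3000);

As noted above, this only works when nothing beyond the key itself is needed to create the record.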
I don't know about your specific situation; typically it is best to keep your GET and POST separated. Though, if your client thinks it needs to create a record and then POSTs the data, I don't see a problem with returning the existing resource along with a 409. Here is a similar question: HTTP response code for POST when resource already exists.
Then the client can handle the 409 differently or the same as a 200 depending on your needs.
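For example, a client could treat the 409 the same as a success, since both mean the resource now exists (the /keys endpoint below is just a placeholder):

// POST the record; if it already exists the server answers 409 and (per the
// design above) returns the existing resource, so the caller can use it either way.
async function ensureKey(key: string, value: string): Promise<unknown> {
  const response = await fetch('/keys', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ key, value }),
  });
  if (response.ok || response.status === 409) {
    return response.json();
  }
  throw new Error(`Unexpected status ${response.status}`);
}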

Multiple pouchdbs vs single pouchdb

I created CouchDB with multiple databases for use in my Ionic 3 app. Upon integrating it with PouchDB for client-side syncing, I created a separate PouchDB for each of the databases, five PouchDBs in total. My question:
Is it a good idea to store multiple PouchDBs on the client side, given the number of HTTP connections that would be created by syncing them? Or should I put all CouchDB databases into one database and use type fields to separate the docs? Then only one PouchDB would need to be created and synced on the client.
Also, with the pouchdb-authentication plugin, authentication data is valid only for the database on which the signup/login methods were called. Accessing other databases returns unauthenticated.
I would say that if your PouchDBs are syncing in realtime, it should be less expensive to reduce them to one and distinguish records by type.
It should not be that costly, and still very convenient, to set up a changes feed per ItemStore (e.g. TodoStore, CommentStore, etc.) with a corresponding filter function that passes only docs of the matching type into the store they belong to. It can also be achieved by filtering on the basis of design docs (I'm not sure that saves anything, at least in the browser).
A single changes feed distributing docs to the stores would probably be the cheapest solution, but I suppose the filter function can't be changed after the changes feed has been established, so it must know about all the stores (i.e. doc types) beforehand.
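A rough sketch of the single-database variant with one filtered changes feed per store (the store names and the type field are assumptions):

import PouchDB from 'pouchdb';

const local = new PouchDB('app');
const remote = new PouchDB('http://localhost:5984/app');

// One live sync for the whole database instead of one per store.
local.sync(remote, { live: true, retry: true });

// One filtered changes feed per store; each feed only sees docs of its own type.
function feedStore(type: string, onDoc: (doc: unknown) => void) {
  return local
    .changes({
      since: 'now',
      live: true,
      include_docs: true,
      filter: (doc: { type?: string }) => doc.type === type,
    })
    .on('change', (change) => onDoc(change.doc));
}

feedStore('todo', (doc) => console.log('TodoStore received', doc));
feedStore('comment', (doc) => console.log('CommentStore received', doc));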

Generate document ID server side

When creating a document and letting Couch create the ID for you, does it check if the ID already exists, or could I still produce a conflict?
I need to generate UUIDs in my app, and wondered if it would be any different than letting Couch do it.
Use a POST /db request for that, but you should be aware of the fact that the underlying HTTP POST method is not idempotent, and a client may automatically retry it due to networking problems, which may create multiple documents in the database.
As Kxepal already mentioned, it is generally not recommended to POST a document without providing your own _id.
You could, however, use GET /_uuids to retrieve a list of UUIDs from the server and use that for storing your documents. The UUIDs returned will depend on the algorithm that is used, but the chance of a duplicate is (for most purposes) insignificantly small.
You can and should give a document id, even when using the bulk document interface. Skipping that step makes the problem of resubmitted requests creating duplicate documents even worse. On the other hand, if you do assign IDs, and part of the request reaches CouchDB twice (as in the case of a reconnecting proxy), then your response will include some conflicts, which you can safely ignore since you know the conflict came from you in the same request.
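A small sketch of that flow against the plain CouchDB HTTP API (the host and database name are placeholders): fetch a UUID up front, then PUT to an explicit id so a resubmitted request produces a harmless 409 instead of a second document.

const couch = 'http://localhost:5984';

async function createDocWithServerUuid(doc: object): Promise<void> {
  // Ask the server for a fresh UUID instead of POSTing without an _id.
  const uuidRes = await fetch(`${couch}/_uuids?count=1`);
  const { uuids } = (await uuidRes.json()) as { uuids: string[] };

  // PUT to an explicit id; if this exact request is retried, the second attempt
  // fails with 409 Conflict, which we can safely ignore because we caused it.
  const putRes = await fetch(`${couch}/mydb/${uuids[0]}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(doc),
  });
  if (!putRes.ok && putRes.status !== 409) {
    throw new Error(`Unexpected status ${putRes.status}`);
  }
}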

Should Domain Entities always be loaded in their entirety?

I have a custom ASP.NET Membership Provider that I am trying to add password history functionality to. User's passwords expire after X days. Then they have to change their password to one that has not been used in their past X changes.
I already had the User entity, which has a password attribute for their current password. This maps to the User table in the db. Since I needed a list of previous passwords I created a UserPassword table to store this information with a FK reference to the UserId.
Since passwords are value objects and have no meaning outside of the user, they belong inside the User aggregate, with the User as the root. But herein lies my dilemma: when I retrieve a User from the repository, do I always have to get all of their previously used passwords? 99% of the time I don't care about their old passwords, so retrieving them every time I need a User entity seems like a dumb thing to do for db performance. I can't use lazy loading because the User entity is disconnected from the context.
I was thinking of creating a PasswordHistory entity but for the reason stated above, passwords aren't really entities.
How would you DDD experts out there handle this situation?
Thanks.
Edit 1: After considering this some more, I realized this is essentially a question about Lazy Loading. More specifically, how do you handle lazy-loading in a disconnected entity?
Edit 2: I am using LINQ to SQL. The entities are completely detached from the context using this from CodePlex.
It is hard to fully answer this question because you do not specify a platform, so I cannot be exactly sure what you even mean by "disconnected". With Hibernate, "disconnected" means you have an object in a valid session but the database connection is not currently open. That is trivial: you simply reconnect and lazy load. The more complicated situation is where you have an object which is "detached", i.e. no longer associated with an active session at all, and in that case you cannot simply reconnect; you have to either get a new object or attach the one you have to an active session.
Either way, even in the more complicated scenarios, there is still not a whole lot to lazy loading strategies because the requirements are so inflexible: You have to be "connected" to load anything, lazy or otherwise. Period. I will assume "disconnected" means the same thing as detached. Your strategy comes down to two basic scenarios: is this a situation where you probably need to just reconnect/attach on the fly to lazy load, or is it a scenario where you want to make a decision to sometimes conditionally load additional objects before you disconnect in the first place?
Sometimes you may in fact need to code for both possibilities.
In your case you also have to be connected not only to lazy load the old passwords but to update the User object in the first place. Also, since this is ASP.NET, you might be using session-per-request, in which case your options are basically down to one: conditionally load before you disconnect, and that is about it.
The most common scenario would be that a person logs in, the system determines they are required to change their password, and asks them to do so before proceeding. In that case you might as well just take care of it immediately after login and keep the User connected. But you are probably using session-per-request, so what you could do is process the time limit in the first request and, if it has expired, return a fully loaded User while you are still connected (assuming you are using the historic passwords in some kind of client-side script validation). Then on the submit trip you could reattach, or just get a new User instance and update that.
Then there is always the possibility that you also have to provide them with the option to change their password at any time. They are already logged in. It does not matter much here: you have a User, but the request ended long ago and it does not have passwords loaded. Here, I would probably just write a service method where, when they invoke a change-password function, the service gets a second copy of the User object with the full history for update purposes only, then updates the password, and then discards that object without ever using it for session or authentication purposes. Or, if you are using session-per-request, you have to do the equivalent: get a fully initialized object for client-side validation purposes, then when the data is submitted either reattach the one you already have or just get yet a third instance to actually do the update.
If the password is needed after beginning an authenticated session, you could still do the same things and either replace the local User or update the local User's in memory password version as well.
If you have too much stuff going on with multiple levels of authentication most likely you are going to have to require them to logoff and do a full log back in after a password change anyway, so the state of the User does not matter much once they request a password change.
In any case, if you are using session-per-request and your objects become fully detached after every request, in the first scenario you can still lazy load while you are on the server during the original request, to return data for client-side validation. In the second scenario you have to make another trip (there really is no such thing as lazy loading here). In both cases, though, you have to weigh your two update options, because you are always disconnected before an update: you can either get a second instance from the database on the submit trip to update, or you can reattach the one you already have. It depends on what is optimal/easiest: does saving a db round trip for an uncommon event really matter? Does reattaching using your ORM of choice possibly hit the database again anyway? I would probably not bother to reattach and instead just get a new instance for the actual update as I needed it.
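As a rough sketch of that last option (written in TypeScript only for brevity, since the question is about ASP.NET; the repository and type names are assumptions), the change-password service is the one place that asks for the fully loaded aggregate:

interface UserWithHistory {
  id: string;
  passwordHash: string;
  previousHashes: string[]; // loaded only for this operation
}

interface UserRepository {
  getWithPasswordHistory(id: string): Promise<UserWithHistory>;
  save(user: UserWithHistory): Promise<void>;
}

// Fetch a second, fully loaded copy of the user purely for this update; the
// lightweight User held by the session is never touched. Comparing hashes
// directly assumes they are deterministically comparable (no per-password salt).
async function changePassword(repo: UserRepository, userId: string, newHash: string, historyDepth: number): Promise<void> {
  const user = await repo.getWithPasswordHistory(userId);
  const recent = [user.passwordHash, ...user.previousHashes].slice(0, historyDepth);
  if (recent.includes(newHash)) {
    throw new Error('Password was used recently; choose a different one.');
  }
  user.previousHashes = recent;
  user.passwordHash = newHash;
  await repo.save(user);
}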
