Stop executing sequelize default Delete on moleculer service restart - node.js

I am using Moleculer microservices with a Postgres database, via the 'moleculer-db-adapter-sequelize' and 'Sequelize' modules. Every time I save any code, the Moleculer service restarts, and at that moment Sequelize runs a DELETE query. How can I stop it from running that DELETE query?

As I can't add a comment, I'm asking this in the answer section.
We will need a bit more information; could you provide a sample of your code?
I suspect that you have a DELETE query in some part of your service lifecycle events.

Related

Mongo watch change stream suddenly stopped working

I'm using mongo watch() to subscribe to change stream events. I've noticed that the change stream automatically stops emitting events without throwing any specific error and becomes idle. I then have to restart the server to listen to the change stream again.
I'm not able to find the specific reason for this strange behavior.
We are using a Node.js server, with mongoose for the db connection and for watch.
If any of you have faced the same issue, please guide me. We have a cluster with 1 primary node and 2 secondary nodes, hosted on MongoDB Atlas.
The collection.watch(...) method has to be called on the collection on every server restart. A common mistake is to call it once, upon the creation of the collection. However, the database does not maintain a reference to the result of this call as it does for other calls such as collection.createIndexes(...).
Change streams only notify on data changes that have persisted to a majority of data-bearing members in the replica set. This ensures that notifications are triggered only by majority-committed changes that are durable in failure scenarios.
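A minimal sketch of re-opening the stream on every process start and again after errors, using the resume token that each change event carries in `_id`. The helper name `watchCollection` and the retry delay are my own; `collection` is assumed to be a driver collection whose `watch()` returns an event-emitting change stream:

```javascript
// Re-establish the change stream on startup, and re-open it with a
// resume token if it errors out or goes idle. `watchCollection` and
// the 1-second retry delay are assumptions, not driver API.
function watchCollection(collection, onChange, retryMs = 1000) {
  let resumeToken = null;

  function open() {
    const stream = resumeToken
      ? collection.watch([], { resumeAfter: resumeToken })
      : collection.watch();

    stream.on("change", (event) => {
      resumeToken = event._id; // remember where we left off
      onChange(event);
    });

    stream.on("error", () => {
      // Connection dropped or stream died: re-subscribe after a pause.
      setTimeout(open, retryMs);
    });
  }

  open();
}

module.exports = { watchCollection };
```

Calling this once from your server's startup code (rather than at collection-creation time) addresses the mistake described above.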

Where do I save functions that are called with a setTimeout?

I'm learning Node.js and am trying to stick with the MVC architecture. I'm getting stuck on where to place the functions that update data from an outside source on a set loop, with a delay of 30 seconds or so.
Example: I build an app that takes data from an API (Orders, in this case) and stores it in a database. I can add orders to my database locally, and I want the orders database to be synchronized with the outside source mentioned previously every 30 seconds.
My models directory will contain Order.js which includes an order schema and it will connect to MongoDB via Mongoose. My controller will have API endpoints for CRUD operations.
Where does the function go that refreshes the data from the server? In the controller? Then I would export that function so that I can set up the loop that updates the database in my app.js (or whatever I use to start the application)?
I recommend using something like node-cron to handle the setTimeout for you. It gives you the advantage of cron-like syntax to run your jobs on a schedule, and it runs for as long as your node app does. I would put these jobs in a separate directory of node-cron jobs; each individual job can then import your MongoDB model. Your main application can then import an index.js (or similar) from the cron-jobs directory, which imports all your node-cron jobs to bootstrap them on application startup.
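As a sketch of that layout: one job per file under a jobs directory, each exporting the job itself plus a start() that the bootstrap calls. The file name, function names, and the interval are assumptions; plain setInterval stands in here for node-cron's `cron.schedule("*/30 * * * * *", ...)` so the sketch runs without npm packages:

```javascript
// jobs/syncOrders.js -- hypothetical job module, one job per file.
// The real version would import the Order mongoose model and upsert
// the orders fetched from the external API; it is stubbed out here.
async function syncOrders() {
  return { synced: true };
}

// Called from jobs/index.js on application startup. With node-cron
// this would instead be: cron.schedule("*/30 * * * * *", syncOrders)
function start(intervalMs = 30_000) {
  return setInterval(syncOrders, intervalMs); // caller can clearInterval()
}

module.exports = { syncOrders, start };
```

The controller stays focused on CRUD endpoints; the refresh loop lives with the other scheduled jobs and only shares the model.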

How to update shared MongoDB instance safely?

I'm trying to work out how I can establish what I think needs to be a write-lock on a Mongo database during application startup.
I've got a setup whereby we have several (in this diagram, just 2) Node APIs that establish a connection to a Mongo replica set. Upon startup, I want to be able to run some scripts against Mongo if it's on an old schema version. For example:
MongoDB: v1 schema
Node App: expecting v3 schema
So during startup the Node App will run v2Upgrade.js and v3Upgrade.js, or similar. However, I want to ensure that only 1 Node app can run this at any one time. So, 2 questions:
Am I thinking about this in the right way?
How would I best create some sort of lock, so only 1 process runs these updates before the database is "ready"?
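For the second question, one common approach is to let MongoDB's own uniqueness guarantee arbitrate: whichever instance manages to insert a lock document with a fixed `_id` first runs the upgrade scripts, and the others back off. A sketch, where the collection name, field names, and helper name are all assumptions:

```javascript
// Whichever app instance inserts the lock document first wins: _id is
// always unique, so exactly one insertOne succeeds and the rest get a
// duplicate-key error (code 11000).
async function acquireMigrationLock(locks, owner) {
  try {
    await locks.insertOne({ _id: "schema-migration", owner, at: new Date() });
    return true; // this process runs v2Upgrade.js, v3Upgrade.js, ...
  } catch (err) {
    if (err && err.code === 11000) return false; // another instance holds it
    throw err;
  }
}

module.exports = { acquireMigrationLock };
```

The losing instances can then poll until the lock document is removed (or a done flag is set on it) before they start serving traffic.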

Which libraries for Node.js provide persistent scheduling and cron jobs?

From what I have read, only Agenda, node-crontab and schedule-drone provide this feature. I would be grateful if you could provide a small description of the mechanism these libraries use for persistent storage of jobs.
I need to send emails by reading the mail options from MongoDB, and I want my Node.js application to schedule these and somehow stay in sync with them even if Node.js is stopped temporarily.
For MySQL, you can try nodejs-persistable-scheduler.
In other cases you need to build your own solution. For example, I created a collection/table to store the schedule state and rules. Then, if the service crashes or is restarted, I can get all the schedules from the database and restart them again from the app.listen event.
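A sketch of that restart step, assuming each stored row carries an id and a runAt timestamp (the field and helper names are my own). Jobs whose time passed while the service was down get a delay of 0 and run immediately:

```javascript
// How long to wait before a stored job should run; jobs whose runAt
// is already in the past get 0 (run immediately on recovery).
function computeDelay(runAt, now = Date.now()) {
  return Math.max(0, new Date(runAt).getTime() - now);
}

// Called once on startup (e.g. from the app.listen callback) with all
// schedule rows loaded from the database; returns the timer handles.
function rehydrateSchedules(rows, runJob, now = Date.now()) {
  return rows.map((row) =>
    setTimeout(() => runJob(row), computeDelay(row.runAt, now))
  );
}

module.exports = { computeDelay, rehydrateSchedules };
```

A job should mark itself done (or reschedule itself) in the same table when it fires, so a crash between firing and completion is visible on the next restart.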

Node.js (& MongoDB) server crashes, database operations halfway?

I have a node.js app with a mongodb backend going to production in a week, and I have a few doubts about how to handle app crashes and restarts.
Say I have a simple route /followUser in which I have 2 database operations:
/followUser
----->Update User1 Document.followers = User2
----->Update User2 Document.followers = User1
----->Some other mongodb(via mongoose)operation
What happens if there is a server crash (due to power failure, or maybe the remote mongodb server being down), as in this scenario:
----->Update User1 Document.followers = User2
SERVER CRASHED , FOREVER RESTARTS NODE
What happens to these operations below? The system is now in an inconsistent state, and I may get an error every time I ask for User2's followers:
----->Update User2 Document.followers = User1
----->Some other mongodb(via mongoose)operation
Also, please recommend good logging and restart/monitor modules for apps running on Linux.
Right now I'm using domains to catch exceptions and doing server.close, but before process.exit() I want to make sure all database transactions are done. Can I check this by testing whether the event loop is empty or not (how?) and then call process.exit(1)?
You need transactions for this, and since MongoDB doesn't have them, here is a workaround: http://docs.mongodb.org/manual/tutorial/perform-two-phase-commits/
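The heart of that tutorial is a transaction document that moves through explicit states, so a restarted app can query for anything left in "pending" and finish or roll it back. A minimal sketch of the forward state progression (the helper name is mine, and the cancel path is omitted):

```javascript
// State sequence from the MongoDB two-phase-commit tutorial. A crash
// leaves the transaction document in whatever state it had reached,
// which is what makes recovery on restart possible.
const STATES = ["initial", "pending", "applied", "done"];

function advance(txn) {
  const i = STATES.indexOf(txn.state);
  if (i === -1 || i === STATES.length - 1) return txn; // unknown or finished
  return { ...txn, state: STATES[i + 1] };
}

module.exports = { STATES, advance };
```

On startup, the recovery pass would find documents stuck in "pending" or "applied" and drive them forward to "done".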
One way to address this problem is to add cleanup code to your application that runs whenever the application starts. You write the cleanup code to perform sanity checks on any portions of your data that can be updated in multiple steps (like your example) and then repair that data in whatever way makes sense for your application.
Depending on the complexity of your application/data, this may also require that you keep a log of actions the app was trying to perform, but that gets complicated real fast. Ideally it's more a matter of refreshing denormalized data and deleting partial data.
You want to do this during startup rather than shutdown as there's no guarantee your shutdown code will fully run and if you're shutting down because of an exception you don't know what the state of your system is at that point.
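For the /followUser example, such a startup sanity check could look something like this sketch, where the { id, followers: [] } document shape and function name are assumptions: if only one side of the relationship was written before the crash, complete the other side.

```javascript
// Repair a half-applied follow: if exactly one of the two updates
// from /followUser landed before the crash, apply the missing one.
// The { id, followers: [] } document shape is an assumption.
function repairFollowers(user1, user2) {
  const oneHasTwo = user1.followers.includes(user2.id);
  const twoHasOne = user2.followers.includes(user1.id);
  if (oneHasTwo && !twoHasOne) user2.followers.push(user1.id);
  if (twoHasOne && !oneHasTwo) user1.followers.push(user2.id);
  return [user1, user2];
}

module.exports = { repairFollowers };
```

In a real app the repaired documents would then be saved back through mongoose; the point is that the check runs at startup, when the process state is known-good.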
The solution given by vkurchatkin in this link is a workaround in case your app server crashes, because you will be able to know which transactions were pending at that moment. If you implement this in your code, you can add cleanup code that runs when your system restarts, as suggested by JohnnyHK. The code you mention (catching exceptions, testing when closing, etc.) will not work because... well... your server crashed! ;-)
That said, this is done using the database, so you will have to guarantee, to a certain point, that your database does not crash. I would suggest you use replication for that. It is basically a cluster of servers that recovers itself if one node fails, and you can also run some checks to make sure that the data reached the servers and is safe.
Hope this helps.
