I have two different Node projects that access the same database with Sequelize.
One of the apps (a kind of back office) updates some tables, and the other one uses the data in those tables to perform some operations.
The thing is that this data should not change constantly, and the second app needs to be as fast as possible. That's why the second app queries the tables once (when the app starts) and then keeps the data in memory, so it can do its operations faster (there is no I/O to the database).
My problem is that sometimes this data may change through the first app, and as these two apps have no contact with each other (for security reasons), the only way I see is to have some "dirty" flag in some table of the database, make the first app change it after an update, and have the second app query it every X seconds to check whether the flag has changed.
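Concretely, the flag/poll scheme I have in mind would look something like this (just a sketch; the cache_state table, the connection string and reloadInMemoryData() are made up for illustration):

const { Sequelize, DataTypes } = require('sequelize');

// Hypothetical single-row table { id, version }; the back-office app
// increments `version` in the same transaction as its updates.
const sequelize = new Sequelize('postgres://user:pass@localhost/mydb');
const CacheState = sequelize.define('cache_state', {
  version: { type: DataTypes.INTEGER, allowNull: false }
}, { tableName: 'cache_state', timestamps: false });

let lastVersion = null;

async function pollForChanges() {
  const row = await CacheState.findByPk(1);
  if (row && row.version !== lastVersion) {
    lastVersion = row.version;
    await reloadInMemoryData(); // the existing "query once and cache" routine
  }
}

setInterval(pollForChanges, 5000); // "each X seconds"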
I don't like this approach and that's why I'm posting this question:
Does Sequelize provide a better or fancier way to do this, like some kind of "changes/dirty" watcher?
Thanks in advance
Let's say, hypothetically, I am working on a website which provides live score updates for sporting fixtures.
A script checks an external API for updates every few seconds. If there is a new update, the information is saved to a database, and then pushed out to the user.
When a new user accesses the website, a script queries the database and populates the page with all the information ingested so far.
I am using socket.io to push live updates. However, when someone accesses the page for the first time, I have a couple of options:
I could use the existing socket.io infrastructure to populate the page (sketched after this list)
I could request the information when routing the user, pass it into res.render() as an argument and render the data using, for example, Pug.
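For reference, the first option would look roughly like this (a sketch only; latestScores, the event names and the port are placeholders):

const http = require('http');
const httpServer = http.createServer();
const io = require('socket.io')(httpServer);

let latestScores = {}; // kept up to date by the script polling the external API

io.on('connection', (socket) => {
  // Push the full current state to the newly connected client
  socket.emit('initialState', latestScores);
});

// Called by the polling script; broadcasts each update to all clients
function pushUpdate(update) {
  io.emit('scoreUpdate', update);
}

httpServer.listen(3000);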
In this circumstance, my instinct would be to utilise the existing socket.io infrastructure, purely because it would save me writing additional code. However, I am curious to know whether there are any other reasons for, or against, either approach. For example, would it be more performant to render the initial data using one approach or the other?
As the title says, I am creating a Dashboard.
The dashboard should include an option to view data inserted into a database, live or at least "live" with minimal delay.
I was thinking about 2 approaches:
1. When the option is used, the back end creates a trigger in the database (it's only certain data, so I would have to change the trigger according to the data). Said trigger should then send the new data via HTTP to the back end.
What I see as a problem is that the delay of sending the data, and possible errors, could block the whole database.
1.1. Same as 1., but the trigger puts the new data in a separate table, where I can then query and delete the data.
2. Just query for the newest data every 1-5 seconds or so. This just seems extremely bad and avoidable.
Which of those is the best way to do this? Am I missing something? How is this usually done?
The database is PostgreSQL; the back end and front end are in Node.js.
We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example, we show the contents of a customer table in our UI; as records are added/removed/updated in the backend customer table, we echo those updates to the UI in real time via some Redis/socket.io/Node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current needs, but as tables grow into GBs of data, it is starting to slow down on some of the larger ones as our engine digs through the currently 'subscribed' UIs and works out which updates need to be sent to which clients.
Curious what other options might exist here. I believe MongoDB and other NoSQL-type engines support constructs like this out of the box, but I'm not finding an exact hit when Googling for better solutions.
Currently we've rolled our own solution for this entire thing using
overloaded save() methods on the Django table models.
Instead of working at the app level, you might want to work at the lower, database level.
Add a PostgreSQL trigger that fires after row insertion, and use pg_notify to notify external apps of the change.
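For example, the database side could be set up along these lines (a sketch run once at setup; the table my_table, the channel name and the connection string are placeholders):

const { Client } = require('pg');

async function installTrigger() {
  const client = new Client({ connectionString: 'postgres://username@localhost/database' });
  await client.connect();

  // Trigger function: send the inserted row to listeners as a JSON payload
  await client.query(`
    CREATE OR REPLACE FUNCTION notify_new_row() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('channelName', row_to_json(NEW)::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
  `);

  // Fire it after every insert on the watched table
  await client.query('DROP TRIGGER IF EXISTS notify_insert ON my_table');
  await client.query(`
    CREATE TRIGGER notify_insert
      AFTER INSERT ON my_table
      FOR EACH ROW EXECUTE PROCEDURE notify_new_row()
  `);

  await client.end();
}

installTrigger().catch(console.error);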
Then in NodeJS:
var PGPubsub = require('pg-pubsub');
var pubsubInstance = new PGPubsub('postgres://username@localhost/database');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload
  // If the payload was JSON it has already been parsed for you
});
See the pg-pubsub README and the PostgreSQL NOTIFY documentation for more details.
And you will be able to do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)
I'm working on an e-commerce website project. I want to count views of each product and display the count on the single-product page. I know this can easily be implemented by incrementing a counter in the Express route and then writing it to the database.
But it would be a burden on the database if, for each view, I had to connect to the DB and increment the counter.
I have a second solution, but I'm not sure it is better, since I don't have any experience in this area.
The solution is: use a variable to count the number of views for each item, and send a query every day to record this variable, or dump it into a JSON file every X minutes/hours.
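Something like this is what I have in mind (a rough sketch; db.query and the table/column names are placeholders):

const db = require('./db'); // hypothetical query helper for your database

const viewCounts = new Map();

// Called from the Express route instead of hitting the DB directly
function countView(productId) {
  viewCounts.set(productId, (viewCounts.get(productId) || 0) + 1);
}

// Periodically flush the in-memory counts in one batch
async function flushCounts() {
  const snapshot = new Map(viewCounts);
  viewCounts.clear();
  for (const [productId, views] of snapshot) {
    await db.query(
      'UPDATE products SET view_count = view_count + $1 WHERE id = $2',
      [views, productId]
    );
  }
}

setInterval(flushCounts, 60 * 1000); // every minute, once a day, etc.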
What is the best way to count these views without sacrificing the performance of the website?
Any suggestions?
I would store the counter against each endpoint in a Redis server. It's in-memory, so reads/writes are fast, and you can persist it to disk too.
Check out the node-redis client for Node.js.
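A minimal sketch with node-redis (the views:product:<id> key scheme is just an example):

const { createClient } = require('redis');

async function main() {
  const client = createClient(); // defaults to redis://localhost:6379
  await client.connect();

  // One atomic increment per page view
  await client.incr('views:product:42');

  // Read the count back when rendering the product page
  const views = await client.get('views:product:42');
  console.log('product 42 has', views, 'views');

  await client.quit();
}

main().catch(console.error);

INCR is atomic, so concurrent requests can't lose counts, and you can still run a periodic job that copies the Redis counters into your main database if you want them there.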
Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as hotel search or Airbnb
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary)
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, thereby pushing all existing data up to the hosted CouchDB database, or...
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
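For example (a sketch; the remote URL and the document fields are placeholders):

const PouchDB = require('pouchdb');
const db = new PouchDB('favourites');

// Store each favourite under a conflict-free ID
db.put({
  _id: 'star_' + Date.now(),
  propertyId: 'hotel-123',
  starredAt: new Date().toISOString()
}).then(() => {
  // Later, once the user has opted in and received credentials,
  // push everything that has accumulated locally to their remote DB
  return db.replicate.to('https://user:pass@host.example/userdb-abc');
}).then((result) => {
  console.log('pushed', result.docs_written, 'docs');
});

Because replication works on the pre-existing local database, there is no need to copy docs into a new database first; the one-off db.replicate.to() call pushes everything that has accumulated.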