I recently started using Node, Backbone, and Mongoose to create my first web app.
At the very beginning I followed tutorials and used Backbone client-side to define models. Those models mirror my Mongoose schemas server-side.
When I call .save() on one of my models, my data is automatically sent back to the client with an _id.
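For context, a minimal sketch of that round trip, assuming a typical Express + Mongoose route (the Item model and the /items path are made up for illustration):

const express = require('express');
const mongoose = require('mongoose');

// A Mongoose model mirroring a Backbone model on the client;
// assumes mongoose.connect(...) has been called elsewhere
const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

const app = express();
app.use(express.json());

// Backbone's model.save() POSTs here; Mongoose assigns the _id
// and the saved document is echoed back to the client
app.post('/items', async (req, res) => {
  const doc = await new Item(req.body).save();
  res.json(doc); // includes the generated _id
});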
But now that my app is almost finished, I realize that I don't really need to save anything, as the only thing I do is query an API and the data doesn't have to be reused.
So my question is, what is the best way to keep the same mechanisms, but without saving anything?
The underlying reason is that I plan to run the app on an EC2 instance, and since I don't need to save anything, I think it is more beneficial to reduce IO usage by not having a database at all.
Thanks, and sorry if the question seems dumb.
My project involves somewhat of a checklist. Initially I used Redux to keep track of the state (whether something is checked off or not). Later I implemented a backend Node server and a Mongo database, and I load data from the database every time I fire up or refresh localhost. Since the check-offs directly modify the elements in the database, there's not a whole lot Redux is doing that the pre-emptive loading isn't already doing.
So my main question is: if the data is fetched from the backend the moment I start everything up, what else can I use Redux for in this case? I know my project might be too small and simple to give a good answer, but I'd still like to know what's possible.
Even though your data comes from the backend, you still need Redux for many reasons. Redux is not just for storing data; it is also good for performance. Let's discuss it with a use case.
Suppose you have a main COMPANY component that fetches data from the API/backend, and the same data is required by your ADMIN component, so you call the network again for it. Fetching data from the backend for each component is very heavy and makes your application slow.
So the best solution is to fetch all your data one time, save it in the Redux store, and distribute it to your components from there, as sketched below.
Redux's main roles:
1. Easy to manage data and state
2. Optimization and performance improvements with selectors
3. Very easy debugging
4. Easy to track data
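A minimal sketch of that fetch-once pattern with plain Redux; the action type, endpoint, and selector name are made up for illustration:

const { createStore } = require('redux');

// One store holds the data for every component
function reducer(state = { companies: [] }, action) {
  switch (action.type) {
    case 'COMPANIES_LOADED':
      return { ...state, companies: action.payload };
    default:
      return state;
  }
}

const store = createStore(reducer);

// Selector: COMPANY and ADMIN both read from this cached slice
const selectCompanies = (state) => state.companies;

// Fetch once; afterwards no component needs its own network call
fetch('/api/companies')
  .then((res) => res.json())
  .then((data) => store.dispatch({ type: 'COMPANIES_LOADED', payload: data }));

Any component can then render selectCompanies(store.getState()) (or useSelector with React-Redux) without touching the network again.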
I am trying out React-Starter-Kit for the first time and loving all the cutting-edge features baked in (apollo/graphql-client in particular). A crucial part of any app for me is the database, and for that my understanding is the same author provides nodejs-api-starter, which sets up a REST interface for accessing Postgres at localhost:5000 and has a GraphQL web UI at localhost:5000/graphql.
That is about as far as I have been able to understand of the setup so far. I have changed the frontend code a little bit so a new component, "Counter", is loaded on the home page. I need to be able to make a new counter, fetch the latest counter, and increment/decrement the counter. Right now the component just outputs the 'value' retrieved from the server on port 5000.
I do not think I am accessing the server on port 5000 correctly. Do I put the port in this URL line somehow?
You can pull the repo down from: https://github.com/Falieson/react-starter-kit-crud-counter-demo
This is my first time setting up a Node.js API server; I am used to MeteorJS, which has pub/sub to MongoDB baked in. I am looking forward to the separation the RSK strategy (which seems more industry-standard?) provides.
I've just finished setting up a full site with a database from React-Starter-Kit. I'm also a newbie, so I understand your frustration.
About this question: you don't need nodejs-api-starter. It has extra functionality (such as a Redis cache) and it's not suited for newbies. You should look deeper into RSK; it already has a DB. If you ran the boilerplate and played around, chances are you'll see a database.sqlite file in your folder: that's the database. Here are the things you should learn:
Use SequelizeJS to connect the Node.js server to the database. Your database can be MySQL/MariaDB, PostgreSQL, or SQLite. The connection is easy, and there is tooling to auto-generate models from your database.
How to create GraphQL types and queries. If your queries need to search the database, import Sequelize's models and use their functions (a sketch follows below).
Test your API via GraphiQL.
Note: if you want to use MongoDB or another NoSQL database, try Mongoose instead of Sequelize.
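Here is a rough sketch of those pieces wired together; the Counter model and its fields are guesses based on your component, not RSK's actual schema:

const { Sequelize, DataTypes } = require('sequelize');
const {
  GraphQLSchema,
  GraphQLObjectType,
  GraphQLInt,
  GraphQLID,
} = require('graphql');

// 1. Connect Sequelize to the SQLite file the boilerplate ships with
const sequelize = new Sequelize({ dialect: 'sqlite', storage: 'database.sqlite' });

// 2. Define (or auto-generate) a model
const Counter = sequelize.define('Counter', {
  value: { type: DataTypes.INTEGER, defaultValue: 0 },
});

// 3. Expose it through a GraphQL type and query
const CounterType = new GraphQLObjectType({
  name: 'Counter',
  fields: {
    id: { type: GraphQLID },
    value: { type: GraphQLInt },
  },
});

const schema = new GraphQLSchema({
  query: new GraphQLObjectType({
    name: 'Query',
    fields: {
      counter: {
        type: CounterType,
        args: { id: { type: GraphQLID } },
        resolve: (root, { id }) => Counter.findByPk(id), // Sequelize searches the DB
      },
    },
  }),
});

You can then point GraphiQL at this schema to test the counter query.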
I have a Node.js server running which shows data on a web interface. The data is fetched from MongoDB using Mongoose. The data is added via a Node-RED application which is isolated from the rest.
Currently my Node.js server fetches the data every 5 seconds. Is there a way to know if the data in my MongoDB has changed?
Thanks, I hope my question is clear.
I was also looking for something similar to what you are asking a few months back. A few ways I know of to do it are:
1) You can use middleware (hooks) when inserting your documents into the DB. You can then send the new data either after saving it in the DB or at insertion time (see the sketch after this list).
2) Refer to this answer, which talks about solving your problem using built-in features provided by MongoDB. You can study them in depth in the MongoDB docs.
3) There is also another way to do this, which involves listening for changes in the log files. As you know, everything done in Mongo is recorded in the oplog, so whenever there is a change in the data you can learn about it from there as well. You will have to do the digging yourself with this method.
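A minimal sketch of option 1 with Mongoose middleware; the schema and the notifyClients() helper are hypothetical:

const mongoose = require('mongoose');

const readingSchema = new mongoose.Schema({
  sensor: String,
  value: Number,
});

// post('save') fires after a document has been written to MongoDB
readingSchema.post('save', function (doc) {
  notifyClients(doc); // hypothetical helper, e.g. io.emit('newReading', doc)
});

const Reading = mongoose.model('Reading', readingSchema);

One caveat: these hooks only fire for writes that go through this model, so they won't see inserts made by a separate process such as your Node-RED application; options 2 and 3 cover that case.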
Hope it helps!
We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example we show the contents of a customer table in our UI, as records are added/removed/updated from the backend customer DB table we echo those updates to our UI in real-time via some redis/socket.io/node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current functions, but as tables continue to grow into GBs of data, it is starting to slow down on some larger tables as our engine digs through the currently 'subscribed' UIs and works out which updates need to go to which clients.
Curious what other options might exist here. I believe MongoDB and other no-sql type engines support some constructs like this out of the box but I'm not finding an exact hit when Googling for better solutions.
"Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models."
Instead of working on the app level you might want to work on the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
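Installing that trigger could look something like this, run once from Node with the pg client (the customer table, function, and channel names are illustrative):

const { Client } = require('pg');

async function installTrigger() {
  const client = new Client({ connectionString: 'postgres://username@localhost/dbname' });
  await client.connect();

  // A function that publishes the new row as JSON on 'channelName'
  await client.query(`
    CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
    BEGIN
      PERFORM pg_notify('channelName', row_to_json(NEW)::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;
  `);

  // Fire it after every insert on the customer table
  await client.query(`
    CREATE TRIGGER customer_notify
    AFTER INSERT ON customer
    FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();
  `);

  await client.end();
}

installTrigger().catch(console.error);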
Then, to listen for those notifications in Node.js:
var PGPubsub = require('pg-pubsub');

// pg-pubsub keeps a dedicated connection and LISTENs on your channels;
// note the connection string names a database, not a table
var pubsubInstance = new PGPubsub('postgres://username@localhost/dbname');

pubsubInstance.addChannel('channelName', function (channelPayload) {
  // Handle the notification and its payload
  // If the payload was JSON it has already been parsed for you
});
See the pg-pubsub documentation for more details.
And you can do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)
I have a website which can have up to 500 concurrent viewers, with data updated every three seconds. Currently each user has an AJAX object which calls a web page every three seconds; that page queries a DB and returns the results.
What I would love to do is have each client hold a socket to a node.js process; that process would poll the DB every 3 seconds for updated data, and when it had updated data it would announce it (ideally as JSON) and push it to each client, which would update the page accordingly.
If this is possible, does anyone have a recommendation as to where to start? I am fairly familiar with JS, but node.js seems to confuse me.
Thanks
I myself have quite a bit of experience with node.js.
It is absolutely doable and looks like the perfect use case for node.js.
I recommend starting with an Express tutorial and later on adding socket.io.
I don't know which DBMS you are using, but there is probably a nice package for that as well. Just look through this list.
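A minimal sketch of the setup you describe, with Express and socket.io; fetchLatestData() is a stand-in for whatever query your DBMS package provides:

const express = require('express');
const http = require('http');
const { Server } = require('socket.io');

const app = express();
const server = http.createServer(app);
const io = new Server(server);

let lastPayload = null;

// One poll for all 500 clients instead of 500 AJAX calls
setInterval(async () => {
  const data = await fetchLatestData(); // hypothetical DB query
  const payload = JSON.stringify(data);
  if (payload !== lastPayload) { // only announce actual changes
    lastPayload = payload;
    io.emit('update', data); // pushed as JSON to every connected client
  }
}, 3000);

server.listen(3000);

On the client, a single io().on('update', render) handler then replaces the three-second AJAX loop.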