How to fetch from nodejs-api-starter into react-starter-kit - node.js

I am trying out React-Starter-Kit for the first time and loving all the cutting-edge features baked in (the Apollo/GraphQL client in particular). A crucial part of any app for me is the database, and for that my understanding is that the same author provides nodejs-api-starter, which sets up a REST interface for accessing Postgres at localhost:5000 and has a GraphQL web UI at localhost:5000/graphql.
That is about as far as I have gotten in understanding the setup. I have changed the frontend code a little so a new component, "Counter", is loaded on the home page. I need to be able to make a new counter, fetch the latest counter, and increment/decrement the counter. Right now the component just outputs the 'value' retrieved from the server on port 5000.
I do not think I am accessing the server on port 5000 correctly; do I put the port in this URL line somehow?
You can pull the repo down from: https://github.com/Falieson/react-starter-kit-crud-counter-demo
This is my first time setting up a Node.js API server; I am used to MeteorJS, which has pub/sub to MongoDB baked in. I am looking forward to the separation the RSK strategy (which seems more industry-standard?) provides.

I've just finished setting up a full site with a database from React-Starter-Kit; I'm also a newbie, so I understand your frustration.
To answer this question: you don't need nodejs-api-starter. It has extra functionality (such as a Redis cache) and it's not suited for newbies. You should look deeper into RSK itself; it already has the DB. If you run the boilerplate and play around, chances are you'll see a file database.sqlite in your folder; that's the database. Here are the things you should learn (a combined sketch follows the list):
Use SequelizeJS to connect the Node.js server with the database. Your database can be MySQL/MariaDB, PostgreSQL or SQLite. The connection is easy and there are tools to auto-generate models from your database.
How to create GraphQL types and queries. If your queries need to search through the database, import your Sequelize models and use their functions.
Test your API via GraphiQL.
Note: if you want to use MongoDB or another NoSQL database, try Mongoose instead of Sequelize.
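For instance, a minimal sketch of those pieces working together: a Sequelize model plus a GraphQL type and a query that reads it (the Counter model, its fields and the file paths are illustrative assumptions, not RSK's actual code):

// data/sequelize.js: connect Sequelize to the SQLite database RSK ships with
const Sequelize = require('sequelize');

const sequelize = new Sequelize('database', null, null, {
  dialect: 'sqlite',
  storage: 'database.sqlite',
});

// data/models/Counter.js: a hypothetical Counter model
const Counter = sequelize.define('Counter', {
  value: { type: Sequelize.INTEGER, allowNull: false, defaultValue: 0 },
});

// data/types/CounterType.js: GraphQL type mirroring the model
const { GraphQLObjectType, GraphQLInt, GraphQLNonNull } = require('graphql');

const CounterType = new GraphQLObjectType({
  name: 'Counter',
  fields: {
    id: { type: new GraphQLNonNull(GraphQLInt) },
    value: { type: new GraphQLNonNull(GraphQLInt) },
  },
});

// data/queries/counter.js: query field resolving the latest counter via Sequelize
const counter = {
  type: CounterType,
  resolve() {
    // return the most recently created row (Sequelize adds createdAt by default)
    return Counter.findOne({ order: [['createdAt', 'DESC']] });
  },
};

module.exports = { sequelize, Counter, CounterType, counter };

Wire the query field into RSK's root schema and you can try it out from GraphiQL.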

Related

the process between a frontend, backend and cloud database

I'm having a hard time finding information on this simple, straightforward process. I keep getting forwarded to things like "Google Cloud Engine" and such.
I am attempting to start a new project to expand my knowledge. Previously, I developed a localhost web app which included a working frontend with React, an Express backend (REST API) and a Mongo database. I got an effective understanding of the concepts of REST calls, state management, authentication and such.
The new setup is Flutter, Node.js (Express) and Firebase.
Looking at quick tutorials, I have a simple Flutter app working with an HTTP POST for a user sign-up. Makes sense.
Normally in Node.js, I'd have a route it hits, e.g. router.post('/users', function (req, res, next) ..., and then I'd have a model schema, and if everything is correct it would post.
Exploring the relationship between Firebase and Node.js, I'm slightly overwhelmed by how this works. I thought it would be something as simple as an authentication key (which, by the way, I have sorted out with firebase-admin) and then proceeding on my merry way with my models and routes/services.
Are the models defined within Firebase, and does my Node code just confirm the requests and talk through the Firebase API? I haven't been able to locate any simple resources for this.
Since you didn't say which product within Firebase you're using (Firebase is a suite of products, not just one thing), I'm going to assume you mean Realtime Database or Cloud Firestore. They are both schemaless NoSQL databases -- they don't impose any structure on the data you put into them. There's no model, there's no validation. That's all stuff you have to do on your own, if you want. Or not, if you want flexibility.
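For example, a hedged sketch of doing that validation yourself in an Express route before writing to Cloud Firestore with firebase-admin (the route path, collection name and field checks are placeholders):

const express = require('express');
const admin = require('firebase-admin');

// assumes credentials are available, e.g. via GOOGLE_APPLICATION_CREDENTIALS
admin.initializeApp();
const db = admin.firestore();

const router = express.Router();

router.post('/users', async (req, res, next) => {
  try {
    const { email, name } = req.body;

    // Firestore won't enforce any schema, so validate here yourself
    if (typeof email !== 'string' || !email.includes('@')) {
      return res.status(400).json({ error: 'a valid email is required' });
    }
    if (typeof name !== 'string' || name.length === 0) {
      return res.status(400).json({ error: 'name is required' });
    }

    // write only the fields you validated
    const doc = await db.collection('users').add({ email, name });
    res.status(201).json({ id: doc.id, email, name });
  } catch (err) {
    next(err);
  }
});

module.exports = router;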

Real-Time Database Messaging

We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example, we show the contents of a customer table in our UI; as records are added/removed/updated in the backend customer DB table, we echo those updates to our UI in real time via some Redis/Socket.IO/Node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current needs, but as tables grow into the GBs of data, it is starting to slow down on some larger tables as our engine digs through the currently 'subscribed' UIs and works out which updates need to go to which clients.
Curious what other options might exist here. I believe MongoDB and other NoSQL engines support constructs like this out of the box, but I'm not finding an exact hit when Googling for better solutions.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models.
Instead of working at the app level, you might want to work at the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
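On the database side, a hedged sketch of such a trigger, with a hypothetical customer table and customer_changes channel (run the DDL once, from psql or, as here, from Node with the pg client; substitute the channel name your listener subscribes to):

const { Client } = require('pg');

// DDL: a trigger function that broadcasts each inserted row as JSON via pg_notify
const ddl = `
  CREATE OR REPLACE FUNCTION notify_customer_insert() RETURNS trigger AS $$
  BEGIN
    PERFORM pg_notify('customer_changes', row_to_json(NEW)::text);
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  DROP TRIGGER IF EXISTS customer_insert_trigger ON customer;
  CREATE TRIGGER customer_insert_trigger
    AFTER INSERT ON customer
    FOR EACH ROW EXECUTE PROCEDURE notify_customer_insert();
`;

(async () => {
  const client = new Client({ connectionString: 'postgres://username@localhost/database' });
  await client.connect();
  await client.query(ddl); // triggers for UPDATE/DELETE can be added the same way
  await client.end();
})();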
Then in Node.js, listen for the notifications:
var PGPubsub = require('pg-pubsub');
var pubsubInstance = new PGPubsub('postgres://username@localhost/database');
pubsubInstance.addChannel('channelName', function (channelPayload) {
// Handle the notification and its payload
// If the payload was JSON it has already been parsed for you
});
See that and that.
And you will be able to do the same in Python: https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)

How to fetch data from an api and store it in a database with nodejs and mongodb

I want to clone a public API into a MongoDB database with Node.js so I can run my own queries and extend the data as I want. My problem is that I also want to fetch the newest updates. The good thing is that I know approximately when the data will change in the API app.
I thought about something like cron jobs or a setInterval.
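Roughly, this is what I have in mind (just a rough sketch; the API URL, sync interval and field names are placeholders):

const { MongoClient } = require('mongodb');

const mongo = new MongoClient('mongodb://localhost:27017');
const API_URL = 'https://example.com/api/items'; // placeholder endpoint

async function syncOnce() {
  const res = await fetch(API_URL); // global fetch on Node 18+; use node-fetch/axios otherwise
  if (!res.ok) throw new Error(`API responded with ${res.status}`);
  const items = await res.json();

  const collection = mongo.db('mirror').collection('items');
  for (const item of items) {
    // upsert so repeated runs update existing documents instead of duplicating them
    await collection.updateOne({ id: item.id }, { $set: item }, { upsert: true });
  }
}

(async () => {
  await mongo.connect();
  await syncOnce(); // initial clone
  setInterval(() => syncOnce().catch(console.error), 15 * 60 * 1000); // refresh every 15 minutes
})();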
On the internet I could find many solutions for how to turn a database into a REST API, but not vice versa.
Please give me suggestions on how to solve this problem.

Are client and server side validations supported in breeze when using the MEAN stack?

I went through the Zza sample where BreezeJS is used in combination with a Node.js (+ MongoDB) backend.
http://www.breezejs.com/samples/zza
In the sample, neither client-side nor server-side validation is implemented as we can do with a .NET backend.
Is this simply not possible when using Breeze + MongoDB, or is it just not present in the sample?
The big difference from the .NET backend is that the metadata is stored client-side and not autogenerated by the server. Can we assume that something similar will be possible one day with MongoDB?
Is Breeze + MEAN production-ready or is it still beta material?
Client-side validation in Breeze is not dependent on the server; you can define validations directly on the client. There are plenty of examples of this in the documentation for the other, non-Mongo providers, and the code is the same. In terms of metadata coming from the server, since MongoDB has no schema there is no way to return what would be nonexistent metadata to the client. The only way to do this would be if you also used something on the server that types the Mongo data more strongly (i.e. something like Mongoose). This has been a request on the Breeze User Voice.
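For instance, a hedged sketch of registering validators on the client against metadata you have defined yourself (the Customer type, the property name and the existing EntityManager called manager are hypothetical):

var Validator = breeze.Validator;

// assumes client-side metadata for a 'Customer' type has already been defined
var customerType = manager.metadataStore.getEntityType('Customer');

// attach validators directly to the data property
customerType.getProperty('companyName').validators.push(
  Validator.required(),
  Validator.maxLength({ maxLength: 40 })
);

// validation errors then show up on the entity itself
var customer = manager.createEntity('Customer', { companyName: '' });
var errors = customer.entityAspect.getValidationErrors();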
We have not yet created a Mongo example where we automatically validate the data on the server before saving, but this shouldn't be much of a stretch; it will, however, be "custom" code.
In terms of being production-ready, we are still adding features to the Breeze ecosystem, both on the client and the server. However, we do try to limit the number of breaking changes.

mongoose: send data back without saving

I started using Node, Backbone and Mongoose not long ago to create my first web app.
At the very beginning I followed tutorials and used Backbone client-side to define models. Those models mirror my Mongoose schemas server-side.
When I run schema.save() on one of my models, my data is automatically sent back to the client, with an _id.
But now that my app is almost finished, I realize that I don't really need to save anything, as the only thing I do is query an API, and the data doesn't have to be reused.
So my question is, what is the best way to keep the same mechanisms, but without saving anything?
The end reason is that I plan to run the app on an EC2 instance, and knowing that I don't need to save anything, I think it is more beneficial to reduce IO usage by not having any database.
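The closest thing I can think of is to have the Express route just echo the data back with a generated id so Backbone still behaves as if it saved, something like this rough sketch (the route path and fields are placeholders):

const express = require('express');
const crypto = require('crypto');

const router = express.Router();

// behaves like a save but persists nothing
router.post('/api/items', (req, res) => {
  const item = req.body;

  // Backbone only needs an id back to consider the model saved
  item._id = crypto.randomUUID();

  res.status(201).json(item);
});

module.exports = router;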
Thanks, and sorry if the question seems dumb.

Resources