So I have been playing with Node.js/Express for a little while now, and I would really like to try rewriting a relatively large side project using a full JavaScript stack, just to see how it works out. Sails.js seems to be a pretty good choice for a Node.js backend for a REST API with support for web sockets, which is exactly what I am looking for. However, there is one more issue I am looking to resolve, and that is transactional SQL within Node.js.
Most data layers/ORMs I have seen on the Node.js side of things don't seem to support transactions when dealing with MySQL. The ORM provided with Sails.js (Waterline) also does not seem to support transactions, which is odd because I have seen places where it's mentioned that it did, though those comments are quite old. Knex.js has support for transactions, so I was wondering whether it is easy to replace the ORM in Sails.js with it (or whether Sails.js assumes Waterline in the core framework).
I was also wondering whether there is an ORM built on top of Knex.js besides Bookshelf, as I am not a fan of Backbone's Model/Collection system.
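For reference, the kind of transaction support I mean in Knex looks roughly like this (connection details and table names are placeholders):

var knex = require('knex')({
  client: 'mysql',
  connection: { /* host, user, password, database */ }
});

knex.transaction(function(trx) {
  // queries made through trx run on one connection;
  // resolving the returned promise commits, a rejection rolls back
  return trx('accounts')
    .insert({ name: 'savings' })
    .then(function() {
      return trx('audit_log').insert({ action: 'create account' });
    });
}).catch(function(err) {
  console.error('transaction rolled back', err);
});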
You can still write SQL queries directly using Model.query(). Since this is an asynchronous function, you'll have to use promises or async to serialize the queries. For instance, using the MySQL adapter, async, and a model called User:
async.auto({
  transaction: function(next) {
    User.query('BEGIN', next);
  },
  user: ['transaction', function(next) {
    User.findOne(req.param('id')).exec(next);
  }],
  // other queries in the transaction
  // ...
}, function(err, results) {
  if (err) {
    // roll back, then report the error
    return User.query('ROLLBACK', function() {
      res.serverError(err);
    });
  }
  User.query('COMMIT', function(commitErr) {
    if (commitErr) return res.serverError(commitErr);
    // final tasks
    res.json(results.user);
  });
});
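If you'd rather use promises, a rough equivalent (queryAsync is an illustrative helper, not part of Waterline, and this assumes a Waterline version whose queries are thenable):

// illustrative helper: wraps the callback-based Model.query in a promise
function queryAsync(sql) {
  return new Promise(function(resolve, reject) {
    User.query(sql, function(err, result) {
      return err ? reject(err) : resolve(result);
    });
  });
}

queryAsync('BEGIN')
  .then(function() {
    return User.findOne(req.param('id'));
  })
  // chain the other queries in the transaction here
  .then(function(user) {
    return queryAsync('COMMIT').then(function() {
      res.json(user);
    });
  })
  .catch(function(err) {
    queryAsync('ROLLBACK');
    res.serverError(err);
  });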
We're working on native support for transactions at the ORM level:
https://github.com/balderdashy/waterline/issues/62
Associations will likely come first, but transactions are next. We just finished GROUP BY and aggregations (SUM, AVG, etc.).
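For reference, a quick sketch of the aggregation criteria (model and attribute names are made up):

// sums `amount` per `type` for a hypothetical Sale model
Sale.find({
  groupBy: ['type'],
  sum: ['amount']
}).exec(function(err, results) {
  if (err) return res.serverError(err);
  res.json(results);
});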
Transactions in Sails.js turned out to be much trickier than anticipated. The goal is to let the ORM adapter know that two very disparate controller actions on models are to be sent through a single MySQL connection.
The natural way to do it is to write a new adapter that accepts an additional piece of information indicating that a query belongs to a transaction. Doing that requires a change in Waterline (the Sails ORM abstraction module) itself.
Check out whether this helps: https://www.npmjs.com/package/sails-mysql-transactions
Related
I developed an application with CouchDB, using the jQuery Couch API. Now I want to add PouchDB to my application as well. What should the workflow for this implementation look like?
For example, to insert data into CouchDB, I am currently using this:
var mydata = {
  _id: 'dave#gmail.com',
  name: 'David',
  age: 68
};
$.couch.db('dbname').saveDoc(mydata, {
  success: function(data) {
    console.log(data);
  }
});
And in the PouchDB tutorial, they give an example of inserting like the following:
var db = new PouchDB('dbname');
db.put({
  _id: 'dave#gmail.com',
  name: 'David',
  age: 68
});
So now, as I want to add PouchDB to my application, do I need to change the whole jQuery Couch API over to the PouchDB API?
Or can I still achieve this with the jQuery Couch API?
Also, my already-developed CouchDB application has a 3GB database, so when I try to apply two-way sync, it just doesn't happen. How can I handle this?
You can either write a proxy that takes the $.couch calls and forwards them to PouchDB, or replace the $.couch calls with PouchDB calls. I would personally do the latter; the two APIs are fairly closely matched, so it should be mostly a simple find/replace.
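If you do go the proxy route, a minimal shim covering just saveDoc might look like this (coverage of the other $.couch.db methods would have to be added case by case):

var pouch = new PouchDB('dbname');

// stand-in for $.couch.db('dbname') that forwards to PouchDB
$.couch = {
  db: function() {
    return {
      saveDoc: function(doc, options) {
        options = options || {};
        pouch.put(doc, function(err, result) {
          if (err) return options.error && options.error(err);
          if (options.success) options.success(result);
        });
      }
      // map the other methods you use the same way
    };
  }
};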
As for the sync issue, 3GB is a fair amount of data to load into the browser, but it should work (with prompts) or at the least fail in a way that tells you why. If you are seeing something not working, please feel free to file an issue in the PouchDB repo.
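As a starting point, wiring up sync with error handlers at least surfaces why it fails (the remote URL is a placeholder, and this assumes a PouchDB build where db.sync is available):

var local = new PouchDB('dbname');
var remote = new PouchDB('http://localhost:5984/dbname'); // placeholder URL

local.sync(remote, { live: true })
  .on('change', function(info) { console.log('synced a batch', info); })
  .on('error', function(err) { console.error('sync failed:', err); });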
I'm trying to decide between two methods for inserting a new document into a collection from the client using Meteor.js: calling a server method, or using the db API directly.
So, I can either access the db api directly on the client:
MyCollection.insert(doc)
Or, I can create a new Server Method (under the /server dir):
Meteor.methods({
  createNew: function(doc) {
    check(doc, etc)
    var id = MyCollection.insert(doc);
    return id;
  }
});
And then call it from the client like this:
Meteor.call('createNew', doc, function(error, result) {
  // Carry on
});
Both work, but as far as I can see from testing, I only benefit from latency compensation (the local cache updating and showing on screen before the server responds) if I hit the db API directly, not if I use a server method, so my preference is for doing things that way. But I also get the impression that the most secure approach is to use a method on the server (mainly because Emily Stark gave it as an example in her video here). Then again, the db API is available on the client no matter what, so why would a server method be better?
I've seen both approaches taken when reading source code elsewhere so I'm stumped.
Note: in both cases I have suitable Allow/Deny rules in place:
MyCollection.allow({
  insert: function(userId, doc) {
    return isAllowedTo.createDoc(userId, doc);
  },
  update: function(userId, doc) {
    return isAllowedTo.editDoc(userId, doc);
  },
  remove: function(userId, doc) {
    return isAllowedTo.removeDoc(userId, doc);
  }
});
In short: Which is recommended and why?
The problem was that I had the method declarations under the /server folder, so they were not available to the client, and this broke latency compensation (the client creates stubs of these methods to simulate the action, but in my case it couldn't because it couldn't see them). After moving them out of this folder, I am able to use server methods in a clean, safe and latency-compensated manner (even with all my Allow/Deny rules set to false; they do nothing and only apply to direct db API access from the client, not the server).
In short: don't use the db API on the client or Allow/Deny rules on the server. Forget they ever existed, just write server methods, make sure they're accessible to both client and server, and use those for CRUD instead.
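A minimal sketch of the layout that made this work for me (the lib/ path is just one convention for shared code; anything outside /server and /client loads on both sides):

// lib/methods.js -- loaded on BOTH client and server,
// so the client gets a stub for latency compensation
Meteor.methods({
  createNew: function(doc) {
    check(doc, Object);
    var id = MyCollection.insert(doc);
    return id;
  }
});

The client runs the stub immediately against its local cache, while the server runs the authoritative version.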
I am new to MongoDB and Node.js. So far I have been able to create a new MongoDB database and access it via Node.js. However, I want to write a generic set of methods for accessing collections (CRUD), as my list of collections will grow in number. For example, I have collections containing books and authors:
var books = db.collection('books');
var authors = db.collection('authors');

exports.getBooks = function(callback) {
  books.find(function(e, list) {
    // toArray's first callback argument is an error, not a result
    list.toArray(function(err, array) {
      if (array) callback(null, array);
      else callback(err || e, "Error !");
    });
  });
};
I have a similar method for getting authors. This is getting too repetitive, especially as I want to add methods for the other CRUD operations as well. Is there a way to have common/generic CRUD methods for all my collections?
You should take a look at Mongoose; it makes it easy to handle MongoDB from Node.js. Mongoose has a schema-based solution, where each schema maps to a MongoDB collection, and you get a set of methods to manipulate these collections via models, which are obtained by compiling the schemas. I was in exactly the same place a couple of months ago and found that Mongoose is good enough for all such needs.
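A minimal sketch of what that looks like for the books example (field names are guesses at your document shape):

var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/mydb');

// one schema/model per collection; the model carries the CRUD methods
var Book = mongoose.model('Book', new mongoose.Schema({
  title: String,
  author: String
}));

// no per-collection boilerplate needed:
Book.create({ title: 'Dune', author: 'Herbert' }, function(err, book) {
  if (err) return console.error(err);
  Book.find({ author: 'Herbert' }, function(err, books) {
    console.log(books);
  });
});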
@Dilpa: not sure if you have looked at or are utilizing Mongoose, but it can be helpful with implementing CRUD.
I wrote my own service to handle very simple CRUD operations on MongoDB documents. Mongoose is excellent, but it imposes structure on the documents (which IMO goes against the purpose of MongoDB: if you're going to have a schema, why not just use a relational db?).
https://github.com/stupid-genius/MongoCRUD
This service also has the advantage of being implemented as a REST API, so it can be consumed by a Node.js app or others. If you send a GET request to the root path of the server, you'll get a screen that shows the syntax for all the CRUD operations (the GUI isn't implemented yet). My basic approach was to specify the db and collection on the URL path (host/db/collection) and then pass the doc in the POST body. The route handlers then just pass the doc on to the appropriate MongoDB function; my service exposes those methods in a fairly raw state (it does require authentication, though).
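Roughly, the routing idea looks like this as an Express sketch (this is the shape of the approach, not the actual MongoCRUD code; it assumes JSON body parsing is in place and an already-open node-mongodb-native connection in db):

// POST /<db>/<collection> inserts the JSON body as a document
app.post('/:db/:collection', function(req, res) {
  var collection = db.db(req.params.db).collection(req.params.collection);
  collection.insert(req.body, function(err, result) {
    if (err) return res.json(500, { error: err.message });
    res.json(result);
  });
});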
I read:
How do I manage MongoDB connections in a Node.js web application?
http://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html
How can I set up MongoDB on a Node.js server using node-mongodb-native in an EC2 environment?
And I am really confused: how should I work with MongoDB from Node.js? I'm a rookie, and my question may look stupid.
var mongodb = require('mongodb');
var client = new mongodb.MongoClient(new mongodb.Server('localhost', 27017));

client.open(function(err, dataBase) {
  //all code here?
  dataBase.close();
});
Or do I need to call the following every time I need something from the db:
MongoClient.connect("mongodb://localhost:27017/myDB", function(err, dataBase) {
//all code here
dataBase.close();
});
What is the difference between open and connect? I read in the manual that open means "initialize" and the second one means "connect", but what exactly does that mean? I assume both do the same thing in different ways, so when should I use one instead of the other?
I also want to ask: is it normal that MongoClient needs 4 sockets? I am running two of my web servers at the same time; here's a picture:
http://i43.tinypic.com/29mlr14.png
EDIT:
I want to mention that this isn't a problem (rather a doubt :D); my server works perfectly. I am asking because I want to know whether I am using the MongoDB driver correctly.
Currently I use the first option: initialize the Mongo driver at the beginning and put all the code inside the open callback.
I'd recommend trying the MongoDB tutorial they offer. I was in the same boat, but it breaks things down nicely. In addition, there's an article on GitHub that explains the basics of DB connections.
In short, it does look like you're doing it right.
MongoClient.connect("mongodb://localhost:27017/myDB", function(err, dataBase) {
//all code here
var collection = dataBase.collection('users');
var document1 = {'name':'John Doe'};
collection.insert(document1, {w:1}, function(err,result){
console.log(err);
});
dataBase.close();
});
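One thing worth noting: the usual pattern is to connect once at startup and reuse the handle across requests, rather than connecting and closing on every request. A sketch (assuming an Express app in app):

var MongoClient = require('mongodb').MongoClient;
var db; // shared connection handle

MongoClient.connect("mongodb://localhost:27017/myDB", function(err, dataBase) {
  if (err) throw err;
  db = dataBase;
  // start serving only once the connection is ready
  app.listen(3000);
});

// any request handler can then reuse the same connection pool:
app.get('/users', function(req, res) {
  db.collection('users').find().toArray(function(err, users) {
    if (err) return res.json(500, { error: err.message });
    res.json(users);
  });
});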
You can still sign up for the free course M101JS: MongoDB for Node.js Developers, provided by the MongoDB guys.
Here is a short description:
This course will go over basic installation, JSON, schema design, querying, insertion of data, indexing, and working with language drivers. In the course, you will build a blogging platform, backed by MongoDB. Our code examples will be in Node.js.
I had the same question and couldn't find a proper answer in the Mongo documentation.
All the documentation says is to prefer creating a new db connection and then using open() (rather than using connect()):
http://docs.mongodb.org/manual/reference/method/connect/
This is the first time I've used Node.js and Mongo, so please excuse any ignorance. I come from a PHP background. It was my understanding that Node.js scales well because of its event-driven nature. As such, I built my API in Node and have been testing it on localhost. Today, I deployed it to my cloud server and everything works great, except...
As the requests start to pile up, they start to take a long time to fulfill. With just 2 clients connecting to the API, already I'm seeing 30sec+ page load times when both clients are trying to make several requests at once (which does sometimes happen).
Most of the work done by the API is either (a) reading/writing to MongoDB, which resides on a 2nd server on the cloud (b) making requests to other APIs, websites, etc. and returning the results. Both of these operations should not be blocking, but I can imagine the problem being something to do with a bottleneck either on the Mongo DB server (a) or to the external APIs (b).
Of course, I will have multiple application servers in the end, but I would expect each one to handle more than a couple concurrent clients without choking.
Some considerations:
1) I have some console.logs that I left in my Node code, and I have an SSH client open to monitor the cloud server. I suspect that this could cause slowdown.
2) I use express, mongoose, Q, request, and a handful of other modules
Thanks for taking the time to help a node newb ;)
Edit: added some pics of performance graphs after some responses below...
EDIT: here's a typical callback -- it is called by the Express router, and it uses the Q module and OAuth to make a POST API call to Facebook:
post: function(req, links, images, callback)
{
  // removed some code that calculates the target (string) and params (obj) variables
  // the this.request function is a simple wrapper around the oauth.getProtectedResource function
  Q.ncall(this.request, this, target, 'POST', params)
    .then(function(res) {
      callback(null, res);
    })
    .fail(callback).end();
},
EDIT: some "upsert" code
upsert: function(query, callback)
{
  var id = this.data._id,
      upsertData = this.data.toObject(),
      query = query || {'_id': id};
  delete upsertData._id;
  this.model.update(query, upsertData, {'upsert': true}, function(err, res, out) {
    if (err) {
      if (callback) callback(new Errors.Database({'message': 'the data could not be upserted', 'error': err, 'search': query}));
      return;
    }
    if (callback) callback(null);
  });
},
Admittedly, my knowledge of Q/promises is weak. But, I think I have consistently implemented them in a way that does not block...
Your question has provided half of the relevant data: the technology stack. However, when debugging performance issues, you also need the other half of the data: performance metrics.
You're running some "cloud servers", but it's not clear what these servers are actually doing. Are they spiked on CPU? on Memory? on IO?
There are lots of potential issues. Are you running Express in production mode? Are you taking up too much IO on your MongoDB server? Are you legitimately downloading too much data? Did you get caught in an infinite Node.JS loop? (it happens)
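On the Express production-mode point: a quick way to check is app.get('env'), which reads NODE_ENV (sketch assumes an Express app in app):

// log which mode Express thinks it is in; defaults to 'development'
console.log('express env:', app.get('env'));

// launch in production mode with:
//   NODE_ENV=production node app.js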
I would like to provide better advice, but without knowing the status of the servers involved it's really impossible to start picking at any specific underlying technology. You may be a "Node newb", but basic server monitoring is pretty standard across programming languages.
Thank you for the extra details. I will reiterate the most important part of my comments above: where are these servers blocked?
CPU? (clearly not from your graph)
Memory? (doesn't seem likely here)
IO? (where are the IO graphs, what is your DB server doing?)