JavascriptMVC: does it cache models?

I'm a rank beginner with JMVC. I'm trying to figure out whether it stores models anywhere after they are retrieved from the server.
For example, the Model docs have this code snippet:
$.Controller("Tasks",
{
init: function() {
Task.findAll({}, this.callback('tasks'));
},
Does calling Task.findAll() save the list of tasks in a variable somewhere, like Task.tasks, or do I need to store them myself?
Thanks!

No, it does not seem to cache.
However, you can make your REST resource cached quite simply. Let's assume you have a RESTful resource defined like this:
$.Model('Example.Models.Example',
{
    findAll: REST_BASEPATH + "/example"
}, {});
Now to make this cached, you first have to re-implement that query with some explicit jQuery:
$.Model('Example.Models.Example',
{
    findAll: function(){
        // return the jqXHR (a jQuery Deferred) so JMVC can use it
        return $.ajax({
            url: REST_BASEPATH + "/example",
            type: 'get',
            dataType: 'json'
        });
    }
}, {});
The findAll function now returns a jQuery Deferred object that JMVC is able to use. To add caching, you can store the Deferred on the first call and return the same object on subsequent calls, like this:
var cache;
$.Model('Example.Models.Example',
{
    findAll: function(){
        if (!cache) {
            cache = $.ajax({
                url: REST_BASEPATH + "/example",
                type: 'get',
                dataType: 'json'
            });
        }
        return cache;
    }
}, {});
I find this somewhat kludgy, but this is what I just came up with today. If there's a more elegant way, please let me know.
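If the data can change on the server, one possible refinement (my own sketch; clearCache is a made-up name, not a JMVC API) is a second static method that resets the variable, so the next findAll() hits the server again:
var cache;
$.Model('Example.Models.Example',
{
    findAll: function(){
        if (!cache) {
            cache = $.ajax({
                url: REST_BASEPATH + "/example",
                type: 'get',
                dataType: 'json'
            });
        }
        return cache;
    },
    // clearCache is a hypothetical helper, not part of JMVC
    clearCache: function(){
        cache = undefined; // the next findAll() will issue a fresh request
    }
}, {});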

Related

Why am I getting an undefined array[i] value in my for loop?

I am new to Node.js. In a for loop, I am getting an undefined array[i] value after getting the data from some URL. I don't know how to solve this problem. Please help me.
Here is my code
var urls = ['url1', 'url2', 'url3', 'url4']
for (var i = 0; i < urls.length; i++) {
    console.log(urls[i]) // getting value
    var opt = {
        url: urls[i],
        headers: {
            'Authorization': 'Bearer ' + reply
        }
    }
    request.get(opt, function(error, response, body) {
        console.log(urls[i]) // getting undefined
        if (!error) {
            enrichment_data.push({type: body});
        } else {
            enrichment_data.push({type: ''})
        }
    })
}
According to MDN, this type of logical error in understanding JavaScript closures is a common mistake.
There are two practical ways to solve this. Sticking with ES5 only, the most canonical way to solve this would be to use urls.forEach() instead:
var urls = ['url1', 'url2', 'url3', 'url4']
urls.forEach(function(url, i) {
    console.log(urls[i]) // getting value
    var opt = {
        url: urls[i],
        headers: {
            'Authorization': 'Bearer ' + reply
        }
    }
    request.get(opt, function(error, response, body) {
        console.log(urls[i]) // now logs the right url: each callback closes over its own i
        if (!error) {
            enrichment_data.push({
                type: body
            });
        } else {
            enrichment_data.push({
                type: ''
            })
        }
    })
})
Using this method, you create a new function scope that preserves the value of each i, even within the asynchronous callback, so the variable doesn't get overwritten by later iterations the way it was in your original for loop.
Your second option would be to introduce ES6's lexically scoped variable declarations using let, like so:
var urls = ['url1', 'url2', 'url3', 'url4']
for (let i = 0; i < urls.length; i++) {
    console.log(urls[i]) // getting value
    var opt = {
        url: urls[i],
        headers: {
            'Authorization': 'Bearer ' + reply
        }
    }
    request.get(opt, function(error, response, body) {
        console.log(urls[i]) // now logs the right url: let gives each iteration its own i
        if (!error) {
            enrichment_data.push({
                type: body
            });
        } else {
            enrichment_data.push({
                type: ''
            })
        }
    })
}
Lexical scoping means that any reference to i inside the block scope of the for loop, even within the body of the asynchronous callback, refers to that particular iteration's binding, allowing the code to behave the way it looks like it should.
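For completeness, the classic ES5 workaround before forEach and let was to capture i with an immediately invoked function expression (IIFE); a minimal sketch using the same names as above:
for (var i = 0; i < urls.length; i++) {
    (function(i) { // this parameter i is a fresh binding per iteration
        var opt = {
            url: urls[i],
            headers: {
                'Authorization': 'Bearer ' + reply
            }
        }
        request.get(opt, function(error, response, body) {
            console.log(urls[i]) // logs the right url
            enrichment_data.push({ type: error ? '' : body })
        })
    })(i)
}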
Great answer by Patrick. I wanted to add a few points.
To answer the question directly: you are getting undefined at console.log(urls[i]) because by the time the callback runs, the value of i is 4, and nothing is present at index 4.
Now to the point of why i is 4: request is an async call. When you call request.get, the callback goes through the event loop and is queued (with the right url) to be run later. That queue is processed only once the current call stack is empty, which is after your for loop has finished. The loop finishes with i equal to 4, and since var is function-scoped, every callback sees that same final value.
Here is a great video to understand how event loop works.

Jhipster: how to save multiple records in one entity

I need to save multiple records in a loop. In dialog.js I am getting the correct object, which I am sending to the service JS to save. I am calling the service JS in a loop, but when I run it, the service's save function is only called after the dialog.js loop has finished. How is this happening? How can I save data in a loop?
Dialog-controller.js
function save () {
    vm.isSaving = true;
    for (var i in resourceData) {
        vm.timesheet.sunday = resourceData[i].sunday;
        vm.timesheet.monday = resourceData[i].monday;
        vm.timesheet.tuesday = resourceData[i].tuesday;
        vm.timesheet.wednesday = resourceData[i].wednesday;
        vm.timesheet.thursday = resourceData[i].thursday;
        vm.timesheet.friday = resourceData[i].friday;
        vm.timesheet.saturday = resourceData[i].saturday;
        vm.timesheet.resource = resourceData[i];
        Timesheet.save(vm.timesheet, onSaveSuccess, onSaveError);
    }
}
This is my timesheet.service.js:
(function() {
    'use strict';
    angular
        .module('timesheetApp')
        .factory('Timesheet', Timesheet);

    Timesheet.$inject = ['$resource', 'DateUtils'];

    function Timesheet ($resource, DateUtils) {
        var resourceUrl = 'api/timesheets/:id';
        return $resource(resourceUrl, {}, {
            'query': { method: 'GET', isArray: true },
            'save': {
                method: 'POST',
                isArray: false,
                transformRequest: function (data) {
                    alert("123");
                    var copy = angular.copy(data);
                    return angular.toJson(copy);
                }
            }
        });
    }
})();
You might have to write a recursive function, depending upon the promise returned.
Timesheet.save(vm.timesheet, onSaveSuccess, onSaveError);
This returns a promise which calls your onSaveSuccess method, so the recursion belongs in that function:
function onSaveSuccess() {
    // Need some recursive logic here
}
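For instance, here is a minimal sketch of that idea (my own code, not from the JHipster codebase; saveAll is a made-up helper). It saves one record at a time and copies the shared object per record, so a pending save never sees values from a later iteration:
function saveAll(records, index) {
    if (index >= records.length) {
        vm.isSaving = false; // all records saved
        return;
    }
    // Copy the shared object so each pending save keeps its own values.
    var timesheet = angular.copy(vm.timesheet);
    ['sunday', 'monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday']
        .forEach(function (day) {
            timesheet[day] = records[index][day];
        });
    timesheet.resource = records[index];
    Timesheet.save(timesheet, function () {
        saveAll(records, index + 1); // next record only after this one succeeds
    }, onSaveError);
}
saveAll(resourceData, 0);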
Or try to implement this answer.
Need to call angular http service from inside loop and wait for the value of the service to return before executing code below call

Node.js Web Server fs.createReadStream vs fs.readFile?

So I am writing my web server in pure Node.js, using bluebird only to promisify. This has been bothering me for a week, and I can't decide which one I should really use. I have read tons of posts, blogs and docs on these two topics; please answer based on your own working experience, thanks. Here goes the detailed sum-up and related questions.
Both approaches have been tested, and they both work great. But I can't test for performance: I only have my own basic website files (html, css, img, a small database, etc.), and I have never managed video files or huge databases.
Below are the code parts, to give you the basic ideas (if you already know which one to use, don't bother reading the code, to save yourself some time). This question is not about logic, so you can just read the parts between the dashed lines.
About fs.createReadStream:
Pros: good for huge files; it reads a chunk at a time, saves memory, and pipe is really smart.
Cons: not promise-based and can't easily be promisified (a stream is a different concept from a promise; too hard to do, not worth it).
//please ignore IP, it's just a custom name for prototyping.
IP.read = function (fpath) {
    //----------------------------------------------------
    let file = fs.createReadStream(fpath);
    file.on('error', function () {
        return console.log('error on reading: ' + fpath);
    });
    return file;
    //----------------------------------------------------
};
//to set the response of onRequest(request, response) in http.createServer(onRequest).
IP.setResponse = function (fpath) {
    let ext = path.extname(fpath),
        data = IP.read(fpath);
    return function (resp) {
        //----------------------------------------------------
        //please ignore IP.setHeaders.
        resp.writeHead(200, IP.setHeaders(ext));
        data.pipe(resp).on('error', function (e) {
            console.log('error on piping ' + fpath);
        });
        //----------------------------------------------------
    }
};
About fs.readFile:
Pros: asynchronous and can easily be promisified, which makes code really easy to write (develop) and read (maintain). And there are other benefits I haven't gotten a handle on yet, like data validation, security, etc.
Cons: bad for huge files.
IP.read = function (fpath) {
    //----------------------------------------------------
    let file = fs.readFileAsync(fpath);
    return file;
    //----------------------------------------------------
};
//to set the response of onRequest(request, response) in http.createServer(onRequest).
IP.setResponse = function (fpath) {
    const ext = path.extname(fpath);
    return function (resp) {
        //----------------------------------------------------
        IP.read(fpath).then((data) => {
            resp.writeHead(200, IP.setHeaders(ext));
            resp.end(data);
        }).catch((e) => {
            console.log('Problem when reading: ' + fpath);
            console.log(e);
        });
        //----------------------------------------------------
    }
};
Here are my options:
• The easy way: Using fs.createReadStream for everything.
• The proper way: Using fs.createReadStream for huge files only.
• The practical way: Using fs.readFile for everything, until related problems occur, then handle those problems using fs.createReadStream.
My final decision is to use fs.createReadStream for huge files only (I will create a function just for huge files) and fs.readFile for everything else. Is this a good/proper decision? Any better suggestions?
P.S. (not important):
I really like building infrastructure on my own. To give you an idea, when I instantiate a server, I can just set the routes like this and customize whatever, however I want. Please don't suggest frameworks:
let routes = [
    {
        method: ['GET', 'POST'],
        uri: ['/', '/home', '/index.html'],
        handleReq: function () { return app.setResp(homeP); }
    },
    {
        method: 'GET',
        uri: '/main.css',
        handleReq: function () { return app.setResp(maincssP); }
    },
    {
        method: 'GET',
        uri: '/test-icon.svg',
        handleReq: function () { return app.setResp(svgP); }
    },
    {
        method: 'GET',
        uri: '/favicon.ico',
        handleReq: function () { return app.setResp(iconP); }
    }
];
Or I can customize it and put it in a config.json file like this:
{
    "routes": [
        {
            "method": ["GET", "POST"],
            "uri": ["/", "/home"],
            //I will create a function (handleReq) in my application to handle fpath
            "fpath": "./views/index.html"
        },
        {
            "method": "GET",
            "uri": "/main.css",
            "fpath": "./views/main.css"
        },
        {
            "method": "GET",
            "uri": "/test-icon.svg",
            "fpath": "./views/test-icon.svg"
        }
    ]
}
Let's discuss the actual practical way.
You should not be serving static files from Node.js in production
createReadStream and readFile are both very useful. createReadStream is more efficient in most cases; consider it if you're processing a lot of files (rather than serving them).
You should be serving static files from a static file server anyway. Most PaaS web hosts do this for you automatically, and if you set up an environment yourself you'll find yourself reverse-proxying Node anyway behind something like IIS, which should serve the static files.
This is only true for static files; again, if you read them and transform them multiple times, your question becomes very relevant.
For other purposes, you can use fs.readFileAsync safely
I use readFile a lot for reading files into buffers and working with them. createReadStream can improve latency, but overall you should get similar throughput, and readFile's API is easier to work with and more high-level.
So, in conclusion:
If you're serving static files and you care about performance, don't serve them from Node.js in production anyway.
If you're transforming files as streams and latency is important, use createReadStream.
Otherwise, prefer readFile.
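If you do end up serving files straight from Node, here is a minimal sketch of the hybrid approach the asker settled on (my own code; sendFile and STREAM_THRESHOLD are made-up names, and 1 MB is an arbitrary cutoff): stat the file and stream only when it is large.
const fs = require('fs');

const STREAM_THRESHOLD = 1024 * 1024; // 1 MB, arbitrary for this sketch

function sendFile(fpath, resp, headers) {
    fs.stat(fpath, (err, stats) => {
        if (err) {
            resp.writeHead(404);
            return resp.end();
        }
        if (stats.size > STREAM_THRESHOLD) {
            // Huge file: stream it a chunk at a time to keep memory flat.
            resp.writeHead(200, headers);
            fs.createReadStream(fpath).pipe(resp);
        } else {
            // Small file: buffer it in one read; simpler, and fast enough.
            fs.readFile(fpath, (readErr, data) => {
                if (readErr) {
                    resp.writeHead(500);
                    return resp.end();
                }
                resp.writeHead(200, headers);
                resp.end(data);
            });
        }
    });
}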

Return different results from a sequelize hook

I'm trying to tidy up how my caching works, and thus would like to implement it in my model's hooks. This is what I've implemented so far, and I can see that it is setting and getting the cache correctly.
hooks: {
    beforeFind: function(opts, fn) {
        cache.get(this.getTableName() + ':' + opts.where.id, function(err, result) {
            if (result) {
                return fn(null, result);
            }
            return fn(null, opts);
        });
    },
    afterFind: function(result, options, fn) {
        cache.set(this.getTableName() + ':' + result.getDataValue('id'), result, function () {
            return fn(null, result);
        });
    }
}
The issue is that, after a cache hit, it still performs the database query and returns the result from the database.
Could someone please tell me how to return the result from the cache, and skip the DB query, in the case of a cache hit?
Let's take a look at the code for findAll (it is called for all finds). You can see that it returns a Promise which first executes the hooks and only then, in a then block, runs your query. That's why you can't implement caching this way. There is a heated discussion in this issue about how Sequelize should implement a plugin system (and caching in particular).
What can you do now? Take a look at this lib, where a Cacher object is implemented over the models.
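The underlying idea, as a minimal sketch of my own (not that lib's actual API; cachedFind is a made-up helper, and cache.getAsync/cache.setAsync are assumed promisified versions of the cache calls in the question): put the cache check in front of the model call instead of inside the hooks, so a hit never reaches the database.
function cachedFind(Model, id) {
    var key = Model.getTableName() + ':' + id;
    return cache.getAsync(key).then(function (hit) {
        if (hit) {
            return hit; // cache hit: the database is never touched
        }
        return Model.findOne({ where: { id: id } }).then(function (row) {
            return cache.setAsync(key, row).then(function () {
                return row;
            });
        });
    });
}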

POST Request using req.write() and req.end()

I'm trying to do an HTTP POST using the request module from one Node server to another server.
My code looks something like this:
var req = request.post({url: "http://foo.com/bar", headers: myHeaders});
...
...
req.write("Hello");
...
...
req.end("World");
I expect the body of the request to be "Hello World" on the receiving end, but what I end up with is just "".
What am I missing here?
Note: The ellipsis in the code indicates that the write and the end might be executed in different process ticks.
It looks to me as if you are mixing up request's Request object with http.ClientRequest/http.ServerRequest.
If you want to make a POST to a server with request, what you want to do is something like:
request({ method:"post", url: "server.com", body:"Hello World"}, callback);
As 3on pointed out, the correct syntax for a POST request is:
request({ method:"post", url: "server.com", body:"Hello World"}, callback);
You also have a convenience method:
request.post({ url: "server.com", body:"Hello World"}, callback);
But from your question it seems like you want to stream:
var request = require('request');
var fs = require('fs');
var stream = fs.createWriteStream('file');
stream.write('Hello');
stream.write('World');
fs.createReadStream('file').pipe(request.post('http://server.com'));
Update:
You may break the chunks you write to the stream any way you like, as long as you have the RAM (4 MB is peanuts, but keep in mind that V8, the JavaScript engine behind Node, has a heap allocation limit of about 1.4 GB, I think).
You can see how much you "wrote" to the pipe with stream.bytesWritten, where var stream = fs.createWriteStream('file') as in the piece of code above. I think you can't know how much the other end of the pipe has received, but bytesWritten should give you a pretty decent approximation.
You can listen to the data and end events of both stream and request.post('http://server.com').
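A small sketch of that, using the same names as above (the log lines are just for illustration):
var post = request.post('http://server.com');
post.on('response', function (res) {
    console.log('server replied with status ' + res.statusCode);
});
fs.createReadStream('file')
    .on('data', function (chunk) { console.log('read ' + chunk.length + ' bytes'); })
    .on('end', function () { console.log('done reading'); })
    .pipe(post);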
I managed to make the code written in the question here valid and work as expected by modifying the request module a bit.
I noticed a block of code in request's main.js, in the Request.prototype.init function (at line 356):
process.nextTick(function () {
    if (self._aborted) return
    if (self.body) {
        if (Array.isArray(self.body)) {
            self.body.forEach(function (part) {
                self.write(part)
            })
        } else {
            self.write(self.body)
        }
        self.end()
    } else if (self.requestBodyStream) {
        console.warn("options.requestBodyStream is deprecated, please pass the request object to stream.pipe.")
        self.requestBodyStream.pipe(self)
    } else if (!self.src) {
        if (self.method !== 'GET' && typeof self.method !== 'undefined') {
            self.headers['content-length'] = 0;
        }
        self.end();
    }
    self.ntick = true
})
I'm now overriding this function call by adding a new option (endOnTick) while creating the request. My changes: Comparing mikeal/master with GotEmB/master.
