I'm using the request module to fetch an image from a website and pipe it into a local file. The code looks like:
var url = 'http://xxx.com/x.jpg';
if (url) {
  // pipe() needs a writable stream, not a bare path string
  request(url).pipe(fs.createWriteStream(localFilePath));
}
if (xxx) {
  // save localFilePath to the db
  redirect('/index');
}
The question is: the file path is needed on the index page, so if the file has not finished downloading yet, the index page cannot show it.
I tried:
request(url).pipe(...).on('end', function () {
  // ...
});
but it doesn't seem to work.
So I wonder: how can I do something like yield xxxxx in Node v0.11.x to pause execution until the file has downloaded completely?
Thanks
yield is presently only available in Node 0.11 (when using the --harmony flag), but that is an unstable release that is probably not suitable for any kind of production use. Node 0.12 shouldn't be too far away though, as 0.11 has been in development for a while now. So the good news is that generators will be available in a stable Node.js release near you very soon!
If you want to stick with 0.10.x, you will have to use callbacks or promises for now.
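In the meantime, here is a minimal callback sketch, reusing the question's url, localFilePath and redirect names. One likely reason the 'end' handler above never fired: pipe() returns the destination stream, and writable streams emit 'finish' rather than 'end' (which belongs to readable streams).

var fs = require('fs');
var request = require('request');

var ws = fs.createWriteStream(localFilePath);
request(url).pipe(ws);

// 'finish' fires once all data has been flushed to the file
ws.on('finish', function () {
  // the download is complete here: save localFilePath to the db,
  // then redirect so the index page can find the file
  redirect('/index');
});

// handle stream errors so a failed download doesn't crash the process
ws.on('error', function (err) {
  console.error(err);
});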
I was trying to run some tests but needed test data, so I created a generation file which creates dummy HTML. When I attempt to run it, though, it gives me a ReferenceError: HTMLDivElement is not defined.
Is there something I need to import so that Node knows what HTMLDivElement is? I am not rendering anything, but just want correct data to pipe into follow-on code.
I run my file through tsc, and then run the output through node.
Sample Code:
const main = () => {
  let root = new HTMLDivElement();
};
main();
Edit:
I was trying to bypass it with let root = document.createElement("div");, but Node does not understand what document is either, so I can't get that running as a fallback.
Indeed, Node.js doesn't come with a DOM implementation. If you want to run tests that use the DOM, you'll need to either load a Node.js-compatible DOM implementation such as jsdom, or if that doesn't meet your requirements, switch to a browser-based testing environment such as Selenium.
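For example, with jsdom (a sketch assuming jsdom v10+, where the JSDOM constructor is the entry point; check the API of the version you install):

// npm install jsdom
const { JSDOM } = require("jsdom");

const dom = new JSDOM("<!DOCTYPE html><html><body></body></html>");
const document = dom.window.document;

// document.createElement works now, and the DOM classes exist too
const root = document.createElement("div");
console.log(root instanceof dom.window.HTMLDivElement); // true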
I have a simple utility that I use to resize images on the fly via URL params.
Having some trouble with the Ruby image libraries (CMYK to RGB is, how to say… "unavailable"), I gave it a shot via Node.js, which solved that issue.
Basically, if the image does not exist, Node or Ruby transforms it. When the image has already been requested/transformed, the Ruby or Node processes aren't touched and the image is served statically.
The Ruby version works perfectly. It is a bit slow if a lot of transforms are requested at once, but very stable: it always goes through whatever the amount (I see the images arriving on the page one after another).
The Node version also works perfectly, but when a large number of images are requested for a single page load, the first image is transformed, and then all the other requests return the very same image (the last one transformed). If I refresh the page, the first image (already transformed) is returned right away, the second one is returned correctly transformed, but then all the other images returned are the same as the one just transformed; and it goes on like that for every refresh. Not optimal: basically the requests are "merged" at some point and all return the same image, for reasons I don't understand.
(By "large amount" I mean more than one.)
The Ruby version:
get "/:commands/*" do |commands,remote_path|
path = "./public/#{commands}/#{remote_path}"
root_domain = request.host.split(/\./).last(2).join(".")
url = "https://storage.googleapis.com/thebucket/store/#{remote_path}"
img = Dragonfly.app.fetch_url(url)
resized_img = img.thumb(commands).to_response(env)
return resized_img
end
The Node.js version:
app.get('/:transform/:id', function (req, res, next) {
  parser.parse(req.params, function (resized_img) {
    // the transforms are done via lovell/sharp
    // parser.parse parses the params, writes the file,
    // and returns the file path, then:
    fs.readFile(resized_img, function (error, data) { // readFile, not readFileSync: the sync form takes no callback
      res.write(data);
      res.end();
    });
  });
});
It feels like I'm missing a crucial point in Node here. I expected the same behaviour from Node as from Ruby, but obviously the same pattern transposed to Node just does not work as expected. Node is not waiting for one request to finish before processing the next; rather it processes them in an order that is not clear to me.
I also realize I'm not putting the right words on the issue, but I hope it speaks to some experienced users who can provide clarifications and a better understanding of what happens behind the Node scenes.
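For what it's worth, one classic cause of this exact symptom in Node is module-level state shared between concurrent requests. The thread never shows parser's internals, so the following is purely hypothetical; resize() stands in for the real sharp-based transform:

function resize(params, cb) {
  setTimeout(cb, Math.random() * 100); // simulate async image work
}

var currentPath; // ONE variable shared by ALL in-flight requests: the bug

function parseBuggy(params, done) {
  currentPath = '/tmp/' + params.id + '.jpg';
  resize(params, function () {
    done(currentPath); // by now a later request may have overwritten it
  });
}

function parseFixed(params, done) {
  var path = '/tmp/' + params.id + '.jpg'; // local to this call
  resize(params, function () {
    done(path); // always this request's own path
  });
}

// Three concurrent "requests": the buggy version typically reports
// '/tmp/c.jpg' for all of them; the fixed one never mixes them up.
['a', 'b', 'c'].forEach(function (id) {
  parseBuggy({ id: id }, function (p) { console.log('buggy:', id, '->', p); });
  parseFixed({ id: id }, function (p) { console.log('fixed:', id, '->', p); });
});

Ruby servers often handle each request in its own process or thread, which is why the same pattern can appear to "just work" there.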
Question is too broad / unclear. Anyone interested in this answer would be better served by visiting: Creating Callbacks for required modules in node.js
Basically I have included a CLI package in my Node application. I need the CLI to spin up a new project (this entails creating a folder for the project). After the project folder is created, I need to create some files in it (using fs writeFile). The problem is that right now my writeFile call executes BEFORE the folder is created by the CLI package (this is detected by my console.log). Which brings me to my main question:
Can I add an async callback to CLI.new without modifying the package I included?
FoundationCLI.new(null, {
  framework: 'sites', // 'apps' or 'emails' also
  template: 'basic',  // 'advanced' also
  name: projectName,
  directory: $scope.settings.path.join("")
});

try {
  if (!fs.existsSync(path)) {
    console.log("DIRECTORY NOT THERE!!!!!");
  }
  fs.writeFileSync(correctedPath, JSON.stringify(project), 'utf-8');
} catch (err) {
  throw err;
}
It uses foundation-cli. The new command executes the following async series; I'd love to add a callback to the package, but I'm still not quite sure how.
async.series(tasks, finish);
Anyone interested in this can probably get mileage out of:
Creating Callbacks for required modules in node.js
The code for the new command seems to be available at https://github.com/zurb/foundation-cli/blob/master/lib/commands/new.js.
This code was not written to allow programmatic usage of the new command (it uses console.log everywhere) and does not invoke any callback when the work is finished.
So no, there is no way to make this package do what you are looking for: either patch the package or find another way to achieve what you want.
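One possible workaround, given that the package cannot report completion itself (a sketch, not a tested recipe; path, correctedPath and project are the question's own names): poll until the CLI has created the project folder, then write the file.

var fs = require('fs');

// call cb once dir exists, checking every `interval` ms
function whenDirExists(dir, cb, interval) {
  if (fs.existsSync(dir)) return cb();
  setTimeout(function () {
    whenDirExists(dir, cb, interval);
  }, interval || 200);
}

FoundationCLI.new(null, { /* same options as above */ });

whenDirExists(path, function () {
  fs.writeFileSync(correctedPath, JSON.stringify(project), 'utf-8');
});

Note that the folder can exist before the CLI has finished populating it, so patching the package remains the more robust option.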
I'm very new to Node.js, but I was wondering if the following is easily achievable.
I use gulp along with the browser-sync plugin. I was wondering if there is a way to log, every time the browser gets re-injected, the domain and the time, over a port range. The reason: I want to plot productivity over projects without having to record this manually, and this seems the most logical solution.
Is there anything out there like this, or could it easily be added to a gulpfile?
Many thanks, Luke
There are some options, and you can use the emitter to react to events such as stream:changed, browser:reload, client:connected, connection, ...
example:
var bs = require("browser-sync").create();
bs.init({}); // http://www.browsersync.io/docs/options/
// ...
bs.emitter.on("file:reload", function () {
  console.log("File reload - details:", arguments);
});
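Building on that, the productivity log the question asks about could be as simple as appending a timestamped line on each reload. A sketch (the browser:reload event name comes from the list above; the log file name and format are my own):

var fs = require("fs");

bs.emitter.on("browser:reload", function () {
  // one line per reload: ISO timestamp plus the project directory
  var line = new Date().toISOString() + " " + process.cwd() + "\n";
  fs.appendFile("reload-log.txt", line, function (err) {
    if (err) console.error(err);
  });
});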
I found the mongo-sync package, which lets me use MongoDB synchronously.
Its GitHub page says:
It is a thin wrapper around the official MongoDB driver for Node. Here is a quick usage example that you can use with Common Node:
var Server = require("mongo-sync").Server;
var server = new Server('127.0.0.1');
var result = server.db("test").getCollection("posts").find().toArray();
console.log(result);
server.close();
How can I use it like that? It mentions use with Common Node. Does that mean common-node?
So, how can I use it? Or can I use mongo-sync straightforwardly?
It means you have to follow the Common Node installation instructions and use the common-node command instead of plain old node to run your program.
As the docs mention, to use it with plain old node you need node-fibers and have to make your queries inside a Fiber.
There is no way around node-fibers, I'm afraid, as mongo-sync is just a "synchronous" wrapper around the asynchronous mongo driver, and it's hard to make async JS code synchronous without some low-level monkey-patching.
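A minimal sketch of the node-fibers route with plain node (assuming npm install fibers mongo-sync; this mirrors the README snippet above, just wrapped in a Fiber):

var Fiber = require('fibers');
var Server = require('mongo-sync').Server;

Fiber(function () {
  // the "synchronous" mongo-sync calls are only legal inside a running fiber
  var server = new Server('127.0.0.1');
  var result = server.db('test').getCollection('posts').find().toArray();
  console.log(result);
  server.close();
}).run();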