PDF generation with PhantomJS in node-webkit

I found a comment on this subject ("node wkhtmltopdf creates corrupted PDF in node-webkit") which indicates that it's possible to generate a PDF from HTML in node-webkit by using PhantomJS, specifically with this script: https://github.com/ariya/phantomjs/blob/master/examples/rasterize.js
However, I don't understand how to use this script without a command-line call...

It's not possible to use the script as-is directly in node.js. One option is to use the child_process module to call phantomjs as a command-line script, passing rasterize.js and its options as arguments.
The other possibility is to use a phantom wrapper for node.js and include the code of rasterize.js directly. You would only need small adjustments, for example the page object is provided by the wrapper and does not need to be created. Possible wrappers are node-phantom or phantomjs-node. If you package your app with node-webkit, then you will probably run into problems with the path to the phantomjs executable.
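For illustration, here is a rough sketch of the wrapper approach in the style of node-phantom's error-first callbacks. The method names (create, createPage, page.set, page.open, page.render) and the paperSize property are assumptions about the wrapper's API and may differ between wrappers and versions, so treat this as a sketch rather than documented usage:

var phantom = require('node-phantom');

phantom.create(function (err, ph) {
  ph.createPage(function (err, page) {
    // Rough equivalent of the paperSize setup that rasterize.js does itself;
    // the property-setting API is wrapper-specific.
    page.set('paperSize', { format: 'A4', orientation: 'portrait', margin: '1cm' }, function () {
      page.open('http://example.com', function (err, status) {
        // Rendering to a .pdf file name produces a PDF, as in rasterize.js.
        page.render('output.pdf', function () {
          ph.exit();
        });
      });
    });
  });
});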

PhantomJS' rasterize.js worked well to generate clean multi-page, editable PDFs preserving all the tricky CSS. I got a little confused when trying to use it within a Node.js environment, but it's pretty straightforward.
As per the phantomjs package's npm readme (I've stated it a little more explicitly):
var path = require('path');
var childProcess = require('child_process');
var phantomjs = require('phantomjs');
var binPath = phantomjs.path;

// Args for rasterize.js: URL filename [paperwidth*paperheight|paperformat] [zoom]
var childArgs = [path.join(__dirname, 'rasterize.js'), 'url', 'docname.pdf', 'A4', 1.00];

childProcess.execFile(binPath, childArgs, function (err, stdout, stderr) {
  // handle results
});

Related

nodeJS: fs.write callback and fs.writeFile not working

I knew nothing about fs until I was learning to use casperjs to scrape some content from a website and save it to a file. Following some examples on the web, I wrote this file, scrape.js (the JSON data has been tested, so it has nothing to do with the issue):
var fs = require('fs');
var url = "http://w.nycweb.io/index.php?option=com_k2&view=itemlist&id=4&Itemid=209&format=json";
var casper = require('casper').create();

casper.start(url, function () {
  var json = JSON.parse(this.fetchText('pre'));
  var jsonOfItems = {}, items = json.items;
  items.forEach(function (item) {
    jsonOfItems[item.id] = item.introtext.split('\n');
  });
  fs.write('videoLinks.json', JSON.stringify(jsonOfItems), function (err) {
    if (err) console.log(err);
    console.log('videoLinks.json saved');
  });
});
casper.run();
When I run casperjs scrape.js on the command line of my Ubuntu 14.04 server, I don't see the 'file saved' message as expected, although the file is properly saved. So this is the first question: why isn't the callback running at all?
Secondly, I also tried fs.writeFile, but when I replace fs.write with it, the file isn't saved at all, nor is there any error information.
I do notice that the CasperJS documentation says CasperJS is not a Node.js module and that some Node.js modules won't be available, but I doubt that has anything to do with my issues. And I think it's worth mentioning that previously, when I ran this script, I only got a response like
I'm 'fs' module.
I had to follow this question and reinstall the fs module globally to get it working.
fs.write expects a file descriptor, but you are giving it a filename. Try fs.writeFile: https://nodejs.org/dist/latest-v4.x/docs/api/fs.html#fs_fs_writefile_file_data_options_callback
Edit: Oh, you tried that. Are you sure it didn't write the file somewhere else, like the root directory? Have you tried a full path?
And what version of node are you running?
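To make the difference concrete, here is a minimal sketch in plain Node.js (not CasperJS, whose fs module behaves differently); the file name comes from the question, the rest is illustrative:

var fs = require('fs');
var data = JSON.stringify({ example: true });

// fs.writeFile takes a path and handles open/write/close for you.
fs.writeFile('videoLinks.json', data, function (err) {
  if (err) return console.log(err);
  console.log('videoLinks.json saved');
});

// fs.write takes a file descriptor obtained from fs.open.
fs.open('videoLinks.json', 'w', function (err, fd) {
  if (err) return console.log(err);
  fs.write(fd, data, function (err) {
    if (err) return console.log(err);
    fs.close(fd, function () {
      console.log('videoLinks.json saved');
    });
  });
});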

r.js from node script?

I feel like this must be so obvious but it's escaping me.
I'd like to run requirejs's r.js compilation from a node module instead of from the command line, and every bit of documentation I've seen only shows the command-line option. Something like this is what I'm looking for:
var r = require('requirejs');
r('./build/common.js');
r('./build/app-main.js');
Let me explain the underlying motivation in case there's a better way to do it:
I've got a few different build.js files that I want to run r.js on (separate bundles for common dependencies and the main app). I'd like to wrap this up in a gulpfile or gruntfile that runs both, without putting all the r.js config in the actual grunt/gulp file, as the grunt and gulp require.js plugins all seem to do. Leaving the r.js config in the separate build/*.js files would let us use grunt/gulp or the command line whenever we want.
Any way to accomplish this?
Using the optimizer as a Node module is documented, but not in the most obvious place. This is the example that the documentation gives:
var fs = require('fs'); // needed for readFileSync below
var requirejs = require('requirejs');

var config = {
  baseUrl: '../appDir/scripts',
  name: 'main',
  out: '../build/main-built.js'
};

requirejs.optimize(config, function (buildResponse) {
  // buildResponse is just a text output of the modules
  // included. Load the built file for the contents.
  // Use config.out to get the optimized file contents.
  var contents = fs.readFileSync(config.out, 'utf8');
}, function (err) {
  // optimization error callback
});
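Building on that, a gulp task can simply call requirejs.optimize once per bundle. The task name and the two config objects below are placeholders; this sketch expresses the r.js settings inline, since requirejs.optimize takes a config object:

var gulp = require('gulp');
var requirejs = require('requirejs');

// Wrap the optimizer's success/error callbacks in a promise so gulp knows when we finish.
function optimize(config) {
  return new Promise(function (resolve, reject) {
    requirejs.optimize(config, resolve, reject);
  });
}

gulp.task('rjs', function () {
  var commonConfig = { baseUrl: 'scripts', name: 'common', out: 'build/common-built.js' };
  var appMainConfig = { baseUrl: 'scripts', name: 'app-main', out: 'build/app-main-built.js' };
  // Run the two optimizer passes one after the other.
  return optimize(commonConfig).then(function () {
    return optimize(appMainConfig);
  });
});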

How to use Gulp to create a separate vendor bundle with Browserify from Bower components

I'm using Gulp and Browserify to package my Javascript into 2 separate bundles: application.js and vendor.js.
How do I bundle the vendor package if my vendor libraries are installed with Bower?
In my gulpfile, I'm using the following modules:
var gulp = require("gulp");
var browserify = require("browserify");
var debowerify = require("debowerify");
var source = require("vinyl-source-stream");
Assuming that I have only the Phaser framework installed with bower (for this example), my Gulp task to create the application package looks like this:
gulp.task("scripts-app", function () {
browserify("./app/javascripts/index.js")
.external("phaser")
.pipe(source("application.js"))
.pipe(gulp.dest("./tmp/assets"));
});
Meanwhile, the vendor task looks like this:
gulp.task("scripts-vendor", function () {
browserify()
.transform(debowerify)
.require("phaser")
.pipe(source("vendor.js"))
.pipe(gulp.dest("./tmp/assets"));
});
When I run this Gulp task, I get an error that states Error: Cannot find module 'phaser' from, followed by all the directories it searched through (none of which is the bower_components directory).
Any ideas about how to package these up successfully are greatly appreciated. Thanks!
Answered my own question:
When using require in the Gulp task, you need to supply a path to a file, not just a name.
gulp.task("scripts-vendor", function () {
browserify()
.transform(debowerify)
.require("./bower_components/phaser/phaser.js")
.pipe(source("vendor.js"))
.pipe(gulp.dest("./tmp/assets"));
});
Notice that require("phaser") became require("./bower_components/phaser/phaser.js").
Doing this works, although the bundle takes forever to build (around 20 seconds). You're probably better off just loading giant libraries/frameworks directly into your app through a <script> tag and then using Browserify Shim.
This lets you require() (in the NodeJS/Browserify sense) global variables (documentation).
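As a rough sketch of that <script> tag + Browserify Shim route (the "global:Phaser" mapping assumes Phaser exposes itself as window.Phaser when loaded directly; check the library's global before copying this):

// package.json (browserify-shim is configured here):
//   "browserify": { "transform": ["browserify-shim"] },
//   "browserify-shim": { "phaser": "global:Phaser" }
//
// index.html loads the library directly, e.g.:
//   <script src="bower_components/phaser/phaser.js"></script>
//
// Application code can then keep using require(), which the shim maps to the global:
var Phaser = require('phaser');
var game = new Phaser.Game(800, 600);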
Seems like you figured out how to require the bower file. Hopefully you'll only have to bundle it once initially, and not on every build. Including the library via a script tag isn't a bad idea. Another technique I'm using is to use scriptjs (a polyfill would work too) to async-load whatever vendor libraries I need, but make sure to include any/all require()s after the script loads. For example, your index.js could look like:
$script('/assets/vendor', function () {
  var phaser = require('phaser');
  // rest of code
});
It's especially nice for loading CDN files, or for deferring the loading of libraries that aren't necessarily used in the core app by every user, or for loading libraries after client-side routing.

How to "require" text files with browserify?

I am using browserify (via browserify-middleware).
How can I require simple text files? Something like:
var myTmpl = require("myTmpl.txt");
I checked the stringify plugin for browserify, but the code in the documentation does not work with browserify v2.
require() is really best for just JavaScript code and JSON files, to maintain parity with node and to improve readability of your code for outsiders who expect require() to work the way it does in node.
Instead of using require() to load text files, consider using the brfs transform. With brfs, you maintain parity with node by calling fs.readFileSync(), but instead of doing synchronous IO as in node, brfs will inline the file contents into the bundle in place, so
var src = fs.readFileSync(__dirname + '/file.txt');
becomes
var src = "beep boop\n";
in the bundle output.
Just compile with -t brfs:
browserify -t brfs main.js > bundle.js
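If you drive Browserify from Node rather than the CLI (as elsewhere on this page), the same transform can be applied programmatically; the file names here are placeholders:

var fs = require('fs');
var browserify = require('browserify');

browserify('./main.js')
  .transform('brfs') // same transform as the -t brfs flag above
  .bundle()
  .pipe(fs.createWriteStream('./bundle.js'));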
More discussion about why overloading require() too much is a bad idea: http://mattdesl.svbtle.com/browserify-vs-webpack
stringify:
https://github.com/JohnPostlethwait/stringify
Here's the author's example:
var browserify = require('browserify');
var stringify = require('stringify');

var bundle = browserify()
  .transform(stringify(['.hjs', '.html', '.whatever']))
  .add('my_app_main.js');
If you really want to use require(), you may want to look at partialify:
my.txt:
Hello, world!
index.js:
alert( require( "my.txt" ) );
Where Browserify is configured:
var browserify = require( "browserify" );
var partialify = require( "partialify/custom" );

var bundle = browserify();
partialify.alsoAllow( "txt" );
bundle.add( "./index.js" );
bundle.transform( partialify );
Theoretically you will get a "Hello, world!" message in the browser.
P.S. I haven't tried this myself.
Edit: note that this solution breaks NodeJS compatibility - it only works in the browserified bundle, as NodeJS doesn't know how to require .txt files.

PhantomJS require() a relative path

In a PhantomJS script I would like to load a custom module, but it seems relative paths do not work in PhantomJS?
script.js:
var foo = require('./script/lib/foo.js');
foo.bar('hello world');
phantom.exit();
foo.js:
exports.bar = function (text) {
  console.log(text);
};
According to fs.workingDirectory I am in the right directory
foo.js is not in the lookup path of phantomjs
Am I missing something ?
EDIT:
injectJs() is not relevant, because I do not need to inject JS into an HTML page but instead want to load my own module, like require('fs') but with a relative path.
After a lot of time searching for the same thing, here is what I understood, though I might be wrong:
PhantomJS doesn't use Node's require, but its own require, so things are different.
When providing a relative path to PhantomJS's require, it is always interpreted as relative to the current working directory.
PhantomJS doesn't implement Node's __dirname, and as such there is no direct way to get the directory of your script.
The solution I found least annoying:
If using PhantomJS pure, without CasperJS:
require(phantom.libraryPath + '/script/lib/foo.js')
If using CasperJS:
var fs = require('fs');
var scriptName = fs.absolute( require("system").args[3] );
var scriptDirectory = scriptName.substring(0, scriptName.lastIndexOf('/'));
require(scriptDirectory + '/script/lib/foo.js')
To load your own module, the right way to do it is to use module.exports, like this:
foo.js:
function bar(text) {
  console.log(text);
}
exports.bar = bar;
And in script.js (which is executed with phantomjs script.js):
var foo = require('./script/lib/foo');
foo.bar('hello world');
phantom.exit();
My solution for loading a resource file (say, a JSON file) that lives in a subfolder of a PhantomJS project, when the consumer script is called from an outer folder, given this structure:
├── consumer.js
└── assets
    ├── data.json
    └── loader.js
Suppose data.json must be loaded by the consumer module, and that this module is called from somewhere else on the machine, outside the project root folder; fs.workingDirectory will not help, since it will be the path of the caller file.
So to solve this, I made a simple loader module inside the assets folder, next to the files I want to load:
(function () {
  var loader = {
    load: function (fileName) {
      var res = require('./' + fileName);
      return res;
    }
  };
  module.exports = loader;
}).call(this);
I then call the loader module from the consumer module like this:
var loader = require('./assets/loader');
var assets = loader.load('data.json');
and that's it.
NOTE: the require here is the PhantomJS require, not the Node version, so it works a bit differently. In this case data.json was a JSON array with no module.exports declaration. The array is returned directly into the assets variable when calling the loader.load(fileName) method.
Have you tried using injectJs(filename)?
Excerpt from the PhantomJS documentation:
Injects external script code from the specified file. If the file cannot be found in the current directory, libraryPath is used for additional look up. This function returns true if injection is successful, otherwise it returns false.
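For comparison with the require-based answers above, a minimal sketch of the injectJs route (it assumes foo.js defines a global function bar instead of using exports, which is a change to the question's module):

// script.js, run with: phantomjs script.js
if (phantom.injectJs('script/lib/foo.js')) {
  bar('hello world'); // assumes foo.js defines bar() in the global scope
  phantom.exit();
} else {
  console.log('could not inject foo.js');
  phantom.exit(1);
}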
Which PhantomJS version are you running? Support for user-provided modules was added in 1.7.
