The code in my main.js file looks like this:
phantom.injectJs("libs/require-1.0.7.js");
require.config({
    baseUrl: ""
});
require([], function(){});
When I run "phantomjs main.js" on the command line, RequireJS doesn't work in main.js. I know how to use RequireJS in a page running in the browser (including PhantomJS's way: page.open(url, callback)), but not like the above. I tried using RequireJS as in main.js; I think this is a common problem. Thank you!
I just struggled with this for some time. My solution is not clean, but it works, and given PhantomJS's unfinished API documentation, I'm happy with that.
Wordy explanation
You need three files. One is your AMD PhantomJS test file, which I'll call "amd.js". The second is the HTML page to load, which I'll name "amd.html". Finally, there is the browser test, which I called "amdTestModule.js".
In amd.html, declare your script tag per normal:
<script data-main="amdTestModule.js" src="require.js"></script>
In your phantomjs test file, this is where it gets hacky. Create your page, and load in the 'fs' module. This allows you to open a relative file path.
var page = require('webpage').create();
var fs = require('fs');
page.open('file://' + fs.absolute('tests/amd.html'));
Now, since RequireJS loads files asynchronously, we can't just pass a callback into page.open and expect things to go smoothly. We need some way to either
1) test our module in the browser and communicate the result back to our PhantomJS context, or
2) tell our PhantomJS context to run a test once all the resources have loaded.
#1 was simpler for my case. I accomplished this via:
page.onConsoleMessage = function(msg) {
    msg = msg.split('=');
    if (msg[1] === 'success') {
        console.log('amd test successful');
    } else {
        console.log('amd test failed');
    }
    phantom.exit();
};
(See the full code below for my console.log message.)
Now, PhantomJS apparently has an event API built in, but it is undocumented. I was also able to get request/response messages from its page.onResourceReceived and page.onResourceRequested, meaning you can debug when all your required modules have loaded. To communicate my test result, however, I just used console.log.
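If you wanted approach #2 instead, one option is to poll the page from the PhantomJS context with page.evaluate. A minimal sketch, assuming the test module sets a window.__amdTestResult flag to 'success' or 'failed' (a hypothetical flag name, not used anywhere else in this answer):

var poll = setInterval(function() {
    // page.evaluate runs inside the page and returns a serializable value
    var result = page.evaluate(function() {
        return window.__amdTestResult;
    });
    if (result) {
        console.log('amd test ' + (result === 'success' ? 'successful' : 'failed'));
        clearInterval(poll);
        phantom.exit();
    }
}, 100);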
Now what happens if the console.log message is never run? The only way I could think of handling that was to use setTimeout:
setTimeout(function() {
    console.log('amd test failed - timeout');
    phantom.exit();
}, 500);
That should do it!
Full Code
directory structure
/projectRoot
/tests
- amd.js
- amdTestModule.js
- amd.html
- require.js (which I symlinked)
- <dependencies> (also symlinked)
amd.js
'use strict';

var page = require('webpage').create();
var fs = require('fs');

/*
page.onResourceRequested = function(req) {
    console.log('\n');
    console.log('REQUEST');
    console.log(JSON.stringify(req, null, 4));
    console.log('\n');
};

page.onResourceReceived = function(response) {
    console.log('\n');
    console.log('RESPONSE');
    console.log('Response (#' + response.id + ', stage "' + response.stage + '"): ' + JSON.stringify(response, null, 4));
    console.log('\n');
};
*/

page.onConsoleMessage = function(msg) {
    msg = msg.split('=');
    if (msg[1] === 'success') {
        console.log('amd test successful');
    } else {
        console.log('amd test failed');
    }
    phantom.exit();
};

page.open('file://' + fs.absolute('tests/amd.html'));

setTimeout(function() {
    console.log('amd test failed - timeout');
    phantom.exit();
}, 500);
amd.html
<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
</head>
<body>
    <script data-main='amdTestModule.js' src='require.js'></script>
</body>
</html>
amdTestModule.js
require([<dependencies>], function(<dependencies>) {
    ...
    console.log(
        (<test>) ? "test=success" : "test=failed"
    );
});
console
$ phantomjs tests/amd.js
amd test successful
You are misunderstanding webpage.injectJs().
It is for injecting scripts into the page you are loading, not into the PhantomJS runtime environment.
So using .injectJs() makes RequireJS load into your page, not into the PhantomJS executable.
That said, PhantomJS's runtime environment has an approximation of CommonJS. RequireJS will not run there by default. If you felt especially (VERY) motivated, you could attempt porting the require shim made for Node.js, but it doesn't work out of the box and would require an incredibly deep understanding of both runtimes. For more details: http://requirejs.org/docs/node.html
A better idea:
You should probably make sure you have CommonJS versions of the JavaScript you wish to run. I personally write my code in TypeScript so I can build for either CommonJS or AMD; I use CommonJS for PhantomJS code, and AMD for Node.js and the browser.
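If rewriting in TypeScript is not an option, a UMD-style wrapper is a common way to keep a single file that loads in both worlds. A minimal sketch (myModule is a placeholder name):

(function (root, factory) {
    if (typeof define === 'function' && define.amd) {
        define([], factory);           // AMD: browser with RequireJS
    } else if (typeof module === 'object' && module.exports) {
        module.exports = factory();    // CommonJS: PhantomJS's require, Node.js
    } else {
        root.myModule = factory();     // fallback: plain browser global
    }
}(this, function () {
    return {
        greet: function () { return 'hello'; }
    };
}));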
Related
I created a test WASM program using Go. In the program's main, it adds an API to the "global" and waits on a channel to avoid exiting. It is similar to the typical hello-world Go WASM program that you can find anywhere on the internet.
My test WASM program works well in browsers; however, I hope to run it and call the API using Node.js. If that is possible, I will create some automation tests based on it.
I tried many ways but I just couldn't get it to work with Node.js. The problem is that, in Node.js, the API cannot be found in the "global". How can I run a Go WASM program (with an exported API) in Node.js?
(Let me know if you need more details)
Thanks!
More details:
--- On Go's side (pseudo code) ---
func main() {
    fmt.Println("My Web Assembly")
    js.Global().Set("myEcho", myEcho())
    <-make(chan bool)
}

func myEcho() js.Func {
    return js.FuncOf(func(this js.Value, apiArgs []js.Value) any {
        for _, arg := range apiArgs {
            fmt.Println(arg.String())
        }
        return nil
    })
}
// build: GOOS=js GOARCH=wasm go build -o myecho.wasm path/to/the/package
--- On browser's side ---
<html>
<head>
    <meta charset="utf-8"/>
</head>
<body>
    <p><pre style="font-family:courier;" id="my-canvas"/></p>
    <script src="wasm_exec.js"></script>
    <script>
        const go = new Go();
        WebAssembly.instantiateStreaming(fetch("myecho.wasm"), go.importObject).then((result) => {
            go.run(result.instance);
        }).then(_ => {
            // it also works without "window."
            document.getElementById("my-canvas").innerHTML = window.myEcho("hello", "ahoj", "ciao");
        });
    </script>
</body>
</html>
--- On Node.js' side ---
globalThis.require = require;
globalThis.fs = require("fs");
globalThis.TextEncoder = require("util").TextEncoder;
globalThis.TextDecoder = require("util").TextDecoder;

globalThis.performance = {
    now() {
        const [sec, nsec] = process.hrtime();
        return sec * 1000 + nsec / 1000000;
    },
};

const crypto = require("crypto");
globalThis.crypto = {
    getRandomValues(b) {
        crypto.randomFillSync(b);
    },
};

require("./wasm_exec");

const go = new Go();
go.argv = process.argv.slice(2);
go.env = Object.assign({ TMPDIR: require("os").tmpdir() }, process.env);
go.exit = process.exit;

WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
    go.run(result.instance);
}).then(_ => {
    console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
}).catch((err) => {
    console.error(err);
    process.exit(1);
});
This pseudo code represents 99% of my real code (only business-related details have been removed). The problem is that I not only need to run the wasm program (myecho.wasm) with Node.js, I also need to call the "API" (myEcho), passing it parameters and receiving the returned values, because I want to create automation tests for those APIs. With Node.js, I could launch the test scripts and validate the outputs all in the command-line environment. The browser isn't a handy tool for this case.
Running the program by node wasm_exec.js myecho.wasm isn't enough for my case.
It would be nice to know more details about your environment and what you are actually trying to do. You can post the code itself, compilation commands, and versions for all the tools involved.
Trying to answer the question without these details:
Go WASM is very browser oriented, because the compiled code needs the glue JS in wasm_exec.js to run. Node.js shouldn't have a problem with that, and the following command should work:
node wasm_exec.js main.wasm
where wasm_exec.js is the glue code shipped with your Go distribution, usually found at $(go env GOROOT)/misc/wasm/wasm_exec.js, and main.wasm is your compiled code. If this fails, you can post the output as well.
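For example, to copy the glue code next to your script first:
cp "$(go env GOROOT)/misc/wasm/wasm_exec.js" .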
There is another way to compile Go code to wasm that bypasses wasm_exec.js: using the TinyGo compiler to output WASI-enabled code. You can try following their instructions to compile your code.
For example:
tinygo build -target=wasi -o main.wasm main.go
You can then build, for example, a JavaScript file wasi.js:
"use strict";
const fs = require("fs");
const { WASI } = require("wasi");
const wasi = new WASI();
const importObject = { wasi_snapshot_preview1: wasi.wasiImport };
(async () => {
const wasm = await WebAssembly.compile(
fs.readFileSync("./main.wasm")
);
const instance = await WebAssembly.instantiate(wasm, importObject);
wasi.start(instance);
})();
Recent versions of node have experimental wasi support:
node --experimental-wasi-unstable-preview1 wasi.js
These are usually the things you would try with Go and WASM, but without further details, it is hard to tell what exactly is not working.
After some struggling, I noticed that the reason is simpler than I expected.
I couldn't get the exported API function in Node.js simply because the API had not been exported yet when I tried to call it!
When the wasm program is loaded and started, it runs in parallel with the caller program (the JS running in Node):
WebAssembly.instantiate(...).then(...go.run(result.instance)...).then(/*HERE!*/)
The code at "HERE" executes too early, while the main() of the wasm program has not yet finished exporting the APIs.
When I changed the Node script to the following, it worked:
WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
    go.run(result.instance);
}).then(_ => {
    let retry = setInterval(function () {
        if (typeof(go.exports.myEcho) != "function") {
            return;
        }
        console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
        clearInterval(retry);
    }, 500);
}).catch((err) => {
    console.error(err);
    process.exit(1);
});
(only includes the changed part)
I know it doesn't seem to be a perfect solution, but at least it proved my guess about the root cause to be true.
But... why didn't it happen in the browser? sigh...
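For what it's worth, a cleaner alternative to polling would be an explicit handshake, assuming the Go side can be changed: have main() call a ready callback right after exporting the API. A sketch (onGoReady is a made-up name used on both sides):

// Node.js side: create the readiness promise before starting the wasm program.
const ready = new Promise((resolve) => {
    globalThis.onGoReady = resolve;
});

WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
    go.run(result.instance); // runs in parallel; don't wait for it to finish
    return ready;            // resolves once the Go side calls onGoReady()
}).then(_ => {
    console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
}).catch((err) => {
    console.error(err);
    process.exit(1);
});

// Go side, after js.Global().Set("myEcho", myEcho()):
//     js.Global().Call("onGoReady")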
Need help.
I use gulp-connect and its livereload method. But if I build several templates at a time, I get a lot of page refreshes. Is there any solution? I want to build several templates with a single page refresh.
So, I reproduced the problem you have and came across this working solution.
First, let's check the gulp plugins you need:
gulp-jade
gulp-livereload
optional: gulp-load-plugins
In case you need any of them, go to:
http://gulpjs.com/plugins/
Search for them and install them.
Strategy: I created a gulp task that will watch your *.jade files; as you work on a file and save it, gulp will compile it into HTML and refresh the browser.
To accomplish that, we define a function called compileAndRefresh that takes the file returned by the watcher, compiles it into HTML, and then refreshes the browser (tested with the livereload plugin for Chrome).
Notes:
I always use gulp-load-plugins to load plugins, so that's why I use plugins.jade and plugins.livereload.
This only compiles files that are saved while you have the live task executing on the command line. It will not compile other files that are not in use. For that, you need to define a task that compiles all files, not only the ones that have changed (see the sketch after the post-edit notes below).
Assume the .jade files live in /jade and the HTML output goes to /html.
So, here is the gulpfile.js:
var gulp = require('gulp'),
    gulpLoadPlugins = require('gulp-load-plugins'),
    plugins = gulpLoadPlugins();

gulp.task('webserver', function() {
    gulp.src('./html')
        .pipe(plugins.webserver({
            livereload: true
        }));

    gulp.watch('./jade/*.jade', function(event) {
        compileAndRefresh(event.path);
    });
});

function compileAndRefresh(file) {
    gulp.src(file)
        .pipe(plugins.jade({
        }))
        .pipe(gulp.dest('./html'));
}
Post edit notes:
Removed the livereload call from compileAndRefresh (the webserver will do that).
Use the gulp-webserver plugin instead of gulp-connect, as they suggest on their repository: "New plugin based on connect 3 using the gulp.src() API. Written in plain javascript. https://github.com/schickling/gulp-webserver"
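For completeness, here is a sketch of the "compile everything" task mentioned in the notes above, assuming the same ./jade to ./html layout:

gulp.task('build-all', function() {
    return gulp.src('./jade/*.jade')
        .pipe(plugins.jade())
        .pipe(gulp.dest('./html'));
});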
Something you can do is watch only the files that change, and then apply a function only to those changed files, something like this:
gulp.task('live', function() {
    gulp.watch('templates/folder', function(event) {
        refresh_templates(event.path);
    });
});

function refresh_templates(file) {
    // the return must share a line with gulp.src: a bare "return" on its
    // own line would return undefined due to automatic semicolon insertion
    return gulp.src(file)
        .pipe(plugins.embedlr())
        .pipe(plugins.livereload());
}
PS: this is not a working example, and I don't know if you are using embedlr, but the point is that you can watch, and use a callback to call another function with the files that are changing, and then manipulate only those files. Also, I assume your goal is to refresh the templates for your browser, but you can manipulate them however you like: save them to dest or do whatever you want.
The key point here is to show how to manipulate the files that change: the watch callback plus a custom function.
// Assumes `loc` (a map of source/output paths) and `json_array` (template
// locals) are defined elsewhere in this gulpfile, and that gulp-changed,
// gulp-jade, and gulp-connect are required at the top as changed, jade, connect.
var jadeTask = function(path) {
    path = path || loc.jade + '/*.jade';
    if (/source/.test(path)) {
        path = loc.jade + '/**/*.jade';
    }
    return gulp.src(path)
        .pipe(changed(loc.markup, {extension: '.html'}))
        .pipe(jade({
            locals : json_array,
            pretty : true
        }))
        .pipe(gulp.dest(loc.markup))
        .pipe(connect.reload());
};
First, install the required plugins:
gulp
express
gulp-jade
connect-livereload
tiny-lr
connect
Then write the code:
var gulp = require('gulp');
var express = require('express');
var path = require('path');
var connect = require("connect");
var jade = require('gulp-jade');
var app = express();

gulp.task('express', function() {
    app.use(require('connect-livereload')({port: 8002}));
    app.use(express.static(path.join(__dirname, '/dist')));
    app.listen(8000);
});

var tinylr;
gulp.task('livereload', function() {
    tinylr = require('tiny-lr')();
    tinylr.listen(8002);
});

function notifyLiveReload(event) {
    var fileName = require('path').relative(__dirname, event.path);
    tinylr.changed({
        body: {
            files: [fileName]
        }
    });
}

gulp.task('jade', function() {
    gulp.src('src/*.jade')
        .pipe(jade())
        .pipe(gulp.dest('dist'));
});

gulp.task('watch', function() {
    gulp.watch('dist/*.html', notifyLiveReload);
    gulp.watch('src/*.jade', ['jade']);
});

gulp.task('default', ['livereload', 'express', 'watch', 'jade'], function() {
});
Find the example here at GitHub.
I have started writing unit tests. I need to call a function in signup.js from another script, unittest.js. How can I do this?
unittest.html would include both scripts:
<html>
<head>
    <script src="signup.js"></script>
    <script src="unittest.js"></script>
</head>
</html>
This is signup.js, which I have to test.
YUI().use(function(Y){
    function demo(){
        window.alert('hello');
    }
});
unittest.js:
YUI().use(function(Y){
    var abc = new Y.Test.Case({
        testOk : function(){
            demo(); // Calling this function, but it is not working
            <Some_Assertion_Stuff_Here>
        }
    });
});
Your two scripts have each created their own YUI sandbox. Neither sandbox shares anything with the other, so you cannot unit test demo() like this.
What you can do is register a module in signup.js and use it in unittest.js. See the following example: http://jsfiddle.net/746nq/
In signup.js, create the module:
// Create a YUI module in signup.js.
YUI.add('signup', function (Y) {
    // Write your module code here, and make your module available on the Y
    // object if desired.
    Y.Signup = {
        demo: function () {
            window.alert("Demo!");
        }
    };
});
In unittest.js, use the module:
// Create a YUI sandbox in unittest.js and use our newly created module.
YUI().use('signup', function (Y) {
    Y.Signup.demo();
    // assert stuff
});
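To tie this back to unit testing, here is a minimal sketch of how the module could be exercised with Y.Test (assuming the 'test' module is available; the test name and assertion are placeholders):

YUI().use('signup', 'test', function (Y) {
    var testCase = new Y.Test.Case({
        name: 'signup tests',
        testDemo: function () {
            Y.Signup.demo();
            // Y.Assert.isTrue(...); your assertions here
        }
    });
    Y.Test.Runner.add(testCase);
    Y.Test.Runner.run();
});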
Hope this helps you.
I have a Node application where I want to use socket.io to communicate data to a client, where it is displayed by smoothie. I have both packages installed (via npm) in two different Node environments, in both cases in the node_modules subdirectory of my project. One of the environments is the BeagleBone Black and the other is the Cloud9 IDE. In both cases the socket.io module resolves and works fine, but no combination of path names gets the smoothie module to resolve (though I can get it to work if I just pull it from GitHub directly).
Here are the relevant bits of the server side code for the Cloud9 IDE:
var app = require('http').createServer(handler)
  , io = require('socket.io').listen(app)
  , fs = require('fs');

app.listen(process.env.PORT, process.env.IP);

function handler (req, res) {
    fs.readFile(__dirname + '/NotWorking.html',
        function (err, data) {
            if (err) {
                res.writeHead(500);
                return res.end('Error loading index.html');
            }
            res.writeHead(200);
            res.end(data);
        });
}
.
.
.
Here are the relevant bits from the client side:
<!DOCTYPE html>
<html>
<head>
    <script src="smoothie/smoothie.js"></script>
    <script src="socket.io/socket.io.js"></script>
    <script>
        var line1 = new TimeSeries();
        var line2 = new TimeSeries();
        var socket = io.connect('http://demo-project.wisar.c9.io/');
        socket.on('news', function (data) {
            for (var property in data) {
                dataPoint = data[property];
            }
            line1.append(new Date().getTime(), dataPoint);
            line2.append(new Date().getTime(), 40);
            socket.emit('my other event', { my: dataPoint });
        });
    </script>
.
.
.
As I said, both modules are located in the node_modules subdirectory of the project directory where the above scripts live. The Node documentation describes how requires are supposed to be resolved (http://nodejs.org/api/modules.html#modules_all_together), and I can follow how it resolves the link to socket.io by way of the index.js route... but it also works when I put a "/" in front, which I cannot find a path for. No permutation or combination of paths makes the smoothie module resolve. smoothie, by the way, is a small charting library that can be found in npm under that name.
Any help would be appreciated.
If your current file is in the same directory as node_modules, then to load smoothie, try this path in the src of the script tag:
./node_modules/smoothie/smoothie.js
The path smoothie/smoothie.js does not give the location of smoothie.js, which lies at node_modules/smoothie/smoothie.js. Keep in mind that Node's module resolution only applies to require() calls on the server; a script tag in the browser is just a URL that your web server must actually serve (socket.io/socket.io.js only works because the socket.io server intercepts that route itself). This worked for me; I hope it works for you.
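Alternatively, you could keep the original src="smoothie/smoothie.js" and have the bare http server from the question serve the file explicitly. A sketch (the route check is my own choice, not a smoothie convention):

var fs = require('fs');

function handler (req, res) {
    // Serve smoothie.js out of node_modules when the browser asks for it;
    // everything else falls through to the page itself.
    var file = (req.url === '/smoothie/smoothie.js')
        ? __dirname + '/node_modules/smoothie/smoothie.js'
        : __dirname + '/NotWorking.html';
    fs.readFile(file, function (err, data) {
        if (err) {
            res.writeHead(500);
            return res.end('Error loading ' + file);
        }
        res.writeHead(200);
        res.end(data);
    });
}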
My project includes the following files:
./index.html
./js/main.js
./js/vendor/require.js
./js/viewmodel/vm.js
The index.html has the following relevant snippet:
<script data-main="js/main.js" src="js/vendor/require.js"></script>
<script type="text/javascript">
    require(['viewmodel/vm', 'ko'],
        function(viewmodel, ko) {
            ko.applyBindings(viewmodel);
        }
    );
</script>
The js/main.js file is as follows:
var root = this;
define('jquery', ['http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.3.js'], function () { return root.$; });
define('ko', ['http://ajax.aspnetcdn.com/ajax/knockout/knockout-2.1.0.js'], function (ko) { return ko; });
The js/viewmodel/vm.js file...
define(['jquery', 'ko'],
    function($, ko) {
        return {
            subject: 'world',
            greeting: 'hello'
        };
    }
);
When I open a browser to index.html, the browser tries to load a file called js/ko.js instead of using the module defined in main.js. It seems the JS file pointed to by the data-main attribute is not guaranteed to run before dependency resolution. This does not seem correct to me, since one purpose of the data-main file is to define the require configuration (i.e. paths, shims, etc.). I am using require v2.1.2.
This works perfectly fine if I copy the contents of my main.js file into the script block in index.html. By "perfectly fine" I mean that it resolves ko as a module and finds the appropriate CDN link for it instead of trying to download ./js/ko.js.
To use the data-main attribute for configuring your whole application, it is necessary that it be the single entry point for all your code.
Your second script block breaks this requirement by providing a second entry point. Since these entry points resolve independently of each other (and asynchronously), you cannot rely on one to affect the other.
To resolve it, refactor your code so that it provides a single entry point to your application, and do your configuration via that entry point, as sketched below.
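Concretely, that could mean moving the bootstrap call out of index.html and into main.js, so the loader configuration and the application start-up live in one place. A sketch based on the code in the question:

// js/main.js - the single entry point
var root = this;

define('jquery', ['http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.8.3.js'],
    function () { return root.$; });
define('ko', ['http://ajax.aspnetcdn.com/ajax/knockout/knockout-2.1.0.js'],
    function (ko) { return ko; });

// Bootstrap here instead of in a second inline script block.
require(['viewmodel/vm', 'ko'], function (viewmodel, ko) {
    ko.applyBindings(viewmodel);
});

index.html then keeps only the loader tag: <script data-main="js/main.js" src="js/vendor/require.js"></script>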
That's because RequireJS sets the async attribute on the script:
The boolean async attribute on script elements allows the external JavaScript file to run when it's available, without delaying page load first.
This means that both scripts are loaded and evaluated in parallel, so neither of the two scripts can access methods or functions from the other one.
If you want to define RequireJS modules or configuration in one script, you must not load that script with RequireJS.
For me there are three possibilities to solve this problem:
Add the content of main.js to your page (as you mentioned)
Load the main.js file as a normal script, without RequireJS
Define the require config before loading the scripts (see the RequireJS docs, and the sketch below)
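The third option sketched: RequireJS picks up a global require object defined before require.js itself loads, so the configuration is guaranteed to apply. The ko path below is just the CDN URL from the question, without the .js extension that RequireJS appends itself:

<script type="text/javascript">
    var require = {
        paths: {
            ko: 'http://ajax.aspnetcdn.com/ajax/knockout/knockout-2.1.0'
        }
    };
</script>
<script data-main="js/main.js" src="js/vendor/require.js"></script>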
I had the same problem. The site I was working on was built from components that loaded asynchronously into each part of the page.
Each component has its own HTML, CSS, and JS code.
So my solution is to keep a guard function around all the code that requires dependencies, to protect it from running before the main JavaScript file:
index.html
<head>
    <script type="text/javascript">
        window.BeforeMainGuard = {
            beforeMainLoadedFunctions: [],
            hasMainLoaded: false,
            guard: function( func ) {
                console.assert( typeof func === 'function' );
                if( this.hasMainLoaded ) {
                    func();
                } else {
                    this.beforeMainLoadedFunctions.push( func );
                }
            },
            onMainLoaded: function() {
                for( var i = 0; i < this.beforeMainLoadedFunctions.length; ++i ) {
                    var beforeMainLoadedFunction = this.beforeMainLoadedFunctions[i];
                    beforeMainLoadedFunction();
                }
                this.beforeMainLoadedFunctions = null;
                this.hasMainLoaded = true;
            }
        };
    </script>
    <script data-main="js/main.js" src="js/vendor/require.js"></script>
    <script type="text/javascript">
        window.BeforeMainGuard.guard( function() {
            require(['viewmodel/vm', 'ko'],
                function(viewmodel, ko) {
                    ko.applyBindings(viewmodel);
                }
            );
        });
    </script>
</head>
js/main.js
require.config({
    // your config
});

require( [ 'AppLogic' ], function( AppLogic ){
    AppLogic.Init();
    window.BeforeMainGuard.onMainLoaded();
});