I'm trying to write my own plugins for gulp. I've written a gulp task that attempts to open the Chrome browser on Windows (I'll work on getting it working for Mac/Linux later).
It seems to work, except it's not passing in my arguments:
/*
 * Open
 */
var cp = require('child_process');

gulp.task('open', function (done) {
    CONFIG.PORT = 8080;

    var uri = 'http://localhost:' + CONFIG.PORT,
        args = [
            uri,
            '--no-first-run',
            '--no-default-browser-check',
            '--disable-translate',
            '--disable-default-apps',
            '--disable-popup-blocking',
            '--disable-zero-browsers-open-for-tests',
            '--disable-web-security',
            '--new-window',
            '--user-data-dir="C:/temp-chrome-eng"'
        ];

    cp.spawn('C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe', args);
    done();
});
How do I get it to accept the arguments I'm passing in? Am I providing the wrong arguments?
I would recommend using the quite popular npm module opener instead, which will solve both your argument issue and cross-platform support.
Instead of finding the browser executable like you are doing, you can simply write:
var opener = require('opener')
opener('http://google.com')
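Adapted to your gulp task, that could look something like this (just a sketch; it assumes CONFIG.PORT is set the same way as in your snippet):
var opener = require('opener');

gulp.task('open', function (done) {
    // opener uses the platform's default mechanism for opening URLs,
    // so there is no hard-coded path to chrome.exe here
    var uri = 'http://localhost:' + CONFIG.PORT;
    opener(uri);
    done();
});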
If you want to stick with your current method, however, try capturing the output by assigning the spawned process to a variable and then listening on its stderr and stdout:
var chrome = cp.spawn ...

chrome.stdout.on('data', function (data) {
    console.log(data.toString());
});

chrome.stderr.on('data', function (data) {
    console.error(data.toString());
});
It does work for me on Linux if I replace your Chrome path with chromium.
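If you do keep the manual spawn, a rough per-platform sketch could look like the following; the Linux and macOS entries are assumptions you would need to adjust for your machines:
// Pick a browser binary per platform before spawning.
// The win32 path is from the snippet above; 'chromium' assumes the binary is on PATH,
// and the darwin path is a guess at a typical Chrome install location.
var executable = {
    win32: 'C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe',
    linux: 'chromium',
    darwin: '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome'
}[process.platform];

var chrome = cp.spawn(executable, args);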
I created a test WASM program using Go. In the program's main, it adds an API to the "global" and waits on a channel to avoid exiting. It is similar to the typical hello-world Go WASM example that you can find anywhere on the internet.
My test WASM program works well in browsers; however, I'd like to run it and call the API using Node.js. If that is possible, I will create some automation tests based on it.
I tried many ways but I just couldn't get it to work with Node.js. The problem is that, in Node.js, the API cannot be found in the "global". How can I run a Go WASM program (with an exported API) in Node.js?
(Let me know if you need more details)
Thanks!
More details:
--- On Go's side (pseudo code) ---
func main() {
    fmt.Println("My Web Assembly")
    js.Global().Set("myEcho", myEcho())
    <-make(chan bool)
}

func myEcho() js.Func {
    return js.FuncOf(func(this js.Value, apiArgs []js.Value) any {
        for _, arg := range apiArgs {
            fmt.Println(arg.String())
        }
        return nil
    })
}
// build: GOOS=js GOARCH=wasm go build -o myecho.wasm path/to/the/package
--- On browser's side ---
<html>
<head>
<meta charset="utf-8"/>
</head>
<body>
<p><pre style="font-family:courier;" id="my-canvas"/></p>
<script src="wasm_exec.js"></script>
<script>
const go = new Go();
WebAssembly.instantiateStreaming(fetch("myecho.wasm"), go.importObject).then((result) => {
    go.run(result.instance);
}).then(_ => {
    // it also works without "window."
    document.getElementById("my-canvas").innerHTML = window.myEcho("hello", "ahoj", "ciao");
});
</script>
</body>
</html>
--- On Node.js' side ---
globalThis.require = require;
globalThis.fs = require("fs");
globalThis.TextEncoder = require("util").TextEncoder;
globalThis.TextDecoder = require("util").TextDecoder;
globalThis.performance = {
    now() {
        const [sec, nsec] = process.hrtime();
        return sec * 1000 + nsec / 1000000;
    },
};
const crypto = require("crypto");
globalThis.crypto = {
    getRandomValues(b) {
        crypto.randomFillSync(b);
    },
};
require("./wasm_exec");
const go = new Go();
go.argv = process.argv.slice(2);
go.env = Object.assign({ TMPDIR: require("os").tmpdir() }, process.env);
go.exit = process.exit;
WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
    go.run(result.instance);
}).then(_ => {
    console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
}).catch((err) => {
    console.error(err);
    process.exit(1);
});
This pseudo code represents 99% of the content of my real code (I only removed business-related details). The problem is that I not only need to run the wasm program (myecho.wasm) with Node.js, but I also need to call the "api" (myEcho), pass it parameters, and receive the returned values, because I want to create automation tests for those "api"s. With Node.js, I can launch the test js scripts and validate the outputs all in the command-line environment. The browser isn't a handy tool for this case.
Running the program with node wasm_exec.js myecho.wasm isn't enough for my case.
It would be nice to know more details about your environment and what you are actually trying to do. You can post the code itself, the compilation commands, and the versions of all the tools involved.
Trying to answer the question without these details:
Go WASM is very browser-oriented, because the compiled code needs the glue JS in wasm_exec.js to run. Node.js shouldn't have a problem with that, and the following command should work:
node wasm_exec.js main.wasm
where wasm_exec.js is the glue code shipped with your Go distribution, usually found at $(go env GOROOT)/misc/wasm/wasm_exec.js, and main.wasm is your compiled code. If this fails, you can post the output as well.
There is another way to compile Go code to WASM that bypasses wasm_exec.js: using the TinyGo compiler to output WASI-enabled code. You can try following their instructions to compile your code.
For example:
tinygo build -target=wasi -o main.wasm main.go
You can then, for example, write a JavaScript file wasi.js:
"use strict";
const fs = require("fs");
const { WASI } = require("wasi");
const wasi = new WASI();
const importObject = { wasi_snapshot_preview1: wasi.wasiImport };
(async () => {
    const wasm = await WebAssembly.compile(
        fs.readFileSync("./main.wasm")
    );
    const instance = await WebAssembly.instantiate(wasm, importObject);
    wasi.start(instance);
})();
Recent versions of Node.js have experimental WASI support:
node --experimental-wasi-unstable-preview1 wasi.js
These are usually the things you would try with Go and WASM, but without further details, it is hard to tell what exactly is not working.
After some struggling, I noticed that the reason is simpler than I expected.
I couldn't get the exported API function in Node.js simply because the API had not been exported yet when I tried to call it!
When the wasm program is loaded and started, it runs in parallel with the caller program (the JS running in Node).
WebAssembly.instantiate(...).then(...go.run(result.instance)...).then(/*HERE!*/)
The code at "HERE" is executed too early and the main() of the wasm program hasn't finished exporting the APIs yet.
When I changed the Node script to following, it worked:
WebAssembly.instantiate(fs.readFileSync(process.argv[2]), go.importObject).then((result) => {
    go.run(result.instance);
}).then(_ => {
    let retry = setInterval(function () {
        if (typeof(go.exports.myEcho) != "function") {
            return;
        }
        console.log(go.exports.myEcho("hello", "ahoj", "ciao"));
        clearInterval(retry);
    }, 500);
}).catch((err) => {
    console.error(err);
    process.exit(1);
});
(only includes the changed part)
I know it doesn't seem to be a perfect solution, but at least it proved my guess about the root cause to be true.
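For what it's worth, the polling can also be wrapped in a Promise so the test code can simply await the export. This is only a sketch and still assumes the function shows up on go.exports as in the snippet above:
// Resolve once the wasm side has finished exporting the named function.
// "go.exports" and the export name come from the code above; the interval is arbitrary.
function waitForExport(name, intervalMs) {
    return new Promise(function (resolve) {
        var timer = setInterval(function () {
            if (typeof go.exports[name] === "function") {
                clearInterval(timer);
                resolve(go.exports[name]);
            }
        }, intervalMs || 100);
    });
}

// Usage:
// waitForExport("myEcho").then(fn => console.log(fn("hello", "ahoj", "ciao")));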
But... why didn't it happen in the browser? Sigh...
I use Chrome 79 and Node.js 13, on Windows 10.
I also use an IRC client, AdiIRC, to connect to some IRC channels. This program can call external programs with arguments via its scripting language (via Run and $exec).
So, what I want to do, probably using Native Messaging, is this: whenever a specific message appears in the IRC channel,
it should be passed as an argument from AdiIRC to my Chrome extension, most probably via something like: Run \path\to\some_executable.exe %message
where "some_executable.exe" would send the wanted message data (using stdout) to the extension.
So, my question is, how can I pass data from AdiIRC to my Chrome extension? Is it possible?
Am I entirely missing something regarding the Native Messaging concept, and is what I want to do possible in a different way?
Below is my effort, i.e. my Chrome extension and a native messaging host.
Note: in Google's Native Messaging documentation, at some point it says that:
Chrome starts each native messaging host in a separate process and
communicates with it using standard input (stdin) and standard output
(stdout).
So, from what I understand from that, in order to be able to pass data from AdiIRC to my extension whenever needed, I need AdiIRC to manually launch another instance of NativeMessaging.js.
So, to test my theory, I launched node "C:\MDN\app\NativeMessaging.js" manually, outside of Chrome (after loading my extension in Chrome), and tried entering random test text into it, but unfortunately it is not received in my extension's background.html page's console.
background.js (of my Chrome extension) :
var port = chrome.runtime.connectNative('my_messaging_host');
port.onMessage.addListener((message) => {
    console.log("Received: " + message);
});
Native Messaging host setup:
install-host.bat
REG ADD "HKCU\Software\Google\Chrome\NativeMessagingHosts\my_messaging_host" /ve /t REG_SZ /d "%~dp0my_messaging_host.json" /f
my_messaging_host.json
{
    "name": "my_messaging_host",
    "description": "Example host for native messaging",
    "path": "C:\\MDN\\app\\my_messaging_host_win.bat",
    "type": "stdio",
    "allowed_origins": [
        "chrome-extension://(my extension's hash)/"
    ]
}
my_messaging_host_win.bat
node "C:\MDN\app\NativeMessaging.js"
NativeMessaging.js ( NodeJS code from MDN )
#!/usr/local/bin/node
process.stdin.on('readable', () => {
    var input = [];
    var chunk;
    while (chunk = process.stdin.read()) {
        input.push(chunk);
    }
    input = Buffer.concat(input);

    var msgLen = input.readUInt32LE(0);
    var dataLen = msgLen + 4;
    if (input.length >= dataLen) {
        var content = input.slice(4, dataLen);
        var json = JSON.parse(content.toString());
        handleMessage(json);
    }
});

function sendMessage(msg) {
    var buffer = Buffer.from(JSON.stringify(msg));
    var header = Buffer.alloc(4);
    header.writeUInt32LE(buffer.length, 0);
    var data = Buffer.concat([header, buffer]);
    process.stdout.write(data);
}

process.on('uncaughtException', (err) => {
    sendMessage({error: err.toString()});
});
Possible addition to NativeMessaging.js, in case it's to be run with an argument:
e.g. via node NativeMessaging.js messageToSend
sendMessage(process.argv[2]);
I'm having some trouble using the webpage API in a PhantomJS script I'm using for load testing.
I'm running the script in a child process, like so:
var path = require('path');
var childProcess = require('child_process');
var binPath = require('phantomjs').path;
var childArgs = [
    path.join(__dirname, 'phantom-script.js')
];

var spawn = childProcess.spawn;
var child = spawn(binPath, childArgs);

child.stdout.on('data', function(data) {
    const buf = Buffer.from(data);
    console.log('stdout:', buf.toString());
});

child.stderr.on('data', function(data) {
    const buf = Buffer.from(data);
    console.log('stderr:', buf.toString());
});
And my simple phantomJS script:
var webPage = require('webpage');
var page = webPage.create();
page.onConsoleMessage = function (msg) {
    console.log(msg);
};

page.onResourceError = function(resourceError) {
    console.log(resourceError.errorCode + ':', resourceError.errorString);
};

function runScript() {
    page.open('<webpage-url>', function(status) {
        console.log('Status:', status);
        if (status === 'success') {
            page.evaluate(function() {
                console.log('Title:', document.title);
            });
        }
    });
}
runScript();
So, to start the PhantomJS script: if both of these files are in the test/ directory and my current directory is one level up from that, I run node test/child-process.js, which then spawns the child process and runs my PhantomJS script.
This gets the script to run, but it always fails in page.open because of a resource error. Replacing my URL with Google's, or really any website's, works fine.
The error logged in onResourceError is stdout: 202: Cannot open file:///Users/<user>/path/to/local/current/directory: Path is a directory.
This is always the path from which I'm running this script. If I move down a directory into test/ and run it with node child-process.js, the error instead logs that directory.
As a headless browser, I assumed PhantomJS would interface with a webpage like any client would, just without rendering the template. What does the current directory from which the script was run have to do with opening the webpage? Why would it be trying to load resources from my local directory when the webpage URL points to a public website, hosted at the IP and PORT specified in the first argument of page.open (e.g. xx.xxx.xx.xx:PORT)?
I'm at a bit of a loss here. The PhantomJS path and all that is correct, since it runs the script fine. I just don't understand why page.open would attempt to open the directory from which the script was called. What does that have to do with its function, which is to open the URL and load it into the page?
Not sure if this is even worth answering, as opposed to just deleting.
I figured it out when I manually typed in the argument www.google.com, instead of copy/pasting from the browser, and I got this as the path in the error: file:///Users/<user>/path/to/local/current/directory/www.google.com.
Now I know why I couldn't find an SO question for it. A stupid error on my part at any rate; it would've been a quick debug if the error had appended the IP address and PORT (my "url") to the end of the file path like it did for www.google.com, a clear indicator that it wasn't hitting a URL at all.
TL;DR: It's a URL, you need http(s)://...
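In code, the fix is just making sure the string handed to page.open carries a scheme; HOST and PORT below are placeholders for the values I had been copy/pasting:
// Without a scheme, PhantomJS resolves "xx.xxx.xx.xx:PORT" as a local
// file path relative to the current working directory.
var url = 'http://' + HOST + ':' + PORT;

page.open(url, function (status) {
    console.log('Status:', status);
});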
I have the following NodeJS code:
var spawn = require('child_process').spawn;
var fs = require('fs');

var Unzipper = {
    unzip: function(src, dest, callback) {
        var self = this;

        if (!fs.existsSync(dest)) {
            fs.mkdir(dest);
        }

        var unzip = spawn('unzip', [ src, '-d', dest ]);

        unzip.stdout.on('data', function (data) {
            self.stdout(data);
        });

        unzip.stderr.on('data', function (data) {
            self.stderr(data);
            callback({message: "There was an error executing an unzip process"});
        });

        unzip.on('close', function() {
            callback();
        });
    }
};
I have a NodeUnit test that executes successfully. Using PhpStorm to debug the test, the var unzip is assigned correctly.
However, if I run the same code as part of a web service, the spawn call doesn't return properly and the server crashes when trying to attach an on handler to the nonexistent stdout property of the unzip var.
I've tried running the program outside of PhpStorm; it crashes on the command line as well, for the same reason. I suspect it's a permissions issue that the tests don't have to deal with. A web server spawning processes could cause chaos in a production environment, so some extra permissions might be needed, but I haven't been able to find (or I've missed) documentation to support my hypothesis.
I'm running Node v0.10.3 on OS X Snow Leopard (via MacPorts).
Why can't I spawn the child process correctly?
UPDATES
For #jonathan-wiepert
I'm using prototypal inheritance, so when I create an "instance" of Unzipper I set stdout and stderr, i.e.:
var unzipper = Unzipper.spawn({
    stdout: function(data) { util.puts(data); },
    stderr: function(data) { util.puts(data); }
});
This is similar to the concept of "constructor injection". As for your other points, thanks for the tips.
The error I'm getting is:
project/src/Unzipper.js:15
unzip.stdout.on('data', function (data) {
^
TypeError: Cannot call method 'on' of undefined
As per my debugging screenshots, the object that is returned from the spawn call is different under different circumstances. My test passes (it checks that a ZIP can be unzipped correctly) so the problem occurs when running this code as a web service.
The problem was that a spawn method created on the Object prototype (see this article on prototypal inheritance) was causing the child_process.spawn function to be replaced, so the wrong function was being called.
I saved child_process.spawn into a property on the Unzipper "class" before it got clobbered, and I use that property instead.
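Roughly, the fix looks like this (a sketch, not the exact production code):
var childProcess = require('child_process');
var fs = require('fs');

var Unzipper = {
    // Grab a reference to the real spawn before any prototype-level
    // "spawn" method can clobber it.
    _spawn: childProcess.spawn,

    unzip: function (src, dest, callback) {
        var self = this;

        if (!fs.existsSync(dest)) {
            fs.mkdirSync(dest);
        }

        // Use the saved reference instead of a (possibly replaced) spawn.
        var unzip = self._spawn('unzip', [ src, '-d', dest ]);

        unzip.stdout.on('data', function (data) { self.stdout(data); });

        unzip.stderr.on('data', function (data) {
            self.stderr(data);
            callback({ message: 'There was an error executing an unzip process' });
        });

        unzip.on('close', function () { callback(); });
    }
};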
I've heard of soda, but it seems like it requires you to sign up and there's a limit on the number of minutes (free account / 200 minutes).
Does anyone know if there's some alternative way to control a browser, or more specifically invoke JS on a web page?
https://github.com/LearnBoost/soda/raw/master/examples/google.js
/**
* Module dependencies.
*/
var soda = require('../')
, assert = require('assert');
var browser = soda.createClient({
    host: 'localhost'
  , port: 4444
  , url: 'http://www.google.com'
  , browser: 'firefox'
});

browser.on('command', function(cmd, args){
    console.log(' \x1b[33m%s\x1b[0m: %s', cmd, args.join(', '));
});
browser
    .chain
    .session()
    .open('/')
    .type('q', 'Hello World')
    .clickAndWait('btnG')
    .getTitle(function(title){
        assert.ok(~title.indexOf('Hello World'), 'Title did not include the query');
    })
    .clickAndWait('link=Advanced search')
    .waitForPageToLoad(2000)
    .assertText('css=#gen-query', 'Hello World')
    .assertAttribute('as_q#value', 'Hello World')
    .testComplete()
    .end(function(err){
        if (err) throw err;
        console.log('done');
    });
Zombie.js might work for you. It is headless and seems really cool.
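A minimal sketch of the idea (the URL is a placeholder; check the Zombie.js docs for the exact API of the version you install):
var Browser = require('zombie');
var browser = new Browser();

// Load a page headlessly; scripts on the page are executed by Zombie.
browser.visit('http://example.com/', function () {
    console.log(browser.text('title'));
});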
There are actually now Selenium bindings for JavaScript that work with Node.js.
Here are some basic steps to get started:
1. Install Node.js; you can find the download here.
2. Make sure you have the latest ChromeDriver and put it in your path.
3. Use npm install selenium-webdriver to get the module added to your project.
4. Write a test, for example:
var webdriver = require('selenium-webdriver');
var driver = new webdriver.Builder().
    withCapabilities(webdriver.Capabilities.chrome()).
    build();

driver.get('http://www.google.com');
driver.findElement(webdriver.By.name('q')).sendKeys('simple programmer');
driver.findElement(webdriver.By.name('btnG')).click();
driver.quit();
I cover how to do this with some screenshots and how to use Mocha as a test driver in my blog post here.
Here's a pure Node.js wrapper around the Java API for Selenium's WebDriver:
https://npmjs.org/package/webdriver-sync
Here's an example:
var webdriverModule = require("webdriver-sync");
var driver = new webdriverModule.ChromeDriver;
var By = webdriverModule.By;

// Navigate to a page that has a "q" input first (the URL here is an assumption).
driver.get("http://www.google.com");

var element = driver.findElement(By.name("q"));
element.sendKeys("Cheese!");
element.submit();
element = driver.findElement(By.name("q"));
assert.equal(element.getAttribute('value'), "Cheese!");
Save that in a .js file and run it with node.
The module is a pure wrapper, so things like sleep or synchronous calls are entirely possible. Here's the current interface of the module:
module.exports = {
    ChromeDriver: ChromeDriver,
    FirefoxDriver: FirefoxDriver,
    HtmlUnitDriver: HtmlUnitDriver,
    By: new By(),
    ExpectedConditions: new ExpectedConditions(),
    WebDriverWait: WebDriverWait,
    Credentials: UserAndPassword,
    Cookie: Cookie,
    TimeUnits: TimeUnits,

    /**
     * @param {number} amount in mills to sleep for.
     */
    sleep: function(amount){
        java.callStaticMethodSync(
            "java.lang.Thread",
            "sleep",
            new Long(amount)
        );
    }
};
You can see an integration test that tests the full capabilities here:
https://github.com/jsdevel/webdriver-sync/blob/master/test/integrations/SmokeIT.js
wd is "A node.js javascript client for webdriver/selenium 2"