How to execute a shell script in a non-blocking manner using protractor? - node.js

Using the steps below:
Step 1: Execute some Selenium code to build an app.
Step 2: Run the app by downloading it.
Observations:
Step 1 gets executed after step 2, since I am issuing shell commands in step 2 and they run outside the WebDriver control flow. To avoid this, I put step 2 in an afterEach block. This works, but it doesn't give me much flexibility and the approach does not scale. Any pointers on how to achieve sequential execution? I would like to run a .sh file as step 2, which would give me much more flexibility.

Your spec file should look something like this:
// step 1: whatever selenium webdriver code you want to run
thisThing.click();
// step 2: whatever other code you want to run, inserted into the controlFlow queue
browser.controlFlow().execute(() => {
// for example, run a shell command synchronously so the queue waits for it
const child_process = require('child_process');
console.log(child_process.execSync('ls -al downloads').toString());
});
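If step 2 really is a .sh file, the same pattern works as long as the queued function returns a promise, so the control flow waits for the script to finish. A minimal sketch, assuming a hypothetical update.sh next to the spec:
const { execFile } = require('child_process');
browser.controlFlow().execute(() => {
// return a promise so the control flow waits for the script to finish
return new Promise((resolve, reject) => {
execFile('./update.sh', (err, stdout, stderr) => {
if (err) return reject(err);
console.log(stdout);
resolve();
});
});
});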

Related

linux wait command only proceed if previous job successful putty

I'm an end user with access to PuTTY in order to run selected scripts on our server as they would run during our overnight batch process.
Some of these I run in sequence using
run_process.ksh kj_job1 & wait; run_process.ksh kj_job2
However, kj_job1 can fail and kj_job2 would still run. I'd like a way for kj_job2 to only proceed if kj_job1 completed successfully, but I can't find a guide online to walk me through what I need to do.
My knowledge in this area is limited; I simply navigate to the directory containing run_process.ksh and add the name of the job I want to run. I recently found out about the & wait command for running strings of jobs, and that with parentheses I can run things in parallel.
Any help is appreciated.

Determine if Javascript (NodeJS) code is running in a REPL

I wish to create one NodeJS source file in a Jupyter notebook which is using the IJavascript kernel so that I can quickly debug my code. Once I have it working, I can then use the "Download As..." feature of Jupyter to save the notebook as a NodeJS script file.
I'd like to have the ability to selectively ignore / include code in the notebook source that will not execute when I run the generated NodeJS script file.
I have solved this problem for Python Jupyter notebooks, because I can determine whether the code is running in an interactive session (the IPython REPL). I accomplished this with the following Python function:
def is_interactive():
    import __main__ as main
    return not hasattr(main, '__file__')
(Thanks to Tell if Python is in interactive mode)
Is there a way to do a similar thing for NodeJS?
I don't know if this is the correct way, but I couldn't find anything else. Basically, if you do
let isRepl = false;
try {
  __dirname; // defined when running a script file
} catch (err) {
  isRepl = true; // throws a ReferenceError in the REPL, where __dirname does not exist
}
it feels a little hacky but works ¯\_(ツ)_/¯
This may not help the OP in all cases, but could help others googling for this question. Sometimes it's enough to know if the script is running interactively or not (REPL and any program that is run from a shell).
In that case, you can check for whether standard output is a TTY:
process.stdout.isTTY
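For example, a small sketch that branches on whether stdout is attached to an interactive terminal:
// true when run from a REPL or a shell, false when output is piped or redirected
if (process.stdout.isTTY) {
  console.log('interactive session');
} else {
  console.log('output is piped or redirected');
}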
The fastest and most reliable route would just be to query the process arguments. From the NodeJS executable alone, there are two ways to launch the REPL. Either you do something like this without any script following the call to node.
node --experimental-modules ...
Or you force node into the REPL using interactive mode.
node -i ...
The option-ending parameter --, added in v6.11.0, will never append arguments to the process.argv array unless node is executing in script mode (via a FILE, -p, or -e). Any arguments meant for Node itself are filtered into the accompanying process.execArgv array, so the only thing left in process.argv should be process.execPath. Under these circumstances, we can reduce the check to the solution below.
const isREPL = process.execArgv.includes("-i") || process.argv.length === 1;
console.log(isREPL ? "You're in the REPL" : "You're running a script m8");
This isn't the most robust method, since a user can otherwise start a REPL from an initiator script that then runs your code. For that case, I'm pretty sure you could throw an artificial error and crawl the stack trace looking for a REPL entry, although I haven't had the time to implement and verify that solution.

Is it possible to incorporate an environment variable into a ruby script for Calabash?

I am testing a feature on an app that requires the user to be a certain age. The only time you see the prompt that asks for your age is when you open the app for the first time and when you log out of the app. I don't want my test to have to log in and then log out just to see this prompt, but I also don't want to manually reset the data between tests. Isn't this why we write scripts? Anyway, before I launch the test, I set an environment variable on the command line: RESET_BETWEEN_SCENARIOS=1 cucumber features/my_feature.feature. Is there a way I can use this variable INSIDE my step definition so that it resets the data on its own when I run the script?
I'm not familiar with Calabash, but it appears to be using cucumber. If that is the case, you could handle the action in a before or after hook which would run before or after each scenario.
Within the features/support folder, add a file hooks.rb
Before() do
  if ENV['RESET_BETWEEN_SCENARIOS'] == '1'
    # code to reset data
  end
end
This could also be run after the scenario by using After() do. The same if/then could be used within a scenario step as well.

Calling a sh script from node.js

I have a node.js application which connects to a server every day.
On this server, a new version of the app may be available. If so, the installed app downloads it, checks that the download is complete, and then stops itself by calling a shell script, which replaces the old app with the new one and starts it again.
I'm struggling to start the update script.
I know I can start it with the child_process.execFile function, which I do:
var execF = require('child_process').execFile;
var PATH = process.argv[1].substr(0, process.argv[1].lastIndexOf('/') + 1),
    filename = 'newapp.js';
execF(PATH + 'up.sh', [PATH + filename], function () { console.log('done'); });
up.sh, for now is just:
cat $1 > /home/pi/test
I get 'done' printed in the console, but test isn't created.
I know that execFile creates a subprocess; is that what prevents the script from doing this?
If I manage to get this started, I know I only have to add some cp commands to the script to have my app auto-update.
EDIT:
Started as usual (calling the script from the console), it works well. Is there a reason for the script not to execute when called from node.js?
I'd suggest that you consider using a module that can do this for you automatically rather than duplicating the effort. Or, at least use their technique as inspiration for your own requirements.
One example is: https://github.com/edwardhotchkiss/always
It's simple to use:
Usage: always <app.js>
=> always app.js
Then, anytime your code changes, the app is killed, and restarted.
As you can see in the source, it uses the Monitor class to watch a specified file, and then uses spawn to kick it off (and of course kill to end the process when a change has happened).
Unfortunately, the [always] output is currently hardcoded into the code, but it would be a simple change/pull request I'm sure to make it optional/configurable. If the author doesn't accept your change, you could just modify a local copy of the code (as it's quite simple overall).
Make sure when you spawn/exec the process you are executing the shell that will be processing the script and not the script itself.
Should be something like
execF("/usr/bin/sh", [PATH + 'up.sh', PATH + filename]);

node.js -- execute command synchronously and get result

I'm trying to execute a child_process synchronously in node.js (Yes, I know this is bad, I have a good reason) and retrieve any output on stdout, but I can't quite figure out how...
I found this SO post: node.js execute system command synchronously, which describes how to use a library (node-ffi) to execute the command. This works great, but the only thing I'm able to get is the process exit code. Any output the command produces is sent directly to stdout -- how do I capture it?
> run('whoami')
username
0
In other words, username is echoed to stdout, and the return value of run is 0.
I'd much rather figure out how to read stdout.
So I have a solution working, but don't exactly like it... Just posting here for reference:
I'm using the node-ffi library referenced in the other SO post. I have a function that:
takes in a given command
appends >> run-sync-output
executes it
reads run-sync-output synchronously and stores the result
deletes this tmp file
returns result
There's an obvious issue where if the user doesn't have write access to the current directory, it will fail. Plus, it's just wasted effort. :-/
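A minimal sketch of that temp-file idea, using execSync as a stand-in for the synchronous node-ffi call (execSync did not exist when the question was asked, but the shape of the workaround is the same):
const fs = require('fs');
const { execSync } = require('child_process');

function runSyncWithOutput(cmd) {
  const tmp = 'run-sync-output';    // assumes the current directory is writable
  execSync(cmd + ' >> ' + tmp);     // run the command, appending its stdout to the temp file
  const result = fs.readFileSync(tmp, 'utf8');
  fs.unlinkSync(tmp);               // delete the temp file
  return result;                    // return the captured output
}

console.log(runSyncWithOutput('whoami'));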
I have built a node.js module that solves this exact problem. Check it out :)
exec-plan
Update
The above module solves your original problem, because it allows for the synchronous chaining of child processes. Each link in the chain gets the stdout from the previous process in the chain.
I had a similar problem and I ended up writing a node extension for this. You can check out the git repository. It's open source and free and all that good stuff!
https://github.com/aponxi/npm-execxi
ExecXI is a node extension written in C++ to execute shell commands
one by one, outputting each command's output to the console in
real-time. Chained and unchained modes are available, meaning
you can choose to stop the script after a command fails
(chained), or continue as if nothing had happened!
Usage instructions are in the ReadMe file. Feel free to make pull requests or submit issues!
However it doesn't return the stdout yet... Well, I just released it today. Maybe we can build on it.
Anyway, I thought it was worth to mention it. I also posted this to a similar question: node.js execute system command synchronously
Since Node version v0.11.12, there is a child_process.execSync function for this.
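A short example of how that looks (execSync returns the command's stdout and throws if the command exits with a non-zero status):
const { execSync } = require('child_process');
// pass an encoding to get a string back instead of a Buffer
const output = execSync('whoami', { encoding: 'utf8' });
console.log(output.trim());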
Other than writing the code a little differently, there's actually no reason to do anything synchronously.
What don't you like about this? (docs)
var exec = require('child_process').exec;
exec('whoami', function (error, username) {
  console.log('stdout: %s', username);
  continueWithYourCode();
});
