I have a Node CLI tool that consumes all of my network's NAT (network address translation) ports whenever it runs, because it opens a new connection for every outbound HTTP request. The tool never finishes the job: once it exceeds the limit, the NAT gateway blocks it from opening any more outbound ports.
I could set http.globalAgent.keepAlive, but the problem is that this CLI tool execs other node modules, so I would have to set http.globalAgent.keepAlive in all of those sub-modules as well. Is there a way I can force http.globalAgent.keepAlive everywhere without changing code in every sub-execed node tool?
The node command supports a --require flag which preloads a module (or several modules) before executing the file passed to node.
Because of the require cache, if a module imports the http module and sets http.globalAgent.keepAlive = true, then any other module in the same process that imports http will get the cached http module, with http.globalAgent.keepAlive already set to true.
Therefore, the key is to override the system's node command so that it --requires a module that sets http.globalAgent.keepAlive = true. Whenever the CLI tool execs node, the wrapper script runs instead and loads the override module first, so the require cache is pre-populated with http configured the way you want for that node process.
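A minimal sketch of such a preload module (the file name keepalive.js and its location are just examples; depending on your Node version you may need to replace the global agents with new Agent({ keepAlive: true }) instead of flipping the flag):

// keepalive.js -- preloaded via: node --require /path/to/keepalive.js
const http = require('http');
const https = require('https');

// Because of the require cache, every later require('http') or require('https')
// in this process sees these same agent objects with keepAlive enabled.
http.globalAgent.keepAlive = true;
https.globalAgent.keepAlive = true;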
A script like the following will pass through any arguments to node and load the overriding module any time a call to node occurs.
#! /bin/sh
exec node_bin --require /path/to/module "$@"
You'll also need to move the original node binary to something like node_bin and install the above script as the new "node" binary.
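For example, assuming the original binary lives at /usr/local/bin/node and the wrapper above was saved as node-wrapper.sh (both paths are assumptions; adjust for your install):

mv /usr/local/bin/node /usr/local/bin/node_bin
cp node-wrapper.sh /usr/local/bin/node
chmod +x /usr/local/bin/node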
Related
I have a pre-written package.json file for an app which I need to modify. More specifically, I want to change the NODE_PORT environment variable through the package.json file and I'm working on a Windows machine.
In the package.json I have several scripts that I run through npm when I want to spin up an instance of the app.
For example:
set NODE_PORT=80&& set NODE_ENV=test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/access.log -e ./logs/err.log --time --name Test
This script for example works fine.
However, when I'm trying to set the NODE_PORT variable to 8080 (that's the port I need) like so:
set NODE_PORT=8080&& set NODE_ENV=parallel_test&& pm2 install pm2-logrotate&& pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test
a whitespace character gets appended to the end of the variable's value.
I verified this by printing the number of characters of process.env.NODE_PORT in the log file, which prints 5. Moreover, the app's Google login crashes because the app's redirect link doesn't match the one configured in the Google Cloud Platform. That is:
app: http://localhost:8080 /auth/check-google vs. Google Cloud Platform: http://localhost:8080/auth/check-google
Any idea why this is happening?
I faced a similar issue recently and handled it with .trimEnd() while loading variables with dotenv, roughly like the sketch below. But I think using cross-env can solve your problem.
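A rough sketch of that .trimEnd() workaround (assuming the variable is loaded from a .env file via dotenv):

// strip any trailing whitespace after dotenv has populated process.env
require('dotenv').config();
process.env.NODE_PORT = (process.env.NODE_PORT || '').trimEnd();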
Most Windows command prompts will choke when you set environment
variables with NODE_ENV=production like that. (The exception is Bash
on Windows, which uses native Bash.) Similarly, there's a difference
in how Windows and POSIX commands utilize environment variables. With
POSIX, you use $ENV_VAR and on Windows you use %ENV_VAR%.
Add cross-env inside your script: "cross-env NODE_PORT=8080 ..."
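For example, your second script could become something like this (a sketch; note that cross-env only applies the variables to the single command that follows it, so it goes in front of the pm2 start part, which is what actually launches app.js):

pm2 install pm2-logrotate&& cross-env NODE_PORT=8080 NODE_ENV=parallel_test pm2 start app.js -i max -o ./logs/parallel_access.log -e ./logs/parallel_err.log --time --name Parallel_Test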
I have a variable set in my .bashrc file:
whoami@cloudshell:~/source/NodePrototype (x-alcove-9999999)$ echo $APP_ENVIRONMENT
LIVE
Yet my Node.js application, from this line:
const app_environment_config = require('./APP_ENVIRONMENT/' + process.env.APP_ENVIRONMENT)
produces:
2019-02-21 14:18:16 default[20190221t141628] Error: Cannot find module './APP_ENVIRONMENT/undefined'
Even though when I enter the node shell:
whoami@cloudshell:~/source/NodePrototype (x-alcove-9999999)$ node
> process.env.APP_ENVIRONMENT
'LIVE'
The same part works locally.
It depends on how your Node app is being launched; it looks like it is not running in an environment where that variable exists. To make sure, print all your current env vars with console.log(process.env).
Also, when you need something like that, a good practice is to pass configuration to your Node apps through .env files with this module: https://www.npmjs.com/package/dotenv.
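A minimal sketch of that approach (assuming a .env file sitting next to app.js that contains APP_ENVIRONMENT=LIVE):

// app.js -- load the .env file before anything reads process.env
require('dotenv').config();
console.log(process.env.APP_ENVIRONMENT); // should now print 'LIVE'
const app_environment_config = require('./APP_ENVIRONMENT/' + process.env.APP_ENVIRONMENT);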
I have to execute a node command within another node process as shown below:
require('child_process').exec(`${<path of current node executable>} someModule`);
How can I retrieve the node executable path at runtime to execute this command?
process.execPath should be what you require.
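In your example that would look roughly like this (someModule stands in for whatever you are actually running):

// quote the path in case the node binary lives in a directory with spaces
require('child_process').exec(`"${process.execPath}" someModule`, (err, stdout, stderr) => {
  if (err) throw err;
  console.log(stdout);
});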
Note that process.argv0 does not always point to the node binary,
as stated in the official Node documentation:
The process.argv0 property stores a read-only copy of the original value of argv[0] passed when Node.js starts.
The example from the official documentation demonstrates a case where process.argv0 is not the node binary; customArgv0 is supplied via exec's -a flag.
$ bash -c 'exec -a customArgv0 ./node'
> process.argv[0]
'/Volumes/code/external/node/out/Release/node'
> process.argv0
'customArgv0'
If you are trying to execute another node app, how about taking a look at child_process.fork?
Your code then would be as follows:
// fork()'s first argument should be the same as require();
// except that fork() executes the module in a child process
require('child_process').fork(`someModule`);
As stated in the documentation, fork() uses the same node binary as process.execPath, or you can specify another node binary to execute the module.
By default, child_process.fork() will spawn new Node.js instances
using the process.execPath of the parent process. The execPath
property in the options object allows for an alternative execution
path to be used.
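So if you really do need a different binary, a sketch would be (the path is a made-up example):

// run someModule with an explicitly chosen node binary instead of the parent's
require('child_process').fork('someModule', [], {
  execPath: '/usr/local/bin/node' // defaults to process.execPath when omitted
});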
IntelliJ IDEA 13 has really excellent support for Mocha tests through the Node.js plugin: https://www.jetbrains.com/idea/webhelp/running-mocha-unit-tests.html
The problem is, while I edit code on my local machine, I have a VM (vagrant) in which I run and test the code, so it's as production-like as possible.
I wrote a small bash script to run my tests remotely on this VM whenever I invoke "Run" from within IntelliJ, and the results pop up in the console well enough, however I'd love to use the excellent interface that appears whenever the Mocha test runner is invoked.
Any ideas?
Update: There's a much better way to do this now. See https://github.com/TechnologyAdvice/fake-mocha
Success!!
Here's how I did it. This is specific to connecting back to vagrant, but can be tweaked for any remote server to which you have key-based SSH privileges.
Somewhere on your remote machine, or even within your codebase, store the NodeJS plugin's mocha reporter (6 .js files at the time of this writing). These are found in NodeJS/js/mocha under your main IntelliJ config folder, which on OSX is ~/Library/Application Support/IntelliJIdea13. Know the absolute path to where you put them.
Edit your 'Run Configurations'
Add a new one using 'Mocha'
Set 'Node interpreter' to the full path to your ssh executable. On my machine, it's /usr/bin/ssh.
Set the 'Node options' to this behemoth, tweaking as necessary for your own configuration:
-i /Users/USERNAME/.vagrant.d/insecure_private_key vagrant@MACHINE_IP "cd /vagrant; node_modules/mocha/bin/_mocha --recursive --timeout 2000 --ui bdd --reporter /vagrant/tools/mocha_intellij/mochaIntellijReporter.js test" #
REMEMBER! The # at the end is IMPORTANT, as it will cancel out everything else the Mocha run config adds to this command. Also, remember to use an absolute path everywhere that I have one.
Set 'Working directory', 'Mocha package', and 'Test directory' to exactly what they should be if you were running mocha tests locally. These will not impact the test execution, but this interface WILL check to make sure these are valid paths.
Name it, save, and run!
Fully integrated, remote testing bliss.
1) In Webstorm, create a "Remote Debug" configuration, using port 5858.
2) Make sure that port is open on your server or VM.
3) On the remote server, execute Mocha with the --debug-brk option: mocha test --debug-brk
4) Back in Webstorm, start the remote-debug configuration you created in Step 1, and execution should pause on set breakpoints.
I'm setting up my devel environment for an Ember.js app using rake-pipeline as described here.
During development, my html and javascript are served by webrick (rake-filter magic that I don't quite understand) on http://0.0.0.0:9292 and I have a REST service developed in php served by Apache on http://somename.local
My ajax calls from the client app are getting lost because of the browser's anti-cross-domain-ajax thing. How do I work around this issue?
You can't configure the proxy directly in your Assetfile. You'll have to create a config.ru file and use the rackup command to launch the server.
Here's an example Assetfile:
input "app"
output "public"
And config.ru:
require 'rake-pipeline'
require 'rake-pipeline/middleware'
require "rack/streaming_proxy" # Don't forget to install the rack-streaming-proxy gem.
use Rack::StreamingProxy do |request|
  # Insert your own logic here
  if request.path.start_with?("/api")
    "http://localhost#{request.path.sub("/api", "")}"
  end
end
use Rake::Pipeline::Middleware, 'Assetfile' # This is the path to your Assetfile
run Rack::Directory.new('public') # This should match whatever your Assetfile's output directory is
You'll have to install the rack and rack-streaming-proxy gems.
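Installing the gems and launching the server would then look something like this (rackup's default port is 9292, which matches the URL above):

gem install rack rack-streaming-proxy
rackup config.ru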
You can also use Rack::Proxy and then just send the needed requests through to the proxy, e.g.:
if request.path.start_with?("/api")
  URI.parse("http://localhost:80#{request.path}")
end