I often write black box tests against my node applications using supertest. The app loads up database fixtures and the black box tests exercise the database strenuously. I'd like to reset the app state between certain tests (so I can run different combinations of tests without having to worry about a particular database state).
The ideal thing would be to be able to reload the app with another:
var app = require('../app.js').app;
But this only happens once when I run mocha (as it should be with require calls). I think I can do it by wrapping my tests in multiple mocha calls from a batch file, but my developers are used to running npm test, and I would like them to keep doing that.
How could I do this?
The require function will basically cache the result and it won't re-run the module. But you can delete the module from the cache:
delete require.cache[require.resolve('../app')];
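For example, a mocha hook could clear the cached module before every test so each one gets a freshly loaded app (a rough sketch; the file paths and route are placeholders):
// test/app.test.js (hypothetical) - reload the app for every test
const request = require('supertest');
describe('my app', function () {
  let app;
  beforeEach(function () {
    // drop the cached module so the next require() re-executes app.js
    delete require.cache[require.resolve('../app.js')];
    app = require('../app.js').app;
  });
  it('responds', function () {
    return request(app).get('/').expect(200);
  });
});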
If that doesn't work, you can try resetting the whole cache: require.cache = {}
But that might introduce bugs, because modules are usually written with the assumption that they will only be executed once during the whole process runtime.
The best fix is to write modules with minimal global state: instead of storing the app as a module-level value and then requiring it everywhere, make a function that builds the app, call it once, and pass the instance wherever it is needed. Then you avoid this problem entirely, because you can simply call that function once per test (originally pointed out by loganfsmyth). Node's http server module is a good example: you can create several servers without them conflicting with each other, and you can close any of them at any time to shut it down.
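A minimal sketch of that factory pattern using Node's http module (createApp and the response body are just illustrative; supertest accepts the returned server directly):
// app.js - export a factory instead of a shared, module-level instance
const http = require('http');

function createApp() {
  return http.createServer((req, res) => {
    res.end('ok');
  });
}

module.exports = { createApp };

// in a test: every call gives an independent server, so no cache tricks are needed
// const request = require('supertest');
// const { createApp } = require('./app');
// await request(createApp()).get('/').expect(200);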
As for repeating mocha calls, you can chain them in your npm test script:
"test": "mocha file1 && mocha file2 && mocha file3"
The correct answer is in the answer above: the best thing to do is to build the app in a function. This question is also answered here:
grunt testing api with supertest, express and mocha
One can also break the mocha command line up, as it says towards the end, but that isn't as desirable since it messes up the reporting.
I took over a project where the developers were not fully aware of how Node.js works, so they created code accessing MongoDB with Mongoose which would leave inconsistent data in the database whenever you had any concurrent request reaching the same endpoint / modifying the same data. The project uses the Express web framework.
I already instructed them to implement a fix for this (basically, to use Mongoose transaction support with automatically managed retriable transactions), but due to the size of the project they will take a lot of time to fix it.
I need to put this in production ASAP, so I thought I could try to do it if I'm able to guarantee sequential processing of the incoming requests. I'm completely aware that this is a bad thing to do, but it would be just a temporary solution (with a low count of concurrent users) until a proper fix is in place.
So is there any way to make Node.js process incoming requests sequentially? I basically just don't want code from different requests to run interleaved; or, putting it another way, I don't want non-blocking operations (.then()/await) to yield to another task, but instead to block until the asynchronous operation ends, so every request is processed entirely before another request is handled.
I have an NPM package that can do this: https://www.npmjs.com/package/async-await-queue
Create a queue limited to 1 concurrent user and enclose the code that calls Mongo in wait()/end().
Or you can use an async mutex; there are a few NPM packages for that as well.
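A rough sketch of the queue approach in an Express handler (the route and constructor arguments are illustrative; check the package's README for the exact API):
const express = require('express');
const { Queue } = require('async-await-queue');

const app = express();
// allow only one task into the critical section at a time
const dbQueue = new Queue(1, 0);

app.post('/orders', async (req, res, next) => {
  const me = Symbol();        // unique ticket for this request
  await dbQueue.wait(me, 0);  // wait for our turn
  try {
    // ... the Mongoose reads/writes that must not interleave ...
    res.sendStatus(200);
  } catch (err) {
    next(err);
  } finally {
    dbQueue.end(me);          // always release the slot, even on error
  }
});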
When deploying a new release, I would like my server to do some tasks before actually being released and listen to http requests.
Let's say those tasks take around a minute and set some variables: until the tasks are done, I would like users to be redirected to the old release.
Basically do some nodejs work before the server is ready.
I tried a naive approach:
doSomeTasks().then(() => {
  app.listen(PORT);
});
But as soon as the new version is released, all HTTP requests made while the tasks are running fail instead of being redirected to the old release.
I have read https://devcenter.heroku.com/articles/release-phase, but it looks like I can only run an external script, which is not good for me since my tasks set cache variables.
I know this is possible with /check_readiness on App Engine, but I was wondering for Heroku.
You have a couple options.
If the work you're doing only changes on release, you can add a task to your dyno build stage that fetches and stores data inside the compiled slug, which is then deployed to virtual containers on Heroku and booted as your dyno. For example, you can run a task in your build cycle that fetches data and caches it as a file in your app, which you then read on boot.
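For instance (a hypothetical sketch with made-up file names and URL), a small script run from the Node.js buildpack's heroku-postbuild hook could fetch the data and write it into the slug:
// scripts/prefetch.js (hypothetical) - run at build time, wired up in package.json as:
//   "heroku-postbuild": "node scripts/prefetch.js"
const fs = require('fs');

async function main() {
  // fetch whatever your app needs at boot (Node 18+ global fetch assumed)
  const res = await fetch('https://example.com/data.json');
  const data = await res.json();
  // write it into the slug; the app reads this file when it starts
  fs.writeFileSync('./cached-data.json', JSON.stringify(data));
}

main().catch((err) => {
  console.error(err);
  process.exit(1); // fail the build rather than ship a dyno without the cache
});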
If this data changes more frequently (e.g. daily), you can utilize “preboot” to capture and cache this data on a per-dyno basis. Depending on the data and architecture of your app you may want to be cautious with this approach when running multiple dynos as each dyno will have data that was fetched independently, thus this data may not match across instances of your application. This can lead to subtle, hard to diagnose bugs.
This is a great option if you need to, for example, pre-cache a larger chunk of data and then fetch only new data on a per-request basis (e.g. fetch the last 1,000 posts in an RSS feed on-boot, then per request fetch anything newer—which is likely to be fewer than a few new entries—and coalesce the data to return to the client).
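A rough per-dyno sketch of that RSS-style pattern (the feed URL, ids, and helper are made up):
// cache the bulk of the data on boot, fetch only newer items per request
const express = require('express');
const app = express();

// stands in for your real upstream call (Node 18+ global fetch assumed)
async function fetchPosts(sinceId) {
  const url = sinceId
    ? `https://example.com/feed?since=${sinceId}`
    : 'https://example.com/feed?limit=1000';
  const res = await fetch(url);
  return res.json();
}

let cachedPosts = [];

app.get('/feed', async (req, res) => {
  // only fetch entries newer than what we already have, then coalesce
  const newestId = cachedPosts.length ? cachedPosts[0].id : null;
  const fresh = await fetchPosts(newestId);
  cachedPosts = fresh.concat(cachedPosts);
  res.json(cachedPosts);
});

// warm the per-dyno cache before accepting traffic (pairs well with preboot)
fetchPosts(null).then((posts) => {
  cachedPosts = posts;
  app.listen(process.env.PORT || 3000);
});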
Here's the documentation on customizing a build process for Node.js on Heroku.
Here's the documentation for enabling and working with Preboot on Heroku.
I don't think this is a good approach. You can use an external script (an npm script) to do this task and then use the release phase. The situation here is very similar to running migrations: you can require the needed libraries in the script, and you can even load the whole application in the script without listening to a port. Let's make it clearer with an example:
//script file
var client = require('cache_client');
// and here you can require all the needed libraries for the script
// then execute your logic using sync APIs
client.setCacheVar('xyz','xyz');
Then in package.json, under "scripts", add an entry for this script; let's assume you saved the script file as set_cache.js and name the npm script set_cache:
"scripts": {
"set_cache": "set_cache",
},
Now you can use npm to run this script as npm run set_cache, and use this command in your Procfile:
web: npm start
release: npm run set_cache
I'm writing my own custom node.js server. It now handles static pages, AJAX GET, POST and OPTIONS requests (the latter for CORS), but I'm aware that the method I've chosen for running the server side GET and POST scripts is not optimal - the official node.js documentation states that launching numerous child node.js processes is a bad idea, as it's a resource hungry approach. It works, but I'm aware that there's probably a better method of achieving the same result.
So, I alighted upon the VM module. My first thought was that this would solve the problem of cluttering the machine with child processes, and make my server much more scalable.
There's one slight problem. My server side scripts, for tasks such as directory listing & sending the results back to the browser, begin with several require statements to load required modules.
Having finally written the code to read the script file, and pass it to vm.Script(), I now encounter an error:
"ReferenceError: require is not a function"
I've since learned that the reason for this, is that VM launches a bare V8 execution environment for the script, instead of an independent node.js execution environment. To make my idea work, I need VM to provide me with a separate, sandboxed node.js execution environment. How do I achieve this?
My preliminary researches tell me that I need to provide the VM execution environment with its own separate copy of the node.js globals, so that require functions as intended. Is my understanding as just provided correct? And if so, what steps do I need to take to perform this task?
My preliminary researches tell me that I need to provide the VM execution environment with its own separate copy of the node.js globals, so that require functions as intended
That's correct for runInNewContext, which doesn't share the globals with the "parent" context (as opposed to runInThisContext).
To provide the ability to require in your script, you can pass it as a function. The same goes for other locals, like console:
const vm = require('vm');

let sandbox = {
  require,   // hand the host's require to the sandboxed code
  console    // same for console, so the script can log
};

vm.runInNewContext(`
  let util = require('util');
  console.log(util.inspect(util));
`, sandbox);
Instead of passing require directly, you can also pass a function that—say—implements module whitelisting (so you can control which modules the scripts are allowed to load).
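A minimal sketch of such a whitelisting wrapper (the allowed-module list is just an example):
const vm = require('vm');

// only these modules may be loaded from inside the sandbox
const allowed = ['util', 'path'];

function safeRequire(name) {
  if (!allowed.includes(name)) {
    throw new Error(`module "${name}" is not whitelisted`);
  }
  return require(name);
}

vm.runInNewContext(`
  const path = require('path');    // allowed
  console.log(path.join('a', 'b'));
  try {
    require('fs');                 // not whitelisted -> throws
  } catch (e) {
    console.log(e.message);
  }
`, { require: safeRequire, console });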
I have a serious issue with a custom Foxx application.
About the app
The application is a customized algorithm for finding paths in a graph, optimized for public transport. On init it loads all necessary data into a JavaScript variable and then traverses it. That's faster than accessing the db each time.
The issue
When I access the application through the API for the first time, it is fast, e.g. 300 ms. But when I make exactly the same request a second time, it is very slow, e.g. 7000 ms.
Can you please help me with this? I have no idea where to look for bugs.
Without knowing more about the app & the code, I can only speculate about reasons.
Potential reason #1: development mode.
If you are running ArangoDB in development mode, then the init procedure is run for each Foxx route request, making precalculation of values useless.
You can spot whether or not you're running in development mode by inspecting the arangod logs. If you are in development mode, there will be a log message about that.
Potential reason #2: JavaScript variables are per thread
You can run ArangoDB and thus Foxx with multiple threads, each having thread-local JavaScript variables. If you issue a request to a Foxx route, then the server will pick a random thread to answer the request.
If the JavaScript variable is still empty in this thread, it may need to be populated first (this will be your init call).
For the next request, again a random thread will be picked for execution. If the JavaScript variable is already populated in this thread, then the response will be fast. If the variable needs to be populated, then response will be slow.
After a few requests (at least as many as configured in the --server.threads startup option), the JavaScript variables in each thread should have been initialized and the response times should be the same.
I am having a challenge running the self tests for the Intern.
I have modified the configuration of intern/tests/selftest.intern to point at my local host and I am running the following command line:
node runner config=intern/tests/selftest.intern
I connect to SauceLabs and the tests start, but all of them fail after about 120 seconds. Looking at the output, once the tests are bootstrapped, I see that the initial pages load, but it attempts to fetch the following URL:
http://[myhost]:9000/intern-selftest/tests/all.js
To which a 404 is returned.
When running the self tests, there are two points to keep in mind:
There should theoretically be two copies of Intern when self testing: one that is being tested, and one that is "known" to be good, used to actually do the testing. The idea is that we are testing a new version of Intern with a known good version of itself.
The copy of Intern that is being tested should be named intern-selftest. Check out what happens on TravisCI when the self tests run, specifically noting two separate clones of Intern and the mv intern intern-selftest on line 40.