How do you run the Intern's self tests with SauceConnect?

I am having trouble running the self tests for Intern.
I have modified the configuration of intern/tests/selftest.intern to point at my local host and I am running the following command line:
node runner config=intern/tests/selftest.intern
I connect to Sauce Labs and the tests start, but all of them fail after about 120 seconds. Looking at the output, once the tests are bootstrapped I can see that the initial pages load, but the runner then attempts to fetch the following URL:
http://[myhost]:9000/intern-selftest/tests/all.js
To which a 404 is returned.

When running the self tests, there are two points to keep in mind:
1. There should theoretically be two copies of Intern when self-testing: one that is being tested, and one that is "known" to be good and is used to actually do the testing. The idea is that we are testing a new version of Intern with a known-good version of itself.
2. The copy of Intern that is being tested should be named intern-selftest. Check out what happens on Travis CI when the self tests run, specifically noting the two separate clones of Intern and the mv intern intern-selftest on line 40.

Doing tasks before heroku nodejs server is ready

When deploying a new release, I would like my server to perform some tasks before the new release actually goes live and starts listening for HTTP requests.
Let's say those tasks take around a minute and set some variables; until they are done, I would like users to be routed to the old release.
Basically, I want to do some Node.js work before the server is ready.
I tried a naive approach:
doSomeTasks().then(() => {
  app.listen(PORT);
});
But as soon as the new version is released, all HTTP requests made while the tasks are running fail instead of being redirected to the old release.
I have read https://devcenter.heroku.com/articles/release-phase, but it looks like that only lets me run an external script, which does not work for me since my tasks are setting cache variables.
I know this is possible with /check_readiness on App Engine, but I was wondering whether Heroku has an equivalent.
You have a couple of options.
If the work you're doing only changes on release, you can add a task to your build stage that fetches the data and stores it inside the compiled slug, which is then deployed to Heroku's containers and booted as your dyno. For example, you can run a task in your build cycle that fetches data and caches it as a file in your app, which you read on boot.
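A minimal sketch of that build-step approach (the endpoint URL, file name, and "heroku-postbuild" wiring are illustrative assumptions, not code from the question):
// build-cache.js — run at build time, e.g. from a "heroku-postbuild" npm script,
// so the fetched data is baked into the slug
const fs = require('fs');
const https = require('https');

https.get('https://example.com/data.json', (res) => { // placeholder URL
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    fs.writeFileSync('./data-cache.json', body); // shipped inside the slug
  });
}).on('error', (err) => {
  console.error('Build-time fetch failed:', err);
  process.exit(1); // fail the build rather than deploy without the data
});
On boot, the app can then read the file synchronously before calling app.listen, for example: const cached = JSON.parse(fs.readFileSync('./data-cache.json'));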
If this data changes more frequently (e.g. daily), you can use Preboot to capture and cache this data on a per-dyno basis. Depending on the data and the architecture of your app, be cautious with this approach when running multiple dynos: each dyno fetches its data independently, so the data may not match across instances of your application. This can lead to subtle, hard-to-diagnose bugs.
This is a great option if you need to, for example, pre-cache a larger chunk of data and then fetch only new data on a per-request basis (e.g. fetch the last 1,000 posts in an RSS feed on-boot, then per request fetch anything newer—which is likely to be fewer than a few new entries—and coalesce the data to return to the client).
Here's the documentation on customizing a build process for Node.js on Heroku.
Here's the documentation for enabling and working with Preboot on Heroku.
I don't think this is a good approach. You can use an external script (an npm script) to do this task and then use the release phase. The situation here is very similar to running migrations: you can require the needed libraries in the script, and you can even load the whole application in the script without listening on a port. Let's make it clearer with an example:
// script file, e.g. set_cache.js
var client = require('cache_client');
// here you can require any other libraries the script needs,
// then execute your logic using sync APIs
client.setCacheVar('xyz', 'xyz');
Then, in package.json under "scripts", add an entry for this script; let's assume you named the file set_cache.js:
"scripts": {
"set_cache": "set_cache",
},
Now you can run this script with npm run set_cache, and use that command in your Procfile:
web: npm start
release: npm run set_cache
Heroku runs the release command after the build finishes and before the new dynos replace the old ones; if it exits non-zero, the release is aborted and the old version keeps serving traffic.

Using The VM Module To Run Node.js Scripts: "ReferenceError: require is not a function"

I'm writing my own custom Node.js server. It now handles static pages and AJAX GET, POST, and OPTIONS requests (the latter for CORS), but I'm aware that the method I've chosen for running the server-side GET and POST scripts is not optimal: the official Node.js documentation states that launching numerous child Node.js processes is a bad idea, as it's a resource-hungry approach. It works, but there's probably a better way to achieve the same result.
So, I alighted upon the VM module. My first thought was that this would solve the problem of cluttering the machine with child processes, and make my server much more scalable.
There's one slight problem. My server side scripts, for tasks such as directory listing & sending the results back to the browser, begin with several require statements to load required modules.
Having finally written the code to read the script file, and pass it to vm.Script(), I now encounter an error:
"ReferenceError: require is not a function"
I've since learned that the reason for this is that VM launches a bare V8 execution environment for the script, instead of an independent Node.js execution environment. To make my idea work, I need VM to provide me with a separate, sandboxed Node.js execution environment. How do I achieve this?
My preliminary research tells me that I need to provide the VM execution environment with its own copy of the Node.js globals so that require functions as intended. Is that understanding correct? And if so, what steps do I need to take?
My preliminary research tells me that I need to provide the VM execution environment with its own copy of the Node.js globals so that require functions as intended
That's correct for runInNewContext, which doesn't share the globals with the "parent" context (as opposed to runInThisContext).
To provide the ability to require in your script, you can pass it as a function. The same goes for other locals, like console:
const vm = require('vm');

let sandbox = {
  require,
  console
};

vm.runInNewContext(`
  let util = require('util');
  console.log(util.inspect(util));
`, sandbox);
Instead of passing require directly, you can also pass a function that—say—implements module whitelisting (so you can control which modules the scripts are allowed to load).
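For example, a rough sketch of such a whitelisting require (the allowed-module list here is invented for illustration):
const vm = require('vm');

// hypothetical whitelist: sandboxed scripts may only load these modules
const allowed = new Set(['util', 'path']);

function safeRequire(name) {
  if (!allowed.has(name)) {
    throw new Error('Module "' + name + '" is not whitelisted');
  }
  return require(name);
}

vm.runInNewContext(`
  const path = require('path');
  console.log(path.join('a', 'b'));
`, { require: safeRequire, console });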

Github sharing process failed

Before answering my question, please take into consideration that I'm new to GitHub and how it works. I'm having trouble sharing my project to GitHub. I'm receiving the errors:
The remote end hung up unexpectedly RPC failed; result=18, HTTP code = 200
The sharing process takes very long and it stops with this error. The project gets created on GitHub with no code. Other forums relating to this talk about Git Bash; note that I am doing this through Android Studio.
As mentioned here
Specifically, the 'result=18' portion. This is the error code coming from libcurl, the underlying library Git uses for HTTP communication. From the libcurl documentation, a result code of 18 means:
CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.
You could try increasing the HTTP post buffer to 500 MB:
git config --global http.postBuffer 524288000
But first, make sure your local repo doesn't include large binaries. To be sure, check your .gitignore.

Uninitialized Constant on custom Puppet Function

I've got a function that I'm trying to run on my puppetmaster on each client run. It runs just fine on the puppetmaster itself, but it causes the agent runs to fail on the nodes because of the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: uninitialized constant Puppet::Parser::Functions::<my_module>
I'm not really sure why. I enabled debug logging on the master via config.ru, but I see the same error in the logs with no more useful messages.
What other steps can I take to debug this?
Update:
Adding some extra details.
Puppet Community edition, with a Foreman-connected puppetmaster running on Apache2 with Passenger/Rack
Both client and master are running Puppet 3.7.5
Both client and master are Ubuntu 14.04
Both client and master are using Ruby 1.9.3p484 (2013-11-22 revision 43786) [x86_64-linux]
Pluginsync is enabled
The custom function works fine on the puppetmaster when run as part of the puppetmaster's manifest (it's a client of itself) or when using puppet apply directly on the server.
The function is present on the clients, and when I update it for debugging purposes I do see the updated file appear on the client side.
I cannot paste the function here, unfortunately, because it is proprietary code. It does rely on the aws-sdk, but I have verified that the Ruby gem is present on both the client and the master, with the same version in both places. I've even tried surrounding the entire function with:
begin
rescue LoadError
end
and have the same result.
This is embarrassingly stupid, but it turns out I had somehow never noticed that I hadn't actually included this line in my function:
require 'aws-sdk'
And so the error I was receiving:
uninitialized constant Puppet::Parser::Functions::Aws
Was actually referring to the AWS SDK being missing, and not a problem with the puppet module itself (which was also confusingly named aws), which is how I was interpreting it. Basically, I banged my head against the wall for several days over a painfully silly mistake. Apologies to all who tried to help me :)

Resetting a Node application to a known state when testing with supertest

I often write black box tests against my node applications using supertest. The app loads up database fixtures and the black box tests exercise the database strenuously. I'd like to reset the app state between certain tests (so I can run different combinations of tests without having to worry about a particular database state).
The ideal thing would be to be able to reload the app with another:
var app = require('../app.js').app;
But this only happens once when I run mocha (as it should be with require calls). I think I can do it by wrapping my tests in multiple mocha calls from a batch file, but my developers are used to running npm test, and I would like them to keep doing that.
How could I do this?
The require function will basically cache the result and it won't re-run the module. But you can delete the module from the cache:
delete require.cache[require.resolve('../app')];
If that doesn't work, you can try resetting the whole cache: require.cache = {}
But that might introduce bugs, because modules are usually written with the assumption that they will only be executed once in the whole process runtime.
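For instance, a minimal mocha sketch using the targeted delete from above (the '../app' path comes from the question):
let app;

beforeEach(() => {
  // drop the cached entry so the next require re-executes app.js
  delete require.cache[require.resolve('../app')];
  app = require('../app').app; // fresh app state for each test
});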
The best way to fix this is to write modules with minimal global state: instead of storing the app as a module-level value and requiring it everywhere, make a function that builds the app and pass the result to wherever it is needed. Then you avoid this problem, because you just call that function once per test (originally written by loganfsmyth). Node's http server module is a good example: you can create several independent server instances without them conflicting with each other, and you can close a server at any time to shut it down.
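A minimal sketch of that factory pattern with supertest (createApp and the single route are invented for illustration):
// app.js
const express = require('express');

function createApp() {
  const app = express(); // each call returns a fresh app with fresh state
  app.get('/', (req, res) => res.send('ok'));
  return app;
}

module.exports = { createApp };

// in a test file: each test builds its own app instance
const request = require('supertest');
const { createApp } = require('../app');

it('responds on /', () => {
  // supertest binds the app to an ephemeral port for the request
  return request(createApp()).get('/').expect(200);
});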
As for repeating mocha calls, you can chain them in your npm test script:
"test": "mocha file1 && mocha file2 && mocha file3"
The correct answer is the one above: the best thing to do is to build the app in a function. This question is also answered here:
grunt testing api with supertest, express and mocha
One can also break the mocha command line up, as described towards the end, but that isn't as desirable since it messes up the reporting.
