Speed up Meteor test refresh - node.js

When testing a Meteor app, I notice that Meteor repeats a lot of work, such as:
Downloading missing packages (several seconds)
It seems that it would be more efficient to check for an updated package.json and .meteor/versions before "downloading missing packages", especially when the only changes are within unit tests or other application-specific code (i.e. no new imports).
Building web.cordova (which I do not use)
I am pretty sure that specifying the target for testing is possible, so the project is only built for web.browser, for example.
If there are errors, everything is built and executed three times (and fails three times).
When a test fails, why does it have to try again with the exact same code? Is there any use case where this makes sense?
Right now, every time a test module changes, it takes several seconds before the tests are run again because of all these tasks. Is there any way to optimize this and make it faster and more efficient?
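For example, if meteor test supports the same --exclude-archs option that meteor run has on recent Meteor releases (I have not verified this on my setup), skipping the Cordova build would look roughly like:

meteor test --driver-package <your-test-driver> --exclude-archs web.cordova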

Related

Tracing Node.js program execution

I'd like to trace the execution path of an arbitrary Node.js program.
Specifically, I'd like to run a program (server or script), and have some sort of block-level (function call, loop, if statement) trace of the execution.
Constraints
Output must contain files / lines / hit count for all lines during execution
No code minification. Istanbul is great but I want to keep the code that is executed in the end as readable as possible.
For long-running processes (servers, for example), I want to be able to see "current" line coverage (or as up-to-date as possible)
I don't want to lose any coverage data; while profiling would give me some hints as to which lines were hit, it's not really code coverage.
Things I don't care about
Exactly how the coverage is read. For example, it could be output to a file, it could be read via the code, etc.
Coverage format
Things I've investigated so far:
Using NODE_V8_COVERAGE:
I found that if I set the NODE_V8_COVERAGE environment variable to a directory, coverage data will be output to that directory when the program exits (here's a blog post on the creation of this feature).
The problem that I'm facing here is that I'm not sure there's a way to trigger the generation of these reports before the program terminates.
Using inspector
I have also been experimenting with Node.js inspector. I found a useful CPU profiler here. This could end up being helpful, but this profiler works by sampling, not as a hook into the language. As a result, I only get line numbers / counts for parts of the code that were slow.
I also tried using Profiler.startPreciseCoverage, thinking that this might give me every line that was executed (I didn't find the documentation clear on what it really does). It didn't seem to be any more useful.
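For reference, a minimal sketch of driving precise coverage through the inspector session (the method and option names come from the V8 inspector protocol; treat this as an illustration rather than exactly what I ran):

const inspector = require('inspector');

const session = new inspector.Session();
session.connect();

// Ask V8 for precise (non-sampled) coverage with per-block call counts.
session.post('Profiler.enable', () => {
  session.post('Profiler.startPreciseCoverage', { callCount: true, detailed: true }, () => {
    // ... let the application run ...
    // Later (e.g. on a timer or an admin endpoint), collect counts without stopping the process:
    session.post('Profiler.takePreciseCoverage', (err, res) => {
      if (err) throw err;
      // res.result lists each script with function/block ranges and hit counts.
      console.log(JSON.stringify(res.result, null, 2));
    });
  });
});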
Using Istanbul
I would like to avoid instrumenting code if possible.
Question
It seems like my options are limited, but at the same time this is only a result of my Googling for an hour or two.
Is there a better way to capture line coverage with the constraints listed above?
There's a pull request pending for Node.js to add functionality to programmatically start/stop/write V8 coverage information. If you are adventurous, you could use git to get the version of Node.js you want to use, apply the commits from the patch, and compile a Node.js binary.
If you clone the Node.js repository, the various versions of Node.js are tagged. So you can get the code for Node.js 12.19.0 by checking out the v12.19.0 tag.
You can cherry-pick the commits from the pull request normally, or you could use curl -L https://github.com/nodejs/node/pull/33807.patch | git am to apply the commits as patches.
Instructions for compiling/building the Node.js binary can be found at https://github.com/nodejs/node/blob/master/BUILDING.md#building-nodejs-on-supported-platforms.
Longer term, you could chime in on the pull request about whether it meets your needs and hopefully get it moving again; it seems to have stalled.
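For what it's worth, a sketch of how on-demand coverage might be triggered once such an API exists (the function name below is an assumption based on the PR discussion, not something present in released Node.js at the time of writing):

// Run the process with NODE_V8_COVERAGE=./coverage so V8 knows where to write reports.
const v8 = require('v8');

setInterval(() => {
  // Hypothetical helper from the patched build: flush current coverage data to disk
  // without waiting for the process to exit.
  if (typeof v8.takeCoverage === 'function') {
    v8.takeCoverage();
  }
}, 60000);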

VSTest: Order the execution of test assemblies

Our codebase has more than 100 projects, each with tests. Some test assemblies take much longer to run than others.
The Azure DevOps Server runs our whole test suite in parallel, which makes it really fast.
The problem is that the long-running tests are started in the middle of the test run, which makes the whole test run longer.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first and the fast ones after that.
Since you are running the tests in parallel, you could try the "Based on past running time of tests" option in the Visual Studio Test task.
According to this doc about Parallel test:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option allows tests to be run in groups based on running time, so that each group completes in a similar amount of time.
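If you define the pipeline in YAML, the setting is an input on the Visual Studio Test task; roughly like this (the input names follow my recollection of the VSTest@2 schema, so double-check against the task reference):

- task: VSTest@2
  inputs:
    testSelector: testAssemblies
    testAssemblyVer2: |
      **\*Tests*.dll
      !**\obj\**
    # Slice assemblies into batches of roughly equal historical duration,
    # so the long-running ones are not all left for the end of the run.
    distributionBatchType: basedOnExecutionTime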
Hope this helps.
We have achieved this by arranging the project folders so that they sort with the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output. From there you can rename folders to affect the order.
It would be nice if there were another way to achieve this.

Converting existing cypress tests to cucumber style bdd using cypress-cucumber-preprocessor. Second scenario is not picked up

We have an existing application whose tests are written in Cypress. We now want to add Cucumber-style features that internally run on Cypress, using cypress-cucumber-preprocessor. I followed the steps given here on the GitHub page. The problem I'm facing now is that while running tests, it shows both scenarios but runs only one: it shows a green tick mark next to the first, never starts the second, and the clock keeps on ticking. On clicking the second scenario in the Cypress launcher it says: no commands were issued in this test.
What I have tried:
I tried to duplicate the same scenario twice in the same feature file. It still runs only the first one and does not move to the next one.
I moved both different scenarios in two different feature files. It runs both of them successfully.
I tried to run the example repo (cypress-cucumber-example) locally with n number of scenarios. That works seamlessly.
Some observations:
While the first test was running I opened the Chrome console and saw some errors from failing network calls. But these calls were made (with the same errors) even when I was using plain Cypress without Cucumber, and all tests were passing. Is it because of some magic Cucumber brings along with it? I read somewhere that Cucumber's default wait for a test is 60 seconds; I waited up to 170 seconds and then stopped the suite. At the end, all I get is one scenario green and the other not even started.
It took me quite a long time, but I finally figured out what the issue was: I had a newline right after Feature: in my feature file. The IDE didn't flag it as a problem, and everything looked fine. While comparing successful runs against this issue I noticed that the feature name was not appearing in the UI, so I removed the \n. It works like a charm now. Remarkable what a single stray newline can do.
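For anyone hitting the same thing, the difference was essentially this (the feature name is made up).

Broken (the feature name ends up empty and the next line becomes the description, which is what tripped up the preprocessor in my case):

Feature:
Search products

  Scenario: First scenario
    ...

Fixed:

Feature: Search products

  Scenario: First scenario
    ...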

RequireJS: To Bundle or Not to Bundle

I'm using RequireJS for my web application. I'm using EmberJS for the application framework. I've come to a point where, I think, I should start bundling my application into a single js file. That is where I get a little confused:
If I finally bundle everything into one file for deployment, then my whole application loads in one shot, instead of on demand. Isn't bundling contradictory to AMD in general and RequireJS in particular?
What further confuses me, is what I found on the RequireJS website:
Once you are finished doing development and want to deploy your code for your end users, you can use the optimizer to combine the JavaScript files together and minify it. In the example above, it can combine main.js and helper/util.js into one file and minify the result.
I found this similar thread but it doesn't answer my question.
If I finally bundle everything into one file for deployment, then my whole application loads in one shot, instead of on demand. Isn't bundling contradictory to AMD in general and RequireJS in particular?
It is not contradictory. Loading modules on demand is only one benefit of RequireJS. A greater benefit in my book is that modularization helps to use a divide-and-conquer approach. We can look at it in this way: even though all the functions and classes we put in a single file do not benefit from loading on demand, we still write multiple functions and multiple classes because it helps break down the problem in a structured way.
However, the multiplicity of modules we create in development does not necessarily make sense when running the application in a browser. The greatest cost of on-demand loading is sending multiple HTTP requests over the wire. Let's say your application has 10 modules and you send 10 requests to load them because you load these modules individually. Your total cost is going to be the cost you have to pay to load the bytes from the 10 files (let's call it Pc, for payload cost), plus an overhead cost for each HTTP request (let's call it Oc, for overhead cost). The overhead has to do with the data and computations needed to initiate and close these requests; it is not insignificant. So you are paying Pc + 10*Oc. If you send everything in one chunk you pay Pc + 1*Oc, saving 9*Oc. In fact the savings are probably greater: since compression is often used at both ends to reduce the size of the data transmitted, compressing the entire payload together provides a greater benefit than compressing it as 10 separate chunks. (Note: the above analysis omits details that are not useful to cover.)
Someone might object: "But you are comparing loading all the modules separately versus loading all the modules in one chunk. If we load on demand then we won't load all the modules." As a matter of fact, most applications have a core of modules that will always be loaded, no matter what. These are the modules without which the application won't work at all. For some small applications this means all modules, so it makes sense to bundle all of them together. For bigger applications, a core set of modules will be used every single time the application runs, but a small set will be used only on occasion. In the latter case, the optimization should create multiple bundles. I have an application like this. It is an editor with modes for various editing needs. A good 90% of the modules belong to the core. They are going to be loaded and used anyway, so it makes sense to bundle them. The code for the modes themselves is not always going to be used, but all the files for a given mode are going to be needed if the mode is loaded at all, so each mode should be its own bundle. So in this case a model with one core bundle and a series of mode bundles makes sense to a) optimize the deployed application but b) keep some of the benefits of loading on demand. That's the beauty of RequireJS: it does not require you to do one or the other exclusively.
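As an illustration of that core-plus-modes layout (the module names here are made up), an r.js build profile can declare one bundle per entry point and exclude the core from each mode bundle so shared modules are not duplicated:

// build.js -- r.js optimizer profile (sketch)
({
  baseUrl: "js",
  dir: "build",
  modules: [
    // The core bundle: loaded on every run of the application.
    { name: "core" },
    // One bundle per mode; excluding "core" keeps shared modules out of each mode bundle,
    // so they are downloaded once with the core and the mode bundles stay small.
    { name: "modes/markdown", exclude: ["core"] },
    { name: "modes/xml", exclude: ["core"] }
  ]
})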
While developing you want to have single-focused, small files. This causes their number to increase. When running in production, many HTTP requests really harm performance. Then again you do not want to load the entire application upfront - this is also not optimal.
To address this, I have created a small project on GitHub, require-lazy; you could call it a plugin to the builder, r.js. It can lazy-load parts of your application with a simple syntax and then create separately downloadable bundles during the build process; so if your application consists of 2 views that need to be independently loaded, require-lazy will (ideally) build 3 js files: (1) the bootstrap code and common libraries, (2) view 1 with all its private scripts and (3) view 2 with all its private scripts.
Lazy loading is simply defined as:
define(["lazy!view1"], function(view1) { .... });
And view1 must be accessed with a promise:
view1.get().done(function(realView1) {
...
});
The project is available through npm, the build process through grunt and there is a bower component.
Comments are more than welcome.

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenario runs individually, the system is in that state, so the scenario passes. But when you run all the scenarios, the scenarios that ran before it change that state, so it fails.
You should solve it by making your scenarios completely independent: no scenario's work should influence the results of any other scenario. This is strongly encouraged in The Cucumber Book and Specification by Example.
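As a sketch (the step wording is made up), each scenario should set up the state it depends on in its own Given steps instead of relying on whatever a previous scenario left behind:

Scenario: Quota is enforced
  Given a user with 1 remaining quota unit
  When that user uploads two files
  Then the second upload is rejected with a quota error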
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an ajax call that was sending a POST request. I had a bug in the javascript and the post request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s) and Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right. I had carry over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else that didn't realise they had carry over from an earlier feature.
