Is it possible to configure detox to only run a subset of the matched *.spec.js files? - jestjs

As far as I can tell from the Detox docs, issues, and StackOverflow questions, there is no way to configure Detox so it only runs a subset of the matched tests (*.spec.js).
Does anyone know how to do this? I want to ask on here before I file an issue on the repo.
Most of the time it's desirable to simply run all matched tests, but in certain scenarios it would be nice to only run a subset.
For example: I want to use Jest for 1) Acceptance tests + PR gating and 2) Traversing the app and generating screenshots of the various screens. Use case 1 is fast and lightweight. Use case 2 is expensive and will take a long time.
For each use case, I only want to run the tests for that use case. Does anyone know how to do this? I can think of several hacky approaches (file renaming, conditional logic in tests keyed on env variables, etc...), but I think this should be a supported thing.
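One non-hacky option, assuming Detox delegates to Jest as the test runner, is to split the specs by naming convention and point each use case at its own Jest config via `testMatch`. A minimal sketch — the file names, suffixes, and config file names below are all made up for illustration:

```javascript
// Hypothetical convention: acceptance specs end in .accept.spec.js,
// screenshot-traversal specs end in .shots.spec.js.

// jest.accept.config.js (sketch) — fast suite for PR gating
const acceptConfig = {
  testMatch: ['**/*.accept.spec.js'],
};

// jest.shots.config.js (sketch) — slow screenshot-traversal suite
const shotsConfig = {
  testMatch: ['**/*.shots.spec.js'],
};

// Plain-regex equivalent of the globs above, to show what each config selects.
function matchesSuite(file, suffix) {
  return new RegExp(`\\.${suffix}\\.spec\\.js$`).test(file);
}

console.log(matchesSuite('login.accept.spec.js', 'accept')); // true
console.log(matchesSuite('login.shots.spec.js', 'accept')); // false
```

You would then tell Detox which Jest config to use per run (check the Detox docs for the exact flag your version supports); Jest itself also accepts a path pattern on the command line to narrow a run without separate configs.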

Related

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before the whole suite.
When I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But a Gherkin scenario almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps may be something to look into. Sadly, that change was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So for now that doesn't work.
Haven't tested how the library is behaving with setup hooks, but setup_all should work fine.
There is such a thing as tags, which I think haven't yet been published in a release but are in master. They work with callback tags; you can take a closer look at the example in the tests.
Things are currently a bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

Is there a generic way to consume my dependency's grunt build process?

Let's say I have a project where I want to use Lo-Dash and jQuery, but I don't need all of the features.
Sure, both these projects have build tools so I can compile exactly the versions I need to save valuable bandwidth and parsing time, but I think it's quite uncomfortable and ugly to install both of them locally, generate my versions and then check them into my repository.
Much rather I'd like to integrate their grunt process into my own and create custom builds on the go, which would be much more maintainable.
The Lo-Dash team offers this functionality with a dedicated cli and even wraps it with a grunt task. That's very nice indeed, but I want a generic solution for this problem, as it shouldn't be necessary to have every package author replicate this.
I tried to achieve this somehow with grunt-shell hackery, but as far as I know it's not possible to install devDependencies more than one level deep, which makes it even uglier to execute the required grunt tasks.
So what's your take on this, or should I just move this over to the 0.5.0 discussion of grunt?
What you ask assumes that the package has:
A dependency on Grunt to build a distribution; most popular libraries have this, but some of the less common ones may still use shell scripts or the npm run command for general minification/compression.
Some way of generating a custom build in the first place with a dedicated tool, as Modernizr or Lo-Dash have.
You could perhaps substitute number 2 with a generic tool that parses both your source code and the library code and uses code coverage to eliminate unnecessary functions from the library. This is already being developed (see goldmine); however, I can't make any claims about how good it is because I haven't used it.
Also, I'm not sure how that would work in an AMD context where there are a lot of interconnected dependencies; ideally you'd be able to run the r.js optimiser and get an almond build for production, and then filter that for unnecessary functions (most likely with Istanbul; you would then have to make sure the filtered script passed all your unit/integration tests). Not sure how that would end up looking, but it'd be pretty cool if that could happen. :-)
However, there is a task especially for running Grunt tasks from 'sub-gruntfiles' that you might like to have a look at: grunt-subgrunt.
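To make that concrete, a minimal Gruntfile sketch wiring in grunt-subgrunt might look like the following. The dependency path and the task name to run are assumptions — check each package's own Gruntfile for the actual build task it exposes:

```javascript
// Gruntfile.js (sketch) — runs a dependency's own Grunt build as part of ours.
module.exports = function (grunt) {
  grunt.initConfig({
    subgrunt: {
      lodash: {
        // Maps a sub-project directory (containing its own Gruntfile)
        // to the task(s) to run there. Both values are hypothetical.
        projects: {
          'node_modules/lodash': 'build'
        }
      }
    }
  });

  grunt.loadNpmTasks('grunt-subgrunt');
  grunt.registerTask('default', ['subgrunt']);
};
```

The sub-project still needs its devDependencies installed for its build to run, which is the one-level-deep limitation mentioned above; grunt-subgrunt can install them for you, but it doesn't remove the underlying cost.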

How to manage multiple tfs gated checkin build definitions

We currently have 2 solutions that share several projects between them, as well as have some projects that are unique to each of them. We currently have a build definition for each of these solutions set to Gated Checkin.
Unfortunately, it seems that having multiple definitions with gated checkins set means that if I make a change to one of the shared projects, it only runs one definition. In a perfect world, I want it to build both solutions in this circumstance.
I know that I could just create a single build definition that builds both solutions, and this will work great in the scenario in question, but then if I am modifying a project that is unique to a solution, it will still build both solutions, ugh.
Is there a way to configure our builds such that we get the best of both worlds? I would like the consistency of ensuring that shared code works correctly in both solutions, but I also would like builds not to take double the time for changes that affect only one solution or the other (by far our most common use case).
Or am I just stuck with the tradeoff of one or the other?
The basic problem with your current situation is how to identify the change: whether it's the common project or a unique project that was modified. I don't think there is any EASY means of identifying this at the time of building the code.
One option, which is NOT THE BEST solution, would be to separate the common projects into another solution which compiles and puts the DLLs in a common location that the unique solutions use. This way you can have 3 independent gated check-ins: if there is a change to the common solution, you compile both unique solutions within the same build definition; if not, you compile the common solution or the one unique solution in its own build definition.

How to iterate over a cucumber feature

I'm writing a feature in cucumber that could be applied to a number of objects that can be programmaticaly determined. Specifically, I'm writing a smoke test for a cloud deployment (though the problem is with cucumber, not the cloud tools, thus stack overflow).
Given a node matching "role:foo"
When I connect to "automatic.eucalyptus.public_ipv4" on port "default.foo.port"
Then I should see "Hello"
The Given does a search for nodes with the role foo, and the automatic.eucalyptus... and port values come from the node found. This works just fine... for one node.
The search could return multiple nodes in different environments. Dev will probably return one, test and integration a couple, and prod can vary. The Given already finds all of them.
Looping over the nodes in each step doesn't really work. If any one failed in the When, the whole thing would fail. I've looked at scenarios and cucumber-iterate, but both seem to assume that all scenarios are predefined rather than programmatically looked up.
I'm a cuke noob, so I'm probably missing something. Any thoughts?
Edit
I'm "resolving" the problem by flipping the scenario: I'm integrating this into a larger cluster definition tool that will repeatedly call the feature, passing the node info as an environment variable.
I apologize in advance that I can't tell you exactly "how" to do it, but a friend of mine solved a similar problem using a somewhat unorthodox technique. He runs scenarios that write out scenarios to be run later. The gem he wrote to do this is called cukewriter. He describes how to use it in pretty good detail on the github page for the gem. I hope this will work for you, too.
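If strict per-node isolation isn't required, another workable pattern is to loop inside the step but aggregate failures, so one failing node doesn't mask the rest. A generic sketch, not tied to any Cucumber implementation — `checkNode` is a hypothetical per-node check you'd supply:

```javascript
// Run a check against every node, collect failures, and fail once at the end
// with a combined message, so every node gets exercised before the step fails.
function checkAll(nodes, checkNode) {
  const failures = [];
  for (const node of nodes) {
    try {
      checkNode(node);
    } catch (err) {
      failures.push(`${node}: ${err.message}`);
    }
  }
  if (failures.length > 0) {
    throw new Error(`${failures.length} node(s) failed:\n${failures.join('\n')}`);
  }
}
```

Inside a When/Then step this keeps a single scenario, but the failure report still names every node that didn't respond.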

In vows, is there a `beforeEach` / `setup` feature?

Vows has an undocumented teardown feature, but I cannot see any way to setup stuff before each test (a.k.a. beforeEach).
One would think it would be possible to cheat and use the topic, but a topic is only run once (like teardown), whereas I would like this to be run before each test. Can this not be done in vows?
You can create a topic that does the setup, and the tests come after that. If you want it to run multiple times, create a function and have multiple topics that call that function.
It is a bit convoluted because it is not explicit. You should definitely consider mocha, not only because it is actively maintained, but because it makes tests easier to read than what you end up with when using vows.
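The shared-setup pattern from the answer, sketched without the vows boilerplate: each topic calls a factory function so every batch gets fresh state. The fixture shape here is made up for illustration:

```javascript
// beforeEach substitute for vows: a factory that builds fresh state,
// invoked from each topic instead of relying on a hook.
function freshFixture() {
  return { users: [], loggedIn: false };
}

// In vows, each batch's topic would call freshFixture() and return the result.
// Here we just demonstrate that every call yields independent state.
const a = freshFixture();
const b = freshFixture();
a.users.push('alice');

console.log(a.users.length); // 1
console.log(b.users.length); // 0 — unaffected by the other batch
```

Because the setup runs inside each topic rather than once per suite, mutations made by one batch's tests can't leak into the next.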