grunt-protractor-runner Option.args mochaOpts

I am having a hard time passing arguments to the grunt-protractor-runner plugin. I am trying to tag some of my tests and then run only that tagged subset. So, my command for running tests is:
grunt e2e:cert:ed:regression_part_3
But what I am trying to accomplish is something like this:
grunt protractor --mochaOpts={tags:"myTag"} e2e:cert:ed:regression_part_3
But that doesn't seem to work. Any ideas?
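One approach (a sketch, not a verified config) is to read the tag from grunt.option and pass it through options.args, since grunt-protractor-runner forwards args to Protractor and Protractor hands mochaOpts through to Mocha. Note that Mocha has no "tags" option; this assumes you tag tests by putting a marker such as "@myTag" in the test titles and filter them with Mocha's grep:

// Gruntfile.js - a minimal sketch; the config file path and target name
// are assumptions, adjust them to match your project
module.exports = function (grunt) {
  grunt.initConfig({
    protractor: {
      options: {
        configFile: 'protractor.conf.js'
      },
      regression_part_3: {
        options: {
          args: {
            mochaOpts: {
              // read --tags=myTag from the grunt command line;
              // an empty grep matches every test
              grep: grunt.option('tags') || ''
            }
          }
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-protractor-runner');
};

which would make the invocation:
grunt protractor:regression_part_3 --tags=myTag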

Related

TEST.AI use in node.js mocha test

I want to add TEST.AI capabilities to my Mocha tests, so I tried test-ai-classifier-client
following https://github.com/testdotai/classifier-client-node. I tried this code in test/rpc-e2e-specs.js,
but I always get:
TypeError: ClassifierClient is not a constructor
I don't understand how it should be used. Maybe someone can help me use TEST.AI this way, or another way.
Thanks,
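That TypeError usually means the class is not exported the way it is being imported, for example a transpiled default export being required as if it were the module itself. A quick way to diagnose it (a sketch; the constructor options are whatever the project README specifies):

// inspect what the package actually exports
const mod = require('test-ai-classifier-client');
console.log(mod);

// depending on how the package was built, the class may be the module
// itself, a named export, or a transpiled ES-module default export
const ClassifierClient = mod.ClassifierClient || mod.default || mod;
const client = new ClassifierClient(/* options per the project README */);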

Using AfterEach with different file module in NodeJS getting different values

I'm quite new to Node.js, and I have been working on a test script that takes a screenshot whenever a test fails. I'm trying to do this without using a Jasmine reporter, so I tried the approach from Check if test failed in 'afterEach' of Jest without jasmine. However, I'm working with multiple files: fail_test.spec.js is my main file, and test_fail1.js is another test script file. Here is what happens: the tests in fail_test.spec.js work fine with afterEach, just like in the link; it gives me a "true" value when a test passes and a "false" value when a test fails, and then it takes a screenshot. The problem is that test_fail1.js is also checked by the afterEach, and it constantly gives a "false" value even when the test passes. I do intend to use afterEach with test_fail1.js and with other tests in the future. So my questions are:
Why does test_fail1.js give only a constant "false" value?
Is there any workaround for this? I just need to know the status of each test in every test script, within one file or across files (e.g. fail_test1.js, fail_test2.js, and so on).
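One likely cause (an assumption, since the failing code isn't shown): the pass/fail flag from the linked approach lives in shared state and is never reset, so the next file's afterEach keeps seeing a stale value. Here is a sketch of a fix, assuming the jasmine2 runner; it does rely on a jasmine reporter, which the question hoped to avoid, but resetting the flag in specStarted keeps the value scoped to the current test:

// specStatus.js - a hypothetical helper module
let currentSpecFailed = false;

function installReporter() {
  // register once for the whole run, e.g. in a global setup file
  jasmine.getEnv().addReporter({
    specStarted: function () { currentSpecFailed = false; }, // reset per test
    specDone: function (result) {
      currentSpecFailed = result.status === 'failed';
    }
  });
}

function specFailed() {
  return currentSpecFailed;
}

module.exports = { installReporter, specFailed };

Then any spec file can do:

const { installReporter, specFailed } = require('./specStatus');
installReporter();

afterEach(async () => {
  if (specFailed()) {
    // "browser" assumes Protractor; swap in whatever screenshot API you use
    await browser.takeScreenshot();
  }
});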

Integrating Protractor with Jenkins

When I want to run a specific test or suite, I run it from the terminal.
I've installed Jenkins and configured my first free-style project.
I added a shell command (e.g. protractor conf.js --suites A --params.user=A).
Everything works fine, but if I want to run multiple suites I must edit my shell command inside Jenkins. Is there any workaround, like checkboxes, so I can check which suites I want to run?
I also want to know about extensible parameters. I want to select which parameters to run with: instead of hard-coding protractor conf.js --params.user=oneuser, I want to be able to choose the user from the GUI.
Look into parameterized builds.
"First, you need to define parameters for your job by selecting "This build is parameterized", then using the drop-down button to add as many parameters as you need."
"String parameters are exposed as environment variables of the same name. Therefore, a builder, like Ant and Shell, [or protractor] can use the parameters."
So if you make "protractorSuites" a string parameter, you can reference it like:
protractor conf.js --suites ${protractorSuites} --params.user=A
Then when you "Build with parameters" you can supply the appropriate suite.
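Since string parameters arrive as environment variables, the shell build step can also guard against an empty value (a sketch; the parameter name is the one defined above):

# shell build step
if [ -n "$protractorSuites" ]; then
  protractor conf.js --suites "$protractorSuites" --params.user=A
else
  protractor conf.js --params.user=A
fi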

GGTS can't run Groovy Shell

I'm using the Groovy Grails Tool Suite to practice Groovy. I want to run a Groovy Shell, but when I create a new shell and try to run it, I get this error:
Could not find $jarName on the class path. Please add it manually
What does this mean, and how do I resolve this?
I believe this is happening because JLine can't be found on your classpath. I submitted a PR to make the error message in this case actually useful.
I had a similar problem with this exact same message, but the reason was that I was attempting to run the script without specifying which script to run. Ensure you have the script open in the editing window and try running it again; that got rid of the message for me.

Is there a way to run a single cucumber feature file on autotest?

I'd like to run just a single cucumber feature file on autotest. I'd like the test to be run, report failures, then run again as soon as I save a change to my code base. Anyone know a way to do this?
--Jack
I found a solution myself:
Watchr - https://github.com/mynyml/watchr
It watches the files you specify and runs the specified tests whenever you save a change; it matches files using patterns.
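A watchr script is plain Ruby, so a minimal sketch (the feature path is an assumption) that reruns one feature file whenever it, or anything under lib/, changes could look like:

# cucumber.watchr
watch('features/my_feature.feature') { system('cucumber features/my_feature.feature') }
watch('lib/.*\.rb')                  { system('cucumber features/my_feature.feature') }

and is started with:
watchr cucumber.watchr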
