I want to test natively implemented ES6 features.
If a browser does not support a given feature, I would like the test not to show up in the Browserscope graph.
Will this "just work" (jsperf cancels the tests and leaves them out of the graph), or is there a special way to cancel jsperf tests?
I'll try this, naturally, but I wonder if there is a "jsperf way" to manage this.
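Concretely, what I plan to try is plain feature detection inside the test body, along these lines (a sketch, not an official jsperf API; it assumes the Benchmark.js-based harness marks a test that throws as errored rather than recording a bogus timing):

// Hypothetical test body -- detect the feature first and throw if it is
// missing, so the harness records an error instead of a measurement.
if (typeof Map === 'undefined') {
  throw new Error('Map is not supported in this browser');
}
var m = new Map();
m.set('key', 'value');

Whether an errored test is then kept out of the Browserscope graph is exactly the open question.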
I want to have an option on the cucumber report to mute/hide scenarios with a given tag from the results and numbers.
We have a Bamboo build that runs our Karate repository of features and scenarios. At the end it produces nice Cucumber HTML reports. On "overview-features.html" I would like an option added to the top right, alongside "Features", "Tags", "Steps" and "Failures", that says "Excluded Fails" or something like that. When clicked, it would provide the exact same information that overview-features.html does, except that any scenario tagged with a special tag, for example #bug=abc-12345, is removed from the report and excluded from the numbers.
Why I need this: we have some existing scenarios that fail. They fail due to defects in our own software that might not get fixed for six months to a year. We've tagged them with a special tag, "#bug=abc-12345". I want them muted/excluded from the Cucumber report produced at the end of the Bamboo build so I can quickly check whether the number of passed features/scenarios is 100%. If it is, great, that build is good. If not, I need to look into it further, as we appear to have some regression. With these expected failures included, it is very tedious and time-consuming to go through all the individual feature-file reports, find the failing scenarios, and then work out why each one failed. I don't want them removed completely, because when they start to pass I need to know, so I can go back and remove the tag from the scenario.
Any ideas on how to accomplish this?
Karate 1.0 has overhauled the reporting system, with the following key changes:
- after the Runner completes, you can massage the results and even re-try some tests
- you can inject a custom HTML report renderer
This will require you to get into the details (some of this is not documented yet) and write some Java code. If that is not an option, you have to consider that what you are asking for is not supported by Karate.
If you are willing to go down that path, here are the links you need to get started.
a) Example of how to "post process" result-data before rendering a report: RetryTest.java and also see https://stackoverflow.com/a/67971681/143475
b) The code responsible for "pluggable" reports, where you can, in theory, implement a new SuiteReports. The Runner also has a suiteReports() method you can call to provide your implementation.
Also note that there is an experimental "doc" keyword, by which you can inject custom HTML into a test-report: https://twitter.com/getkarate/status/1338892932691070976
Also see: https://twitter.com/KarateDSL/status/1427638609578967047
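For completeness: if simply skipping the tagged scenarios at run time is acceptable, Karate's tag selectors can do that without any custom report code. Note this is not the run-but-hide behavior asked for above (you lose the signal when a scenario starts passing), so it is only a stopgap. A minimal Java sketch, assuming the scenarios carry a plain @bug tag:

// Sketch only: excludes @bug-tagged scenarios from the run entirely,
// so they disappear from the report and from the numbers.
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class BugFilteredRunner {
    public static void main(String[] args) {
        Results results = Runner.path("classpath:features")
                .tags("~@bug") // "~" means NOT: skip scenarios tagged @bug
                .outputHtmlReport(true)
                .parallel(5);
        System.exit(results.getFailCount() == 0 ? 0 : 1);
    }
}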
I was trying to reproduce a tutorial about creating an Excel Add-in when something went wrong with Visual Studio's IntelliSense. I was writing this code:
function updateStocks() {
    Excel.run(function (ctx) {
        var range = ctx.workbook.names.getItem("Stocks")
Up to this point everything was fine, but after the getItem, I tried to add .getRange(), at which point IntelliSense was no longer able to suggest anything related to my range variable.
What is really "funny" is that even though the properties are not displayed, when I type the tutorial's code manually, it executes without error.
Why does this behavior occur and how can I correct it?
Are you able to see IntelliSense for other types within that .run? I.e., do you have everything up to the point where you get a Range from a named item? If you were to obtain the range differently (e.g., context.workbook.getSelectedRange()), do you get IntelliSense then?
This might be related to an issue (now fixed) where the CDN accidentally had the namedItem.getRange method removed (it was the only method affected, and we've put measures in place to catch such issues in the future). See "Can't get range from a defined name". The CDN was patched a couple of weeks ago, but the JS IntelliSense file ("VSDOC") probably hadn't been yet. If that's the case, it's a point-in-time issue that should resolve itself very soon, as new deployments of the CDN will have the getRange method both in the VSDOC and everywhere else.
FWIW, you may still run into limitations of the JS IntelliSense engine (there are plenty, unfortunately: for example, passing values across Promises, or passing API objects as parameters to functions). The only truly good workaround is TypeScript, which lets you declaratively assert to the compiler/IntelliSense engine that "I know this type is an Excel.Range!" -- and it offers a number of other goodies, async/await being a major one. I personally believe that if you want a "premier" Office.js coding experience, TypeScript is the way to go.
To that end, I describe how to use TypeScript in my book, "Building Office Add-ins using Office.js" (full disclosure: I am the author, but many readers have commented on how helpful a resource it's been to them). The book is very much TypeScript-oriented, IntelliSense being one of the reasons (async/await and let being the primary others), though I also offer an appendix describing the JavaScript-only way of accomplishing the same Office.js tasks. It takes only a small amount of effort to get started with TypeScript, and once you do, I don't think you'll look back.
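As a small illustration of that assertion style, here is a sketch (it assumes the Office.js type definitions are available to the editor; the "as Excel.Range" is redundant when the .d.ts files are loaded, but shows the escape hatch for spots where inference gives up):

// TypeScript sketch: the declared Office.js types drive IntelliSense,
// and an explicit assertion bridges places where inference fails
// (e.g. values passed across Promise boundaries).
async function updateStocks(): Promise<void> {
    await Excel.run(async (context) => {
        const range = context.workbook.names.getItem("Stocks").getRange() as Excel.Range;
        range.load("address"); // queue a read of the address property
        await context.sync();  // execute the queued commands
        console.log(range.address);
    });
}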
We use Hudson (well, Jenkins now) for CI. I have just started a project based on Node.js and am investigating Expresso and Gently (testing and mocking). I really like the fact that Expresso works with node-jscoverage to generate code coverage reports.
Has anybody started a project to display Expresso and node-jscoverage (or JSCoverage) reports in Hudson? Failing that, is there some documentation on what kind of output Hudson is expecting, short of inventing an entire new plugin?
In summary, I'm looking for two types of output here: test results (like JUnit) and coverage reports (like Cobertura).
Do Expresso and node-jscoverage produce XML output?
If so, see this related question: Jenkins and cFix unit testing (C++).
Following the approach there, you could convert your XML test output into a format JUnit understands (using an XSLT), and convert your coverage XML output into Cobertura format (again, using an XSLT).
Also see http://www.van-porten.de/2009/05/cunit-tests-in-hudson/ for a sample XSLT.
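To make the idea concrete, a test-result conversion could look like the sketch below. The input element names (results, test, @status, message) are invented, since I don't know the exact format Expresso emits; adapt the XPath expressions to the real output:

<?xml version="1.0"?>
<!-- Sketch: map a hypothetical tool-specific result format to JUnit XML. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/results">
    <testsuite name="expresso"
               tests="{count(test)}"
               failures="{count(test[@status = 'fail'])}">
      <xsl:for-each select="test">
        <testcase name="{@name}">
          <xsl:if test="@status = 'fail'">
            <failure><xsl:value-of select="message"/></failure>
          </xsl:if>
        </testcase>
      </xsl:for-each>
    </testsuite>
  </xsl:template>
</xsl:stylesheet>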
You could try the xUnit plugin. Its wiki page says it can handle txt and csv files using custom style sheets. In theory this should work for your test reports at least; I have never tried it, though.
As for coverage, I am not aware of any plugins that can deal with arbitrary coverage tools.
If the HTML reports the tools produce are usable, you could use the HTML Publisher plugin to link those reports into your job and make them accessible from Jenkins. Not as nice an integration as a dedicated test-tool plugin can provide, but depending on your expectations it might be enough.
Otherwise you will probably be forced to write a custom plugin. You could also post a request on the Jenkins mailing list; maybe someone is working on such a plugin already.
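For reference, wiring in the HTML Publisher plugin looks roughly like this in today's Pipeline syntax (the original answers predate Pipeline, where this was freestyle-job configuration; the 'coverage' report directory below is an assumption about where node-jscoverage writes its HTML):

// Jenkinsfile sketch; 'npm test' is assumed to run Expresso under node-jscoverage
pipeline {
    agent any
    stages {
        stage('Test') {
            steps {
                sh 'npm test'
            }
        }
    }
    post {
        always {
            publishHTML([allowMissing: true,
                         alwaysLinkToLastBuild: true,
                         keepAll: true,
                         reportDir: 'coverage',
                         reportFiles: 'index.html',
                         reportName: 'Coverage Report'])
        }
    }
}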
Is there a way to get some help from RubyMine's code completion when using Capybara in Cucumber's step definitions? I'm new to Capybara, so not having to check the reference site all the time would be really helpful.
The best I can get at the moment is by explicitly calling Session.new, something like:
session = Capybara::Session.new(:rack_test, my_app)
This way Ctrl+Space after session. shows me methods from Capybara::Session (only) so at least I know it's somehow reachable. But that's not how I really use Capybara in my step definitions. I thought that helping the type inference engine by manually annotating page could do the trick, but I suppose all this DSL magic is too much to handle.
So basically, is it somehow possible to have
page.<Ctrl+Space>
pop up with all the exposed DSL methods? RubyMine API maybe? Or, as an alternative, some other way to bring the reference docs closer (I don't think RubyMine supports external docs in the IDE yet)?
As of RubyMine 8.0.3 the answer is no, RubyMine does not complete Capybara methods following page. in Cucumber step definitions, at least not when Capybara is included in Cucumber via the cucumber-rails gem. I don't see a feature request in the RubyMine issue tracker; someone could add one if they like.
Note that cucumber-rails, at least, includes the Capybara DSL in the Cucumber world, so you don't need to type page. in front of Capybara methods. You can just call visit, fill_in, etc. as self methods. I wouldn't want unnecessary page. in my step definitions just for the sake of RubyMine completion.
Unfortunately, RubyMine also doesn't include Capybara methods in the list of names it completes when you invoke completion in a step definition before you type anything. It does include Capybara methods when you invoke completion twice (all names in all available code), but since that list is so long, it's only helpful if you already know the method you want, or at least a correct prefix.
Finally found a solution.
I am using Cucumber with Capybara, and I included all the matchers I wanted code completion for in /features/support/spec/spec_helper.rb. Cucumber auto-loads everything in this file. I bet there are other places you can put these include statements if you aren't using Cucumber.
# Needed for RubyMine code completion
include Capybara::Node::DocumentMatchers
include Capybara::Node::Matchers
include Capybara::SessionMatchers
include Capybara::RSpecMatchers
include Capybara::RSpecMatcherProxies
For your specific case:
include Capybara::DSL
Then page.<Ctrl+Space> should pop up the exposed DSL methods.