Finding orphaned SpecFlow steps - cucumber

SpecFlow can generate a StepDefinitionReport. Unfortunately, it doesn't seem to list steps for which code exists but which are not actually used in any *.feature file. The SpecFlow source code doesn't look like it actually parses the C# bindings, only the *.feature files, so it will never report a step with 0 uses.
Is there any other tool out there that will report orphaned steps? We have several hundred steps and multiple feature files, and I'd rather not crawl through them manually to find orphans.

I just tried the StepDefinitionReport with a trivial example (it took 5 minutes) and it does report orphaned steps, so there must be another problem in your case. You can also find the place in the source code where it collects the bindings: https://github.com/techtalk/SpecFlow/blob/master/TechTalk.SpecFlow.Reporting/StepDefinitionReport/StepDefinitionReportGenerator.cs#L38
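If the report still comes up short for a particular project, a rough cross-check can also be scripted by hand. Below is a minimal, naive sketch in Python; the folder names and the assumption that bindings use plain [Given]/[When]/[Then] regex attributes are mine, not from the thread. It pulls the binding regexes out of the .cs files, pulls the concrete step text out of the .feature files, and flags any binding whose regex matches no step.

    import re
    from pathlib import Path

    # Naively pull [Given(@"...")], [When(@"...")], [Then(@"...")] attributes out of the C# sources.
    ATTR_RE = re.compile(r'\[(Given|When|Then)\(@?"(.+?)"\)\]')
    # Concrete step lines in the feature files (And/But inherit the previous keyword).
    STEP_RE = re.compile(r'^\s*(Given|When|Then|And|But)\s+(.+)$')

    def collect_bindings(src_dir):
        for cs in Path(src_dir).rglob('*.cs'):
            text = cs.read_text(encoding='utf-8', errors='ignore')
            for keyword, pattern in ATTR_RE.findall(text):
                yield keyword, pattern, cs.name

    def collect_steps(feature_dir):
        steps = set()
        for feature in Path(feature_dir).rglob('*.feature'):
            for line in feature.read_text(encoding='utf-8', errors='ignore').splitlines():
                m = STEP_RE.match(line)
                if m:
                    steps.add(m.group(2).strip())
        return steps

    def report_orphans(src_dir, feature_dir):
        steps = collect_steps(feature_dir)
        for keyword, pattern, filename in collect_bindings(src_dir):
            # Ignore the keyword when matching; And/But steps make keyword matching unreliable anyway.
            try:
                matched = any(re.fullmatch(pattern, step) for step in steps)
            except re.error:
                continue  # skip binding regexes Python's re module can't parse
            if not matched:
                print('Possibly orphaned: [%s] %s  (%s)' % (keyword, pattern, filename))

    if __name__ == '__main__':
        report_orphans('StepDefinitions', 'Features')  # hypothetical folder names

It only understands plain regex-style bindings, so treat anything it flags as a candidate to check by hand rather than a definitive answer.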

Related

Possible to Control the Order of Cucumber Feature Files/Tags

First, I must state that this is not because certain tests must run before others. We run a full regression every week that generates a single report rather than lots of smaller reports (I asked, and management doesn't want the latter). It covers 11k scenarios, and we tag each scenario with the release it is associated with. We would love to be able to run the scenarios in a specific order depending on what occurred during the release, so that we aren't wasting time waiting for a test near the end of the run to fail only to have to start from scratch again, or at the very least run certain files first.
I know there is the solution of just renaming the folders/files each release, which is what I am doing, but it is extremely tedious and I would like to just change something in our Java runner.
There are similar questions from years ago, so I am hoping some hack or feature has been added that I just can't seem to find.

What's the real use of the management/commands directory in a Django application?

In the documentation, it is written that it can be used for writing custom django-admin commands. But my question is: why do we need to write custom admin commands at all? The example given in the official documentation is a bit dry to me. I would be really grateful if someone could give real-world examples that show how it is actually used.
Django docs on management/commands: https://docs.djangoproject.com/en/2.2/howto/custom-management-commands/
I mainly use them from cron / scheduled tasks.
Some potential examples would be:
Sending out reports/emails
Running scripts to update and sync some values
Updating the cache
Any large update to values: save it as a command to run on the production environment
I write and test it locally, but then I don't want to copy and paste it into an SSH terminal because it sometimes gets all sorts of messed up in the paste.
I also have a management command dothing that sets up the entire project: runs migrations, collects static files, imports the db, creates test users, creates required folder structures, etc.
I also have a couple of commands that I use but haven't made into views: little tools that help me validate and clean data and spit out a representation of it.
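For illustration, here is a minimal sketch of what a setup command along those lines might look like; the sub-commands, fixture name, and test credentials are my own assumptions, not from the answer above. It would live at yourapp/management/commands/dothing.py and run as python manage.py dothing.

    from django.contrib.auth import get_user_model
    from django.core.management import call_command
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = 'Set up the whole project: migrations, static files, seed data, test users.'

        def handle(self, *args, **options):
            # Apply database migrations without prompting.
            call_command('migrate', interactive=False)
            # Collect static files quietly.
            call_command('collectstatic', interactive=False, verbosity=0)
            # Load a seed fixture (hypothetical fixture name).
            call_command('loaddata', 'seed_data.json')
            # Create a throwaway test user if it doesn't exist yet (hypothetical credentials).
            User = get_user_model()
            if not User.objects.filter(username='testuser').exists():
                User.objects.create_user('testuser', password='change-me')
            self.stdout.write(self.style.SUCCESS('Project setup complete.'))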
Django scheduled operations and report generation from cron is the obvious one.
Another one I use is for loading data into the DB from CSV files. It's easy in the management-command environment to handle bad rows: I write the original CSV row into an exceptions file (with an error-description column appended) and can then look at it and decide what to do about those rows. Sometimes a trivial edit is enough, and I feed them through the management command again. It's possible to do the same via a view, but that's extra work for, IMO, no gain.
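As a rough sketch of that CSV pattern (the model, column names, and argument names below are hypothetical; only the exceptions-file idea comes from the answer):

    import csv

    from django.core.management.base import BaseCommand

    from myapp.models import Product  # hypothetical app and model

    class Command(BaseCommand):
        help = 'Load rows from a CSV file; rejected rows go to an exceptions file with the reason appended.'

        def add_arguments(self, parser):
            parser.add_argument('csv_path')
            parser.add_argument('--exceptions', default='exceptions.csv')

        def handle(self, *args, **options):
            with open(options['csv_path'], newline='') as infile, \
                    open(options['exceptions'], 'w', newline='') as errfile:
                reader = csv.DictReader(infile)
                writer = csv.DictWriter(errfile, fieldnames=reader.fieldnames + ['error_description'])
                writer.writeheader()
                for row in reader:
                    try:
                        # Hypothetical validation/conversion; adapt to the real model.
                        Product.objects.create(name=row['name'], price=float(row['price']))
                    except Exception as exc:
                        # Write the original row back out with the error appended so it can be
                        # edited and fed through the command again.
                        writer.writerow({**row, 'error_description': str(exc)})
            self.stdout.write(self.style.SUCCESS('Import finished.'))

Bad rows never stop the run; they just land in the exceptions file for a later look, which is the workflow described above.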

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
ExUnit provides a setup hook, which runs before each individual test, and a setup_all hook, which runs once before all tests in a module.
When I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, the persistence is purged before each step definition is executed. But a Gherkin scenario almost always needs multiple steps, which build up the test environment and execute the test in a fixed order.
The setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always contains multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to let me isolate single steps (which I consider pointless) or whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So currently that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't been published in a release yet, but they are in master. They work via callback tags; you can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

How to write feature files, and when to convert them to step definitions, to adapt to changing business requirements?

I am working on a BDD web development and testing project with other team members.
At the top, we write feature files in Gherkin and run Cucumber to generate step functions. At the bottom, we write Selenium page models and action library scripts. The rest is just filling in the step functions with Selenium script and finally running the Cucumber cases.
Sounds simple enough.
The problem comes starting when we write feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds, in terms of removing old ones and adding new ones.
Problem 2: On top of that, for some features, the detailed steps keep changing too.
The problem gets really bad if we try to regenerate updated step functions from the updated feature files every day. There is quite a bit of housecleaning to do to keep step functions and feature files in sync.
To deal with problem 2, I remembered that one basic rule of writing Gherkin feature files is to use business-domain language as much as possible. So I tried to persuade the BA to write the feature files a little more vaguely and not include too many UI-specific steps, so that we don't need to modify feature files/step functions as often. But she hesitates, because the client's requirement document includes those details and she just tries to follow it.
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they are less affected by the client's requirement changes? Can we write them vaguely, omitting some details that may change (that way at least we can stabilize the step-function prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in their content? From the beginning, or should we wait until the features stabilize a little? How often should we do this if the features keep changing? And is there a convenient way to clean up outdated step functions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are ultimately going to fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom through which you have limited or no visibility, then you are in a no-win situation.

Find all classes involved in a method call

I have a .NET 4.0 C# solution with a single .csproj (a library) containing several thousand files.
I want to extract out a small subset of the functionality from the thousands of files.
e.g. I want to extract the functionality of the MyLibrary.RelevantMethod() method into another library.
The aim is to create a new .csproj with the bare minimum class files needed to achieve this functionality.
I have a Program.cs which invokes the functionality, and I can navigate through the flow to find all the classes involved. It's just that there are too many (though still a small subset of all the classes).
Solutions tried:
The usual brute-force approach of going through the flow from the method (F12) and copying over every class file and associated files needed for it to compile. This is taking a lot of time, but I know that if I keep at it, it'll get done, so that is what I am doing right now.
The other option was to copy over the whole project and eliminate folders of classes based on instinct/namespace references, build to verify, and keep at it. This got nasty because only a subset of the classes in a folder was needed.
The VS 2013 code map graphs became unmanageable within three drill-downs. Sequence diagrams became too complex as well.
Call Hierarchy seemed the most promising, showing all the classes involved visually, but there is still the manual task of drilling through and copying the classes.
While I continue manually extracting the classes one by one using the call hierarchy, is there a faster or more automated way (semi-automated works as well) to determine all the classes involved in a method call in C#?
If I can get the list, I can do a search on the physical folders containing the .cs files (every class has an equivalent .cs file) and just copy them over.
You can find all the classes involved in a method call with the Runtime Flow tool (developed by me). From the Runtime Summary window you can also copy these classes to the clipboard for the selected module or namespace.

Resources