I am now working on a project where we are using cucumber-jvm to drive acceptance tests.
On previous projects I would create internal DSLs in Groovy or Scala to drive acceptance tests. These DSLs would be fairly simple to use, such that even a non-techie would be able to write tests with a little bit of guidance.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages reuse of stepDefs. Both outcomes are undesirable, leaving me asking whether the use of natural language is worth all this extra, unintuitive indirection.
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
So I am left asking what the value-add is that justifies the deterioration in the readability of the actual code that drives the test. Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
I would be grateful if someone out there could come up with a compelling argument as to why the gain of BDD surpasses the pain of BDD.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
The extra layer is the plain-language .feature file, and at the point of creation it has nothing to do with testing; it has to do with creating the requirements of the system using a technique called specification by example, to create well-defined stories. When written properly in the business language, specifications by example are very powerful at creating a shared understanding. This exercise alone can both reduce the amount of rework and find defects before development starts. This exercise is otherwise known as deliberate discovery.
Once you have a shared understanding of and agreement on the specifications, you enter development and make those specifications executable. Here is where you would use ATDD. So BDD and ATDD are not comparable; they are complementary. As part of ATDD, you drive the development of the system using the behaviour that has been defined by way of example in the story. The nice thing you have as a developer is a formal format that contains preconditions, events, and postconditions that you can automate.
From here on, the automated running of the executable specifications on a CI system will reduce regressions and provide you with all the benefits you get from any other automated testing technique.
The really interesting thing is that the executable specification files are long-lived and evolve over time as you add or change behaviour in your system. Unlike most Agile methodologies, where user stories are thrown away after they have been developed, here you have living documentation of your system that is also the specification and also the automated tests.
Let's now run through a healthy BDD-enabled delivery process (this is not the only way, but it is the way we like to work):
Deliberate Discovery session.
Output = agreed specifications delta
ATDD to drive development
Output = actualizing code, automated tests
Continuous Integration
Output = a report with screenshots that serves as browsable documentation of the system
Automated Deployment
Output = working software being consumed
Measure & Learn
Output = new ideas and feedback to feed the next deliberate discovery session
So BDD can really help you with the missing piece of most delivery systems: the specifications part. This part is typically undisciplined and freeform, and is left to a few individuals to hold together. This is how BDD is an Agile methodology and not just a testing technique.
With that in mind, let me address some of your other questions.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages reuse of stepDefs. Both outcomes are undesirable, leaving me asking whether the use of natural language is worth all this extra, unintuitive indirection.
If you make the stepDefs a super thin layer on top of your automation testing codebase, it's easy to reuse the automation code from multiple steps. In the test codebase, you should apply techniques and principles such as the testing pyramid and a shallow depth of test to ensure you have a robust and fast test automation layer. What's also interesting about this separation is that it allows you to reuse the code between your stepDefs and your unit/integration tests.
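To make that concrete, here is a minimal sketch of a "thin" set of step definitions. It is written with cucumber-js in TypeScript purely for illustration (the same shape applies to cucumber-jvm), and the AccountTasks helper, its import path, and its methods are assumed names for a piece of your automation layer:

import { Given, When, Then } from '@cucumber/cucumber';
import { expect } from 'chai';
// Hypothetical automation-layer helper; all HTTP/UI plumbing lives in there,
// not in the step definitions.
import { AccountTasks } from '../automation/account-tasks';

const accounts = new AccountTasks();

Given('a registered user named {string}', async (name: string) => {
  // thin: one call into the automation layer
  await accounts.register(name);
});

When('{string} logs in with a valid password', async (name: string) => {
  await accounts.logIn(name);
});

Then('{string} sees their dashboard', async (name: string) => {
  expect(await accounts.canSeeDashboard(name)).to.equal(true);
});

Because each step is a one-line translation from a business phrase into the automation layer, the same AccountTasks code can be reused from other stepDefs and from plain integration tests.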
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
As mentioned above, ATDD and BDD are complementary, not comparable. On the point of imperative/declarative, specification by example as a technique is very specific. When you are performing the deliberate discovery phase, you always ask the question "can you give me an example?". In that example, you would use exact values. If there are two values that can be used in the precondition (Given) or event (When) steps and they lead to different outcomes (Then step), you have two different scenarios; if they lead to the same outcome, it's likely the same scenario (for example, a valid discount code and an expired discount code that produce different totals are two scenarios, not one). Therefore, as part of the BDD practice, the steps need to be declarative in order to gain the benefits of deliberate discovery.
So I am left asking what the value-add is that justifies the deterioration in the readability of the actual code that drives the test. Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
It's worth it if you are working in a team where you want to solve the problem of miscommunication. One of the reasons people fail with BDD is that the writing and automation of features is left to the developers and the QAs, and the artifacts are no longer coherent as living specifications; they are just test scripts.
Test scripts tell you how a system does a particular thing, but they do not tell you why.
I would be grateful if someone out there could come up with a compelling argument as to why the gain of BDD surpasses the pain of BDD.
It's about using the right tool for the right job. Using Cucumber for writing unit tests or automated test scripts is like using a hammer to put a screw into wood. It might work, but it's never pretty and it's always painful!
On the subject of tools, your typical business analyst / product owner is not going to have the knowledge needed to peek into your source control and work with you on adding or modifying specs. We created a commercial tool to fix this problem by allowing your whole team to collaborate over specifications in the cloud, staying in sync (in real time) with your repository. Check out Simian.
I have also answered a question about BDD here that may be of interest to you; it focuses more on development:
Should TDD and BDD be used in conjunction?
Cucumber and Selenium are two popular technologies. Many organizations use Selenium for functional testing, and those organizations often want to integrate Cucumber with Selenium because Cucumber makes the application flow easy to read and understand. Cucumber is based on the Behavior-Driven Development approach and acts as a bridge between the following people:
Software Engineer and Business Analyst.
Manual Tester and Automation Tester.
Manual Tester and Developers.
Cucumber also helps the client understand the application, as it uses the Gherkin language, which is plain text. Anyone in the organization can understand the behavior of the software; Gherkin's syntax is simple text that is readable and understandable.
For the last few months, I have been working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behavior-driven development) standards, so we now have a large number of tests (~1000). The tests were written using chai - a BDD-style assertion library for Node.js - but I think this question can be expanded to general good practices when writing tests.
At first, we tried to avoid code redundancy as much as possible and it went quite well. As the number of lines of code and the number of people working on the project grew, it became more and more chaotic, but still readable. Sometimes a minor change in the code that could be applied in 15 minutes required changing e.g. mock data and methods in 30+ files, which meant 6 hours of changes and running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
function RegisterUserAndGetJWTToken(user_data: object, next: (token: string) => void) {
  chai.request(server)
    .post(REGISTER_URL)
    .send(user_data)
    .end((err: any, res: any) => {
      // hand the freshly issued JWT back to the calling test
      const token = res.body.token;
      next(token);
    });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain these kinds of functions, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there would be multiple imports)?
Maybe there is a way to inject or inherit the test-suite for each file, to avoid imports and have it by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your principle should be raising the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
const lastEmail = getLastEmailSent()
lastEmail.recipient.should.equal('john@smith.com')
lastEmail.contents.should.contain('Dear John')
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a post request (like in your example). The getLastEmailSent could read from a message queue or a fake SMTP server. The thing is you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - in the Automation Layer, not in the tests themselves.
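A minimal sketch of what such a layer could look like for the registration/email example, assuming the server, REGISTER_URL, and a fake SMTP client already exist somewhere in your test setup (the module paths and names here are placeholders):

// automation-layer.ts - a domain-oriented API that the tests call
import chai from 'chai';
import chaiHttp from 'chai-http';
import { server, REGISTER_URL } from './test-config';   // assumed existing config
import { fakeSmtp } from './fake-smtp-client';           // assumed test double

chai.use(chaiHttp);

export async function registerUser(name: string, email: string): Promise<void> {
  // the POST request is an implementation detail hidden from the tests
  await chai.request(server).post(REGISTER_URL).send({ name, email });
}

export async function getLastEmailSent(): Promise<{ recipient: string; contents: string }> {
  // could just as well read from a message queue; the tests never know
  return fakeSmtp.lastMessage();
}

When the registration endpoint or the email transport changes, only this file changes; the tests that read like the example above stay untouched.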
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function will be more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to have one end-to-end test that checks that all the pieces act together, and to automate all the specific cases at the lowest possible level. That way we get good coverage as well as speed and robustness of the test suite.
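As an illustration of the tax example (calculateTax, its categories, and its rates are made up here), the specific cases can be driven directly against the domain function, while a single end-to-end test elsewhere checks the wiring:

import { expect } from 'chai';
import { calculateTax } from '../src/invoicing';   // hypothetical module under test

describe('invoice tax calculation', () => {
  it('applies the reduced rate to books', () => {
    expect(calculateTax({ category: 'book', net: 100 })).to.equal(5);
  });

  it('applies the standard rate to electronics', () => {
    expect(calculateTax({ category: 'electronics', net: 100 })).to.equal(20);
  });
});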
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.
I'm new to Cucumber as a testing suite, and I notice something as I build out features and write steps. Let's say, as a bad example (since I'm working backwards), I write a bunch of stuff for creating posts that require a User.
I end up writing a bunch of User-based steps (the log-in process etc.) in a feature set mainly dedicated to Post features.
Is it best practice to later move steps into the appropriate feature set as tests get more complicated and features get added?
You have two parts to consider here.
Organize scenarios so they make sense. That is to place them in the proper feature files.
Organize the implementation of the steps so they make sense. That is, implement the steps in the proper source code files.
Your question boils down to "What makes sense in my context?".
It depends on your stakeholders: do they want all user-facing scenarios in the same feature file, or are they more interested in business-facing scenarios that sometimes involve users? Organize the scenarios so your stakeholders are happy.
How should you organize the steps then? It depends on your developers and your ability to share state between step definitions that are implemented in different source code files.
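For example, with cucumber-js you can share state through a custom World object, so steps implemented in different source files still talk to each other (cucumber-jvm achieves the same with dependency injection, e.g. picocontainer). A rough sketch, with file boundaries shown as comments:

import { setWorldConstructor, World, Given, Then } from '@cucumber/cucumber';
import { expect } from 'chai';

// Shared per-scenario state, visible to steps defined in any file.
class ProjectWorld extends World {
  currentUser?: string;
}
setWorldConstructor(ProjectWorld);

// user.steps.ts
Given('a logged-in user {string}', function (this: ProjectWorld, name: string) {
  this.currentUser = name;
});

// post.steps.ts
Then('the post is attributed to the logged-in user', function (this: ProjectWorld) {
  // set by a step that lives in a different source file
  expect(this.currentUser).to.not.be.undefined;
});

Whether you keep those definitions in one file or split them by domain then becomes purely an organizational choice; the shared state keeps them working together either way.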
My approach would probably be to start out small and let the suite grow. This would initially not involve sharing state between different classes during runtime. When the suite feels too large to handle, divide it into two parts that are as coherent as you can make them. When those get too large, repeat the division. You will, hopefully, end up with something that works well in your context.
Remember that your context and your product is unique. It probably deserves a unique solution that your team feel they can maintain.
Understandability, and therefore maintainability, is the most important property I can think of for the executable specification you are building.
I've been looking into Spock and I've had experience with FitNesse. I'm wondering how people would choose one over the other, given that they appear to address the same or a similar problem space.
Also for the folks who have been using Spock or other groovy code for tests, do you see any noticeable performance degradation? Tests are supposed to give immediate feedback - as we know that if the tests take longer to run, the developer tends to run them less frequently - so I'm wondering if the reduction in speed of test execution has had any impact in the real world.
Thanks
I am no FitNesse guy, so please take what I say with a grain of salt. To me it seems that what FitNesse is trying to do is provide a programming-language-independent environment in which to specify tests, and to give the programmer a more visual interface. In Spock, a Groovy AST transform is used to turn the data table into a Groovy program.
Since you basically stay in a programming language, it is easier in Spock to realize more complicated test setups. In FitNesse, as a result, you often seem to have to write fixture code.
I personally don't need a test-execution button; I like the direct approach. I like not having to take care of even more classes just to enable testing, and I like looking at the code directly. For example, I want to execute my tests from the command line, not from a web interface. That is surely possible in FitNesse too, but as a result the whole visual thing FitNesse is trying to give the user is just ballast for me. That's why I would choose Spock over FitNesse.
The advantage of the language-agnostic approach is, of course, that a lot of test specifications can be used for both Java and .NET, so if that is a requirement for you, you may judge differently. It usually is not one for me.
As for performance, I would not worry too much about that part.
Using the right language for the job is key - this is a comment I have read on SO, and I also believe it's the right thing to do. Because of this we ended up using different languages for different parts of the project - Perl, VBA (Excel macros), C#, etc. We currently have three to four languages in use inside the project. Using the right language for the job has made it immensely easier to automate things, but of late people are complaining that any new person who has to take over the project will have to learn several different languages just to get started, and it is difficult to find such a person. Please note that at most one to two people work on the project at any given time. I would like to know whether the method we are following is right, or whether we should converge on a single language and use it across the whole job even though another language might be better suited for some parts. Your experience with this would also help.
Languages used and their purpose:
Perl - processing large text files (log files).
C# with Silverlight for web-based reporting.
LabVIEW for automation.
Excel macros for processing data in Excel sheets, generating graphs and exporting to PowerPoint.
I'd say you are doing it right. Almost all the projects I've ever worked on have used multiple languages, without any problems. I think you may be overestimating the difficulty people have picking up new languages, particularly if they all use the same paradigm. If your project were using Haskell, Smalltalk, C++ and assembler, you might have difficulties, I must admit :-)
Using a variety of languages vs. maintainability is simply another design decision with cost-vs-benefit trade-offs that, like any design decision, needs to be considered carefully. While I like using the "absolute best" tool for the task (whatever "absolute best" means), I wouldn't necessarily use it without considering other factors such as:
do we have sufficient skill and experience to implement successfully
will we be able to find the necessary resources to maintain it
do we already use the language/tech in our system
does the increase in complexity to the overall system (e.g. integration issues, the impact on build automation) outweigh the benefits of using the "absolute best" language
is there another language that we already use and have experience in that is usable in lieu of the "absolute best" language
I worked on a system with around a dozen engineers that used C++, Java, SQL, TCL, C, shell scripts, and just a touch of Perl. I was proud that we used the "best language" where it made sense, but, in one case, using the "best language" (TCL) was a mistake - not because it was TCL, but because we failed to consider the full costs-vs-benefits of the choice:
we had only 1 engineer deeply familiar with TCL - the original engineer who refused to use anything but TCL for a particular target component - and then that engineer left the project
this target component was the only part of the system to use TCL and it was small relative to the other components in the system
the component could have been also implemented in another language we already used that we had plenty of experience in (C or C++) with some extra effort
the component appeared deceptively simple, but in reality had subtle corner cases that bit us in production (not something we could have known then, but always something to consider as a possibility later)
we had to implement special changes to the nightly build because the TCL compiler (yes, we compiled the TCL code into an executable) wouldn't work unless it could first throw its logo up on the screen - a screen which wasn't available during a cron-initiated automated nightly build. (We resorted to using xvfb to satisfy it.)
when bugs turned up, we had a difficult time finding engineers who wanted to work on it
when problems with this component cropped up only after sustained load in the field, we lacked the experience and deep understanding of the TCL execution engine to easily debug it in the field
finally, the maintenance and sustainment team, which is a much smaller team with fewer resources than the main development team, had one more language that they needed training and experience in just to support this relatively small component
Although there were a number of things we could have done to head-off some of the issues we hit down the road (e.g. put more emphasis on getting TCL experience earlier, running better tests to detect the issues earlier, etc), my point is that the overall cost-vs-benefit didn't justify using TCL to code that single component. Again, it was not a mistake to use TCL because it was TCL (TCL is a fine language), but rather it was a mistake because we failed to give full consideration to the cost-vs-benefits.
As a Software Engineer it's your job to learn new languages if you need to. I would say you should go with the right tool for the job.
It's like sweeping the floor with an octopus. Yeah, it gets the job done... kind of... but it's probably not the best tool for the job. You're better off using a mop.
Another option is to create positions geared towards working in specific languages. So you can have a C# developer, a Perl developer, and a VBA developer who will only work with that language. It's a bit more overhead, but it is a workable solution.
Any modern software project of any scope -- even if it's a one-person job -- requires more than one language. For instance, a web project usually requires Javascript, a backend language, and a DB query language (though any of these might be created by the backend language). That said, there's a threshold that is easily reached, and then it would be very hard to find new developers to take over projects. What's the limit? Three languages? Four? Let's say: five is too many, but one would be too few for any reasonably complex project.
Using the right language for the right job is definitely appropriate - I am mainly a web programmer, and I need to know server-side programming (Rails, PHP + others), SQL, Javascript, jQuery, HTML & CSS (not programming languages strictly, but complex things I need to know) - it would be impossible for me to do all of that in a single language or technology.
If you have a team of smart developers, picking up new languages will not be a problem for them. In fact they probably will be eager to do so. Just make sure they are given adequate time (and mentoring) to learn the new language.
Sure, there will be a learning curve to implementing production code if there is a new language to learn, but you will have a stronger team member for it.
If you have developers in your teams who strongly resist learning new languages, unless there is a very good reason (e.g they are justifiably adamant that they are being asked to used a different language when it is not appropriate to do so) - then they are not the sort of developers I'd want in my teams.
And don't bother trying to hire people who know all the languages you use. Hire smart programmers (who probably know at least one language you use) - they should pick up the other languages just fine.
I would be of the opinion that if a programmer on my team wanted to introduce a second (or third) language into a project, there had better be a VERY good reason to do so; as a project manager I would need to be convinced that the benefits of doing so more than offset the costs. And it would take a lot of convincing.
Splitting a project up into multiple languages makes it very expensive to hire the right person(s) to take over that project when it needs maintenance. For small and medium shops it could be a huge obstacle.
Edit: I am not talking about using JavaScript and C# on the same project; I am talking about using C# for most of the code, F# for a few parts, and then VB or C++ for others - there would need to be a compelling reason.
KISS: 'Keep it simple stupid' is a good axiom to follow in most cases.
EDIT: I am not completely opposed to adding languages, but the burden of proof is on the person who wants to do it. KISS (to me) applies not only to getting the project/product done and shipped, but also to the lifetime maintenance and support requirements. Lots of languages come and go, and programmers are attracted to new languages like moths to a light. Most projects I have worked on, I still oversee and/or support 5 or 10 years later - the last thing I want to see is some long-forgotten and/or orphaned language responsible for some key part of an application I need to support.
My experience using C++ and Lua was that I wrote more glue than actual operational code and for dubious benefit.
I'd start by asking whether you are using the right paradigm for the job.
Suppose you know how to do object-oriented programming in C#. I don't think the leap to Java is all that great. Although you'd have to familiarize yourself with libraries and syntax, the idea is pretty similar.
If you have procedural parts to your project, such as parsing files and various data transformations, your Perl/Excel Macros seem pretty similar.
But to address your issues, I'd say above all, you'll need clarity in code. Since your staff are using several languages, they won't be familiar with all languages to an equal degree. So make sure of this:
1) Syntactic sugar is explained in comments. Sugar is pretty language-specific and may not be obvious to a new reader. For instance, in VBA I seem to remember there are default properties, so that TextBox1 = "Hello" is what you'd write instead of TextBox1.Text = "Hello". This can be confusing. Also, things like LINQ statements can have non-obvious meanings. So make sure people have comments to read.
2) Where two components from different languages have to work together, document in excruciatingly specific detail how that happens. For instance, I once had to write a C component to be called from Excel VBA. There are quite a few steps and potential errors in doing this, especially as far as compiler flags are concerned. Make sure both sides know how their side of the interaction occurs.
Finally, as far as hiring people goes, I think you have to find people who aren't married to a specific language. To put it vaguely, hire an intelligent person who sees business issues, not code. He'll learn the lingo soon enough.
I work at a medium/large company that follows what I think are some good practices for development - maybe not the best ones, but good enough.
We have some development practices that get adopted on the basis of "do, test, if it is useful then use it, else throw it away". I've found that many of the so-called best practices are ideally great but sometimes unfeasible or even harmful in real life.
For example, we used to have a dotProject website for our team. The idea was to track tasks, update progress and so on. We applied the "do, test, if" approach to it and the result was... we threw it away and just kept the forum, which was extremely useful for communicating between us and keeping track of meeting conclusions and TO DO lists. Tracking each task, on the other hand, proved to be both time-consuming and unrealistic.
First of all, nobody was doing it. It didn't take a lot of time, but developers hated it and it made them unhappy because they had to remember to update every task, and the time estimates for the tasks proved to be unrealistic most of the time.
So my question is: what development techniques have you tried and found useful or not useful?
I mean that in a real-life sense: not some theoretical best practice, but hands-on experience. I'm looking to explore new techniques (or tools or whatever) and I'd like opinions on what to do next. Our current status:
Internal issue tracking system (Useful)
Semi-automated builds (every developer has to maintain the equivalent of a makefile in order for the system to be able to build them).
NO automated testing. Tests are performed by a test team. We have integration tests and broad system tests.
Two test labs, one for the test team and the other for the developers (in case they need to perform tests that involve more than one machine, or to test outside the development machine).
No unit testing in general. Some libraries have unit tests, but usually each developer tests their units as they see fit.
Full specification using DOORS.
Test protocols. Formal, written in Word.
Source control (ClearCase). Usually everything is done in the main branch; whenever we ship a version it gets labeled, and if needed a branch is made for fixes to that version.
Note: When you can (if you don't mind :P), could you try to justify your proposal? How and why was it useful? How did it improve your work?
One of the most useful things we introduced was a project wiki: a dumping ground for all the little titbits of information floating around in people's heads but too trivial to record in a full document.
Having been involved in all sorts of development teams with different methodologies, my experience is that most of the agile principles tend to work great. It generally makes development more fun, since everyone is more engaged. In the larger development environments the basic principle of co-locating all team members brings great benefits, especially when you have separate information analysts and testers next to developers. Have information analysts, testers and developers work together per feature. This creates the best flow of information, with as little overhead as possible. You can take this even further to a Lean development process.
In general, the things that gave us the greatest benefits were the things that lowered the barriers of communication. In a practical sense, a company wiki helped out a great deal as well, lowering the barrier for sharing information. A good bug/features/RFC tracking tool also helped a great deal to have a joint understanding amongst stakeholders of what direction the project is heading. And such a tracking tool does not just have to be internal: lower the barriers to your customers as well. This also helps a great deal in managing expectations.
I feel I'm just getting started here. Others will no doubt come with more suggestions.
Pascal.
Follow this link...
And personally, as a developer I prefer to focus on things that improve my performance. I don't mind checking some bug-reporting site to see if new bugs have been reported, but I need to be able to view it quickly without having to go through a few dozen pages or dozens of clicks.
I don't mind writing technical designs before writing code either, as long as I have the tools to write them properly. These tools must be built to increase performance, with a minimum of fluffy features that no one uses. For example, in the past I used Enterprise Architect to create UML models before writing code. It worked fine, but the application has some flaws and lots of features that I don't need. When I discovered Altova UModel, I quickly switched to a much leaner tool for UML modelling that offered me exactly what I need. Nothing more, nothing less.
Basically, you have to keep people focused on the final goal. And the final goal is creating some product to be used by your users. Many development teams get lost because they focus on other things instead. None of your users will care about how you created the thing they use. They just need your project to reach their goals.
Thus, the best practice is the one that makes your team the most comfortable, including any new team member who joins halfway through a project.
Personally, I'm a very big fan of automated testing (both unit tests and integration tests). In my view it's like a safety net - developers feel safer changing code when they know there's a test harness that will catch them if they break something. This allows faster introduction of new features and also removes the 'fear' of refactoring, for example.
As a management approach for our team: Scrum.