I have always thought of "tests" as testing the whole thing. Yet I have seen that in many (not to say all) cases, tests only exercise the logic, not the whole thing: small pieces of code being tested (unit tests).
In my mind, a test should check whether the code is OK, and for a web server the best way to do that is with a real HTTP request.
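For example, something like this is what I have in mind - a sketch using chai-http and Mocha against a hypothetical Express route, just to illustrate the idea of hitting the code with a real request:

import chai from 'chai';
import chaiHttp from 'chai-http';
import express from 'express';

chai.use(chaiHttp);

// Hypothetical app and route, standing in for the real web server under test.
const app = express();
app.get('/health', (_req, res) => res.json({ status: 'ok' }));

describe('GET /health (real HTTP request)', () => {
  it('responds with status ok', async () => {
    const res = await chai.request(app).get('/health');
    chai.expect(res).to.have.status(200);
    chai.expect(res.body.status).to.equal('ok');
  });
});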
However, I have noticed that this practice is not widely recommended. I can't say why, but in most frameworks I have used you couldn't test with real requests without hacking something together.
Is it something TERRIBLY BAD, or just OK? I mean, why mock something that can be tested for real?
I'm not sure this is covered by other questions, but if so, I couldn't find it.
I have a reasonably complex structure of data types and records, which is not easy for someone unfamiliar with the codebase to make sense of just by looking at the production code.
To make sense of it, I created dummy values like the one below, which have two advantages: (1) they're checked at compile time, and (2) they serve as a kind of documentation, showcasing an example of how the overall structure of data types and records fits together:
-- benefit 1: a newcomer can quickly make sense of the type system
-- benefit 2: easier to keep track of how the types evolve because of compile-time checking
exampleValue1 :: ApiResponseContent
exampleValue1 = ApiOnlineResponseContent
  [ OnlineResultRow (EntityId 10) [Just (FvInt 1), Just (FvFloat 1.5), Nothing]
  , OnlineResultRow (EntityId 20) [Just (FvInt 2), Nothing, Just (FvBool True)]
  ]
The only thing that bothers me a bit is that they feel awkward sitting in the production code, as they're clearly dead code. However, they're not tests either; they're just compile-time-checked examples of how values can be assembled from complex nested types. So they clearly don't belong in the production code, but they don't quite belong in the tests either.
Is it common practice to have this kind of compile-time example? And where should such examples be placed within the codebase?
Have you considered including the examples as part of Haddock documentation?
Otherwise, there's a school of thought that tries to reframe tests as examples. You can find a brief mention of this in Gerard Meszaros's work on unit testing. Dan North has also repeatedly used similar language, but it can often be difficult to track down his many iterations of such ideas. He tends to 'think in public' - here's one such reflection:
"I call this development, using example-guided design"
I understand that the kind of example being asked about isn't quite like that. Still, I think that it better belongs with test code than with production code.
If you put the examples in the production code, you essentially make it part of the API of your library (if you're shipping a library). This means that changing the examples would constitute a breaking change. That doesn't seem right to me.
In the BDD/DDD community, there's a lot of emphasis on tests as examples, also in the sense that automated tests serve as documentation. If Haddock documentation isn't an option, I'd consider putting the examples in the test code, as documentation. I sometimes do that by simply putting such 'vacuous tests' in a file called examples, perhaps with a little comment at the top to explain that the code in that file assists learning rather than verifying behaviour.
It dodges the risk of introducing redundant breaking changes in the production code, and seems conceptually like a better fit.
I disagree that these aren't tests. Even “does this example compile?” could be seen as a test, and you could probably also use them to actually test some functions while you're at it.
So, put these definitions in your test suite, and use them for unit testing the functions that will actually be dealing with such values.
The main convention I’ve seen for housing such examples is to include ….Tutorial modules, such as Dhall.Tutorial and Clash.Tutorial, or parallel …-tutorial packages such as lens-tutorial.
These contain Haddock documentation, organised so as to read in linear order, alongside example definitions like yours.
With good examples, I find this convention very helpful for understanding and experimenting with a new package in addition to types and reference docs. It’s also fairly discoverable when browsing packages on Hackage, especially if you ensure that the reference and README link to the tutorial for longer explanations.
These modules can be just compiled as basic validation, but having more machine validation for docs is incredibly valuable, so I think it’s even better to link them into the test suite (e.g. hspec) as examples for unit tests (HUnit) or property tests (QuickCheck/hedgehog), or organise them as documentation tests (doctest).
In addition to GHC’s code coverage tools, weeder can help identify examples that aren’t being tested.
For the last few months, I have been working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behaviour-driven development) practices, so we now have a large number of tests (~1000). The tests were written using chai - a BDD-style assertion library for Node.js - but I think this question can be expanded to general good practices when writing tests.
At first we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and the number of people working on the project grew, it became more and more chaotic, but still readable. Sometimes minor changes to the code that could be applied in 15 minutes required changing, e.g., mock data and methods in 30+ files, which meant 6 hours of changes and running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
function RegisterUserAndGetJWTToken(user_data: any, next: any) {
  chai.request(server)
    .post(REGISTER_URL)
    .send(user_data)
    .end((err: any, res: any) => {
      // NOTE: err is currently ignored; the token is read straight from the response
      const token = res.body.token;
      next(token);
    });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain this kind of function, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there are multiple imports)?
Maybe there is a way to inject or inherit the test-suite into each file, to avoid imports and have it available by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your guiding principle should be to raise the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
const lastEmail = getLastEmailSent()
lastEmail.recipient.should.equal('john@smith.com')
lastEmail.contents.should.contain('Dear John')
Now, in the implementation of those methods, there could be a lot going on. In particular, the registerUser function could send a POST request (as in your example). getLastEmailSent could read from a message queue or a fake SMTP server. The point is that you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when the code changes, there will be only one place to change in the test code - the Automation Layer - not the tests themselves.
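A minimal TypeScript sketch of such a layer, assuming the chai-http setup from the question; the route, server module and fake SMTP helper are hypothetical names, not your actual code:

// automation-layer.ts - a domain-oriented, programmatic API to the system.
import chai from 'chai';
import chaiHttp from 'chai-http';
import { server } from './server';        // hypothetical app under test
import { fakeSmtp } from './fake-smtp';   // hypothetical fake SMTP server client

chai.use(chaiHttp);
const REGISTER_URL = '/api/register';     // hypothetical route

export interface Email {
  recipient: string;
  contents: string;
}

// Hides the HTTP details of registration behind a domain-level call.
export async function registerUser(name: string, email: string): Promise<string> {
  const res = await chai.request(server).post(REGISTER_URL).send({ name, email });
  return res.body.token; // the JWT token, as in the question's helper
}

// Hides where sent mail actually comes from (message queue, fake SMTP, ...).
export async function getLastEmailSent(): Promise<Email> {
  return fakeSmtp.lastMessage();          // hypothetical helper
}

The tests then only import these functions and read entirely in domain terms, as in the example above.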
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function is more sensible. For example, if you write tests for calculating taxes on an invoice, going through the whole application for each test case would be overkill. It's much better to have one end-to-end test that checks that all the pieces work together, and to automate all the specific cases at the lowest possible level. That way we get good coverage as well as speed and robustness in the test suite.
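A sketch of what that split could look like, with a hypothetical calculateTax function and purely illustrative rates (not code from the question):

// Unit level: exercise the tax rules directly, once per case - fast and robust.
import { expect } from 'chai';
import { calculateTax } from '../src/invoicing'; // hypothetical pure function

describe('calculateTax', () => {
  it('applies the reduced rate to books', () => {
    expect(calculateTax({ category: 'book', net: 100 })).to.equal(5);
  });

  it('applies the standard rate to electronics', () => {
    expect(calculateTax({ category: 'electronics', net: 100 })).to.equal(20);
  });
});

// End-to-end level: a single test that posts an invoice through the REST API
// (e.g. via chai.request(server)) to confirm the pieces are wired together.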
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.
We have a couple of very, very slow JUnit tests that make heavy use of mocking, including mocking of static functions. Single tests take 20-30 seconds; the whole mvn test run takes 25 minutes.
I want to analyze where the time is wasted but have little experience in profiling.
I assume that the initialization of the dependent mock-objects takes much too long.
Two questions:
1) How can I quickly get numbers showing in which methods the time is wasted? I don't need a complex power-user tool, just something basic to get the numbers (evidence that the kind of mocking we do is evil).
2) Do you have any ideas about what design flaws can produce such bad timings? We test JSF backing beans that should call mocked services. Perhaps there is some input validation or unrefactored business logic in the backing beans, but that cannot be changed (please don't comment on that ;-)).
Regarding 2): for example, one test has about 30 (!) classes that have to be prepared for the test with @PrepareForTest. This cannot be good, but I cannot explain why.
Here is my input on this:
Try using something simple like the Apache Commons StopWatch class. I find that this is an easy way to spot bottlenecks in code, and usually once you find the first bottleneck, the rest of them are easier to spot. I almost never waste my time trying to configure an overly complicated profiling tool.
I think it is odd that you have such performance problems in fully mocked unit tests. If I had to guess, I would say that you are missing one or two mocked components and the database or external web services are actually being called without you knowing about it. Of course I may be wrong, because I don't use PowerMock and I make it a point never to mock static methods. That is your biggest design flaw right now and the biggest hindrance to getting good test coverage of your code. So what to do? You have two options: you can refactor the static methods into instance methods that can be mocked more easily, or you can wrap the static methods in a class wrapper and mock the wrapper instead. I typically do the latter when the static methods come from a third-party library for which I do not have the source.
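The wrapper idea is language-agnostic; here is a minimal sketch in TypeScript with hypothetical names, just to show the shape (a Java version using a plain Mockito mock of the wrapper interface would look analogous):

// A static/third-party call we cannot easily mock directly (hypothetical).
class LegacyTaxUtil {
  static computeTax(net: number): number {
    return net * 0.2;
  }
}

// The seam: a narrow interface plus a thin wrapper that is the only place
// still touching the static call.
interface TaxCalculator {
  computeTax(net: number): number;
}

class LegacyTaxCalculator implements TaxCalculator {
  computeTax(net: number): number {
    return LegacyTaxUtil.computeTax(net);
  }
}

// Production code depends on the interface and receives it via the constructor.
class InvoiceService {
  constructor(private readonly taxCalculator: TaxCalculator) {}

  total(net: number): number {
    return net + this.taxCalculator.computeTax(net);
  }
}

// In a test, inject a trivial fake instead of mocking a static method:
const service = new InvoiceService({ computeTax: () => 7 });
console.assert(service.total(100) === 107);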
Regarding "one test has about 30 (!) classes to be prepared for test with @PrepareForTest. This cannot be good, but I cannot explain why." - this really sounds like you may also have methods that are doing entirely too much! That is just too many dependencies for a single method in about 99% of cases. More than likely such a method can be separated into separate, more easily testable methods.
Hope this helps.
Before you start developing something useful in Node.js, what's your process? Do you create tests with VowJS or Expresso? Do you use Selenium tests? When?
I'm interested in establishing a nice workflow for developing all my Node.js applications, similar to what I have in Rails (Cucumber, RSpec, code).
Sorry for the number of questions.
Let me know how it works for you.
The first thing I do is write some documentation or draw some wireframes. It helps me visualize what I want to implement.
Then I code the interface/skeleton of my module/application, without implementations.
Then I add specs and tests using testosterone (although vows and expresso are more popular options) and make them pass by filling in the implementations.
If you find that a private method needs to be tested (it deals with I/O, has complex logic, ...), move it to another class and test it independently.
Stub your I/O calls as much as you can. Tests will run faster and you will not have to deal with side effects. I recommend the gently library for that.
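Even without a library, passing the I/O dependency in explicitly makes stubbing trivial; a small TypeScript sketch with hypothetical names (this is not gently's API):

// user-service.ts - takes the HTTP client as a dependency instead of importing it.
interface HttpClient {
  get(url: string): Promise<{ body: unknown }>;
}

export function makeUserService(http: HttpClient) {
  return {
    async userName(id: number): Promise<string> {
      const res = await http.get(`/users/${id}`);
      return (res.body as { name: string }).name;
    },
  };
}

// In the test, stub the I/O call: fast, deterministic, no network side effects.
const stubHttp: HttpClient = {
  get: async () => ({ body: { name: 'Ada' } }),
};

makeUserService(stubHttp)
  .userName(1)
  .then((name) => console.assert(name === 'Ada'));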
My testing methodology isn't up to snuff compared with, for example, Java/JUnit, and I should really work on improving it. I should really practice TDD more.
I played a little with expresso and liked the fact that you can generate code coverage reports. What I thought was missing was something like the @Before, @BeforeClass and @After hooks you can find in Java.
I also played a bit with nodeunit, which does have setup/teardown. I'd still like to play with this framework a little more.
I don't like the vowjs syntax, but it is a very popular BDD framework, so maybe I should use it (more) to get sold on it like a lot of other users. But for now I am going to dismiss vowjs.
I also played with zombie.js a little bit, which is also pretty cool. Lately I saw another cool testing framework whose name I can't remember, but luckily there are enough options for testing in Node.js.
The only thing I don't like is that IDE integration is not up to snuff, in my opinion. The IDEs I had for Java can't be compared with what I have found for Node.js, but I think with a little effort I can make a more useful programming environment. I will try to keep you informed about my progress.
P.S.: What I do like a lot is the npm package manager. When you compare it to, for example, Maven, you just say wow. It still has some minor bugs because it is a young project, but npm is still very good in my opinion!
Do decoupling and YAGNI contradict each other?
Decoupling is great and quite hard to achieve. However, in most applications we don't really need it, so I can design highly coupled applications and almost nothing changes, apart from obvious side effects such as "you cannot separate components", "unit testing is a pain in the arse", etc.
What do you think? Do you always try to decouple and deal with the overhead?
It seems to me decoupling and YAGNI are very much complementary virtues. (I just noticed Rob's answer, and it seems like we're on the same page here.) The question is how much decoupling you should do, and YAGNI is a good principle to help determine the answer. (For those who speak of unit testing -- if you need to decouple to do your unit test, then YAGNI obviously doesn't apply.)
I really sincerely doubt the people who say they "always" decouple. Maybe they always do every time they think of it. But I have never seen a program where additional layers of abstraction couldn't be added somewhere, and I sincerely doubt there is a non-trivial example of such a program out there. Everyone draws a line somewhere.
From my experience, I've decoupled code and then never taken advantage of the additional flexibility about as often as I've left code coupled and then had to go back and change it later. I'm not sure if that means I'm well-balanced or equally broken in both directions.
YAGNI is a rule of thumb (not a religion). Decoupling is more or less a technique (also not a religion). So they're not really related, nor do they contradict each other.
YAGNI is about pragmatism. Assume you don't need something, until you do.
Usually assuming YAGNI results in decoupling. If you don't apply that knife at all, you end up assuming that you need to have classes that know all about each other's behavior before you have demonstrated that to be true.
"Decouple" (or "loosely couple") is a verb, so it requires work. YAGNI is a presumption, for which you adjust when you find that it's no longer true.
I'd say they don't. Decoupling is about reducing unnecessary dependencies within code and tightening up accesses through clean, well-defined interfaces. "You ain't gonna need it" is a useful principle which generally advises against over-extensibility and overly broad architecture of a solution where there's no obvious and current use case.
The practical upshot of these is that you have a system where it's much easier to refactor and maintain individual components without inadvertently causing a ripple effect across the entire application, and where there are no unnecessarily complicated aspects to the design - it's as simple as is required to meet the current requirements.
I (almost) always decouple. Every time I did this I found it useful, and (almost) every time I didn't I had to do it later. I've also found it a good way to decrease the number of bugs.
Decoupling for the sake of decoupling can be bad.
Building testable components is very important though.
The hard part of the story is to know when and how much decoupling you need.
If "unit testing is a pain in the arse" then I would say that you do need it. Most of the time decoupling can be achieved with virtually zero cost as well, so why wouldn't you want to do it?
Furthermore, one of my biggest bugbears when working on a new codebase is having to decouple the code before I can start writing unit tests, when the introduction of an interface somewhere or the use of dependency injection could have saved a lot of time.
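As a small TypeScript sketch of what that usually amounts to (the names are hypothetical):

// Before: the service would construct its own SmtpMailer, so a unit test
// has to send real mail or monkey-patch the module.
// After: it depends on a narrow interface, and the test injects a fake.
interface Mailer {
  send(to: string, body: string): Promise<void>;
}

class SmtpMailer implements Mailer {
  async send(to: string, body: string): Promise<void> {
    // the real SMTP call would go here
  }
}

class ReportService {
  constructor(private readonly mailer: Mailer) {}

  async emailReport(to: string): Promise<void> {
    await this.mailer.send(to, 'Monthly report ...');
  }
}

// Test: no SMTP server needed, just a recording fake.
const sent: string[] = [];
const fakeMailer: Mailer = { send: async (to) => { sent.push(to); } };

new ReportService(fakeMailer)
  .emailReport('john@smith.com')
  .then(() => console.assert(sent[0] === 'john@smith.com'));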
As your tag says, it's highly subjective. It rests entirely upon your own engineering wisdom to decide what you "ain't gonna need". You may need coupling in one case, but you won't in another. Who is to tell? You, of course.
For a decision so subjective, then, there cannot be a guideline to prescribe.
Well, YAGNI is little more than a bogus, simplistic phrase people throw around. Decoupling, however, is a fairly well understood concept. YAGNI seems to imply that one is some sort of psychic. It's just programming by cliché, which is never a good idea. To be honest, there is a case to be made that YAGNI is probably not related to decoupling at all. Coupling is typically "quicker", and "who knows if you are going to need a decoupled solution; you aren't gonna change X component anyway!"
YAGNI the mess :) ... really, we don't need to have all the code mixed together to go "faster".
Unit tests really help you feel when code is coupled (given that one understands well what a unit test is vs. other types of tests). If you instead work with the "you cannot separate components" mindset, you can easily end up adding stuff that you ain't gonna need :)
I would say YAGNI comes in when you start twisting and changing logic all around, beyond the actual usage scenarios the current implementation calls for. Let's say you have some code that uses a couple of external payment providers that both work with redirects to an external site. It is OK to have a design that keeps everything clean, but I don't think it is OK to start designing for providers that we don't know will ever be supported and that have plenty of different ways of handling the integration and the related workflow.
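To make that concrete, a sketch with hypothetical provider names: keep a small seam for the two redirect-based providers you actually have, and resist adding hooks for workflows nobody has asked for.

// Enough abstraction for today's two redirect-based providers ...
interface RedirectPaymentProvider {
  redirectUrl(orderId: string, amount: number): string;
}

class AcmePay implements RedirectPaymentProvider {
  redirectUrl(orderId: string, amount: number): string {
    return `https://pay.acme.example/checkout?order=${orderId}&amount=${amount}`;
  }
}

class GlobexPay implements RedirectPaymentProvider {
  redirectUrl(orderId: string, amount: number): string {
    return `https://globex.example/pay/${orderId}?total=${amount}`;
  }
}

// ... and no speculative capture/settle phases, polling hooks or callback
// registries for providers that may never be supported - adding those now
// would be the YAGNI violation.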