Python test method for additional method call - python-3.x

I have a situation and I could not find anything online that would help.
My understanding is that Python testing is rigorous, so that if someone changes a method, the test will fail and alert the developers to rectify the difference.
I have a method that calls 4 other methods from other classes. Patching made it really easy for me to determine whether a method has been called. However, if someone on my team decides to add a 5th method call, the test will still pass. Assuming that no other method calls should be allowed inside, is there a way in Python to test that no other calls are made? Refer to example.py below:
example.py:
def example():
    classA.method1()
    classB.method2()
    classC.method3()
    classD.method4()
    classE.method5()  # we do not want this call in here; the test should fail if it detects a 5th (or more) method call
Is there any way to cause the test case to fail if any additional method calls are added?

You can easily test (with mock, or by doing the mocking manually) that example() does not specifically call classE.method5, but that's about all you can expect - it won't work (unless explicitly tested too) for, e.g., classF.method6(). Such a test would require either parsing the example function's source code or analysing its bytecode representation.
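For the first, straightforward part - asserting that classE.method5 specifically is not called - a minimal sketch with unittest.mock could look like this (the patch target "example.classE" assumes classE is imported at the top of example.py; adjust it to your actual layout):

import unittest
from unittest import mock

import example  # the module that defines example(); layout assumed here

class ExampleCallTest(unittest.TestCase):
    def test_method5_is_not_called(self):
        # Replace classE as it is looked up inside example.py, run the
        # function, then verify that method5 was never invoked on it.
        with mock.patch("example.classE") as fake_class_e:
            example.example()
        fake_class_e.method5.assert_not_called()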
This being said:
my understanding is that python testing is rigorous to ensure that if someone changes a method, the test would fail
I'm afraid your understanding is a bit off - it's not about "changing the method", it's about "unexpectedly changing behaviour". In other words, you should first test for behaviour (black-box testing), not for implementation (white-box testing). Now the distinction between "implementation" and "behaviour" can be a bit blurry depending on the context (you can consider that "calling X.y()" is part of the expected behaviour, and sometimes that does make sense), but the distinction is still important.
With regard to your current use case (and without more context - i.e. why shouldn't the function call anything else?), I personally wouldn't bother trying to be that defensive; I'd just clearly document this requirement as a comment in the example() function itself, so anyone editing this code immediately knows what they should not do.
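That said, if you do want the defensive variant, the bytecode-inspection idea mentioned above could look roughly like this - purely a sketch, assuming classA through classD are module-level names used directly inside example(). Note that it flags any extra global reference (not just method calls), and the relevant opcodes can shift between Python versions:

import dis

from example import example  # assumed module layout

ALLOWED_GLOBALS = {"classA", "classB", "classC", "classD"}

def test_example_touches_only_allowed_globals():
    # Every module-level name the compiled function refers to shows up
    # as a LOAD_GLOBAL instruction; anything outside the allowed set
    # (e.g. a newly added classE) makes the test fail.
    used = {
        instr.argval
        for instr in dis.get_instructions(example)
        if instr.opname == "LOAD_GLOBAL"
    }
    assert used <= ALLOWED_GLOBALS, f"unexpected globals used: {used - ALLOWED_GLOBALS}"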

Related

Get test result in Spock "cleanup" method

Is it possible, in the cleanup method in Spock, to check whether a feature (or even better, the current iteration of a feature) passed or failed? In Java's JUnit/TestNG/Cucumber it can be done in one line. But what about Spock?
I've found similar questions here:
Find the outcome/status of a test in Specification.cleanup()
Execute some action when Spock test fails
But both seem overcomplicated, and those answers are years old. Is there any better solution?
Thanks in advance
Update: the main goal is to save screenshots and perform some additional actions only for failed tests in my Geb/Spock project.
It is not over-complicated IMO; it is a flexible approach to hooking into events via listeners and extensions. The cleanup: block is there to clean up test fixtures, as the name implies. Reporting or other things based on the test result are to be done in a different way.
Having said that, the simple and short answer to your question is: this still is the canonical way to do it. By the way, you didn't tell us what you want to do with the test result in the clean-up block. This kind of thing - explaining how you want to do something but not explaining why (i.e. which problem you are trying to solve) - is called the XY problem.

Ordering testcase execution - Pytest

I went through some answers for the same issue on Stack Overflow. Most of the answers suggest using markers. I want to see whether any other alternative is available. Here is my requirement.
test_file1.py:
    test_class1
        test_case1
        test_case2
        test_case3 ... and so on
test_file2.py:
    test_class1
        test_case1
        test_case2
        test_case3 ... and so on
Like this I have around 30-40 different .py files with different tests that are specific to testing different functionalities. Now I would like to know how to order the test execution. Is there any way I can trigger the execution from a single file where I have defined my order of execution? Please help me solve this.
You've reached the point where you need to leverage the unit testing framework to do the work for you, without having to write the setup and teardown code yourself.
I will assume you're using Python's unittest module; if you're not, whatever framework you are using almost certainly has an equivalent of what I'm about to show you (and if it doesn't, it's probably not worth using and you should switch to something better).
As per the docs, you can add a method called setUp(), whereby:
... the testing framework will automatically call it for every single test we run.
import unittest

class YourTestClass(unittest.TestCase):
    def setUp(self):
        # Call the code to be set up here and save the results into
        # fields, e.g. self.whatever = calculate(...)
        self.whatever = ...

    def test_something(self):  # the name must start with "test" to be discovered
        # Make use of self.whatever, which the automatically run setUp()
        # has provided for you, since the unit testing framework has
        # called it before this test is run
        ...
Then there's also tearDown() if you need it, which runs code after every unit test.
This means when you implement setUp (and optionally tearDown), it would run like:
test_class1.setUp()
test_class1.test_case1
test_class1.tearDown()
test_class1.setUp()
test_class1.test_case2
test_class1.tearDown()
test_class1.setUp()
test_class1.test_case3
test_class1.tearDown()
...
There's a lot more you can do as well; for that I recommend checking out the docs. However, this should do what you want based on the comment you provided.
Now if you have code that is shared among your tests (for example when only some small parameters change), you should investigate the "Distinguishing test iterations using subtests" section at the link provided to see how to use subTest() to handle this for you. This requires Python 3.4+ (which I hope you are using, and not something archaic like 2.7).
Finally, if you want to run something just once before all the tests, you can use setUpClass(), which does the same thing as setUp() except that it runs once per class. See the docs for more details.
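To make this concrete, here is a minimal sketch combining setUpClass(), setUp() and subTest(); the class name, data and assertions are made up for illustration only:

import unittest

class DatasetTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before all tests in this class - e.g. expensive shared setup.
        cls.shared_config = {"threshold": 10}

    def setUp(self):
        # Runs before every single test - e.g. a fresh working list per test.
        self.items = []

    def test_threshold_applies_to_each_sample(self):
        # subTest() reports each failing sample separately instead of
        # stopping at the first failure (Python 3.4+).
        for sample in (3, 7, 9):
            with self.subTest(sample=sample):
                self.assertLess(sample, self.shared_config["threshold"])

if __name__ == "__main__":
    unittest.main()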
In short:
Use setUp() if you need to do something before every test
Use setUpClass() if you need to do something once before all the tests
Look into subTest() if you have a lot of tests that are similar and only the data you feed into the tests change, but this is only to make your code cleaner

Time virtualisation on linux

I'm attempting to test an application which has a heavy dependency on the time of day. I would like to have the ability to execute the program as if it was running in normal time (not accelerated) but on arbitrary date/time periods.
My first thought was to abstract the time retrieval function calls behind my own library calls, which would allow me to alter the behaviour for testing, but I wondered whether it would be possible without adding conditional logic to my code base or building a test variant of the binary.
What I'm really looking for is some kind of localised time domain. Is this possible with a container (like Docker) or by using LD_PRELOAD to intercept the calls?
I also saw a patch that enabled time to be disconnected from the system time using unshare(COL_TIME), but it doesn't look like this got in.
It seems like a problem that must have been solved numerous times before; is anyone willing to share their solution(s)?
Thanks
AJ
Whilst alternative solutions and tricks are great, I think you're severely overcomplicating a simple problem. It's completely common and acceptable to include certain command-line switches in a program for testing/evaluation purposes. I would simply include a command line switch like this that accepts an ISO timestamp:
./myprogram --debug-override-time=2014-01-01T12:34:56Z
Then at startup, if the switch is set, compute the offset from the current system time, make a local apptime() function which corrects the output of the regular system time calls by that offset, and call that everywhere in your code instead.
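A rough sketch of that idea in Python (the --debug-override-time flag name comes from the example above, and apptime() is the helper this answer describes; the rest is illustrative only):

import argparse
from datetime import datetime, timedelta, timezone

_offset = timedelta(0)  # zero by default, so apptime() equals real time in production

def apptime():
    # Application code calls this everywhere instead of datetime.now().
    return datetime.now(timezone.utc) + _offset

def parse_args(argv=None):
    global _offset
    parser = argparse.ArgumentParser()
    parser.add_argument("--debug-override-time", default=None,
                        help="ISO timestamp to treat as 'now', e.g. 2014-01-01T12:34:56+00:00")
    args = parser.parse_args(argv)
    if args.debug_override_time:
        fake_now = datetime.fromisoformat(args.debug_override_time)
        if fake_now.tzinfo is None:
            fake_now = fake_now.replace(tzinfo=timezone.utc)
        _offset = fake_now - datetime.now(timezone.utc)
    return args

From then on the application still runs in normal (non-accelerated) time, just shifted by a fixed offset, which is exactly the requirement in the question.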
The big advantage of this is that anyone can reproduce your testing results without a big read-up on custom Linux tricks - including an external testing team or a future co-developer who's good at coding but not at runtime tricks. When (unit) testing, being able to just call your code with a simple switch and check the results for equality against a sample set is a major advantage.
You don't even have to document it; lots of production tools in enterprise-grade products have hidden command-line switches for this kind of behaviour that the 'general public' need not know about.
There are several ways to query the time on Linux. Read time(7); I know at least time(2), gettimeofday(2), clock_gettime(2).
So you could use LD_PRELOAD tricks to redefine each of these to e.g. subtract from the seconds part (not the micro-second or nano-second part) a fixed number of seconds, given e.g. by some environment variable. See this example as a starting point.

Profiling JUnit tests with PowerMock?

We have a couple of very, very slow JUnit tests that make heavy use of mocking, including mocking of static functions. Single tests take 20-30 seconds; the whole "mvn test" takes 25 minutes.
I want to analyze where the time is wasted but have little experience in profiling.
I assume that the initialization of the dependent mock-objects takes much too long.
Two questions:
1) How can I quickly get numbers on which methods the time is wasted in? I don't need a complex power-user tool, just something basic to get the numbers (evidence that the kind of mocking we do is evil).
2) Do you have ideas about which design flaws can produce such bad timings? We test JSF backing beans that should call mocked services. Perhaps there is some input validation or un-refactored business logic in the backing beans, but that cannot be changed (please don't comment on that ;-) ).
Regarding 2): for example, one test has about 30 (!) classes to be prepared for test with @PrepareForTest. This cannot be good, but I cannot explain why.
Here is my input on this:
Try using something simple like the Apache Commons StopWatch class. I find that this is an easy way to spot bottlenecks in code, and usually once you find the first bottleneck the rest of them are easier to spot. I almost never waste my time trying to configure an overly complicated profiling tool.
I think it is odd that you have such performance flaws in fully mocked unit tests. If I were to guess, I would say that you are missing one or two mocked components and the database or external web services are actually being called without you knowing about it. Of course I may be wrong, because I don't use PowerMock and I make it a point to never mock any static methods. That is your biggest design flaw right now and the biggest hindrance to providing good test coverage on your code. So what to do? You have 2 options: you can refactor the static methods into class methods that can be more easily mocked, or you can wrap the static methods in a class object wrapper and then mock the wrapper instead. I typically do this if the static methods are from a third-party library where I do not have the source.
"one test has about 30 (!) classes to be prepared for test with @PrepareForTest. This cannot be good, but I cannot explain why." This really sounds like you may also have methods that are doing entirely too much! That is just too many dependencies for a single method in about 99% of cases. More than likely this method can be separated into separate, more easily testable methods.
Hope this helps.

Groovy and dynamic methods: need Groovy veteran enlightenment

First, I have to say that I really like Groovy and all the good stuff it is bringing to the Java dev world. But since I'm using it for more than little scripts, I have some concerns.
In this Groovy help page about dynamic vs static typing, there is this statement about the absence of a compilation error/warning when you have a typo in your code, because it could be a call to a method added later at runtime:
It might be scary to do away with all of your static typing and compile time checking at first. But many Groovy veterans will attest that it makes the code cleaner, easier to refactor, and, well, more dynamic.
I pretty much agree with the 'more dynamic' part, but not with cleaner and easier to refactor:
For the other two statements I'm not sure: from my Groovy-beginner perspective, this results in less code, but in code that is more difficult to read later and more troublesome to maintain (you can no longer rely on the IDE to find who declares a dynamic method and who uses one).
To clarify, I find that reading groovy code is very pleasant, I love the collection and closure (concise and expressive way of tackle complicated problem).
But I have a lot of trouble in these situations:
no more auto-completion inside 'builders' using Map (of Map (of Map)) everywhere
confusing dynamic method calls (you don't know if it is a typo or a dynamic name)
method extraction is more complicated inside closures (often resulting in duplicated code: 'it is only a small closure after all')
hard to guess closure parameters when you have to write one for a method of a subsystem
no more learning by browsing the code: you have to use text search instead
I can only see some benefits with GORM, but in this case the dynamic methods are well known and my IDE is aware of them (so to me it looks more like systematic code generation than dynamic methods).
I would be very glad to learn from Groovy veterans how they can attest to these benefits.
It does lead to different classes of bugs and processes. It also makes writing tests faster and more natural, helping to alleviate the bug issues.
Discovering where behavior is defined, and used, can be problematic. There isn't a great way around it, although IDEs are getting better at it over time.
Your code shouldn't be more difficult to read--mainline code should be easier to read. The dynamic behavior should disappear into the application, and be documented appropriately for developers that need to understand functionality at those levels.
Magic does make discovery more difficult. This implies that other means of documentation, particularly human-readable tests (think easyb, spock, etc.) and prose, become that much more important.
This is somewhat old, but I'd like to share my experience in case someone comes looking for some thoughts on the topic:
Right now we are using Eclipse 3.7 and groovy-eclipse 2.7 on a small team (3 developers), and since we don't have test scripts, we do most of our Groovy development by explicitly using types.
For example, when writing service class methods:
void validate(Product product) {
    // groovy stuff
}

Box pack(List<Product> products) {
    def box = new Box()
    box.value = products.inject(0) { total, item ->
        // some BigDecimal calculations =)
    }
    box
}
We usually fill in the types, which enables Eclipse to autocomplete and, most importantly, allows us to refactor code, find usages, etc.
This blocks us from using metaprogramming, except for Categories, which I found are supported and detected by groovy-eclipse.
Still, Groovy is pretty good and a LOT of our business logic is in groovy code.
We had two issues in production code when using Groovy, and both cases were due to bad manual testing.
We also have a lot of XML building and parsing, and we validate it before sending it to webservices and the likes.
There's a small script we use to connect to an internal system whose usage is very restricted (and not needed in other parts of the system). This code I developed using entirely dynamic typing, overriding methods via the metaclass and all that stuff, but it is an exception.
I think Groovy 2.0 (with groovy-eclipse coming along, of course) and its @TypeChecked will be great for those of us who use Groovy as a "Java++".
To me there are 2 types of refactoring:
IDE-based refactoring (extract to method, rename method, introduce variable, etc.).
Manual refactoring (moving a method to a different class, changing the return value of a method).
For IDE-based refactoring I haven't found an IDE that does as good a job with Groovy as it does with Java. For example, in Eclipse, when you extract to a method it looks for duplicate instances to refactor into calls to the new method instead of leaving duplicated code. For Groovy, that doesn't seem to happen.
Manual refactoring is where I believe that you could see refactoring made easier. Without tests though I would agree that it is probably harder.
The statement about cleaner code is 100% accurate. I would venture a guess that going from good Java to good Groovy code is at least a 3:1 reduction in lines of code. Being a newbie at Groovy, though, I would strive to learn at least one new way to do something every day. Something that greatly helped me improve my Groovy was to simply read the APIs. I feel that Collection, String, and List are probably the ones that have the most functionality and that I used the most to help make my Groovy code actually Groovy.
http://groovy.codehaus.org/groovy-jdk/java/util/Collection.html
http://groovy.codehaus.org/groovy-jdk/java/lang/String.html
http://groovy.codehaus.org/groovy-jdk/java/util/List.html
Since you edited the question I'll edit my answer :)
One thing you can do is tell IntelliJ about the dynamic methods on your objects: "What does 'add dynamic method' do in Groovy/IntelliJ?". That might help a little bit.
Another trick that I use is to type my objects when doing the initial coding and remove the typing when I'm done. For example I can never seem to remember if it's .substring(..) or .subString(..) on a String. So if you type your object you get a little better code completion.
As for your other bullet points, I'd really need to look at some code to be able to give a better answer.
