I went through some answers for the same issue on Stack Overflow. Most of the answers suggest using markers. I want to see whether any other alternative is available. Here is my requirement:
test_file1.py:
    test_class1
        test_case1
        test_case2
        test_case3 ...and so on
test_file2.py:
    test_class1
        test_case1
        test_case2
        test_case3 ...and so on
Like this, I have around 30-40 different .py files with different tests that are specific to testing different functionalities. Now I would like to know how to order the test execution. Is there any way I can trigger the execution from a single file where I have defined my order of execution? Please help me solve this.
You've reached the point where you need to leverage the unit testing framework to do this for you, without having to write the set-up and tear-down code yourself.
I will assume you're using Python's unittest module; if you're not, whatever framework you are using almost certainly has an equivalent of what I'm about to show you (and if it doesn't, it's probably not worth using and you should switch to something better).
As per the docs, you can add a method called setUp(), which:
... the testing framework will automatically call for every single test we run.
import unittest

class YourTestClass(unittest.TestCase):
    def setUp(self):
        # Put your set-up code here and save the results into fields,
        # e.g. self.whatever = calculate(...)
        ...

    def test_something(self):  # test methods must be named test_*
        # Make use of the self.whatever that your automatically run
        # setUp() has provided for you, since the unit testing framework
        # has called it before this test is run.
        ...
Then there's also tearDown() if you need it, which runs code after every unit test.
This means that when you implement setUp() (and optionally tearDown()), the tests would run like:
test_class1.setUp()
test_class1.test_case1
test_class1.tearDown()
test_class1.setUp()
test_class1.test_case2
test_class1.tearDown()
test_class1.setUp()
test_class1.test_case3
test_class1.tearDown()
...
There's a lot more you can do as well; for that, I recommend checking out the docs. However, this should do what you want based on the comment you provided.
Now if you have code that is shared among the tests (where maybe only some small parameters change), you should investigate the "Distinguishing test iterations using subtests" section at the link provided to see how to use subTest() to handle this for you. This requires Python 3.4+ (which I hope you are using and not something archaic like 2.7).
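As a rough illustration (the class name and data below are made up, not from your code), a subTest() loop looks something like this:

import unittest

class EvenNumberTest(unittest.TestCase):
    def test_values_are_even(self):
        # Each failing value is reported separately instead of the
        # whole test stopping at the first failure.
        for value in (2, 4, 6, 8):
            with self.subTest(value=value):
                self.assertEqual(value % 2, 0)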
Finally, if you want to run something just once before all the tests in a class, you can use setUpClass(), which does the same thing as setUp() except that it's run once per class (note that it has to be a classmethod). See the docs for more details.
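A minimal sketch of that (again, the names here are just placeholders):

import unittest

class SharedFixtureTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once before any test in this class; note the @classmethod.
        cls.shared_resource = ["expensive", "to", "build"]

    def test_resource_is_available(self):
        self.assertIn("expensive", self.shared_resource)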
In short:
Use setUp() if you need to do something before every test
Use setUpClass() if you need to do something once before all the tests
Look into subTest() if you have a lot of tests that are similar and only the data you feed into the tests changes, but this is only to make your code cleaner
I have a situation and I could not find anything online that would help.
My understanding is that Python testing is rigorous to ensure that if someone changes a method, the test will fail and alert the developers to go rectify the difference.
I have a method that calls 4 other methods from other classes. Patching made it really easy for me to determine whether a method has been called. However, let's say someone on my team decides to add a 5th method call; the test will still pass. Assuming that no other method calls should be allowed inside, is there a way in Python to test that no other calls are made? Refer to example.py below:
example.py:
def example():
    classA.method1()
    classB.method2()
    classC.method3()
    classD.method4()
    classE.method5()  # we do not want this method in here; the test should fail if it detects a 5th or more method call
Is there any way to cause the test case to fail if any additional method calls are added?
You can easily test (with mock, or doing the mocking manually) that example() does not specifically call classE.method5, but that's about all you can expect; it won't work (unless explicitly tested too) for, e.g., classF.method6(). Such a test would require either parsing the example function's source code or analysing its bytecode representation.
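For illustration only, here is a rough sketch of both ideas, assuming the code above lives in a hypothetical module named example_module (all names are made up): the first test only rules out the specific classE.method5 call, while the second inspects the bytecode for the set of global names the function touches.

from unittest import mock
import dis
import unittest

import example_module  # hypothetical module containing example() and the classes

class ExampleCallTest(unittest.TestCase):
    def test_method5_is_not_called(self):
        # Only guards against this one specific call.
        with mock.patch.object(example_module, "classA"), \
             mock.patch.object(example_module, "classB"), \
             mock.patch.object(example_module, "classC"), \
             mock.patch.object(example_module, "classD"), \
             mock.patch.object(example_module, "classE") as fake_e:
            example_module.example()
        fake_e.method5.assert_not_called()

    def test_example_only_touches_expected_globals(self):
        # Crude white-box check: collect the global names the function's
        # bytecode loads and fail if anything unexpected shows up.
        allowed = {"classA", "classB", "classC", "classD"}
        used = {
            instr.argval
            for instr in dis.get_instructions(example_module.example)
            if instr.opname == "LOAD_GLOBAL"
        }
        self.assertLessEqual(used, allowed)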
This being said:
my understanding is that python testing is rigorous to ensure that if someone changes a method, the test would fail
I'm afraid your understanding is a bit off - it's not about "changing the method", it's about "unexpectedly changing behaviour". IOW you should first test for behaviour (black box testing), not for implementation (white box testing). Now the distinction between "implementation" and "behaviour" can be a bit blurry depending on the context (you can consider that "calling X.y()" is part of the expected behaviour and it sometimes makes sense indeed), but the distinction is still important.
With regard to your current use case (and without more context, i.e. why shouldn't the function call anything else?), I personally wouldn't bother trying to be that defensive; I'd just clearly document this requirement as a comment in the example() function itself, so anyone editing this code immediately knows what they should not do.
I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now, ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So, for now, that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags, which I think haven't yet been published in a release but are in master. They work with callback tags; you can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)
Requirement:
I have 2 test cases, and the number will grow in the future. I need a way to run these 2 test cases in multiple environments in parallel at runtime.
So I can either make multiple copies of these test cases for the different environments, add them to an empty test suite, and set them to run in parallel, all of this using a Groovy script.
Or find a way to run each test case in parallel through some code.
I tried tcase.run(properties, async)
but it did not work.
Need help.
Thank you.
You are mixing together unrelated things.
If you have a non-Pro installation, then you can parameterize the endpoints. This is accomplished by editing all your endpoints with a SoapUI property, and passing these to your test run. This is explained in the official documentation.
If you have a -Pro license, then you have access to the Environments feature, which essentially wraps the above for you in a convenient manner. Again: consult official documentation.
Then a separate question is how to run these in parallel. That will very much depend on what you have available. In the simplest case, you can create a shell script that calls testrunner the appropriate number of times with appropriate arguments. Official documentation is available. There are also options to run from Maven - official documentation - in which case you can use any kind of CI to run these.
I do not understand how Groovy would play into all this, unless you would like to get really fancy and run all this from junit, which also has official documentation available.
If you need additional information, you could read through SO's official documentation and perhaps clarify your question.
I'm attempting to test an application which has a heavy dependency on the time of day. I would like to have the ability to execute the program as if it was running in normal time (not accelerated) but on arbitrary date/time periods.
My first thought was to abstract the time retrieval function calls with my own library calls which would allow me to alter the behaviour for testing but I wondered whether it would be possible without adding conditional logic to my code base or building a test variant of the binary.
What I'm really looking for is some kind of localised time domain, is this possible with a container (like Docker) or using LD_PRELOAD to intercept the calls?
I also saw a patch that enabled time to be disconnected from the system time using unshare(CLONE_NEWTIME), but it doesn't look like this got in.
It seems like a problem that must have been solved numerous times before; is anyone willing to share their solution(s)?
Thanks
AJ
Whilst alternative solutions and tricks are great, I think you're severely overcomplicating a simple problem. It's completely common and acceptable to include certain command-line switches in a program for testing/evaluation purposes. I would simply include a command-line switch like this that accepts an ISO timestamp:
./myprogram --debug-override-time=2014-01-01T12:34:56Z
Then at startup, if the switch is set, subtract it from the current system time to get an offset, and make a local apptime() function which corrects the output of the regular system time by that offset; call that everywhere in your code instead of the raw time functions.
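A minimal sketch of that idea, in Python purely for illustration (the flag name and apptime() are just the placeholders used above; the real program may of course be written in another language):

import argparse
import time
from datetime import datetime

parser = argparse.ArgumentParser()
parser.add_argument("--debug-override-time", default=None,
                    help="e.g. 2014-01-01T12:34:56Z")
args = parser.parse_args()

# Work out a fixed offset once at startup if the override was given.
_offset = 0.0
if args.debug_override_time:
    override = datetime.fromisoformat(
        args.debug_override_time.replace("Z", "+00:00"))
    _offset = override.timestamp() - time.time()

def apptime():
    # Use this everywhere in the application instead of time.time().
    return time.time() + _offset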
The big advantage of this is that anyone can reproduce your test results without a big read-up on custom Linux tricks, including an external testing team or a future co-developer who's good at coding but not at runtime tricks. When (unit) testing, it's a major advantage to be able to just call your code with a simple switch and check the results for equality against a sample set.
You don't even have to document it, lots of production tools in enterprise-grade products have hidden command line switches for this kind of behaviour that the 'general public' need not know about.
There are several ways to query the time on Linux. Read time(7); I know at least time(2), gettimeofday(2), clock_gettime(2).
So you could use LD_PRELOAD tricks to redefine each of these to, e.g., subtract a fixed number of seconds (given, for example, by some environment variable) from the seconds part (not the microsecond or nanosecond part). See this example as a starting point.
We have a couple of very, very slow JUnit tests that make heavy use of mocking, including mocking of static functions. Single tests take 20-30 seconds; the whole "mvn test" run takes 25 minutes.
I want to analyze where the time is wasted but have little experience in profiling.
I assume that the initialization of the dependent mock-objects takes much too long.
Two questions:
1) How can I quickly find out in which methods the time is wasted? I need no complex power-user tool, just something basic to get the numbers (evidence that the kind of mocking we do is evil).
2) Do you have ideas about what design flaws can produce such bad timings? We test JSF backing beans that should call mocked services. Perhaps there might be some input validation or not-yet-refactored business logic in the backing beans, but that cannot be changed (please don't comment on that ;-) ).
Regarding 2): for example, one test has about 30 (!) classes to be prepared for test with @PrepareForTest. This cannot be good, but I cannot explain why.
Here is my input on this:
Try using something simple like the Apache Commons StopWatch class. I find that this is an easy way to spot bottlenecks in code, and usually once you find the first bottleneck, the rest of them are easier to spot. I almost never waste my time trying to configure an overly complicated profiling tool.
I think it is odd that you have such performance flaws in fully mocked unit tests. If I were to guess, I would say that you are missing one or two mocked components and the database or external web services are actually being called without you knowing about it. Of course I may be wrong, because I don't use PowerMock and I make it a point never to mock any static methods. That is your biggest design flaw right now and the biggest hindrance to providing good test coverage of your code. So what to do? You have two options: you can refactor the static methods into instance methods that can be mocked more easily, or you can wrap the static methods in a class wrapper and mock the wrapper instead. I typically do the latter if the static methods come from a third-party library for which I do not have the source.
"one test has about 30 (!) classes to be prepared for test with @PrepareForTest. This cannot be good, but I cannot explain why." This really sounds like you may also have methods that are doing entirely too much! That is just too many dependencies for a single method in about 99% of cases. More than likely this method can be separated into separate, more easily testable methods.
Hope this helps.