TestNG: Run only one method/class "singlethreaded"*

I have to test one class for its output to System.out. To do so I redirect System.out to my own OutputStream, let my method print its stuff, and check what my stream has recorded. Afterwards I restore System.out.**
All is fine until other tests run in parallel and write to stdout as well; their output gets mixed into my stream and my tests fail.
Is there a way (preferably via annotations) to tell TestNG to run this test class/method with no other test running in parallel, while still letting the other classes run in parallel?
Thanks a lot
*) "singlethreaded" is not the right word here but I don't know a better one to put it in headlines.
**) I know that this is a bit hacky but I can't inject a stream to this class for reasons so ancient only the balrog knows.
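For context, here is a minimal sketch of the capture-and-restore setup described above, assuming plain TestNG; StdoutCaptureTest and LegacyPrinter are illustrative names, not the asker's code. Note that, as far as I know, TestNG's @Test(singleThreaded = true) only forces all methods of one class onto a single thread; it does not keep other classes from running alongside it, which is exactly the gap the question describes.

```java
import org.testng.Assert;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class StdoutCaptureTest {

    // stands in for the legacy class that prints straight to System.out
    static class LegacyPrinter {
        void print() { System.out.println("expected output"); }
    }

    private final ByteArrayOutputStream captured = new ByteArrayOutputStream();
    private PrintStream originalOut;

    @BeforeMethod
    public void redirectStdout() {
        originalOut = System.out;
        System.setOut(new PrintStream(captured, true)); // autoflush on newline
    }

    @AfterMethod(alwaysRun = true)
    public void restoreStdout() {
        System.setOut(originalOut);
    }

    @Test
    public void printsExpectedOutput() {
        new LegacyPrinter().print();
        Assert.assertTrue(captured.toString().contains("expected output"));
    }
}
```

Anything another parallel test writes to System.out between the redirect and the restore lands in the same stream, which is precisely why the assertions become flaky.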

Related

Handling Multithreading in XML files for running testcases in parallel

I'm new to multithreading; here is my problem statement.
I have an XML file (TestCase.xml) where each tag represents a test case, something like below:
TestCase.xml
In turn, each main tag has a child tag that links to another XML file (TestStep.xml), which dictates the steps of the test case; that is the TS in the example above.
TestStep.xml
Execution always starts from TestCase.xml, based on the id provided. With this overview: I have 100 test cases in my suite and I want to execute them in parallel, i.e. run at least 5-6 test cases at the same time. I'm not able to use external plug-ins like TestNG, JUnit, BDD frameworks, or Maven Surefire. After a lot of R&D we have ended up with plain multithreading, and I need assistance on how to implement it.
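Since external frameworks are off the table, the JDK's own java.util.concurrent is the usual building block. Below is a hypothetical sketch, not the asker's code: a fixed pool of five worker threads runs test-case ids in parallel, and runTestCase() is a placeholder for the XML-driven lookup described above.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelSuiteRunner {

    public static void main(String[] args) throws InterruptedException {
        List<String> testCaseIds = List.of("TC-1", "TC-2", "TC-3"); // e.g. read from TestCase.xml
        ExecutorService pool = Executors.newFixedThreadPool(5);     // 5-6 cases at a time

        for (String id : testCaseIds) {
            pool.submit(() -> runTestCase(id));
        }

        pool.shutdown();                          // accept no new tasks, let queued ones finish
        pool.awaitTermination(1, TimeUnit.HOURS); // wait for all test cases to complete
    }

    private static void runTestCase(String id) {
        // placeholder: look up <id> in TestCase.xml, follow its TS child tag
        // into TestStep.xml and execute each step
        System.out.println(id + " running on " + Thread.currentThread().getName());
    }
}
```

Bounding the pool at five gives the "5-6 test cases at the same time" behaviour; submitting all 100 ids up front simply queues the rest until a worker frees up.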

How do I log to different files from different threads in Python?

I have a test-suite harness which is used to run test scripts (classes defined therein, actually). As it iterates through the tests, it manipulates the Python logger so that the log messages are all output to different files, each associated with its own test (class). This works fine for tests run sequentially, where I can control the log handlers on the root logger, which lets all log messages (from whatever libraries the test classes may use) end up in the proper test log file.
But what I am really trying to figure out is how to run such tests in parallel (via threading or multiprocessing) such that each thread will have its own log file to place all such messages.
I believe that I still need to manipulate the root logger, because that is the only place both tests and the libraries they use will converge on to do all logging to a common place.
I was thinking that I could add a handler for each thread containing a log filter that only accepts records from that particular thread, and that would get me close (I haven't tried this yet, but it seems possible in theory; a sketch follows below). This might even be the full solution, except for one thing: I cannot tell test writers not to use threads themselves in their tests. I'm fine with test-internal threads all logging to the one file, but these new threads would fail to log to the file their parent thread is logging to, because the filter doesn't know anything about them.
And I could be mistaken, but it seems that threading.Thread objects cannot determine their own parent thread? That precludes a better filter that accepts messages generated in a thread or any of its child/descendant threads.
Any suggestions about how to approach this would be great.
Thanks,
Bruce
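For what it's worth, here is a hypothetical sketch of the handler-plus-filter idea from the question, matching records via the standard LogRecord.threadName attribute; names and file paths are illustrative.

```python
import logging
import threading

class ThreadNameFilter(logging.Filter):
    """Accept only records created on the thread with the given name."""

    def __init__(self, thread_name):
        super().__init__()
        self.thread_name = thread_name

    def filter(self, record):
        return record.threadName == self.thread_name

def add_per_thread_handler(thread_name, path):
    handler = logging.FileHandler(path)
    handler.addFilter(ThreadNameFilter(thread_name))
    logging.getLogger().addHandler(handler)  # root logger, as discussed above
    return handler

def run_test(test_name):
    # libraries used by the test also log through the root logger, so their
    # records carry the same threadName and land in the same file
    logging.getLogger(__name__).info("running %s", test_name)

logging.getLogger().setLevel(logging.INFO)

threads = []
for name in ("test_a", "test_b"):
    add_per_thread_handler(name, f"{name}.log")
    t = threading.Thread(target=run_test, args=(name,), name=name)
    threads.append(t)
    t.start()

for t in threads:
    t.join()
```

As the question anticipates, a thread spawned inside a test carries a different threadName and slips past the filter, and the standard library does not record a thread's parent, so a descendant-aware filter would need the tests to propagate some marker themselves (for example via the thread name or a threading.local).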

Python3 multiprocessing can only test a child process

This is quite complex to explain, and there is a lot of code involved which I cannot paste here.
I have a test application which executes test cases through designated plugins.
When the app is executed it creates a separate process called the Writer; this module handles all updates to a web page which holds the running information about the test cases and their state.
For this Writer I also create an interface (WrIf). This interface holds a Queue to the Writer process together with a weakref.proxy().
Now, when the test app starts executing its test cases, it creates a new process from which it can call the specific plugin. That means the WrIf is "serialized" (pickled) into this subprocess.
On every call, the WrIf checks whether the main Writer process is still running. However, here is where I get a problem: the following assertion fires when I call is_alive():

```
assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
```
I can expand on this somewhat; however, I think it will get muddy fast because the app is rather large and somewhat complex.
Regards
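The assertion itself comes from multiprocessing: a Process object stores the pid of the process that created it, and is_alive() asserts that it is called from that same parent, so probing the handle from the plugin subprocess can never work. A common workaround, sketched hypothetically below under POSIX assumptions, is to pass the Writer's pid instead of the Process handle and probe it with os.kill(pid, 0).

```python
import os

def writer_running(writer_pid):
    """Return True if a process with the given pid still exists (POSIX)."""
    try:
        os.kill(writer_pid, 0)      # signal 0: existence/permission check only
        return True
    except ProcessLookupError:      # no process with that pid
        return False
    except PermissionError:         # process exists but belongs to someone else
        return True
```

Signal 0 delivers nothing; the call merely fails if the pid no longer exists, which is all the liveness check needs. (A just-exited but unreaped Writer would still count as alive until its parent collects it.)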

How to run a parallel fork as a single thread in Perl?

I was trying to check response messages in a Perl program which takes requests through the Amazon API and returns responses. How do I run a parallel fork as a single thread in Perl? I'm using the LWP::UserAgent module and I want to debug the HTTP requests.
As a word of warning: threads and forks are different things in Perl. Very different.
However, the long and short of it is: you can't, at least not trivially. A fork is a separate process. A fork actually happens when you run any external command in Perl; it's just that by default Perl sits and waits for that command to finish and return its output.
However, if you've got access to the code, you can amend it to run single-threaded; sometimes that's as simple as reducing the parallelism with a config parameter. (In fact, quite often debugging parallel code is a much more complicated task than debugging sequential code, so getting it working before running in parallel is really important.)
You might be able to embed a waitpid into your primary code so you've only got one thing running at once. Without a code example, though, it's impossible to say for sure.
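To illustrate the waitpid suggestion, here is a hypothetical sketch, not the asker's code: it forks one worker per request but waits for each child to exit before starting the next, so the "parallel" structure runs serially. handle_request() is a placeholder for the real per-request work.

```perl
use strict;
use warnings;

sub handle_request {
    my ($req) = @_;
    print "handling $req in pid $$\n";   # placeholder for the real work
}

my @requests = ('req-1', 'req-2', 'req-3');

for my $req (@requests) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {              # child: do the work, then exit
        handle_request($req);
        exit 0;
    }
    waitpid($pid, 0);             # parent blocks until this child exits
}
```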

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three Cucumber feature files. When run individually, as with:
```
cucumber features/quota.feature
```
or:
```
cucumber features/quota.feature:67   # specifying the specific individual test
```
...each feature file runs fine. However, when all run together, as with:
```
cucumber
```
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenarios run individually, the system is in that state, so the scenario passes. But when you run all the scenarios, the scenarios that ran previously change this state, and so it fails.
You should solve it by making your scenarios completely independent: the work of one scenario shouldn't influence the results of any other. This is highly encouraged in The Cucumber Book and in Specification by Example.
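One common way to enforce that independence in a Rails/Cucumber setup is to reset shared state before every scenario. The sketch below is illustrative, assuming the database_cleaner gem rather than anything from the question; it would live in a Cucumber support file such as features/support/hooks.rb.

```ruby
require 'database_cleaner'

# wipe the database before each scenario so no scenario inherits state
DatabaseCleaner.strategy = :truncation

Before do
  DatabaseCleaner.clean
end
```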
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that was sending a POST request. I had a bug in the JavaScript, and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called:

```ruby
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
```
@server.error was not nil for each scenario in the following feature(s), so Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right: I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
