Difference between test strategy and test plan? - manual-testing

Going by many articles and Google results, it is very hard to understand or explain the exact difference between a test plan and a test strategy. I recently went through an interview, and it seems my answer didn't convince the interviewer.
If anyone can help me understand the thin line between a test plan and a test strategy, please do. Thanks.

Here is the difference between a Test Strategy and a Test Plan:
A Test Strategy is written at a high level, mostly by the QA Manager, and defines the approach to testing. It is derived from the Business Requirement Document and basically sets the standards for testing.
Test Strategy contains the following:
Scope
Business Challenges
Testing approaches
Test deliverables
Bug tracking approaches
Automation
Risks
A Test Plan is written by a Senior/Lead QA and includes the details of testing, i.e. how to test, the features to be tested, and the types of testing. The Test Plan is derived from the SRS.
Test Plan contains the following:
Objective
Test Environments
Features In/Out Scope
Entry/Exit Criteria
Types of testing
Status

Go by the words. A Test Plan means we are planning for a particular scope: we have to test on a given environment within a given time frame, as defined in the Test Plan. The strategy, on the other hand, is common at the organization level; it is a high-level description of testing - what kind of testing, where, and so on.

Typically, a Test Strategy is developed by the project manager to define an overall approach to testing an application. It outlines what is needed to achieve the defined testing objectives. You can have a separate test strategy for manual and automated testing to specify factors such as the scope of what needs to be tested, which tools will be used, which standards will be followed, and how bugs will be tracked and reported.
There are usually multiple Test Plans that focus on different areas and features of the application. They detail the functionality that will be tested, a step-by-step plan of how it will be tested, and the acceptance criteria that determine whether the implementation passes or fails. A Test Plan should include a timeline of when testing must start and finish, who will be responsible, and the details of the testing environment.

Related

Quarantine Test in GitLab

GitLab provides an option to mute/quarantine flaky tests, as mentioned in the GitLab documentation.
I understand it's not the best of practices, but I do want to explore it.
Some of the examples I was able to find are here - Quarantine tests that are very flaky - but they are in the Ruby language.
Please help me understand whether this can be done in languages like C# (.NET) or Java. An example would be highly appreciated.
As per Documentation:
If the test cannot be fixed in a timely fashion, there is an impact on the productivity of all the developers, so it should be placed in quarantine by assigning the :quarantine metadata.
I am not able to figure out how we can do that in tests using NUnit (.NET) or JUnit (Java).
The "Flaky tests" document is in the contributor section and must not be read as product documentation intended for users of GitLab.
The page linked in the question describes the guidelines and procedures that GitLab's own developers follow when they encounter a flaky test in GitLab's codebase.
A feature to mute/quarantine flaky tests does not appear to be planned for implementation at the moment.
You can upvote or comment on the issue that was opened for this request here.
As a solution with existing resources, you can make use of allow_failure: on the relevant step/job in the pipeline.
In this case, you will have two jobs:
Job 1: non-flaky tests, with allow_failure: false
Job 2: flaky tests, with allow_failure: true
example 1
example 2
This way, the test cases known to be flaky are handled only by Job 2, which is allowed to fail and never blocks the pipeline.
As a suggestion for identifying/filtering tests at the job level: you can use a filter at the test level with a category attribute, or read the test names from a text file/data source and filter based on that.
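As a hedged illustration of the category idea for NUnit (the category name "Quarantine" and the test names below are placeholders I am assuming, not anything GitLab or NUnit prescribes), the flaky test is tagged so the two jobs can select or exclude it:

```csharp
using NUnit.Framework;

[TestFixture]
public class CheckoutTests
{
    [Test]
    public void Total_is_the_sum_of_line_items()
    {
        // Stable test: picked up by the blocking job (allow_failure: false).
        Assert.That(2 + 3, Is.EqualTo(5));
    }

    [Test, Category("Quarantine")]
    public void Checkout_survives_parallel_submissions()
    {
        // Known-flaky test: selected only by the non-blocking job (allow_failure: true).
        Assert.Pass();
    }
}
```

In .gitlab-ci.yml, the blocking job would then run something like dotnet test --filter TestCategory!=Quarantine with allow_failure: false, and the quarantine job dotnet test --filter TestCategory=Quarantine with allow_failure: true; the JUnit 5 equivalent would be a @Tag("quarantine") annotation filtered by the build tool. Treat the exact filter syntax as an assumption to verify against the NUnit test adapter documentation.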

Unit testing a python 3.x code written for Google AppEngine flexible environment and using Cloud Datastore Api

Is there a way to unit test code that uses the Cloud Datastore API and is written for the flexible environment? testbed seems to be tied to the standard environment, and using the emulator looks like it would require launching and shutting down the emulator process, which is usually a flaky approach for unit tests.
We ended up with end-to-end testing (launching the tests against a real database in a dev environment, for example). As we have a tenant-based application, each test run simply creates a new tenant and performs all operations in the scope of that tenant, so there should not be any inconsistency. On the other hand, such a solution is pretty slow.
The solution above is, I believe, simply the easiest one.
Another option would be to split your code into db-dependent parts and a business-logic part. In this case you would test only the business-logic part and mock the db dependency. But when we investigated this solution, we found that we have a lot of code consisting of one line of db write operations and only 1-3 lines of business logic. Splitting such code into different layers would be meaningless for testing and maintenance.
I guess the last option, more generic than the previous one, is to mock the db. For each module that uses the db, you would inject a mocked database before testing it, with the mock defining some canned responses. But in this case it is easy to fall into testing the implementation instead of the behavior, which again means such testing becomes quite ineffective.
I guess this question is really about testing approaches in general, and not about Datastore itself.

Test cases documentation compatible with cucumber, test automation and manual tests

I'm working on a strategy for my company, which provides testing/development services. I implement test automation for both web and mobile apps using Selenium/Appium, JUnit, and Cucumber.
In my company test cases are written in traditional form:
1) Go to X
2) Perform action Y
3) Go to W
4) Perform action Z
Expected result: The application does ... .
But in Cucumber I use behavioral language that more or less describes similar actions. I have also read this article: http://markoh.co.uk/posts/three-reasons-to-use-cucumber-for-test-automation and I wonder if we should write all our test cases in the Cucumber language. For test automation, it would then just be a copy & paste to get a feature file written. I assume this is a web or mobile app with a GUI.
Is this a good idea?
Do you have any experience with this kind of test case documentation in the long term?
Could manual testers have difficulties using test cases written in this manner instead of the traditional style?
Any input appreciated!
The main advantage of Cucumber test cases is their reliability: you will not be able to change a test scenario without updating the code. Cucumber also lets you factor out common procedures that may be useful even in manual tests. The test cases are self-documenting, so we usually don't have any difficulty with technical personnel reading the scenarios. I successfully used that approach in my previous job and I am going to introduce it now as well. I would also suggest using Cucumber's Background feature, which allows you to define test prerequisites.
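As a rough sketch of how the traditional steps from the question could be rewritten as a Cucumber scenario with a Background for the prerequisites (the feature name, pages, and actions below are placeholders, not taken from any real suite):

```gherkin
Feature: Performing action Z

  Background:
    Given the user is logged in to the application

  Scenario: Action Z succeeds after action Y
    Given the user is on page X
    When the user performs action Y
    And the user navigates to page W
    And the user performs action Z
    Then the application shows the expected result
```

Manual testers can execute the same scenario step by step, while automation binds each line to Selenium/Appium code in step definitions, so a single document can serve both audiences.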

How can I still use DDD, TDD in BizTalk?

I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like designers when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract first design to be a more useful way of thinking - what messages get passed around at my interfaces). And you can also follow the 'Build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to be able to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and causes the test to pass - all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try to develop that code directly.
There are two tools, BizUnit and BizUnit Extensions, that give you some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled and more test-driven integration tests.
The shapes that you drag onto the orchestration design surface will largely just do their thing as one opaque unit of execution. And orchestrations, pipelines, maps, etc. are all largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
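To make the "smaller, testable chunks" point concrete, here is a minimal sketch (all class and property names are invented for illustration) of mapping logic pulled out of an orchestration into a plain .NET assembly so that ordinary NUnit tests can exercise it without deploying anything to BizTalk:

```csharp
using NUnit.Framework;

// Hypothetical POCOs standing in for the messages the orchestration handles.
public class LegacyOrder { public string Ccy; public int TotalCents; }
public class CanonicalOrder { public string Currency; public decimal Total; }

// Pure mapping logic, referenced from the orchestration via an external assembly.
public static class OrderMapper
{
    public static CanonicalOrder ToCanonical(LegacyOrder legacy) => new CanonicalOrder
    {
        Currency = legacy.Ccy?.Trim().ToUpperInvariant(),
        Total = legacy.TotalCents / 100m
    };
}

[TestFixture]
public class OrderMapperTests
{
    [Test]
    public void Currency_is_normalized_and_total_converted_to_decimal()
    {
        var result = OrderMapper.ToCanonical(new LegacyOrder { Ccy = " usd ", TotalCents = 1250 });
        Assert.That(result.Currency, Is.EqualTo("USD"));
        Assert.That(result.Total, Is.EqualTo(12.50m));
    }
}
```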
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantage of the XML-based system is that it is easier to generate test steps and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful; my team could update the test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned the (dynamically generated) ID, we could update the next test step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, the IntelliSense issue should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past; I'm hoping to get an XSD out that will aid IntelliSense. There were some snippets as well for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD - the specification or behavior-driven element - then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that point. I wish there were some tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases, both in code and in Excel (for functional scenarios).
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. It makes the tests easier to write (using VS IntelliSense), more flexible, and, most importantly, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, which is a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also has the advantage of still using BizUnit instead of forking from it.

Unit/Automated Testing in a workflow system

Do you do automated testing on a complex workflow system like K2?
We are building a system with extensive integration between SharePoint 2007 and K2. I can't even imagine where to start with automated testing, as the workflow involves multiple users interacting with SharePoint, K2 workflows, and custom web pages.
Has anyone done automated testing on a workflow server like K2? Is it more effort than it's worth?
I'm having a similar problem testing a workflow-heavy, MOSS-based application. Workflows in our case are based on Windows Workflow Foundation (WF).
My idea is to mock pretty much everything that you can't control from unit tests - document storage, authentication, user rights and actions, and the SharePoint-specific parts of the workflows (these mocks should be thoroughly tested to mirror the behavior of the real components).
You use inversion of control to let the code choose which component to use at runtime - real or mock.
Then you can write system-wide tests of the workflows' behavior - setting up your own environment and checking how the workflow engine reacts. These tests are too big to call unit tests, but it is still automated testing.
This approach seems to work for trivial cases, but I still have to prove it is worth using on real-world workflows.
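As a minimal sketch of that inversion-of-control idea (all type and member names here are invented for illustration and are not part of K2, MOSS, or WF), the workflow-facing logic takes its environment through an interface, so a test can substitute an in-memory mock for the real SharePoint-backed component:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Hypothetical abstraction over the part of the environment the workflow touches.
public interface IDocumentStore
{
    void Archive(string documentId);
    bool IsArchived(string documentId);
}

// Mock used in tests instead of the real SharePoint-backed implementation.
public class InMemoryDocumentStore : IDocumentStore
{
    private readonly HashSet<string> _archived = new HashSet<string>();
    public void Archive(string documentId) => _archived.Add(documentId);
    public bool IsArchived(string documentId) => _archived.Contains(documentId);
}

// The workflow-step logic receives the dependency via constructor injection,
// so tests pass the in-memory mock and production code passes the real store.
public class ArchiveOnApprovalStep
{
    private readonly IDocumentStore _store;
    public ArchiveOnApprovalStep(IDocumentStore store) => _store = store;

    public void Execute(string documentId, bool approved)
    {
        if (approved) _store.Archive(documentId);
    }
}

[TestFixture]
public class ArchiveOnApprovalStepTests
{
    [Test]
    public void Approved_documents_are_archived()
    {
        var store = new InMemoryDocumentStore();
        new ArchiveOnApprovalStep(store).Execute("DOC-1", approved: true);
        Assert.That(store.IsArchived("DOC-1"), Is.True);
    }
}
```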
Here's the solution I use. It is a simple wrapper around the runtime that allows executing a single activity, simplifies passing parameters, blocks the invoking thread until the workflow or activity is done, and translates/rethrows exceptions, if any. Since my workflow only sends or waits for messages through a custom workflow service, I can mock out the service to expect certain messages from the workflow and post certain messages to it, and with that I have real unit tests for my WF! The credit for the technique goes to Michael Kennedy.
If you are going to do unit testing, Typemock Isolator is the only tool that can currently mock SharePoint objects.
And by the way, Richard Fennell is working on a workflow mocking solution here.
We've just today written an application that monitors our K2 worklist, picks up certain tasks from it, fills in some data, and submits the tasks for completion. This allows us to perform automated testing, find regressions, and run through many different paths of the workflow in a fraction of the time it would take people to do it manually. I'd imagine a similar program could be written to pretend to be SharePoint.
As for unit testing the workflow items themselves, we have a DLL referenced from K2 that contains all of our line rule and processing logic. We don't have any code in the K2 workflows themselves; it is all referenced from these DLLs. This allows us to easily write unit tests against them to cover all of the individual line rules.
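As a hedged sketch of what such a line-rule test might look like (the rule, the threshold, and all the names are made up for the example and are not part of the K2 API), the logic lives in an ordinary class inside the referenced DLL and is driven by parameterized NUnit tests:

```csharp
using NUnit.Framework;

// Hypothetical line rule referenced from the K2 process; K2 only calls this method.
public static class ExpenseLineRules
{
    public static bool GoesToFinanceApproval(int amount, bool isContractor)
        => amount > 5000 || isContractor;
}

[TestFixture]
public class ExpenseLineRulesTests
{
    [TestCase(6000, false, ExpectedResult = true)]  // over the threshold
    [TestCase(100, true, ExpectedResult = true)]    // contractors are always routed to approval
    [TestCase(100, false, ExpectedResult = false)]  // small employee expense skips approval
    public bool Routing_matches_the_business_rule(int amount, bool isContractor)
        => ExpenseLineRules.GoesToFinanceApproval(amount, isContractor);
}
```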
I've done automated integration testing on K2 workflows using the K2ROM API (probably SourceCode.Workflow.Client if you're using K2 blackpearl).
Basically you start a process on a test server with a known folio (I generate a GUID), then use the management API to delete it afterwards. I wrote helper methods like AssertAtClientActivity (basically calls ProvideWorkItem with criteria).
Use the IsSynchronous parameter to StartProcessInstance, WorklistItem.Finish, etc. so that relevant method calls will not return until the process instance has reached a stable state.
Expect tests to be slow and to occasionally fail. These are not unit tests.
If you want to write unit tests against other systems, you'll probably want to wrap the K2 API.
Consider looking at Windows Workflow 4 and the new workflow features in SharePoint 2010. You may not need K2.
