What is the difference between a Test Scenario and a Test Case (if we are not taking automation into consideration)? - manual-testing

If automation is excluded and we look at it from a manual testing point of view, what is the difference between a Test Strategy, Test Scenario, Test Case and Test Script?

**
Test Strategy
A Test Strategy document is a high-level document, normally developed by the project manager. It defines the software testing approach used to achieve the testing objectives, and it is normally derived from the Business Requirement Specification document.
Some companies include the test approach or strategy inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a separate Test Plan for each phase or level of testing.
Components of the Test Strategy document
1) Scope and objectives
2) Business issues
3) Roles and responsibilities
4) Communication and status reporting
5) Test deliverables
6) Industry standards to follow
7) Test automation and tools
8) Testing measurements and metrics
9) Risks and mitigation
10) Defect reporting and tracking
11) Change and configuration management
12) Training plan
**
Test Scenario
A scenario is a story that describes a hypothetical situation. In testing, you check how the program copes with this hypothetical situation.
The ideal scenario test is credible, motivating, easy to evaluate, and complex.
Scenarios usually differ from test cases in that test cases are single steps while scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests.
A scenario is any functionality that can be tested. It is also called a Test Condition or a Test Possibility.
**
Test Cases
In software engineering, a test case is a set of conditions or variables under which a tester determines whether a requirement of an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case.
A test case is also defined as a sequence of steps to test the correct behavior of a functionality/feature of an application.
A sequence of steps consisting of actions to be performed on the system under test. (These steps are sometimes called the test procedure or test script.) These actions are often associated with some set of data (preloaded or input during the test). The combination of actions taken and data provided to the system under test leads to the test condition. This condition tends to produce results that the test can compare with the expected results, i.e. assess quality under the given test condition. The actions can be performed serially, in parallel, or in some other order.
**
Test Script
Test Script is a set of instructions (written using a scripting/programming language) that is performed on a system under test to verify that the system performs as expected. Test scripts are used in automated testing.
Sometimes a set of instructions written in a human language and used in manual testing is also called a Test Script, but a better term for that would be a Test Case.

A Test Scenario describes "what is to be tested" and a Test Case describes "how it is to be tested".
Test Case: consists of a test case name, precondition, steps/input conditions and expected result.
Test Scenario: consists of a detailed test procedure. We can also say that a test scenario has many test cases associated with it: before executing a test scenario, we need to think of the test cases for each scenario.
Test Script: a set of instructions (written in a programming language) that is performed on a system under test to verify that the system performs as expected.
"Test script" is the term used when referring to automated testing: when you create a test script, you are using an automation tool to create it.
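As a minimal sketch, the test case fields just listed (name, precondition, steps, expected result) can be captured as a small data structure; the field names here are just one common convention, not a standard:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    precondition: str
    steps: list[str]        # actions performed on the system under test
    expected_result: str

# One test case derived from a login scenario
tc = TestCase(
    name="Login with valid credentials",
    precondition="A registered user account exists",
    steps=[
        "Open the login page",
        "Enter a valid user id and password",
        "Click the Submit button",
    ],
    expected_result="The user is redirected to the home page",
)
```

A tester executing this manually would walk the steps in order and compare the observed behaviour against `expected_result`.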

Test strategy
outlines the testing approach and everything else that surrounds it. It differs from the test plan in the sense that a test strategy is only a subset of the test plan. It is a core test document that is, to an extent, generic and static. There is also an argument about the levels at which a test strategy or plan is used, but I really do not see any discernible difference.
Example: the test plan records who is going to test, and when. For example: Module 1 is going to be tested by tester X. If tester Y replaces X for some reason, the test plan has to be updated.
The test strategy, on the contrary, has details like "individual modules are to be tested by test team members". In this case it does not matter who is testing, so it is generic, and a change in team members does not have to be recorded, keeping it static.
Test scenario
This is a one-line pointer that testers create as an initial, transitional step into the test design phase. It is mostly a one-line definition of "what" we are going to test with respect to a certain feature. Usually, test scenarios are an input for the creation of test cases. In some agile projects, test scenarios are the only test design outputs, and no test cases are written from them. A test scenario might result in multiple tests.
Example test scenarios:
Validate if a new country can be added by the Admin
Validate if an existing country can be deleted by the admin
Validate if an existing country can be updated
Test Case:
Test Case is a commonly used term for a specific test, and is usually the smallest unit of testing. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
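As a hedged sketch of the distinction, here is what the "validate if a new country can be added by the Admin" scenario from above might look like once turned into an automated script. The `CountryStore` class is invented for illustration; a real script would drive the actual application or its UI:

```python
import unittest

class CountryStore:
    """Stand-in for the application under test (hypothetical)."""
    def __init__(self):
        self.countries = set()

    def add(self, name):
        if not name:
            raise ValueError("country name required")
        self.countries.add(name)

class TestAddCountry(unittest.TestCase):
    def test_admin_can_add_new_country(self):
        store = CountryStore()
        store.add("Norway")
        self.assertIn("Norway", store.countries)

    def test_empty_country_name_is_rejected(self):
        store = CountryStore()
        with self.assertRaises(ValueError):
            store.add("")
```

Run with `python -m unittest`; each test method corresponds to one test case derived from the scenario.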

Test Scenario: a high-level, simple, individual view of an actual system capability. We do not need to define a clear step-by-step way of validating it at this stage, as we define test scenarios at very early stages of the software life cycle. A scenario is not considered for the test plan, as it is a non-defined item in terms of resource allocation.
Test Case: a document which consists of system-specific prerequisites but no step-by-step validation. In test case traceability we use the test case document against the requirements; this is how we define the test coverage matrix against requirements. In most cases, a test case will cover multiple test scenarios. A test case carries complexity. Test cases are used for calculating the testing effort for a particular release with respect to a code version.
Test Script (without an automation/programming-language context): everyone is aware that a test script is an automation program uniquely mapped to a test case, but we can use the term without automation as well, especially when you are using Rational Quality Manager (RQM) as your test repository.
1. When a test case has multiple versions and the testing team needs to maintain all test case versions against multiple system code versions. In this case, one test case will have multiple test scripts (one for each version).
2. When a test case produces different results in different environments (operating system, technology, etc.). Here a test case will be mapped to multiple test scripts whose expected results change while the test case itself remains the same.
In either of the above cases, while creating the test plan we first need to decide which version of the test case (in other words, which test script) to execute, based on the code version or the environment.
Hope this helps to answer your question.

Related

How gitlab Test Coverage is calculated?

I added a coverage percentage after my automated tests execution,
but my manager asked me how it is calculated: based on the code only, or on the user interface?
Some websites mentioned that it is calculated based on code and black-box testing.
How is that? Could someone explain to me how test coverage in GitLab is calculated?
PS: I'm using Robot Framework.
Since Robot Framework is Python-based, GitLab uses the very common coverage.py:
https://docs.gitlab.com/ee/user/project/merge_requests/test_coverage_visualization.html#python-example
coverage.py computes the percentage of executed vs. unexecuted lines for each Python source file that the tests touch.
https://coverage.readthedocs.io
But if it only measures test code that you use to exercise external APIs/interfaces/web pages, or code not in the same repository, etc., there is little point.
A solution is to use a BDD-driven language like behave/Cucumber/Gherkin (https://pypi.org/project/behave/) to map the business logic to tests together with project stakeholders.
I have used it for very complex business logic like automatic payments, subscriptions, etc.: things that should not fail and whose logic I am not completely sure of. The project leader wrote BDD schemas in simple business language which can be translated automatically into tests.

Separate executions of BDD scenario outline examples in to different TestRail test cases (per example)

For reporting into TestRail on automated BDD (cucumber-jvm) runs we are using the Jenkins TestRail plugin https://github.com/jenkinsci/testrail-plugin, and we are getting false positives for test cases generated from scenario outlines.
The default implementation logs scenario outline example executions as multiple executions of the same test case in the same run. If the last example to run passed, the test case is marked passed for the run, even if all the other examples actually failed.
Has anyone experienced this behaviour, and did you find a way to change it so that the test case fails if any example fails, or so that each example execution is listed as a different test case?
I would report this behaviour to the authors of the plugin. The behaviour you describe is clearly very wrong.

Example test cases for hypothesis based strategies?

What is considered current best practice for testing your own Hypothesis-based strategies? There are, e.g., tests about how well examples shrink in HypothesisWorks/hypothesis-python/tests/quality/test_shrink_quality.py. However, I could not find examples that test the data generation functionality of strategies (in general, performance, etc.).
Hypothesis runs a series of health checks on each strategy you use, including for time taken to generate data and the proportion of generation attempts that succeed - try e.g. none().map(lambda x: time.sleep(2)).example() or integers().map(lambda x: x % 17 == 0).example() to see them in action!
In most cases you do not need to test your own strategies beyond these health checks. Instead, I would check that your tests are sufficient by using a code coverage library.
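If you do want to go beyond the health checks, an ordinary property test can assert invariants of a custom strategy's output; the "even integers" strategy below is a made-up example, not anything from the Hypothesis test suite:

```python
from hypothesis import given, strategies as st

# Made-up custom strategy: even integers, built with .map()
evens = st.integers().map(lambda x: x * 2)

@given(evens)
def test_strategy_only_generates_evens(n):
    # Invariants every generated value must satisfy
    assert isinstance(n, int)
    assert n % 2 == 0

test_strategy_only_generates_evens()   # runs the property over many generated examples
```

If the strategy ever produced an odd value, the call would raise with a shrunk counterexample.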

How does Origen handle tests pre-pattern and pre-flow?

Prior to creating a test program flow with tests that contain patterns, there is what is commonly referred to as the 'test list'. This test list is commonly stored in Excel or as Jira tickets, and it determines what is run through simulation or emulation prior to first silicon. Given all of the test pattern generation capabilities of Origen, users must deal with this test list in some manner. I don't see any IP or top-level $dut modeling for tests that would yield an API like this:
$dut.ddr.has_tests? # => true
$dut.ddr.tests # => [:ddr4_2133_dataeye, :ddr4_1867_dataeye]
Anyone who is creating patterns with Origen, can you explain how Origen interacts with your 'test list'?
thx
We don't tend to handle that in the model, see the comments about lists here: http://origen-sdk.org/origen//guides/pattern/running/?highlight=list

What is test case enumeration? How do I write it?

Recently, while looking for a job change in manual QA, I was interviewed with the usual testing-concept questions. But in a few companies they gave me a scenario and asked me to write test case enumerations for it. Is it like test steps that I need to write? As far as I know, enumeration means a complete, ordered list of all the items in a collection, so is it writing all the test steps with descriptions?
Listing all the possible test case names that can be extracted from the given scenario, and classifying them by priority and by positive/negative type, is test case enumeration.
Kindly comment if you need anything, here is an example for better understanding.
Enumerate test cases for Login (classifying priorities as P1 > P2 > P3):
Positive cases include:
P1-Verify the login dialog box
P1-Verify the login id
P1-Verify the password
P1-Verify the submit button
Negative cases include:
1. P3-Verify logging in with empty id and password fields
Note: Haven't covered all the test cases.
Test enumeration orders the scripts in the test suite one by one: 1, 2, 3, etc. It is just like defining the priority with which you want to run a specific script in a test suite.
For me, enumeration means giving each test case an identifier which is not just 1, 2, 3, etc. but which can tell you something. For example, in a very simple project with three modules Users, Orders and Reports, you can enumerate your test cases User.Accounts.1, User.Accounts.2..., User.Roles.1, User.Roles.2, Orders.Add.1, Orders.Edit.1, Orders.Edit.2, etc.
I gave long identifiers, but you can shorten them or even replace the names with numbers.
Another way (which is even clearer) is to give names to the test cases:
User.Accounts.Add account
User.Accounts.Edit account
User.Accounts.Remove account
User.Accounts.Remove account - negative (cannot remove)
User.Roles.Add role
etc...
This helps you (and others) to see whether the list of test cases you planned is complete or whether you should add new ones.
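The hierarchical identifiers above can even be generated mechanically. This is a hypothetical sketch; the module/feature tree is adapted from the answer's own example:

```python
# Hypothetical helper: build enumerated identifiers from a
# Module -> Feature -> [case name] tree.
catalog = {
    "User": {
        "Accounts": ["Add account", "Edit account", "Remove account",
                     "Remove account - negative (cannot remove)"],
        "Roles": ["Add role"],
    },
    "Orders": {"Edit": ["Edit order"]},
}

def enumerate_cases(tree):
    ids = []
    for module, features in tree.items():
        for feature, cases in features.items():
            for i, name in enumerate(cases, start=1):
                # short id like "User.Accounts.1" plus the readable name
                ids.append((f"{module}.{feature}.{i}", name))
    return ids

case_ids = enumerate_cases(catalog)
```

Keeping the tree in one place makes gaps visible: an empty feature list, or a feature with only positive cases, stands out immediately.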
