Is it possible to mock a module only inside a it/test-case in Jest?
Case: In this NodeJS app, I was able to reach almost 100% coverage.
The coverage report flags (in yellow) the following lines:
src/index.js: 6
src/core/server.js: 7
src/core/middlewares/poswares/error.js: 4
For now, I am interested in mocking (or not mocking) a variable in my environment file, which corresponds to case 1. The other cases can be analyzed later.
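One approach I am experimenting with is jest.doMock plus jest.resetModules(), requiring the module inside the test itself; unlike jest.mock, jest.doMock is not hoisted, so it should only affect requires made after the call. A minimal sketch, where the './env' path and the apiUrl values are placeholders for my real environment file:

describe('environment-dependent code', () => {
  beforeEach(() => {
    // Clear the module registry so each test gets a fresh require
    jest.resetModules();
  });

  it('sees one mocked environment value', () => {
    // Scoped to this test: doMock only affects requires made after this call
    jest.doMock('./env', () => ({ apiUrl: 'http://stub.local' }));
    const env = require('./env');
    expect(env.apiUrl).toBe('http://stub.local');
  });

  it('sees a different mocked value', () => {
    jest.doMock('./env', () => ({ apiUrl: 'http://other.local' }));
    const env = require('./env');
    expect(env.apiUrl).toBe('http://other.local');
  });
});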
I am evaluating the test_ids gem and have a few questions:
Can test numbers be left un-configured, only configuring bins and softbins?
Can the test interface query another object for the TestId config? We have lots of test modules that have standardized hardbins but the softbins are always product specific due to test count variance. Would like to have the test interface query the current active test module for the binning config.
Can you explain why there are 3 'b's in the softbin config? Seems like you would only need one to create 100, 200, and 300.
config.softbins = :bbb00
Thanks!
1) I think so; if it doesn't work, it should. If you don't configure the test numbers in the TestIds config, then it should not generate a test number.
2) Sure, it is up to application logic how to select between the different configs, but it is definitely intended to work like that. The goal of this plugin is to allow you to write a test flow that is bin/test-number agnostic; then, depending on which TestIds config you generate it with, you can have completely different numbering schemes being output.
3) You would only need one in that case; it's just saying that the bin can potentially be anywhere between 1 and 999 (so, e.g., bin 1 still yields 100, while bin 999 would yield 99900). From memory, I think it will raise an error if it encounters a bin > 999 when presented with a config like that, with only 3 places allocated.
I am trying to test functions in a script that I have written in Node.js. I am using tape for unit testing, but I am facing one problem: how to make each test case independent. For example, there are some global variables in the script whose values are set inside the functions under test. The problem is that these variables share their values among different unit test cases. Is there a way to clear all global variables before each test case runs?
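Since tape has no built-in beforeEach, the workaround I can think of is to put the shared state behind an explicit reset function in the script and call it at the top of every test. A minimal sketch, where counter.js and the increment/reset names are hypothetical:

// counter.js: keep the shared state behind an explicit reset hook
let count = 0;
module.exports = {
  increment() { return ++count; },
  reset() { count = 0; }
};

// counter.test.js
const test = require('tape');
const counter = require('./counter');

test('first increment', (t) => {
  counter.reset(); // start from a known state
  t.equal(counter.increment(), 1);
  t.end();
});

test('independent of the previous test', (t) => {
  counter.reset(); // without this, count would still be 1 here
  t.equal(counter.increment(), 1);
  t.end();
});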
If automation is excluded, from a manual testing point of view, what is the difference between a Test Strategy, a Test Scenario, a Test Case, and a Test Script?
Test Strategy
A Test Strategy document is a high-level document, normally developed by the project manager. This document defines the “Software Testing Approach” used to achieve testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document.
Some companies include the “Test Approach” or “Strategy” inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of separate Test Plans, one for each phase or level of testing.
Components of the Test Strategy document
1) Scope and objectives
2) Business issues
3) Roles and responsibilities
4) Communication and status reporting
5) Test deliverables
6) Industry standards to follow
7) Test automation and tools
8) Testing measurements and metrics
9) Risks and mitigation
10) Defect reporting and tracking
11) Change and configuration management
12) Training plan
Test Scenario
A scenario is a story that describes a hypothetical situation. In testing, you check how the program copes with this hypothetical situation.
The ideal scenario test is credible, motivating, easy to evaluate, and complex.
Scenarios are usually different from test cases in that test cases are single steps and scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests.
A scenario is any functionality that can be tested. It is also called a Test Condition or a Test Possibility.
Test Cases
In software engineering, a test case is a set of conditions or variables under which a tester determines whether a requirement on an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. To fully verify that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case.
A test case is also defined as a sequence of steps to test the correct behavior of a functionality/feature of an application.
A test case is a sequence of steps consisting of actions to be performed on the system under test (these steps are sometimes called the test procedure or test script). These actions are often associated with some set of data, either preloaded or input during the test. The combination of actions taken and data provided to the system under test leads to the test condition. This condition tends to produce results that the test can compare with the expected results, i.e., assess quality under the given test condition. The actions can be performed serially, in parallel, or in some other combination.
Test Script
Test Script is a set of instructions (written using a scripting/programming language) that is performed on a system under test to verify that the system performs as expected. Test scripts are used in automated testing.
Sometimes a set of instructions written in a human language and used in manual testing is also called a Test Script, but a better term for that would be a Test Case.
Test Scenario means "what is to be tested" and Test Case means "how it is to be tested".
Test Case: It consists of a test case name, precondition, steps/input conditions, and expected result.
Test Scenario: A test scenario consists of a detailed test procedure. We can also say that a test scenario has many test cases associated with it. Before executing the test scenario, we need to think of test cases for each scenario.
Test Script: A Test Script is a set of instructions (written using a programming language) that is performed on a system under test to verify that the system performs as expected.
Test script is the term used when referring to automated testing: when you create a test script, you are using an automation tool to create it.
Test strategy
outlines the testing approach and everything else that surrounds it. It is different from the test plan in the sense that a test strategy is only a subset of the test plan. It is a core test document that is, to an extent, generic and static. There is also some debate about the levels at which a test strategy or plan is used, but I really do not see any discerning difference.
Example: The test plan gives information about who is going to test and when. For example, Module 1 is going to be tested by tester X. If tester Y replaces X for some reason, the test plan has to be updated.
By contrast, the test strategy will have details like “Individual modules are to be tested by test team members.” In this case, it does not matter who is testing, so the strategy stays generic, and a change in team members does not require an update, keeping it static.
Test scenario
This is a one-line pointer that testers create as an initial, transitional step into the test design phase. It is mostly a one-line definition of what we are going to test with respect to a certain feature. Usually, test scenarios are an input for the creation of test cases. In agile projects, test scenarios are the only test design outputs, and no test cases are written following them. A test scenario might result in multiple tests.
Example test scenarios:
Validate if a new country can be added by the Admin
Validate if an existing country can be deleted by the Admin
Validate if an existing country can be updated
Test Case:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.
Test Scenarios: A high-level, simple, individual test panorama of actual system capability. We do not need to define a clear step-by-step way of validation at this stage, since we define test scenarios at very early stages of the software life cycle. Scenarios are not considered in the test plan, as they are a non-defined item in terms of resource allocation.
Test Case: A document which consists of system-specific prerequisites, but no step-by-step validation. In test case traceability, we use the test case document against requirements; this is how we define the test coverage matrix against requirements. In most cases, a test case will cover multiple test scenarios. A test case will carry a complexity estimate. Test cases are used to calculate the testing effort for a particular release with respect to the code version.
Test Script (without an automation/programming-language context): Everyone is aware that a test script is an automation program which is uniquely mapped to a test case. But we can use this term without automation as well, especially when you are using Rational Quality Manager (RQM) as your test repository. For example:
1. When a test case has multiple versions and the testing team needs to maintain all test case versions against multiple system code versions. In this case, one test case will have multiple test scripts (one for each version).
2. When a test case produces different results in different environments (operating system, technology, etc.). Here, a test case will be mapped to multiple test scripts in which the expected results change but the test case itself remains the same.
In either of the above cases, while creating the test plan we first need to decide which version of the test case (in other words, which test script) to execute, based on the code version or the environment.
Hope this helps to answer your question.
I just started using the YUI3 Test module (http://yuilibrary.com/yui/docs/test/).
I have test cases with many asserts that verify state. If one of the asserts fails, the TestConsole indicates that an assert failed, but doesn't indicate which of the many asserts in the test it was. It would be great to have the failure message report the line number.
The browser exception actually contains the JS failure line number, but the YUI3 Test class filters this out and throws its own exception, which doesn't seem to contain the line number. Is there an easy way to add this reporting while still taking advantage of the YUI3 Test class as a harness?
I will start with the tl;dr
YUI3 does not provide any intrinsic way to display the line number of a failed test. I suppose it would be possible to manipulate Error constructors such that you could interrogate them; however, the problem is that Error.lineNumber is only supported in certain browsers (I believe it is Mozilla only).
Even if that did work, you'd end up realizing that this is a bit convoluted. You'd have to always be sure to do:
throw new Error(...);
In your calling code, you'd always have to do:
try {...} catch(e) { /* e.lineNumber */ }
And even if this all worked and you were willing to do this, I wouldn't recommend it.
The real answer
The root of the problem is that you have multiple asserts in your test methods. Developers who are trying to be pragmatic will sometimes tell you that "one assertion per test method" is unreasonable and dogmatic. It is very attractive to think that multiple assertions per test method are fine... until they aren't.
I'm sure there are times where multiple assertions are better, but I haven't seen it yet. I've been testing for years now and here is what I've found:
I've given multiple asserts per method a try, and each time I've been bitten by the problem of not knowing which assertion failed. No cargo-culting here: I've tried both, and of the two methodologies, only one has not bitten me.
One assertion per test forces you to really think about what/how you are testing.
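To make that concrete, here is a minimal sketch of the one-assertion-per-test style with YUI3 Test (the test case name and the values under test are hypothetical): when a method fails, the console reports the method name, which pinpoints the failing assertion without needing a line number.

YUI().use('test', 'test-console', function (Y) {
    var testCase = new Y.Test.Case({
        name: 'state verification',

        // One assertion per method: the failing method's name tells you
        // exactly which check broke
        testStateStartsEmpty: function () {
            var items = [];
            Y.Assert.areEqual(0, items.length);
        },

        testStateAcceptsAnItem: function () {
            var items = ['a'];
            Y.Assert.areEqual(1, items.length);
        }
    });

    Y.Test.Runner.add(testCase);
    new Y.Test.Console().render();
    Y.Test.Runner.run();
});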
Reading:
Testing: One assertion per test
One Assertion Per Test
I have an app that relies on a 3rd party API called PSC, but I want to isolate my cucumber tests from API calls to PSC.
So, I wrote a couple of cucumber steps:
When /^we pretend that PSC is up$/ do
  # Stub the connection check so the scenario believes PSC is reachable
  PscV1.default_psc_connection("test user").stub!(:default_connection_is_up?).and_return(true)
end
When /^we pretend like PSC assignments exist for all subjects$/ do
  # Stub the assignment lookup so it always reports an existing assignment
  PscV1.default_psc_connection("test user").stub!(:assignment_exists?).and_return(true)
end
...and what these stubs are supposed to do is make the Cucumber scenario think that the API calls are working. However, the stubs don't seem to persist between steps, so subsequent steps in my scenario don't get the stubbed return values; they try to make an actual API call, and therefore they fail.
Is there a way to get stubs to persist at least as long as an entire scenario? I've used stubs successfully in other Cucumber tests, so I know they work in general, but this is the first time I've written a Cucumber step whose entire purpose is to provide a stub.
As far as I can tell, the answer to whether or not they persist is, quite simply, "no".
I wound up writing a combined step that did the following:
When /^I follow "([^\"]*)" while pretending that PSC is up and assignments exists for all users$/ do |link_text|
  # Set up both stubs and click the link within the same step, so the stubs
  # are still in place when the resulting API calls happen
  PscV1.stub!(:default_connection_is_up?).and_return(true)
  PscV1.default_psc_connection("test user").stub!(:assignment_exists?).and_return(true)
  click_link link_text
end
...which works. It doesn't allow me to reuse the stubs as their own steps, unfortunately, but it works.
UPDATE: You might be able to get around this limitation by assigning the stub to a class-level variable, which is accessible from other steps within the same scenario.