I am testing a web application built on a REST API. I want to run performance, load, and stress tests against it. Now I would like to know: what is the difference between a performance test, a load test, and a stress test?
Performance Testing - a testing technique, not something you apply to your web application directly. Performance testing is a sub-type of non-functional testing, and Load Testing and Stress Testing are in their turn subsets of Performance Testing.
Load Testing - testing how your application behaves under the anticipated load. For example, if you expect 500 concurrent users, the process of assessing your application under that load is Load Testing.
Stress Testing - revealing the application's boundaries and breaking points, finding bottlenecks, etc. It answers questions such as the following (a toy Java illustration appears after the list):
What is the maximum capacity of the system?
How many users can it handle while providing a reasonable response time?
Which component breaks first?
Does the system recover when the load returns to normal?
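To make the idea concrete, here is a minimal home-grown sketch in Java of the "find the breaking point" loop. It is an illustration only, not a replacement for a proper tool like JMeter or Gatling, and the endpoint URL, user counts, and failure threshold are all hypothetical:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.*;

// Naive stress test: double the concurrent-user count each round
// until requests start failing, to approximate the breaking point.
public class StressSketch {
    static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5)).build();
    static final String URL = "http://localhost:8080/api/health"; // hypothetical endpoint

    public static void main(String[] args) throws Exception {
        for (int users = 50; users <= 1600; users *= 2) {
            int failures = runRound(users);
            System.out.printf("%d users -> %d failures%n", users, failures);
            if (failures > 0) break; // the first component to break shows up here
        }
    }

    static int runRound(int users) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        CountDownLatch done = new CountDownLatch(users);
        ConcurrentLinkedQueue<Throwable> errors = new ConcurrentLinkedQueue<>();
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                try {
                    HttpRequest req = HttpRequest.newBuilder(URI.create(URL))
                            .timeout(Duration.ofSeconds(10)).GET().build();
                    HttpResponse<Void> resp =
                            CLIENT.send(req, HttpResponse.BodyHandlers.discarding());
                    if (resp.statusCode() >= 500)
                        errors.add(new RuntimeException("HTTP " + resp.statusCode()));
                } catch (Exception e) {
                    errors.add(e);
                } finally {
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return errors.size();
    }
}
```

A real stress test would also record response-time percentiles and server-side metrics; the sketch only counts failed requests per round.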
See Why ‘Normal’ Load Testing Isn’t Enough for a more detailed explanation.
I am a beginner in performance testing and I would like to ask: is it possible to transform automation tests into performance tests?
For example, I have code that automates the login scenario for X users. Would it be good practice to use the statistics from the test run and present them as a performance diagram?
Up to a certain extent, yes: you will get response times and maybe some relationship between the number of users and response time. However, there are some constraints as well:
Most probably you won't get all the metrics and KPIs you could get with protocol-level tools
Browsers are very resource-intensive; for example, Firefox 94's system requirements are at least 1 CPU core and 2 GB of RAM per browser instance
So rather than re-using the existing automation tests as-is for checking performance, I would convert them into a protocol-level performance test script to get better results and a smaller resource footprint. A minimal sketch of such a conversion follows.
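As an illustration, here is what the login step might look like once converted from a browser-driven test to a protocol-level request in Java. The endpoint, form-field names, and credentials are hypothetical; in practice you would capture the real request from your browser's developer tools or a recording proxy:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Instead of driving a browser through the login form, replay the
// HTTP request the form would send. One JVM thread per virtual user
// is far cheaper than one browser instance per virtual user.
public class LoginProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical endpoint and form fields.
        HttpRequest login = HttpRequest.newBuilder(URI.create("https://example.com/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("username=user1&password=secret"))
                .build();
        long start = System.nanoTime();
        HttpResponse<String> resp = client.send(login, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("Status " + resp.statusCode() + " in " + elapsedMs + " ms");
    }
}
```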
I am trying to figure out how (or even if) I can replace the API calls made by my app when running a Webdriver test to return stubbed output. The app uses a lot of components that are completely dependent on a time frame or 3rd party info that will not be consistent or reliable for testing. Currently I have no way to test these elements without using 'run this test if...' as an approach which is far from ideal.
My tests are written in C#.
I have found a Javascript library called xhr-mock which seems to kind of do what I want but I can't use that with my current testing solution.
The correct answer to this question may be 'that's not possible' which would be annoying but, after a whole day reading irrelevant articles on Google I fear that may be the outcome.
WebDriver tests are End to End, Black Box, User Interface tests.
If your page depends on an external gateway, you will have a service and models that wrap that gateway for use throughout your system, and you will likely already be referencing those models in your tests.
Given that the gateway is time-dependent, you should use the service consumed by your API layer in your tests as well, and simply check that the information returned by the gateway at any time is displayed on the page as you would expect it to be. You'll have unit tests to check that the responses are modelled correctly.
As you fear, here is the obligatory 'this may not be possible': given the level of change you are subject to from your gateway, you may need to reduce your accuracy or introduce some form of refresh in your tests, as the two calls will arrive slightly apart.
You'll likely have a mock or stub API in order to develop the design, given the unpredictable gateway. It would then be up to you whether you use the real or the fake gateway for tests in any given environment. These tests shouldn't be run on production, so I would use a fake gateway for a CI-test environment and the real gateway for a manual-test environment, where BBT failures don't impact your release pipeline. A sketch of such a fake gateway follows.
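For illustration, a fake gateway can be as small as a single class. This sketch uses the JDK's built-in com.sun.net.httpserver; the port, path, and JSON payload are hypothetical. Since it is just a separate HTTP process the app under test is pointed at, it works even though your WebDriver tests are written in C#:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal fake gateway: serves a fixed, predictable JSON payload so
// UI tests are no longer at the mercy of time-dependent 3rd-party data.
// Point the app under test at http://localhost:9090 in the ci-test
// environment instead of the real gateway.
public class FakeGateway {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);
        server.createContext("/quotes/latest", exchange -> { // hypothetical path
            byte[] body = "{\"symbol\":\"ABC\",\"price\":42.0}"  // hypothetical payload
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("Fake gateway listening on :9090");
    }
}
```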
How do I test the behavior of a java restful web service in case of multiple concurrent requests? Is there any 3rd party tool that can be leveraged?
The service accepts the POST method. It expects a couple of parameters in its request body and produces a response in the form of JSON.
The functionality of the service is to perform database read operations using the request body parameters and populate the fetched data in the JSON.
I would recommend one of the following:
SoapUI - a superior tool for web service testing. It has limited load testing capabilities, however: it does not scale (no clustered mode is available) and has quite poor reporting (all you get are average, min, and max response times)
Apache JMeter - a multiprotocol load testing tool that supports web service load testing as well. It has better load capabilities, more ways to define load patterns, and can present load test results via the HTML Reporting Dashboard. Check out the Testing SOAP/REST Web Services Using JMeter article to learn how to conduct a web service load test using JMeter.
You can try Gatling to generate some load.
It has nice documentation and an easy QuickStart.
Advanced usage requires some knowledge of Scala, but it also features a GUI tool for recording simple scenarios, so you can run some requests with Postman or whatever tool you use for debugging, record them, and have the scenario automated.
After running scenarios it generates nice reports using Graphite, so you can see response times and general stats.
Later you can also use Gatling for load and performance tests of your web service; it's convenient and fast once you start playing with it. It can easily generate up to 5k requests per second from my old Mac, or hold up to 1k connections. A minimal sketch of a simulation follows.
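For reference, newer Gatling releases (3.7+) also ship a Java DSL alongside the Scala one, so Scala knowledge is no longer strictly required. A minimal sketch against a hypothetical JSON POST endpoint (base URL, path, and body are assumptions):

```java
import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

// Ramps 100 virtual users over 30 seconds; after the run Gatling
// writes an HTML report with response-time percentiles and stats.
public class PostSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http
            .baseUrl("http://localhost:8080")   // hypothetical base URL
            .contentTypeHeader("application/json");

    ScenarioBuilder scn = scenario("Concurrent POSTs")
            .exec(http("lookup request")
                    .post("/api/lookup")        // hypothetical path
                    .body(StringBody("{\"param1\":\"a\",\"param2\":\"b\"}"))
                    .check(status().is(200)));

    public PostSimulation() {
        setUp(scn.injectOpen(rampUsers(100).during(30)))
                .protocols(httpProtocol);
    }
}
```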
One of the best tools to test web services is SoapUI.
You can use it for what you want.
Link to SOAPUI
You can check this link to see how to use SOAPUI and concurrent tests.
I have a web application, and I need to measure the performance of its server responses while testing it manually on several machines.
How can I do it? Is there any tool to measure the server response time of a web application while performing manual testing on several machines?
Thanks,
Udhay S
Udhay,
JMeter provides all the necessary performance graphs from which you can read the server response times. It's easy to use, and you can increase the load to test the server's response.
In JMeter you have a list of reports - the Aggregate Graph, the Spline Visualizer, and a Summary Report describing the server response.
A stopwatch? Seriously, if you are running manual tests then the best, most accurate and the most purely subjective way to record how long it appears to take for a page to load is to use a stopwatch wired to the human brain. Until someone can invent a computer that can pass a Turing test this is as good as it gets.
If you feel like you need to automate things then automate the test first, and then think about automating recording response time.
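If and when you do automate, recording the response time itself is only a few lines. Here is a minimal sketch in Java (the URL and sample count are hypothetical) that prints roughly the min/avg/max numbers a JMeter summary report would give you:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Records response times for repeated GETs and prints min/avg/max.
public class ResponseTimer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/")).GET().build(); // hypothetical URL
        List<Long> times = new ArrayList<>();
        for (int i = 0; i < 20; i++) {
            long start = System.nanoTime();
            client.send(req, HttpResponse.BodyHandlers.discarding());
            times.add((System.nanoTime() - start) / 1_000_000);
        }
        long min = Collections.min(times), max = Collections.max(times);
        double avg = times.stream().mapToLong(Long::longValue).average().orElse(0);
        System.out.printf("min=%dms avg=%.1fms max=%dms%n", min, avg, max);
    }
}
```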
Does anyone have any advice on, or know of any frameworks for, unit testing multithreaded applications?
Do not unit test multithreaded applications. Refactor the code to remove coupling between work that is done in different threads. Then test it separately.
Typically you don't unit test the concurrency of multithreaded applications because the unit tests aren't dependable and reproducible: due to the nature of concurrency bugs, it's not generally possible to write unit tests that consistently either fail or succeed, so unit tests of concurrent code generally don't make very useful unit tests.
Instead, you unit test each single-threaded component of your application as normal and rely on load testing sessions to identify concurrency issues.
That said, there are some experimental frameworks for testing concurrent applications, such as Microsoft CHESS - CHESS repeatedly runs a given unit test and systematically explores every possible interleaving of the concurrent code. This makes your unit tests dependable and reproducible.
For the moment however CHESS is still experimental (and probably not usable with the JVM) - for now stick with load testing to weed out concurrency issues.
Try multithreadedTC
http://code.google.com/p/multithreadedtc/
http://code.google.com/p/multithreadedtc-junit4/
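For a taste of the framework, here is a minimal sketch modelled on MultithreadedTC's canonical blocking-queue example. The queue under test is just an illustration; the threadN method names, the tick clock, and the TestFramework runner are the framework's own API:

```java
import java.util.concurrent.ArrayBlockingQueue;

import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

// Each method named threadN runs in its own thread; waitForTick()
// advances a logical clock only when every thread is blocked, which
// makes the interleaving deterministic and the test reproducible.
public class BoundedBufferTest extends MultithreadedTestCase {
    private ArrayBlockingQueue<Integer> buf;

    @Override
    public void initialize() {
        buf = new ArrayBlockingQueue<>(1);
    }

    public void thread1() throws InterruptedException {
        buf.put(42);
        buf.put(17);      // blocks: buffer is full until thread2 takes
        assertTick(1);    // we can only get here after tick 1
    }

    public void thread2() throws InterruptedException {
        waitForTick(1);   // wait until thread1 is blocked on put()
        assertTrue(buf.take() == 42);
        assertTrue(buf.take() == 17);
    }

    @Override
    public void finish() {
        assertTrue(buf.isEmpty());
    }

    public static void main(String[] args) throws Throwable {
        TestFramework.runOnce(new BoundedBufferTest());
    }
}
```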