Point of sale performance testing

I am working on an approach to performance test a point of sale (POS) application.
Has anyone performance tested POS systems? If so, what would the environment setup look like for performance testing? Do we need to set up multiple POS systems to simulate the required TPS, or can we trigger multiple transactions from one POS system?
Basically, I am not trying to test the POS application by itself; I am trying to measure the time taken to send a request and get the response back to the POS.
Thanks

Yes.
You need to look specifically at the requirements for your point of sale system and determine whether the focus is on the performance of the local device or on the remote systems that the device connects to, either in-store or potentially across the country/internet. You also need to be very aware of what is in your control versus what is not (such as credit card authorization times) when you start reporting on the performance of your system.
As to the larger question of whether you actually need the devices when you run a test: if the focus is the back-end systems that all of the POS devices connect to, then no. You only need to exercise the interfaces of the back-end systems the same way the front-end devices do. Most modern systems use a variant of standard protocols for communication, so there is often a way to proxy the calls for recording. If not, these calls can often be reconstructed from protocol analyzer (Wireshark/sniffer/...) traces or from database logs containing the queries and their sequence from the front-end client.
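For the timing measurement you asked about (send a request, time the response back to the POS), here is a minimal Java sketch. It assumes, purely for illustration, that the gateway accepts HTTP POSTs at a hypothetical /authorize endpoint with a JSON body; many POS back ends instead speak ISO 8583 over raw TCP, in which case the same timing approach applies over a socket.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;

public class PosRoundTripTimer {
    public static void main(String[] args) throws Exception {
        // Hypothetical gateway endpoint and payload; replace with your
        // system's actual interface as recorded from a real device.
        URI gateway = URI.create("https://gateway.example.com/authorize");
        String payload = "{\"amount\": 1999, \"currency\": \"USD\", \"terminalId\": \"POS-001\"}";

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(gateway)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // Measure the full request/response round trip as seen by the "POS".
        Instant start = Instant.now();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        Duration elapsed = Duration.between(start, Instant.now());

        System.out.printf("status=%d round-trip=%d ms%n",
                response.statusCode(), elapsed.toMillis());
    }
}
```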
If all of the requirements point to front-end performance, then you really only need a couple of variants of the device you are testing, and the vast majority of your effort, save perhaps for a network impairment device to slow the network and represent a congested environment, will be manual. Get your stopwatches ready.

From your question, I understand that you want to performance test the switch or gateway that your POS transactions hit.
You could measure the time for your terminal to generate the payment packet, encrypt it, and reach the switch/gateway, but this depends on internet speed and other network conditions, so while you can collect that data, I don't think it will help you much.
By generating multiple payment requests from different terminals and making transactions, you are not actually performance testing the terminal but the switch or gateway. The easy way to do this is to keep sending payment requests from simulated POS software, as in the sketch below.
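One minimal sketch of such a simulated POS load generator in Java, assuming for illustration an HTTP gateway endpoint (real payment switches often speak ISO 8583 over TCP instead): it drives a fixed number of concurrent "terminals" from a single machine, which is usually enough to reach a target TPS without racks of physical devices.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class SimulatedPosLoad {
    public static void main(String[] args) throws Exception {
        int terminals = 20;          // concurrent simulated POS terminals
        int txPerTerminal = 100;     // transactions each terminal sends
        URI gateway = URI.create("https://gateway.example.com/authorize"); // hypothetical

        HttpClient client = HttpClient.newHttpClient();
        AtomicLong totalMillis = new AtomicLong();
        AtomicLong errors = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(terminals);

        for (int t = 0; t < terminals; t++) {
            final String terminalId = "POS-" + t;
            pool.submit(() -> {
                for (int i = 0; i < txPerTerminal; i++) {
                    String payload = "{\"amount\": 1999, \"terminalId\": \"" + terminalId + "\"}";
                    HttpRequest req = HttpRequest.newBuilder(gateway)
                            .header("Content-Type", "application/json")
                            .POST(HttpRequest.BodyPublishers.ofString(payload))
                            .build();
                    long start = System.nanoTime();
                    try {
                        client.send(req, HttpResponse.BodyHandlers.discarding());
                        totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                    } catch (Exception e) {
                        errors.incrementAndGet();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long ok = (long) terminals * txPerTerminal - errors.get();
        System.out.printf("sent=%d ok=%d errors=%d avg=%d ms%n",
                (long) terminals * txPerTerminal, ok, errors.get(),
                ok == 0 ? 0 : totalMillis.get() / ok);
    }
}
```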
For performance testing of the POS terminal itself, I would rather check the points below:
1) Test the stability and performance of the software modules on the POS
2) Measure the success rate of detecting contactless cards, chip cards, and MSR swipes
3) Test the integration of the POS terminal software
4) Check the time required to reboot the terminal
5) Check for any unwanted code/apps residing in the POS software

Related

Load and Performance testing on android and ios app

I need to perform a load test with 200+ concurrent devices on Android and iOS apps. Is there any tool that can do that?
It depends on the network protocol(s) your application uses to communicate with the backend.
You can identify which protocol(s) are in scope by installing the application in the Android Emulator or iOS Simulator and using a sniffer tool like Wireshark to capture the network traffic.
Once you figure out which protocol(s) are being used, you can choose a load testing tool which supports them; an example comparison of free and open source load testing tools can be found in the Open Source Load Testing Tools: Which One Should You Use? article.
After you decide which tool you will be using, you will need to replicate the mobile device traffic with the tool of your choice so that it matches the network footprint of the mobile device 100% (you might need to perform parameterization of credentials and correlation of dynamic parameters). As soon as that is done, you should be able to replay the requests with an increased number of virtual users.
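To illustrate what correlation of a dynamic parameter looks like outside any particular tool, here is a minimal Java sketch that logs in, extracts a hypothetical session token from the response with a regular expression, and reuses it on the next request. The endpoint names, credential values, and token format are all assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Step 1: log in with parameterized credentials (would differ per virtual user).
        String user = "user1", password = "secret1"; // hypothetical test account
        HttpRequest login = HttpRequest.newBuilder(URI.create("https://api.example.com/login"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"user\":\"" + user + "\",\"password\":\"" + password + "\"}"))
                .build();
        HttpResponse<String> loginResp = client.send(login, HttpResponse.BodyHandlers.ofString());

        // Step 2: correlate - extract the dynamic token instead of replaying a recorded one.
        Matcher m = Pattern.compile("\"token\"\\s*:\\s*\"([^\"]+)\"").matcher(loginResp.body());
        if (!m.find()) throw new IllegalStateException("token not found - correlation failed");
        String token = m.group(1);

        // Step 3: reuse the extracted token on the next request in the flow.
        HttpRequest orders = HttpRequest.newBuilder(URI.create("https://api.example.com/orders"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        System.out.println(client.send(orders, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```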
Try AWS Device Farm; it offers a lot of configurations, devices, and global options for testing.
Typically:
you capture the device's network requests using a proxy (we use Charles Proxy) while functional testing the app,
take out static resources such as CSS, images, and scripts (which are served from a CDN) and third-party resources (a sketch of this filtering step follows below),
then parameterise the dynamic requests to create a load test script.
While you are perf testing, navigate through the app to see the end-user impact when the back end is under heavy load.
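As a small illustration of the filtering step, here is a hedged Java sketch that drops static and third-party resources from a list of recorded URLs before parameterisation; the extension list and the CDN/analytics hosts are placeholders to adapt to your own recording.

```java
import java.util.List;
import java.util.stream.Collectors;

public class RecordingFilter {
    // Extensions and hosts treated as static/third-party; adjust per your app.
    private static final List<String> STATIC_EXT =
            List.of(".css", ".js", ".png", ".jpg", ".gif", ".woff2");
    private static final List<String> THIRD_PARTY_HOSTS =
            List.of("cdn.example.com", "analytics.example.com");

    static boolean isDynamic(String url) {
        boolean staticRes = STATIC_EXT.stream().anyMatch(url::endsWith);
        boolean thirdParty = THIRD_PARTY_HOSTS.stream().anyMatch(url::contains);
        return !staticRes && !thirdParty;
    }

    public static void main(String[] args) {
        List<String> recorded = List.of(
                "https://app.example.com/api/login",
                "https://cdn.example.com/main.css",
                "https://app.example.com/api/cart",
                "https://analytics.example.com/beacon.js");
        // Keep only the dynamic requests worth parameterising in the load script.
        List<String> scriptRequests = recorded.stream()
                .filter(RecordingFilter::isDynamic)
                .collect(Collectors.toList());
        scriptRequests.forEach(System.out::println);
    }
}
```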
Yes, there are many solutions. The governing factor is going to be the communications model between your handheld device and the application/system under test.
In most cases (but not all) the protocol for communication is HTTP. In this case you may leverage a proxy to record the conversation between client and server and reproduce a single session. You may then modify this session to handle dynamic server data for session, date, time, account information, and user inputs. Once that is done, you may replay 200+ sessions, representing the load of 200+ users on your system.
I would recommend involving a network simulator in your test. Mobile networks are particularly dirty, leading to higher error rates and longer latency (protocol, layer 3) on sites. Having the impairment from the network simulator will better allow you to understand the response times for your clients. Look for impairment solutions which can ingest OOKLA data for various locations and times of day matching your high-load windows.

Real Browser based load testing or Browser level user testing

I am currently working with multiple load testing tools such as JMeter, LoadRunner, and Gatling.
All of the above tools work at the protocol level, except for the TruClient protocol offered by LoadRunner. Now real browser testing is in place, which is definitely heavy on resource consumption; tools such as LoadNinja and Flood.IO work on this novel concept.
I have a few queries in this regard:
What will be the scenario where real browser based load testing fits perfectly?
What does real browser testing offer that is not possible in protocol based load testing?
I know we can use JMeter to mimic browser behavior for load testing, but is there anything different that real browser testing has to offer?
...this novel concept...
You're showing your age a bit here. Full-client testing was state of the art in 1996, before companies shifted en masse to protocol-based testing because it is more efficient in terms of resources. (Mercury, HP, Micro Focus) LoadRunner, (Segue, Borland, Micro Focus) Silk, and (Rational, IBM) Robot have retained the ability to use full GUI virtual users (running full clients via functional automation tools) since that time. TruClient is a more recent addition which runs a full client but simply does not write the output to the screen, so you get 99% of the benefits and the measurements.
What is the benefit? Well, historically, two-tier client-server clients were thick, with lots of application processing going on. So having a small number of GUI virtual users combined with protocol virtual users allowed you to measure the cost/weight of the client. The flows to the server might take two seconds, but with the transform and presentation in the client it might take an additional 10 seconds. You then knew where the bottleneck was in the user experience.
Well, welcome to the days of future past. The web, once a super-thin presentation layer, has become just as thick as the classical two-tier client-server applications. I might argue thicker, as the modern browser interpreting JavaScript is more of a resource hog than the compiled two-tier apps of years past. It is simply universally available and based upon a common client-server protocol: HTTP.
Now that the web is thick, there is value in understanding the delta between arrival and presentation. You can observe much of this data in the Performance tab of Chrome. We also have excellent W3C in-browser metrics which can provide insight into the cost/weight of local code execution.
Shifting the logic to the client has also made it challenging to reproduce the logic and flow of JavaScript frameworks when producing the protocol-level dataflows back and forth. Here is where the old client-server interfaces had a distinct advantage: the protocols were highly structured in terms of data representation, so even with a complex thick client it was easy to represent and modify the dataflows at the protocol level (think of a database: rows, columns, ...). HTML/HTTP is very much unstructured; your developer can send and receive virtually anything as long as the carrier is HTTP and it can be transformed for use in JavaScript.
To make script creation easier and more time-efficient with complex JavaScript frameworks, the GUI virtual user has come back into vogue. Instead of running a full functional testing tool driving a browser, where we can have one browser and one copy of the test tool per OS instance, we now have something that scales a bit more efficiently, TruClient, where multiple instances can run per OS instance. There is no getting around the high resource cost of the underlying browser instance, however.
Let me try to answer your questions below:
What will be the scenario where real browser based load testing fits perfectly?
What real browser testing offers which is not possible in protocol based load testing?
Some companies do real-browser-based load testing. However, as you rightly concluded, it is extremely costly to simulate such scenarios. Fintech companies mostly do it when the load is fairly small (say 100 users) and the application under test is extremely critical; such applications often cannot be tested using standard API load tests because they are mostly legacy applications.
I know we can use JMeter to mimic browser behaviour for load testing, but is there anything different that real browser testing has to offer?
Yes: real browsers run JavaScript. If the front-end (website) implementation is poor, you cannot catch those issues using service-level load tests. It makes sense to load test with a real browser if you want to see how the JavaScript written by the developers, or other client-side logic, affects page load times.
It is important to understand that performance testing is not limited to APIs alone; it covers the entire user experience as well.
Hope this helps.
There are 2 types of test you need to consider:
Backend performance test: simulate X real users concurrently accessing the web application. The goal is to determine the relationship between an increasing number of virtual users and response time/throughput (number of requests per second), identify the saturation point, the first bottleneck, etc.
Frontend performance test: protocol-based load testing tools don't actually render the page, so even if the response from the server comes back quickly, a bug in client-side JavaScript might make rendering take a long time. Therefore you may want to use a real browser (1-2 instances) in order to collect browser performance metrics.
A well-behaved performance test should check both scenarios: it is best to generate the main load using protocol-based tools and at the same time access the application with a real browser in order to perform client-side measurements.
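As one hedged sketch of the client-side half, the Java snippet below drives a single real Chrome instance with Selenium and reads W3C Navigation Timing values while the protocol-level load runs elsewhere. It assumes the selenium-java dependency and a matching ChromeDriver are installed; the URL is a placeholder.

```java
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class BrowserTimingProbe {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver(); // requires ChromeDriver on PATH
        try {
            driver.get("https://app.example.com/"); // placeholder URL

            // Read W3C Navigation Timing marks from the real browser.
            JavascriptExecutor js = (JavascriptExecutor) driver;
            Long navStart = (Long) js.executeScript("return performance.timing.navigationStart;");
            Long domComplete = (Long) js.executeScript("return performance.timing.domComplete;");
            Long loadEnd = (Long) js.executeScript("return performance.timing.loadEventEnd;");

            System.out.printf("domComplete=%d ms, fullLoad=%d ms%n",
                    domComplete - navStart, loadEnd - navStart);
        } finally {
            driver.quit();
        }
    }
}
```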

Can Loadrunner with Amazon Load Generators test a site that is not publicly accessible?

I'm a web developer and completely new to the LoadRunner suite.
Our client has already provided us with some LoadRunner actions that I need to run to test a site hosted on the intranet of the company I'm currently working at.
The computer I'm using cannot handle more than 7 vusers, so I was asked to use Amazon EC2 for load generators.
Before I ask my company to pay for Amazon services, I need to know: would I be able to test our internal page from my computer exactly as I do with the load generator on my localhost, or does the page being tested need to be publicly accessible from the internet?
Any feedback will be appreciated. Thanks.
Please read carefully what James wrote. You said you are a web developer, so the task you were given is roughly equivalent to "write a new DB access layer."
You didn't mention which protocol you are using, but I will assume TruClient (based on the 7 vusers per machine). I will also assume you are using the latest version of LoadRunner, or at least something from the 12.6x family.
1) You already have an out-of-the-box solution for AWS in the form of StormRunner (https://www.microfocus.com/en-us/products/stormrunner-load-agile-cloud-testing/overview). If you want to see whether the solution works for you, request a couple of execution hours from the sales team and try it. If your company has a valid LoadRunner license, I don't think this will be an issue.
2) You have a simple integration with EC2 and the like built into the Controller application. In the Controller, go to Tools -> Manage cloud accounts. If you run a small test, I assume the cost should not be too great.
3) If you are a developer, we have a new offering called TruWeb, a transport-level protocol which should be more developer friendly. It can run many more users per machine, so you will be able to use it to test from an EC2 micro machine (free tier). The caveat is that you will have to write some JavaScript code and will not be able to reuse the actions given to you. You can download TruWeb from here - https://marketplace.microfocus.com/appdelivery/content/truweb - and it comes with the LoadRunner installation out of the box since 12.58. If you need further assistance with TruWeb, feel free to email us at truweb_forum#microfocus.com
I hope this gives you some direction.
a) You need training. This is not a discipline that someone is socially promoted into and finds success in.
b) Expect that it will take at least six months to begin delivering value in this field, longer if you are not working with a mentor
c) This is a question of application communication architecture. Architecture is one of the foundation skills for a performance tester/engineer/architect.
d) It is not recommended that you use the controller as a load generator, and it is not recommended that you use just one load generator; both will cause your test to fail an audit by a more mature testing firm. Go with a minimum of three: two for primary load and one for a control set of a single virtual user of each type. Design your tests so that the control timing records can be compared against the global set, to determine whether you have an application issue or a load generator issue.
e) You will need to coordinate with your network team for two reasons. One, you may need to open outbound ports (covered in the documentation) to allow your controller to communicate with your load generators. Two, you absolutely will have to coordinate a tunnel from the outside internet to your internal applications under test. Expect that security will be paramount: only your requests, and no others, should pass through the tunnel. There are many mechanisms to address this, from a custom HTTP header to certificates (a sketch of the header idea follows below). Speak with your network security professionals about the setup and configuration you will be able to implement.
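To make the custom-HTTP-header mechanism concrete, here is a minimal hedged Java sketch; the header name and shared secret are invented for illustration, and the matching check would live in your reverse proxy or firewall rule.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TaggedLoadRequest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Hypothetical header and secret agreed with the network security team;
        // the tunnel/reverse proxy rejects any request that lacks it.
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://intranet-app.example.com/"))
                .header("X-LoadTest-Auth", "shared-secret-issued-by-security")
                .GET()
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("status=" + response.statusCode());
    }
}
```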
The self-paced training for LoadRunner is available for download and takes about three days to go through. This is the absolute minimum before you pick up this tool in anger. Ideally, you would go through training with a certified instructor and be paired with a mentor for a period; the length of time with the mentor is directly related to the number of foundation skills you bring to the table.

How to test the maximum load of a bulk emailing web application?

I have to test a bulk emailing website (like MailChimp). I am a manual tester. How can I perform a performance test so that I can find out when the website will crash or go down?
Performance testing of web applications basically means simulating normal usage of the website by real users. The overall process should look something like this:
Choose one of the load testing tools. If you don't have a corporate standard or a ready solution, there is quite a number of free and open source load testing tools. If you are uncertain which one to choose, check out the Open Source Load Testing Tools: Which One Should You Use? article, which describes and compares the most popular and advanced solutions currently available.
The majority of load testing tools come with record-and-replay functionality, so the next step would be recording your use cases / test scenarios to build a load test "skeleton".
After that, most probably you will need to perform correlation: the process of identifying dynamic parameters, cookies, headers, etc., and handling them by extracting the changing parts from the previous response and replacing the recorded hard-coded values.
Normally you will also require parameterization, i.e. configuring your test to use different credentials for different virtual users.
Once your test is ready, you can run it with 1-2 users and iterations and inspect the request and response details to ensure that your test works as expected.
When you're happy with your test's behavior, you can start gradually adding more and more virtual users until response time exceeds the acceptable maximum, errors start occurring, or the website crashes, whichever comes first (see the sketch after this list).
When you discover the maximum number of virtual users and collect the associated performance metrics, it would also be good to identify the bottleneck.
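As a hedged illustration of the ramp-up step, the Java sketch below increases the number of concurrent virtual users in stages against a placeholder URL and reports average response time and error count per stage; in practice a dedicated load testing tool does this with far more fidelity (pacing, think times, distributed generators).

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class SteppedRamp {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        URI target = URI.create("https://bulk-mailer.example.com/"); // placeholder

        // Ramp in stages: 5, 10, 20, 40 concurrent users, 50 requests each.
        for (int users : new int[]{5, 10, 20, 40}) {
            AtomicLong totalMillis = new AtomicLong();
            AtomicLong errors = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(users);
            int perUser = 50;

            for (int u = 0; u < users; u++) {
                pool.submit(() -> {
                    for (int i = 0; i < perUser; i++) {
                        long start = System.nanoTime();
                        try {
                            HttpRequest req = HttpRequest.newBuilder(target).GET().build();
                            client.send(req, HttpResponse.BodyHandlers.discarding());
                            totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                        } catch (Exception e) {
                            errors.incrementAndGet();
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);

            long ok = (long) users * perUser - errors.get();
            System.out.printf("users=%d avg=%d ms errors=%d%n",
                    users, ok == 0 ? 0 : totalMillis.get() / ok, errors.get());
            // Stop ramping once errors appear or response time exceeds your SLA.
        }
    }
}
```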
You need to be very explicit about what you are testing. Are you testing the scheduling engine? The SMTP relay? The SMTP relay with specific out-of-spec delays in connecting to a downstream relay? Re-queues before your SMTP throughput drops to zero? It makes a significant difference in your test setup.
Also, it would be worthwhile to read the RFCs on SMTP before this test. Email is designed to be resilient across less-than-perfect connections, but that resilience shows up as a slowed SMTP relay over time.
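If the SMTP relay itself is the target, one hedged way to drive it without a full tool is a raw socket session speaking the protocol directly, as in the Java sketch below; it assumes a test relay at a placeholder host on port 25 and times a single HELO-to-QUIT exchange.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SmtpRelayProbe {
    public static void main(String[] args) throws Exception {
        long start = System.nanoTime();
        // Placeholder test relay; never point this at a production MTA.
        try (Socket socket = new Socket("smtp-test.example.com", 25);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream())) {

            System.out.println(in.readLine());            // 220 greeting
            send(out, in, "HELO loadtest.example.com");   // expect 250
            send(out, in, "MAIL FROM:<tester@example.com>");
            send(out, in, "RCPT TO:<inbox@example.com>");
            send(out, in, "DATA");                        // expect 354
            out.print("Subject: load test\r\n\r\ntest body\r\n");
            out.flush();
            send(out, in, ".");                           // end of message, expect 250
            send(out, in, "QUIT");                        // expect 221
        }
        System.out.printf("one SMTP transaction: %d ms%n",
                (System.nanoTime() - start) / 1_000_000);
    }

    // SMTP is line-oriented and requires CRLF line endings.
    private static void send(PrintWriter out, BufferedReader in, String cmd) throws Exception {
        out.print(cmd + "\r\n");
        out.flush();
        System.out.println(cmd + " -> " + in.readLine());
    }
}
```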
< soapbox >
Also, as a manual tester, you should never be asked to do this by your management unless they have committed to training and a mentor to assist you in the effort. Any other path means they are interested in a check box or in billing, but not in a reduction of risk.
< /soapbox >

Creating a simple mobile agent system

I am looking to create a simple mobile agent system which will deal with 4 tasks, i.e. 4 different mobile agent jobs: database update, meeting scheduling, network service discovery, and kernel update.
I have done my research and have seen different frameworks such as Aglets, JADE, AgentBuilder, etc. My question is: which one should I use? Also, I need to set up the base code for it to work; can someone point me to a site or help me set up the basic functions of the mobile agent?
I've read about the Tahiti server for the Aglets model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I created a method of type agent, but I couldn't pass it through the remote method implementation. I have been reading about TCP and UDP socket programming, and I was thinking it might be more feasible to do it using socket programming. In that case, would this still be called an agent? I was thinking about the server sending datagram packets to multiple clients.
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early 90s, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear about which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks; I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system maintenance tasks, such as DB tuning and kernel patching, on large networks, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE, and I agree with the previous answer: agent systems usually take a lot of overhead to get going, so if you can avoid them, please do. If, however, you choose to proceed, choose a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like a yellow pages: other agents don't have to know which agents are running and which services they supply; they can simply ask the DF (see the sketch below).
JADE's ContractNetBehaviours also help simplify communication.
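For orientation, here is a minimal hedged sketch of a JADE agent that registers a service with the DF and replies to incoming messages. The class name and service type are invented, and it assumes the jade.jar from the JADE distribution is on the classpath (launched with something like java jade.Boot -agents updater:DbUpdateAgent).

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.domain.DFService;
import jade.domain.FIPAAgentManagement.DFAgentDescription;
import jade.domain.FIPAAgentManagement.ServiceDescription;
import jade.domain.FIPAException;
import jade.lang.acl.ACLMessage;

// Hypothetical agent handling one of the four jobs (database update).
public class DbUpdateAgent extends Agent {
    @Override
    protected void setup() {
        // Register this agent's service with the Directory Facilitator (yellow pages).
        DFAgentDescription dfd = new DFAgentDescription();
        dfd.setName(getAID());
        ServiceDescription sd = new ServiceDescription();
        sd.setType("db-update");          // invented service type
        sd.setName(getLocalName() + "-db-update");
        dfd.addServices(sd);
        try {
            DFService.register(this, dfd);
        } catch (FIPAException e) {
            e.printStackTrace();
        }

        // Reply to any message that arrives.
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg != null) {
                    ACLMessage reply = msg.createReply();
                    reply.setPerformative(ACLMessage.INFORM);
                    reply.setContent("db update done"); // placeholder work
                    myAgent.send(reply);
                } else {
                    block(); // sleep until the next message arrives
                }
            }
        });
    }

    @Override
    protected void takeDown() {
        try {
            DFService.deregister(this); // clean up the DF entry on shutdown
        } catch (FIPAException e) {
            e.printStackTrace();
        }
    }
}
```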
