Load and performance testing on Android and iOS apps - performance-testing

I need to perform a load test with 200+ concurrent devices on the Android and iOS apps. Is there any tool that can do that?

It depends on the network protocol(s) your application uses to communicate with the backend.
You can identify which protocol(s) are in scope by installing the application in the Android Emulator or iOS Simulator and using a sniffer such as Wireshark to capture the network traffic.
Once you figure out which protocol(s) are being used, you can choose a load testing tool that supports them; an example comparison of free and open-source load testing tools can be found in the Open Source Load Testing Tools: Which One Should You Use? article.
After you decide which tool you will be using, replicate the mobile device traffic with it so that it matches the device's network footprint as closely as possible (you might need to parameterize credentials and correlate dynamic parameters). As soon as that is done you should be able to replay the requests with an increased number of virtual users.
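For illustration only, here is a minimal sketch of what replaying the captured traffic at scale might look like, using k6 (one of the open-source tools such comparisons cover) and assuming the app talks plain HTTP/JSON; the endpoint and payload are placeholders, not details from the question.

```javascript
// Minimal k6 sketch (assumed HTTP/JSON backend; URL and payload are placeholders).
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 200 },  // ramp up to 200 virtual users
    { duration: '10m', target: 200 }, // hold the load
    { duration: '2m', target: 0 },    // ramp down
  ],
};

export default function () {
  // Replay the same request the mobile app sends (as captured with Wireshark/a proxy).
  const res = http.post(
    'https://api.example.com/v1/login',                          // placeholder endpoint
    JSON.stringify({ username: 'user1', password: 'secret' }),   // parameterize per user in a real test
    { headers: { 'Content-Type': 'application/json' } }
  );
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```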

Try AWS Device Farm; it offers a lot of device configurations and global options for testing.

Typically:
you capture the device network requests using a proxy (we use Charles Proxy) while you are functionally testing the app,
take out static resources (CSS, images, scripts served from a CDN) and third-party resources,
then parameterise the dynamic requests to create a load test script (see the sketch below).
While the performance test is running, manually navigate through the app to see the end-user impact when the back end is under heavy load.
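As a rough sketch of the "parameterise the dynamic requests" step (again k6-style JavaScript for illustration; the token field name and endpoints are assumptions), correlation usually means extracting a value returned by one response and feeding it into the next request:

```javascript
// Correlation sketch: extract a dynamic value from one response and reuse it.
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // 1. Dynamic request kept in the script (static CSS/image/CDN calls filtered out).
  const login = http.post(
    'https://api.example.com/v1/login', // placeholder endpoint
    JSON.stringify({ username: 'user1', password: 'secret' }),
    { headers: { 'Content-Type': 'application/json' } }
  );

  // 2. Correlate: pull the session token out of the response (field name assumed).
  const token = login.json('token');

  // 3. Reuse the extracted value in the next dynamic request.
  const orders = http.get('https://api.example.com/v1/orders', {
    headers: { Authorization: `Bearer ${token}` },
  });
  check(orders, { 'orders returned': (r) => r.status === 200 });
}
```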

Yes, there are many solutions. The governing factor is going to be the communications model between your handheld device and the application/system under test.
In most cases (but not all) the protocol for communication is HTTP. In this case you can leverage a proxy to record the conversation between client and server and reproduce the conversation of a single session. You may then modify this session to address dynamic server data for session, date, time, account information and user inputs. Once that is done you can replay 200+ sessions, representing the load of 200+ users on your system.
I would recommend that a network simulator be involved in your test. Mobile networks are particularly dirty, leading to higher error rates and longer latency (protocol, layer 3) on sites. Having the impairment from the network simulator will better allow you to understand the response times for your client. Look for impairment solutions which can ingest Ookla data for various locations and times of day matching your high-load windows.

Real browser-based load testing or browser-level user testing

I am currently working with multiple load testing tools such as JMeter, LoadRunner and Gatling.
All of the above tools work at the protocol level, except for the TruClient protocol offered by LoadRunner. Now real browser testing is coming into play, which is definitely high on resource consumption; tools such as LoadNinja and Flood.IO work on this novel concept.
I have a few queries in this regard:
What will be the scenario where real browser-based load testing fits perfectly?
What does real browser testing offer which is not possible in protocol-based load testing?
I know we can use JMeter to mimic browser behaviour for load testing, but is there anything different that real browser testing has to offer?
....this novel concept.....
You're showing your age a bit here. Full client testing was state of the art in 1996, before companies shifted en masse to protocol-based testing because it is more efficient in terms of resources. (Mercury, HP, Micro Focus) LoadRunner, (Segue, Borland, Micro Focus) Silk and (Rational, IBM) Robot have retained the ability to use full GUI virtual users (running full clients driven by functional automation tools) since that time. TruClient is a more recent addition which runs a full client but simply does not write the output to the screen, so you get 99% of the benefits and the measurements.
What is the benefit? Well, historically, two-tier client-server clients were thick, with lots of application processing going on. So having a small number of GUI virtual users combined with protocol virtual users allowed you to measure the cost/weight of the client. The flows to the server might take two seconds, but with the transform and present in the client it might take an additional 10 seconds. You then know where the bottleneck is/was in the user experience.
Well, welcome to the days of future past. The web, once super thin as a presentation layer, has become just as thick as the classical two-tier client-server applications. I might argue thicker, as the modern browser interpreting JavaScript is more of a resource hog than the two-tier compiled apps of years past. It is simply universally available and based upon a common client-server protocol: HTTP.
Now that the web is thick, there is value in understanding the delta between arrival and presentation. You can observe much of this data inside the Performance tab of Chrome. We also have great W3C in-browser metrics which can provide insight into the cost/weight of the local code execution.
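As a small illustration, a page can read those W3C metrics directly from the browser's standard Performance API (for example in the DevTools console):

```javascript
// Read W3C navigation/paint timings in the browser to gauge client-side cost.
const nav = performance.getEntriesByType('navigation')[0];
if (nav) {
  console.log('TTFB (ms):', nav.responseStart - nav.requestStart);        // server + network
  console.log('DOM processing (ms):', nav.domComplete - nav.responseEnd); // client-side parse/execute
  console.log('Total load (ms):', nav.loadEventEnd - nav.startTime);
}
performance.getEntriesByType('paint').forEach((p) => {
  console.log(p.name, Math.round(p.startTime), 'ms'); // first-paint, first-contentful-paint
});
```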
Shifting the logic to the client has also made it harder to reproduce the logic and flow of the JavaScript frameworks when producing the protocol-level dataflows back and forth. Here is where the old client-server interfaces had a distinct advantage: the protocols were highly structured in terms of data representation. So, even with a complex thick client, it was easy to represent and modify the dataflows at the protocol level (think of a database as an example: rows, columns, ...). HTML/HTTP is very much unstructured. Your developer can send and receive virtually anything as long as the carrier is HTTP and you can transform it to be used in JavaScript.
To make script creation with complex JavaScript frameworks easier and more time-efficient, the GUI virtual user has come back into vogue. Instead of running a full functional testing tool driving a browser, where we can have one browser and one copy of the test tool per OS instance, we now have something that scales a bit more efficiently, TruClient, where multiple instances can be run per OS instance. There is no getting around the high resource cost of the underlying browser instance, however.
Let me try to answer your questions below:
What will be the scenario where real browser-based load testing fits perfectly?
What does real browser testing offer which is not possible in protocol-based load testing?
Some companies do real browser-based load testing. However, as you rightly concluded, it is extremely costly to simulate such scenarios. Fintech companies mostly do this when the load is fairly small (say 100 users) and the application they want to test is extremely critical; such applications often cannot be tested using standard API load tests because they are mostly legacy applications.
I know we can use JMeter to mimic browser behaviour for load testing, but is there anything different that real browser testing has to offer?
Yes, real browsers execute JavaScript. If the implementation is poor on the front end (website), you cannot catch those issues using service-level load tests. It makes sense to load test with a real browser if you want to see how the JavaScript written by the developers, or other client-side logic, affects page load times.
It is important to understand that performance testing is not limited to APIs alone but covers the entire user experience as well.
Hope this helps.
There are two types of test you need to consider:
Backend performance test: simulate X virtual users concurrently accessing the web application. The goal is to determine the relationship between the increasing number of virtual users and response time/throughput (number of requests per second), and to identify the saturation point, the first bottleneck, etc.
Frontend performance test: protocol-based load testing tools don't actually render the page, so even if the response from the server comes back quickly, a bug in client-side JavaScript may make rendering take a long time. You might therefore want to use a real browser (1-2 instances) in order to collect browser performance metrics.
A well-designed performance test should check both scenarios: it is better to conduct the main load using protocol-based tools and, at the same time, access the application with a real browser in order to perform client-side measurements.
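A rough sketch of that combined setup, using Node.js with selenium-webdriver as one possible way to drive the one or two real browser instances while a protocol-based tool generates the main load (the URL is a placeholder):

```javascript
// Drive one real browser alongside the protocol-level load to collect client-side metrics.
const { Builder } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://app.example.com/'); // placeholder: application under load
    // Pull W3C navigation timing out of the real browser.
    const timing = await driver.executeScript(
      'return performance.getEntriesByType("navigation")[0].toJSON();'
    );
    console.log('Client-side load time (ms):', timing.loadEventEnd);
  } finally {
    await driver.quit();
  }
})();
```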

Can LoadRunner with Amazon load generators test a site that is not publicly accessible?

I'm a web developer and completely new to the LoadRunner suite.
Our client has already provided us with some LoadRunner actions that I need to run in order to test a site hosted on the intranet of the company I'm currently working at.
The computer I'm using cannot handle more than 7 vusers, so I was asked to use Amazon EC2 for load generators.
Before I ask my company to pay for Amazon services, I need to know: would I be able to test our internal page from my computer exactly as I do with the load generator on my localhost, or does the page under test need to be publicly accessible from the internet?
Any feedback will be appreciated. Thanks.
Please read carefully what James wrote. You said you are a web developer so the task that was given to you is roughly equivalent to "write a new DB access layer".
You didn't mention which protocol you are using but I will assume TruClient (based on the 7 vUsers per machine). I will also assume you are using the latest version of LoadRunner or at least something from the 12.6X family.
1) You already have a solution for AWS out of the box in the form of StormRunner (https://www.microfocus.com/en-us/products/stormrunner-load-agile-cloud-testing/overview). If you want to test if the solution works for you please request a couple of execution hours from the sales team and try it. If your company has a valid license for LoadRunner I don't think this will be an issue.
2) You have a simple integration for EC2 and the like in the Controller application. In the Controller go to Tools -> Manage Cloud Accounts. If you run a small test, I assume the cost should not be too great.
3) If you are a developer, we have a new offering called TruWeb, which is a transport-level protocol that should be more developer-friendly. It can run many more users per machine, so you will be able to use it to test on an EC2 micro instance (free tier). The caveat is that you will have to write some JavaScript code and will not be able to reuse the actions given to you. You can download TruWeb from here - https://marketplace.microfocus.com/appdelivery/content/truweb - and it comes with the LoadRunner installation out of the box since 12.58. If you need further assistance with TruWeb feel free to email us at truweb_forum#microfocus.com
I hope this will give you some directions.
a) You need training. This is not a discipline that someone is socially promoted into and finds success in.
b) Expect that it will take at least six months to begin delivering value in this field, longer if you are not working with a mentor.
c) This is a question of application communication architecture. Architecture is one of the foundation skills for a performance tester/engineer/architect.
d) It is not recommended that you use the controller as a load generator, nor that you use just one load generator; either will cause your test to fail an audit by a more mature testing firm. Go with a minimum of three: two for primary load and one for a control set of a single virtual user of each type. Design your tests to allow the control timing records to be compared to the global set, so you can tell whether you have an application issue or a load generator issue.
e) You will need to coordinate with your network team for two reasons. One, you may need to open outbound ports (covered in the documentation) to allow your controller to communicate with your load generators. Two, you absolutely will have to coordinate a tunnel from the outside internet to your internal applications under test. Expect that security will be paramount: only your requests, and no other requests, should pass through the tunnel. There are many mechanisms to address this, from a custom HTTP header to certificates. Speak with your network security professionals about the setup and configuration you will be able to implement.
The self-paced training for LoadRunner is available for download. It takes about three days to go through. This is the absolute minimum before you pick up this tool in anger. Ideally, you would go through training with a certified instructor and be paired with a mentor for a period. The length of time for the mentor is directly related to the number of foundation skills you bring to the table.

How to share the Network tab of Chrome with different users, in real time?

Looking for a free or commercial solution:
During a web page presentation, QA, back-end and front-end developers need to view network traffic while a scenario is being played in the browser.
The aim is to identify problematic server (HTTP API) calls which break a page.
All Network tab history should become available to all parties in real time.
Looking for a solution to sync this history across multiple users. Is that possible?
You could use Chrome's remote debugging, or you could develop an extension which intercepts all the network activity from a browser (the browser(s) where the "scenario" is being played need to have this extension installed). You can then send this network activity to a remote host. You can even create a web page to view the network activity from any machine.
Chrome extensions have the ability to view network traffic. Use the chrome.webRequest API to observe and analyze traffic and to intercept, block, or modify requests in flight. You can read more about this here: https://developer.chrome.com/extensions/webRequest
There is also a good article which can clear up any doubts you may have regarding this: https://medium.com/#gilfink/adding-web-interception-abilities-to-your-chrome-extension-fb42366df425
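A minimal sketch of such an extension's background script (Manifest V2 style, assuming "webRequest" and host permissions are declared in manifest.json; the collector URL is a made-up placeholder):

```javascript
// background.js - observe completed requests and forward them to a shared collector.
// Requires "webRequest" plus host permissions (e.g. "<all_urls>") in manifest.json.
chrome.webRequest.onCompleted.addListener(
  (details) => {
    const entry = {
      url: details.url,
      method: details.method,
      status: details.statusCode,
      type: details.type,   // xmlhttprequest, script, image, ...
      time: details.timeStamp,
    };
    // Ship the entry to a remote host so QA/devs can watch it in real time (placeholder URL).
    fetch('https://collector.example.com/network-events', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(entry),
    }).catch(() => { /* ignore delivery errors in this sketch */ });
  },
  { urls: ['<all_urls>'] }
);
```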
At present there is no built-in feature to share the Network tab of Chrome, Firefox or Edge in real time.
There are some extensions available in the Chrome Web Store for sharing a tab.
You can run a test with those and check whether the developer tools are shown when you share the tab with other users.
If that works, it can solve your issue.
Otherwise you can use calling apps like Skype or Microsoft Teams, with which you can share your desktop or a specific window (for example the Chrome window) with other users in a conference call.
Regards
Deepak

VoIP Integration in App & Web

I have a very general question on how to implement VoIP for our current mobile & web app (we have an Android + iOS app and a web application based on AngularJS/NodeJS).
What we want to achieve
As a first step we want to achieve inter-application voice and video calls. Later on we might expand into outbound calls to the normal telephone network, but this post is mainly about getting info on how to implement just the first step.
general thoughts
We had some experience with Asterisk before, which turned out to be far from easy. So for this project we wanted to get some feedback before actually implementing anything.
thoughts on technology
At first I thought it might be a good idea to use WebRTC, but since it is only supported on Chrome, Firefox and Opera for the moment, and is pretty much unsupported in native mobile apps, we think that WebRTC is probably out of the picture for now (or do you think otherwise?).
After searching the web a bit more we found this: http://www.webrtc.org/native-code
Does anyone have experience with these libs? It seems to us that this could be the best basis for a modern VoIP solution (and it would also allow us to skip the Asterisk server).
The second idea would be to set up an Asterisk server ourselves. Every time a user logs into the app we would register them as a SIP client with Asterisk. If one user calls another, we think we should be able to make the call, for example with the Node package Asterisk Manager API (https://github.com/pipobscure/NodeJS-AsteriskManager).
The third idea would be to use a SIP provider, but at the moment I'm not sure if that's really the best idea.
Since we're not VoIP experts, are there any other possibilities for VoIP integration in our apps?
Any thoughts on that subject would be very appreciated! Thank you!
The main factor is the network configuration that your app will be working with. Given you're using mobile clients and web apps, it's almost certain that you're using the internet, and also likely that you'll have 3G and 4G mobile networks in the mix (3G/4G cause a lot more problems for VoIP than WiFi).
Assuming the above holds, the biggest challenge your app will face is establishing media (audio and/or video) connections between mobile clients which are behind different NATs, and in many cases multiple NATs. There is almost no chance you'd be able to get by without a server here. The server will be needed to act as a relay point for the media streams between the mobile clients. You will use the RTP protocol for the media, and working out how to get it reliably from client A to client B is your biggest obstacle. The signalling side, whether it be SIP, WebSockets or something else, will be secondary (note that both SIP and WebRTC use RTP to carry the media).
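To make the relay idea concrete, here is a minimal browser-side WebRTC sketch (standard WebRTC API; the STUN/TURN addresses and credentials are placeholders, and sendToPeerViaSignalling is a stand-in for whatever signalling channel you implement):

```javascript
// Browser-side WebRTC sketch: media travels over RTP, relayed via TURN when NATs block direct paths.
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },                                        // placeholder STUN
    { urls: 'turn:turn.example.com:3478', username: 'app', credential: 's3cret' }, // placeholder TURN relay
  ],
});

// Stand-in for your own signalling transport (SIP, WebSockets, etc.).
function sendToPeerViaSignalling(msg) {
  console.log('send via signalling channel:', msg);
}

async function startCall() {
  // Capture local audio/video and add the tracks to the connection.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // ICE candidates (including TURN relay candidates) must be exchanged with the other peer.
  pc.onicecandidate = (event) => {
    if (event.candidate) sendToPeerViaSignalling({ candidate: event.candidate });
  };

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToPeerViaSignalling({ sdp: pc.localDescription });
}
```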
If I were in your shoes the steps I'd take would be:
Install and try out some softphones (Blink, Bria, Zoiper et al.) on your own mobile devices, find a SIP provider that supports video calls and get some experience with calls. It may not be the experience you anticipated...
Once you are comfortable with the softphone experience you would then need to make two decisions:
Whether to deploy your own server or use an existing provider,
Whether to write your own client, find an existing one or something in between.
I can answer the "deploy your own server" question. You don't want to do that unless the VoIP part of your app is going to be something you charge for and make a good margin on. Running a VoIP server, with all the security and network considerations that go along with it, is a full-time job. It may start out being easy, but once a few customers start connecting and the fraudsters come along it will take on a life of its own. In the decade I've been messing around with SIP, I'd estimate 75% of providers have gone out of business, and it was their full-time job.
Besides all that, I'd be surprised if there wasn't a SIP provider that suited your needs. These days there are highly sophisticated services available that let you control every aspect of your call flow with your own code (anveo, tropo, twilio), right down to free services (sip2sip, sipbroker) that may be all you need to get started.
For the client software there are various SIP SDKs you'll be able to leverage (pjsip).

Creating a simple mobile agent system

I am looking to create a simple mobile agent system which will deal with 4 tasks, i.e. 4 different mobile agent jobs: database update, meeting scheduling, network service discovery and kernel update.
I have done my research and have seen different frameworks such as Aglets, JADE, AgentBuilder etc. My question is: which one should I use? Also, I need to set up the base code for it to work; can someone point me to a site or help me set up the basic functions of the mobile agent?
I've read about the Tahiti server for the Aglets model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I created a method of type agent, but I couldn't pass it through the remote method implementation. I was reading about TCP and UDP socket programming, and I was thinking it might be more appropriate to do it using socket programming. In this case, would this still be called an agent? I was thinking about the server sending datagram packets to multiple clients.
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early 90s, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear about which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks; I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system maintenance tasks, such as DB tuning and kernel patching, on large networks, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE and I agree with the first answer: agent systems usually take a lot of overhead to get going, so if you can avoid them, please do. If, however, you choose to proceed, choose a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like the Yellow Pages: other agents don't have to know which agents are running and what services are supplied; they can simply ask the DF.
Also, JADE's ContractNetBehaviours help simplify communication.
