Automation vs performance testing

I am a beginner in performance testing and I would like to ask: is it possible to transform automation tests into performance tests?
For example, I have an automated script for a login scenario with X users. Would it be good practice to use the statistics from running that script to produce a performance diagram?

Up to a certain extent, yes: you will get response times and maybe some relationship between the number of users and the response time. However, there are some constraints:
Most probably you won't get all the metrics and KPIs you can get with protocol-level tools.
Browsers are very resource intensive; for example, Firefox 94's system requirements are at least 1 CPU core and 2 GB of RAM per browser instance, so simulating even 100 users would call for roughly 100 cores and 200 GB of RAM.
So rather than re-using the existing automation tests as-is for checking performance, I would convert them into a protocol-level performance test script, which gives better results and a far smaller resource footprint - along the lines of the sketch below.
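For illustration, here is a minimal protocol-level sketch of such a login scenario in Node.js/TypeScript. The URL, credential fields, and user count are assumptions, and a dedicated tool (JMeter, Gatling, k6, etc.) would give you far richer metrics:

    // Hypothetical login endpoint and payload - adjust to your application.
    // Requires Node 18+ for the global fetch.
    const LOGIN_URL = "https://example.com/login";

    // One virtual user: send the login request and measure wall-clock time.
    async function loginOnce(user: string): Promise<number> {
      const start = performance.now();
      const res = await fetch(LOGIN_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ username: user, password: "secret" }),
      });
      await res.text(); // drain the body so timing covers the full response
      return performance.now() - start;
    }

    // Fire X concurrent logins and report response-time statistics.
    async function run(users: number): Promise<void> {
      const timings = await Promise.all(
        Array.from({ length: users }, (_, i) => loginOnce(`user${i}`))
      );
      const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
      console.log(
        `${users} users: avg ${avg.toFixed(0)} ms, max ${Math.max(...timings).toFixed(0)} ms`
      );
    }

    run(100);

Plotting the averages per user count gives you the users-vs-response-time diagram mentioned above, without paying a browser's CPU/RAM cost for each simulated user.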

Related

Most efficient way to get the user's country in Next.js/Node.js?

In a Next.js app, what would be the most efficient (fastest) way to retrieve the user's country?
Among other things, I would use it to determine which scripts are loaded using next/script.
I looked into node-geoip and fast-geoip, but even though fast-geoip has a very thorough explanation (quoted below), I do not understand the mechanisms behind Next.js/Node.js well enough to evaluate the methods properly.
Concretely, what geoip-lite does is that, on startup, it reads the whole database from disk, parses it and puts it all on memory, thus this results in the startup time being increased by about ~233 ms along with an increase of memory being used by the process of around ~110 MB, in exchange for any new queries being resolved with low sub-millisecond latencies (~0.02 ms).
This works if you have a long-running process that will need to geolocate a lot of IPs and don't care about the increases in memory usage nor startup time, but if, for example, your use-case requires only geolocating a single IP, these trade-offs don't make much sense as only a small part of the database is needed to answer that query, not all of it.
This library tries to provide a solution for these use-cases by separating the database into chunks and building an indexing tree around them, so that IP lookups only have to read the parts of the database that are needed for the query at hand. This results in the first query taking around 9ms and subsequent ones that hit the disk cache taking 0.7 ms, while memory consumption is kept at around 0.7MB.
Wrapping it up, geoip-lite has huge overhead costs but sub-millisecond queries whereas this library doesn't have any overhead costs but its queries are slower (0.7-9 ms).
As geoip would be called for every visitor, I assume it would have to read the whole database on each initialization, thereby making fast-geoip the better choice?
Or is there some built-in mechanism that keeps the database in memory across subsequent requests when it is frequently used, hence making node-geoip the better choice?
Or am I approaching the problem the wrong way - should I instead see whether there is some way to get the location via the user's browser?
Would appreciate any feedback, even if there is a completely different path worth exploring :-)
I read the documentation for fast-geoip. It's designed for "serverless" cloud services such as AWS Lambda, GCP Cloud Functions, and Cloudflare Workers, where RAM is limited and expensive.
Note the package author's emphasis on low steady-state RAM use.
In summary, assuming a cloud VM/bare-metal deployment and the need to call the IP-to-location method on every page request, there is probably no compelling reason to use the above package.
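To make the trade-off concrete, here is a sketch of the long-running-process case: geoip-lite loads its database into memory once, when the module is first imported, and every subsequent request in the same Node process reuses that in-memory copy (a bare Node server is shown for brevity; the same holds for a Next.js server process):

    import http from "node:http";
    import geoip from "geoip-lite"; // pays the ~233 ms / ~110 MB cost once, at process startup

    http.createServer((req, res) => {
      // Take the client IP from the proxy header if present, else the socket.
      const ip =
        (req.headers["x-forwarded-for"] as string | undefined)?.split(",")[0].trim() ??
        req.socket.remoteAddress ?? "";
      const geo = geoip.lookup(ip); // in-memory lookup, sub-millisecond, no disk I/O
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify({ country: geo?.country ?? "unknown" }));
    }).listen(3000);

So there is no re-initialization per visitor: the process pays the startup cost once and then serves every request from memory, which is what makes geoip-lite attractive outside serverless environments.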
PS: Check if the above packages require you to rotate a DB file on disk every few weeks (or rebuild+redeploy your Node app) to keep data up to date. There are commercial REST APIs such as the one in my bio (I am the developer) that may mitigate this hassle, YMMV.

Performance test vs Load test vs stress test

I am testing a web application built on a REST API. I want to run performance, load, and stress tests against my application, and I would like to know the difference between a performance test, a load test, and a stress test.
Performance Testing - a testing technique, not something you can apply to your web application directly. Performance testing is a sub-type of non-functional testing, and Load Testing and Stress Testing are in turn subsets of Performance Testing.
Load Testing - testing how your application behaves under the anticipated load; i.e., if you expect 500 concurrent users, assessing your application under that load is Load Testing.
Stress Testing - revealing the application's boundaries and breaking points, finding bottlenecks, etc. (a sketch follows this list). It answers questions such as:
what is the maximum capacity of the system
how many users it can handle while providing reasonable response times
which component breaks first
does the system recover when the load returns to normal
See Why ‘Normal’ Load Testing Isn’t Enough for a more detailed explanation.
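As an illustration of the difference, here is a hedged sketch in TypeScript (the endpoint, user counts, and thresholds are all assumptions): holding the anticipated load steady is a load test, while stepping the load upward until response times or errors degrade is a stress test:

    const TARGET = "https://example.com/api"; // hypothetical endpoint under test

    // One virtual user issuing requests for `durationMs`, recording latencies.
    async function virtualUser(durationMs: number, latencies: number[], errors: { n: number }) {
      const end = Date.now() + durationMs;
      while (Date.now() < end) {
        const start = Date.now();
        try {
          const res = await fetch(TARGET);
          if (!res.ok) errors.n++;
        } catch {
          errors.n++;
        }
        latencies.push(Date.now() - start);
      }
    }

    // Run one load step at a fixed concurrency and report p95 latency.
    async function step(users: number, durationMs: number) {
      const latencies: number[] = [];
      const errors = { n: 0 };
      await Promise.all(
        Array.from({ length: users }, () => virtualUser(durationMs, latencies, errors))
      );
      latencies.sort((a, b) => a - b);
      const p95 = latencies[Math.floor(latencies.length * 0.95)] ?? 0;
      return { users, p95, errorRate: errors.n / Math.max(latencies.length, 1) };
    }

    // Load test: hold the anticipated load (e.g. the 500 users above).
    // Stress test: keep raising it until p95 latency or error rate breaches
    // a threshold - that step is the system's breaking point.
    for (let users = 100; users <= 2000; users += 100) {
      const r = await step(users, 60_000);
      console.log(r);
      if (r.p95 > 2000 || r.errorRate > 0.01) {
        console.log(`Breaking point around ${users} concurrent users`);
        break;
      }
    }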

Troubleshooting Azure Search poor performance

I am seeing erratic performance with an Azure Search Basic instance. Our index only has 1,544 documents and is 28MB in size, so I would expect searches to be very fast.
Azure Application Insights is reporting 4.7K calls to Azure Search from our app within the last 12 hours, with an average response time of 2.1s and a standard deviation of 35.8s(!).
I am personally seeing erratic performance during my manual testing. A query can take 20+ seconds at one moment, and then just a bit later the same query will take less than 100ms.
The queries are very simple. Here's an example query string:
api-version=2015-02-28&api-key=removed&search=&%24count=true&%24top=10&%24skip=0&searchMode=all&scoringProfile=FieldBoost&%24orderby=sortableTitle
What can I do to further troubleshoot this issue?
First off, I assume you have a fairly even distribution of queries, which based on your numbers works out to only ~1 query per second. Does that sound correct? If not, and you are seeing large spikes of queries, it is very possible that you do not have enough replicas (copies of the index) to handle the query load. Please note that a single-replica Basic service is targeted to handle low single-digit QPS (although this can vary widely based on the complexity or simplicity of the queries). If you go beyond the limits of the service, latency can certainly become an issue. A good way to drill into this is to use Azure Search Traffic Analytics, which can expose search metrics, including queries per second over various timeframes, as well as the latency metrics we are discussing.
Also, most importantly, reuse HTTP connections and leverage HTTP connection pooling wherever you can. In .NET this means reusing a single HttpClient instance, or a single SearchIndexClient instance if you are using the Azure Search SDK.
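The same advice translated to Node, as a sketch using the current JavaScript SDK (@azure/search-documents, which postdates this question; the endpoint, index name, and key are placeholders): create one client at module scope and share it across requests, so the underlying HTTP connections stay pooled instead of being re-established per query:

    import { SearchClient, AzureKeyCredential } from "@azure/search-documents";

    // Created once per process and reused by every request handler.
    const client = new SearchClient(
      "https://my-service.search.windows.net", // hypothetical service URL
      "my-index",                              // hypothetical index name
      new AzureKeyCredential(process.env.SEARCH_API_KEY ?? "")
    );

    export async function search(term: string) {
      // Reusing `client` keeps connections pooled across calls.
      const response = await client.search(term, { top: 10, includeTotalCount: true });
      const docs: unknown[] = [];
      for await (const result of response.results) docs.push(result.document);
      return { count: response.count, docs };
    }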
I gathered more data and posted my results over at the Azure Search forum.
The slowdowns are due to the fact that we're running a single basic instance and code deployments by the Azure Search team cause a brief (a few minutes in my experience) interruption / degradation in service.
I find running two basic instances too expensive. Our search traffic doesn't warrant two instances except for availability purposes.
It's my understanding from the forum that the free tier has generally higher availability than a single basic instance. As a result, I have submitted a feedback item suggesting a paid shared tier that would provide more storage than the free tier while retaining higher availability than a single dedicated instance.

synthetic multi-node crossbar system implementation

I am implementing a system composed of a collection of small systems, i.e. Raspberry Pi, Yun, BeagleBone, the occasional PC. Crossbar.io has real promise ... but, as I understand it, it doesn't currently support multiple nodes. Am I correct? Does anyone know when that might happen?
In the meantime it occurred to me that each individual node can offer an HTTP interface that I might be able to use for my purposes. My initial thought is to create workers that wrap access to the web interface on subsidiary nodes. This fits the overall architecture of the system I want to create - does it have any merit? Is it tractable? I'm new to WebSockets - any insight would be a great help.
Thanks for your time,
Al
In general that does sound like a fit for Crossbar.io.
There is no timeline on multi-node (i.e. multiple routers), but we hope to have at least hot-standby nodes for high availability ready in Q1. Other than for high availability, I think that a single instance should provide sufficient performance for most applications out there - on a single current (non-high-end) Xeon we're talking tens of thousands of events a second, and concurrent connections are mostly limited by RAM (and 100s of thousands on a single box are definitely not a problem). (If you need more than that then I'd be very interested in your specific use case - we want to learn more about our users.)
I don't completely understand the second part of your question: What precisely is the architecture you're planning here? If you're talking about the integrated Web server, then with recent optimizations (it can now use multiple cores) this should be enough for even moderately big sites, and with SPAs you're not likely to ever run into performance issues.
Hope this helps, and I'll be glad to answer in more detail once you've clarified the second part.
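In the meantime, the "worker wraps a subsidiary node's HTTP interface" idea from the question might look roughly like this with Autobahn|JS (the router URL, realm, procedure name, and the device's HTTP endpoint are all illustrative): a WAMP component registers a procedure that simply proxies to the small device's local HTTP API, so every node's functionality is reachable through the single Crossbar.io router:

    import autobahn from "autobahn";

    const connection = new autobahn.Connection({
      url: "ws://crossbar-host:8080/ws", // hypothetical router endpoint
      realm: "realm1",
    });

    connection.onopen = (session) => {
      // Expose the subsidiary node's HTTP interface as a WAMP procedure.
      session.register("com.example.pi1.read_sensor", async () => {
        const res = await fetch("http://pi1.local/sensor"); // device's local HTTP API
        return res.json();
      });
    };

    connection.open();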

How to run a stress test on Azure Table Storage table partition?

I've read some MSDN articles about Azure Table Storage and some techniques/strategies for PartitionKey selection and how it can benefit performance in scalable solutions.
One thing that had my attention were stress tests, something that was mentioned here.
But I couldn't find many examples of them.
How do I perform these tests, and in which situations?
Stress testing is trying out a system at normal (and extreme) loads to figure out its limitations. The idea is to create conditions similar to the intended production system, then perform actions similar to those your application will perform, measuring their performance.
By stress testing Azure Tables, you'll be able to make sure that it'll be able to support the load generated by your application. It'll also allow you to play with different partitioning strategies and see their effect on performance.
To perform such a stress test, design a partitioning/keying strategy, then fill an Azure Table with a typical amount of data for your application. Then perform insertions, updates, queries and other actions as fast as possible and see if the performance meets your demands.
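A minimal sketch of such a test with the current JavaScript SDK (@azure/data-tables; the connection string, table name, entity size, and the two partitioning strategies are assumptions): it inserts entities as fast as possible under each strategy and reports the elapsed time, letting you compare PartitionKey choices directly:

    import { TableClient } from "@azure/data-tables";

    const client = TableClient.fromConnectionString(
      process.env.STORAGE_CONNECTION_STRING ?? "",
      "stresstest" // hypothetical table name
    );

    // Two candidate PartitionKey strategies: everything in one hot
    // partition vs. spreading entities across many partitions.
    const strategies = {
      single: () => "hot",
      spread: () => `p${Math.floor(Math.random() * 100)}`,
    };

    async function run(name: keyof typeof strategies, count: number) {
      const partitionKey = strategies[name];
      const start = Date.now();
      for (let i = 0; i < count; i++) {
        await client.createEntity({
          partitionKey: partitionKey(),
          rowKey: `${Date.now()}-${i}`,
          payload: "x".repeat(512), // stand-in for a typical entity of your app
        });
      }
      console.log(`${name}: ${count} inserts in ${Date.now() - start} ms`);
    }

    await client.createTable();
    await run("single", 1000);
    await run("spread", 1000);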
