I am conducting some research on emerging web technologies and have created a very simple Azure website which makes use of WebSockets and MongoDB as the database. I have managed to get all the components working together and now must perform load testing on the application.
The main criterion is the maximum user load that the app can support. At the moment there is one web role instance, so I would probably need to test the maximum user load for that instance, then try with two instances, and so on.
I found some solutions online such as Loadstorm, however I cannot afford to pay to use these services so I need to be able to do this from my own development machine OR from another cloud service.
I have come across Visual Studio Load Tests and they seem quite useful; however, it seems they require VS Ultimate and an active MSDN subscription - the prerequisites are listed here. Also, from this video which shows the basics of load tests, it seems like these load tests are created completely separately from the actual web project, so does that mean I can only see metrics related to the simulated users? i.e. I cannot see server-side metrics such as the amount of RAM being used, processor usage, etc.
Any suggestions?
You might create a Linux virtual machine in Azure itself or another hosting provider and use ApacheBench (ab) or JMeter to do simple load testing on your application. Be aware that in such a setup your benchmark servers may be a bottleneck themselves.
Another approach is to use online load testing services which allow some free usage, such as:
loader.io, by SendGrid Labs
LoadStorm
Blazemeter
Blitz
Neotys
Loadimpact
For load-testing, LoadStorm is very reasonably priced, especially compared to on-premises software (and has a free tier with up to 25 virtual clients). You can install software such as JMeter, but you'll still need machines (or VMs) to host and run it from, and you need to make sure that the load-generator machines aren't the bottleneck in your tests.
When you run your tests, you may want to consider separating your web tier from MongoDB. MongoDB will consume as much memory as possible (as that's what gives MongoDB its speed). In a real-world scenario, you'll likely have MongoDB in its own environment. So for your tests, I'd consider offloading MongoDB to its own instance(s), and 10gen has a Worker Role setup that's fairly straightforward to install.
Also remember that NIC bandwidth is 100Mbps per core, which could be a limiting factor on your tests, depending on how much load you're driving.
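To put that figure in perspective (rough, purely illustrative numbers): 100 Mbps is about 12.5 MB/s, so if an average response weighed in around 50 KB, a single core's NIC would saturate near 250 responses per second no matter how much CPU headroom is left.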
One alternative to self-hosting MongoDB: Offload MongoDB to a hoster such as MongoLab. This will allow you to test the capacity of your web app without worrying about the details around MongoDB setup, configuration, optimization, etc. Currently MongoLab offers their free tier hosted in Azure, US West and US East data centers.
Editing my response - I didn't read the question carefully the first time.
Check out this thread for various tools and links:
Open source Tool for Stress, Load and Performance testing
If you are interested in the performance counters of the application under test, you can revisit some of the latest features added to the Visual Studio Online cloud-based load testing service.
http://blogs.msdn.com/b/visualstudioalm/archive/2014/04/07/get-application-performance-data-during-load-runs-with-visual-studio-online.aspx
To get more info on Visual Studio Cloud Load Testing solution - https://www.visualstudio.com/features/vso-cloud-load-testing-vs
In the company I work for, we plan to renew and re-code our 12-year-old online sales web application.
Our traffic is fairly high: over 100,000 sales orders a day, which means there will be at least 1 million interactions per day on the web application.
I want to use Node.js as the web server, which will be integrated with our ERP system running on an Oracle Exadata database.
My question is: performance is very critical for us, and I'm not sure Node.js is scalable enough for this transaction count.
I've read some blogs on the internet which state that some very big companies already use Node.js, but I'm not sure whether they use it as their main, backbone system or only for some smaller corporate applications.
Can you share your experiences, if possible with examples including transaction counts?
Thanks in advance!
Why are you looking at Node.js? What other options are you considering? Why choose one over the other? What expertise does your team have?
Node.js is quite scalable, provided you know what you're doing. How much of your load is mid-tier vs database? If there's a lot going on in the mid-tier, then you need to be able to scale it out horizontally. Here are a few high-level things to consider:
Many people use Docker to containerize their apps and scale them out with Kubernetes (though those aren't Node.js specific).
You'll likely want to learn about PM2 to keep your Node.js processes running.
Use node-oracledb connection pools.
Use bind variables for security and performance (a rough sketch of both points follows this list).
Look into using DRCP if you are using Kubernetes and each container has its own connection pool.
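As an illustration of the pooling and bind-variable points above, here is a minimal sketch assuming node-oracledb; the credentials, connect string, and ORDERS table are placeholders, not your actual schema:

```javascript
const oracledb = require('oracledb'); // npm install oracledb

async function init() {
  // One pool per Node.js process; PM2 or Kubernetes replicas each get their own,
  // which is where DRCP becomes interesting on the database side.
  await oracledb.createPool({
    user: 'app_user',                        // placeholder credentials
    password: process.env.DB_PASSWORD,
    connectString: 'exadata-host/SALESPDB',  // placeholder connect string
    poolMin: 10,
    poolMax: 10,
    poolIncrement: 0,
  });
}

async function getOrder(orderId) {
  const conn = await oracledb.getConnection(); // borrows from the default pool
  try {
    // The bind variable (:id) avoids SQL injection and lets Oracle reuse the parsed statement.
    const result = await conn.execute(
      `SELECT order_id, status, total FROM orders WHERE order_id = :id`,
      { id: orderId },
      { outFormat: oracledb.OUT_FORMAT_OBJECT }
    );
    return result.rows[0];
  } finally {
    await conn.close(); // returns the connection to the pool
  }
}

module.exports = { init, getOrder };
```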
Consider looking through this guide to creating a REST API with Node.js and Oracle Database to get an idea of how things work:
https://jsao.io/2018/03/creating-a-rest-api-with-node-js-and-oracle-database/
I've started to port a web app backend to Google App Engine for scaling. But I'm completely new to GAE and just reading into the concepts. Steep learning curve.
I'm 95% certain that at some point hundreds of thousands, and possibly many millions, of users will start using the web app through a GUI app that I'm writing. And they will be global users, so at some point in the future I'm expecting a relatively stable flow of connection requests.
The GAE Standard Environment comes to mind for scaling.
But I also want the GUI app to react when user-related data changes in the backend. That suggests WebSockets, which aren't supported in the Standard Environment, but are in the Flexible Environment.
Here's my idea: The main backend happens in a Standard app, but the GUI listens to update notifications from a Flexible app through web sockets. The Standard app calls the Flexible app after noteworthy data changes have occurred, and the Flexible app notifies the GUI.
But is that even possible? Because sibling Flexible instances aren't aware of each other (or are they?), how can I trigger the persistent connections held by the Flexible instance with an incoming call from the Standard app to send out a notification?
(The same question goes for the case where I have only one Flexible app and no Standard app, because the situation is kind of the same.)
I'm assuming that the Flexible app can access the same datastore that the Standard app can. Didn't look this one up.
Please also comment on whether the Standard app is even a good idea at all in this case, or whether I should just go with Flexible. These are really new concepts to me.
Also: Are there limits to number of persistent connections held by a Flexible app? Or will it simply start another instance if a limit is reached?
Which of the two environments ends up cheaper in the long run?
Many thanks.
You can only have one App Engine application per project; however, you can have multiple flexible or standard services inside that application.
Whether standard is a good idea depends on your architecture. I'm pretty sure you've looked at the comparison chart; from experience, if your app can work okay with all the restrictions (supported language runtimes, no ability to run background processes, no SSH debugging, among others), I would definitely go for standard, since it performs very well with spikes of traffic and spins up new instances in just seconds. Keep in mind that automatic scaling is needed for the best performance results.
There are multiple ways to connect flexible or standard services to each other. One would be to simply send an HTTP request from one service to another, but there are also other options using GCP services such as Pub/Sub (a rough sketch of the HTTP approach appears after the quoted documentation below).
In the standard environment, you can also pass requests between services and from services to external endpoints using the URL Fetch API.
Additionally, services in the standard environment that reside within the same GCP project can also use one of the App Engine APIs for the following tasks:
Share a single memcache instance.
Collaborate by assigning work between services through Task Queues.
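To make the HTTP approach concrete, here is a rough sketch of the Flexible service, assuming Node.js with the ws package; the /notify endpoint and the message shape are hypothetical. The GUI clients hold WebSocket connections to this service, and the Standard app POSTs to /notify whenever noteworthy data changes:

```javascript
const http = require('http');
const WebSocket = require('ws'); // npm install ws

// GUI clients open WebSocket connections to this Flexible service.
const server = http.createServer(handleNotify);
const wss = new WebSocket.Server({ server });

// Hypothetical internal endpoint the Standard app calls after a data change.
function handleNotify(req, res) {
  if (req.method !== 'POST' || req.url !== '/notify') {
    res.writeHead(404);
    return res.end();
  }
  let body = '';
  req.on('data', chunk => (body += chunk));
  req.on('end', () => {
    // Fan out the notification to every client connected to *this* instance.
    for (const client of wss.clients) {
      if (client.readyState === WebSocket.OPEN) client.send(body);
    }
    res.writeHead(204);
    res.end();
  });
}

server.listen(process.env.PORT || 8080);
```

Note that this only reaches the clients connected to the instance that received the call; once the Flexible service scales to several instances, the Standard app would need to notify each instance, or publish to a Pub/Sub topic that every instance subscribes to, which is where Pub/Sub fits in.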
Regarding Datastore, you can access the same datastore from different services; here is a quickstart for flex and the quickstart for standard.
Which of the two environments ends up cheaper in the long run?
Standard pricing is based on instance hours
Flexible pricing is based on usage of vCPU, memory, and persistent disks
If your service runs very high-performance processes for short periods of time, standard will probably be cheaper; however, if you run low-performance processes for long periods of time, flex will be cheaper. But again, it depends on each use case.
I'd like to know how to test a web service with up to 1 million active users, all accessing the site at the same time.
This is in theory - I don't have a web service like this, but was recently reading this article on how to build a scalable app for > 500K users, and it got me wondering how people would test this?
For the sake of discussion, let's assume that I'm in full control of the service and have 1 million test accounts already created, with the usernames test1 -> test1000000 available. I'd prefer that the accounts were accessing my service from places all over the world, but am open to any suggestions!
EDIT: I'm familiar with JMeter and Selenium, but was concerned that if all the client activity ran from a single location it would be bottlenecked by the local network, and thus not be a great test. So instead of having, say, 10 JMeter clients at different locations running 100K clients each, I was thinking that it might be better to have 1000 JMeter clients testing 1000 users each, all from different locations... but maybe this isn't much of a concern?
I think at a high level, there could be test nodes distributed around the world. Each would contain the logic to authenticate and execute a certain type of transaction. Blocks of test accounts could be distributed to each node and each node would launch the tests in parallel.
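As a rough sketch of what one such node might look like (assuming Node.js 18+ for the built-in fetch; the login endpoint, transaction URL, and account ranges are all hypothetical):

```javascript
// One hypothetical test node: takes a block of accounts (e.g. test250001..test260000),
// authenticates each one, and runs a single representative transaction, in batches.
const BASE_URL = process.env.TARGET_URL || 'https://service.example.com'; // placeholder
const START = Number(process.env.ACCOUNT_START || 1);
const COUNT = Number(process.env.ACCOUNT_COUNT || 1000);
const CONCURRENCY = 100;

async function runUser(i) {
  const username = `test${i}`;
  // Hypothetical endpoints; swap in the service's real auth and transaction calls.
  const login = await fetch(`${BASE_URL}/login`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ username, password: 'password' }),
  });
  const { token } = await login.json();
  await fetch(`${BASE_URL}/api/orders`, {
    headers: { Authorization: `Bearer ${token}` },
  });
}

async function main() {
  const ids = Array.from({ length: COUNT }, (_, k) => START + k);
  while (ids.length > 0) {
    // Launch CONCURRENCY users at a time and wait for the batch to settle.
    await Promise.allSettled(ids.splice(0, CONCURRENCY).map(runUser));
  }
}

main();
```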
At a practical level I would start by looking at a framework such as locust.io, which claims to do exactly this in its tagline :)
http://locust.io/
You can use Apache JMeter or my personal preference, siege.
In the case of siege, I would think of generating a urls.txt file with a million URLs, each representing a call from a user, and running them concurrently.
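A rough sketch of generating such a file with Node.js (the URL pattern and account range are made-up placeholders):

```javascript
// Writes one URL per test account to urls.txt for siege to replay.
const fs = require('fs');

const out = fs.createWriteStream('urls.txt');
for (let i = 1; i <= 1000000; i++) {
  out.write(`https://service.example.com/account/test${i}/dashboard\n`); // placeholder URL
}
out.end();

// Then point siege at the file, e.g.: siege -c 255 -t 10M -f urls.txt
// (concurrency and duration are illustrative).
```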
As for your concern about the locations, Blazemeter offers geo-distributed stress testing too.
You can take a look at the Tsung tool: http://tsung.erlang-projects.org/
It is really lightweight and allows you to run hundreds of thousands of virtual users from an average machine (depending on how complex your script is).
While you can't do multi-step automation at these sites, the following services will let you hit a URL from different client locations (e.g. Asia, North America, Australia), and throttle the bandwidth if you like for testing purposes:
WebPageTest - https://www.webpagetest.org/
- from their about page: "WebPagetest is an open source project that is primarily being developed and supported by Google as part of our efforts to make the web faster." This site also has an API, is open source, and allows you to automate it via a Node API and CLI.
Pingdom
https://tools.pingdom.com/
More info in this blogpost from KeyCDN
I'm writing an application in Node.js for a spare-time, bootstrap project. I have a Windows background and Windows Azure with three-month free trial currently seems like the simplest way to develop, deploy and host the project.
However Windows Azure appears to get expensive after the free trial expires, and in any case I'd like the option to host on non-MS platforms, so I have a couple of questions:
I can see from the tutorial that I need some Windows-specific code to import the port number at which the app should listen - are there many more examples of Windows or Azure specific code requirements further down the line?
I'd like to take a NoSQL approach to data storage since I'm more interested in flexibility and performance than in referential integrity or structural consistency - would it be difficult to wrap Azure Tables in a data access layer that would be reasonably portable to other NoSQL databases such as MongoDB or the various cloud offerings?
Finally, the catch-all question - is there anything else I should be looking out for?
Tackling your second question: there are modules in the NPM registry that can help you here.
Firstly, Microsoft have recently released the Azure SDK for Node as an npm module. This has a rich API that will help you interface with Azure Tables.
There are also NoSQL clients available in the NPM registry for most solutions (including MongoDB).
If you keep your data access simple, you should be able to make use of the various NoSQL clients that are available and create a nice little module layer that sits above all the ones you need to support.
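For illustration, a rough sketch of what such a module layer could look like (the module shape and method names are made up, and only a MongoDB adapter is shown; an Azure Tables adapter built on the Azure SDK would expose the same two methods):

```javascript
// store.js - a minimal storage abstraction; the rest of the app only calls put/get,
// so swapping MongoDB for Azure Tables means writing one more small adapter.
const { MongoClient } = require('mongodb'); // npm install mongodb

function mongoStore(uri, dbName) {
  const client = new MongoClient(uri);
  const ready = client.connect();
  return {
    async put(collection, id, doc) {
      await ready;
      await client.db(dbName).collection(collection)
        .replaceOne({ _id: id }, { _id: id, ...doc }, { upsert: true });
    },
    async get(collection, id) {
      await ready;
      return client.db(dbName).collection(collection).findOne({ _id: id });
    },
  };
}

module.exports = { mongoStore };

// Usage:
//   const store = mongoStore('mongodb://localhost:27017', 'myapp');
//   await store.put('users', 'u1', { name: 'Ada' });
//   const user = await store.get('users', 'u1');
```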
You could even create a public github repository and submit your hard work into the NPM registry for other people to help you develop.
I have built an app on Windows Azure's node.js support as well and there is virtually no lock in if you stick to npm modules and open platforms.
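On your first question (the listening port), the Azure-specific part usually boils down to reading the port from the environment, which keeps the code portable; a minimal sketch assuming the built-in http module:

```javascript
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('hello');
});

// On Azure, PORT is set for you (iisnode passes a named pipe); elsewhere you can
// fall back to an ordinary port, so the same code runs on any host.
server.listen(process.env.PORT || 3000);
```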
You should also check into Microsoft's BizSpark program - you get two reserved instances plus storage for free for two years. It's a great program.
I believe that the MVC Mini Profiler is a bit of a 'God-send'.
I have incorporated it in a new MVC project which is targeting the Azure platform.
My question is - how to handle profiling across server (role instance) barriers?
Is this is even possible?
I don't understand why you would need to profile these apps any differently. You want to profile how your app behaves on the production server - go ahead and do it.
A single request will still be executed on a single instance, and you'll get the data from that same instance. If you want to profile services located on a different physical tier as well, that would require a different approach, involving communication through internal endpoints, which I'm sure the mini profiler doesn't support out of the box. However, the modification shouldn't be that complicated.
That said, if I did want to profile physically separated tiers, I would go about it in a different way: specifically, profile each tier independently, because that's how I would go about optimizing it. If you wrap the call to your other tier in a profiler statement, you can see where the problem lies and still be able to solve it.
By default the mvc-mini-profiler stores and delivers its results using HttpRuntime.Cache. This is going to cause some problems in a multi-instance environment.
If you are using multiple instances, then some ways you might be able to make this work are:
to change the Http Cache to an AppFabric Cache implementation (or some MemCached implementation)
to use an alternative storage strategy for your profiling results (I believe the code includes SqlServerStorage as an example)
Obviously, whichever strategy you choose will require more time/resources than just the single instance implementation.