LDAP vs DNS performance

I am new to both DNS and LDAP and have installed each of them to see how they work. Now I want to compare them in terms of performance and overhead (network latency, network bandwidth, or overall performance). Is there a benchmark tool or source code I can use to measure the difference?
Any suggestions would be helpful.
Thanks in advance.
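Not that I know of a canned tool for this exact comparison, but you can get a first-order number with a small script that times N lookups against each service. Here is a minimal sketch in Node/TypeScript, assuming the ldapjs package and test servers on localhost; the server addresses, base DN, hostname, and search filter are placeholders you would swap for your own lab values:

```typescript
// Rough lookup-latency comparison: RUNS DNS A-record queries vs. RUNS LDAP
// searches against your own test servers. Addresses, base DN and filter
// below are placeholders -- substitute your lab values. Assumes the LDAP
// server allows anonymous search.
import { promises as dnsPromises } from "node:dns";
import ldap from "ldapjs";

const RUNS = 100;
const DNS_SERVER = "127.0.0.1";          // assumed test resolver
const LDAP_URL = "ldap://127.0.0.1:389"; // assumed test directory
const BASE_DN = "dc=example,dc=org";     // assumed suffix

async function timeDns(): Promise<number> {
  const resolver = new dnsPromises.Resolver();
  resolver.setServers([DNS_SERVER]);
  const start = process.hrtime.bigint();
  for (let i = 0; i < RUNS; i++) {
    await resolver.resolve4("host1.example.org"); // placeholder name
  }
  return Number(process.hrtime.bigint() - start) / 1e6; // total ms
}

function ldapSearchOnce(client: ldap.Client): Promise<void> {
  return new Promise((resolve, reject) => {
    client.search(BASE_DN, { scope: "sub", filter: "(cn=host1)" }, (err, res) => {
      if (err) return reject(err);
      res.on("error", reject);
      res.on("end", () => resolve());
    });
  });
}

async function timeLdap(): Promise<number> {
  const client = ldap.createClient({ url: LDAP_URL });
  const start = process.hrtime.bigint();
  for (let i = 0; i < RUNS; i++) {
    await ldapSearchOnce(client);
  }
  const elapsed = Number(process.hrtime.bigint() - start) / 1e6;
  client.unbind();
  return elapsed;
}

(async () => {
  console.log(`DNS : ${(await timeDns()) / RUNS} ms/query`);
  console.log(`LDAP: ${(await timeLdap()) / RUNS} ms/search`);
})();
```

Run it from a separate machine if you want network overhead included; from localhost you are mostly measuring server processing time. For serious DNS numbers, a purpose-built tool like dnsperf is a better bet.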

Related

Express App, diagnosing latency spikes in EC2 instances in Beanstalk

We have a medium-sized Express API that performs well as long as incoming traffic is relatively stable, but when there is a quick uptick in requests, latency briefly shoots through the roof.
Our monitoring graphs show three latency spikes that clearly correlate with spikes in incoming traffic.
The app is perfectly capable of handling a larger volume of requests with low latency; it is only when there is a sudden spike that this happens.
Also of note, the DB does not struggle or show any correlating spikes at all, and the CPU usage is only moderately affected, rarely going much above 50%-60% even at peak times.
Load testing shows the same issue, whether run on a local machine or against another AWS QA environment, so it is probably not network-related.
Given that the DB and CPU are rarely chugging to process requests, and given that the app performs well under higher traffic volumes (requests fulfilled in <100 ms), how would you go about diagnosing the root cause of this spike-specific issue?
We have tried pinpointing poorly performing code bottlenecks, but the savings have been negligible. We have engaged with our devops folks to see if there were some options to smooth this out with config changes, but efforts to give us larger instances and adjust auto-scaling haven't really helped this particular problem.
Here's hoping seasoned eyes can give us some clues. Our assumption is still that there are code improvements that will help; maybe this is a typical issue in Node/Express apps. Happy to give additional details, thanks in advance.
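One thing I would rule out first is the Node event loop saturating during bursts: with moderate CPU and a healthy DB, requests queueing behind synchronous work (JSON serialization, crypto, large loops) is a classic cause of exactly this spike pattern. A minimal sketch using Node's built-in perf_hooks, with no external dependencies, that you could drop into the app's startup:

```typescript
// Log event-loop delay percentiles every 5 s; if the p99 climbs during a
// traffic spike while CPU stays moderate, requests are queueing behind
// synchronous work rather than starving for compute.
import { monitorEventLoopDelay } from "node:perf_hooks";

const histogram = monitorEventLoopDelay({ resolution: 20 }); // sample every 20 ms
histogram.enable();

setInterval(() => {
  // Histogram values are reported in nanoseconds; convert to milliseconds.
  console.log(
    `event-loop delay ms  mean=${(histogram.mean / 1e6).toFixed(1)}` +
      `  p99=${(histogram.percentile(99) / 1e6).toFixed(1)}` +
      `  max=${(histogram.max / 1e6).toFixed(1)}`
  );
  histogram.reset();
}, 5000);
```

If the p99 delay climbs in lockstep with your latency spikes, the fix is finding and offloading the synchronous work (worker threads, streaming, smaller payloads), which would also explain why larger instances and auto-scaling tweaks haven't helped.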

Ways to reduce the latency achieved on my Virtual Machine in Azure

Does anyone have any ways I could reduce the latency for my Azure VM (with managed disks)? I currently have one running in the UK South region at an average of 2.85 ms, which is fantastic compared to the 16 ms I was getting from my on-premises system.
However, if possible, I'd like to get this even lower, preferably down to 0.5 ms. Does anyone know how I could achieve this in the most cost-effective manner?
Thanks
Azure VM size can affect latency; different VM sizes have different network bandwidth.
You could check this blog.
Well, to get 0.5 ms you need to be within roughly 50-60 km (maximum) of the VM: light in fibre covers about 200 km per millisecond, so a 0.5 ms round trip caps the one-way distance at about 50 km. Because physics, you know?
Having said that, about the only remaining network-level option is ExpressRoute, but it probably won't help given you already have <3 ms latency.

How to calculate DNS BIND capacity

How can BIND DNS capacity be calculated, i.e. how many queries per second can the server handle? I am facing an issue where DNS is not responding to some queries, and my technical support says this is because the DNS capacity is being exceeded. He quotes a maximum of 10,000 queries/second, but I am not sure how that figure was calculated.
I am using BIND 9.4.3 on a 16-core Intel 2.13 GHz system. CPU usage is around 6% on each core.
Thanks
This is off-topic for here, but truly the answer can only be found by benchmarking on your specific architecture. It also makes a massive difference whether you're talking about recursive or authoritative DNS service. The former is generally slower, because your server has to reach out to the internet to find the answers it needs.
The version of BIND you are running is very old, BTW. Newer versions have much improved multithreading support, although that wasn't enabled by default until 9.10. More at https://kb.isc.org/article/AA-00629/0/Performance%3A-Multi-threaded-I-O.html
See also my recent blog article at https://www.isc.org/blogs/benchmarking-dns/
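If you just want a rough number before setting up a proper benchmark, a crude probe can fire waves of parallel queries and count completions per second. A sketch in Node/TypeScript follows; the server address and query name are placeholders, and a real measurement should use a dedicated tool such as dnsperf driven from a separate machine:

```typescript
// Crude QPS probe: run CONCURRENCY parallel query loops for DURATION_MS
// and report completed answers per second. Placeholders: SERVER is your
// BIND box, NAME a record it serves. Not a substitute for dnsperf.
import { promises as dnsPromises } from "node:dns";

const SERVER = "192.0.2.53";    // placeholder: your BIND server
const NAME = "www.example.org"; // placeholder: a name it answers for
const CONCURRENCY = 200;
const DURATION_MS = 10_000;

const resolver = new dnsPromises.Resolver();
resolver.setServers([SERVER]);

(async () => {
  let completed = 0;
  const deadline = Date.now() + DURATION_MS;

  async function worker(): Promise<void> {
    while (Date.now() < deadline) {
      try {
        await resolver.resolve4(NAME);
        completed++;
      } catch {
        // a real test would count timeouts/SERVFAILs separately
      }
    }
  }

  await Promise.all(Array.from({ length: CONCURRENCY }, worker));
  console.log(`~${Math.round(completed / (DURATION_MS / 1000))} queries/second`);
})();
```

Note that a single client machine can itself become the bottleneck at high rates, which is one more reason the real answer is benchmarking with proper tooling on your specific hardware.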

Benchmark simulating "realistic" desktop/server workload

I'm currently working on estimating energy consumption from the CPU's performance counters. To choose the best counters, I need a benchmark that simulates a realistic workload.
So, does anybody know a good (free if possible) benchmark suite which simulates usual desktop and/or server workload?
I'm thinking of a suite of isolated benchmarks, e.g.
compile C code
interpretation of JavaScript
some SSL
some IO (disk/network usage)
image conversion
some math problem solving
In fact, a good mix of the tasks a computer executes all the time while a user is working :-).
EDIT: The best would be something where very little floating point gets used.
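To make the idea concrete, here is a toy harness of isolated, integer-heavy tasks in Node/TypeScript; the task mix is purely illustrative, not a validated suite:

```typescript
// Tiny integer-heavy task mix (compression, hashing, disk I/O), each task
// isolated so performance counters can be attributed per workload.
// Deliberately avoids floating point. Illustrative only.
import { gzipSync } from "node:zlib";
import { createHash } from "node:crypto";
import { writeFileSync, readFileSync, unlinkSync } from "node:fs";

function timed(label: string, fn: () => void): void {
  const start = process.hrtime.bigint();
  fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(1)} ms`);
}

const payload = Buffer.alloc(16 * 1024 * 1024, "benchmark payload ");

timed("gzip 16 MiB", () => gzipSync(payload));
timed("sha256 x100", () => {
  for (let i = 0; i < 100; i++) createHash("sha256").update(payload).digest();
});
timed("disk write+read 16 MiB", () => {
  writeFileSync("bench.tmp", payload);
  readFileSync("bench.tmp");
  unlinkSync("bench.tmp");
});
```

Running each task in its own process makes it easier to attach per-workload counter measurements (e.g. via perf on Linux).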
Phoronix Test Suite is your answer!
It can even use an external watt-o-meter.
And it is one of the best CPU and GPU benchmark suites for Linux.
The best benchmark would probably be installing Apache or some other web server and setting up a script on one or more computers to request pages (using HTTP, HTTPS, and whatever other protocols you will use). You could make the script request from localhost, but it would be more realistic to have an external computer making the requests; that way you could also test network latency. A rough sketch of such a script is below.
Once set up, you could use PowerTop to estimate watts used during load.
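A minimal request-generator sketch, assuming Node 18+ for the global fetch; the target URL, concurrency, and request count are placeholders:

```typescript
// Minimal page-request loop to keep the web server loaded while PowerTop
// (or a wall-plug meter) records power draw. Run from a second machine
// for realistic network behaviour. All settings are placeholders.
const TARGET = "http://server-under-test/"; // placeholder URL
const CONCURRENCY = 20;
const TOTAL_REQUESTS = 10_000;

let issued = 0;

async function worker(): Promise<void> {
  while (issued < TOTAL_REQUESTS) {
    issued++;
    const start = Date.now();
    const res = await fetch(TARGET);
    await res.arrayBuffer(); // drain the body so keep-alive sockets are reused
    if (issued % 1000 === 0) {
      console.log(`${issued} requests, last took ${Date.now() - start} ms`);
    }
  }
}

(async () => {
  await Promise.all(Array.from({ length: CONCURRENCY }, worker));
})();
```

Adjust CONCURRENCY up or down to sweep the server from idle to saturation while the power readings are taken.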

hardware infrastructure for public web application

I'd like to start a free budget/personal-finance site and will need plenty of horsepower and storage. I'm definitely a newbie, so how does one get started in terms of hardware infrastructure? Do I need to get a dedicated IP from my ISP and buy my own servers? Do I go with Amazon or SQL Server Data Services/Azure or something like that? Are the latter services free, or is a discounted offering available for non-profit/free services such as the budget/personal-finance site I'm looking to start?
If you don't mind writing your web application in Python, then I'd suggest using Google App Engine. See: What Is Google App Engine?
What I like to do when I have new ideas for a site is to find an inexpensive hosting solution ($10 per month). This allows me to test the idea and see if the site is going to be successful. If it is a flop, I haven't wasted much money and if it is successful I can upgrade to better hosting (dedicated server).
There are many hosting options available and several of them have great tools such as an online SQL Server management studio. Your other option would be to host it yourself if you are prepared to deal with firewall issues, backups, storage, etc.
Whether it is feasible to DIY varies a lot by country... if you have a decent broadband connection with a fixed IP, this can be the cheapest route to play around with first, especially if you need an awful lot of storage.
Note however that many fast broadband connections are only fast for downloads - when you're running a server, the speed your users will see is the upload speed, which is usually a lot less. Also, you'll need to do your own admin and backup etc.
Apart from this, most hosting options have a price tag on top, varying from virtual hosts (sharing a real machine), to colocation (your machine in somebody's data center), to cloud services like Amazon et al. (which scale well) - and you will need to shop around for the software stack and hardware features you really need.
There are really two ways to answer this question; what differentiates them is budget.
One is to properly design the solution, prototype it, benchmark the prototype, extrapolate anticipated user load, add overhead, and scale accordingly. This takes time and costs money, but gives you a supportable solution that serves your customers well.
The other is to just give something, anything a go and fix the problems as they come along. This is quicker and cheaper but might be a headache for a while and might p*** off your customers.
Basically it comes down to budget.
Best of luck.
