Looking at IIS 8 on Windows Server 2012, the documentation says there is virtually no limit on simultaneous connection requests, and that it mostly depends on the hardware you have.
My question is: is there some sort of table to determine something like how much RAM/CPU I need to handle 300,000 connections?
I'm pretty new at this stuff, so all help is greatly appreciated. Thanks in advance!
There really isn't a hard limit with this kind of stuff; it depends on the software and hardware running on the machine. If you are just accepting TCP connections and not doing anything with them, you will be able to handle a lot. However, if you are also running a database server on that machine, you will be spending CPU cycles on the database and be able to handle fewer connections.
I am sure that this question has already been answered, but unfortunately I don't know the right keywords, so my searches have been unsuccessful so far.
Scenario: I want to transmit a live stream over mobile internet from a Raspberry Pi and, depending on the available bandwidth, scale the stream down and back up again when more bandwidth becomes available.
My two questions for the network specialists among you:
I know I can actively measure the bandwidth, but how would you do this without interfering with the stream that is already transmitting? Should I commit a fixed bandwidth to the stream and then slowly probe the remaining bandwidth with a test tool? Or are there existing practical solutions?
Can I determine, in the mobile network or at the network interface, when a bottleneck is reached?
Passive methods would be my preference, where I don't have to load the link myself. For example, I could track how much bandwidth the stream uses and how much actually arrives. But how do I make sure there is enough capacity before I raise the bitrate?
Thanks for your wisdom ;)
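For the passive preference, one low-impact approach (a minimal sketch, assuming a Linux host; the interface name wwan0 is a placeholder) is to sample the kernel's per-interface byte counters and compute the achieved uplink rate, then compare it against the encoder's configured bitrate:

```python
# Passive throughput estimate by sampling Linux interface byte counters.
# Sketch only: assumes a Linux host; "wwan0" is a placeholder interface name.
import time

def read_tx_bytes(iface="wwan0"):
    # /proc/net/dev: 8 receive fields, then 8 transmit fields per interface
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[8])  # 9th field = transmitted bytes
    raise ValueError(f"interface {iface!r} not found")

def tx_rate_bps(iface="wwan0", interval=1.0):
    """Average transmit rate in bits/s over `interval` seconds."""
    before = read_tx_bytes(iface)
    time.sleep(interval)
    after = read_tx_bytes(iface)
    return (after - before) * 8 / interval

if __name__ == "__main__":
    print(f"uplink: {tx_rate_bps():.0f} bit/s")
```

This only shows what you are currently pushing, not the headroom; a common heuristic is to step the bitrate up in small increments and back off when the achieved rate stops tracking the configured one or latency rises.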
I'm currently working on an API for an app I'm developing, which I don't want to say too much about; I'm a solo dev with no patents, so it's probably a good idea to keep it anonymous. It uses socket.io together with node.js to interact with clients and vice versa, which I might swap out later for Elixir and its sockets, but that isn't relevant for now. Now I'm trying to look into cloud hosting, but I'm having a rough time finding a good service to use.
These are my requirements:
24/7 uptime
Low memory and CPU requirements (at least to start with). About 1+ GB with 2+ cores will most likely suffice (I figure node needs 2 or more threads to handle async work well)
Preferably free for maybe even a year, or just really cheap, but that might be much to ask
Must somehow be able to run some sort of database. I haven't really settled on this yet, but I want to implement a custom currency at some point, and probably the ability to add some cooldowns, so it can be fairly simple and small. If anybody has tips on what database I should use, that would also be very welcome. I was thinking of Cassandra because of its blazing-fast performance and expandability, but I also want to look into remote databases, especially if I'm going to take the product international
Ability to keep open socket.io connections, as you've probably guessed :P
Low-ping, decently high-bandwidth internet. The socket.io connections are lightweight and not a lot of data has to be sent; mostly packets of a few kilobytes every now and then for all of the clients.
If this information is too vague or you want to know some other requirements I haven't thought of, let me know.
Check out Heroku (PaaS); they have a free tier to start with.
I have the idea to develop a gaming server (not an MMO server, but a server that can handle many game instances, like many chess matches at the same time).
I was thinking it could be interesting for the server to spread clients across many sockets/ports at a time. Is there any benefit in doing this?
For example:
5 game instances on port 1234
5 game instances on port 1235
etc ...
I was also thinking about firewalls. Does a firewall do its job faster with heavy traffic on a single port, or with small amounts of traffic spread across many ports? And could this behavior differ between firewalls (iptables, others)?
Can we optimize bandwidth by splitting the traffic across multiple sockets, or does it not matter, and putting all the traffic on the same socket gives the same result?
Do you think a multi-port setup could give better latency for the clients?
Is there a security gain? If an issue is found on the server (not one that grants root access), could an attacker only cheat against a few people, because the traffic is partitioned across many different sockets?
Do you think it could bring some benefit I haven't thought of (aside from the programming difficulty, of course)?
Thank you for your responses.
Separating the traffic does not necessarily improve the performance of the network handling. Just because you're allocating more network resources doesn't mean overall performance improves; you'll most likely just end up with a bottleneck somewhere else. You could have multiple ports and still have performance issues if you only use a single thread, and conversely have a single port with really good, proven performance if you use multiple threads (e.g. a thread pool) for that socket (using epoll, kqueue, IOCP, etc.). It all depends on how much resource you allocate overall. I strongly suggest you use asynchronous sockets for your server: epoll or kqueue on Unix (epoll for Linux, kqueue for FreeBSD) or IOCP for Windows. They have the best performance, period.
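To make the single-port, event-driven approach concrete, here is a minimal sketch in Python; the standard selectors module uses epoll on Linux and kqueue on BSD under the hood, so one loop on one port can multiplex many clients (port 1234 is just the example number from the question):

```python
# Single-port asynchronous server: one event loop multiplexes all clients.
# Minimal sketch; selectors.DefaultSelector picks epoll/kqueue automatically.
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    conn, addr = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, handle)

def handle(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # echo back; a real server would dispatch by game
    else:
        sel.unregister(conn)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 1234))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:
    for key, _ in sel.select():
        key.data(key.fileobj)  # registered callback: accept() or handle()
```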
My gut feeling is that using a dedicated port per game will slow down the server. It's better to have a single port and to denote the game in an application-level protocol.
It also limits you to ~64K games and can introduce issues with address reuse. On the other hand, I can't see what it gains either; to be honest it looks like unnecessary complexity.
I am the author of hellepoll so my gut feeling is considered but not authoritative.
Specifying the game at the application level also makes it easier to move in and out of games and a lobby on the same connection.
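As a rough illustration (a sketch only; the length-prefix-plus-JSON framing is an assumed format, not a recommendation), each message can carry a game identifier so the server routes it internally rather than dedicating a port per game:

```python
# Application-level framing: every message carries a game ID, so one
# connection can serve many games and a lobby. The 4-byte length prefix
# and JSON payload are illustrative choices.
import json
import struct

def encode_message(game_id, payload):
    body = json.dumps({"game": game_id, "data": payload}).encode()
    return struct.pack("!I", len(body)) + body  # length-prefixed frame

def decode_message(frame):
    (length,) = struct.unpack("!I", frame[:4])
    msg = json.loads(frame[4:4 + length])
    return msg["game"], msg["data"]

# One connection, two games and the lobby, interleaved freely:
for raw in (encode_message("lobby", {"join": "chess-42"}),
            encode_message("chess-42", {"move": "e2e4"}),
            encode_message("chess-17", {"move": "d7d5"})):
    print(decode_message(raw))
```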
What exactly is the question?
You'll need a multiplexing socket system call like poll (or ppoll or pselect) on Linux.
Your concern might be related to the C10k problem.
Say I have a bunch of web servers, each serving hundreds of requests per second, and I want to see real-time stats like:
Request rate over the last 5s, 60s, 5 min, etc.
Number of unique users seen, again per time window
Or in general for a bunch of timestamped events, I want to see real-time derived statistics - what's the best way to go about it?
I've considered having each GET request update a global counter somewhere, then sampling that at various intervals, but at the event rates I'm seeing it's hard to get a distributed counter that's fast enough.
Any ideas welcome!
Added: Servers are Linux running Apache/mod_wsgi, with a Python (Django) stack.
Added: To give a sense of the event rates I want to track stats for, they're coming in at over 10K events/s. Even incrementing a distributed counter at that rate is a challenge.
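One pattern that sidesteps the hot distributed counter (a sketch under assumptions, not a drop-in for the Django stack) is to accumulate counts in process-local memory and flush the delta to a central store once per second, so the shared counter sees one write per process per interval instead of thousands:

```python
# Process-local counting with periodic flush: each web process increments
# an in-memory counter (cheap), and a background thread pushes the delta
# to a central aggregator once per second. Sketch only; `flush_to_central`
# is a placeholder for e.g. a Redis INCRBY or a statsd UDP packet.
import threading
import time

class WindowedCounter:
    def __init__(self, flush_interval=1.0):
        self._lock = threading.Lock()
        self._count = 0
        self._interval = flush_interval
        threading.Thread(target=self._flusher, daemon=True).start()

    def incr(self, n=1):
        with self._lock:
            self._count += n

    def _flusher(self):
        while True:
            time.sleep(self._interval)
            with self._lock:
                delta, self._count = self._count, 0
            if delta:
                flush_to_central("requests", delta)  # one write per interval

def flush_to_central(metric, delta):
    # Placeholder: in practice a Redis INCRBY, a statsd packet, etc.
    print(f"{time.time():.0f} {metric} +{delta}")

requests = WindowedCounter()
for _ in range(25000):   # simulate a burst of request handlers
    requests.incr()
time.sleep(2.5)          # let the flusher emit a couple of intervals
```

Unique users per window need a different structure (a set, or an approximate sketch like HyperLogLog), but the same flush-the-delta pattern applies.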
You might like to help us try out the beta of our agent for application performance monitoring in Python web applications.
http://newrelic.com
It delves more into application performance than just the web server, but since any bottlenecks aren't generally going to be in the web server but in your application, that should be more useful anyway.
Disclaimer: I work for New Relic and this is the project I am working on. It is a paid product, but during the beta it is free with all features. Later, when that changes, if you don't want to pay for it, there is still a free Lite subscription level that gives you basic web metrics reporting, which still covers some of what you are after. Anyway, right now would be a great opportunity to use it to debug your performance while you can.
Virtually all good servers provide this kind of functionality out of the box. For example, Apache has the mod_status module and Glassfish supports JMX. Furthermore, there are many commercial packages for monitoring clusters, such as Hyperic and Zenoss.
What web or application server are you using? It is difficult to provide a solution without that information.
Look at using WebSockets; their overhead is much smaller than an HTTP request's, and they are very well suited to real-time web applications. See http://nodeknockout.com/ for Node-based WebSocket examples.
http://en.wikipedia.org/wiki/WebSocket
You will need to run a daemon if you want to run it alongside your Apache server.
Also take a look at:
http://kaazing.com/ if you want less hassle, but are willing to fork out some cash.
On the Windows side, Performance Monitor (perfmon) is the tool you should investigate.
As Jared O'Connor said, you should specify what kind of web server you want to monitor.
I have a Linux web server farm with about 5 web servers, web traffic is about 20Mbps.
We currently have a Barracuda 340 Load Balancer (keep away from this device; it's a piece of crap!) that is acting as a firewall. I want to put in a dedicated firewall, and I'd like to know what people's opinions are on building versus buying one.
Main requirements:
Dynamically block rogue traffic
Dynamically rate-limit traffic
Block all ports except 80, 443
Limit port 22 to a set of IPs
High availability setup
Also, if we go the build route, how do we know what level of traffic the system can handle?
As they say, "there is more than one way to skin a cat":
Build it yourself, running something like Linux or *BSD. The benefit is that it makes the dynamic part of your question easy; it's just a matter of a few well-placed shell/Python/Perl/whatever scripts (see the sketch below). The drawback is that your ceiling traffic rate might not match a purpose-built firewall device, although you should still be able to achieve data rates in the 300 Mbit/s range (you start hitting PCI bus limitations at that point). This may be high enough that it won't be a problem for you.
Buy a dedicated "firewall device". A possible drawback is that the "dynamic" part of what you're trying to accomplish is somewhat more difficult; depending on the device, this could be easy (Net::Telnet/Net::SSH come to mind), or not. If you are worried about peak traffic rates, you'll have to check the manufacturer's specifications carefully: several of these devices are prone to the same traffic limitations as "regular" PCs, in that they still run into the PCI bus bandwidth issue, etc. At that point, you might as well roll your own.
I guess you could read this more as the "pros and cons" of doing either, if you want.
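To make the "dynamic part via scripts" from the build option concrete, here is a minimal sketch in Python; the log path, IP regex, and threshold are placeholder assumptions, and it presumes a base iptables policy already allowing 80/443 and restricting 22:

```python
# Dynamic blocking as a "well-placed script": scan a web access log for
# abusive IPs and insert iptables drop rules on the fly. Sketch only;
# a real setup would tail the log continuously and expire old blocks.
import re
import subprocess
from collections import Counter

LOG = "/var/log/nginx/access.log"   # placeholder path
THRESHOLD = 1000                    # requests seen before we block

def block(ip):
    # Equivalent to: iptables -I INPUT -s <ip> -j DROP
    subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"],
                   check=True)

def main():
    hits = Counter()
    blocked = set()
    with open(LOG) as f:
        for line in f:
            m = re.match(r"(\d+\.\d+\.\d+\.\d+)", line)
            if not m:
                continue
            ip = m.group(1)
            hits[ip] += 1
            if hits[ip] > THRESHOLD and ip not in blocked:
                block(ip)
                blocked.add(ip)

if __name__ == "__main__":
    main()
```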
FWIW, we run dual FreeBSD firewalls at my place of employment and regularly push 40+ Mbit/s with no noticeable load or issues.
Definitely build. I help manage an ISP and we have two firewalls we built; one is for failover/redundancy. We use a program called pfSense, and I couldn't recommend it more. It has a great web interface for configuration, and we actually run it off a CompactFlash card.
At my current startup, we have used pfSense to replace multiple routers/firewalls, and its throughput matches much more expensive routers.
Maybe that is why Cisco is having trouble? :)
Related to high availability: OpenBSD can be configured in a failover/HA way for firewalls; see this description. I've heard of demos where such setups performed as well as (if not better than) high-end Cisco gear.
Over the last 8 years we have maintained a small development network with about 20 to 30 machines, with one computer dedicated to being the firewall.
We never actually ran into serious problems, but we are now replacing it with a dedicated router/firewall solution (though we haven't decided which yet). The reasons: simplicity (the goal is the firewall, not maintaining the Linux system running it as well), less space, and less power consumption.
Don't know much about this field, but maybe an Astaro security gateway?
I would go for a dedicated firewall product in this scenario. I have used Checkpoint's range of firewall products for many years and have always found them easy to set up and manage, with great support. Using Checkpoint or one of their competitors is a fairly expensive option, especially compared to open-source software, so it depends on your budget.
I've also used Cisco's line of PIX and ASA firewalls. These are also good, but in my opinion they are more difficult to manage.