I have a considerably large application that uses MSDTC. How many ports should I open? Is there any way to determine it?
EDIT: I know what ports I need to open, I don't know how many I need.
When we've had to do this kind of debugging, this article has been especially useful:
How to troubleshoot MS DTC firewall issues. It includes an app called DTCPing that helps you rapidly pinpoint what the problem is.
As far as I remember the following ports were used:
TCP Port 1433 (Default port used by SQL Server)
UDP Port 1434 (Used by SQL Server)
TCP Port 3372 (Used by MSDTC.EXE)
I think Migol wants to know how big the range of the RPC dynamic port allocation should be.
In the KB they mention a minimum of 100 ports:

"Furthermore, previous experience shows that a minimum of 100 ports should be opened, because several system services rely on these RPC ports to communicate with each other."
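If you need to restrict the RPC dynamic range yourself, Microsoft documents doing it through the registry; the 5000-5100 range below is only an example (pick a width of at least 100 ports, per the KB):

```
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet
  Ports                  REG_MULTI_SZ  5000-5100
  PortsInternetAvailable REG_SZ        Y
  UseInternetPorts       REG_SZ        Y
```

A reboot is required before the new range takes effect.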
So I would design a benchmark application to test different values of your dynamic range.
Snippet 1:

http.createServer(onRequest).listen(8888);
http.createServer(onRequest).listen(8080);

In this case, I understand that two different HTTP servers are created which listen on different ports.
Snippet 2:

http.createServer(onRequestA).listen(8888);
http.createServer(onRequestB).listen(8080);

In this case the servers listen on different ports and also perform different actions.
I have a few questions.
Are these two approaches commonly used in the real world?
Is there really an advantage of snippet 1?
If such multiple servers can be created, what is the maximum number
of servers that can be created from a single node instance?
To answer your questions directly:
Are these two approaches commonly used in the real world?
It depends on what you're trying to achieve on those ports.
Is there really an advantage of snippet 1?
It also depends on the action you plan to take on the ports; if you want to serve the same requests from all of them, there is little point in running multiple ports.
If such multiple servers can be created, what is the maximum number of servers that can be created from a single node instance?
Note that, by standard practice, a non-root process cannot listen on a port below 1024, and the port number space itself only has 65536 ports (0-65535), which bounds how many servers one machine can expose.
I am trying to open as many network connections as I can from one machine, using only a few machines in total. I just want to be sure: will establishing many connections between the same servers give me results similar to having many connections from different servers?
Yes, and the issues related to that are known as the C10K problem!
However, a connection to localhost is not the same as a distant remote one: the latency and the bandwidth are quite different.
Maybe you want some web server benchmarking? There are tools for that!
I have already built a server-client application using UDP sockets, but my server is not capable of handling more than one client at a time. Now I want to modify my application so that there are 10 clients, each running on a different machine, with my server running on a separate machine. I want my server to be able to communicate with each of the 10 clients, and I also don't want to miss the data coming from any of them.
What is the best possible way to do it? Kindly share some examples with me.
I have been searching the internet for a week, but was unable to find anything that suits my application's requirements.
Waiting for help.
Our project needs to load-balance TCP packets to node.js.
The proposal is: (Nginx or LVS) + Keepalived + Node Cluster
The questions:
The highly concurrent client connections to the TCP server need to be long-lived. Which one is more suitable, Nginx or LVS?
We need to assign different priority levels to the node masters (the localhost server should have higher priority than the remote servers). Which one can do this, Nginx or LVS?
Which has lower CPU utilization and higher throughput, Nginx or LVS?
Any recommended documents for performance benchmarking/function comparison between Nginx and LVS?
At last, we wonder whether our proposal is reasonable. Is there any other better proposals or component to choose?
I'm assuming you do not need nginx to serve static assets, otherwise LVS would not be an option.
1) nginx only supports TCP via a 3rd-party module, https://github.com/yaoweibin/nginx_tcp_proxy_module. If you don't need a webserver, I'd say LVS is more suitable, but see my additional comment after the numbered answers.
2) LVS supports priority, nginx does not.
3) Probably LVS: nginx runs in userland, LVS in the kernel.
4) Lies, Damned Lies and Benchmarks. You have to simulate your load on your own equipment: write a node client script and pound your setup.
We are looking at going all node from front to back with up, https://github.com/LearnBoost/up. It's not in production yet, but we are pursuing this route for the following reasons:
1) We also have priority requirements, but they are custom and change dynamically. We are adjusting priority at runtime and it took us less than an hour to program node to do it.
2) We deploy a lot of code updates and up allows us to do it without interrupting existing clients. Because you can code it to do anything you want, we can spin up brand new processes to handle new connections and let the old ones die when existing connections are all gone.
3) We can see everything because we push any metric we want to see into a redis server.
I'm sure it's not the most performant per process/server, but the advantage of having so much programmatic control is worth it, and scaling out has the advantage of more redundancy, so we are not looking at squeezing the last bit of performance out of the stack.
I just checked real quick to see if I could copy/paste a bunch of code, but we are rapidly coding it and it has a lot of references to stuff that would not be suitable for public consumption.
I have a working prototype of a concurrent Scala program using Actors. I am now trying to fine-tune the number of different Actors, etc.
One stage of the processing requires fetching new data via the internet. Of course, there is nothing I can really do to speed that aspect up. However, I figure if I launch a bunch of requests in parallel, I can bring down the total time. The question, therefore, is:
=> Is there a limit on concurrent networking in Scala or on Unix systems (such as a maximum number of sockets)? If so, how can I find out what it is?
In Linux, there is a limit on the number of file descriptors each process can have open. This can be seen using ulimit -n. There is also a system-wide limit in /proc/sys/fs/file-max.
Another limit is the number of connections that the Linux firewall can track. If you are using the iptables connection tracking firewall this value is in /proc/sys/net/netfilter/nf_conntrack_max.
Another limit is of course TCP/IP itself. From a single local IP you can only have on the order of 64K connections to the same remote host and port, because each connection needs a unique combination of (localIP, localPort, remoteIP, remotePort) and there are at most 65536 local ports.
Regarding speeding things up via concurrent connections: it isn't as easy as just using more connections.
It depends on where the bottlenecks are. If your local connection is being fully used, adding more connections will only slow things down. If you are connecting to the same remote server and its connection is fully used, more will only slow it down.
Where you can get a benefit is when your local connection is not fully used and you are connecting to multiple remote hosts.
If you look at web browsers, you will see they have limits on how many connections will be made to the same remote server. They also have limits on how many connections will be made in total.
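The per-host cap that browsers apply can be reproduced in application code with a small concurrency limiter. A sketch (the name `makeLimiter` and the limits are illustrative, not from any particular library):

```javascript
// Returns a function that runs async tasks, but never more than `limit`
// of them at once -- the same idea behind browsers' per-host connection caps.
function makeLimiter(limit) {
  let active = 0;
  const queue = [];
  const next = () => {
    if (active >= limit || queue.length === 0) return;
    active++;
    const { task, resolve, reject } = queue.shift();
    task().then(resolve, reject).finally(() => {
      active--;
      next();
    });
  };
  return (task) =>
    new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      next();
    });
}

// Usage: queue 10 jobs but never run more than 4 concurrently.
const limit4 = makeLimiter(4);
const jobs = Array.from({ length: 10 }, (_, i) =>
  limit4(() => Promise.resolve(i))
);
Promise.all(jobs).then((results) => console.log(results.length)); // 10
```

In the fetch-in-parallel scenario above, each task would be one network request; tuning `limit` is how you find the sweet spot before extra connections start slowing things down.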