My puppet master has modules for many different services (mysql, redis, jenkins, beanstalk, etc.). Each module seems to have its own way of defining which port(s) the service listens on. Is there some unified way to track TCP ports across Puppet modules? I would think there should be a resource type, like file, that enforces global uniqueness, but I don't see anything appropriate in the Puppet docs.
The idea is that if I accidentally configure two different services to listen on the same port, I should get an error about the conflict when compiling the catalog, not when the second service restarts on the node and gets EADDRINUSE.
If all you want to do is catch conflicts, you can create a defined type. For example:
define port (
  $port = $name
) {
}
You can then use it to signal that a port is in use (e.g. port { '8080': }), and Puppet will fail to compile the catalog if the same port is declared twice.
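A minimal sketch of how that might look; the profile::mysql and profile::jenkins class names are just placeholders:

class profile::mysql {
  port { '3306': }   # reserve 3306 for MySQL
}

class profile::jenkins {
  port { '8080': }   # reserve 8080 for Jenkins
}

If a second class also declares port { '8080': }, catalog compilation stops with a duplicate resource declaration error, which is exactly the compile-time conflict check you are after.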
I'm trying to scale my game servers (Node.js). Each instance should be assigned a unique port, the instances are completely separate (no load balancing of any kind), and each instance needs to know which port it was assigned (ideally via an environment variable).
I've tried Docker Swarm, but it has no option to specify a port range, and I couldn't find any way to allocate a port or pass the allocated port to an instance so that it knows which port it's running on, e.g. via an environment variable.
The ideal solution would look like this:
Instance 1: hostIP:1000
Instance 2: hostIP:1001
Instance 3: hostIP:1002
... etc
Now, I've managed to make this work with regular Docker (non-Swarm) by binding to the host network and passing a PORT environment variable, but that way I'd have to manually spin up as many game servers as I need.
My Node app uses process.env.PORT to bind to the host's IP address and port.
Any opinion on what solutions I could use to scale my app?
You could try a couple of different approaches.
Use Docker Compose and an external service for extracting data from docker.sock, as suggested here: How to get docker mapped ports from node.js application?
Use Redis or any other key-value store to hold port information and query it on every new instance launch. The simplest approach is to use the Redis INCR command to get the next free number, but it has some limitations; a sketch of the idea follows.
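A minimal sketch of the Redis approach, assuming the node-redis v4 client; the counter key next_game_port, BASE_PORT, and REDIS_URL are all just illustrative names:

// allocate-port.js
const { createClient } = require('redis');
const net = require('net');

const BASE_PORT = 1000; // first port in the range reserved for game server instances

async function main() {
  const redis = createClient({ url: process.env.REDIS_URL });
  await redis.connect();

  // INCR is atomic, so concurrently starting instances each get a distinct number.
  const offset = await redis.incr('next_game_port');
  const port = BASE_PORT + offset - 1;

  // Stand-in for the real game server: bind to the allocated port.
  const server = net.createServer((socket) => socket.end('hello\n'));
  server.listen(port, () => {
    console.log(`game server instance listening on port ${port}`);
  });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});

One of the limitations mentioned above: the counter only ever grows, so ports freed by stopped instances are not reused without extra bookkeeping.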
Not too sure what you mean there. Could you provide more detail?
Let me be more specific about my question with an example: Let's say that I have a slew of little servers that all start up on different ports using TCPv4. These ports are going to be destination ports, of course. Let's further assume that these little servers don't just start up at boot time like a typical server, but rather they churn dynamically based on demand. They start up when needed, and may shut themselves down for a while, and then later start up again.
Now let's say that on this same computer, we also have lots of client processes making requests to server processes on other computers via TCPv4. When a client makes such a request, it is assigned a source port by the OS.
Let's say for the sake of this example that a client process makes a web request to a RESTful server running on a different computer. Let's also say that the source port assigned by the OS to this request is port 7777.
For this example let's also say that while the above request is still occurring, one of our little servers wants to start up, and it wants to start up on destination port 7777.
My question is will this cause a conflict? I.e., will the server get an error because port 7777 is already in use? Or will everything be fine because these two different kinds of ports live in different address spaces that cannot conflict with each other?
One reason I'm worried about the potential for conflict here is that I've seen web pages that say that "ephemeral source port selection" is typically done in a port number range that begins at a relatively high number. Here is such a web page:
https://www.cymru.com/jtk/misc/ephemeralports.html
A natural assumption for why source ports would begin at high numbers, rather than just starting at 1, is to avoid conflict with the destination ports used by server processes. Though I haven't yet seen anything that explicitly comes out and says that this is the case.
P.S. There is, of course, a potential distinction between what the TCPv4 protocol spec has to say on this issue, and what OSes actually do. E.g., perhaps the protocol is agnostic, but OSes tend to only use a single address space? Or perhaps different OSes treat the issue differently?
Personally, I'm most interested at the moment in what Linux would do.
The TCP specification says that connections are identified by the tuple:
{local addr, local port, remote addr, remote port}
Based on this, there theoretically shouldn't be a conflict between a local port used in an existing connection and trying to bind that same port for a server to listen on, since the listening socket doesn't have a remote address/port (these are represented as wildcards in the spec).
However, most TCP implementations, including the Unix sockets API, are more strict than this. If the local port is already in use by any existing socket, you won't be able to bind it; you'll get the error EADDRINUSE. A special exception is made if the existing sockets are all in TIME_WAIT state and the new socket has the SO_REUSEADDR socket option; this is used to allow a server to restart while the sockets left over from a previous process are still waiting to time out.
For this reason, the port range is generally divided into ranges with different uses. When a socket doesn't bind a local port (either because it just called connect() without calling bind(), or because it specified port 0 in bind()), the port is chosen from the ephemeral range, which usually consists of very high-numbered ports. Servers, on the other hand, are expected to bind to low-numbered ports outside that range. If network applications follow this convention, you should not run into conflicts.
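A self-contained sketch of that stricter behaviour, written in Node.js only because it makes a short runnable demo (the same applies when calling bind() directly from C). On Linux the final listen typically fails with EADDRINUSE:

const net = require('net');

// A throwaway local server, just so there is something to connect to.
const target = net.createServer((sock) => sock.end());
target.listen(0, '127.0.0.1', () => {
  const targetPort = target.address().port;

  // An outgoing connection: the OS assigns an ephemeral local (source) port.
  const client = net.connect(targetPort, '127.0.0.1', () => {
    const ephemeralPort = client.localPort;
    console.log('OS assigned source port', ephemeralPort);

    // Now try to start a listener on that same local port while the
    // outgoing connection is still open.
    const probe = net.createServer();
    probe.on('error', (err) => {
      console.log('listen failed:', err.code); // usually EADDRINUSE
      client.destroy();
      target.close();
    });
    probe.listen(ephemeralPort, '127.0.0.1', () => {
      console.log('listen succeeded on', ephemeralPort);
      probe.close();
      client.destroy();
      target.close();
    });
  });
});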
I keep getting errors related to conflicting ports. When I set a breakpoint inside Program.cs at the line containing
ServiceRuntime.RegisterServiceAsync
It actually stops there more than once per service in the Service Fabric project, which is obviously why it's trying to bind to the same port more than once! Why is it doing this all of a sudden?!
HttpListenerException: Failed to listen on prefix 'https://+:446/' because it conflicts with an existing registration on the machine.
The problem is that the HttpListener is trying to bind to a port that is already in use. The cause can be one of the following:
Another process is already using the port. Try netstat -ano to find out the process that is using the port and then tasklist /fi "pid eq <pid of process>" to find the process name.
Maybe you are starting your development cluster as a multi-node cluster. That way several nodes on one machine are trying to use the same port.
Maybe you have a frontend and an API that you want to run on the same port; then you have to use the path-based binding capabilities of http.sys (if you are using the WebListener).
If this fails, could you please post a snippet of the ServiceManifest.xml?
There should be a line defining your endpoint <Endpoint Protocol="https" Type="Input" Port="446" />
In your application manifest, you define how many instances of your service you want. The common mistake is to set this number to more than 1: it will fail because your local cluster shows 5 nodes, but they all run on the same machine, and the machine's port can only be used by the first instance that starts.
Set the number of instances to 1 and you won't hit the main entry point in Program.cs more than once.
Make it configurable from ApplicationParameters, so you can define this number per environment.
You say that you didn't have to set the instance count before; that could be because you have the option to use publish profiles, which can differ between Cloud and Local deployments. The profile will point to the corresponding application parameters file, in which you can set the instance count to 1 for local deployments.
Perhaps something happened to your publish profiles?
ApplicationParameters/Local.1Node.xml:
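A sketch of what that file typically looks like; the application name and parameter name below are placeholders, so use whatever your project already defines:

<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApplication">
  <Parameters>
    <!-- Keep the local instance count at 1 so only one process tries to bind the port. -->
    <Parameter Name="MyService_InstanceCount" Value="1" />
  </Parameters>
</Application>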
My Azure app hosts multiple ZeroMQ sockets which bind to several TCP ports.
It worked fine when I developed it locally, but the ports weren't accessible once uploaded to Azure.
Unfortunately, after adding the ports to the Azure ServiceDefinition (to allow access once uploaded to Azure), every time I start the app locally it complains that the ports are already in use. I guess it has to do with the (debug/local) load balancer mirroring the Azure behavior.
Did I do something wrong, or is this expected behavior? If the latter, how does one handle this kind of situation? I guess I could use different ports for the sockets and specify them as private ports in the endpoints, but that feels more like a workaround.
Thanks & Regards
The endpoints you add (in your case tcp) are exposed externally with the port number you specify. You can forcibly map these endpoints to specific ports, or you can let them be assigned dynamically, which requires you to then ask the RoleEnvironment for the assigned internal-use port.
If, for example, you created an Input endpoint called "ZeroMQ," you'd discover the port to use with something like this, whether the ports were forcibly mapped or you simply let them get dynamically mapped:
var zeromqPort = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["ZeroMQ"].IPEndpoint.Port;
Try to use the ports the environment reports you should use. I think they are different from the outside ports when using the emulator. The ports can be retrieved from the RoleEnvironment.
Are you running more than one instance of the role? In the compute emulator, the internal endpoints for different role instances will end up being the same port on different IP addresses. If you try to just open the port without listening on a specific IP address, you'll probably end up with a conflict between multiple instances. (E.g., they're all trying to just open port 5555, instead of one opening 127.0.0.2:5555 and one opening 127.0.0.3:5555.)
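Putting those two points together, a sketch of binding each instance's socket to the specific address and port the environment assigned. This assumes the NetMQ 4.x binding and uses a ResponseSocket purely as an example socket type; adapt it to whatever ZeroMQ binding and socket your app actually uses:

using Microsoft.WindowsAzure.ServiceRuntime;
using NetMQ.Sockets;

// Look up the endpoint declared as "ZeroMQ" in the service definition.
var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["ZeroMQ"].IPEndpoint;

// Bind to the instance-specific address (e.g. 127.0.0.2:5555 in the emulator)
// rather than to tcp://*:5555, so multiple local instances don't collide.
using (var socket = new ResponseSocket())
{
    socket.Bind(string.Format("tcp://{0}:{1}", endpoint.Address, endpoint.Port));
    // handle requests on this socket here
}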
We've got a custom application that needs to serve requests on its own port number. We really don't care what the number is, although we'll stick to that port after we decide. How do I select a number that is least likely to conflict with other applications or services running on the user's system?
Are there any rules or standards we should follow?
A clarification: once we pick a port, we need to stick with it. Can't use a dynamic one. We're building a custom SFTP server and we'll have to tell our customers what port it's running on.
For a static application, consider checking /etc/services to find a port that will not collide with anything else you are using and isn't in common use elsewhere.
$ tail /etc/services
nimspooler 48001/udp # Nimbus Spooler
nimhub 48002/tcp # Nimbus Hub
nimhub 48002/udp # Nimbus Hub
nimgtw 48003/tcp # Nimbus Gateway
nimgtw 48003/udp # Nimbus Gateway
com-bardac-dw 48556/tcp # com-bardac-dw
com-bardac-dw 48556/udp # com-bardac-dw
iqobject 48619/tcp # iqobject
iqobject 48619/udp # iqobject
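If you already have a candidate number in mind, you can also grep for it directly (48620 here is just an example number):

$ grep -w 48620 /etc/services

No output means the port isn't registered in the local services file, though that's no guarantee some unregistered application isn't already using it.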
If you can't predict the exact kind of environment your application is going to run in, just don't bother with this. Pick any number over 1024 and make it configurable so the user can change it in case of a conflict with another service or application.
Of course, you should still avoid very common ports like 8080 (alternative HTTP), 3128 (proxies like Squid), 1666 (Perforce), etc. You can check a comprehensive list of known ports here, or take a look at /etc/services.
If you don't care about the port number, and don't mind that it changes every time your program is run, simply don't bind the port before you listen on it (or bind with port 0, if you want to bind a specific IP address). In both cases, you're telling the OS to pick a free port for you.
After you begin listening, use getsockname to find out which port was picked. You can write it to a file, display it on the screen, have a child inherit it via fork, etc.
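A small sketch of that pattern, shown in Node.js only because it keeps the example short (server.address() is Node's wrapper around getsockname):

const net = require('net');

const server = net.createServer((socket) => socket.end());

// Listening on port 0 tells the OS to pick any free port.
server.listen(0, () => {
  // address() reports the port the OS actually chose (via getsockname).
  const { port } = server.address();
  console.log(`OS picked port ${port}`);
});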
If you want a unique port number for your application, you need to request an assignment from IANA, who maintain the Service Name and Transport Protocol Port Number Registry for the IETF. /etc/services and other secondary records are populated from the IANA registry.
Please do not simply pick a number and ship your application (as suggested in another answer), because sooner or later IANA will assign the port you're squatting on to someone else's registration request, which can lead to conflicts for both your application and the unaware assignee.