Scaling nodejs app while assigning unique ports - node.js

I'm trying to scale my game servers (Node.js). Instances should have unique ports assigned to them, should be completely separate (no load balancing of any kind), and should be aware of the port assigned to them (ideally via an environment variable?).
I've tried using Docker Swarm, but it has no option to specify a port range, and I couldn't find any way to allocate a port or pass the allocated port to the instance so it's aware of the port it's running on, e.g. via an environment variable.
The ideal solution would look like this:
Instance 1: hostIP:1000
Instance 2: hostIP:1001
Instance 3: hostIP:1002
... etc
Now, I've managed to make this work with regular Docker (non-Swarm) by binding to the host network and passing an env variable PORT, but this way I'd have to manually spin up as many game servers as I need.
My Node app uses process.env.PORT to bind to the host's IP address and port.
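As a minimal sketch of that pattern (the net module usage and the 1000 fallback are just for illustration, not the actual game code):

const net = require('net');

// PORT is injected from outside (e.g. docker run -e PORT=1000).
const port = Number(process.env.PORT) || 1000;

const server = net.createServer((socket) => {
  // ...game protocol handling goes here
});

server.listen(port, () => {
  console.log(`game server listening on ${port}`);
});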
Any opinion on what solutions I could use to scale my app?

You could try a few different approaches:
Use Docker Compose and an external service for extracting data from docker.sock, as suggested here: How to get docker mapped ports from node.js application?
Use Redis or any key-value storage service to store port information and fetch it on every new instance launch. The simplest solution is to use the Redis INCR command to get the next free number, but it has some limitations.
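A rough sketch of the INCR approach (the node-redis client, the key name, and the base port of 1000 are all assumptions for illustration):

const { createClient } = require('redis');

const BASE_PORT = 1000; // instance 1 -> 1000, instance 2 -> 1001, ...

async function allocatePort() {
  const client = createClient({ url: process.env.REDIS_URL });
  await client.connect();
  // INCR is atomic, so concurrent launches never get the same number.
  const seq = await client.incr('game-server:port-seq');
  await client.quit();
  return BASE_PORT + (seq - 1);
}

allocatePort().then((port) => {
  // hand the port to the server startup code, e.g. via process.env.PORT
  process.env.PORT = String(port);
});

One such limitation: the counter only grows, so ports freed by stopped instances are never reused unless you track them separately.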

Not too sure what you mean there, could you provide more detail?

Related

Dynamic Ports with Docker and Oracle CQN

I have a Dockerized node app that creates a CQN subscription in an Oracle 11g database on another machine. It gives Oracle a callback and listens on port 3033... all is well.
I see my subscription on the database:
SELECT REGID, REGFLAGS, CALLBACK FROM USER_CHANGE_NOTIFICATION_REGS;
When the subscription is registered, it's assigned a random available port, in this case 18837. Unfortunately, my Docker container is not listening on 18837, so Oracle cannot reach my container. No problem, I'll just manually specify which port to use and tell Docker to start on port 12345.
await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM kpi_measurement", // the table to watch
  timeout: 0,
  port: 12345
});
Unfortunately, this is met with an "ORA-24911: Cannot start listener thread at specified port." I've even tried specifying ports that I know were previously used, like 18837. Any port I use bombs out. Also, I'm not sure I want to start hardcoding ports on the database side, since I'm not guaranteed they'll be available in production.
I suppose one solution would be to expose my Docker container to a range of ports, but I've seen this thing choose a wide range.
Another solution would be to break my container into two parts: 1) the CQN subscription registration piece, and 2) a helper that runs a SELECT to get the dynamic port and then starts the Docker callback code with that dynamic port. That's really frustrating, considering this works nicely outside of Docker.
It works outside of Docker because you're being more liberal with your host's ports ("a wide range") than you are with the container image.
If you're willing to let your host present that range of ports, there's little difference in permitting a container running on that host to accept the same range.
One way to effect this for the container is --net=host, which directly presents the host's networking to the container. You don't need to --publish ports, and the container can then use the port defined by Oracle's service.
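For example (the image name my-cqn-app is hypothetical):

docker run --net=host my-cqn-app

With host networking there is no port mapping at all, so whichever port Oracle assigns for the callback is reachable directly on the host.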

App service fabric not starting

I keep getting errors related to conflicting ports. When I set a breakpoint inside Program.cs at the line containing
ServiceRuntime.RegisterServiceAsync
It actually stops there more than once per service in the Service Fabric project, which is obviously why it's trying to bind to the same port more than once! Why is it doing this all of a sudden?!
HttpListenerException: Failed to listen on prefix 'https://+:446/' because it conflicts with an existing registration on the machine.
The problem is that the HttpListener is trying to bind to a port that is already in use. The cause can be one of the following:
Another process is already using the port. Try netstat -ano to find out the process that is using the port and then tasklist /fi "pid eq <pid of process>" to find the process name.
Maybe you are starting your development cluster as a multi-node instance. That way, several nodes on one machine try to access the same port.
Maybe you have a frontend and an API that you want to run on the same port; then you have to use the path-based binding capabilities of http.sys (if you are using the WebListener).
If this fails, could you please post a snippet of the ServiceManifest.xml?
There should be a line defining your endpoint <Endpoint Protocol="https" Type="Input" Port="446" />
In your application manifest, you define how many instances of your service you want. The common mistake is to set this number to more than 1: it will fail because your local cluster shows five nodes, but they all run on the same machine, and the machine's port can be used only by the first instance started.
Set the number of instances to 1 and you won't hit the main entry point in Program.cs more than once.
Make it configurable from ApplicationParameters, so you can define this number per environment.
You say you didn't have to set the instance count before; that could be because you have the option to use publish profiles that differ between Cloud and Local deployments. The profile points to the corresponding application parameters file, in which you can set the instance count to 1 for local deployments.
Perhaps something happened to your publish profiles?
ApplicationParameters/Local.1Node.xml:
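A sketch of what that file might contain (the application and parameter names are placeholders for whatever your project defines):

<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApplication">
  <Parameters>
    <!-- Run a single instance on the local one-node cluster -->
    <Parameter Name="MyService_InstanceCount" Value="1" />
  </Parameters>
</Application>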

Docker app and database on different containers

I have a Node app running in one Docker container, a Mongo database in another, and a Redis database in a third. In development I want to work with these three containers (not pollute my system with database installations), but in production I want the databases installed locally and the app in Docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the "curl" command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile, using iptables? I want my app's container to be able to access MongoDB via "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which will make the container use your host's network instead of the default bridge. Note that this can raise security issues (the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT.
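For example, with socat (a port forwarder in the same spirit as rinetd; 172.17.0.1 is the usual docker0 address, but check yours):

socat TCP-LISTEN:27017,fork TCP:172.17.0.1:27017

Run inside the container, this makes localhost:27017 reach the MongoDB instance on the host.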
As stated in the comments, create a script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose this as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then, inside the container, use the env var named DOCKER_HOST to connect to your services.
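In the app, that might look like this (a sketch assuming the official mongodb driver; falling back to localhost keeps the production setup working unchanged):

// DOCKER_HOST is the host address passed in via docker run -e above.
const { MongoClient } = require('mongodb');
const host = process.env.DOCKER_HOST || 'localhost';
const mongo = new MongoClient(`mongodb://${host}:27017`);

The same pattern works for the Redis connection.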

Configuring hostname for memcached on EC2 instances

I'm using Memcached on each of my EC2 web server instances. I am not sure how to configure the various hostnames for the memcache nodes at the server level.
Consider the following example:
<?php
$mc = new Memcached();
$mc->addServer('node1', 11211);
$mc->addServer('node2', 11211);
$mc->addServer('node3', 11211);
How are node1, node2, node3 configured?
I've read about a few setups that configure the instance with a hostname and update /etc/hosts with these entries. However, I'm not familiar enough with configuring such things.
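For example, the /etc/hosts entries might look like this (the addresses are hypothetical):

10.0.1.12  node1
10.0.1.13  node2
10.0.1.14  node3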
I'm looking for a solution that scales, handles adding and removing instances, and is automatic.
The difficulty with this is keeping an up-to-date list of hosts within your application when hosts can be added and removed. You may be able to use some sort of proxy, which would help by giving you a constant endpoint for your application.
If you can't use a proxy, I have a couple ideas.
If the list of hosts is static, assign an Elastic IP to each memcached host. Within an EC2 region, this will resolve to the local IP address of the host it's associated with. With this idea, you have a constant list of hosts that your application can use.
If you are going to add/remove hosts on a regular basis, you need to be able to dynamically update the list of hosts your application uses. You can query the EC2 API for instances with a certain tag, then get the IP addresses of all of those instances. Cache the list in memory or on disk and load it with your application. If you run this every minute, any host changes should propagate within a minute, unless the EC2 API is slow to update.
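A sketch of that tag-query idea, here in Node.js with the AWS SDK v3 (the role=memcached tag is an assumption; the PHP SDK has an equivalent DescribeInstances call):

const { EC2Client, DescribeInstancesCommand } = require('@aws-sdk/client-ec2');

// Return the private IPs of all running instances tagged role=memcached.
async function memcachedHosts() {
  const ec2 = new EC2Client({});
  const out = await ec2.send(new DescribeInstancesCommand({
    Filters: [
      { Name: 'tag:role', Values: ['memcached'] },
      { Name: 'instance-state-name', Values: ['running'] },
    ],
  }));
  return (out.Reservations || []).flatMap((r) =>
    r.Instances.map((i) => i.PrivateIpAddress)
  );
}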

Azure InputEndpoints block my tcp ports

My Azure app hosts multiple ZeroMQ Sockets which bind to several tcp ports.
Everything worked fine when I developed it locally, but the sockets weren't accessible once uploaded to Azure.
Unfortunately, after adding the ports to the Azure ServiceDefinition (to allow access once uploaded to Azure), every time I start the app locally it complains about the ports already being in use. I guess it has to do with the (debug/local) load balancer mirroring the Azure behavior.
Did I do something wrong or is this expected behavior? If the latter is true, how does one handle this kind of situation? I guess I could use different ports for the sockets and specify them as private ports in the endpoints but that feels more like a workaround.
Thanks & Regards
The endpoints you add (in your case tcp) are exposed externally with the port number you specify. You can forcibly map these endpoints to specific ports, or you can let them be assigned dynamically, which requires you to then ask the RoleEnvironment for the assigned internal-use port.
If, for example, you created an Input endpoint called "ZeroMQ," you'd discover the port to use with something like this, whether the ports were forcibly mapped or you simply let them get dynamically mapped:
var zeromqPort = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["ZeroMQ"].IPEndpoint.Port;
Try to use the ports the environment reports you should use; I think they differ from the outside ports when using the emulator. The ports can be retrieved from the RoleEnvironment.
Are you running more than one instance of the role? In the compute emulator, the internal endpoints for different role instances will end up being the same port on different IP addresses. If you try to just open the port without listening on a specific IP address, you'll probably end up with a conflict between multiple instances. (E.g., they're all trying to just open port 5555, instead of one opening 127.0.0.2:5555 and another opening 127.0.0.3:5555.)
