Dynamic Ports with Docker and Oracle CQN - node.js

I have a Dockerized node app that creates a CQN subscription in an Oracle 11g database on another machine. It gives Oracle a callback and listens on port 3033... all is well.
I see my subscription on the database:
SELECT REGID, REGFLAGS, CALLBACK FROM USER_CHANGE_NOTIFICATION_REGS;
When the subscription is registered, it's assigned a random available port, in this case 18837. Unfortunately, my Docker container is not listening on 18837, so Oracle cannot reach my container. No problem, I'll just manually specify which port to use and tell Docker to publish port 12345.
await conn.subscribe('mysub', {
  callback: myCallback,
  sql: "SELECT * FROM kpi_measurement", // the table to watch
  timeout: 0,    // never let the registration expire
  port: 12345    // ask Oracle to call back on this port
});
Unfortunately, this is met with "ORA-24911: Cannot start listener thread at specified port." I've even tried specifying ports that I know were previously used, like 18837. Any port I use bombs out. Also, I'm not sure I want to start hardcoding ports on the database side, since I'm not guaranteed they'll be available in production.
I suppose one solution would be to expose my Docker container to a range of ports, but I've seen this thing choose a wide range.
Another solution would be to break my container into two parts: 1) the CQN subscription registration piece, and 2) a helper that runs a SELECT to get the dynamic port and then starts the Docker callback code with that port. That's really frustrating considering this works nicely outside of Docker.

It works outside of Docker because you're being more liberal with your host's ports ("a wide range") than you are with the container image.
If you're willing to let your host present that range of ports, there's little difference in permitting a container running on that host to accept the same range.
One way to effect this is to run the container with --net=host, which presents the host's networking directly to the container. You don't need to --publish ports, and the container can then listen on whatever port Oracle's service picks.
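For example (a sketch; the image and container names here are placeholders):
docker run --net=host --name cqn-listener my-cqn-app    # no -p/--publish needed; the container shares the host's ports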

Related

Google Cloud Firewall Exposing Port Docker

I managed to successfully deploy a docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / Express app that just res.json()s "Hi there!" on the root path. It is listening on port 3000.
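Roughly this (a minimal sketch, the real code is not much bigger):
const express = require('express');
const app = express();

// respond with JSON on the root path
app.get('/', (req, res) => res.json('Hi there!'));

app.listen(3000, () => console.log('listening on port 3000'));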
I think the deploy process was this:
Build Docker image from Node.js / express src.
Run container on local command line, correctly expose ports. It works locally.
Tagged the image with the correct project ID / zone.
Pushed to the VM. I think I pushed the image, rather than the container. Is this a problem?
SSH into the VM. Run docker ps and see the running container with the correct image tag.
Used command-line curl (I am using a zsh terminal) as well as the browser to check network requests. Getting a connection refused error (a rough sketch of these commands follows this list).
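For steps 2 and 6, this is roughly what I run (the image name and external IP are placeholders):
docker run -d -p 3000:3000 my-express-app    # map container port 3000 to host port 3000
curl http://localhost:3000/                  # works on my machine
curl http://VM_EXTERNAL_IP:3000/             # connection refused against the VM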
I'm a beginner, but the Google firewall settings appear to be open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I am getting a connection refused error whenever I try to contact the IP address, either with my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my docker container is running in the VM (and the source code works on my machine).
EDIT:
Updated Firewall Rules with Port 3000 Ingress:
You need a firewall rule that permits traffic to tcp:3000.
Preferably from just your own IP (Google "what's my IP?" and use that), but for now you can (temporarily) allow any IP with 0.0.0.0/0.
Firewall rules can also be applied to only the VM running your container, but I wouldn't worry about that initially.
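With the gcloud CLI, a rule along these lines would do it (the rule name is arbitrary; swap 0.0.0.0/0 for your own IP once it works):
gcloud compute firewall-rules create allow-node-3000 --direction=INGRESS --allow=tcp:3000 --source-ranges=0.0.0.0/0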

AWS - Security Groups not opening ports

I created a Linux t3a.nano EC2 instance on AWS. I haven't done anything on the instance other than start it and connect to it through SSH.
I would like to open two ports, 80 and 3000. For that, I created a Security Group and added both ports to the inbound rules.
Based on the AWS documentation, that is all you need to do in order to open the ports, but if I connect to the instance and list the open ports, none of the ports in my Security Group are listening, only 22, which is open by default.
I am running this command to list the ports:
sudo netstat -antp | fgrep LISTEN
Other Steps I tried:
Checked my ACL (I will attach a picture of the configuration below); I didn't change anything and it looks fine.
Checked that the instance is using the correct security group.
Stopped and started the instance.
Created an Elastic IP and associated it to the instance to have a permanent public IP address.
Any suggestions about which steps I am missing?
You are checking the ports from inside the instance. Security Groups (SGs) work outside of your instance.
You can imagine them as a bubble around your instance. Consequently, the instance is not aware of their existence. This can be visualized as in the image below, where the SG is a barrier outside the instance. Only if the SG allows traffic in can your instance further limit it with a regular software-level firewall.
To open/block ports on the instance itself you have to use a regular firewall such as ufw. By default, all ports on the instance are open, at least when using Amazon Linux 2 or Ubuntu.
Therefore, with your setup, inbound traffic for ports 22, 3000 and 80 will be allowed to the instance.
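For completeness, if you did want to restrict ports on the instance itself (on Ubuntu, for example), that would look something like this with ufw:
sudo ufw allow 22/tcp      # keep SSH reachable before enabling the firewall
sudo ufw allow 80/tcp
sudo ufw allow 3000/tcp
sudo ufw enable
sudo ufw status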
Update - Response
I got to this point thanks to the comments above!
I wanted to open port 3000 to host a web service, so I did all the steps in my original question; the step that I was missing was to actually run a server listening on port 3000. After I ran node, I was able to see the port open internally and was able to make requests to that port.
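Even a bare-bones listener is enough to make the port show up (a minimal sketch, not my actual app):
const http = require('http');

// anything bound to port 3000 will now appear in netstat and accept connections
http.createServer((req, res) => res.end('ok')).listen(3000);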
The Security Group remains the same, but now if I list the ports this is what I get: sudo netstat -antp | fgrep LISTEN

Scaling nodejs app while assigning unique ports

I'm trying to scale my game servers (Node.js), where each instance should have a unique port assigned to it, instances are separate (no load balancing of any kind), and each instance is aware of the port assigned to it (ideally via an env variable?).
I've tried using Docker Swarm, but it has no option to specify a port range, and I couldn't find any way to allocate a port or to pass the allocated port to the instance so it's aware of the port it's running on, e.g. via an env variable.
Ideal solution would look like:
Instance 1: hostIP:1000
Instance 2: hostIP:1001
Instance 3: hostIP:1002
... etc
Now, I've managed to make this work using regular Docker (non-swarm) by binding to the host network and passing an env variable PORT, but this way I'd have to manually spin up as many game servers as I'd need.
My Node app uses process.env.PORT to bind to the host's IP address:port.
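Concretely, that looks something like one docker run per instance (the image name is just an example):
docker run -d --net=host -e PORT=1000 my-game-server
docker run -d --net=host -e PORT=1001 my-game-server
docker run -d --net=host -e PORT=1002 my-game-server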
Any opinion on what solutions I could use to scale my app?
You could try different approaches.
Use Docker Compose and an external service for extracting data from docker.sock, as suggested here: How to get docker mapped ports from node.js application?
Use Redis or any key-value storage service to store port information and read it on every new instance launch. The simplest solution is to use the Redis INCR command to get the next free number, but it has some limitations.
Not too sure what you mean there? Could you provide more detail?
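To expand on the Redis idea: on startup each instance asks Redis for the next counter value and derives its port from it. A rough sketch with the node-redis client (the key name, Redis URL, and base port are arbitrary):
const { createClient } = require('redis');

async function allocatePort() {
  const client = createClient({ url: 'redis://redis:6379' });
  await client.connect();

  // INCR is atomic, so two instances starting at once never get the same number
  const instanceNumber = await client.incr('game-server:instances');
  await client.quit();

  return 1000 + instanceNumber; // instance 1 -> 1001, instance 2 -> 1002, ...
}

allocatePort().then((port) => {
  // start the game server on the allocated port
  console.log(`game server listening on ${port}`);
});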

Docker app and database on different containers

I have a Node app running in one Docker container, a Mongo database in another, and a Redis database in a third. In development I want to work with these three containers (not pollute my system with database installations), but in production I want the databases installed locally and the app in Docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the curl command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want the container of my app to be able to access MongoDB using "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which will make the container use your host network instead of the default bridge. You should note that this can raise some security issues (because the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT
As stated in the comments, create some script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose this as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect your services.
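Inside the Node app, that might look like this (a sketch using the official mongodb driver; the database name is a placeholder):
const { MongoClient } = require('mongodb');

// DOCKER_HOST is the docker0 address passed in via: docker run -e DOCKER_HOST=$(getip) ...
const host = process.env.DOCKER_HOST || 'localhost';
const client = new MongoClient(`mongodb://${host}:27017`);

async function main() {
  await client.connect();
  const db = client.db('mydb');      // placeholder database name
  await db.command({ ping: 1 });     // simple connectivity check
  console.log(`connected to MongoDB at ${host}:27017`);
}

main();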

Azure InputEndpoints block my tcp ports

My Azure app hosts multiple ZeroMQ sockets which bind to several TCP ports.
It worked fine when I developed it locally, but they weren't accessible once uploaded to Azure.
Unfortunately, after adding the ports to the Azure ServiceDefinition (to allow access once uploaded to Azure), every time I start the app locally it complains about the ports already being in use. I guess it has to do with the (debug/local) load balancer mirroring the Azure behavior.
Did I do something wrong, or is this expected behavior? If the latter is true, how does one handle this kind of situation? I guess I could use different ports for the sockets and specify them as private ports in the endpoints, but that feels more like a workaround.
Thanks & Regards
The endpoints you add (in your case tcp) are exposed externally with the port number you specify. You can forcibly map these endpoints to specific ports, or you can let them be assigned dynamically, which requires you to then ask the RoleEnvironment for the assigned internal-use port.
If, for example, you created an Input endpoint called "ZeroMQ," you'd discover the port to use with something like this, whether the ports were forcibly mapped or you simply let them get dynamically mapped:
var zeromqPort = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["ZeroMQ"].IPEndpoint.Port;
Try to use the ports the environment reports you should use. I think they are different from the outside ports when using the emulator. The ports can be retrieved from the RoleEnvironment.
Are you running more than one instance of the role? In the compute emulator, the internal endpoints for different role instances will end up being the same port on different IP addresses. If you try to just open the port without listening on a specific IP address, you'll probably end up with a conflict between multiple instances. (E.g., they're all trying to just open port 5555, instead of one opening 127.0.0.2:5555 and one opening 127.0.0.3:5555.)