I have a Node.js WebSocket server listening on port 443, running in a Docker container (let's say ws_container) that is connected to both the host network and an internal network, let's say internal_net.
When the WebSocket server running in ws_container establishes a connection with a WebSocket client, I want to spawn a new container (let's say an Ubuntu 18.04 container) connected to internal_net from ws_container.
I came across this question, Is it possible to start a stopped container from another container, which states that the best way to accomplish this is to mount the Docker socket into the container (in my case ws_container).
Are there any better ways of solving the problem?
It depends on where you want to spawn the container. Containers run wherever a Docker daemon runs. To connect to the daemon you can use its Unix socket if the daemon is on the same machine, or a TCP connection to spawn a container on any daemon you have access to.
If you use some kind of orchestration like Swarm or Kubernetes, you can use management tools such as Portainer, OpenShift, or others to do that.
However, in most cases mounting the socket is the easiest and a sufficient option.
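A minimal sketch of the socket approach (the image name my-ws-image and the host daemon-host are placeholders, not from the question):

# start ws_container with the host's Docker socket mounted into it
docker run -d --name ws_container \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-ws-image

# inside ws_container (with a docker CLI installed), the CLI talks to the
# mounted socket, so this spawns the worker on the host's daemon:
docker run -d --network internal_net ubuntu:18.04 sleep infinity

# alternatively, talk to a remote daemon over TCP instead of the socket:
docker -H tcp://daemon-host:2375 ps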
I managed to successfully deploy a Docker image to a VM instance. How can I send network requests to it?
The code is a simple Node.js / Express app that just responds with res.json("Hi there!") on the root path. It is listening on port 3000.
I think the deploy process was this:
Built the Docker image from the Node.js / Express source.
Ran the container on the local command line, with the ports correctly exposed. It works locally.
Tagged the image with the correct project ID / zone.
Pushed to the VM. I think I pushed the image rather than the container; is this a problem?
SSHed into the VM. Ran docker ps and saw the running container with the correct image tag.
Used command-line curl (I am using a zsh terminal) as well as a browser to check network requests. I get a connection refused error.
As far as I can tell (I'm a beginner), the Google firewall settings are open: I have allowed ingress on all ports.
I will also want to allow egress at some point, but for now my problem is that I get a connection refused error whenever I try to contact the IP address, either from my web browser or with curl from the command line.
It would seem that the issue is most likely with the firewalls, and I have confirmed that my docker container is running in the VM (and the source code works on my machine).
EDIT:
Updated Firewall Rules with Port 3000 Ingress:
You need a firewall rule that permits traffic to tcp:3000.
Preferably permit traffic from just your own IP (Google "what's my IP?" and use that), but for now you can (temporarily) allow any IP, 0.0.0.0/0.
Firewall rules can also be scoped to just the VM running your container, but I wouldn't worry about that initially.
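A sketch of such a rule with gcloud (the rule name allow-node-3000 is just a placeholder):

# allow ingress on tcp:3000 from anywhere; tighten --source-ranges later
gcloud compute firewall-rules create allow-node-3000 \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:3000 \
  --source-ranges=0.0.0.0/0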
I have a Node.js application and I use the following guide to debug it, which works great:
https://codeburst.io/an-easy-way-to-debug-node-js-apps-in-cloud-foundry-22f559d44516
Now I have a slightly more complex scenario in which one application spawns another Node application, and it's the spawned application I want to debug. In cf top I see the PID of the spawned app, but my question is whether there is a way to debug it as well. Both apps run in the same container.
I was able to debug the master app but not the spawned app... any idea how?
I was able to SSH into the main app; we are using cf version 2.98.
I don't think there's anything CloudFoundry-specific that needs to be done to make this work. The process described at the link you provided shows how you can launch an application with the node --inspect flag, create an SSH tunnel to the port where node is listening, and then attach to it remotely over the SSH tunnel.
If you're spawning subprocesses, I would suggest that you make sure those subprocesses, assuming they're also running Node, have the --inspect=<port> flag passed to them. In this case, you will need to set a port because the default port 9229 used by --inspect is already taken by your main process.
I don't know if your subprocesses are short- or long-lived, but you may need to log the inspect port assigned to each of them somewhere so that you know which port to connect to when you want to inspect a specific subprocess.
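As a sketch (the child script name child.js, the port 9230, and the app name my-app are all placeholders): once a child is started with its own inspector port, you can tunnel to it the same way the guide does for the main process:

# the parent spawns the child with a non-default inspector port
# (9229 is already taken by the parent process)
node --inspect=9230 child.js

# from your workstation, forward that port through cf ssh
cf ssh -N -T -L 9230:localhost:9230 my-app

Then point your debugger (e.g. Chrome DevTools) at localhost:9230.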
Hope that helps!
I have a node app running in one Docker container, a mongo database in another, and a redis database in a third. In development I want to work with these three containers (and not pollute my system with database installations), but in production I want the databases installed locally and the app in Docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the curl command I found out they're connected and I can access them using their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want my app's container to be able to access MongoDB as "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which will make the container use your host's network instead of the default bridge. Note that this can raise some security issues (because the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This will basically create a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT (a sketch follows below).
As stated in the comments, create some script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose this as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect your services.
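A rough sketch of both options using socat for the relay (assuming the mongo container publishes 27017 to the host, DOCKER_HOST is the env var passed above, and my-app-image is a placeholder):

# option 1 for comparison: share the host's network namespace entirely
docker run --net=host my-app-image

# option 2: publish mongo's port on the host so it's reachable via docker0
docker run -d -p 27017:27017 mongo

# then, inside the app container: listen on localhost:27017 and relay to the host
socat TCP-LISTEN:27017,fork,reuseaddr TCP:${DOCKER_HOST}:27017 &

With the relay running, the app's unchanged "localhost:27017" connection string resolves inside its own container and gets forwarded to the mongo container.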
I currently have an EC2 instance up and running with Amazon Linux, and I transferred my project (which contains both React and Node.js / Express) onto the EC2 instance via SFTP using FileZilla.
For the EC2 instance's security groups, I opened port 3000 (protocol: tcp, source: 0.0.0.0/0), which is the port my Express app is configured to use as well.
So I SSHed into the EC2 instance, ran the project's Express server, and can see it listening on port 3000 in the terminal. But once I hit the public DNS at ec2...us-west-1.compute.amazonaws.com:3000, it says "The site can't be reached - ec2...us-west-1.compute.amazonaws.com took too long to respond."
What could be the issue and how can I go about from here to connect to it?
Thank you in advance and will upvote/accept answer.
Just check if your Node.js server is running on the EC2 instance.
Debugging:
First, check that it is working properly locally.
Check for the Node.js server on the EC2 instance:
sudo netstat -tulpn | grep :3000
Try running the server with the --verbose flag, i.e. npm run server --verbose;
it will show the server's logs while starting.
Check the security group settings for the EC2 instance.
Try to connect with ip:port, i.e. 35.2..:3000.
If it is still not working and the response is taking a long time,
some other service may be running on the same port.
Try this on the EC2 instance:
sudo killall -9 node
npm run server
Then connect using the IP (54.4.5.*:3000) or the public DNS (http://ec2...us-west-1.compute.amazonaws.com:3000).
Hope it helps :)
You may be encountering an issue with outbound traffic. You may be inside a company's network, either physically connected or VPN'd in. In some instances, your VPN isn't set up to handle split traffic, so you must abide by your company's outbound restrictions.
In a situation like this, you would want to use a proxy to access your site. When locking down your security group, make sure you use your proxy's public IP (not your company's).
Usually, when we have connectivity issues, it is something basic or a firewall. I assume you have checked whether a firewall is running on either end, e.g. with iptables -L -n. Also, a protocol analyzer like wireshark or tcpdump would tell you whether packets to port 3000 are arriving at all.
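For example, on the instance:

# list the current firewall rules
sudo iptables -L -n

# watch for packets arriving on port 3000 while you curl from outside;
# if nothing shows up, the traffic is blocked before it reaches the host
sudo tcpdump -i any -n port 3000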
I am using an official Couchbase Docker container.
There are some important ports the Couchbase server uses.
Those are exposed through the container as random ports on the host.
Is there a way to supply those host ports when obtaining a Couchbase server connection? Something akin to how the server is configured pre-install, but for the client.
I am using their latest Node.js SDK but don't see a suitable options hash, e.g. on the Cluster.
I could fall back on a 1:1 mapping (container:host) in the Docker run if all else fails.
Docker publishes all exposed ports to random ports on the host interface if the container is started with -P. The ports are chosen from the range 32768 to 61000.
Alternatively, each exposed port can be mapped to an explicit host port with -p hostPort:containerPort.
This approach is independent of the client SDK.
It's recommended to run only one Couchbase server per Docker host to ensure the ports are available.
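For example, a 1:1 mapping of the commonly used Couchbase ports might look like this (check the Couchbase documentation for the full list your version needs):

# map the web console / REST API ports and the data port explicitly
docker run -d --name couchbase \
  -p 8091-8094:8091-8094 \
  -p 11210:11210 \
  couchbase

The client can then connect to the host with the same port numbers it would use against a regular installation.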