I am using an official Couchbase Docker container.
Couchbase Server uses several important ports, and the container exposes them as random ports on the host.
Is there a way to supply those host ports when obtaining a Couchbase server connection? Something akin to how the server is configured pre-install, but for the client.
I am using their latest Node.js SDK but don't see a suitable options hash, e.g., on the Cluster.
I could fall back on a 1:1 (container:host) port mapping in the docker run command if all else fails.
Docker publishes all exposed ports to random ports on the host interface if the container is started with -P. The ports fall within the ephemeral range 32768–61000.
Each exposed port can be mapped to an explicit host port with -p hostPort:containerPort.
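For example, a sketch that pins the usual Couchbase ports to the same ports on the host (8091–8094 for the admin/API/query services and 11210 for data are the common defaults; adjust the list if your deployment uses others):

docker run -d --name couchbase \
  -p 8091-8094:8091-8094 \
  -p 11210:11210 \
  couchbase:latest

The client can then connect to the host just as it would to a locally installed server.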
This approach is independent of the client SDK.
It's recommended to run only one Couchbase server per Docker host to ensure those ports are available.
Related
I am running a Parity node on an AWS EC2 instance. I need to connect to this Parity node remotely using the WebSocket provider at port 8546. But I am not able to connect to it remotely, though it works fine when I run the script within the EC2 instance.
I have already defined the inbound TCP rule for port 8546:
Type: Custom TCP Rule, Protocol: TCP, Port Range: 8546, Source: IP/32
Is it possible to connect to the WebSocket port from outside the EC2 machine? Is there anything special I need to do in order to access the WebSocket port from outside the instance?
In your config, specifically the [websockets] section, you'll need to specify the interface, hosts, and origins that are allowed to communicate with Parity.
By default, Parity only listens on the local interface, and both "hosts" and "origins" are set to none, so only pages/apps hosted on your local device have access to your node.
For reference, I would use the websockets section of the Parity config generator: https://paritytech.github.io/parity-config-generator/
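A sketch of what that section might look like in Parity's TOML config file, assuming you want the node reachable remotely; "all" disables the host and origin checks, so tighten these values for anything beyond testing:

[websockets]
disable = false
interface = "0.0.0.0"
port = 8546
hosts = ["all"]
origins = ["all"]

With the interface opened up and the security-group rule above in place, a remote WebSocket provider pointed at ws://<instance-ip>:8546 should be able to reach the node.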
I am trying to restrict access to my express/nodejs app so that it can only be accessed from its domain URL. Currently, if I go to http://ip-address-of-server:3000, the app gets served directly, bypassing nginx.
I have tried adding 'localhost' in app.listen:
app.listen(4000, 'localhost', () => console.log('Server running'));
but this ends up making the app completely inaccessible, even through nginx.
The app is running inside a Docker container, and nginx is running on the host. I think this might be causing it but don't know how to fix it.
You can use Docker host-mode networking for your app, since you mentioned that "The app is running inside a docker container. And nginx is running on the host."
Ref - https://docs.docker.com/network/host/
This way, it will be reachable via nginx on the same network. Your 'localhost' binding will start working as usual once the container is launched with host-mode networking.
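A minimal sketch, assuming a hypothetical image name my-node-app and the localhost binding from the question left in place:

# host networking: the container shares the host's network stack,
# so the app's localhost:4000 is the host's localhost:4000
docker run -d --network host my-node-app

nginx on the host can then proxy to http://127.0.0.1:4000, while outside clients can no longer reach the app directly because it is bound to the loopback interface only.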
Looks like you want to do IP-based filtering, i.e., you want the Node.js program to be accessible only from nginx (localhost or remote) and locally.
There are two ways:
Use the express-ipfilter middleware (https://www.npmjs.com/package/express-ipfilter) to filter requests based on their IPs.
Let Node.js listen to everything, but change the iptables rules on the host to restrict port access to specific IPs. Expose the port of the Node.js container to the host using -p and close that port to the outside world in iptables (see the sketch after this list).
I prefer the second way, as it is more robust and restricts traffic at the network level.
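A sketch of the second approach, assuming the port is 4000 on both sides of the -p mapping and a Docker version new enough to have the DOCKER-USER chain (published traffic hits that chain after DNAT, so the rule matches the container port):

# publish the container port to the host
docker run -d -p 4000:4000 my-node-app
# drop external traffic to that port; host-local traffic (e.g., from nginx)
# does not traverse the FORWARD/DOCKER-USER chain and is unaffected
iptables -I DOCKER-USER -p tcp --dport 4000 -j DROP

A simpler variant with much the same effect is to publish the port on the loopback interface only, -p 127.0.0.1:4000:4000, so Docker never exposes it externally in the first place.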
I have a node app running in one Docker container, a mongo database in another, and a redis database in a third. In development I want to work with these three containers (not pollute my system with database installations), but in production I want the databases installed locally and the app in Docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using curl I found out they're connected and I can access them at their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want my app's container to be able to access MongoDB at "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (v1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which will make the container use your host's network instead of the default bridge. Note that this can raise security issues (the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote one (e.g., rinetd or an SSH tunnel). This basically creates a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT.
As stated in the comments, create a script to extract the IP address (e.g., ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose it as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then, inside the container, use the env var named DOCKER_HOST to connect to your services.
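A sketch of the consuming side in the Node app, assuming MongoDB's port is published on the host and a 2.x-era mongodb driver (where the connect callback receives the db handle; mydb is a placeholder database name):

// fall back to localhost so the same code works outside Docker too
var MongoClient = require('mongodb').MongoClient;
var host = process.env.DOCKER_HOST || 'localhost';

MongoClient.connect('mongodb://' + host + ':27017/mydb', function (err, db) {
  if (err) throw err;
  console.log('connected to mongodb on', host);
  db.close();
});

The same pattern applies to the redis client: read the host from the environment with a localhost fallback, so the code stays unchanged between development and production.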
I am running several Docker containers for a very small web app: nginx, node, and redis. These containers are all linked together using the legacy links method (not a network) with the pattern
nginx --proxies-> node --uses-> redis
My nginx proxy is set up to use HTTPS but my node server (using hapi.js) is not. Is this a security issue?
It isn't a security issue if you aren't sending your data from nginx to node over public networks. If your HTTP traffic is transferred within one host machine, and this host machine is fully controlled by you, it will be unreachable from outside.
I have a meteor application with mongodb running on one of my systems. I want another application running on a different system to be able to access the mongodb that is spawned by my meteor application.
How can I accomplish this? By default the MongoDB bind IP is localhost, so it's not accessible from outside.
It's not recommended, but you can lift MongoDB's restriction of being accessible only via localhost.
See: http://www.mkyong.com/mongodb/mongodb-allow-remote-access/
If your service is on the same server, then use the localhost address:
Meteor tends to expose MongoDB at its own port + 1 (if Meteor is on port 3000, MongoDB is on port 3001), so your service can access it at localhost:3001.
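For example, assuming the default database name used by Meteor's bundled MongoDB (meteor; adjust if your setup differs):

MONGO_URL=mongodb://localhost:3001/meteor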
If you want to access it from another server, then you need to change the MongoDB config to expose the port to the outside (and probably also set up some firewall rules to give access only to your other server, etc.)
and then use a connection string of the form
MONGO_URL=mongodb://hostname:port
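A sketch of that config change, assuming the YAML mongod.conf format (MongoDB 2.6+); 0.0.0.0 listens on all interfaces, so pair it with the firewall rules mentioned above:

# /etc/mongod.conf
net:
  port: 27017
  bindIp: 0.0.0.0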
Ideally, deploy your MongoDB securely somewhere and plug Meteor and any other app needing it into it via the connection string.