I dockerized my Node.js server, which also handles my Telegram bot.
Now I can't run more than one instance of the Docker image (behind the load balancer, etc.) without getting a duplicate Telegram bot error.
Is there a way to fix this without extracting the bot into a separate Docker image?
Nginx handles the load balancing, if that matters.
Docker assigns each container a random ID, which is set as the container's hostname (unless you use --net=host or override it manually) and is available inside the container as the HOSTNAME environment variable. During startup of your Node.js application, you can read this variable and use it as a unique identifier for your scaled Telegram bots.
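A minimal sketch of the idea, assuming the node-telegram-bot-api library and a hypothetical BOT_OWNER_HOSTNAME variable that designates which replica owns the bot:

    // Read the container ID that Docker injects as the hostname.
    const instanceId = process.env.HOSTNAME || 'local';

    // Hypothetical election: only the replica named in BOT_OWNER_HOSTNAME starts
    // polling Telegram, so the other replicas keep serving HTTP traffic without
    // triggering the duplicate getUpdates error.
    if (instanceId === process.env.BOT_OWNER_HOSTNAME) {
      const TelegramBot = require('node-telegram-bot-api');
      const bot = new TelegramBot(process.env.BOT_TOKEN, { polling: true });
      bot.on('message', (msg) => bot.sendMessage(msg.chat.id, `handled by ${instanceId}`));
    }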
I am trying to get a streaming service running from a modified version of an open source repo https://github.com/nabendu82/streams.
I have a frontend client in React, an RTMP server for the stream, and a backend API, with a docker-compose file to host them all together. If I run docker-compose up on my local computer, everything works perfectly. I can visit http://localhost:3000/matches/view and see two stream windows that stay unloaded until I open the streaming software OBS (Settings -> Stream -> Server: rtmp://localhost/live, Stream Key: 7), at which point the right stream window starts.
To host this repo on the internet, I've created a basic EC2 instance on AWS (http://13.54.200.18:3000/matches/view). I installed docker-compose and I've copied all the repo files up to it.
However, when running on the AWS box the stream does not load, and the console error is always the spectacularly unhelpful:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://server:3002/streams/6. (Reason: CORS request did not succeed).
So for some reason CORS is preventing the React frontend from reading the server backend while it is hosted on AWS.
Here's the catch: I can actually get the streaming on the AWS-hosted site to work, but only by running docker-compose up on my LOCAL computer at the same time. For some unknown reason, the AWS-hosted version picks up the backend server running on my local machine (rather than the one running alongside it in docker-compose on AWS) and connects that way. I can even stream to the website via OBS at rtmp://13.54.200.18/live and everything works. But it only works from my local computer while it's running the docker-compose infrastructure (and only if I make calls to 'localhost' instead of the docker-compose service 'server'). If anyone else tries to view the stream on the live site, they just get Loading... perpetually, plus the CORS error.
Why is the AWS hosted code not looking at its own docker-compose file and its own server:3002 service? For the rest of the world, and for me if I'm not running a local server, it throws a CORS error. For just my local computer, and only if I'm running a local server and making requests to 'localhost:3002', it works perfectly.
If I SSH onto the AWS instance, docker-compose run client curl localhost:3002/streams fails, but docker-compose run client curl server:3002/streams gives me back the correct JSON data. From everything I understand about Docker Compose, my services should be able to access each other, and it appears they can: everything works great locally, and the services can talk to each other on the AWS box too. Yet somehow this CORS error appears out of nowhere, but only in the AWS-hosted version.
I've tried everything under the sun that I can think of. I was originally using json-server, but I thought that might be the issue (since it has to be explicitly bound with -H 0.0.0.0), so I replaced it with my own Express server using the cors package, and nothing changed. I've tried every configuration of docker-compose variables I can imagine. As far as I can tell, I've done everything right, but somehow the AWS box wants to talk to my own computer's localhost (aka the "server" service, aka 0.0.0.0) instead of its own. What is going on?
Repository here: https://github.com/JeremyEllingham/streams
Any help much appreciated.
I figured out how to get it working: in production, post directly to the Linux box's IP address instead of trying to use "localhost" or the Docker service names. I'm kind of disappointed that docker-compose doesn't quite work the way I thought it did, but it's totally functional to just conditionally alter the base URL.
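For anyone hitting the same wall, the underlying reason is that the React code runs in the visitor's browser, outside the Compose network, so a service name like server only resolves between containers, never in the browser. A minimal sketch of the conditional base URL (the production address is the EC2 IP from the question; adjust it for your host):

    // Browsers can't resolve docker-compose service names, so use a
    // host-reachable address in production and localhost in development.
    const BASE_URL = process.env.NODE_ENV === 'production'
      ? 'http://13.54.200.18:3002'
      : 'http://localhost:3002';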
See also this answer: React app (in a docker container) cannot access API (in a docker container) on AWS EC2
I'm trying to scale my game servers (Node.js). Each instance should have a unique port assigned to it, instances should be separate (no load balancing of any kind), and each instance should know which port it has been assigned (ideally via an environment variable).
I've tried Docker Swarm, but it has no option to specify a port range, and I couldn't find any way to allocate a port or pass it to the instance so it knows which port it's running on, e.g. via an environment variable.
Ideal solution would look like:
Instance 1: hostIP:1000
Instance 2: hostIP:1001
Instance 3: hostIP:1002
... etc
Now, I've managed to make this work with regular Docker (non-swarm) by binding to the host network and passing a PORT environment variable, but that way I'd have to manually spin up as many game servers as I need.
My Node app uses process.env.PORT to bind to the host's IP address and port.
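For reference, the manual setup looks roughly like this (image and container names hypothetical):

    docker run -d --net=host -e PORT=1000 --name game-1000 my-game-image
    docker run -d --net=host -e PORT=1001 --name game-1001 my-game-image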
Any opinion on what solutions I could use to scale my app?
You could try different approaches.
Use Docker Compose and an external service for extracting data from docker.sock, as suggested here: How to get docker mapped ports from node.js application?
Use Redis or any other key-value storage service to store port information and fetch it on every new instance launch. The simplest solution is to use the Redis INCR command to get the next free number, but it has some limitations (for instance, the counter never shrinks when an instance dies); see the sketch below.
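A sketch of the Redis approach, assuming the node-redis v4 client, a Redis service reachable at redis:6379, and a base port of 1000 (all assumptions):

    // Atomically claim the next port number at startup. INCR is safe even when
    // many instances launch at once; note the counter never shrinks when an
    // instance dies, which is one of the limitations mentioned above.
    const { createClient } = require('redis');

    const BASE_PORT = 1000; // assumption: first instance gets hostIP:1000

    async function allocatePort() {
      const client = createClient({ url: 'redis://redis:6379' });
      await client.connect();
      const n = await client.incr('game:port:counter'); // returns 1, 2, 3, ...
      await client.quit();
      return BASE_PORT + n - 1;
    }

    allocatePort().then((port) => {
      // bind the game server here, e.g. server.listen(port)
      console.log(`instance bound to port ${port}`);
    });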
Not too sure what you mean there. Could you provide more detail?
We are using JHipster for our microservice apps and sending app logs directly to the Logstash server via the jhipster.logging.logstash.host property. All our apps and the ELK stack (JHipster Console) run as Docker containers. We plan to run multiple Docker Swarm stacks (dev, sita, sitb, etc.) on a single Docker host. We have only one ELK server, and all logs will go to it. I would like to index the logs by environment name, like stack-deva, stack-sita, etc. For this, is there a way to add a new field like 'env' in the JHipster properties that Logstash can use to create the indexes? For example:
if [env] == "sita" {
  index => "sita-projectname"
}
Thank you
You could define several tcp listeners on different ports in logstash.conf.
This way you can have different indexes; each app's properties would use a different port per environment.
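A sketch of what that could look like in logstash.conf (ports, codec, and index pattern are assumptions; adjust them to your JHipster setup):

    input {
      tcp {
        port  => 5000
        codec => json_lines
        add_field => { "env" => "sita" }
      }
      tcp {
        port  => 5001
        codec => json_lines
        add_field => { "env" => "sitb" }
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{env}-projectname-%{+YYYY.MM.dd}"
      }
    }

Each stack would then point jhipster.logging.logstash.port at its own listener.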
I have two Docker containers, one running a React app (built using create-react-app) and another with a Node app API. I have a docker-compose file set up, and according to the documentation I should be able to use the names of the services to communicate between containers.
However, when I try to send a request to the /login endpoint of my API from the React app, I get a net::ERR_NAME_NOT_RESOLVED error. I'm using Unirest to send the request.
I've done a bunch of digging around online and have come across a few descriptions of similar issues, but I still haven't found a solution. When I run cat /etc/resolv.conf (see this issue) in my React container, the container with my API doesn't show up, but Docker is still fairly new to me, so I'm not sure whether that's part of the issue. I've also tried using links and user-defined networks in my compose file, but to no avail.
I've included gists of my docker-compose.yml file as well as the code snippet of my request. Any help is much appreciated!
docker-compose.yml
Unirest request to /login
As discussed in the comments on the original post, this was a DNS issue. Configuring the DNS was a little too involved for this project's use case, so I solved the problem with an environment variable that sets the URL used for calls to my API container, depending on whether I'm running in a dev or prod environment:
(process.env.REACT_APP_URL_ROOT || '/app')
I set REACT_APP_URL_ROOT to my localhost address when running locally, and have an nginx container configured to proxy to /app when I build and deploy the React app.
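For illustration, a helper along these lines would use that root for the /login call from the question (the function name and the use of fetch instead of Unirest are my own sketch, not the poster's code):

    // create-react-app bakes REACT_APP_* variables in at build time.
    const API_ROOT = process.env.REACT_APP_URL_ROOT || '/app';

    export function login(credentials) {
      return fetch(`${API_ROOT}/login`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(credentials),
      });
    }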
I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the "curl" command I found out they're connected and I can access them using their relative IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want my app's container to be able to access MongoDB at "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve that:
You can use --net=host, which makes the container use your host's network instead of the default bridge. Note that this can raise security issues (the container can potentially access any other service you run on your host)...
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This basically creates a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT.
As stated in the comments, create a script to extract the IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}') and expose it as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then inside the container use the env var named DOCKER_HOST to connect your services.
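On the Node side, that could look like this (the mongodb driver and port 27017 are taken from the question; the fallback logic is an assumption):

    // In development, DOCKER_HOST carries the docker0 IP injected at run time;
    // in production, fall back to localhost since the databases live on the host.
    const { MongoClient } = require('mongodb');

    const dbHost = process.env.DOCKER_HOST || 'localhost';
    const mongo = new MongoClient(`mongodb://${dbHost}:27017`);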