We are using JHipster for our microservice apps and sending app logs directly to the Logstash server via the jhipster.logging.logstash.host property. All our apps and the ELK stack (JHipster Console) run as Docker containers. We plan to run multiple Docker Swarm stacks (dev, sita, sitb, etc.) on a single Docker host. We have only one ELK server, and all logs will go to it. I would like to index the logs by environment name, e.g. stack-deva, stack-sita. Is there a way to add a new field like 'env' in the JHipster properties that Logstash can then use to create the indexes? For example:
if [env] == "sita" {
  index => "sita-projectname"
}
Thank you
You could define several TCP listeners on different ports in logstash.conf. This way you can have a different index per environment, and each environment's app properties would point to a different port.
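A minimal logstash.conf sketch of that setup; the ports, environment names, and index pattern below are assumptions:

input {
  # one TCP listener per environment; 'type' tags every event arriving on that port
  tcp { port => 5000 type => "deva" codec => json_lines }
  tcp { port => 5001 type => "sita" codec => json_lines }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    # the tag set by the input becomes part of the index name
    index => "%{type}-projectname-%{+YYYY.MM.dd}"
  }
}

Each stack then points jhipster.logging.logstash.port at its own listener.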
I have a VPS server (Ubuntu) and wanted multiple Node.js sites to be running on it, thus multiple domains.
What I tried:
Kubernetes with HA and one Docker image (container) per website, but memory consumption would increase and the deployment is complex.
What I need:
I don't care if the database instance is shared; each website can have its own database within that instance.
Node.js must run in the background and has some env variables.
The simplest possible routing from domain names to the Node.js ports, like 3000, 4000, 5000, and so on.
I would advise using NGINX as a reverse proxy, as described in Setting up an Nginx Reverse Proxy.
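A minimal sketch of one such server block (the domain and port are placeholders); repeat it once per site:

server {
  listen 80;
  server_name site-a.example.com;      # placeholder domain
  location / {
    proxy_pass http://127.0.0.1:3000;  # the Node.js app serving this domain
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}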
I'm trying to scale my game servers (Node.js), where each instance should have a unique port assigned to it, instances are separate (no load balancing of any kind), and each instance is aware of its assigned port (ideally via an env variable?).
I've tried Docker Swarm, but it has no option to specify a port range, and I couldn't find any way to allocate a port and pass it to the instance so that it is aware of the port it's running on, e.g. via an env variable.
Ideal solution would look like:
Instance 1: hostIP:1000
Instance 2: hostIP:1001
Instance 3: hostIP:1002
... etc
Now, I've managed to make this work with regular (non-Swarm) Docker by binding to the host network and passing the env variable PORT, but this way I'd have to manually spin up as many game servers as I need.
My Node app uses process.env.PORT to bind to the host's IP address:port.
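A minimal sketch of that binding (the fallback port and the plain HTTP server are assumptions):

// each instance reads its assigned port from the environment
const http = require('http');
const port = Number(process.env.PORT) || 3000; // fallback for local runs is an assumption
http.createServer((req, res) => res.end('ok')).listen(port, '0.0.0.0');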
Any opinion on what solutions I could use to scale my app?
You could try different approaches.
Use Docker Compose and an external service for extracting data from docker.sock, as suggested in How to get docker mapped ports from node.js application?
Use Redis or any other key-value storage service to store port information and fetch it on every new instance launch. The simplest option is the Redis INCR command to get the next free number, but it has some limitations; a sketch follows below.
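A minimal sketch of the Redis approach, assuming the node-redis v4 client, a Redis host named redis, and a base port of 1000 (all assumptions); note that a bare INCR never reclaims the ports of stopped instances, which is one of those limitations:

// each new instance atomically claims the next port number from Redis
const { createClient } = require('redis');

async function allocatePort() {
  const client = createClient({ url: 'redis://redis:6379' }); // assumed address
  await client.connect();
  const n = await client.incr('game:port-counter'); // atomic: no two instances collide
  await client.quit();
  return 1000 + n - 1; // instance 1 -> 1000, instance 2 -> 1001, ...
}

allocatePort().then((port) => {
  // hand the port to the game server, e.g. server.listen(port)
  console.log(`this instance will listen on port ${port}`);
});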
Not too sure what you mean there. Could you provide more detail?
I dockerized my Node.js server, which also runs my Telegram bot.
Now I'm not able to use my Docker image more than once (for the load balancer, etc.) without getting a duplicate Telegram bot error.
Is there a way to fix this without extracting the bot into a separate Docker image?
Nginx handles the load balancing, if that matters.
Docker assigns a random container ID, which is set as the hostname of the container (unless you are using --net=host or manually overriding it), and it is available as an environment variable inside the container. During the start of your Node.js application, you could read this environment variable (HOSTNAME) and use it as a unique identifier for your scaled Telegram bots.
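A minimal sketch of reading that identifier in Node.js (the fallback value is an assumption):

// HOSTNAME is set by Docker to the container ID unless overridden
const instanceId = process.env.HOSTNAME || 'local'; // fallback for non-Docker runs
// use it wherever each bot replica needs a unique name
console.log(`starting bot replica ${instanceId}`);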
I keep getting errors related to conflicting ports. When I set a breakpoint inside Program.cs at the line containing
ServiceRuntime.RegisterServiceAsync
it actually stops there more than once per service in the Service Fabric project, which is obviously why it's trying to bind to the same port more than once! Why is it doing this all of a sudden?
HttpListenerException: Failed to listen on prefix 'https://+:446/' because it conflicts with an existing registration on the machine.
The problem is that the HttpListener is trying to bind to a port that is already in use. The cause can be one of the following:
Another process is already using the port. Try netstat -ano to find out which process is using the port, and then tasklist /fi "pid eq <pid of process>" to find the process name.
Maybe you are starting your development cluster as a multi-node instance; that way, several nodes on one machine try to access the same port.
Maybe you have a frontend and an API that you want to run on the same port; then you have to use the path-based binding capabilities of http.sys (if you are using the WebListener).
If this fails, could you please post a snippet of the ServiceManifest.xml? There should be a line defining your endpoint: <Endpoint Protocol="https" Type="Input" Port="446" />
In your application manifest, you define how many instances of your service you want. The common mistake is setting this number to more than 1: it will fail because your local cluster shows 5 nodes, but they all run on the same machine, so the machine's port can be used only by the first instance started.
Set the number of instances to 1 and you won't see multiple hits on the main entry point in Program.cs.
Make it configurable from ApplicationParameters, so you can define this number per environment.
You say that you didn't have to set the instance count before; that could be because you use publish profiles that differ between Cloud and Local deployments. The profile points to the corresponding application parameters file, in which you can set the instance count to 1 for local deployments.
Perhaps something happened to your publish profiles?
ApplicationParameters/Local.1Node.xml:
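For illustration, a minimal sketch of what that file typically contains; the application and parameter names below are placeholders:

<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://schemas.microsoft.com/2011/01/fabric" Name="fabric:/MyApp">
  <Parameters>
    <!-- force a single instance on the local one-node cluster -->
    <Parameter Name="MyService_InstanceCount" Value="1" />
  </Parameters>
</Application>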
I have a node app running in one docker container, a mongo database on another, and a redis database on a third. In development I want to work with these three containers (not pollute my system with database installations), but in production, I want the databases installed locally and the app in docker.
The app assumes the databases are running on localhost. I know I can forward ports from containers to the host, but can I forward ports between containers so the app can access the databases? Port forwarding the same ports on different containers creates a collision.
I also know the containers will be on the same bridged network, and using the curl command I found out they're connected and I can access them via their respective IP addresses. However, I was hoping to make this project work without changing the "localhost" specification in the code.
Is there a way to forward these ports? Perhaps in my app's Dockerfile using iptables? I want my app's container to be able to access MongoDB as "localhost:27017", for example, even though they're in separate containers.
I'm using Docker for Mac (V 1.13.1). In production we'll use Docker on an Ubuntu server.
I'm somewhat of a noob. Thank you for your help.
Docker only allows you to map container ports to host ports (not the reverse), but there are some ways to achieve what you want:
You can use --net=host, which will make the container use your host's network instead of the default bridge. Note that this can raise some security issues (the container can potentially access any other service you run on your host).
You can run something inside your container to map a local port to a remote port (e.g. rinetd or an SSH tunnel). This basically creates a mapping localhost:SOME_PORT --> HOST_IP_IN_DOCKER0:SOME_PORT; see the sketch at the end of this answer.
As stated in the comments, create a script to extract the host's IP address (e.g. ifconfig docker0 | awk '/inet addr/{print substr($2,6)}'), and then expose it as an environment variable.
Supposing that script is wrapped in a command named getip, you could run it like this:
$ docker run -e DOCKER_HOST=$(getip) ...
and then, inside the container, use the env var DOCKER_HOST to connect to your services.
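To illustrate the second approach with socat instead of rinetd (socat and the Mongo port are assumptions on my part), the container could forward its own localhost to the host:

# inside the app container: make localhost:27017 point at the host's mongod
socat TCP-LISTEN:27017,fork,reuseaddr TCP:${DOCKER_HOST}:27017 &
# the app keeps using mongodb://localhost:27017 unchanged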