How to dynamically manage Prometheus file_sd_configs in a Docker container? - node.js

I have been using a targets.json file with a node.js application running locally to dynamically add IP addresses for Prometheus to probe, using the file_sd_configs service-discovery option. It has worked well: I was able to add new IPs, call the Prometheus reload API from the node app, monitor those IPs, and issue alerts (with Blackbox Exporter and Alertmanager).
However, the application and Prometheus now run inside Docker on the same network. How can I make my node application write to (or update) a file inside a folder in the Prometheus container?

You could bind-mount the targets.json file into both the Prometheus container and the application container by adding a volume mapping to your docker-compose file:
volumes:
- /hostpath/targets.json:/containerpath/targets.json
Instead of a mapped host-system folder you can also use named volumes; see the Docker documentation on volumes for more information.
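With a shared path in place, the node app can write the file and then hit the Prometheus reload endpoint over the Docker network. Below is a minimal Node.js sketch, assuming the Prometheus service is named prometheus in docker-compose, the shared file is mounted at /shared/targets.json in the app container, and Prometheus was started with --web.enable-lifecycle (the service name, path, and job label are assumptions, not something from the question):
// update-targets.js - rewrite the shared file_sd_configs file and reload Prometheus
const fs = require('fs');
const http = require('http');

// Path where the shared volume is mounted inside the app container (assumption)
const TARGETS_FILE = '/shared/targets.json';

function updateTargets(ips) {
  // file_sd_configs expects an array of { targets, labels } groups
  const groups = [{ targets: ips, labels: { job: 'blackbox' } }];
  fs.writeFileSync(TARGETS_FILE, JSON.stringify(groups, null, 2));

  // Ask Prometheus to reload its configuration; requires --web.enable-lifecycle
  const req = http.request(
    { host: 'prometheus', port: 9090, path: '/-/reload', method: 'POST' },
    res => console.log('Prometheus reload status:', res.statusCode)
  );
  req.on('error', err => console.error('Reload failed:', err.message));
  req.end();
}

updateTargets(['10.0.0.5:8080', '10.0.0.6:8080']);
Two notes on this sketch: Prometheus also watches file_sd_configs files for changes on its own, so the explicit reload is often unnecessary for target updates, but it matches the workflow described in the question. And if you bind-mount a single file rather than its directory, make sure the app overwrites the file in place (as writeFileSync does); replacing it via rename creates a new inode that the other container's bind mount will not follow, which mounting the parent directory or a named volume avoids.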

Related

Does Kubernetes restart a failed container or create a new container when the running container fails for any reason?

I have run a Docker container locally and it stores data in a file (currently no volume is mounted). I stored some data using the API. After that, I failed the container using process.exit(1) and started it again. The previously stored data in the container survives (as expected). But when I do the same thing in Kubernetes (minikube), the data is lost.
Posting this as a community wiki for better visibility; feel free to edit and expand it.
As described in the comments, Kubernetes replaces failed containers with new (identical) ones, which explains why the container's filesystem starts out clean.
Also, as noted, containers should be stateless. There are different options for running applications and taking care of their data:
Run a stateless application using a Deployment
Run a stateful application either as a single instance or as a replicated set
Run automated tasks with a CronJob
Useful links:
Kubernetes workloads
Pod lifecycle
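On the application side, the takeaway is to write state to a path backed by a volume rather than to the container's writable layer. A minimal Node.js sketch, assuming a PersistentVolumeClaim (or other volume) is mounted at /data in the pod spec (the mount path and file name are assumptions):
// store.js - persist data under a volume-mounted directory so it survives pod restarts
const fs = require('fs');
const path = require('path');

// Mount point of the PersistentVolumeClaim in the pod spec (assumption)
const DATA_DIR = process.env.DATA_DIR || '/data';
const DATA_FILE = path.join(DATA_DIR, 'store.json');

function save(record) {
  // Read what is already there, append the new record, and write it back
  const existing = fs.existsSync(DATA_FILE)
    ? JSON.parse(fs.readFileSync(DATA_FILE, 'utf8'))
    : [];
  existing.push(record);
  fs.writeFileSync(DATA_FILE, JSON.stringify(existing, null, 2));
}

save({ createdAt: new Date().toISOString(), value: 42 });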

Modify file from kubernetes pod

I want to modify a particular config file in a running Kubernetes pod at runtime.
How can I get the pod name at runtime, modify the file in the running pod, and restart it to pick up the changes? I am trying this in Python 3.6.
Suppose I have two running pods.
In one pod I have a config.json file containing:
{
"server_url" : "http://127.0.0.1:8080"
}
I want to replace 127.0.0.1 with the other Kubernetes service's load-balancer IP.
Generally you would do this with an initContainer and a templating tool like envsubst, confd, or consul-template.
Use the downward API to capture the pod name. Write a startup script that fetches the config file you want to update, populates the required values with sed, and then starts the container's main process.
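The answers above describe a shell-based approach; since the rest of this page is Node.js, here is the same substitution idea sketched as a Node.js entrypoint for illustration. The environment variable names, config path, and main script are assumptions; POD_NAME is expected to be injected via the downward API (fieldRef: metadata.name):
// entrypoint.js - populate config.json at startup, then hand off to the real app
const fs = require('fs');
const { spawn } = require('child_process');

// POD_NAME comes from the downward API in the pod spec (assumption)
console.log('Running in pod:', process.env.POD_NAME);

// Rewrite server_url with the address of the other service (env var name is an assumption)
const config = JSON.parse(fs.readFileSync('/app/config.json', 'utf8'));
config.server_url = process.env.BACKEND_URL || config.server_url;
fs.writeFileSync('/app/config.json', JSON.stringify(config, null, 2));

// Start the container's main process
spawn('node', ['server.js'], { stdio: 'inherit' });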

How do I implement a simple scheduler to start docker container on the least busy host

I am not using Docker Swarm or Kubernetes. I need to implement a simple scheduler to start a Docker container on demand on the least busy host. The container runs Node.js code, by the way.
Is there an open-source project out there that already implements this?
Thanks!
Take a look at Kubernetes Jobs if you need to run a container once, on demand.
As for the least busy node, you can use nodeAffinity to target nodes that don't run other apps, or, better, specify resource requests for your app and let the Kubernetes scheduler decide where to run it.
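If you do want to roll your own without Swarm or Kubernetes, a rough Node.js sketch of the "least busy host" idea using the dockerode client could look like the following. It assumes each host exposes the Docker Engine API on TCP port 2375 and uses "fewest running containers" as a very crude busyness metric; the host list, image name, and heuristic are all assumptions:
// scheduler.js - start a container on the host with the fewest running containers
const Docker = require('dockerode');

// Docker hosts exposing the Engine API over TCP (assumption; secure with TLS in practice)
const HOSTS = ['10.0.0.11', '10.0.0.12', '10.0.0.13'];

async function leastBusyHost() {
  const infos = await Promise.all(
    HOSTS.map(async host => {
      const docker = new Docker({ host, port: 2375 });
      const info = await docker.info(); // includes ContainersRunning, NCPU, MemTotal
      return { host, docker, running: info.ContainersRunning };
    })
  );
  // Pick the host currently running the fewest containers
  return infos.reduce((a, b) => (a.running <= b.running ? a : b));
}

async function schedule(image, cmd) {
  const { host, docker } = await leastBusyHost();
  console.log('Starting container on', host);
  const container = await docker.createContainer({ Image: image, Cmd: cmd });
  await container.start();
  return container;
}

schedule('node:18-alpine', ['node', '-e', "console.log('hello')"]).catch(console.error);
A real scheduler would want a better load signal (CPU and memory from docker stats or host metrics) and retries, but the structure stays the same.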

How to connect Cassandra and Node application in docker?

I am trying to connect to Cassandra, which is inside a Docker container, from a Node.js application that is in another Docker container.
My question is: what is the best way to do it?
So far I am able to create both containers using docker-compose.
There are many tutorials on connecting Docker containers. See:
https://deis.com/blog/2016/connecting-docker-containers-1/
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
https://docs.docker.com/engine/userguide/networking/
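In short: put both services on the same docker-compose network and connect from Node.js using the Cassandra service's compose name as the contact point. A minimal sketch with the cassandra-driver package, assuming the Cassandra service is named cassandra in docker-compose and uses the default datacenter1 data center (the service name and data center are assumptions):
// cassandra-client.js - connect to the Cassandra container over the compose network
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['cassandra'],     // docker-compose service name, resolved by Docker DNS
  localDataCenter: 'datacenter1',   // default for the official cassandra image
});

async function main() {
  await client.connect();
  const result = await client.execute('SELECT release_version FROM system.local');
  console.log('Cassandra version:', result.rows[0].release_version);
  await client.shutdown();
}

main().catch(console.error);
Note that Cassandra takes a while to become ready, so the Node.js container should retry the connection rather than assume Cassandra is up as soon as docker-compose starts it.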

Addressing issues with Apache Spark application run in Client mode from Docker container

I'm trying to connect to a standalone Apache Spark cluster from a dockerized Apache Spark application using client mode.
The driver gives the Spark Master and the Workers its address. When it runs inside a Docker container it will use some_docker_container_ip. That Docker address is not visible from outside, so the application won't work.
Spark has a spark.driver.host property, which is passed to the Master and the Workers. My initial instinct was to put the host machine's address there so the cluster would address a visible machine instead.
Unfortunately, spark.driver.host is also used by the driver to set up its server. Passing the host machine's address there causes server startup errors, because the Docker container cannot bind ports on the host machine's address.
It seems like a lose-lose situation: I can use neither the host machine's address nor the Docker container's address.
Ideally I would like to have two properties: a spark.driver.host-to-bind-to used to set up the driver server and a spark.driver.host-for-master used by the Master and Workers. Unfortunately, it seems I'm stuck with only one property.
Another approach would be to use --net=host when running the Docker container. This approach has many disadvantages (e.g. other Docker containers cannot be linked to a container running with --net=host and must be exposed outside the Docker network), and I would like to avoid it.
Is there any way I could solve the driver-addressing problem without exposing the docker containers?
This problem is fixed in https://github.com/apache/spark/pull/15120 and will be part of the Apache Spark 2.1 release.
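For reference, that change adds a separate bind address for the driver (spark.driver.bindAddress), so from Spark 2.1 onward the driver can bind inside the container while advertising the host machine's address via spark.driver.host. A rough sketch of the relevant spark-submit configuration, with the concrete addresses left as placeholders (and remembering that the driver and block-manager ports still need to be pinned and published for the cluster to reach them):
spark-submit \
  --master spark://<spark-master-host>:7077 \
  --deploy-mode client \
  --conf spark.driver.host=<host-machine-address> \
  --conf spark.driver.bindAddress=<container-address-or-0.0.0.0> \
  your-app.jar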
