I am trying to connect to Cassandra, which is running inside a Docker container, from a Node.js application that is running in another Docker container.
My question is: what is the best way to do this?
So far I have been able to create both containers using docker-compose.
There are many tutorials on connecting Docker containers. See:
https://deis.com/blog/2016/connecting-docker-containers-1/
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
https://docs.docker.com/engine/userguide/networking/
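In short, with docker-compose both services join the same default network, so the Node.js container can reach Cassandra by its service name. A minimal sketch, where the service names, image tag, build path and environment variable are assumptions rather than part of the original setup:

version: "3"
services:
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"       # optional: also expose CQL to the host
  app:
    build: ./app          # assumed location of the Node.js application
    environment:
      - CASSANDRA_CONTACT_POINT=cassandra   # resolvable by service name on the compose network
    depends_on:
      - cassandra

The Node.js driver would then use cassandra:9042 as its contact point instead of localhost. Note that depends_on only orders startup; the app may still need retry logic while Cassandra initializes.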
I am learning Kubernetes and looking for suggestions on deploying my application.
My application background:
Backend: NodeJS
Frontend: ReactJS
Database: MongoDB (started by simply running mongod, rather than using a MongoDB cloud service)
I already know how to use Docker Compose to deploy the application on a single node.
Now I want to deploy the application with Kubernetes (3 nodes).
How should I deploy MongoDB and make sure its data stays synchronized across the 3 nodes?
I have researched this and I am confused by some of the keywords,
e.g. deploying a standalone MongoDB instance,
StatefulSet, ...
Are these articles suitable for my situation, or do you know of other resources about this? Thanks!
You can install MongoDB using this Helm chart.
You can start the MongoDB chart in replica set mode with the following parameter: replicaSet.enabled=true
Some characteristics of this chart are:
Each participant in the replication has a fixed StatefulSet, so you always know where to find the primary, secondary or arbiter nodes.
The number of secondary and arbiter nodes can be scaled out independently.
It is easy to move an application from a standalone MongoDB server to a replica set.
See here for configuration and installation details.
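For example, replica set mode can be enabled through a values file. A rough sketch; only replicaSet.enabled comes from the answer above, the other keys and counts are assumptions based on common conventions for this chart:

# values-mongodb.yaml
replicaSet:
  enabled: true      # run a replica set instead of a standalone server
  replicas:
    secondary: 2     # assumed key: number of secondary nodes
    arbiter: 1       # assumed key: number of arbiter nodes
# installed with e.g.: helm install my-mongo <chart-from-the-link-above> -f values-mongodb.yaml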
You can create Helm charts for your apps for deployment:
Create a Dockerfile for your app; make sure you copy in the build created with npm run build.
Push the image to Docker Hub or another registry such as ACR or ECR.
Add the image tags in the Helm deployments and pass values from values.yaml (see the sketch below).
For the MongoDB deployment, use this chart: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
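For the third step, the image reference typically lives in the chart's values.yaml and is consumed by the Deployment template. A rough sketch; the repository name and tag are placeholders:

# values.yaml of the application chart
image:
  repository: myregistry.azurecr.io/my-node-app   # hypothetical ACR repository
  tag: "1.0.0"                                    # the tag pushed in the previous step
  pullPolicy: IfNotPresent

# referenced in the Deployment template as:
#   image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"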
I have been using a targets.json file inside a Node.js application running locally to dynamically add IP addresses for Prometheus to probe, via the file_sd_configs service discovery option. It has worked well: I was able to add new IPs and call the Prometheus reload API from the Node app, monitor those IPs and issue alerts (with Blackbox exporter and Alertmanager).
However, the application and Prometheus now run inside Docker on the same network. How can I make my Node application write to (or update) a file inside a folder in the Prometheus container?
You could bind-mount the targets.json file into both the Prometheus container and the application container by adding a volume mapping to your docker-compose file.
volumes:
- /hostpath/targets.json:/containerpath/targets.json
Instead of a mapped host-system folder you can also use named volumes; see here for more information about Docker volumes.
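Put together, a docker-compose setup could share the file through a named volume, so the Node application writes it and Prometheus reads it. A sketch assuming service names app and prometheus and an arbitrary target directory:

version: "3"
services:
  app:
    build: .
    volumes:
      - sd_targets:/etc/prometheus/targets   # the Node app writes targets.json here
  prometheus:
    image: prom/prometheus
    volumes:
      - sd_targets:/etc/prometheus/targets   # Prometheus reads the same targets.json
volumes:
  sd_targets:                                # named volume shared by both containers

The file_sd_configs entry in prometheus.yml would then point at /etc/prometheus/targets/targets.json.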
This question applies to Cassandra as well.
I am using the cassandra-driver package in Node to connect to my ScyllaDB node. The node can be accessed via cqlsh from other Linux machines outside of its network. However, when I try to connect to it from a Node application on Windows, it is unable to reach the host. I have tried ports 9042, 9160 and a few others as well.
Previously, I was using Docker to host multiple ScyllaDB nodes on the same Linux VM, and Docker exposed them via port 80, which I was able to connect to from the Node application.
Where do you think the problem is? Does Windows have a problem connecting to Scylla/Cassandra nodes?
P.S.: The Scylla node is hosted on an Ubuntu 18.04 VM on Azure.
I am not using Docker Swarm or Kubernetes. I need to implement a simple scheduler that starts a Docker container on demand on the least busy host. The container runs Node.js code, by the way.
Is there any open source project out there that already implements this?
Thanks!
Take a look at Kubernetes Jobs if you need to run a container once and on demand.
As for the least busy node, you can use nodeAffinity to target nodes that don't host other apps, or, better, specify resource requests for your app and let Kubernetes decide where to run it (see the sketch below).
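A rough sketch of such a Job with resource requests, so the scheduler places it on a node with spare capacity; the image name and resource values are assumptions:

apiVersion: batch/v1
kind: Job
metadata:
  name: node-task
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: node-task
          image: myregistry/node-task:latest   # hypothetical Node.js image
          resources:
            requests:
              cpu: "500m"     # the scheduler only picks nodes with this much free CPU
              memory: 256Mi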
I'm trying to connect to a standalone Apache Spark cluster from a dockerized Apache Spark application using client mode.
The Driver gives the Spark Master and the Workers its address. When it runs inside a Docker container, it uses some_docker_container_ip. That Docker address is not visible from outside, so the application won't work.
Spark has the spark.driver.host property, which is passed to the Master and the Workers. My initial instinct was to set it to the host machine's address so the cluster would address the visible machine instead.
Unfortunately, spark.driver.host is also used by the Driver to set up its server. Setting it to the host machine's address causes server startup errors, because a Docker container cannot bind ports on the host machine's address.
It seems like a lose-lose situation: I can use neither the host machine's address nor the Docker container's address.
Ideally, I would like to have two properties: a spark.driver.host-to-bind-to used to set up the driver's server, and a spark.driver.host-for-master which would be used by the Master and Workers. Unfortunately, it seems I'm stuck with only one property.
Another approach would be to use --net=host when running the Docker container. This approach has many disadvantages (e.g. other Docker containers cannot be linked to a container running with --net=host and must be exposed outside the Docker network), and I would like to avoid it.
Is there any way I could solve the driver-addressing problem without exposing the docker containers?
This problem is fixed in https://github.com/apache/spark/pull/15120.
It will be part of the Apache Spark 2.1 release.
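For reference, that change introduced a separate bind address, so the driver can bind inside the container while advertising an address the cluster can reach. Roughly, in spark-defaults.conf (the host IP and port numbers below are placeholders):

# address the driver binds to inside the container
spark.driver.bindAddress   0.0.0.0
# placeholder: host machine address advertised to the Master and Workers
spark.driver.host          203.0.113.10
# fixed ports so they can be published from the container
spark.driver.port          7078
spark.driver.blockManager.port   7079

The container would then publish ports 7078 and 7079 on the host.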