I am learning Kubernetes and looking for suggestions about deploying my application.
My application background:
Backend: NodeJS
Frontend: ReactJS
Database: MongoDB (just started by running mongod, rather than using a MongoDB cloud service)
I already know how to use Docker Compose to deploy the application on a single node.
Now I want to deploy the application with Kubernetes (3 nodes).
So how do I deploy MongoDB and make sure the MongoDB data is synchronized across the 3 nodes?
I have researched this and I am confused by some of the keywords.
E.g. Deploy a Standalone MongoDB Instance,
StatefulSet, ...
Are these articles suitable for my situation, or do you know of other information about this? Thanks!
You can install MongoDB using this Helm chart.
You can start the MongoDB chart in replica set mode with the following parameter: replicaSet.enabled=true
Some characteristics of this chart are:
Each of the participants in the replication has a fixed StatefulSet (and therefore a stable network identity), so you always know where to find the primary, secondary or arbiter nodes.
The number of secondary and arbiter nodes can be scaled out independently.
It is easy to move an application from using a standalone MongoDB server to using a replica set.
See here for configuration and installation details.
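For example, a minimal values override for replica set mode could look like the sketch below. The key names are taken from chart versions that expose replicaSet.enabled; newer bitnami/mongodb releases use architecture: replicaset instead, so check the README of the chart version you install.

```yaml
# values-mongodb.yaml (hypothetical file name) - sketch only, verify keys against your chart version
replicaSet:
  enabled: true        # run MongoDB as a replica set instead of a standalone server
  replicas:
    secondary: 2       # data-bearing secondary members
    arbiter: 1         # arbiter votes in elections but stores no data
persistence:
  enabled: true        # back each member with a PersistentVolumeClaim
  size: 8Gi
```

You would then install it with something like helm install mongodb bitnami/mongodb -f values-mongodb.yaml, and the chart creates the StatefulSets and services for the members.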
You can create Helm charts for your apps for deployment:
Create a Dockerfile for your app; make sure you copy the build that was created using npm run build.
Push the image to Docker Hub or any other registry like ACR or ECR.
Add the image tags in the Helm deployments and pass the values from values.yaml (see the sketch after this list).
For the MongoDB deployment, use this chart: https://github.com/bitnami/charts/tree/master/bitnami/mongodb
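A rough sketch of how the image details can live in values.yaml and be referenced from the Deployment template (registry and repository names here are placeholders, not from the question):

```yaml
# values.yaml (sketch; names are placeholders)
backend:
  image:
    repository: myregistry/node-backend    # the image you pushed in the previous step
    tag: "1.0.0"
frontend:
  image:
    repository: myregistry/react-frontend
    tag: "1.0.0"

# templates/deployment.yaml would then reference it like:
#   image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag }}"
```

Changing the tag in values.yaml (or with --set backend.image.tag=...) is then enough to roll out a new version.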
I'm using a Bitnami Helm chart for Cassandra in order to deploy it with Terraform. I'm quite new to all of this, and I'm struggling to change one config value, namely commitlog_segment_size_in_mb. I want to set it before I run the Terraform commands, but I failed to find any mention of it in the Helm chart itself.
I know I can change it in the cassandra.yaml file after the Terraform deployment, but I would like this value to be controlled in configuration, so that another Terraform update does not overwrite the file.
What would be the best approach to changing values in the Cassandra config?
Can I modify it in Terraform if it's not in the Helm Chart?
Can I export parts of the configuration to a different file, so that I know my next Terraform installations will not overwrite them?
This isn't a direct answer to your question, but in case you weren't already aware of it, K8ssandra.io is a ready-made platform for running Apache Cassandra in Kubernetes. It uses Helm charts to deploy Cassandra with the DataStax Cassandra Operator (cass-operator) under the hood, with all the tools built in:
Reaper for automated repairs
Medusa for backups and restores
Metrics Collector for monitoring with Prometheus + Grafana
Traefik templates for k8s cluster ingress
Stargate.io - a data gateway for connecting to Cassandra using REST API, GraphQL API and JSON/Doc API
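As a hedged illustration of how those components are toggled, a K8ssandra values file might look roughly like this (key names assumed from a K8ssandra 1.x values layout; verify them against the chart version you actually install):

```yaml
# values.yaml for the k8ssandra chart (sketch; keys assumed, check your chart version)
cassandra:
  version: "3.11.10"       # Cassandra version managed by cass-operator
stargate:
  enabled: true            # REST / GraphQL / Document API gateway
reaper:
  enabled: true            # automated repairs
medusa:
  enabled: false           # backups and restores (needs object storage configured)
kube-prometheus-stack:
  enabled: true            # Prometheus + Grafana monitoring
```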
K8ssandra and all components are fully open-source and free to use, improve and enjoy. Cheers!
After months of research, we are here, hoping someone has an insight into this issue:
On a GKE cluster, our pods (Node.js) are having trouble connecting to our external Oracle business database.
To be more precise, ~70% of our connection attempts end in this error:
ORA-12545: Connect failed because target host or object does not exist
The remaining 30% work well and don't reset or end prematurely. Once a connection is established, everything is fine from there.
Our stack:
Our flows are handled by containers based on a node:12.15.0-slim image, to which we add libaio1 and an Oracle Instant Client (v12.2). We use oracledb v5.0.0 as the Node module.
We use CronJob pods to run our Node containers, behind a ClusterIP service, on a GKE cluster (1.16.15-gke.4300).
Our external Oracle database is on a private network (to which our cluster has access), running Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit, behind a load balancer.
I can give more detail if needed.
What we have already tried:
We tried going directly to the database, bypassing the load balancer: no effect.
We had a CronJob pod ping the database server every minute for a day: no errors, even though the flow pods were still hitting the ORA-12545 error.
We reworked all our code, connecting to the database differently and updating our oracledb Node module (v4 to v5): no effect.
We monitored the load on the Oracle database and spread our flows over the whole night instead of a 1-hour window: no effect.
We had our own Kubernetes cluster before GKE, directly in our private network, and it caused exactly the same error.
We had an audit by Kubernetes experts, and they did not find the issue or see anything critical in our cluster/k8s configuration.
What works:
All our other pods (some querying a MySQL database, microservices, the web front end) are working fine.
All our business tools (dozens of them, including Talend and some custom software) use the Oracle database without issue.
Our own flow-handling Node containers work fine with the Oracle database as long as they run in a Docker environment, not a Kubernetes one.
To summarize: we have a mysterious issue when trying to connect to an Oracle database from a Kubernetes environment, where pods are randomly unable to reach the database.
We are looking for any hint we can get.
I am developing a Kubernetes Helm chart for deploying a Python application. The Python application has a database it needs to connect to.
I want to run the database scripts that create the DB, create users, create tables, alter columns, or run any other SQL script. I was thinking this could run as an initContainer, but that is not a recommended approach, since it would run every time, even when there are no DB scripts to run.
Below is the solution I am looking for:
Create a Kubernetes Job to run the scripts, which would connect to the Postgres DB and run the scripts from files. Is there a way for a Kubernetes Job to connect to the Postgres service and run the SQL scripts?
Please suggest any good approach for running SQL scripts in Kubernetes that we can also monitor through the pod.
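Something along these lines is what I have in mind (a rough sketch only; names such as postgres-service, db-scripts and app_user are placeholders, not an existing setup):

```yaml
# Sketch of a Job that runs a *.sql file against an in-cluster Postgres service.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: run-sql
          image: postgres:13                 # used only for the psql client
          command: ["psql"]
          args: ["-h", "postgres-service", "-U", "app_user", "-d", "app_db", "-f", "/scripts/init.sql"]
          env:
            - name: PGPASSWORD               # psql reads the password from this variable
              valueFrom:
                secretKeyRef:
                  name: postgres-credentials
                  key: password
          volumeMounts:
            - name: db-scripts
              mountPath: /scripts
      volumes:
        - name: db-scripts
          configMap:
            name: db-scripts                 # ConfigMap holding the *.sql files
```

The Job status and kubectl logs job/db-migration would then cover the monitoring part.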
I would recommend simply using the 'postgresql' sub-chart along with your newly developed app Helm chart (check here for how to use it, in the section called "Use of global variables").
It uses the concept of 'initContainers' instead of a Job, letting you initialize a user-defined database schema/configuration at startup from custom *.sql scripts.
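As a concrete sketch of that idea, the values for the sub-chart could look like the following. I am assuming the bitnami postgresql chart's initdbScripts value, which runs the scripts once against an empty data directory; the exact key names differ between chart versions (newer releases use auth.username / auth.database), so check the chart's README.

```yaml
# values.yaml of your app chart (sketch); 'postgresql' is the sub-chart's name/alias
postgresql:
  postgresqlUsername: app_user     # assumed key; newer chart versions use auth.username
  postgresqlDatabase: app_db       # assumed key; newer chart versions use auth.database
  initdbScripts:
    01-schema.sql: |
      CREATE TABLE IF NOT EXISTS customers (
        id   SERIAL PRIMARY KEY,
        name TEXT NOT NULL
      );
```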
I am trying to connect to Cassandra, which is inside a Docker container, from a Node.js application that is in another Docker container.
My question is: what is the best way to do it?
So far I am able to create both of the containers using docker-compose.
There are many tutorials on connecting Docker containers. See:
https://deis.com/blog/2016/connecting-docker-containers-1/
https://docs.docker.com/engine/userguide/networking/default_network/container-communication/
https://docs.docker.com/engine/userguide/networking/
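Since you are already using docker-compose, the short version is that both containers just need to share a network, and the application then reaches Cassandra by its service name rather than localhost. A minimal sketch (service and image names are illustrative):

```yaml
# docker-compose.yml (sketch)
version: "3.8"
services:
  cassandra:
    image: cassandra:3.11
    ports:
      - "9042:9042"                          # expose the CQL port to the host (optional for container-to-container traffic)
  app:
    build: ./app                             # your Node.js application
    environment:
      CASSANDRA_CONTACT_POINT: cassandra     # resolves via Compose's default network DNS
    depends_on:
      - cassandra
```

Inside the app container, the Cassandra driver would be pointed at the hostname cassandra (the service name), not 127.0.0.1.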
I am trying to evaluate Couchbase's performance on multiple nodes. I have a client that generates data for me based on some schema (currently for 1 node, locally). But I want to know how I can horizontally scale Couchbase and how that works. For example, if I have multiple machines, or AWS instances, or Windows Azure, how can I configure Couchbase to shard the data so that I can then evaluate its performance across multiple nodes? Any suggestions and details on how I can do this?
I am not (yet) familiar with Azure, but you can find a very good white paper about Couchbase on AWS:
Running Couchbase on AWS
Let's talk about the cluster itself; you just need to:
install Couchbase on multiple nodes
create a "cluster" on one of then
then you simply have to add other nodes to the cluster and rebalance.
I have created an Ansible script that uses exactly these steps to create a cluster from the command line; see
Create a Couchbase cluster with Ansible
Once you have done that, your application will leverage all the nodes automatically, and you can add/remove nodes as you need.
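For reference, the command-line steps behind such a script look roughly like the Ansible-style sketch below. The couchbase-cli flags have changed between Couchbase Server versions and the host variables here are placeholders, so treat this as a sketch and check the couchbase-cli documentation for your version.

```yaml
# Ansible-style sketch of the cluster steps (cluster_ip / new_node_ip are placeholder variables)
- name: Initialize the cluster on the first node
  command: >
    couchbase-cli cluster-init -c 127.0.0.1:8091
    --cluster-username Administrator --cluster-password password
    --cluster-ramsize 1024

- name: Add another node to the cluster
  command: >
    couchbase-cli server-add -c {{ cluster_ip }}:8091
    -u Administrator -p password
    --server-add {{ new_node_ip }}:8091
    --server-add-username Administrator --server-add-password password

- name: Rebalance so the data (vBuckets) is redistributed across all nodes
  command: >
    couchbase-cli rebalance -c {{ cluster_ip }}:8091
    -u Administrator -p password
```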
Finally, if you want to learn more about the Couchbase architecture and how sharding, failover, data consistency, and indexing work, I invite you to look at this white paper:
Couchbase Server: An Architectural Overview