Nginx multiple load balancers or single load balancer [closed] - node.js

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
When deploying an application with multiple tiers, is it preferable to have individual Nginx load balancers for the API and Web servers? Or a single LB serving both the API and Web servers?

I would go with the simpler solution of a single load balancer until it's clear that they need to be separate.
If Nginx is the load balancer, you can give the "web" backend and the "api" backend separate logging and configuration within a single instance.
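For example, a single Nginx instance can route to both tiers with separate upstream pools and access logs; the names, addresses, and ports below are illustrative:

```nginx
# One load balancer, two backend pools; hosts and ports are examples.
upstream web_backend { server 10.0.0.11:3000; server 10.0.0.12:3000; }
upstream api_backend { server 10.0.0.21:4000; server 10.0.0.22:4000; }

server {
    listen 80;

    location /api/ {
        proxy_pass http://api_backend;
        access_log /var/log/nginx/api.access.log;  # api-specific log
    }

    location / {
        proxy_pass http://web_backend;
        access_log /var/log/nginx/web.access.log;  # web-specific log
    }
}
```

Splitting into two load balancers later is then mostly a matter of moving one `upstream`/`location` pair to its own server.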

There is a lot to consider when load balancing. With Node I personally use pm2 in cluster mode (for machine-local clusters) and Nginx as the overall load balancer (and static host).
Remember that when load balancing, depending on the app, sessions and communication between the nodes require shared infrastructure (Redis, MongoDB).
pm2 (locally) can deploy a Node app to each CPU core and manage load balancing in one command: pm2 start app.js -i 4. This setup can be repeated across multiple machines.

Related

For a small production environment, is it better to use a masters-only k8s cluster or a mini k8s solution? [closed]

Closed 7 days ago.
I have a small air-gapped production environment with only three Linux servers (CentOS or RHEL).
I want to deploy a small k8s cluster on them.
I have two approaches for now:
Installing a pure k8s cluster with only master nodes and untainting them from NoSchedule to run all pods on them.
Installing a mini-cluster solution such as k3s, k0s, or MicroK8s and configuring all nodes as both masters and workers.
If I use the first approach (I know it's bad practice), is untainting the correct way to run pods on masters?
If I use the second, which is the best and easiest to install and maintain across different air-gapped environments? (I have used k8s and OKD 3 in production, but not these.)
Lastly, which of the two do you think is the best approach, or is there a better one for my scenario?
Thanks in advance for the help
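For reference, the untainting in the first approach is usually done with kubectl taint nodes --all node-role.kubernetes.io/control-plane- (older releases use the master taint key instead). A narrower alternative is to leave the taint in place and let only selected pods tolerate it; a minimal pod-spec fragment using the standard Kubernetes fields:

```yaml
# Pod-spec fragment: tolerate the standard control-plane NoSchedule taint
# instead of removing it cluster-wide.
tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"
```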

Cluster mode in nodejs using PM2 [closed]

Closed 4 years ago.
We are planning to use Kubernetes to deploy a Node application in an AWS clustered environment. I just need some advice on whether it's good practice to use the Node.js cluster module for a distributed deployment in AWS, or whether a single process per container is better.
It's really not about "good" or "bad".
Using PM2 would mean you'd ask Kubernetes for multiple CPUs for your pod.
Not using PM2 would mean you'd ask Kubernetes for one (or less) CPU for your pod, which would be easier for Kubernetes to schedule (possibly on multiple nodes).
Having one fat pod on one node is less reliable than having multiple smaller pods distributed across multiple nodes.
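The one-process-per-container option can be sketched as a plain Deployment, replacing pm2 -i 4 with replicas; the name, image, and resource sizes below are placeholders:

```yaml
# Sketch: many small single-process pods instead of one fat pm2 pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-app              # placeholder name
spec:
  replicas: 4                 # horizontal scaling instead of `pm2 -i 4`
  selector:
    matchLabels: { app: node-app }
  template:
    metadata:
      labels: { app: node-app }
    spec:
      containers:
      - name: node-app
        image: node-app:latest   # placeholder image
        resources:
          requests: { cpu: "500m" }  # small request -> easy to schedule
          limits:   { cpu: "1" }
```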
Hope this helps!

Big Data integration testing best practice [closed]

Closed 5 years ago.
I am looking for resources on best practices for an AWS-based data ingestion pipeline that uses Kafka, Storm, and Spark (streaming and batch), reading from and writing to HBase, with various microservices exposing the data layer. For my local environment I am thinking of creating either Docker or Vagrant images that will allow me to interact with the environment. My issue is how to stand up a functional end-to-end environment that is closer to prod; the brute-force way would be an always-on environment, but that gets expensive. Along the same lines, for a performance environment it seems like I might have to punt and have service accounts that get the "run of the world", while other accounts are limited in compute resources so they don't overwhelm the cluster.
I am curious how others have handled the same problem and if I am thinking of this backwards.
AWS also provides a Docker service via EC2 Container Service. If your local deployment using Docker images is successful, you can check out AWS ECS (https://aws.amazon.com/ecs/).
Also check out storm-docker (https://github.com/wurstmeister/storm-docker), which provides easy-to-use Dockerfiles for deploying Storm clusters.
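For the local Docker route, a stand-in for the Kafka side of the pipeline might be a small compose file like the sketch below; the images, versions, and settings are assumptions to adapt, not a production setup:

```yaml
# docker-compose.yml sketch for a local Kafka + ZooKeeper sandbox.
# Image names and environment values are illustrative.
version: "3"
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports: ["2181:2181"]
  kafka:
    image: wurstmeister/kafka
    ports: ["9092:9092"]
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_HOST_NAME: localhost
    depends_on: [zookeeper]
```

The same pattern extends to HBase and Storm containers, which keeps the "closer to prod" environment reproducible without being always-on.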
Try Hadoop mini clusters; they support most of the tools you are using.
Mini Cluster

Scaling MongoDB on OpenShift vs using MongoLab [closed]

Closed 8 years ago.
Coming from a "traditional" development background, I cringe whenever I see PaaS NoSQL offerings. The idea of hosting your data far from your application simply does not feel right. But PaaS providers like MongoLab are here and are seemingly very successful, so I think to myself: it must be working, and I should consider it.
I'm building an application using NodeJS and MongoDB and will be hosting it on OpenShift. Ideally, I would have both web servers and a Mongo cluster set up so that I can easily scale them horizontally... all hosted on OpenShift.
Does it make sense to host/scale Mongo on OpenShift? Or should I go with a PaaS like MongoLab?
UPDATE: I'm asking about the architectural reasons why one would choose to host data away from the app in a PaaS-type offering versus hosting it yourself on a service like OpenShift. The specific services listed here are irrelevant; the question applies equally to any other hosting service, NoSQL database, or PaaS provider.
MongoLab is actually a DBaaS (Database as a Service), not a PaaS, just for clarification.
The reasons for hosting a database offsite are similar to the reasons for hosting files offsite with, say, Amazon S3: you are looking for a service that specializes in what you are using it for. MongoLab specializes in MongoDB: sharding, replication, large data sets, etc. They would be a great provider if you need those services. If not, then the MongoDB instance on OpenShift should be fine; you can even use a scaled application to put it into its own gear, but we do not support sharding or replication for MongoDB at this time.

Which Amazon EC2 plan to choose for 60 concurrent node.js servers? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have a local web service which uses 60 running node.js servers. Now I want to move the web service over to Amazon EC2.
Which is the best plan to choose for running 60 concurrent node.js servers?
As I understand it, for optimal performance I need to run one node.js server per CPU core.
If I use the Cluster Compute Eight Extra Large Instance plan, which has 88 EC2 Compute Units, will there be performance issues?
This would totally depend on the performance constraints of your apps (i.e., are they memory-bound, CPU-bound, or IO-bound)?
Your best starting point is to look at your current server and find an equivalent EC2 instance type. Consider that if your current server uses higher-end hardware at all, performance will not be as good on virtualized EC2, which runs on commodity hardware, so you may want to go with somewhat more RAM, CPU, etc. than you currently have.
Of course, the above advice is based on the use of a single server. IMO, that is really not the use case EC2 was created to solve. To truly take advantage of the EC2 infrastructure, you should think about how to scale your services horizontally. In some cases it can be more cost-effective to have a fleet of lower-priced instances performing your work rather than a single monolithic larger instance.
