Auto scaling resources for Foxx/ArangoDB on Mesos

Is it possible to autoscale Foxx and ArangoDB separately and independently of each other, in lieu of trying to strike a balance, and reliably autoscale the right amount of RAM/storage/CPU? A simple answer on whether it's a good idea to try to autoscale the deployment at all is good enough.

You are not very specific about what you mean by saying "scaling ArangoDB".
In general, you can add more DB server nodes (primaries) independent of the number of coordinator nodes, if that is what you're asking. Foxx is executed on coordinators in a cluster. Your data is stored on DB servers. The cluster configuration is managed by agent nodes. An agency count of 3 is recommended (it must always be an odd number).

Yes, you can scale the different roles of an ArangoDB cluster independently to serve different use cases best (Agents, Coordinators, DBservers).
If you are using Foxx a lot, you can increase the number of coordinator instances, which is where the Foxx services live. On Mesos you should be able to do this via the ArangoDB web UI -> Nodes: in the upper right corner there is a button for Coordinators and one for DBservers. Just click the "+" sign and a new coordinator will be started.

Related

How does peer-to-peer architecture work in Cassandra?

How does the peer-to-peer Cassandra architecture really work? I mean:
When a request hits the cluster, it must hit some machine based on an IP, right?
So which machine will it hit first? One of the nodes, or something in the cluster that is responsible for balancing and redirecting the request to the right node?
Could you describe what that is? And how does this differ from the Master/Followers architecture?
For the purposes of my answer, I will use the Java driver as an example since it is the most popular.
When you connect to a cluster using one of the drivers, you need to configure it with details of your cluster, including:
Contact points - the entry point to your cluster which is a comma-separated list of IPs/hostnames for some of the nodes in your cluster.
Login credentials - username and password if authentication is enabled on your cluster.
SSL/TLS certificate and credentials - if encryption is enabled on your cluster.
When your application starts, a control connection is established with the first available node in the list of contact points. The driver uses this control connection for admin tasks such as:
get topology information about the cluster including node IPs, rack placement, network/DC information, etc
get schema information such as keyspaces and tables
subscribe to metadata changes including topology and schema updates
When you configure the driver with a load-balancing policy (LBP), the policy will determine which node the driver will pick as the coordinator for each and every single query. By default, the Java driver uses a load balancing policy which picks nodes in the local datacenter. If you don't specify which DC is local to the app, the driver will set the local DC to the DC of the first contact point.
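For illustration, a minimal connection sketch with the DataStax Java driver 4.x might look like this (the contact points, datacenter name, and credentials are placeholders):

    import java.net.InetSocketAddress;
    import com.datastax.oss.driver.api.core.CqlSession;

    public class ConnectExample {
        public static void main(String[] args) {
            // Contact points are only the entry point; the driver discovers the rest of the
            // cluster over the control connection described above.
            try (CqlSession session = CqlSession.builder()
                    .addContactPoint(new InetSocketAddress("10.0.0.1", 9042))  // placeholder node IP
                    .addContactPoint(new InetSocketAddress("10.0.0.2", 9042))  // placeholder node IP
                    .withLocalDatacenter("dc1")                                // DC the default LBP treats as local
                    .withAuthCredentials("app_user", "app_password")           // only if authentication is enabled
                    .build()) {
                System.out.println("Connected to: " + session.getMetadata().getClusterName().orElse("unknown"));
            }
        }
    }

Every query executed on this session then goes through the load-balancing policy, which picks the coordinator as described below.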
Each time a driver executes a query, it generates a query plan or a list of nodes to contact. This list of nodes has the following characteristics:
A query plan is different for each query to balance the load across nodes in the cluster.
A query plan only lists available nodes and does not include nodes which are down or temporarily unavailable.
Nodes in the local DC are listed first and if the load-balancing policy allows it, remote nodes are included last.
The driver tries to contact each node in the query plan in the order they are listed. If the first node is available then the driver uses it as the coordinator. If the first node does not respond (for whatever reason), the driver tries the next node in the query plan and so on.
Finally, all nodes are equal in Cassandra. There is no active-passive, no leader-follower, no primary-secondary and this makes Cassandra a truly high availability (HA) cluster with no single point-of-failure. Any node can do the work of any other node and the load is distributed equally to all nodes by design.
If you're new to Cassandra, I recommend having a look at datastax.com/dev which has lots of free hands-on interactive learning resources. In particular, the Cassandra Fundamentals learning series lets you learn the basic concepts quickly.
For what it's worth, you can also use the Stargate.io data platform. It allows you to connect to a Cassandra cluster using APIs you're already familiar with. It is fully open-source so it's free to use. Here are links to the Stargate tutorials on datastax.com/dev: REST API, Document API, GraphQL API, and more recently gRPC API. Cheers!
Working with Cassandra, we have to remember two very important things: data is partitioned (split into chunks) and data is replicated (each chunk is stored on a few different servers). Partitioning is needed for scalability purposes while Replication serves High Availability. Given that Cassandra is designed to handle petabytes of data under huge pressure (dozens of millions of queries per second), and there is no single server able to handle such a load, each cluster server is responsible only for a range of the data, not for the whole dataset. A node storing the data you need for a particular query is called a "replica node". Notice that different queries will have different replica nodes.
Together, this brings a few implications:
We have to reach multiple servers during a single query to ensure the data is consistent (read) / to write the data to all responsible servers (write).
How do we know which node is right for that particular query? What happens if a query hits a "wrong" node? How do we configure the application so it sends queries to the replica nodes?
Funnily enough, as a developer you have to do one and only one thing: understand partitions and partition keys, and then Cassandra will take care of all the potential issues. Simple as that. When you design a table, you have to declare the partition keys, and the data placement will be based on them - automagically. The next thing: you always have to specify the partition keys when doing your queries. That's it, your job is done, get yourself some coffee!
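As a hypothetical illustration (the shop keyspace, table, and column names are made up and assumed to exist), declaring the partition key in the table and then always supplying it in queries is the whole job, here via the Java driver:

    import java.util.UUID;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.ResultSet;
    import com.datastax.oss.driver.api.core.cql.Row;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;

    public class PartitionKeyExample {
        // 'session' would be a CqlSession connected as in the earlier sketch.
        static void run(CqlSession session, UUID someUserId) {
            // The partition key (user_id) decides which replica nodes store each row.
            session.execute(
                "CREATE TABLE IF NOT EXISTS shop.orders_by_user ("
              + " user_id uuid, order_id timeuuid, total decimal,"
              + " PRIMARY KEY ((user_id), order_id))");

            // Always restrict queries by the partition key so Cassandra can route them to the replicas.
            ResultSet rs = session.execute(SimpleStatement.newInstance(
                "SELECT order_id, total FROM shop.orders_by_user WHERE user_id = ?", someUserId));
            for (Row row : rs) {
                System.out.println(row.getUuid("order_id") + " -> " + row.getBigDecimal("total"));
            }
        }
    }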
Meanwhile, Cassandra starts her job. Cassandra nodes are smart: they know the data placement, they know which servers are responsible for the data you are writing, and they know the partitions - in Cassandra language this is called token-aware. It does not matter which server receives the query, as literally every server is able to answer it. Any node that gets the request (it's called the query coordinator, because it coordinates the query operations) will find the replica nodes based on the placement of the partitions. With that, the query coordinator will execute the query, making the proper calls to the replicas - the coordinator knows which nodes to ask because you did your part of the job and specified the partition key value in the query, which is used for the routing.
In short, you can ask any of your cluster nodes to write/read your data, Cassandra is decentralized and you'll get it done. But how do we make it better and get directly to the replica to avoid bothering nodes that don't store our data?
So which machine will it hit first?
The journey of a request starts much earlier than we might think: when your application starts, the Cassandra driver connects to the cluster and reads information about data placement - which partition is stored on which nodes. This means the driver knows which node has to be contacted for different queries. You got it right, the driver is token-aware too!
Token-aware drivers understand data placement and will route a query to a proper replica node. Answering the question: under normal circumstances, your query will first hit one of the replica nodes; this node will get the answers or write the data to the other replica nodes, and that's it, we are good. In some rare situations, your query may hit a "wrong" non-replica server, but it doesn't really matter, as it will also do the job, with just a minor delay - for example, if your Replication Factor = 3 (you have three replicas) and your query gets to a "wrong" node, it will have to ask all three replicas, while hitting the "right" one still requires 2 network operations. It's not a big deal though, as all the operations are done in parallel.
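In the Java driver, for instance, token-aware routing is the default behaviour; preparing a statement lets the driver compute the routing token from the bound partition key value. A small sketch, reusing the hypothetical shop.orders_by_user table from above:

    import java.util.UUID;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.BoundStatement;
    import com.datastax.oss.driver.api.core.cql.PreparedStatement;

    public class TokenAwareExample {
        static void run(CqlSession session, UUID someUserId) {
            // The prepared statement metadata tells the driver which bind variable is the partition key,
            // so it can hash it to a token and contact a replica node directly.
            PreparedStatement ps = session.prepare(
                "SELECT order_id, total FROM shop.orders_by_user WHERE user_id = ?");
            BoundStatement bound = ps.bind(someUserId);
            session.execute(bound);
        }
    }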
How does this differ from the Master/Followers architecture?
With a leader/follower architecture, you can read from any server, but you can write only to the leader server, which gives you two issues:
Your app needs to know who the leader is (or you need to have a special proxy)
Single Point of Failure (SPoF) - if the leader is down, you can't write to the DB at all
With Cassandra's peer-to-peer architecture you can write to any of the cluster nodes, even if there are thousands of them. Of course, there is no SPoF.
P.S. Cassandra is an extremely powerful technology, but great power comes with great responsibility: it's quite complex too. If you plan to work with it, you had better invest some time in learning to use it properly. I suggest taking the Developer Path on academy.datastax.com (it's free!) or at least watching the DataStax "Intro to Cassandra" workshop.
It depends on the driver that you used to connect to the Cassandra cluster. Again, all nodes in the datacenter are one and the same. The driver will connect to any of the nodes in the local datacenter that you have provided in the driver configs, based on the contact points configuration (i.e. datastax-java-driver.basic.contact-points in the Java driver).
For example, the Java driver (and most drivers' logic will be the same) uses system.peers.rpc_address to connect to newly discovered nodes. For special network topologies, an address translation component can be plugged in via advanced.address-translator in the configuration. There is none by default; an EC2-specific translator is also available (for deployments that span multiple regions), or you can write your own.
Each node in the Cassandra cluster is uniquely identified by an IP address that the driver will use to establish connections.
for contact points, these are provided as part of configuring the CqlSession object;
for other nodes, addresses will be discovered dynamically, either by inspecting system.peers on already connected nodes, or via push notifications received on the control connection when new nodes are discovered by gossip.
More info can be found here.
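If you're curious, you can look at the same discovery metadata yourself; a quick sketch querying system.peers through the Java driver (column names as found in recent Cassandra versions):

    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.cql.Row;

    public class PeersExample {
        static void run(CqlSession session) {
            // The same table the driver inspects to discover the other nodes of the cluster.
            for (Row row : session.execute("SELECT peer, data_center, rack, rpc_address FROM system.peers")) {
                System.out.println(row.getInetAddress("peer") + " dc=" + row.getString("data_center")
                        + " rack=" + row.getString("rack") + " rpc_address=" + row.getInetAddress("rpc_address"));
            }
        }
    }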
It seems you are asking how exactly Cassandra selects which node gets hit with data and which ones don't.
There are two sides to this: the client and the servers
On the client
When a CQL connection is established, the client (if implemented in the client library and configured) usually also retrieves the topology from the cluster. The topology is the information about token ownership inside the ring, as well as information about quorums, etc.
So, thanks to consistent hashing of the partition keys in Cassandra, the client itself can already decide for the next request which node to contact for a certain piece of data. The client is aware of which node would be the right choice to contact.
But the client can still choose not to use this information and just send the request to any node of the ring - the nodes will then forward the requests to the appropriate token owners (see the next section).
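With the Java driver you can even perform that lookup yourself from the client-side metadata; a hedged sketch (the keyspace and key reuse the hypothetical example above, and the token map is only present when token metadata is enabled, which it normally is by default):

    import java.nio.ByteBuffer;
    import java.util.UUID;
    import com.datastax.oss.driver.api.core.CqlIdentifier;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.metadata.Node;
    import com.datastax.oss.driver.api.core.metadata.TokenMap;
    import com.datastax.oss.driver.api.core.type.codec.TypeCodecs;

    public class ReplicaLookupExample {
        static void run(CqlSession session, UUID someUserId) {
            // The token map mirrors the ring topology the client learned when it connected.
            TokenMap tokenMap = session.getMetadata().getTokenMap()
                    .orElseThrow(() -> new IllegalStateException("token metadata disabled"));
            // Serialize the partition key the same way the driver would, then ask who owns it.
            ByteBuffer key = TypeCodecs.UUID.encode(someUserId, session.getContext().getProtocolVersion());
            for (Node replica : tokenMap.getReplicas(CqlIdentifier.fromCql("shop"), key)) {
                System.out.println("Replica: " + replica.getEndPoint());
            }
        }
    }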
In the Cluster
The same applies to the nodes themselves. If a client sends a request to a node, that node will simply look up the owner nodes in its topology table and forward the request to exactly the nodes that own this token.
It will always forward it to all of them so the data stays consistent across the cluster. Depending on the consistency level, it will return a success response to the client once the required number of replicas has acknowledged the write (e.g. LOCAL_QUORUM with RF=3 will return a success response when 2 nodes acknowledge the receipt while the 3rd node is still pending).
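For instance, with the Java driver the required acknowledgement level is chosen per statement (or globally in the driver configuration); a small sketch, again against the hypothetical shop.orders_by_user table:

    import java.util.UUID;
    import com.datastax.oss.driver.api.core.CqlSession;
    import com.datastax.oss.driver.api.core.DefaultConsistencyLevel;
    import com.datastax.oss.driver.api.core.cql.SimpleStatement;

    public class ConsistencyExample {
        static void run(CqlSession session, UUID someUserId) {
            // With RF=3, LOCAL_QUORUM means the coordinator reports success once 2 replicas acknowledge.
            SimpleStatement insert = SimpleStatement
                .newInstance("INSERT INTO shop.orders_by_user (user_id, order_id, total) VALUES (?, now(), 12.5)",
                        someUserId)
                .setConsistencyLevel(DefaultConsistencyLevel.LOCAL_QUORUM);
            session.execute(insert);
        }
    }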
If a node is detected as down or can't be reached, the command that would have been sent to it is saved in the local hints table - a buffer that keeps all the operations that haven't been successfully delivered to other nodes.
You can read more on Hints in the Cassandra Docs
Compared to a leader/follower architecture, the Cassandra model is actually simpler and depends mostly on all involved nodes seeing all the mutation commands happening to the data they "own" via the tokens.

Is it possible to isolate spark cluster nodes for each individual application

We have a Spark cluster comprising 16 nodes. Is it possible to limit nodes 1 & 2 to application 'A'; nodes 3, 4, 5 to application 'B'; nodes 10, 11, 12, 15 to application 'C'; and so on?
From the documentation, I understand that we can set some properties to control Spark executor cores, the number of executors to be launched, memory, etc. But I am curious to know if I can achieve the above use case.
One obvious way to do that is to configure 3 different clusters with the desired topology; otherwise you're out of luck, as Spark does not have any provision for this,
because it is usually a bad idea and generally against the design principles of Spark and clustering in general. Why? If you assign application A to specific hosts but it sits idle while application B is running at 100%, you have 2 idle hosts that could be working for B, so you would be wasting costly computing resources. Usually, what you want is to assign a certain number of resources per application and let the system decide how to allocate them (scheduling: plain Spark is pretty elementary, but running under YARN or Mesos you can be more sophisticated).
Another reason why it's a bad idea is that you don't want rules that specify a specific host or set of hosts. What if you assign nodes 1 & 2 to application A and they both go down? Besides not using your resources efficiently, tying your app to specific hosts also makes it difficult to make it resilient to failure by rescheduling it on other hosts.
You may have other ways to do something similar, though: if you're running Spark under YARN or Mesos, you can define queues or quotas and limit the amount of resources that each application can use at a given time.
In general, it depends on why you want to statically allocate resources to applications. If it's for resource management, you should instead look at schedulers and queues. If it's for security, you should have multiple clusters, keeping in mind that you'd be losing some performance.
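As a hedged sketch of the queue/quota approach, a Java Spark application submitted to YARN could cap its own share rather than pin itself to hosts (the queue name and sizes are made up, and the queue itself would have to be defined in the YARN scheduler config):

    import org.apache.spark.sql.SparkSession;

    public class ApplicationA {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("application-A")
                .config("spark.executor.instances", "4")   // cap how many executors this app may hold
                .config("spark.executor.cores", "2")
                .config("spark.executor.memory", "4g")
                .config("spark.yarn.queue", "team-a")      // hypothetical YARN queue with its own capacity limit
                .getOrCreate();

            spark.range(1000).count();                     // trivial job just to exercise the session
            spark.stop();
        }
    }

The scheduler then decides which of the 16 hosts actually run those executors, which keeps the cluster busy and the application resilient to node failures.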

Selecting a node size for a GKE kubernetes cluster

We are debating the best node size for our production GKE cluster.
Is it better to have more smaller nodes or less larger nodes in general?
e.g. we are choosing between the following two options
3 x n1-standard-2 (7.5GB 2vCPU)
2 x n1-standard-4 (15GB 4vCPU)
We run on these nodes:
Elastic search cluster
Redis cluster
PHP API microservice
Node API microservice
3 x separate Node / React websites
Two things to consider in my opinion:
Replication:
Services like Elasticsearch or Redis Cluster / Sentinel are only able to provide reliable redundancy if there are enough Pods running the service: if you have 2 nodes and 5 Elasticsearch Pods, chances are 3 Pods will be on one node and 2 on the other, so your maximum replication will be 2. If you happen to have 2 replica Pods on the same node and it goes down, you lose the whole index.
[EDIT]: if you use persistent block storage (this is best for persistence but is complex to set up, since each node needs its own block, which makes scaling tricky), you would not 'lose the whole index', but this is true if you rely on local storage.
For this reason, more nodes is better.
Performance:
Obviously, you need enough resources. Smaller nodes have fewer resources, so if a Pod starts getting lots of traffic, it will reach its limit more easily and Pods will be evicted.
Elasticsearch is quite a memory hog. You'll have to figure out whether running all these Pods requires bigger nodes.
In the end, as your needs grow, you will probably want to use a mix of nodes of different capacities, which in GKE will have labels for capacity that can be used to set resource quotas and limits for memory and CPU. You can also add your own labels to ensure certain Pods end up on certain types of nodes.

What's the best way to mirror a live Cassandra cluster for analytics tasks?

Assuming a live cluster with several DCs, what's the best way to set up some nodes that are dedicated to analytic queries?
Analytics nodes will be hosted in a separate (routed) network and must not write any data back to the production nodes. They also must not count towards any consistency level (CL). This especially applies to EACH_QUORUM, which will be used for some writes. Analytics nodes may be offline at any time.
All solutions I've looked into seem to have their own drawbacks.
1) Take snapshots on production and transfer to independent analytics cluster
Significant update delay
IO intensive either on network or disk (e.g. rsync)
Lots of duplicate data due to different replication factors (3:1 prod. vs analytics)
Mismatch in SSTable row ranges and cluster topology on analytics cluster may require to use sstableloader
2) Use write survey mode to establish read-only nodes
Not 100% sure how this could be done for setting up multiple survey nodes to cover the whole ring
Queries can only be executed against each node locally, as they cannot be part of a coordinated execution
3) Add regular DC dedicated for analytics
EACH_QUORUM will fail in case the analytics DC is not available
Queries on production should not be served from analytics
Would require a way to prevent users on analytics to be able to execute queries or updates on production
Any other options or existing tools that could be used?

Minimizing Hazelcast chatter and overhead

Our app needs to run both on a large number of machines and on a single standalone machine. It has three distinct clusters, each performing a mostly isolated function. Cluster A is the main one, and clusters B & C are independent, but they both need access to a map in A to know where to route requests. Access needs to be super-fast.
Which setup should I choose?
Each cluster has its own Hazelcast instance. Clusters B & C are also lite members of the A instance.
Each cluster has its own Hazelcast instance. Clusters B & C use a Hazelcast client to talk to A.
One giant instance for all clusters.
I'm concerned about chatter and overhead as the clusters get larger, to potentially hundreds of machines. Which setup is most scalable?
Also, is there a writeup anywhere which details the messages that Hazelcast passes around? I'd like to know exactly what happens when a key gets added or removed, for example.
Try to avoid the lite-member setup (1), as it is harder to maintain clusters with lite members.
If all these machines/nodes are on the same local network and the number of nodes is around 50, you can go with (3), all in one cluster. Otherwise I would go with (2), as you can scale clients really well and they are very lightweight.
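For option (2), a minimal sketch of a node in cluster B reading the routing map owned by cluster A could look like the following (the cluster name, address, and map name are placeholders, and the exact config API differs slightly between Hazelcast 3.x and 4.x/5.x - this uses the 4.x/5.x style):

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.map.IMap;

    public class RoutingLookup {
        public static void main(String[] args) {
            ClientConfig config = new ClientConfig();
            config.setClusterName("cluster-a");                      // placeholder name of cluster A
            config.getNetworkConfig().addAddress("10.0.0.10:5701");  // placeholder member of cluster A
            HazelcastInstance client = HazelcastClient.newHazelcastClient(config);

            // The client holds no partition data itself and only talks to cluster A's members,
            // so it adds no cluster-membership chatter to A.
            IMap<String, String> routing = client.getMap("routing-map");
            System.out.println("Route for key42: " + routing.get("key42"));

            client.shutdown();
        }
    }

If reads need to be super-fast, configuring a client near cache on that map is also worth evaluating.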
