I'm reading up on DHTs, but I'm struggling to find information on what the consequences are for DHT values when a node fails.
As far as I understand, without redundancy of data (hash table values) the failure of a single node would simply make the values stored in that node unavailable. But if I wanted to use DHTs as storage for any system, I would like that system to be able to rely on the availability of all storage at any time, right? Maybe data redundancy is outsourced as an independent problem here, but then the decentralization of a DHT would introduce additional points of failure, which seems like a huge downside of DHTs.
So how are values kept accessible, if the node responsible for those values fails?
As far as I understand, without redundancy of data (hash table values) the failure of a single node would simply make the values stored in that node unavailable.
That is tautological. Yes, if you choose no redundancy then there is no redundancy.
But if I wanted to use DHTs as storage for any system, I would like that system to be able to rely on the availability of all storage at any time, right?
That depends on how much availability you actually need. No system is 100% reliable.
And DHTs are usually not used as a storage system, at least not for long-lived bulk data. A DHT should be considered a dynamic value lookup system, similar to DNS, but distributed and peer-to-peer.
So how are values kept accessible, if the node responsible for those values fails?
The simplest approach is to publish the data with redundancy, i.e. write it to multiple nodes: either to the N nodes closest to the target ID, or to addresses chosen by some other deterministic key derivation.
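As an illustration, here is a minimal Python sketch of the "multiple derived addresses" idea; the function name, replica count and 160-bit key size are assumptions, not taken from any particular DHT:

```python
import hashlib

def replica_keys(key: bytes, replicas: int = 3) -> list[int]:
    """Derive several deterministic DHT addresses for one logical key.

    Each replica index is appended to the key before hashing, so the
    publisher and any reader derive the same set of addresses without
    coordinating with each other.
    """
    addresses = []
    for i in range(replicas):
        digest = hashlib.sha1(key + i.to_bytes(1, "big")).digest()
        addresses.append(int.from_bytes(digest, "big"))  # 160-bit DHT address
    return addresses

# The publisher stores the value under every derived address; a reader
# tries them in order until one of the responsible nodes answers.
print(replica_keys(b"my-content-id"))
```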
The responsibility of republishing the data to compensate for churn of storage nodes can also lie with the originator of the data. This keeps the implementation complexity and the security/game-theoretic aspects simple.
Storage nodes themselves could also perform redundancy republishing to ensure that data remains available in the absence of the originating node. The problem with this approach is that it is difficult to secure and incentivize correctly on public networks, especially when there are multiple implementations.
In closed environments this is more feasible.
We are currently working on a design using Azure Functions with an Azure Storage queue binding.
Each message in the queue represents a complete transaction. An Azure function will be bound to that queue so that the function will be triggered as soon as there is a new message in the queue.
The function will then commit the transaction in a SQL DB.
The first-cut implementation is complete and working fine. However, in retrospect, we are considering the following:
In a typical DAL, there are well-established design patterns using Entity Framework, the repository pattern, etc. However, we didn't find similar guidance/best practices for implementing a DAL within serverless code.
Therefore, my question is: should such patterns be implemented with Azure Functions (this would be challenging :) ), should the serverless code be kept as light as possible, or is this not a use case for Azure Functions at all?
It doesn't take anything too special. We're using a routine set of library DLLs for all kinds of things -- database, interacting with other parts of Azure (like retrieving Key Vault secrets for connection strings), parsing file uploads, business rules, and so on. The libraries are targeting netstandard20 so we can more easily migrate to Functions v2 when the right triggers become available.
Mainly just design your libraries so they're highly modularized, so you can minimize how much you load to get the job done (assuming reuse in other areas of the system is important, which it usually is).
It would be easier if dependency injection was available today. See this for a few ways some of us have hacked it together until we get official DI support. (DI is on the roadmap for Functions, I believe the 3.0 release.)
At first I was a little worried about startup time with the library approach, but the underlying WebJobs stack itself is already pretty heavy, and Functions startup performance seems to vary wildly anyway (on the cheaper tiers, at least). During testing, one of our infrequently executed Functions has varied from just ~300ms to a peak of about ~3800ms to parse the exact same test file, with all but ~55ms spent on startup.
should such patterns be implemented with Azure Functions (this would be challenging :) ), should the serverless code be kept as light as possible, or is this not a use case for Azure Functions at all?
My answer is NO.
There should be patterns to follow, but the traditional repository patterns and CRUD operations do not seem to be valid in the cloud era.
Many strong concepts we were raised to adhere to have become invalid these days.
Denormalizing the database has become not only acceptable but preferable.
Designing a pattern now depends on the database you selected for your solution, and also on the type of your application and the type of your data.
Here is a link with general guidelines for Table storage design.
Is your application read-heavy or write-heavy? The design will vary accordingly.
Are you using Azure Tables or Mongo? There are design decisions based on that. Indexing is important in Mongo, while in Azure Table storage there is no secondary indexing you can add.
Sharding considerations.
Redundancy considerations.
In modern development/architecture many principles have changed; each microservice has its own database, which might be totally different from any other microservice's.
If you read through the guidelines that I provided, you will see what I mean; a small sketch illustrating a couple of them follows the guidelines below.
Designing your Table service solution to be read efficient:
Design for querying in read-heavy applications. When you are designing your tables, think about the queries (especially the latency sensitive ones) that you will execute before you think about how you will update your entities. This typically results in an efficient and performant solution.
Specify both PartitionKey and RowKey in your queries. Point queries such as these are the most efficient table service queries.
Consider storing duplicate copies of entities. Table storage is cheap so consider storing the same entity multiple times (with different keys) to enable more efficient queries.
Consider denormalizing your data. Table storage is cheap so consider denormalizing your data. For example, store summary entities so that queries for aggregate data only need to access a single entity.
Use compound key values. The only keys you have are PartitionKey and RowKey. For example, use compound key values to enable alternate keyed access paths to entities.
Use query projection. You can reduce the amount of data that you transfer over the network by using queries that select just the fields you need.
Designing your Table service solution to be write efficient:
Do not create hot partitions. Choose keys that enable you to spread your requests across multiple partitions at any point of time.
Avoid spikes in traffic. Smooth the traffic over a reasonable period of time and avoid spikes in traffic.
Don't necessarily create a separate table for each type of entity. When you require atomic transactions across entity types, you can store these multiple entity types in the same partition in the same table.
Consider the maximum throughput you must achieve. You must be aware of the scalability targets for the Table service and ensure that your design will not cause you to exceed them.
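To make a couple of the read-efficiency guidelines above concrete (compound keys and denormalized summary entities), here is a small Python sketch; the entity shapes and key formats are made up for illustration:

```python
# Orders for one customer share a PartitionKey, and the RowKey is a compound
# key so entities can be point-queried by id and range-queried by date.
def order_entity(customer_id: str, order_id: str, placed_on: str, total: float) -> dict:
    return {
        "PartitionKey": customer_id,
        "RowKey": f"order_{placed_on}_{order_id}",  # e.g. order_2018-03-01_00042
        "Total": total,
    }

# A denormalized summary entity stored in the same partition: aggregate
# queries become a single point query instead of a scan over all orders.
def customer_summary_entity(customer_id: str, order_count: int, lifetime_total: float) -> dict:
    return {
        "PartitionKey": customer_id,
        "RowKey": "summary",
        "OrderCount": order_count,
        "LifetimeTotal": lifetime_total,
    }
```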
Another good source is this link:
I am currently working on a web-based MMORPG game and would like to set up an auto-scaling strategy based on Docker and DigitalOcean droplets.
However, I am wondering how I could manage to do so:
My game server would have to be splittable across different Docker containers, BUT all game server instances should act as if they were one gigantic game server. That means that every modification happening in one instance (a character moving, for example) should also be mirrored in every other game server instance.
I am trying to get this to work (at least conceptually) but can't find a way to synchronize all my instances properly. Should I use a master that only broadcasts events, or is there an alternative?
I was wondering the same thing about my MySQL database: since every game server would have to read/write from/to the db, how would I make it scale properly as the game gets bigger and bigger? The best solution I could think of was to keep the database on a single server which would be very powerful.
I understand that this could be easy if all game servers didn't have to "share" their state, but the design is primarily intended so that I can scale quickly in case of a sudden spike in activity.
(There will be different "global" game servers like A, B, C... but each of those global game servers should be, behind the scenes, composed of 1-X docker containers running the "real" game server so that the "global" game server is only a concept)
The problem you state is too generic and it's difficult to give a concrete response. However, let me be reckless and give you some general-purpose scaling advice:
Remove counters from databases. Instead of primary keys that are auto-incremented IDs, try to assign random UUIDs.
Replace data that must be validated against a central point with data that is self-contained. For example, for authentication, instead of keeping user credentials in a DB, use JSON Web Tokens that can be verified by any host.
Use techniques such as consistent hashing to balance the load without the need for load balancers (see the sketch after this list). Of course, use hash functions that distribute well, to avoid/minimize collisions.
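A minimal sketch of consistent hashing, assuming a fixed set of server names (a real ring would normally add virtual nodes and replication):

```python
import hashlib
from bisect import bisect

class ConsistentHashRing:
    """Map keys to servers so that adding or removing a server only
    remaps a small fraction of the keys."""

    def __init__(self, nodes):
        # Place each node at a deterministic position on the hash ring.
        self._ring = sorted((self._hash(n), n) for n in nodes)
        self._positions = [pos for pos, _ in self._ring]

    @staticmethod
    def _hash(key: str) -> int:
        return int.from_bytes(hashlib.md5(key.encode()).digest(), "big")

    def node_for(self, key: str) -> str:
        """Walk clockwise from the key's position to the first node."""
        idx = bisect(self._positions, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["game-1", "game-2", "game-3"])
print(ring.node_for("player:42"))  # the same player always maps to the same server
```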
The above advice is basically about changing the design to migrate from stateful to stateless in as many aspects as you can. If you do need stateful parts, try to guess which entities are most likely to share stateful data and allocate them on the same (or a nearby) server. For example, if there are cities in your game, try to allocate users that are in the same city to the same server, since they are more likely to interact with each other (and share stateful data) than users in different cities.
Of course, if the city is too big and very crowded, you will probably need to partition it across more servers to avoid overloading a single one.
Your question is too broad and, as others have mentioned, a general scaling problem. It would have been helpful if you had stated your system requirements more clearly.
If it has to be real-time, then you can choose Redis as your main DB, but then you'd need slaves (for replication) and you would not be able to scale automatically as you go*, since Redis doesn't support that. I assume that's not a good option when you're working with games (sudden spikes are probable).
*There seem to be some managed solutions; you'd need to check them out.
If it can be near real-time, using Apache Kafka can prove to be useful.
There's also a highly scalable DB which has everything you need called CockroachDB (I'm a contributor, yay!) but you need to run tests to see if it meets your latency requirements.
Overall, going with a very powerful server is a bad choice, since there's a ceiling and it'd cost you more to scale vertically.
There's great benefit in scaling such an application horizontally. I'll try to write down some ideas.
Option 1 (stateful):
When planning stateful applications you need to take care of synchronisation of the state (via pub/sub, network broadcasting or something else) and be aware that every synchronisation takes time to propagate (when it is not blocking each operation). If this is OK for you, let's go ahead.
Let's say you have 80k operations per second on your whole cluster. That means that every process needs to synchronise 80k state changes per second. This will be your bottleneck. Handling 80k changes per second is quite a big challenge for a Node.js application (because it's single-threaded and therefore blocking).
In the end you'll need to provision precisely for the maximum number of changes you want to be able to sync and perform some tests with different programming languages. The overhead of synchronising needs to be added to the general workload of the application. It could be beneficial to use a multithreaded language like C, Java/Scala or Go.
Option 2 (stateful with routing):
In some cases it's feasible to implement a different kind of scaling.
If, for example, your application can be broken down into areas of a map, you could start with one app replica that holds the full map and, as it scales up, split the map proportionally across replicas.
You'll need to implement some routing between the application servers, for example: to change the state in city A of world B => call server xyz (see the sketch after this option). This could be done automatically, but downscaling will be a challenge.
This solution requires more care and knowledge about the application and is not as fault-tolerant as option 1, but it could scale endlessly.
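A toy sketch of that routing idea; the region names, addresses and the static table are made up, and in practice the table would live in a shared registry (etcd, Consul, a database, ...) so it can change as servers come and go:

```python
# Hypothetical region -> server routing table.
ROUTES = {
    ("world-B", "city-A"): "game-server-xyz:7000",
    ("world-B", "city-C"): "game-server-abc:7000",
}

def server_for(world: str, city: str) -> str:
    """Return the address of the server that owns this part of the map."""
    try:
        return ROUTES[(world, city)]
    except KeyError:
        raise LookupError(f"no server owns {world}/{city}") from None

# Every state change for city A of world B is routed to the same server,
# so that server holds the authoritative state for that region.
print(server_for("world-B", "city-A"))
```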
Option 3 (stateless):
Move the state to some other application and solve the problem elsewhere (like Redis, Etcd, ...)
The Azure Service Fabric appears to be focused on scenarios in which all data can fit within RAM and persistence is used as a backing store. Reliable Services are designed to store information in Reliable Collections, which use a log-checkpoint system where logged information is written into RAM. Meanwhile, for Reliable Actors, the default actor state provider is "the distributed Key-Value store provided by Service Fabric platform." This seems to indicate that the same limitations would apply.
There may, however, be situations in which one would like to use the Service Fabric for "hot data" but write "cold data" to some form of permanent storage. What are best practices for handling this transition?
In Orleans, this seems to be handled automatically, using a persistence store such as Azure tables. But it seems that a principal design purpose of the Service Fabric and the Reliable Collections are to avoid needing external services, thus enhancing data locality. The current documentation anticipates the possibility that one would want to move data into some permanent store for disaster recovery and analytics, but it does not discuss the possibility of moving data back and forth between persistence-backed in-memory actors and more permanent forms of storage.
A possible answer is that the Service Fabric already does this. Maybe a Reliable Dictionary has some built-in mechanism for switching between persistence-backed in-memory storage and permanent storage.
Or, maybe the answer is that one must manage this oneself. One approach might be for an Actor to keep track of how "hot" it is and switch its persistence store as necessary. But this sacrifices one of the benefits of the Actor model, the automatic allocation and deallocation of actors. Similarly, we might periodically remove items from the Reliable Dictionary, add them to some other persistence store, and then add them back. Again, though, this requires knowledge of when it makes sense to make the transition.
A couple of examples may help crystallize this:
(1) Suppose that we are implementing a multiplayer game with many different "rooms." We don't need all the rooms in memory at once, but we need to move them into memory and use local persistence as a backup once players join them.
(2) Suppose that we are implementing an append-only B-Tree as part of a database. The temptation would be to have each B-Tree node be a stateful actor. We would like hot b-trees to remain in memory but of course the entire index can't be in memory. It seems that this is a core scenario that is already implemented for things like DocumentDB, but it's not clear to me from the documentation how one would do this.
A related question that I found is here. But that question focuses on when to use Azure Service Fabric vs. external services. My question is on whether there is a need to transition between them, or whether Azure Service Fabric already has all the capability needed here.
The Key-Value store state provider does not require everything to be kept in memory. This provider actually stores the state of all actors on the local disk and the state is also replicated to the local disk on other nodes. So the KVS store is considered a persistent and reliable store.
In addition to that, the state of active actors is also stored in memory. When an actor hasn't been used in a while, it gets deactivated and garbage collected. When this happens, the in-memory copy is freed and only the copy on disk remains. When the actor is activated again, the state is fetched from disk and remains in memory as long as the actor is active.
Also, KVS is not the only built-in state provider. We also have the VolatileActorStateProvider (http://azure.microsoft.com/en-gb/documentation/articles/service-fabric-reliable-actors-platform/#actor-state-provider-choices). This is the state provider that keeps everything in memory.
The KvsActorStateProvider does indeed store actor state in a KeyValueStore which is a similar structure to the ReliableDictionary.
The first question I'd ask is whether you really need to relegate old actors' state to cold storage. The limitation of keeping everything in memory doesn't limit you to a total number of actors, but to a total number per replica. So you must first consider the partitioning strategy so that your actors are distributed across a number of different replicas. As your demands grow you can then add more machines to the cluster, and Service Fabric will orchestrate the movement of replicas to the new machines. For more information on partitioning of the Actor service, see http://azure.microsoft.com/en-gb/documentation/articles/service-fabric-reliable-actors-platform/
If you do want to use cold storage after some time, then you have a couple of options. Firstly, you could decorate your actors with a custom ActorStateProviderAttribute that returns your own implementation of an IActorStateProvider that can handle persistence as you decide.
Alternatively, you could handle it entirely within your Actor implementation. Hook into the actor lifecycle: in OnDeactivateAsync (when the idle instance is being deactivated and garbage collected), or via an Actor Reminder at some specified time in the future, serialise the state, store it in cold storage such as blob or table storage, and null out the State property. The OnActivateAsync override can then be used to retrieve this state from offline storage and deserialise it.
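For what it's worth, here is a language-agnostic sketch of that activate/deactivate tiering idea (written in Python purely for brevity; Service Fabric actors are .NET, and the class and method names here are invented):

```python
import json

class ColdStore:
    """Stand-in for blob or table storage, keyed by actor id."""
    def __init__(self):
        self._blobs = {}

    def put(self, actor_id, payload):
        self._blobs[actor_id] = payload

    def take(self, actor_id):
        return self._blobs.pop(actor_id, None)

class RoomActor:
    def __init__(self, actor_id, cold_store):
        self.actor_id = actor_id
        self.cold = cold_store
        self.state = None                      # hot, in-memory state

    def on_activate(self):
        # On activation, rehydrate from cold storage if a snapshot exists.
        payload = self.cold.take(self.actor_id)
        self.state = json.loads(payload) if payload else {"players": []}

    def on_deactivate(self):
        # On deactivation (idle actor being collected), push the state to
        # cold storage and drop the in-memory copy.
        self.cold.put(self.actor_id, json.dumps(self.state))
        self.state = None
```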
When running a web application in a farm that uses a distributed datastore that's eventually consistent (CouchDB in my case), should I be ensuring that a given user is always directed to the same datastore instance?
It seems to me that the alternate approach, where any web request can use any data store, adds significant complexity to deal with consistency issues (retries, checks, etc). On the other hand, if a user in a given session is always directed to the same couch node, won't my consistency issues revolve mostly around "shared" user data and thus be greatly simplified?
I'm also curious about strategies for directing users but maybe I'll keep that for another question (comments welcome).
According to the CAP Theorem, distributed systems can either have complete consistency (all nodes see the same data at the same time) or availability (every request receives a response). You'll have to trade one for the other during a partition or datastore instance failure.
Should I be ensuring that a given user is always directed to the same datastore instance?
Ideally, you should not! What will you do when the given instance fails? A major feature of a distributed datastore is to be available in spite of network or instance failures.
If a user in a given session is always directed to the same couch node, won't my consistency issues revolve mostly around "shared" user data and thus be greatly simplified?
You're right, the architecture would be a lot simpler that way, but again, what would you do if that instance fails? A lot of engineering effort has gone into distributed systems to allow multiple instances to reply to a query. I am not sure about CouchDB, but Cassandra allows you to choose your consistency model; you'll have to trade off availability for a higher degree of consistency. The client is configured to request servers in a round-robin fashion by default, which distributes the load.
I would recommend you read the Dynamo paper. The authors describe a lot of engineering details behind a distributed database.
I need to create incremental reports in Table storage. I need to be able to update the same records from several different worker role instances (different roles, with several instances each).
My reports consist mainly of values that I need to increment after I parse the raw data I initially stored.
The optimistic solution I found is to use a retry mechanism: try to update the record; if you get a 412 result code (you don't have the latest ETag value), retry. This solution becomes less efficient and more costly the more users you have and the more data you need to update simultaneously (my case exactly).
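For reference, a minimal sketch of that retry loop; the `table` client, its `get`/`update` methods and the `PreconditionFailed` exception are hypothetical stand-ins rather than the real Azure SDK:

```python
import random
import time

class PreconditionFailed(Exception):
    """Stand-in for the SDK error raised on an HTTP 412 (ETag mismatch)."""

def increment_with_retry(table, partition_key, row_key, field, max_attempts=5):
    """Optimistic-concurrency increment: read, modify, conditional write."""
    for attempt in range(max_attempts):
        entity, etag = table.get(partition_key, row_key)
        entity[field] = entity.get(field, 0) + 1
        try:
            # The write succeeds only if the ETag still matches (nobody else
            # updated the entity in between); otherwise the service returns 412.
            table.update(entity, if_match=etag)
            return entity[field]
        except PreconditionFailed:
            # Back off briefly and retry against the freshly read value.
            time.sleep(random.uniform(0, 0.05 * (attempt + 1)))
    raise RuntimeError("too much contention on this entity")
```

Under heavy contention most attempts lose the race, which is exactly the inefficiency described above.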
Another solution that comes to mind is to have only one instance of one worker role that can possibly update any given record. This is very problematic because it means that I will, by design, create bottlenecks in my architecture, which is the opposite of the scale I want to reach with Azure.
If anyone here has some best practices in mind for such a use case, I would love to hear it.
Most cloud storage services (Table Storage is one of them) do not offer scalable writes to a single entity/blob/whatever. There is no quick fix for this limitation, as it comes from the core tradeoffs that were made to create cloud storage in the first place.
Basically, a storage unit (entity/blob/whatever) can be updated about once every 20ms, and that's about it. Having a dedicated worker or not will not change this.
Instead, you need to address your task from a different angle. For counters, the most usual approach is the use of sharded counters (the link is for GAE, but you can implement equivalent behavior on Azure).
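A minimal sketch of the sharded-counter idea, using an in-memory dict as a stand-in for the table (the shard count and key layout are illustrative; on Table storage each shard would be its own entity, e.g. PartitionKey = counter name, RowKey = shard index):

```python
import random
from collections import defaultdict

_shards = defaultdict(int)   # stand-in for one entity per (counter, shard)
NUM_SHARDS = 16

def increment(counter: str) -> None:
    """Writers pick a random shard, so concurrent updates rarely collide."""
    shard = random.randrange(NUM_SHARDS)
    _shards[(counter, shard)] += 1

def read(counter: str) -> int:
    """Readers sum all shards to get the logical counter value."""
    return sum(_shards[(counter, s)] for s in range(NUM_SHARDS))

for _ in range(1000):
    increment("page-views")
print(read("page-views"))  # 1000
```

More shards means less write contention but slightly more work on reads.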
Also, another way to ease the pain is to go for an asynchronous architecture à la CQRS, where the performance constraints you put on the update latency of entities are significantly relaxed.
I believe the approach needs re-architecting. In order to ensure scalability and limit the amount of contention, you want to make sure that every write can work optimistically by providing a unique Table/PartitionKey/RowKey.
If you need those values to be merged together for reports, have a separate process/worker that will post-aggregate/merge the records for reporting purposes. You can use a queue or a timer to start the aggregation/merging.
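A sketch of that shape, with an in-memory dict standing in for the table (the report names and key layout are made up):

```python
import uuid
from collections import defaultdict

# Stand-in for the raw table: the report name plays the role of the
# PartitionKey and a GUID plays the role of the RowKey, so writers never
# contend on the same row.
raw_rows = {}

def record_increment(report: str, amount: int) -> None:
    raw_rows[(report, str(uuid.uuid4()))] = amount   # contention-free append

def aggregate() -> dict:
    """Run by a separate worker (queue- or timer-triggered): fold the raw
    rows into one summary value per report."""
    totals = defaultdict(int)
    for (report, _row_key), amount in raw_rows.items():
        totals[report] += amount
    return dict(totals)

record_increment("daily-sales", 3)
record_increment("daily-sales", 5)
print(aggregate())  # {'daily-sales': 8}
```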