I have the following problem:
I need to build a high-performing, multi-threaded HTTP server that can process large amounts of data with very low latency.
The overall data set is very large (10+ GB), but most requests only need access to a subset of it. Waiting on database access would be too slow; the data must be held in memory.
Each web request will only perform read operations on the data; however, a background worker thread is responsible for periodically updating it.
My basic approach:
I've chosen the Actix Web server, as it has a good feature set and seems to perform best in the benchmarks I've looked at.
The main idea I have is to load all the data on boot into some shared state, into a data structure that is heavily optimised for the read operations.
Then I want to provide some kind of interface that each request handler can use to query that data and get immutable references to different parts of it depending on what it needs.
This should avoid race-conditions (as there is only the worker-thread that has write-access) as well as avoiding expensive data copying operations.
Architecture A
My original approach was to create this data inside a module:
static mut DATA: ProgramData; // note: a real static would also need an initializer
Then expose public methods for accessing it, but after reading enough warnings about static memory I have abandoned that approach.
Architecture B
This is what I have currently working. I create an instance of a struct like this (where ProgramData is a custom struct) in the program's main function:
struct ProgramDataWrapper {
data_loaded: bool,
data: ProgramData,
}
Then I pass an Arc<RwLock<ProgramDataWrapper>> smart pointer to a DataService (which is responsible for asynchronously loading the data and managing refreshes over time), and another clone of the Arc goes into the Actix web state.
This data should persist for the lifetime of the program, because the main function always holds a reference to it, so it is never dropped.
I have implemented public methods on this struct to enable querying the data and to get back different parts of it depending on the input parameters to the HTTP Request.
I then pass an Arc<RwLock<ProgramDataWrapper>> into the Actix web state so that every handler has read-only access to it and can query the data through the public functions (the internal data is not public).
A route handler does this by dereferencing the Arc, obtaining a read lock from the RwLock, and then calling some method like is_ready().
So then, for example, I have an endpoint /ready that will return true/false to the load-balancer to communicate the data is in memory and this instance is ready to start receiving requests.
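For illustration, here is a minimal, hedged sketch of what that /ready endpoint could look like under this architecture (ProgramData is a placeholder, and is_ready() just reports the data_loaded flag; note that web::Data is itself an Arc, so the inner Arc is technically redundant, but it matches the setup described above):

use std::sync::{Arc, RwLock};
use actix_web::{get, web, App, HttpServer, Responder};

struct ProgramData; // placeholder for the real, read-optimised structure

struct ProgramDataWrapper {
    data_loaded: bool,
    data: ProgramData,
}

impl ProgramDataWrapper {
    fn is_ready(&self) -> bool {
        self.data_loaded
    }
}

#[get("/ready")]
async fn ready(state: web::Data<Arc<RwLock<ProgramDataWrapper>>>) -> impl Responder {
    // Take a short-lived read lock; any number of handlers can hold one at once.
    let guard = state.read().unwrap();
    guard.is_ready().to_string()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let shared = Arc::new(RwLock::new(ProgramDataWrapper {
        data_loaded: false,
        data: ProgramData,
    }));
    // A second clone of the Arc would be handed to the DataService here.
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(shared.clone()))
            .service(ready)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}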
I've noticed, though, that when the worker thread takes a write lock on the data structure, no route handler can access it: they all block, and the entire application freezes until the update completes. This is because the entire ProgramDataWrapper struct is locked, including its public methods.
I think I could get around this by putting the RwLock on the ProgramData object itself, so that while the worker thread is assembling the new data, other threads can still take read locks on the ProgramDataWrapper and use its public interface.
Then, once the new data is ready, the worker only needs to hold a write lock for the short time it takes to copy in the new pieces of data, releasing it immediately afterwards.
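A minimal sketch of that refresh pattern (names are placeholders; the key point is that the expensive rebuild happens with no lock held, so readers are only blocked during the final swap):

use std::sync::{Arc, RwLock};
use std::thread;

struct ProgramData {
    values: Vec<u64>, // placeholder contents
}

fn rebuild_data() -> ProgramData {
    // Expensive assembly happens here, without holding any lock.
    ProgramData { values: vec![1, 2, 3] }
}

fn main() {
    let shared = Arc::new(RwLock::new(rebuild_data()));

    // Background worker: readers are only blocked by the assignment itself.
    let worker = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || {
            let fresh = rebuild_data(); // slow, lock-free
            *shared.write().unwrap() = fresh; // short critical section
        })
    };

    // Request handlers take read locks as usual; many can coexist.
    println!("{} entries loaded", shared.read().unwrap().values.len());

    worker.join().unwrap();
}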
Architecture C
The other idea I had is to use mpsc channels.
When I create the DataService, it can create a sender/receiver pair, keep the receiving end, and pass the sending half back to the main function. Main can then clone the sender into the Actix web state, so that every route handler has a way to send messages to the DataService.
What I was thinking then, is to create a struct like this:
struct TwoWayData<T, U> {
    query: T,
    callback: std::sync::mpsc::Sender<U>,
}
Then, in the route handler, I can create a sender/receiver pair for the response.
I can send a message to the data service (because I have access to a clone of its sender from the main function), and include in the payload the sender the service should use to reply to the route handler.
Something like:
#[get("/stuff")]
pub async fn data_ready(data: web::Data<Arc<Sender<TwoWayData<DataQuery, DataResponse>>>>) -> impl Responder {
let (sx, rx): (Sender<TwoWayData<DataQuery, DataResponse>>, Receiver<TwoWayData<DataQuery, DataResponse>>) = channel();
data.send(TwoWayData {
query: "Get me some data",
callback: sx.clone(),
});
}
Then the data service can just listen to incoming messages, extract the query and process it, and send the result back down the channel it has just received.
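A hedged sketch of that receiving side (generics dropped and DataQuery/DataResponse reduced to toy types for brevity): the service owns the data outright, so it needs no locks, and it answers each request over the callback sender it was handed:

use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Toy stand-ins for the DataQuery/DataResponse types above.
struct DataQuery(String);
struct DataResponse(String);

struct TwoWayData {
    query: DataQuery,
    callback: Sender<DataResponse>,
}

fn spawn_data_service() -> Sender<TwoWayData> {
    let (tx, rx): (Sender<TwoWayData>, Receiver<TwoWayData>) = channel();
    thread::spawn(move || {
        // The service is the sole owner of the data: no locks required.
        let data = vec!["alpha".to_string(), "beta".to_string()];
        for msg in rx {
            // Process the query against the owned data...
            let hits = data.iter().filter(|d| d.contains(&msg.query.0)).count();
            // ...and reply on the per-request callback channel. A send error
            // just means the requester stopped waiting.
            let _ = msg.callback.send(DataResponse(format!("{hits} matches")));
        }
    });
    tx
}

fn main() {
    let service = spawn_data_service();
    let (reply_tx, reply_rx) = channel();
    let request = TwoWayData {
        query: DataQuery("alp".into()),
        callback: reply_tx,
    };
    service.send(request).unwrap();
    println!("{}", reply_rx.recv().unwrap().0);
}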
My Question
If you're still with me, I really appreciate that.
My questions are these:
Is there a large overhead to mpsc channels that will slow down my program when communicating large amounts of data over them?
Is it even possible to send the callback in the way I want to allow two-way communication? And if not, what is the accepted way of doing this?
I know this is a matter of opinion, but which of these approaches (B or C) is a more standard way of solving this type of problem, or does it just come down to personal preference / the technical requirements of the problem?
A. I would disregard static mut entirely, since it is unsafe and easy to get wrong. The only way I would consider it is as static DATA: RwLock<ProgramData>, but then it is the same as option B, except less flexible for testing, discrete data sets, etc.
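For reference, a minimal sketch of that safe static variant (RwLock::new has been a const fn since Rust 1.63, so no lazy-initialisation wrapper is needed; the Option stands in for not-yet-loaded data):

use std::sync::RwLock;

struct ProgramData {
    values: Vec<u64>, // placeholder
}

// Safe: all access goes through the lock, and no `unsafe` is required.
static DATA: RwLock<Option<ProgramData>> = RwLock::new(None);

fn load() {
    *DATA.write().unwrap() = Some(ProgramData { values: vec![1, 2, 3] });
}

fn is_ready() -> bool {
    DATA.read().unwrap().is_some()
}

fn main() {
    assert!(!is_ready());
    load();
    assert!(is_ready());
}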
B. Using an Arc<RwLock> is a very common and understandable pattern, and I would consider it my first option when sharing mutable data across threads. It is also a very performant option if you keep your writing critical section small. You may reach for some other concurrent data structure if it's infeasible to clone the whole dataset for each update and in-place updates are long and/or non-trivial. At 10+ GB of data, I'd have to take a good look at your data, access, and update patterns to decide on a "best" course of action. Perhaps you can use many smaller locks within your structure, or use a DashMap, or some combination thereof. There are many tools available, and you may need to build something custom if you're striving for the lowest latency.
C. This looks a bit convoluted, but glossing over the specifics, it is pretty much an "actor model", or at least based on the principles of message passing. If you want the data to behave as a separate "service" that can govern itself and provide more control over how queries are processed, then you can use an actor framework like Actix (originally built for Actix Web, but they've since drifted apart enough that there's no longer any meaningful relation). I personally don't use actors, since they tend to be an obscuring layer of abstraction, but it's up to you. It will likely be slower than accessing the data directly, and you'll still need to decide internally on a concurrency mechanism, as mentioned above.
Related
I am creating a webserver using tokio. Whenever a client connection comes in, a green thread is created via tokio::spawn.
The main function of my web server is proxying. Target server information for the proxy is stored as a global variable, and all tasks must access that data. Since there are multiple target servers, they must be selected round-robin, so the global variable (a struct) must also hold the index of the most recently selected server.
Concurrency problems occur because shared information can be read/written by multiple tasks at the same time.
According to the docs, there seems to be a way to solve this either with Mutex and Arc or with channels.
I'm curious which one you usually prefer, or if there is another way to solve the problem.
If it's shared data, you generally do want Arc, or you can leak a Box to get a 'static reference (assuming the data is going to exist until the program exits), or you can use a global variable (though global variables tend to impede testability and should generally be considered an anti-pattern).
As far as what goes in the Arc/Box/global, that depends on what your data's access pattern will be. If you will often read but rarely write, then Tokio's RwLock is probably what you want; if you're going to be updating the data every time you read it, then use Tokio's Mutex instead.
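To illustrate the read-write case, a minimal sketch of a round-robin selector guarded by a Tokio Mutex (the Targets type and its fields are made up for the example; for something this small, a lock-free AtomicUsize counter next to an immutable server list would also work):

use std::sync::Arc;
use tokio::sync::Mutex;

// Hypothetical shared proxy state: the target list plus the index
// of the most recently selected server.
struct Targets {
    servers: Vec<String>,
    next: usize,
}

impl Targets {
    // Every call both reads and writes the index, which is why a
    // Mutex fits better than an RwLock here.
    fn pick(&mut self) -> String {
        let server = self.servers[self.next].clone();
        self.next = (self.next + 1) % self.servers.len();
        server
    }
}

#[tokio::main]
async fn main() {
    let targets = Arc::new(Mutex::new(Targets {
        servers: vec!["10.0.0.1:80".into(), "10.0.0.2:80".into()],
        next: 0,
    }));

    // Each connection task clones the Arc and holds the lock only briefly.
    let mut handles = Vec::new();
    for _ in 0..4 {
        let targets = Arc::clone(&targets);
        handles.push(tokio::spawn(async move {
            let server = targets.lock().await.pick();
            println!("proxying to {server}");
        }));
    }
    for handle in handles {
        handle.await.unwrap();
    }
}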
Channels make the most sense when you have separate parts of the program with separate responsibilities. It doesn't work as well to update multiple workers with the same changes to data, because then you get into message ordering problems that can result in each worker's state disagreeing about something. (You get many of the problems of a distributed system without any of the benefits.)
Channels can work if there is a single entity responsible for maintaining the data, but at that point there isn't much benefit over using some kind of mutual exclusion mechanism; it winds up being the same thing with extra steps.
When you use Node's EventEmitter, you subscribe to a single event. Your callback is only executed when that specific event is fired:
eventBus.on('some-event', function(data){
// data is specific to 'some-event'
});
In Flux, you register your store with the dispatcher, then your store gets called when every single event is dispatched. It is the job of the store to filter through every event it gets, and determine if the event is important to the store:
eventBus.register(function(data){
switch(data.type){
case 'some-event':
// now data is specific to 'some-event'
break;
}
});
In this video, the presenter says:
"Stores subscribe to actions. Actually, all stores receive all actions, and that's what keeps it scalable."
Question
Why and how is sending every action to every store [presumably] more scalable than only sending actions to specific stores?
The scalability referred to here is more about scaling the codebase than scaling in terms of how fast the software is. Data in Flux systems is easy to trace because every store is registered to every action, and the actions define every app-wide event that can happen in the system. Each store can determine how it needs to update itself in response to each action, without the programmer needing to decide which stores to wire up to which actions, and in most cases, you can change or read the code for a store without needing to worry about how it affects any other store.
At some point the programmer will need to register the store. The store is very specific to the data it'll receive from the event. How exactly is looking up the data inside the store better than registering for a specific event, and having the store always expect the data it needs/cares about?
The actions in the system represent the things that can happen in a system, along with the relevant data for that event. For example:
A user logged in; comes with user profile
A user added a comment; comes with comment data, item ID it was added to
A user updated a post; comes with the post data
So, you can think of the actions as the database of things the stores can know about. Any time an action is dispatched, it's sent to each store. So, at any given time, you only need to think about your data mutations one store + action at a time.
For instance, when a post is updated, you might have a PostStore that watches for the POST_UPDATED action, and when it sees it, it will update its internal state to store off the new post. This is completely separate from any other store which may also care about the POST_UPDATED event—any other programmer from any other team working on the app can make that decision separately, with the knowledge that they are able to hook into any action in the database of actions that may take place.
Another reason this is useful and scalable in terms of the codebase is inversion of control; each store decides what actions it cares about and how to respond to each action; all the data logic is centralized in that store. This is in contrast to a pattern like MVC, where a controller is explicitly set up to call mutation methods on models, and one or more other controllers may also be calling mutation methods on the same models at the same time (or different times); the data update logic is spread through the system, and understanding the data flow requires understanding each place the model might update.
Finally, another thing to keep in mind is that registering vs. not registering is sort of a matter of semantics; it's trivial to abstract away the fact that the store receives all actions. For example, in Fluxxor, the stores have a method called bindActions that binds specific actions to specific callbacks:
this.bindActions(
"FIRST_ACTION_TYPE", this.handleFirstActionType,
"OTHER_ACTION_TYPE", this.handleOtherActionType
);
Even though the store receives all actions, under the hood it looks up the action type in an internal map and calls the appropriate callback on the store.
I've been asking myself the same question, and can't see, technically, how registering adds much, beyond simplification. I will pose my understanding of the system so that, hopefully, if I am wrong, I can be corrected.
TL;DR: EventEmitter and Dispatcher serve similar purposes (pub/sub) but focus their efforts on different features. Specifically, the 'waitFor' functionality (which allows one event handler to ensure that a different one has already been called) is not available with EventEmitter. Dispatcher has focused its efforts on the 'waitFor' feature.
The final result of the system is to communicate to the stores that an action has happened. Whether the store 'subscribes to all events, then filters' or 'subscribes to a specific event' (filtering at the dispatcher) should not affect the final result: data is transferred in your application. (A handler only ever switches on the event type and processes it; it doesn't want to operate on ALL events.)
As you said, "At some point the programmer will need to register the store." It is just a question of the fidelity of subscription, and I don't think a change in fidelity has any effect on 'inversion of control', for instance.
The added (killer) feature in Facebook's Dispatcher is its ability to 'waitFor' a different store to handle the event first. The question is: does this feature require that each store has only one event handler?
Let's look at the process. When you dispatch an action on the Dispatcher, it (omitting some details):
iterates all registered subscribers (to the dispatcher)
calls the registered callback (one per store)
the callback can call 'waitFor()' and pass a 'dispatchId'. This internally references the callback registered by a different store. It is executed synchronously, causing the other store to receive the action and be updated first. This requires that 'waitFor()' is called before your code which handles the action.
The callback called by 'waitFor' switches on the action type to execute the correct code.
the callback can now run its own code, knowing that its dependencies (other stores) have already been updated.
the callback switches on the action 'type' to execute the correct code.
This seems a very simple way to allow event dependencies.
Basically, all callbacks are eventually called, but in a specific order, and then they switch so that only the specific code is executed. So, it is as if we only triggered a handler for the 'add-item' event on each store, in the correct order.
If subscriptions were at a callback level (not 'store' level), would this still be possible? It would mean:
Each store would register multiple callbacks to specific events, keeping reference to their 'dispatchTokens' (same as currently)
Each callback would have its own 'dispatchToken'
The user would still 'waitFor' a specific callback, but it would be a specific handler for a specific store
The dispatcher would then only need to dispatch to callbacks of a specific action, in the same order
Possibly, the smart people at Facebook figured out that adding the complexity of individual callbacks would actually be less performant, or possibly it is just not a priority.
I'm designing a large-scale project, and I think I see a way I could drastically improve performance by taking advantage of multiple cores. However, I have zero experience with multiprocessing, and I'm a little concerned that my ideas might not be good ones.
Idea
The program is a video game that procedurally generates massive amounts of content. Since there's far too much to generate all at once, the program instead tries to generate what it needs as or slightly before it needs it, and expends a large amount of effort trying to predict what it will need in the near future and how near that future is. The entire program, therefore, is built around a task scheduler, which gets passed function objects with bits of metadata attached to help determine what order they should be processed in and calls them in that order.
Motivation
It seems like it ought to be easy to make these functions execute concurrently in their own processes. But looking at the documentation for the multiprocessing module makes me reconsider: there doesn't seem to be any simple way to share large data structures between processes. I can't help but imagine this is intentional.
Questions
So I suppose the fundamental questions I need to know the answers to are thus:
Is there any practical way to allow multiple processes to access the same list/dict/etc. for both reading and writing at the same time? Can I just launch multiple instances of my star generator, give them access to the dict that holds all the stars, and have new objects appear to just pop into existence in the dict from the perspective of the other processes? (That is, I wouldn't have to explicitly grab the star from the process that made it; I'd just pull it out of the dict as if the main thread had put it there itself.)
If not, is there any practical way to allow multiple processes to read the same data structure at the same time, but feed their resulting data back to a main process to be rolled into that same data structure safely?
Would this design work even if I ensured that no two concurrent functions tried to access the same data structure at the same time, either for reading or for writing?
Can data structures be inherently shared between processes at all, or do I always explicitly have to send data from one process to another as I would with processes communicating over a TCP stream? I know there are objects that abstract away that sort of thing, but I'm asking if it can be done away with entirely; have the object each thread is looking at actually be the same block of memory.
How flexible are the objects that the modules provide to abstract away the communication between processes? Can I use them as a drop-in replacement for data structures used in existing code and not notice any differences? If I do such a thing, would it cause an unmanageable amount of overhead?
Sorry for my naivete, but I don't have a formal computer science education (at least, not yet) and I've never worked with concurrent systems before. Is the idea I'm trying to implement here even remotely practical, or would any solution that allows me to transparently execute arbitrary functions concurrently cause so much overhead that I'd be better off doing everything in one thread?
Example
For maximum clarity, here's an example of how I imagine the system would work:
The UI module has been instructed by the player to move the view over to a certain area of space. It informs the content management module of this, and asks it to make sure that all of the stars the player can currently click on are fully generated and ready to be clicked on.
The content management module checks and sees that a couple of the stars the UI is saying the player could potentially try to interact with have not, in fact, had the details that would show upon click generated yet. It produces a number of Task objects containing the methods of those stars that, when called, will generate the necessary data. It also adds some metadata to these task objects, assuming (possibly based on further information collected from the UI module) that it will be 0.1 seconds before the player tries to click anything, and that stars whose icons are closest to the cursor have the greatest chance of being clicked on and should therefore be requested for a time slightly sooner than the stars further from the cursor. It then adds these objects to the scheduler queue.
The scheduler quickly sorts its queue by how soon each task needs to be done, then pops the first task object off the queue, makes a new process from the function it contains, and then thinks no more about that process; it just pops another task off the queue and stuffs it into a process too, then the next one, then the next one...
Meanwhile, the new process executes, stores the data it generates on the star object it is a method of, and terminates when it gets to the return statement.
The UI then registers that the player has indeed clicked on a star now, and looks up the data it needs to display on the star object whose representative sprite has been clicked. If the data is there, it displays it; if it isn't, the UI displays a message asking the player to wait and continues repeatedly trying to access the necessary attributes of the star object until it succeeds.
Even though your problem seems very complicated, there is a very easy solution. You can hide away all the complicated stuff of sharing your objects across processes using a proxy.
The basic idea is that you create a manager that manages all the objects that should be shared across processes. This manager then creates its own process, where it waits for some other process to instruct it to change the object. But enough said; it looks like this:
import multiprocessing as m

manager = m.Manager()
starsdict = manager.dict()  # a proxy object, not a real dict

process = m.Process(target=yourfunction, args=(starsdict,))
process.start()  # start() spawns a new process; run() would execute in this one
The object stored in starsdict is not the real dict; instead, it forwards all the changes and requests you make to its manager. This is called a "proxy"; it has almost exactly the same API as the object it mimics. These proxies are picklable, so you can pass them as arguments to functions in new processes (as shown above) or send them through queues.
You can read more about this in the documentation.
I don't know how proxies react if two processes access them simultaneously. Since they're made for parallelism, I'd guess they should be safe, even though I've heard they're not. It would be best to test this yourself or look it up in the documentation.
I have a fairly involved download process I want to perform in a background thread. There are some natural dependencies between steps in this process. For example, I need to complete the downloads of both Table A and Table B before setting the relationships between them (I'm using Core Data).
I thought first of putting each dependent step in its own NSOperation, then creating a dependency between the two operations (i.e. download the two tables in one operation, then set the relationship between them in the next, dependent operation). However, each NSOperation requires its own NSManagedObjectContext, so this is no good. I don't want to save the background context until both tables have been downloaded and their relationships set.
I've therefore concluded this should all occur inside one NSOperation, and that I should use notifications or some other mechanism to call the dependent method when all the conditions for running it have been met.
I'm an iOS beginner, however, so before I venture down this path, I wouldn't mind advice on whether I've reached the right conclusion.
Given your validation requirements, I think it will be easiest inside of one operation, although this could turn into a bit of a hairball as far as code structure goes.
You'll essentially want to make two wire fetches to get the entire dataset you require, then combine the data and parse it at one time into Core Data.
If you're going to use the asynchronous APIs, this essentially means structuring a class that waits for both operations to complete and then launches another NSOperation or block which does the parse and relationship construction.
Imagine this order of events:
User performs some action (button tap, etc.)
Selector for that action fires two network requests
When both requests have finished (they both notify a common delegate) launch the parse operation
Might look something like this in code:
- (IBAction)someAction:(id)sender {
    // fire both network requests, pointing them at a common delegate
    request1.delegate = aDelegate;
    request2.delegate = aDelegate;
    [request1 start];
    [request2 start];
}
//later, inside the implementation of aDelegate
- (void)requestDidComplete... {
if (request1Finished && request2Finished) {
NSOperation *parse = //init with fetched data
//launch on queue etc.
}
}
There are two major pitfalls that this solution is prone to:
It keeps the entire data set around in memory until both requests are finished
You will have to constantly switch on the specific request that's calling your delegate (for error handling, success, etc.)
Basically, you're implementing operation dependencies on your own, although there might not be a good way around that because of the structure of NSURLConnection.
I have four threads in a C++/CLI GUI I'm developing:
Collects raw data
The GUI itself
A background processing thread which takes chunks of raw data and produces useful information
Acts as a controller which joins the other three threads
I've got the raw data collector working and posting results to the controller, but the next step is to store all of those results so that the GUI and background processor have access to them.
New raw data is fed in one result at a time at regular (frequent) intervals. The GUI will access each new item as it arrives (the controller announces new data and the GUI then accesses the shared buffer). The data processor will periodically read a chunk of the buffer (a second's worth, for example) and produce a new result. So effectively, there is one producer and two consumers which need access.
I've hunted around, but none of the CLI-supplied stuff sounds all that useful, so I'm considering rolling my own: a shared circular buffer which allows write locks for the collector and read locks for the GUI and data processor. This would allow multiple threads to read the data as long as those sections of the buffer are not being written to.
So my question is: Are there any simple solutions in the .net libraries which could achieve this? Am I mad for considering rolling my own? Is there a better way of doing this?
Is it possible to rephrase the problem so that:
The Collector collects a new data point ...
... which it passes to the Controller.
The Controller fires a GUI "NewDataPointEvent" ...
... and stores the data point in an array.
If the array is full (or otherwise ready for processing), the Controller sends the array to the Processor ...
... and starts a new array.
If the values passed between threads are not modified after they are shared, this might save you from needing the custom thread-safe collection class, and reduce the amount of locking required.