FreeMarker supports multithreading:
In a multithreaded environment Configuration instances, Template instances and data-models should be handled as immutable (read-only) objects.
But can it use parallel execution when rendering, the way Pebble does?
By providing an ExecutorService to the PebbleEngine and making use of the parallel tag, you can have multiple threads render different sections of your template at one time. This is especially useful if one section of your template is costly and would otherwise block the rendering of the rest of the template.
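For illustration, a Pebble template using the parallel tag might look like the sketch below (the section contents here are made up). As I understand it, the engine must be built with an executor, e.g. `new PebbleEngine.Builder().executorService(Executors.newFixedThreadPool(4)).build()`, for the tag to have any effect:

```twig
<h1>Dashboard</h1>
{# this costly section is handed to a worker thread from the
   configured ExecutorService, while the rest of the template
   continues rendering on the calling thread #}
{% parallel %}
  {{ expensiveReport }}
{% endparallel %}
<footer>...</footer>
```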
I also found an old FreeMarker extension which I wonder whether it allows this:
This library provides two ways to render asynchronous models: futures and callables. Futures are asynchronous computation results, and callables are tasks that should be computed asynchronously. Callables require configuration of an executor service.
Related
According to the documentation, Dart is single threaded, but to perform two operations at a time we use Future objects, which work much like threads:
Use Future objects (futures) to perform asynchronous operations.
If Dart is single threaded, then why does it allow asynchronous operations to be performed?
Note: Asynchronous operations are parallel operations which are called threads
You mentioned that:
Asynchronous operations are parallel operations which are called threads
First of all, asynchronous operations are not necessarily parallel or even concurrent. Asynchrony simply means that we do not want to block our flow of execution (thread), or wait for a response, until certain work is done. But the way we implement asynchronous operations decides whether they are parallel or concurrent.
Parallelism vs Concurrency?
Parallelism is actually doing lots of things at the same time. For example: you are walking and, at the same time, digesting your food. Both tasks run in parallel, at exactly the same time.
While
Concurrency is the illusion of parallelism. Tasks seem to be executed in parallel, but they aren't. It is like handling lots of things at a time but only doing one task at any specific moment. For example: you are walking and suddenly stop to tie your shoe lace; after tying it, you start walking again.
Now, coming to Dart: Future objects, along with the async and await keywords, are used to perform asynchronous tasks. Here, asynchronous doesn't mean that tasks are executed in parallel or concurrently with each other. In Dart, even an asynchronous task is executed on the same thread, which means that while we wait for another task to be completed, we continue executing our synchronous code. Future objects represent the result of a task that will be done at some time in the future.
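The same idea can be sketched in Java terms (this is an analogy, not Dart code): a single-threaded executor plays the role of Dart's event loop, so tasks are asynchronous (scheduled for later) but never parallel.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SingleThreadAsync {
    public static void main(String[] args) {
        // One thread stands in for Dart's event loop.
        ExecutorService loop = Executors.newSingleThreadExecutor();
        List<String> threadNames = Collections.synchronizedList(new ArrayList<>());

        // Two "asynchronous" tasks: neither blocks the caller, yet both
        // are queued onto the same single thread, one after the other.
        CompletableFuture<Void> a = CompletableFuture.runAsync(
                () -> threadNames.add(Thread.currentThread().getName()), loop);
        CompletableFuture<Void> b = CompletableFuture.runAsync(
                () -> threadNames.add(Thread.currentThread().getName()), loop);

        CompletableFuture.allOf(a, b).join();
        // Both tasks ran on the same thread: asynchronous, not parallel.
        System.out.println(threadNames.get(0).equals(threadNames.get(1)));
        loop.shutdown();
    }
}
```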
If you really want to execute your tasks concurrently, consider using Isolates, which run in a separate thread and don't share memory with the main (spawning) thread.
Why? Because it is a necessity. Some operations, like HTTP requests or timers, are asynchronous in nature.
There are isolates, which allow you to execute code in a different process. The difference from threads in other programming languages is that isolates do not share memory with each other (which would lead to concurrency issues); they communicate only through messages.
To receive these messages (or, wrapped in a Future, their results), Dart uses an event loop.
The Event Loop and Dart
Are Futures in Dart threads?
Dart is single threaded, but it can call native code (like C/C++) to perform asynchronous operations, which can introduce new threads.
In Flutter, the Flutter engine is implemented in C++ and provides the low-level implementation of Flutter's core APIs, including asynchronous tasks like file and network I/O, using new threads underneath.
Like Dart, JavaScript is also single threaded. I found this video very helpful for understanding the "single threaded" thing: What the heck is the event loop anyway?
Here are a few notes:
Asynchronous doesn't mean multi-threaded. It means the code is not all run at the same time. Usually, asynchronous just means that the code is scheduled to be run on the same thread (Isolate) after other tasks have finished.
Dart isn't actually single threaded. You can create another thread by creating another Isolate. However, within an Isolate the Dart code runs on a single thread and separate Isolates don't share memory. They can only communicate by messages.
A Future says that a value (or an error) will be returned at some point in the future. It doesn't say which thread the work is done on. Most futures are done on the current Isolate, but some futures (IO, for example) can be done on separate threads.
See this answer for links to more resources.
I have an article explaining this: https://medium.com/@truongsinh/flutter-dart-async-concurrency-demystify-1cc739aaae57
In short, Flutter/Dart is not technically single-threaded, even though Dart code is executed in a single thread. Dart is a concurrent language with message passing pattern, that can take full advantage of modern multi-core architecture, without worrying about lock or mutex. Blocking in Dart can be either I/O-bound or CPU-bound, which should be solved, respectively, by Future and Dart’s Isolate/Flutter’s compute.
In Scala you can use a global ExecutionContext if you don't need to define your own by importing scala.concurrent.ExecutionContext.Implicits.global.
My question is why ForkJoinPool was chosen for this executor instead of ThreadPoolExecutor.
My understanding is that the fork-join framework is excellent at recursively decomposing problems. You're supposed to write code that breaks a task into halves, so that half can be executed in-thread and the other half can be executed by another thread. This seems like a very particular programming model and not one that would be generally applicable to handling execution of general asynchronous tasks across a wide range of possible applications.
So why was ForkJoinPool chosen for the default execution context that most people will likely use? Does the work-stealing design result in improved performance even if you don't use the full fork-join paradigm?
I can't speak for the Scala designers, but idiomatic use of Scala Futures often involves the creation of a lot of very small, short-lived tasks (e.g. every map call creates a new task), and so the work-stealing design is appropriate.
If you care about this kind of precise detail, you might prefer to use scalaz-concurrent's Future, which uses trampolines to avoid creating extra tasks for each map step and makes execution contexts explicit (and, AIUI, defaults to a ThreadPoolExecutor).
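Since Scala's default ExecutionContext is backed by java.util.concurrent.ForkJoinPool, the "many tiny tasks" workload can be sketched with plain JDK classes. This is only an illustration of the workload shape, not Scala's internal wiring:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ManySmallTasks {
    public static void main(String[] args) {
        // The common pool is a work-stealing ForkJoinPool: idle workers
        // steal queued tasks, which suits lots of short-lived tasks such
        // as those created by every Future.map step in Scala.
        ForkJoinPool pool = ForkJoinPool.commonPool();

        List<CompletableFuture<Integer>> tasks = IntStream.range(0, 1000)
                .mapToObj(i -> CompletableFuture.supplyAsync(() -> i + 1, pool))
                .collect(Collectors.toList());

        // Sum of 1..1000.
        int sum = tasks.stream().mapToInt(CompletableFuture::join).sum();
        System.out.println(sum);
    }
}
```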
I've been reading a lot about multi-threaded rendering. People have been proposing all kinds of weird and wonderful schemes for submitting work to the GPU with threads in order to speed up their frame rates and get more stuff rendered, but I'm having a bit of a conceptual problem with the whole thing and I thought I'd run it by the gurus here to see what you think.
As far as I know, the basic unit of concurrency on a GPU is the Warp. That is to say, it's down at the pixel level rather than higher up at the geometry submission level. So given that the unit of concurrency on the GPU is the warp, the driver must be locked down pretty tightly with mutexes to prevent multiple threads screwing up each other's submissions. If this is the case, I don't see where the benefit is of coding to D3D or OpenGL multi-threading primitives.
Surely the most efficient method of using your GPU in a multi-threading scenario is at the higher, abstract level, where you're collecting together batches of work to do before submitting it? I mean, rather than randomly interleaving commands from multiple threads, I would have thought a single block accepting work from multiple threads, but with a little intelligence inside it to make sure things are ordered for better performance before being submitted to the renderer, would be a much bigger gain if you wanted to work with multiple threads.
So, whither D3D/OpenGL multi-threaded rendering support in the actual API?
Help me with my confusion!
Your question comes from a misunderstanding of the difference between "make their renderers multi-threaded" and "multithreaded rendering".
A "renderer", or more precisely a "rendering system," does more than just issue rendering commands to the API. It has to shuffle memory around. It may have to load textures dynamically into and out-of graphics memory. It may have to read data back after some rendering process. And so forth.
To make a renderer multithreaded means exactly that: to make the rendering system make use of multiple threads. This could be threading scene graph management tasks like building the list of objects to render (frustum culling, BSPs, portals, etc). This could be having a thread dedicated to texture storage management, which swaps textures in and out as needed, loading from disk and such. This could be as in the D3D11 case of command lists, where you build a series of rendering commands in parallel with other tasks.
The process of rendering, the submission of actual rendering commands to the API, is not threaded. You generally have one thread that is responsible for the basic glDraw* or ::DrawIndexedPrimitive work. D3D11 command lists allow you to build sequences of these commands, but they are not executed in parallel with other rendering commands. It is the rendering thread and the main context that are responsible for actually issuing the command list; the command list is just there to make putting that list together more thread-friendly.
In Direct3D 11 you generally create deferred contexts to which you make draw calls from your worker threads. Once work is complete and you are ready to render, you generate a command list from each deferred context and execute it on the immediate (front thread) context. This allows the draw calls to be composed in multiple threads whilst preserving correct ordering of the draw calls etc.
I am following the async module's each method (https://github.com/caolan/async#each). It says the method iterates over the array in parallel. "Parallel" is the word that confuses me. AFAIK, there is no way JavaScript can execute code in parallel, because it has a single-threaded model.
The examples shown for the each method focus on I/O scenarios. I am using the each method just to add the numbers of an array. If parallelism exists, can I prove it using my example?
Thanks for reading.
The 'parallel' in the async documentation doesn't refer to 'parallel' in terms of concurrency (like multiple processes or threads being run at the same time), but 'parallel' in terms of each step being independent of the other steps (the opposite operation would be eachSeries, where each step is run only after the previous has finished).
The parallel version would only make sense if the steps perform some kind of I/O, which (because of Node's asynchronous nature) could run parallel to each other: if one step has to wait for I/O, the other steps can happily continue to send/receive data.
If the steps are mainly cpu-bound (that is, performing lots of calculations), it's not going to provide you any better performance because, like you say, Node runs the interpreter in a single thread, and that's not something that async changes.
Like robertklep said, it is more concurrent than parallel. You are not going to achieve much performance gain by running compute-heavy code in parallel. It is useful when you have to do parallel I/O (communicating with an external web service for every item of an array, for example).
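The each-vs-eachSeries distinction can be sketched in Java terms (an analogy only; async itself is a JavaScript library): "each" starts every step before waiting on any of them, while "eachSeries" waits for each step to finish before starting the next.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class EachVsEachSeries {
    public static void main(String[] args) {
        ExecutorService io = Executors.newFixedThreadPool(4);
        List<Integer> items = List.of(1, 2, 3, 4);

        // "each": start all steps at once, then wait for all of them.
        // The steps are independent and may overlap (good for I/O work).
        List<CompletableFuture<Integer>> started = items.stream()
                .map(i -> CompletableFuture.supplyAsync(() -> i * 10, io))
                .collect(Collectors.toList());
        int eachSum = started.stream().mapToInt(CompletableFuture::join).sum();

        // "eachSeries": each step starts only after the previous finished.
        int seriesSum = 0;
        for (int i : items) {
            seriesSum += CompletableFuture.supplyAsync(() -> i * 10, io).join();
        }

        // Same results, different scheduling.
        System.out.println(eachSum + " " + seriesSum);
        io.shutdown();
    }
}
```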
I'm having issues with Entity Framework and multiple threads, and I am wondering if there is a solution that keeps the ability to lazy load. From my understanding, the data context is not thread safe, which is why, when I have multiple threads using the same data context, I get various data reader errors. The solution to this problem is to use a separate data context for each connection to the database and then dispose of the data context. Unfortunately, disposing of my data context then prevents me from doing lazy loading.
Is there a pattern to allow me to have a shared context across my application, but still properly handle multiple threads?
No, there is no such solution. Your choices in a multithreaded application are:
Context per thread
A single context producing unproxied, detached entities (no lazy loading, no change tracking), with synchronization for each access to that context.
Doing the second approach with proxied, attached entities is a recipe for disaster. It would require detecting all hidden interactions with the context and making the related code synchronized as well. You would probably end up with a single-threaded process running across multiple switching threads.