How is PLINQ better than traditional threading? [closed]

Currently I have n supplier web services, each of which gives me search results for a particular product. I am creating n threads myself and merging the final results returned by the suppliers. I have just come across PLINQ. I want to know whether it would help performance, and if so, how.

Better? That depends on what "better" means for you. PLINQ definitely gives cleaner and more maintainable code for a lot of use cases. On the performance side, it depends on what you compare it against.
In your case, if you are creating n threads by hand, your current approach might well be slower, because PLINQ uses the thread pool and avoids some of the thread-creation overhead.

Related

Why was sleep_ms() deprecated? [closed]

I know there is a function sleep() that is more flexible, as it accepts a Duration as a parameter. Obviously, in many cases that could be more convenient. But from my personal experience with other languages, most of the time I need to put a thread to sleep for milliseconds or seconds, rather than hours or days. So in most cases something like this:
thread::sleep_ms(500);
would be much more convenient than this:
use std::time::Duration;
thread::sleep(Duration::from_millis(500));
This question isn't about which of the two functions is better. I personally think there is room for both, but that is subjective, of course.
My question is, was there any special, non-obvious reason for the deprecation of sleep_ms()? I tried to find info on the topic, but to no avail.
Thanks to dratenik's and eggyal's comments, I believe it is correct to summarize that the main reason for deprecating sleep_ms() is that the Rust core developers believe the right way forward is to add less verbose ways of passing a Duration value.
Here is what is coming:
#![feature(duration_constants)] // nightly-only feature gate
use std::thread;
use std::time::Duration;
thread::sleep(2 * Duration::SECOND);
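In the meantime, on stable Rust, you can get the same brevity with a tiny wrapper of your own. This sleep_ms helper is hypothetical (not part of std), shown only as a sketch of how little code the convenience actually needs:
use std::thread;
use std::time::Duration;

// Hypothetical convenience wrapper around the non-deprecated API.
fn sleep_ms(ms: u64) {
    thread::sleep(Duration::from_millis(ms));
}

fn main() {
    sleep_ms(500); // same brevity as the deprecated std function
}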

What is the best way to generate and run concurrent threads from a for loop over an Arc<Mutex<Vec>>? [closed]

I'm running both a single-threaded and a multi-threaded version of an application, and there is no speed advantage. That said, what is the best way to access an Arc<Mutex<Vec>> and process each entry concurrently?
You cannot process an Arc<Mutex<Vec<T>>> concurrently: the mutex wraps the entire vector, so no thread other than the one holding the lock can access it, as the sketch below illustrates.
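A rough sketch of why (with made-up values, not taken from the original post): every thread has to take the same lock, so the work ends up serialized.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // The entire Vec sits behind a single mutex.
    let shared = Arc::new(Mutex::new(vec![1, 2, 3, 4]));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                // While one thread holds this lock, every other thread blocks
                // here, so the "parallel" work effectively runs one at a time.
                let mut v = shared.lock().unwrap();
                for x in v.iter_mut() {
                    *x += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    println!("{:?}", shared.lock().unwrap());
}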
If you know the number of elements up front, you can use an Arc<Vec<Mutex<T>>> instead. This has one mutex per element, so threads lock only the entries they are working on (see the second sketch below). However, you won't be able to grow or shrink the Vec, since it is shared.
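A minimal sketch of that per-element layout (again with made-up values), where each thread locks only the entry it works on:
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // One mutex per element: threads contend only on the entries they touch.
    let data: Arc<Vec<Mutex<i32>>> = Arc::new((0..8).map(Mutex::new).collect());

    let handles: Vec<_> = (0..data.len())
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                // Lock only element i; the other elements stay available.
                let mut item = data[i].lock().unwrap();
                *item *= 2;
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }

    let doubled: Vec<i32> = data.iter().map(|m| *m.lock().unwrap()).collect();
    println!("{:?}", doubled);
}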
There are also more specialized structures in the Concurrency section of http://lib.rs, with varying semantics, that may fit your needs.

What is a simple definition for a CPU intensive application to answer in an interview? [closed]

I was asked during an online interview: What is a CPU-intensive application? Answer it in 3-4 lines (briefly). I just want a simple definition that can be explained with a few real-world examples.
Doesn't get any simpler than this forever loop:
while (true) {}
To explain this, just remember that Node.js is single-threaded, so anything that blocks the event loop is bound to turn the process into a CPU-hungry beast. Of course, the kernel is free to schedule the Node.js process as it sees fit, but the truth of the matter is that only that loop will get a chance to do anything meaningful in your program.
See also Don't Block the Event Loop

Which is the better approach for doing CRUD operations using CouchDB? [closed]

Can anybody help me figure out which is the better approach for doing queries in CouchDB?
Is it writing JavaScript views with map-reduce functions?
Or is it using Mango (MongoDB-style) queries in CouchDB?
You mostly want to use Mango. A lot of effort has been put into this newer feature to make queries easier to write.
It offers a lot of query functionality and, in the end, it is backed by views.
For some specific queries, you might still have to write map-reduce functions.

Examples of planning and search usage [closed]

What are applications where search techniques, or more specifically planning techniques, are used? I am most interested in examples that are actually in use.
I know that A* is used for path planning in robotics and that planning is used in logistics (details would be great), but what other uses are there?
For search in general, Google and similar engines come to mind with their inverted indices. Again, where else is it used?
For planning examples, including logistics challenges, take a look at this list. Each use case comes with multiple datasets and a problem definition.
