How do I avoid stack overflow at compile time? [closed] - rust

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 1 year ago.
General performance advice for Rust is to try to avoid placing things on the heap if possible.
An issue I am having is that I do not know where or when the stack size limit will be reached, until my program crashes unpredictably at runtime.
Two examples are:
Parsing deeply nested structs from JSON using Serde.
Creating many futures inside a function.
Questions:
Can I avoid this by detecting it at compile time?
How can I know what the limit of the stack is whilst I am writing code? Do others just know the exact size of their variables?
Why do people advise to try to avoid the heap?
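
One common mitigation for both examples above is to keep only a pointer on the stack and move the large or recursive data onto the heap. A minimal sketch follows; it is not from the original thread, and the Node type and the 8 MiB buffer size are made up purely for illustration:

use std::mem::size_of;

// Hypothetical parse result: boxing the recursive field keeps each value
// (and therefore each stack frame that holds one) small.
struct Node {
    value: u64,
    child: Option<Box<Node>>, // heap-allocated; only a pointer lives on the stack
}

fn main() {
    // Building a large buffer through Vec places it on the heap from the start,
    // so it never occupies the current stack frame.
    let big: Box<[u8]> = vec![0u8; 8 * 1024 * 1024].into_boxed_slice();

    let tree = Node {
        value: 1,
        child: Some(Box::new(Node { value: 2, child: None })),
    };

    println!("Node occupies {} bytes on the stack", size_of::<Node>());
    println!("buffer length: {}, root value: {}", big.len(), tree.value);
}

Large futures can be moved to the heap the same way with Box::pin, and if the default stack really is too small, std::thread::Builder::stack_size lets you spawn a thread with a larger one.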

Related

What is the best way to generate and run concurrent threads from a for loop over an Arc<Mutex<Vec>>? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
I'm running both a single-threaded and a multi-threaded version of an application. There is no speed advantage. That said, what is the best way to access an Arc<Mutex<Vec>> and process each entry concurrently?
You cannot process an Arc<Mutex<Vec<T>>> concurrently - the mutex wraps the entire vector, so no thread other than the one that locked it will be able to access it.
If you know the number of elements up front, you can use an Arc<Vec<Mutex<T>>> instead. This has a mutex per element, so each thread locks only the elements it is working on. However, you won't be able to grow or shrink the Vec, since it is shared.
There are also more specialized structures in the Concurrency section of http://lib.rs, with varying semantics, that may fit your needs.
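A minimal sketch of the per-element locking approach described above; the element count and the work done per element are made up for illustration:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // One Mutex per element; the Arc lets every thread share the fixed-size Vec.
    let data: Arc<Vec<Mutex<i32>>> = Arc::new((0..8).map(Mutex::new).collect());

    let handles: Vec<_> = (0..data.len())
        .map(|i| {
            let data = Arc::clone(&data);
            thread::spawn(move || {
                // Lock only the element this thread works on.
                let mut slot = data[i].lock().unwrap();
                *slot += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    let results: Vec<i32> = data.iter().map(|m| *m.lock().unwrap()).collect();
    println!("{:?}", results);
}

Each thread takes only the lock for its own index, so the locks do not contend with each other; the Vec itself is never mutated, which is why it cannot grow or shrink while shared.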

What is a simple definition for a CPU intensive application to answer in an interview? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I was asked during an online interview: What is a CPU-intensive application? Answer it in 3-4 lines (briefly). I just want a simple definition that can be explained with a few real-world examples.
Doesn't get any simpler than this forever loop:
while (true) {}
To explain this, just remember that Node.js is single-threaded, so anything that blocks the event loop is bound to turn the process into a CPU-hungry beast. Of course the kernel is free to schedule the Node.js process as it sees fit, but the truth of the matter is that only that loop will get a chance to do anything meaningful in your program.
See also Don't Block the Event Loop

How to create threads in Haskell? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
How do I create threads and assign tasks to them? Is there a way to do it, like
thread.start_new_thread(function, args[, kwargs])
in Python?
Thanks in advance.
Haskell threads can be spawned using forkIO.
I recommend also reading the GHC concurrency guide, since it has all the relevant pointers.

How is PLINQ better than traditional threading? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Currently I have n supplier web services, each of which gives me search results for a particular product. I am creating n threads myself and merging the final results returned by the suppliers. I have just come to know about PLINQ. I want to know whether it would help performance, and if so, how.
Better? That depends on what "better" means for you. PLINQ is definitely cleaner and more maintainable code for a lot of use cases. On the performance side, it depends on what you compare it against.
In your case, since you are creating n threads by hand, I would say you might be slower than PLINQ, because PLINQ will use the thread pool and avoid some of the thread-creation overhead.

Destroy a large amount of data as quickly as possible? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 12 years ago.
How would you go about securely destroying several hundred gigabytes of arbitrary data as quickly as possible?
Incinerating hard drives is a slow, manual (and therefore insecure) process.
Physically destroying the drives does not (necessarily) take a significant amount of time. Consider, for example, http://www.redferret.net/?p=14528.
I know the answer, but this seems like one of those questions best left unanswered unless you know why it's being asked.
