There is a saying that mark-compact is slower than mark-copy. But why, given that both algorithms need to move live objects?
As with just about anything to do with garbage collection, it is complicated.
The intuitive reasoning is that mark-compact requires an extra pass across all objects (reachable AND unreachable) to find the reachable objects that need to be compacted. In addition, it is (often) more work to fix up pointers in objects during compaction than during a from-space to to-space copy. (Or, if you don't compact, you pay performance penalties in other areas.)
However, in "Garbage Collection: Algorithms for Automatic Dynamic Memory Management", Jones and Lins present a theoretical analysis which shows that the relative efficiency of mark-sweep versus single generation copying collectors depends on the "residency" (r) of your application; i.e. the ratio of live data to the heap size. Above a certain residency r*, mark-sweep is more efficient.
(Note that r* will be less than 0.5. A single generation copying collector only works for r less than 0.5.)
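The residency tradeoff can be illustrated with a toy cost model. The sketch below is not taken from the book: the functions, cost constants, and the resulting crossover point are all assumptions, chosen only to show how a crossover residency r* can arise.

```rust
// Toy amortized-cost model (all constants are assumptions, not
// measurements) comparing a semispace copying collector against
// mark-sweep, in the spirit of the Jones & Lins analysis.

/// Copying collector: work is proportional to the live data L, and each
/// cycle reclaims H/2 - L words (half the heap minus the live data),
/// since only half the heap is usable between flips.
fn copy_cost_per_word(residency: f64) -> f64 {
    let c = 10.0; // assumed cost of copying one live word
    let h = 1.0;  // heap size normalized to 1
    let live = residency * h;
    c * live / (h / 2.0 - live)
}

/// Mark-sweep: marking touches the live data, sweeping touches the
/// whole heap; each cycle reclaims H - L words.
fn mark_sweep_cost_per_word(residency: f64) -> f64 {
    let m = 10.0; // assumed cost of marking one live word
    let s = 1.0;  // assumed cost of sweeping one heap word
    let h = 1.0;
    let live = residency * h;
    (m * live + s * h) / (h - live)
}

fn main() {
    // With these assumed constants the crossover r* lands near 0.08:
    // below it copying is cheaper per allocated word, above it
    // mark-sweep wins.
    for r in [0.05, 0.1, 0.2, 0.3] {
        println!(
            "r = {r:.2}: copy = {:.2}, mark-sweep = {:.2}",
            copy_cost_per_word(r),
            mark_sweep_cost_per_word(r)
        );
    }
}
```

Note how the copying collector's denominator (H/2 - L) explodes as residency approaches 0.5, which is the theoretical reason a single-generation copying collector needs r well below 0.5.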
If you really want to understand, I recommend that you buy the book. Or better still, the more recent, "The Garbage Collection Handbook: The Art of Automatic Memory Management" by Jones, Hosking and Moss.
As long as there are smart pointer types in Rust, are ownership and borrowing semantics really needed? If yes, in which cases are they used?
Yes!
Smart pointers come with quite a performance overhead. In many cases, a quick reference is all that is needed, for example if you want to pass a value to a function without moving it into the function. Creating a smart pointer just for that use case would really hurt performance.
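A minimal sketch of that point, contrasting a plain borrow with the smart-pointer alternative (the function names here are made up for illustration):

```rust
use std::rc::Rc;

// A function that only needs to read the value: a plain borrow
// suffices, with no heap indirection and no reference-count traffic.
fn length_by_ref(s: &str) -> usize {
    s.len()
}

// The smart-pointer version works too, but every hand-off via
// Rc::clone bumps a reference count, and the String sits behind an
// extra heap allocation.
fn length_by_rc(s: Rc<String>) -> usize {
    s.len()
}

fn main() {
    let owned = String::from("hello");

    // Borrowing: nothing is moved, `owned` is still usable afterwards.
    assert_eq!(length_by_ref(&owned), 5);
    println!("still own it: {owned}");

    let shared = Rc::new(owned);
    // Rc: an extra clone (refcount increment) just to call a function.
    assert_eq!(length_by_rc(Rc::clone(&shared)), 5);
}
```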
Rust's primary goals are performance and safety, and that's why Rust has ownership and borrowing semantics. Otherwise, there are many languages that follow the everything-is-a-smart-pointer principle (actually, many of them go even one step further and use a garbage collector). It's a valid principle for memory safety, but it comes with a performance hit.
Almost all languages are either memory safe or fast. Rust is unique in that sense as it tries to be both. And references/lifetimes are some of the principles that helped it to achieve that goal, at the cost of a steeper learning curve.
EDIT: Avoiding dangling pointers is just a small part of what the borrow checker can be used for. There are many more reasons to have it, like mutability control, fearless concurrency, slicing, and compile-time ownership checks (like avoiding duplicate pin usage in embedded). How much power the borrow checker really has is probably not even completely understood yet; many use cases it was never intended for have turned out to benefit from it. It's just a really useful tool in general.
What factors are important? How do you know if a given programming language is "simple" or "simpler" than another language?
I'm not sure if this is a fair question to ask, since different languages serve different purposes and it might not really be comparing apples to apples.
However, with that said, memory management would come to mind. One can argue that Java is a "simpler" language than C++, since it has a garbage collector that can deal with some of the complexities around memory management, instead of forcing you to do it yourself.
From my perspective, these are the points that define the complexity of a language.
Variation of syntax from common pseudocode and constructs
Ease of modeling real-life entities as objects
Methods of structure enforcement at compile time
Memory management strategy (allocation/deallocation)
Code reusability
Ease of managing code headers and directives
Built-in libraries
Relative installation package sizes
Data exchange capabilities, e.g. over a network or via files
Process handling, such as thread management
Relative brevity of the code
Speed of compilation
Developer community size and documentation
Open-source implementations
Platform dependence
And many more could be added to this list.
I am a firm believer in using immutability where possible so that classical synchronization is not needed for multi-threaded programs. This is one of the core concepts used in functional languages.
I was wondering what people think of this for CUDA programs. I know developing for GPUs is different from developing for CPUs, and being a GPU n00b, I'd like more knowledgeable people to give me their opinion on the matter at hand.
Thanks,
Gabriel
In CUDA programming, immutability is also beneficial, and sometimes even necessary.
For block-wise communication, immutability may allow you to skip some __syncthreads() calls.
For grid-wise communication, there is no whole-grid synchronization instruction at all. That is why, in the general case, guaranteeing that a change made by one block is visible to another block requires kernel termination. This is because blocks may be scheduled in such a way that they actually run in sequence (e.g. on a weak GPU unable to run more blocks in parallel).
Partial communication is however possible through atomic operations and __threadfence(). You can implement, for example, task queues, permitting blocks to fetch new assignments from them in a safe way. This kind of operation should however be done rarely, as atomics may be time-consuming (although with global L2 caching it is now better than on older GPUs).
This question has been flagged as irrelevant so I guess this has no real worth to anyone so I tried removing the question but the system won't let me so I am now truncating the content of this post ;)
I think you need to run the actual numbers for both scenarios:
On the fly
how long does one image take to generate and do you want the client to wait that long
do you need to pay by cpu utilization, number of CPUs etc. and what will this cost for X images thumbnailed Y times over 1 year
Stored
how much space will this use and what will it cost
how many files are there? Is the number bigger than the number of inodes in the destination file system, or is the total estimated size bigger than the file system's capacity?
It's mostly an economics question; there is no general yes/no answer. When in doubt, I'd probably go with storing them, since it's a computation-intensive task and it's not very efficient to do it over and over again. You could also do a hybrid solution: generate a thumbnail on the fly when it is first requested, then cache it until it hasn't been used for a certain number of days.
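Running the numbers could look like the minimal sketch below. Every figure in it is an assumed placeholder to be replaced with your own measurements and prices:

```rust
// Back-of-the-envelope model for the two strategies. All constants are
// assumptions for illustration only.

/// Yearly cost of generating on the fly: CPU is paid on every request.
fn on_the_fly_cost(images: f64, views_per_image: f64, cpu_per_thumb: f64) -> f64 {
    images * views_per_image * cpu_per_thumb
}

/// Yearly cost of pre-generating and storing: CPU once per image, plus
/// a year of storage for all thumbnails.
fn stored_cost(images: f64, cpu_per_thumb: f64, thumb_gb: f64, gb_year: f64) -> f64 {
    images * cpu_per_thumb + images * thumb_gb * gb_year
}

fn main() {
    let images = 100_000.0; // assumed number of source images
    let views = 50.0;       // assumed requests per thumbnail per year
    let cpu = 0.000_01;     // assumed $ of CPU per generation
    let size = 0.000_02;    // assumed ~20 KB per thumbnail, in GB
    let storage = 0.30;     // assumed $ per GB-year

    println!(
        "on the fly: ${:.2}/year, stored: ${:.2}/year",
        on_the_fly_cost(images, views, cpu),
        stored_cost(images, cpu, size, storage)
    );
}
```

With these particular assumptions storing wins easily, but flip the view count or the storage price and the answer flips too, which is the point: it's an economics question.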
TL;DR: number of inodes is probably your least concern.
I was asked this in an interview regarding operating systems: if we make virtual memory out of the hard disk, then why is accessing swap faster than accessing the hard disk?
Please help me understand the concept, or else redirect me to the proper forum.
First, as @Celada said, chances are your data will still be in memory (has not been swapped out) when you map your file to memory or put your data in memory. This may be faster than accessing your file or your data directly.
Second, OSes have very efficient swap algorithms that are probably better than yours. So, for example, if you need to read a very large file (maybe 2 GB or more), you would have to do a kind of swapping yourself, probably much more slowly than using the OS swap.
Third, in practice, system administrators usually put swap in a separate partition, or even on a separate disk or a faster device, so you can take advantage of that.
The hypothesis is nonsense. Accessing swap is not faster than accessing the hard disk if the swap is on the hard disk.