Smart pointers and ownership in Rust... why? [closed] - rust

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 months ago.
As long as there are smart pointer types in Rust, are ownership and borrowing semantics really needed? If yes, in which cases are they used, and how?

Yes!
Smart pointers come with a noticeable performance overhead. In many cases a plain reference is all that is needed, for example when you want to pass a value to a function without moving it into the function. Creating a smart pointer just for that use case would hurt performance for no benefit.
Rust's primary goals are performance and safety, and that's why Rust has ownership and borrowing semantics. There are many languages that follow the everything-is-a-smart-pointer principle (many of them actually go one step further and use a garbage collector). That is a valid approach to memory safety, but it comes with a performance hit.
Almost all languages are either memory safe or fast. Rust is unusual in that it tries to be both, and references/lifetimes are among the mechanisms that let it achieve that, at the cost of a steeper learning curve.
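As a quick illustration of that cost difference, here is a minimal sketch of my own (names are made up): a borrow passes a value to a function with no allocation and no reference counting, while an Rc adds both.

    use std::rc::Rc;

    // Takes a borrowed slice: just a pointer + length, no copy, no refcount.
    fn total_len(words: &[String]) -> usize {
        words.iter().map(|w| w.len()).sum()
    }

    fn main() {
        let words = vec!["smart".to_string(), "pointer".to_string()];

        // Borrowing: `words` is lent to the function and still usable afterwards.
        let n = total_len(&words);

        // The smart-pointer route: an extra heap allocation plus reference-count
        // bookkeeping, only worth paying for when you actually need shared ownership.
        let shared = Rc::new(words);
        let m = total_len(&shared); // deref coercion: &Rc<Vec<String>> -> &[String]
        println!("{} {}", n, m);
    }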
EDIT: Avoiding dangling pointers is only a small part of what the borrow checker is good for. There are many more reasons to have it, like controlled mutability, fearless concurrency, slicing, and other compile-time ownership checks (such as preventing duplicate pin usage in embedded code). How much power the borrow checker really has is probably not fully understood yet; many use cases it was never designed for have turned out to benefit from it. It's just a really useful tool in general.
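To make that concrete, here is a small sketch of my own (not taken from any particular codebase) showing two of those compile-time checks: exclusive mutable access and thread safety via Send.

    fn main() {
        let mut scores = vec![1, 2, 3];

        let first = &scores[0]; // shared borrow of the vector
        // scores.push(4);      // rejected at compile time: cannot mutate `scores`
        //                      // while the shared borrow `first` is still in use
        println!("first = {}", first);

        scores.push(4); // fine here: the shared borrow has ended

        // Fearless concurrency: `scores` is a plain Vec<i32>, which is `Send`,
        // so moving it into another thread compiles; a non-thread-safe type
        // such as Rc<i32> would be rejected on this same line.
        let handle = std::thread::spawn(move || scores.iter().sum::<i32>());
        println!("sum = {}", handle.join().unwrap());
    }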

Related

How does Rust enforce/implement RAII [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I'm working on a (maybe) serious programming language and want to learn about implementing memory management. I want this language to enforce RAII, similar to Rust, but, unlike Rust, this language is object-oriented, and I hope I can implement objects that manage their own memory (like Boxes in Rust). Can anyone go into detail about how Rust handles references to heap memory?
I think the most obvious way to implement classes is:
Your class variables are implemented as pointers, like in C# and Java.
There is a single owner of the object and all class variables have move semantics in order to enforce this, like in Rust.
Memory is a resource that needs to be cleaned up, so all class variables, after calling the destructor (if any) of the referent object, also call the deallocation routine of your memory allocator, like in C++.
You introduce lifetimes in your type system in order to ensure that lending/borrowing the object doesn't allow any non-owning references to outlive it, like in Rust.
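For reference, here is a small sketch of how I understand those four points playing out in Rust itself (the names are made up for illustration): Box owns a heap allocation, moves transfer that ownership, Drop runs deterministically when the owner goes out of scope, and borrows may not outlive the owner.

    struct Account {
        balance: i64,
    }

    impl Drop for Account {
        fn drop(&mut self) {
            // Runs exactly once, when the single owner goes out of scope;
            // the Box's heap allocation is released right after this.
            println!("closing account with balance {}", self.balance);
        }
    }

    // A borrow: a non-owning reference that the compiler proves cannot
    // outlive the owner it was created from.
    fn balance_of(acct: &Account) -> i64 {
        acct.balance
    }

    fn main() {
        let a = Box::new(Account { balance: 100 }); // heap allocation, `a` owns it
        let b = a;                                  // move: ownership transfers to `b`
        // println!("{}", a.balance);               // rejected: `a` has been moved out of

        println!("balance = {}", balance_of(&b));
    } // `b` goes out of scope: Drop::drop runs, then the heap memory is freed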

What exactly does it mean for a programming language to be simple? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
What factors are important? How do you know if a given programming language is "simple" or "simpler" than another language?
I'm not sure if this is a fair question to ask, since different languages serve different purposes and it might not really be comparing apples to apples.
However, with that said, memory management would come to mind. One can argue that Java is a "simpler" language than C++, since it has a garbage collector that can deal with some of the complexities around memory management, instead of forcing you to do it yourself.
From my perspective, these are the points that define the complexity of a language:
Variation of syntax from common pseudocode and constructs
Ease of developing a structure for real-life entities like objects
Methods of structure enforcement at compile time
Memory management strategy (allocation/deallocation)
Code reusability
Ease of managing code headers and directives
Inbuilt libraries
Relative installation package sizes
Data exchange capabilities, e.g. over networks or files
Process handling like thread management
Relative brevity of the code
Speed of compilation
Developer community size and documentation
Open-source implementations
Platform dependence
And many more could be added to this list.

Learning functional programming after other programming paradigms [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I have taught myself C, Python, Java and a few other procedural or object-oriented languages to an intermediate degree from resources on the internet (thanks SO :D) and in books. When I tried to learn Haskell, I couldn't wrap my head around what the code actually did.
Is there a better functional language for someone coming from a background in procedural or object oriented programming to learn? Are there any resources meant for people in my situation?
Thanks!
It probably varies from person to person (and this question is bound to get closed over that), but the way I see it: there isn't a stepping stone you need to stand on before Haskell is within reach.
So I'd say what drove you off temporarily wasn't necessarily the language, but your sources of learning. For the only truly gentle introduction, I recommend LYAH (Learn You a Haskell). It keeps things within reasonable difficulty and has some genuinely entertaining moments along the way.
However, if you still want to soften the transition somewhat, you can check out F#, which isn't a purely functional language but will give you a good taste of FP, and it will feel familiar because it still lets you live in an OO world.
You can also check out basically any other functional language and it will give you some of the mindset (Scala, ML, etc.).
Keep in mind that I say "soften somewhat", because Haskell is very different (especially because of purity), and it gives you a very logical and mathematical way of approaching things that has been unlike any other language I've learned. It's incredible. It went far beyond learning different syntax: it's a way of thinking about things, I keep finding more to learn, and a truly amazing part is that (since it's so logical, mathematical and reasoned) the new ways of thinking I acquire with Haskell stay with me both when I use other languages and even in my personal daily life.
That being said, the only thing truly horrible with Haskell is that it ruined me for other languages. I used to like C#... :(

CUDA, Immutability vs. Mutability [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I am a firm believer in using immutability where possible so that classical synchronization is not needed for multi-threaded programs. This is one of the core concepts used in functional languages.
I was wondering what people think of this for CUDA programs, I know developing for GPUs is different from developing for CPUs and being a GPU n00b I'd like more knowledgeable people to give me their opinion on the matter at hand.
Thanks,
Gabriel
In CUDA programming, immutability is also beneficial, and sometimes even necessary.
For block-wise communication, immutability may allow you to skip some __syncthreads() calls.
For grid-wise communication, there is no whole-grid synchronization instruction at all. That is why, in the general case, guaranteeing that a change made by one block is visible to another block requires kernel termination. Blocks may be scheduled in such a way that they actually run in sequence (e.g. on a weak GPU unable to run more blocks in parallel).
Partial communication is, however, possible through atomic operations and __threadfence(). You can implement, for example, task queues that let blocks fetch new assignments in a safe way. These kinds of operations should be done rarely, though, as atomics can be time-consuming (although with global L2 caching this is now better than on older GPUs).

Possible optimizations in Haskell that are not yet implemented in GHC? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
So, purely functional languages open up their own class of possibilities thanks to the clear separation between pure and impure code. I have seen several features that are somewhat simpler to implement in Haskell, like Nested Data Parallelism or Stream Fusion.
My question is, what are other improvements/optimizations that are more or less unique to Haskell in terms of feasibility/simplicity but not yet implemented? (I mostly care about GHC, but also love to hear about others)
One optimization I'd love to see in GHC is supercompilation. That seems unlikely in the near-future of GHC, though, because it's whole-program optimization, and GHC is very focused on module-at-a-time compilation.
Basically, supercompilation is executing as much of a program as possible at compile time. It naturally subsumes inlining, deforestation, specialization, and any number of other techniques. Early experimental results have been promising, but it's a very expensive process. It's hard to see it being a practical optimization, but the concept is ridiculously awesome.
Another issue that SPJ raises in his paper on modular supercompilation is combining supercompilation with unboxing: the opportunities for unboxing in a supercompiled program are significantly reduced, which causes a decrease in performance compared with the unsupercompiled program passed through GHC's strictness analyser/unboxer. See http://research.microsoft.com/en-us/um/people/simonpj/papers/supercompilation/
Another powerful but also "not yet ready for production use" technique is worker-wrapper transformation.
