Does the Rust language support constant values in generic code, similar to the way C++ does? The language overview doesn't seem to advertise it. Parameterizing types with constants in C++ makes it possible to create objects with preallocated buffers of different sizes depending on the client's needs (types like stlsoft::auto_buffer).
If not, what are the best practices for implementing similar designs in Rust?
No, this is not supported in a type-safe way. We would need type-level numeric literals, like GHC recently added, for that.
However, you can use Rust macros. With a macro you can create "templates" that are parameterized over arbitrary expressions, including constants, which would allow you to do what you want here. Note that you may find bugs and limitations in the macro system if you try this at the moment.
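To make the macro suggestion concrete, here is a minimal sketch; the macro name `make_buffer` and the generated type names are invented for illustration:

```rust
// A macro that stamps out a struct with a fixed-size buffer.
// Each invocation generates a distinct type with its own size,
// approximating the C++ auto_buffer-style design.
macro_rules! make_buffer {
    ($name:ident, $size:expr) => {
        struct $name {
            data: [u8; $size],
        }

        impl $name {
            fn new() -> $name {
                $name { data: [0u8; $size] }
            }
            fn capacity(&self) -> usize {
                $size
            }
        }
    };
}

// Two invocations, two types with different preallocated sizes.
make_buffer!(Buffer64, 64);
make_buffer!(Buffer4096, 4096);

fn main() {
    let buf = Buffer64::new();
    assert_eq!(buf.capacity(), 64);
    assert_eq!(Buffer4096::new().capacity(), 4096);
}
```

The cost relative to true type-level constants is that each size is a separate, unrelated type, so you cannot write code generic over the size.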
I'm having trouble understanding the usefulness of Rust enums after reading The Rust Programming Language.
In section 17.3, Implementing an Object-Oriented Design Pattern, we have this paragraph:
If we were to create an alternative implementation that didn’t use the state pattern, we might instead use match expressions in the methods on Post or even in the main code that checks the state of the post and changes behavior in those places. That would mean we would have to look in several places to understand all the implications of a post being in the published state! This would only increase the more states we added: each of those match expressions would need another arm.
I agree completely. It would be very bad to use enums in this case for the reasons outlined. Yet using enums was my first thought for a more idiomatic implementation. Later in the same section, the book introduces the concept of encoding the state of the objects using types, via variable shadowing.
It's my understanding that Rust enums can contain complex data structures, and different variants of the same enum can contain different types.
What is a real life example of a design in which enums are the better option? I can only find fake or very simple examples in other sources.
I understand that Rust uses enums for things like Result and Option, but those are very simple uses. I was thinking of some functionality with a more complex behavior.
This turned out to be a somewhat open-ended question, and I could not find a useful answer by searching Google. I'm happy to change this question to a more focused version if someone could be so kind as to help me rephrase it.
A fundamental trade-off between these choices in a broad sense has a name: "the expression problem". You should find plenty on Google under that name, both in general and in the context of Rust.
In the context of the question, the "problem" is to write the code in such a way that both adding a new state and adding a new operation on states does not involve modifying existing implementations.
When using a trait object, it is easy to add a state, but not an operation. To add a state, one defines a new type and implements the trait. To add an operation, naively, one adds a method to the trait but has to intrusively update the trait implementations for all states.
When using an enum for state, it is easy to add a new operation, but not a new state. To add an operation, one defines a new function. To add a new state, naively, one must intrusively modify all the existing operations to handle the new state.
If I explained this well enough, hopefully it should be clear that both will have a place. They are in a way dual to one another.
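A minimal Rust sketch of the two designs (the type and method names are illustrative, not from the book):

```rust
// Trait-object style: adding a state means adding a type and an impl;
// adding an operation means touching every impl.
trait State {
    fn name(&self) -> &'static str;
}
struct Draft;
struct Published;
impl State for Draft {
    fn name(&self) -> &'static str { "draft" }
}
impl State for Published {
    fn name(&self) -> &'static str { "published" }
}

// Enum style: adding an operation means adding one function;
// adding a state means touching every match.
enum PostState {
    Draft,
    Published,
}
fn state_name(s: &PostState) -> &'static str {
    match s {
        PostState::Draft => "draft",
        PostState::Published => "published",
    }
}

fn main() {
    let states: Vec<Box<dyn State>> = vec![Box::new(Draft), Box::new(Published)];
    assert_eq!(states[0].name(), "draft");
    assert_eq!(state_name(&PostState::Published), "published");
}
```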
With this lens, an enum would be a better fit when the operations on the enum are expected to change more than the alternatives. For example, suppose you were trying to represent an abstract syntax tree for C++, which changes every three years. The set of types of AST nodes may not change frequently relative to the set of operations you may want to perform on AST nodes.
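For instance, a toy AST along those lines might look like this (a hypothetical sketch, not real compiler code), where each new operation is just another function with a `match` and the variant set stays stable:

```rust
// A mini expression AST with a fixed set of node kinds.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Operation 1: evaluate the expression.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
    }
}

// Operation 2: pretty-print it. Adding this required no change
// to Expr or to eval.
fn show(e: &Expr) -> String {
    match e {
        Expr::Num(n) => n.to_string(),
        Expr::Add(a, b) => format!("({} + {})", show(a), show(b)),
    }
}

fn main() {
    let e = Expr::Add(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)));
    assert_eq!(eval(&e), 3);
    assert_eq!(show(&e), "(1 + 2)");
}
```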
With that said, there are solutions to the harder direction in both cases, but they remain somewhat more difficult. And which code must be modified may not be the primary concern.
I was reading about value and reference types, and one question I couldn't find a clear answer to is why primitives like int/double, etc. are not reference types, as strings are, for example.
I know strings/arrays/other objects can be pretty big compared to ints (which I saw was the primary argument for reference types), so is the only reason not to make those primitives reference types that it would be overkill?
This is only the case in some programming languages, and this is typically done as an optimization (in order to avoid the need to perform memory dereferences or allocations for such simple types). However, there are languages that make basic numeric types and programmer-defined objects look and behave identically, sometimes selecting between a true object and a simple object automatically in the compiler or interpreter to maintain efficiency when the object-like capabilities are not used.
Python and Scala are examples where basic integers and regular objects are indistinguishable; Java, C++, and C are examples where builtin types are distinct from programmer-defined types.
What are the underlying differences between F# tuples and structs? They can both hold multiple values, but I want to know when one is used over the other.
To be more specific, I am trying to pass a bunch of parameters through a bunch of functions with a new variable being accumulated each time. For example, I start with param1 and pass it into func1 which returns (param1,param2). This tuple (or struct) then gets passed to func2 which returns (param1,param2,param3), and so on.
My current thoughts are this: with a tuple, I can always hold just the right number of arguments, but I give up a uniform format for the data, and in the end I would have to pack and repack a tuple of about 10 elements. With a struct, I have the advantage of uniform parameters, but the problem is that I would have to null-initialize the fields at the beginning.
In F#, tuples are represented using Tuple<T1, T2>, which is a reference type. Structures, on the other hand, are value types, so they are allocated on the stack rather than on the heap (which can sometimes be faster). So my general rules are:
Tuples have nice syntactic support in F#, so use tuples by default because they make your code nicer. In most cases, the performance is similar and you do not need to worry about it (it depends on the use - tuples are not always slower).
When your tuples get more complicated (say, more than 3 elements), it makes sense to use a type with named members (like a record or an object type).
When allocating a large array of tuples or structs, it is better to use structs. You can either define your own struct or use standard .NET structs like KeyValuePair<K, V>.
To give some anecdotal evidence: in Deedle we use structs for the internals (a time series is stored as an array of structs), but not for the public API, which uses tuples.
Seeking a programming language. It must have the following qualities (in order of ascending length of feature in characters):
Compiled
Namespaces
Garbage collection
Omits OOP features!
Fixed number of types
Available on Mac OS X
First-class functions
Dynamic typing preferred
Closures (lexical scoping)
Can interface with C libraries (ncurses, etc)
Availability on linux a plus but not necessary
--
To give a little more context, I want to be able to use it to write command-line utilities for linux/BSD/Mac, which may or may not use existing C libraries (such as ncurses, etc).
Update for clarification:
Namespaces: I want to avoid having to name my function string_strip when I could create a new namespace called string and define in it a function named strip.
Omits OOP Features: There's definitely a difference between a language having a feature and me not using it, versus the language intentionally omitting it. If I wanted to use Go but without touching anything OOP-related, I couldn't use most of the standard library.
Fixed number of types: Why would a language without OOP give you the option of creating a custom "type"? What does a type even mean without OOP? It would probably just be used for composition of types, i.e. a Person = struct { Name, Age }, whereas you could do this with a Hash or Map just fine.
Dynamic typing preferred: Type inference is fine, I guess...
I'm not sure what you mean by namespaces, but aren't you describing Scheme?
Well, I'll try to put forth some languages that fit almost every single requirement:
Haskell (which is statically typed)
specifically the GHC distribution - it's compiled (or can emit LLVM code)
it uses modules which are kind of like Namespaces
it's garbage collected, it is not an OO language
I don't particularly understand 'fixed number of types', as Haskell gives you types, but you can create more, and Haskell supports algebraic types and pattern matching
it's available on Windows, Mac, and Linux
it has first class functions and closures (functional language after all)
and it can interface with C libraries.
Erlang
it has a bytecode compiler, and if you're on an Intel x86-family CPU, there is a native compiler called HiPE.
Dynamically typed
Not an OO language, it's near-functional
Has 8 primitives and 2 compound types - if you want a collection you're building a list or tuple of them
Is garbage collected
Has (immutable) closures
Has first class functions
Windows, Mac, Linux supported
Has packages which act as the namespace protectors
C bindings - Erlang has port drivers and Erlang Native Interface.
Check out Racket (based on Scheme).
It has an FFI. I've created FFI bindings for SQLite and ODBC with it, and I've found the FFI to be useful and convenient.
"Namespaces" is ambiguous to me. Racket has a module system, and it also has what it calls namespaces, which are first-class top-level environment objects.
It does not have "a fixed number of types". I don't understand that requirement at all.
What exactly is the difference? It seems like the terms can be used somewhat interchangeably, but reading the wikipedia entry for Objective-c, I came across:
In addition to C's style of procedural programming, C++ directly supports certain forms of object-oriented programming, generic programming, and metaprogramming.
in reference to C++. So apparently they're different?
Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data.
Metaprogramming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates programs.
Generic Programming: Writing a program that creates, transforms, filters, aggregates and otherwise manipulates data, but makes only the minimum assumptions about the structure of the data, thus maximizing reuse across a wide range of datatypes.
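The distinction is easiest to see in a language that keeps the two mechanisms separate. As a sketch in Rust (a language the question doesn't ask about, with names invented for illustration): generics manipulate data abstractly, while macros manipulate code.

```rust
// Generic programming: one function, minimal assumptions about the
// data (just that elements can be ordered), reusable across types.
fn largest<T: PartialOrd>(items: &[T]) -> &T {
    let mut best = &items[0];
    for item in items {
        if item > best {
            best = item;
        }
    }
    best
}

// Metaprogramming: a macro that writes functions for us.
macro_rules! define_square {
    ($name:ident, $n:expr) => {
        fn $name() -> i64 {
            $n * $n
        }
    };
}
define_square!(two_squared, 2);
define_square!(three_squared, 3);

fn main() {
    assert_eq!(*largest(&[1, 5, 3]), 5);
    assert_eq!(largest(&["a", "z", "m"]), &"z");
    assert_eq!(two_squared(), 4);
    assert_eq!(three_squared(), 9);
}
```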
As was already mentioned in several other answers, the distinction can be confusing in C++, since both Generic Programming and (static/compile time) Metaprogramming are done with Templates. To confuse you even further, Generic Programming in C++ actually uses Metaprogramming to be efficient, i.e. Template Specialization generates specialized (fast) programs from generic ones.
Also note that, as every Lisp programmer knows, code and data are the same thing, so there really is no such thing as "metaprogramming", it's all just programming. Again, this is a bit hard to see in C++, since you actually use two completely different programming languages for programming (C++, an imperative, procedural, object-oriented language in the C family) and metaprogramming (Templates, a purely functional "accidental" language somewhere in between pure lambda calculus and Haskell, with butt-ugly syntax, since it was never actually intended to be a programming language.)
Many other languages use the same language for both programming and metaprogramming (e.g. Lisp, Template Haskell, Converge, Smalltalk, Newspeak, Ruby, Ioke, Seph).
Metaprogramming, in a broad sense, means writing programs that yield other programs. For example, templates in C++ produce actual code only when instantiated. One can interpret a template as a program that takes a type as input and produces an actual function/class as output. The preprocessor is another kind of metaprogramming. Another made-up example of metaprogramming: a program that reads an XML file and produces SQL scripts according to the XML. Again, in general, a metaprogram is a program that yields another program, whereas generic programming is about types (including functions) parameterized, usually, by other types.
EDITED after considering the comments to this answer
I would roughly define metaprogramming as "writing programs to write programs" and generic programming as "using language features to write functions, classes, etc. parameterized on the data types of arguments or members".
By this standard, C++ templates are useful for both generic programming (think vector, list, sort...) and metaprogramming (think Boost and e.g. Spirit). Furthermore, I would argue that generic programming in C++ (i.e. compile-time polymorphism) is accomplished by metaprogramming (i.e. code generation from templated code).
Generic programming usually refers to functions that can work with many types. E.g. a sort function, which can sort a collection of comparables instead of one sort function to sort an array of ints and another one to sort a vector of strings.
Metaprogramming refers to inspecting, modifying or creating classes, modules or functions programmatically.
It's best to look at other languages, because in C++ a single feature supports both generic programming and metaprogramming (templates are very powerful).
In Scheme / Lisp, you can change the grammar of your code. People probably know Scheme as "that prefix language with lots of parentheses", but it also has very powerful metaprogramming techniques (hygienic macros). In particular, try / catch can be created, and even the grammar can be manipulated to a point (for example, here is a prefix-to-infix converter if you don't want to write prefix code anymore: http://github.com/marcomaggi/nausicaa). This is accomplished through metaprogramming: code that writes code that writes code. This is useful for experimenting with new paradigms of programming (the AMB operator plays an important role in non-deterministic programming; I hope AMB will become mainstream in the next 5 years or so...).
In Java / C#, you can have generic programming through generics. You can write one generic class that supports many other types. For instance, in Java, you can use Vector<Integer> to create a Vector of Integers, or parameterize Vector with your own class.
Where things get strange is that C++ templates are designed for generic programming. However, because of a few tricks, C++ templates are themselves Turing-complete. Using these tricks, it is possible to add new features to the C++ language through metaprogramming. It's convoluted, but it works. Here's an example which adds multiple dispatch to C++ through templates: http://www.eptacom.net/pubblicazioni/pub_eng/mdisp.html . The more typical example is Fibonacci at compile time: http://blog.emptycrate.com/node/271
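For comparison, the same flavor of compile-time computation can be sketched in Rust with `const fn` (an analogy chosen for brevity, not the C++ template technique the links describe):

```rust
// A recursive Fibonacci that the compiler can evaluate at compile time.
const fn fib(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fib(n - 1) + fib(n - 2),
    }
}

// This constant is computed entirely during compilation.
const FIB_10: u64 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
}
```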
Generic programming is a very simple form of metaprogramming, albeit not usually at runtime. It's more like the preprocessor in C, and relates more to template programming in most use cases and basic implementations.
In typed languages you'll often find yourself creating a few implementations of something where only the type is different. In languages such as Java this can be especially painful, since every class and interface defines a new type.
You can generate those classes by converting one into a string literal and then replacing the class name with a variable to interpolate.
Where generics are used in runtime it's a bit different, in that case it's simply variable programming, programming using variables.
The way to envisage that is simple: take two files, compare them, and turn anything different into a variable. Now you have only one file that is reusable. You only have to specify what's different, hence the name variable.
How generics came about is that not everything can be made variable, like the variable type you expect or a cast type. Often there would be a lot of file duplication where the only thing that varied was the variable types. This was a very common source of duplication. Although there are ways around it or to mitigate it, they aren't particularly convenient. Generics came along as a kind of variable variable, to allow making the variable type itself variable. Because the variable type is something normally expressed in the programming language that can now be specified at runtime, it is also considered metaprogramming, albeit a very simple case.
The effect of not having variability where you need it is to unroll your variables: instead of having a variable, you are forced to write an implementation for every possible value the variable could take.
As you can imagine, that is quite expensive. This would be very common when using any kind of reusable object storage library. These will accept any object, but in most cases people only want to store one type of object. If you put in a Shop object, which extends Object, and then want to retrieve it, the method signature on the storage object will simply return Object, but your code will expect a Shop object. This will break compilation with the downgrade of the object unless you cast it back up to Shop. This raises another conundrum: without generics there is no way to check for compatibility and ensure the object you are storing is a Shop class.
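The same conundrum can be sketched in Rust terms (the `Shop` type and the store API here are invented for illustration): with a type parameter, the element type is checked at compile time and no cast is needed on retrieval.

```rust
struct Shop {
    name: String,
}

// A generic store: the element type is a parameter, so `get` returns
// the concrete type directly, and mismatches are compile errors
// instead of runtime cast failures.
struct Store<T> {
    items: Vec<T>,
}

impl<T> Store<T> {
    fn new() -> Store<T> {
        Store { items: Vec::new() }
    }
    fn put(&mut self, item: T) {
        self.items.push(item);
    }
    fn get(&self, i: usize) -> &T {
        &self.items[i]
    }
}

fn main() {
    let mut shops: Store<Shop> = Store::new();
    shops.put(Shop { name: "corner shop".to_string() });
    // Comes back as &Shop, not an erased base type needing a cast.
    assert_eq!(shops.get(0).name, "corner shop");
}
```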
Java avoids metaprogramming and tries to keep the language simple, using OOP principles such as polymorphism instead to make flexible code. However, some pressing and recurring problems have presented themselves through experience, and these are addressed with the addition of minimal metaprogramming facilities. Java does not want to be a metaprogramming language, but sparingly imports concepts from there to solve the most nagging problems.
Programming languages that offer lavish metaprogramming facilities can be significantly more productive than languages that avoid it, barring special cases (reflection, OOP polymorphism, etc.). However, it often also takes a lot more skill and expertise to produce understandable, maintainable, and bug-free code. There is also often a performance penalty for such languages, with C++ being a bit of an oddball because it is compiled to native code.