Tuples in .NET 4.0: when should I use them? - c#-4.0

I have come across Tuples in .NET 4.0. I have seen a few examples on MSDN; however, it's still not clear to me what their purpose is and when I should use them.
Is the idea that if I want to create a collection of mixed types I should use a tuple?
Any clear examples out there I can relate to?
When did you last use them?
Thanks for any suggestions

Tuples are just a convenience for the developer while coding. If you want to return two pieces of information instead of one, you can use a Tuple for quick coding, but I recommend you create your own type containing both properties, with appropriate naming and documentation.
Tuples are not for mixing types in the way you imagine; they are compositions of other types. For example, a type that holds both an int and a string can be represented by a tuple: Tuple<int,string>.
Tuples come in many sizes, not just pairs.
I don't recommend using tuples in your final code, since their meaning is not clear.

Tuples can be used as multipart keys for dictionaries or in grouping statements because, although they are reference types, they implement structural equality and hashing: two tuples with the same components compare equal. Avoid using them to move data around, because they have poor language support in C#, and named values (classes, structs) are simpler to read than ordered values (tuples).

Related

Python can store different data types in a list or linked list

Python can store objects of multiple data types inside a list, while we cannot do the same in Java and C++.
What does Python do internally to allow that, and where can I read more about it?
To be fair, you can do the same in Java; you just need to use a more generic element type.
For example, in Python you can write:
array = []
array.append(1)
array.append("hi")
The equivalent code in Java would be:
List<Object> list = new ArrayList<>();
list.add(1);
list.add("hi");
String s = (String) list.get(1); // fetching requires a cast back to the concrete type
These two pieces of code are functionally equivalent; however, in Java we need a cast to the desired type when we fetch an element from the list. In Python, types are checked at runtime, so no explicit cast is needed.
Python is a fully object-oriented and dynamically typed language, so most things in Python behave this way: dicts, sets, and more complex data structures let you mix value types easily. Even the keys of a collection can be of different data types, as long as they implement the proper contract (hashing and equality). To learn more about the Python type system, see https://blog.daftcode.pl/first-steps-with-python-type-system-30e4296722af
I think the reason is that everything in Python is treated as an object: when we store data in a list, we are really storing references to objects, and references look the same regardless of the type of the object they point to.

Haskell data structure that is efficient for swapping elements?

I am looking for a Haskell data structure that stores an ordered list of elements and that is time-efficient at swapping pairs of elements at arbitrary locations within the list. It's not [a], obviously. It's not Vector because swapping creates new vectors. Which data structure is efficient at this?
The most efficient persistent data structures, with effectively constant-time (O(log32 n)) updates as well as cheap appending, counting and slicing, are based on the Array Mapped Trie algorithm. The Vector data structures of Clojure and Scala are based on it, for instance. The only Haskell implementation of that data structure that I know of is provided by the "persistent-vector" package.
The algorithm is quite young; it was first presented in the year 2000, which might be why not many people have heard of it. But it turned out to be such a universal solution that it was soon adapted for hash tables. The adapted version is called the Hash Array Mapped Trie, and it is likewise used in Clojure and Scala to implement their Set and Map data structures. It is more widespread in Haskell, with packages like "unordered-containers" and "stm-containers" built around it.
To learn more about the algorithm I recommend the following links:
http://blog.higher-order.net/2009/02/01/understanding-clojures-persistentvector-implementation.html
http://lampwww.epfl.ch/papers/idealhashtrees.pdf
Data.Sequence from the containers package would likely be a not-terrible data structure to start with for this use case.
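To make that concrete, here is a minimal sketch of a swap on Data.Sequence (the helper name swap is mine, not from the library; Seq.lookup needs a reasonably recent containers):

import qualified Data.Sequence as Seq
import Data.Sequence (Seq)

-- Swap the elements at positions i and j; out-of-range indices
-- leave the sequence unchanged.
swap :: Int -> Int -> Seq a -> Seq a
swap i j s =
  case (Seq.lookup i s, Seq.lookup j s) of
    (Just x, Just y) -> Seq.update i y (Seq.update j x s)
    _                -> s

-- swap 0 2 (Seq.fromList "abcde")  ==  Seq.fromList "cbade"

Both Seq.lookup and Seq.update are O(log(min(i, n - i))), so a single swap stays logarithmic instead of forcing a rebuild of the whole vector.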
Haskell is a (nearly) pure functional language, so any update to a data structure produces a new version rather than mutating in place; sharing the unchanged parts (and the elements themselves) is close to the best you can do. Also, the new structure is lazily evaluated, and typically only its spine needs to be built until you demand the data. If the number of updates is small compared to the number of elements, you could keep a "difference" layer: check a sparse collection of updates first, and only fall back to the original vector when an index isn't there.

Erlang: Extracting values from a Key/Value tuple

I have a tuple list that looks like this:
{[{<<"id">>,1},
{<<"alerts_count">>,0},
{<<"username">>,<<"santiagopoli">>},
{<<"facebook_name">>,<<"Santiago Ignacio Poli">>},
{<<"lives">>,{[{<<"quantity">>,8},
{<<"max">>,8},
{<<"unlimited">>,true}]}}]}
I want to know how to extract properties from that tuple. For example:
get_value("id",TupleList), %% should return 1.
get_value("facebook_name",TupleList), %% should return "Santiago Ignacio Poli".
get_value("lives"), %% should return another TupleList, so i can call
get_value("quantity",get_value("lives",TupleList)).
I tried to match all the "properties" to a record called "User" but I don't know how to do it.
To be more specific: I used the Jiffy library (github.com/davisp/jiffy) to parse a JSON document. Now I want to obtain a value from that JSON.
Thanks!
The first strange thing is that the whole term is a single-item tuple: the [{Key, Value}] list is wrapped in {} (this is simply how jiffy represents a JSON object). So let's bind all that stuff you posted to a variable called Stuff and pull the list out:
{KVList} = Stuff
Good start. Now we are dealing with a {Key, Value} type list. With that done, we can now do:
lists:keyfind(<<"id">>, 1, KVList)
or alternately:
proplists:get_value(<<"id">>, KVList)
...and we would get the first answer you asked about. (Note the difference in what the two return when the Key isn't in KVList before you copy-paste code from here: lists:keyfind/3 returns false, while proplists:get_value/2 returns undefined.)
A further examination of this particular style of question gets into two distinctly different areas:
The Erlang docs for modules that operate on {Key, Value} data (hint: lists, proplists, orddict, and any other module built on the same concept, all in the standard library, are good candidates for research), including basic filter and map operations.
The underlying concept of data structures as semantically meaningful constructs. Honestly, I don't see a lot of deliberate thought given to this in the functional programming world outside advanced type systems (like in Haskell, or what Dialyzer tries hard to give you). The best place to learn about this is relational database concepts -- once you know what "5NF" really means, then come back to the real world and you'll have a different, more insightful perspective, and problems like this won't just be trivial, they will beg for better foundations.
You should look into the proplists module and its proplists:get_value/2 function.
You just need to think about how it should behave when the key is not present in the list (or whether the default proplists behaviour of returning undefined is satisfactory).
And two notes:
since your keys are binaries, you should use <<"id">>, not "id", in your calls
proplists works on lists, but the data you presented is a list inside a one-element tuple, so you first need to extract it from your Data:
{PropList} = Data,
Id = proplists:get_value(<<"id">>, PropList),

UML metamodel: derived, derived union and subsetting

If you have ever worked with the UML metamodel, you probably know the concepts of unions and subsets. As far as I understand it:
Attributes and associations of an element/class marked as "derived union" cannot be used directly. In more specific sub-classes, you can possibly find subsets of them that can be used, as long as they are not marked as derived unions themselves.
"derived" (without union) attributes and associations have also subsets in more specific classes, but unlike above you can use them directly without having to look for subsets in more specific classes
My questions:
Does this make sense or am I on the wrong track here?
What is the meaning of the "/" (slash) you can find in front of some attributes/associations, e.g. /general : Classifier[*]? Does it mean that they have subsets in child classes?
A union property is a property that consists of multiple other properties. You can only understand the union when you combine all of its subsets. A list is almost by definition a union.
Almost, because it might be uninitialized.
A derived union is a property requiring a specific collection of subsets. I would not talk about accessing it directly, but about how directly you can understand it: you need all the information before you can make an interpretation.
The difference between the two is that a derived union requires specific subsets, while a plain union might have a subset, and might have different subsets in different contexts. A very simple example is the fields on a form: the required fields show the definition of a derived union, while all the other fields are part of the complete union.
Derived unions can contain derived unions among their subsets. This directs the creation of classes and their instances; it does not make them impossible.
All derived features require other features to be known. Temperature can be read directly, but knowing whether someone has a fever requires more knowledge, like the time of day, where the reading was taken, etc.
The slash in front of an attribute or association, as in /general : Classifier[*], means that it is derived.

What's the most idiomatic approach to multi-index collections in Haskell?

In C++ and other languages, add-on libraries implement multi-index containers, e.g. Boost.MultiIndex. That is, a collection that stores one type of value but maintains multiple different indices over those values. These indices provide different access methods and sorting behaviors, e.g. map, multimap, set, multiset, array, etc. The run-time complexity of the multi-index container is generally the sum of the individual indices' complexities.
Is there an equivalent for Haskell or do people compose their own? Specifically, what is the most idiomatic way to implement a collection of type T with both a set-type of index (T is an instance of Ord) as well as a map-type of index (assume that a key value of type K could be provided for each T, either explicitly or via a function T -> K)?
I just uploaded IxSet to hackage this morning,
http://hackage.haskell.org/package/ixset
ixset provides sets which have multiple indexes.
ixset has been around for a long time as happstack-ixset. This version removes the dependencies on anything happstack specific, and is the new official version of IxSet.
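For context, the usual usage pattern looks roughly like this (a sketch based on the ixset documentation; the exact derivations and class details are assumptions and may differ slightly between versions):

{-# LANGUAGE DeriveDataTypeable #-}
import Data.IxSet
import Data.Data (Data, Typeable)

data Person = Person { name :: String, age :: Int }
  deriving (Eq, Ord, Show, Data, Typeable)

-- Newtype wrappers keep the two indexes distinct.
newtype Name = Name String deriving (Eq, Ord, Data, Typeable)
newtype Age  = Age Int     deriving (Eq, Ord, Data, Typeable)

instance Indexable Person where
  empty = ixSet [ ixFun (\p -> [Name (name p)])
                , ixFun (\p -> [Age  (age p)]) ]

people :: IxSet Person
people = foldr insert empty [Person "Ada" 36, Person "Bob" 41]

-- Query the same set through either index:
adas    = people @= Name "Ada"
sameAge = people @= Age 36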
Another option would be kdtree:
darcs get http://darcs.monoid.at/kdtree
kdtree aims to improve on IxSet by offering greater type-safety and better time and space usage. The current version seems to do well on all three of those aspects -- but it is not yet ready for prime time. Additional contributors would be highly welcomed.
In the trivial case where every element has a unique key that's always available, you can just use a Map and extract the key to look up an element. In the slightly less trivial case where each value merely has some (not necessarily unique) key available, a simple solution would be something like Map K (Set T). Looking up a specific element then involves extracting its key, indexing into the Map to find the set of elements that share that key, and then looking up the one you want.
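A minimal sketch of that nesting, assuming a caller-supplied key function toKey :: t -> k (the function names below are illustrative, not from any library):

import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.Map (Map)
import Data.Set (Set)

-- Insert a value under its derived key.
insertByKey :: (Ord k, Ord t) => (t -> k) -> t -> Map k (Set t) -> Map k (Set t)
insertByKey toKey x = Map.insertWith Set.union (toKey x) (Set.singleton x)

-- All values that share a given key (empty set if the key is absent).
lookupByKey :: Ord k => k -> Map k (Set t) -> Set t
lookupByKey = Map.findWithDefault Set.empty

-- Membership test for one specific value: derive its key, then check the set.
memberByKey :: (Ord k, Ord t) => (t -> k) -> t -> Map k (Set t) -> Bool
memberByKey toKey x m = Set.member x (lookupByKey (toKey x) m)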
For the most part, if something can be done straightforwardly in the above fashion (simple transformation and nesting), it probably makes sense to do it that way. However, none of this generalizes well to, e.g., multiple independent keys or keys that may not be available, for obvious reasons.
Beyond that, I'm not aware of any widely-used standard implementations. Some examples do exist, for example IxSet from happstack seems to roughly fit the bill. I suspect one-size-kinda-fits-most solutions here are liable to have a poor benefit/complexity ratio, so people tend to just roll their own to suit specific needs.
Intuitively, this seems like a problem that might work better not as a single implementation, but rather a collection of primitives that could be composed more flexibly than Data.Map allows, to create ad-hoc specialized structures. But that's not really helpful for short-term needs.
For this specific question, you can use a Bimap. In general, though, I'm not aware of any common class for multimaps or multiply-indexed containers.
I believe that the simplest way to do this is simply with Data.Map. Although it is designed around a single index, when you insert the same element under multiple keys, most compilers (certainly GHC) will make the entries point to the same value, so it is shared rather than copied. A separate multimap implementation wouldn't necessarily be more efficient: since you want to find elements based on their index, you cannot naively associate each element with its list of keys, say [([key], value)], as looking anything up in that would be very inefficient.
However, I have not looked at the Boost implementations of Multimaps to see, definitively, if there is an optimized way of doing so.
Have I got the problem straight? Both T and K have an order. There is a function key :: T -> K but it is not order-preserving. It is desired to manage a collection of Ts, indexed (for rapid access) both by the T order and the K order. More generally, one might want a collection of T elements indexed by a bunch of orders key1 :: T -> K1, .. keyn :: T -> Kn, and it so happens that here key1 = id. Is that the picture?
I think I agree with gereeter's suggestion that the basis for a solution is just to maintain, in sync, a bunch of maps (Map K1 T, .., Map Kn T). Inserting a key-value pair in a map duplicates neither the key nor the value, allocating only the extra heap required to make a new entry in the right place in the index. Inserting the same value, suitably keyed, into multiple indices should not break sharing (even if one of the keys is the value itself). It is worth wrapping the structure in an API which ensures that any subsequent modifications to the value are computed once and shared, rather than recomputed for each entry in an index.
Bottom line: it should be possible to maintain multiple maps, ensuring that the values are shared, even though the key-orders are separate.
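As a rough sketch of that idea for the two-index case in the question, here is one way to wrap a Set-style index on T and a Map-style index over key :: T -> K behind a single insert (all names are illustrative):

import qualified Data.Map as Map
import qualified Data.Set as Set
import Data.Map (Map)
import Data.Set (Set)

-- Two indices kept in sync behind one API; each inserted value is shared
-- between the two structures, not copied.
data Indexed k t = Indexed
  { byValue :: Set t      -- the T order
  , byKey   :: Map k t    -- the K order, via the key function
  }

emptyIndexed :: Indexed k t
emptyIndexed = Indexed Set.empty Map.empty

insertIndexed :: (Ord k, Ord t) => (t -> k) -> t -> Indexed k t -> Indexed k t
insertIndexed key x (Indexed s m) =
  Indexed (Set.insert x s) (Map.insert (key x) x m)

lookupK :: Ord k => k -> Indexed k t -> Maybe t
lookupK k = Map.lookup k . byKey

memberT :: Ord t => t -> Indexed k t -> Bool
memberT x = Set.member x . byValue

Extending this to several key functions just means adding one more Map field per index and updating each of them in insertIndexed.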
