I have a Groovy map which holds values like this:
Map testMap = [1:[1,1,1],2:[2,2,2]]
When I call the collect method like this:
testMap.collect {it.value}
I get output like
[[1, 1, 1], [2, 2, 2]]
But I want the output to be [1,1,1,2,2,2].
Is there any Groovy method to achieve this, without using the each method?
You can use Groovy's flatten() method:
testMap.collect {it.value}.flatten()
There is an even simpler solution:
[1:[1,1,1],2:[2,2,2]].collectMany{it.value}
A couple of the solutions here use the collect family, which is fine if you're transforming the data, but in this case you're just grabbing all the values, which is better served by using the spread operator on the values of the map:
testMap*.value.flatten()
or with methods on the map:
testMap.values().flatten()
Note that it's value when spreading over each entry of the map, and values() when asking the map directly for all the entries' values in one go.
Both of these read more as "getting values out of testMap", whereas collect is usually used for transforming the data.
It's a matter of style, but if you only use collect when you're going to transform the data, you'll have more readable code.
I'm currently researching different ways to combine, organize and manipulate data for different purposes.
Just today I found this zip() / iter() technique for creating a list of tuples while also being able to specify how many elements are in each tuple. However, I'm unable to fully understand part of the syntax.
Here is the code:
mylist = [1,2,3,4,5,6]
converted = [x for x in zip(*[iter(mylist)]*2)]
print(converted)
This is the output (which is what I want):
[(1, 2), (3, 4), (5, 6)]
What I'm trying to grasp is the first asterisk. I understand that it's most likely related to the *2, telling the iter or zip function how many elements each tuple should contain; however, I'm trying to grasp the need for its placement.
Any help is greatly appreciated.
Also, if you know of another technique to accomplish this and feel like sharing, I'd greatly appreciate learning from you.
Thanks again in advance guys!
Basically, iter(mylist) makes an iterator object for the list. It's then put into a list, [iter(mylist)], which is multiplied by 2, making a list that contains two references to the same iterator object: [iter(mylist)]*2 -> [<list_iterator object at 0x7f8fbc1ac610>, <list_iterator object at 0x7f8fbc1ac610>]
The first asterisk unpacks the list as arguments into the zip() function.
To make it easier to understand (as I'm not very good at explaining things), this code does the same as yours:
mylist = [1,2,3,4,5,6]
iterators = [iter(mylist)] * 2
converted = [x for x in zip(*iterators)]
print(converted)
So it makes an iterator, then it makes a list that contains two references to the same iterator object by multiplying it by 2.
It then unpacks that list to be used as the arguments for the zip() function.
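To see concretely that both entries point at the same iterator, here's a quick extra demo (just my own illustration, not part of your code) that advances the shared iterator by hand before handing it to zip():
mylist = [1, 2, 3, 4, 5, 6]
it = iter(mylist)
pair_source = [it, it]          # same as [iter(mylist)] * 2
print(next(pair_source[0]))     # 1 -- advances the shared iterator
print(next(pair_source[1]))     # 2 -- continues where the first call left off
print(list(zip(*pair_source)))  # [(3, 4), (5, 6)] -- zip pairs up the rest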
I hope this cleared it up at least a little for you.
I am a student doing scientific calculations. Usually I use the odeint function to solve differential equations; now I need to solve a system of differential equations with 100 variables. If I follow my previous programming style in Python, it looks like this:
def XFunction(X, t, sets):
    x1, x2, x3, x4, ..., x100 = X
    lambd = sets
    return np.array([equation1, equation2, equation3, ..., equation100])
But this method takes too long, is there a more efficient way to do this?
Yes. Using integer suffixes like that indicates that you probably want to use a sequence like a list or array, though a mapping like a dict could also work. So instead of x1, x2, x3..., you write X[0], X[1], X[2]... when you need them, without pulling them out into locals first. X might already be an array in your program.
If it's just an iterable and not a sequence, you can save it in a list first:
X = [*X]
Which lets you use the subscript operator X[i].
You don't normally "declare" variables in Python; that's implied by assignment, although you can declare one without assignment by giving it a (type) annotation.
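For example (my own one-liner, not from your code):
x: int    # annotation-only "declaration": no value is bound yet
x = 5     # the assignment is what actually creates the variable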
The [equation1, ...] part could perhaps be done with a list comprehension, which is like a mathematical set comprehension, but ordered.
Here's a stupid example with a single map and filter step. (You can have multiple filters or no filters, but you must use at least one loop.)
[x**2 for x in X if x % 2 == 0]
This list comprehension would generate a list of all squares of the elements of X where the element was even.
I don't know what set of formulae you need for your application, but if it can be parameterized by X, you can do it this way.
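Putting that together, here is a rough sketch of what an odeint-friendly version could look like. The system itself (the -lambd * X[i] + X[(i + 1) % N] coupling) is invented purely for illustration, since your actual equations aren't shown:
import numpy as np
from scipy.integrate import odeint

N = 100

def XFunction(X, t, lambd):
    # X is already an array of length N; index it instead of unpacking into x1 ... x100.
    # The equations below are placeholders: dX[i]/dt = -lambd * X[i] + X[(i + 1) % N]
    return np.array([-lambd * X[i] + X[(i + 1) % N] for i in range(N)])

X0 = np.ones(N)                                   # initial conditions
t = np.linspace(0, 10, 200)                       # time grid
solution = odeint(XFunction, X0, t, args=(0.5,))  # 0.5 is a made-up lambd
print(solution.shape)                             # (200, 100)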
I have a list of tuples of integers [(2,10), [...] (4,11),(3,9)].
Tuples are added to the list as well as deleted from the list regularly. It will contain up to ~5000 Elements.
In my code I need to use this list sometimes sorted according to the first and sometimes to the second tuple-element. Hence ordering of the list will change drastically. Resorting might take place at any time.
Python's Timsort is only fast when lists are already heavily sorted, so this general approach of frequent resorting might be inefficient. A better approach would be to use two naturally sorted data structures like SortedList. But here I would need two lists (one for the first tuple element and one for the second) as well as a dictionary to create the mapping of the above tuples.
What is the pythonic way to solve this?
In Java I would do it like this:
TreeSet<Integer> leftTupleEntry = new TreeSet<Integer>();
TreeSet<Integer> rightTupleEntry = new TreeSet<Integer>();
HashMap<Integer, Integer> tupleMap = new HashMap<Integer, Integer>();
And have both sorting strategies in the best runtime complexity class as well as the necessary connection between both numbers.
When I need to sort according to the first tuple element, I need to access the whole list (as I need to calculate a cumulative sum, among other operations).
When I need to sort according to the second element, I'm only interested in the smallest elements, which is then usually followed by the deletion of those tuples.
Typically, after any insertion a new sort according to the first element is requested.
first_element_list = sorted([i[0] for i in list_tuple])
second_element_list = sorted([i[1] for i in list_tuple])
What I did:
I used a SortedKeyList sorted according to the first tuple element. Inserting into this list is O(log(n)). Reading from it is O(log(n)) too.
from operator import itemgetter
from sortedcontainers import SortedKeyList
self.list = SortedKeyList(key=itemgetter(0))
self.list.add((1,4))
self.list.add((2,6))
When I need the argmin according to the second tuple element, I used
np.argmin(self.list, axis=0)[0]
Which is O(n). Not optimal.
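One way to get the second-element minimum in O(log(n)) as well is the two-structure idea from the question: keep a second SortedKeyList keyed on the second element and keep both in sync on every add/remove. A sketch (the helper names are made up for illustration):
from operator import itemgetter
from sortedcontainers import SortedKeyList

# Two views over the same tuples, one per sort key.
by_first = SortedKeyList(key=itemgetter(0))
by_second = SortedKeyList(key=itemgetter(1))

def add(pair):
    by_first.add(pair)
    by_second.add(pair)

def pop_smallest_by_second():
    pair = by_second[0]          # smallest second element
    by_second.remove(pair)
    by_first.remove(pair)
    return pair

add((2, 10))
add((4, 11))
add((3, 9))
print(list(by_first))            # [(2, 10), (3, 9), (4, 11)]
print(pop_smallest_by_second())  # (3, 9)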
I want to understand: what is the internal design strategy such that it cannot allow element insertion in a set?
The following link describes that a set is implemented using a dictionary, where every element of the set is a key:
https://docs.python.org
So why does it not support a similar operation to update in a dictionary?
Yes you can, look:
>>> a=set()
>>> a.add(1)
>>> a
{1}
>>> a.update([2,3,4,5])
>>> a
{1, 2, 3, 4, 5}
>>>
I'll take a shot at this... Sets are implemented using dictionaries, but they behave a bit differently. However, what do you mean by "can not allow element insertion"? You can insert elements using .update() and .add() (see the documentation: https://docs.python.org/3/library/stdtypes.html#set).
Unless you are referring to immutable sets (i.e. frozenset), in which case that is an entirely different type whose goal is to be, well, immutable, so it doesn't allow adding or updating values.
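For instance (a quick demo of my own), frozenset simply has no add() or update():
fs = frozenset([1, 2, 3])
fs.add(4)    # AttributeError: 'frozenset' object has no attribute 'add'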
I would like to prepend an element to an iterator. Specifically, I would like to create an iterator that steps through the sequence [2, 3, 5, 7, 9, ...] up to some maximum. The best I've been able to come up with is
range_step_inclusive(2,2,1).chain(range_step_inclusive(3, max, 2))
But the first iterator is kind of a hack to get the single element 2 as an iterator. Is there a more idiomatic way of creating a single-element iterator (or of prepending an element to an iterator)?
This is the exact use case of std::iter::once.
Creates an iterator that yields an element exactly once.
This is commonly used to adapt a single value into a chain of other kinds of iteration. Maybe you have an iterator that covers almost everything, but you need an extra special case. Maybe you have a function which works on iterators, but you only need to process one value.
range_step_inclusive is long gone, so let's also use the inclusive range syntax (..=):
iter::once(2).chain((3..=max).step_by(2))
You can use the Option by-value iterator, into_iter:
Some(2).into_iter().chain((3..).step_by(2))
It is not less boilerplate, but I guess it is clearer:
Repeat::new(2i).take(1).chain(range_step_inclusive(3, max, 2))
Repeat::new will create an endless iterator from the value you provide. Take will yield just the first value of that iterator. The rest is nothing new.
You can run this example on the playpen following the link: http://is.gd/CZbxD3