Consider a simple nested config
>>> p = {'a': {'b': 1.0, 'c': 2.0}}
>>> jax.tree_flatten(p)
([1.0, 2.0], PyTreeDef({'a': {'b': *, 'c': *}}))
How can I get labels like ['a.b', 'a.c'] (or anything else reasonable) in the same order as the flattened leaves returned by tree_flatten?
There is no mechanism for this built into jax.tree_util. In a way, the question is ill-posed: tree flattening is applicable to a far more general class of objects than nested dicts as in your example; you can even define pytree flattening for any arbitrary object (see https://jax.readthedocs.io/en/latest/pytrees.html#extending-pytrees), and it's not clear to me how you'd construct labels for the flattened leaves in this general case.
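To see why labels are hard in the general case: any class can be registered as a pytree node, and its leaves need not have any natural string name. A minimal sketch of extending pytrees (the Point class and its helper functions are my own illustration, not part of JAX):

import jax

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

def _flatten_point(p):
    return (p.x, p.y), None              # (children, auxiliary data)

def _unflatten_point(aux, children):
    return Point(*children)

jax.tree_util.register_pytree_node(Point, _flatten_point, _unflatten_point)

leaves, treedef = jax.tree_util.tree_flatten({'origin': Point(0.0, 1.0)})
print(leaves)  # [0.0, 1.0] -- there is no obvious label to attach to each leaf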
If you're only concerned with nested dicts and you want to generate these kinds of flattened labels, your best bet is probably to write your own Python code to construct the flattened keys and values; for example something like this might work:
p = {'a': {'b': 1.0, 'c': 2.0}}

def flatten(p, label=None):
    if isinstance(p, dict):
        for k, v in p.items():
            yield from flatten(v, k if label is None else f"{label}.{k}")
    else:
        yield (label, p)

print(dict(flatten(p)))
# {'a.b': 1.0, 'a.c': 2.0}
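For what it's worth, the same helper handles deeper nesting and mixed leaves too:

print(dict(flatten({'a': {'b': {'c': 3.0}}, 'd': 4.0})))
# {'a.b.c': 3.0, 'd': 4.0}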
Merge two dictionaries of dictionaries
My question is similar to this one, but the answers don't produce the right result (for me?).
Take these dictionaries:
a = {'a': {'a': 1}}
b = {'a': {'b': 2}}
I want to produce:
c = {'a': {'a': 1, 'b': 2}}
Using one of the answers from the quoted question, e.g.:
>>> c = a.copy()
>>> c.update(b)
>>> c
{'a': {'b': 2}}
Consider that a and b might be more complex than this, for example:
a = {'a': {'aa': {'aaa': 1}, 'bb': {'bbb': 2}}}
b = {'a': {'bb': {'aaa': 1}, 'bb': {'bbb': 2}}}
In this case you can use
>>> a['a'].update(b['a'])
>>> a
{'a': {'a': 1, 'b': 2}}
Each element in the dictionary is itself a dictionary, so you can treat it as one.
As for the more complex example, I don't know what the result should be, but in general you can walk the nested dictionaries with nested for loops, as sketched below.
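For example, a recursive merge along those lines might look like the following sketch (my own illustration; the deep_merge name is mine, and non-dict values from b simply overwrite those in a):

def deep_merge(a, b):
    # Return a new dict: values that are dicts on both sides are merged
    # recursively; otherwise the value from b wins.
    merged = dict(a)
    for key, b_val in b.items():
        a_val = merged.get(key)
        if isinstance(a_val, dict) and isinstance(b_val, dict):
            merged[key] = deep_merge(a_val, b_val)
        else:
            merged[key] = b_val
    return merged

print(deep_merge({'a': {'a': 1}}, {'a': {'b': 2}}))
# {'a': {'a': 1, 'b': 2}}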
I have the following for instance:
x = [{'A':1},{'A':1},{'A':2},{'B':1},{'B':1},{'B':2},{'B':3},{'C':1},{'D':1}]
and I would like to get a dictionary like this:
x = [{'A': [1,2], 'B': [1,2,3], 'C':[1], 'D': [1]}]
Do you have any idea how I could get this please?
You could use a collections.defaultdict of sets to collect unique values, then convert the final result to a dictionary with values as lists using a dict comprehension:
from collections import defaultdict
lst = [{'A':1},{'A':1},{'A':2},{'B':1},{'B':1},{'B':2},{'B':3},{'C':1},{'D':1}]
result = defaultdict(set)
for dic in lst:
    for key, value in dic.items():
        result[key].add(value)
print({key: list(value) for key, value in result.items()})
Output:
{'A': [1, 2], 'B': [1, 2, 3], 'C': [1], 'D': [1]}
Although it's probably better to add your data directly to the defaultdict to begin with, instead of creating a list of singleton dictionaries (not a data structure I'd recommend) and then converting the result.
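For instance, assuming the data could arrive as plain (key, value) pairs instead (an assumed format, purely for illustration), you could collect it directly:

from collections import defaultdict

pairs = [('A', 1), ('A', 1), ('A', 2), ('B', 1), ('B', 2), ('B', 3), ('C', 1), ('D', 1)]

result = defaultdict(set)
for key, value in pairs:
    result[key].add(value)   # the set drops duplicates for free

print({key: sorted(value) for key, value in result.items()})
# {'A': [1, 2], 'B': [1, 2, 3], 'C': [1], 'D': [1]}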
Using dict.setdefault
Ex:
x = [{'A':1},{'A':1},{'A':2},{'B':1},{'B':1},{'B':2},{'B':3},{'C':1},{'D':1}]
res = {}
for i in x:
    for k, v in i.items():
        res.setdefault(k, set()).add(v)
#or res = [{k: list(v) for k, v in res.items()}]
print(res)
Output:
{'A': {1, 2}, 'B': {1, 2, 3}, 'C': {1}, 'D': {1}}
I have the following data:
a = {1: {'data': 243}, 2: {'data': 253}, 4: {'data':243}}
And I want to turn it around, so that the key is the values, and the data values is the keys. So first try:
b = dict(map(lambda id: (a[id]['data'], id), a))
But when I do this, the 1 gets overwritten by the 4, so result will be:
{243: 4, 253: 2}
So what I would like to get is a structure like this:
{243: [1, 4], 253: [2]}
How do I do this?
I feel the code below is a more readable and simpler way of approaching your problem.
from collections import defaultdict
a = {1: {'data': 243}, 2: {'data': 253}, 4: {'data':243}}
result = defaultdict(list)
for k, v in a.items():
    result[v['data']].append(k)
print(result)
Output:
defaultdict(<class 'list'>, {243: [1, 4], 253: [2]})
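If you'd rather end up with a plain dict as the final result, you can convert the defaultdict at the end:

print(dict(result))
# {243: [1, 4], 253: [2]}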
This can be done with a dict comprehension and itertools.groupby(), but because groupby() only groups consecutive items, the dict items must first be sorted by the grouping key.
from itertools import groupby
a = {1: {'data': 243}, 2: {'data': 253}, 4: {'data': 243}}
# key extractor function suitable for both sorted() and groupby()
keyfunc = lambda i: i[1]['data']
{g[0]: [i[0] for i in g[1]] for g in groupby(sorted(a.items(), key=keyfunc), key=keyfunc)}
here g is a grouping tuple (key, items), where
g[0] is whatever keyfunc extracts (in this case the 'data' value), and
g[1] is an iterable over dict items, i.e. (key, value) tuples, hence the additional list comprehension to extract the keys only.
result:
{243: [1, 4], 253: [2]}
My attempt to programmatically create a dictionary of lists is failing to allow me to individually address dictionary keys. Whenever I create the dictionary of lists and try to append to one key, all of them are updated. Here's a very simple test case:
data = {}
data = data.fromkeys(range(2),[])
data[1].append('hello')
print data
Actual result: {0: ['hello'], 1: ['hello']}
Expected result: {0: [], 1: ['hello']}
Here's what works
data = {0:[],1:[]}
data[1].append('hello')
print data
Actual and Expected Result: {0: [], 1: ['hello']}
Why is the fromkeys method not working as expected?
When [] is passed as the second argument to dict.fromkeys(), all values in the resulting dict will be the same list object.
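A quick way to check the sharing for yourself:

data = dict.fromkeys(range(2), [])
print(data[0] is data[1])  # True -- both keys refer to the same list object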
In Python 2.7 or above, use a dict comprehension instead:
data = {k: [] for k in range(2)}
In earlier versions of Python, there is no dict comprehension, but a list comprehension can be passed to the dict constructor instead:
data = dict([(k, []) for k in range(2)])
In 2.4-2.6, it is also possible to pass a generator expression to dict, and the surrounding parentheses can be dropped:
data = dict((k, []) for k in range(2))
Try using a defaultdict instead:
from collections import defaultdict
data = defaultdict(list)
data[1].append('hello')
This way, the keys don't need to be initialized with empty lists ahead of time. The defaultdict() object instead calls the factory function given to it, every time a key is accessed that doesn't exist yet. So, in this example, attempting to access data[1] triggers data[1] = list() internally, giving that key a new empty list as its value.
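A small demonstration of that factory behavior:

from collections import defaultdict

data = defaultdict(list)
print(1 in data)          # False -- the key doesn't exist yet
data[1].append('hello')   # accessing data[1] creates it via list()
print(data)               # defaultdict(<class 'list'>, {1: ['hello']})
print(1 in data)          # True -- the access above created the key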
The original code with .fromkeys shares one (mutable) list. Similarly,
alist = [1]
data = dict.fromkeys(range(2), alist)
alist.append(2)
print(data)
would output {0: [1, 2], 1: [1, 2]}. This is called out in the dict.fromkeys() documentation:
All of the values refer to just a single instance, so it generally doesn’t make sense for value to be a mutable object such as an empty list.
Another option is to use the dict.setdefault() method, which retrieves the value for a key after first checking it exists and setting a default if it doesn't. .append can then be called on the result:
data = {}
data.setdefault(1, []).append('hello')
Finally, to create a dictionary from a list of known keys and a given "template" list (where each value should start with the same elements, but be a distinct list), use a dictionary comprehension and copy the initial list:
alist = [1]
data = {key: alist[:] for key in range(2)}
Here, alist[:] creates a shallow copy of alist, and this is done separately for each value. See How do I clone a list so that it doesn't change unexpectedly after assignment? for more techniques for copying the list.
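As a quick sanity check that each value really is its own list with this approach:

alist = [1]
data = {key: alist[:] for key in range(2)}
data[0].append(2)
print(data)                # {0: [1, 2], 1: [1]} -- only one value changed
print(data[0] is data[1])  # False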
You could use a dict comprehension:
>>> keys = ['a','b','c']
>>> value = [0, 0]
>>> {key: list(value) for key in keys}
{'a': [0, 0], 'b': [0, 0], 'c': [0, 0]}
This answer is here to explain this behavior to anyone flummoxed by the results they get when trying to instantiate a dict with fromkeys() using a mutable default value.
Consider:
#Python 3.4.3 (default, Nov 17 2016, 01:08:31)
# start by validating that different variables pointing to an
# empty mutable are indeed different references.
>>> l1 = []
>>> l2 = []
>>> id(l1)
140150323815176
>>> id(l2)
140150324024968
So any change to l1 will not affect l2, and vice versa.
The same would be true for any mutable object, including a dict.
# create a new dict from an iterable of keys
>>> dict1 = dict.fromkeys(['a', 'b', 'c'], [])
>>> dict1
{'c': [], 'b': [], 'a': []}
This can be a handy function.
Here we are assigning to each key a default value which also happens to be an empty list.
# the dict has its own id.
>>> id(dict1)
140150327601160
# but look at the ids of the values.
>>> id(dict1['a'])
140150323816328
>>> id(dict1['b'])
140150323816328
>>> id(dict1['c'])
140150323816328
Indeed they are all using the same ref!
A change to one is a change to all, since they are in fact the same object!
>>> dict1['a'].append('apples')
>>> dict1
{'c': ['apples'], 'b': ['apples'], 'a': ['apples']}
>>> id(dict1['a'])
140150323816328
>>> id(dict1['b'])
140150323816328
>>> id(dict1['c'])
140150323816328
For many, this was not what was intended!
Now let's try making an explicit copy of the list being used as the default value.
>>> empty_list = []
>>> id(empty_list)
140150324169864
And now create a dict with a copy of empty_list.
>>> dict2 = dict.fromkeys(['a', 'b', 'c'], empty_list[:])
>>> id(dict2)
140150323831432
>>> id(dict2['a'])
140150327184328
>>> id(dict2['b'])
140150327184328
>>> id(dict2['c'])
140150327184328
>>> dict2['a'].append('apples')
>>> dict2
{'c': ['apples'], 'b': ['apples'], 'a': ['apples']}
Still no joy! (The [:] copy was evaluated just once, before fromkeys() ran, so that single copy is still shared by every key.)
I hear someone shout, it's because I used an empty list!
>>> not_empty_list = [0]
>>> dict3 = dict.fromkeys(['a', 'b', 'c'], not_empty_list[:])
>>> dict3
{'c': [0], 'b': [0], 'a': [0]}
>>> dict3['a'].append('apples')
>>> dict3
{'c': [0, 'apples'], 'b': [0, 'apples'], 'a': [0, 'apples']}
The default behavior of fromkeys() is to assign None to the value.
>>> dict4 = dict.fromkeys(['a', 'b', 'c'])
>>> dict4
{'c': None, 'b': None, 'a': None}
>>> id(dict4['a'])
9901984
>>> id(dict4['b'])
9901984
>>> id(dict4['c'])
9901984
Indeed, all of the values are the same (and the only!) None.
Now, let's iterate, in one of a myriad of ways, through the dict and change the values.
>>> for k, _ in dict4.items():
...     dict4[k] = []
...
>>> dict4
{'c': [], 'b': [], 'a': []}
Hmm. Looks the same as before!
>>> id(dict4['a'])
140150318876488
>>> id(dict4['b'])
140150324122824
>>> id(dict4['c'])
140150294277576
>>> dict4['a'].append('apples')
>>> dict4
{'c': [], 'b': [], 'a': ['apples']}
But they are indeed different []s, which was in this case the intended result.
You can use this:
l = ['a', 'b', 'c']
d = dict((k, [0, 0]) for k in l)
You are populating your dictionaries with references to a single list so when you update it, the update is reflected across all the references. Try a dictionary comprehension instead. See
Create a dictionary with list comprehension in Python
d = {k: [] for k in range(2)}
You could use this:
data[:1] = ['hello']
I have a dictionary of objects,
e.g.
{'a': (one, two, three), 'b': (four, five, six)},
and I want to know how to pull out specific parts of each object in the dictionary, so that I end up with a list of the things in a certain position in each object.
For example, ending up with [two, five] (the second position in each object).
How do you index the object so that this is possible?
You can't do this directly with an index operation, but the usual Pythonic approach is to use a list comprehension; e.g.
>>> D = {'a': ('one', 'two', 'three'), 'b': ('four', 'five', 'six')}
>>> [val[1] for val in D.values()]
['two', 'five']
Keep in mind that dictionaries were unordered before Python 3.7 (they preserve insertion order since then), so depending on your Python version the order of the result may be ambiguous.
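If a deterministic order matters, one option (my addition, not part of the original answer) is to sort the keys first:

>>> [D[key][1] for key in sorted(D)]
['two', 'five']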
If you want a dictionary of the results, you can use a dictionary comprehension, e.g.
>>> {key:val[1] for key, val in D.items()}
{'a': 'two', 'b': 'five'}
For more information, you might check out the Python List Comprehension Docs.