I have a dictionary which contains key-value pairs where the key is a string and the value is stored as a list.
I am looking to get the intersection of all the elements in the lists of each entry in the dictionary.
For instance, if I had a dictionary like this:
athletes = {"athlete_A" : [16,43,34,23], "athlete_B": [23,60,80,75]}
I would like to get the list [23]. I can find solutions for the intersection of dictionaries, but I can't seem to find how to work with only the values of a dict.
You can use functools.reduce and set intersection:
from functools import reduce
reduce(set.intersection, map(set, athletes.values()))
# {23}
If there are duplicates within your lists and you want to keep all of them (e.g. if two 23s occur in each list), you can use Counter intersection instead:
from collections import Counter
[*reduce(Counter.__and__, map(Counter, athletes.values())).elements()]
# [23]
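If you prefer to avoid reduce altogether, the set version can also be written with argument unpacking; a minimal sketch, assuming the dictionary is not empty (otherwise set.intersection gets no arguments and raises a TypeError):
# Unpack the per-athlete sets directly into set.intersection
common = set.intersection(*(set(v) for v in athletes.values()))
print(common)  # {23}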
Create a set from the first athlete's list, intersect it with the second athlete's list, and convert the result back to a list:
A_as_set = set(athletes['athlete_A'])
intersection = A_as_set.intersection(athletes['athlete_B'])
intersection_as_list = list(intersection)
I have the following list:
original_list = [('Anger', 'Envy'), ('Anger', 'Exasperation'), ('Joy', 'Zest'), ('Sadness', 'Suffering'), ('Joy', 'Optimism'), ('Surprise', 'Surprise'), ('Love', 'Affection')]
I am trying to create a random list comprising the 2nd element of the tuples (from the above list), using the random module, in such a way that duplicate values appearing as the first element are only considered once.
That is, the final list I am looking at, will be:
random_list = ['Exasperation', 'Suffering', 'Optimism', 'Surprise', 'Affection']
So, in the new list random_list, the strings Envy and Zest are eliminated (as their first elements, 'Anger' and 'Joy', appear in the original list twice). And the process has to randomize the result, i.e. each iteration would produce a different list of five elements.
Could somebody show me how I can do this?
You can use a dictionary to filter out the duplicates from original_list (shuffled beforehand with random.sample):
import random
original_list = [
("Anger", "Envy"),
("Anger", "Exasperation"),
("Joy", "Zest"),
("Sadness", "Suffering"),
("Joy", "Optimism"),
("Surprise", "Surprise"),
("Love", "Affection"),
]
out = list(dict(random.sample(original_list, len(original_list))).values())
print(out)
Prints (for example):
['Optimism', 'Envy', 'Surprise', 'Suffering', 'Affection']
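This works because dict() keeps only the last value seen for each repeated first element, so shuffling beforehand randomizes which second element survives. An equivalent sketch with an explicit random.shuffle, if you prefer that over random.sample:
import random

shuffled = original_list.copy()      # keep the original list intact
random.shuffle(shuffled)             # shuffle the copy in place
out = list(dict(shuffled).values())  # for duplicate keys, the last value seen wins
print(out)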
I have a list of values:
list = [value1, value2, value3]
And a list of dictionaries where, at specific keys, I must set the corresponding values:
dictionaries = [{"key1":{"key2":{"key3":position_value1}}},{"key1":{"key2":{"key3":position_value2}}}]
I'm trying to assign the values while avoiding solutions that require explicit iteration over the numerical indexes of the list and the dictionaries.
I found the following pseudo-solution, iterating over the two iterables at the same time with a for-each loop:
for (dict, value) in zip(dictionaries, list):
    dict['key1']['key2']['key3'] = value
print(dictionaries)
But it doesn't work: all dictionaries end up with only the last value of the list of values, giving the following result:
[{"key1":{"key2":{"key3":position_value3}}},{"key1":{"key2":{"key3":position_value3}}}]
It's important to note that when creating the list of dictionaries the dict.copy() method was used, but maybe this doesn't affect the references held inside the nested dictionaries.
Dictionary list creation
base_dict = {"key1": {"key2":{"key3": None}}}
dictionaries = [base_dict.copy() for n in range(3)]
I appreciate any compact solution, even solutions based on unpacking.
base_dict = {"key1": {"key2":{"key3": None}}}
dictionaries = [base_dict.copy() for n in range(3)]
This will create shallow copies of base_dict. That means that while these are independent at the top level, their values are copied by reference; hence, the inner dictionaries {"key2":{"key3": None}} are still all the same object. When rebinding key3, all references will be affected.
You can avoid that by making a deepcopy:
from copy import deepcopy
dictionaries = [deepcopy(base_dict) for _ in range(3)]
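With truly independent copies, the zip loop from the question then behaves as expected; a small sketch with placeholder values:
from copy import deepcopy

base_dict = {"key1": {"key2": {"key3": None}}}
dictionaries = [deepcopy(base_dict) for _ in range(3)]
values = [1, 2, 3]  # placeholder values

for d, value in zip(dictionaries, values):
    d["key1"]["key2"]["key3"] = value

print(dictionaries)
# [{'key1': {'key2': {'key3': 1}}}, {'key1': {'key2': {'key3': 2}}}, {'key1': {'key2': {'key3': 3}}}]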
I have an extensive list of pair tuples. It goes like this:
travels =[(passenger_1, destination_1), (passenger_2, destination_2),(passenger_1, destination_2)...]
And so on. Passengers and destinations may repeat and even the same passenger-destination tuple may repeat.
I want to build, with a comprehension, a dict that has each passenger as a key and their most recurrent destination as the value.
My first try was this:
dictionary = {k:v for k,v in travels}
but each repeated key overwrites the previous value. I was hoping to get multiple values for each key so that I could then count them per key. Then I tried this:
dictionary = {k:v for k,v in travels if k not in dictionary else dictionary[k].append(v)}
but I can't reference dictionary inside its own definition. Any ideas on how I can get it done? It's important that it's done with a comprehension and not with loops.
This is how it can be done with a for loop:
result = dict()
for passenger, destination in travels:
    result.setdefault(passenger, list()).append(destination)
result is a single dictionary where keys are passengers, values are lists with destinations.
I doubt you can do the same with a single dictionary comprehension expression, since inside a comprehension you can only generate elements but cannot freely modify them.
EDIT.
If you want to (or have to) use comprehension expression no matter what then you can do it like this (2 comprehensions and no explicit loops):
result = {
    passenger: [destination_
                for passenger_, destination_ in travels
                if passenger_ == passenger]
    for passenger, dummy_destination in travels}
Note, however, that this is a poor algorithm for what you want: its complexity is O(n^2), while the complexity of the first method is O(n).
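If the final goal is each passenger's most recurrent destination, one possible follow-up is a comprehension over collections.Counter (a sketch, assuming result is the passenger-to-destinations dict built above):
from collections import Counter

most_recurrent = {passenger: Counter(destinations).most_common(1)[0][0]
                  for passenger, destinations in result.items()}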
I am familiar with the struct construct from MATLAB, specifically arrays of structs. I am trying to do the same with dictionaries in Python. Say I have initialized a dictionary:
samples = {"Name":"", "Group":"", "Timeseries":[],"GeneratedFeature":[]}
and I am provided with another dictionary called fileList whose keys are group names and each value is a tuple of file paths. Each file path will generate one sample in samples by populating the Timeseries item. Further processing will make GeneratedFeature. The Name part will be determined by the file path.
Since I don't know the contents of fileList a priori, in MATLAB if samples were a struct and fileList just a cell array:
fileList={{'Group A',{'filepath1','filepath2'}};{'Group B',{'filepath1', 'filepath2'}}}
I would just set a counter k=1 and run a for loop (with a different index) and do something like:
k=1;
for i=1:numel(fileList)
    samples(k).Group=fileList{i}{1};
    for j=1:numel(fileList{i}{2})
        samples(k).Name=makeNameFrom(fileList{i}{2}{j})
        .
        .
    end
    k=k+1
end
But I don't know how to do this in python. I know I can keep the two for loop approach with
for (group, samples) in fileList:
    for sample in samples:
But how do I tell Python that samples is allowed to be an array/list? Is there a more Pythonic approach than a for loop?
You could store your dictionary itself in a list and simply append new dictionaries in every iteration of the loop:
samplelist = []
samplelist.append(samples.copy())  # a copy of the dictionary is needed when duplicating
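A fuller sketch of that approach, tied to the two-loop structure from the question (makeNameFrom and the exact layout of fileList are assumptions taken from the question; I also build each sample dict directly instead of copying the template, so the inner lists are not shared between samples):
samplelist = []
for group, filepaths in fileList.items():  # assumes fileList maps group name -> tuple of file paths
    for filepath in filepaths:
        sample = {"Name": makeNameFrom(filepath),  # hypothetical helper from the question
                  "Group": group,
                  "Timeseries": [],                # to be filled from the file later
                  "GeneratedFeature": []}
        samplelist.append(sample)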
Accessing the elements in the list would then work as follows (for example, the 'Name' field of the i-th sample):
samples_i_name = samplelist[i]["Name"]
A list of all names would be accessible by a simple list comprehension:
namelist = [sample["Name"] for sample in samplelist]
I'm very new to Python (I usually write PHP). I want to understand how to store information in an associative array, and if you can explain to me the difference between "tuples", "arrays", "dictionaries" and "lists" that would be wonderful (I tried to read different sources but I'm still not getting it).
So This is my code:
#!/usr/bin/python3.4
import csv
import string
nidless_keys = dict()
nidless_keys = ['test_string1','test_string2'] #these contain the strings to
# be searched for in linesreader
data = {'type':[],'id':[]} #here I want to store my information
with open('path/to/csv/file.csv',newline="") as csvfile:
    linesreader = csv.reader(csvfile,delimiter=',',quotechar="|")
    for row in linesreader: #every line in this csv has a url like
                            #www.test.com/?test_string1&id=123456
        current_row_string = str(row)
        for needle in nidless_keys:
            current_needle = str(needle)
            if current_needle in current_row_string:
                data[current_needle[current_row_string[-8:]]) += 1 # also I
                #need to count for every id how many rows there are.
In conclusion:
my_data_stored = [current_needle][current_row_string[-8]]
current_row_string[-8:] is the end of a url, where the last digits of the url are an ID.
So the array should looks like this at the end of the script:
test_string1 = 123456 = 20
             = 256468 = 15
test_string2 = 123155 = 10
Edit 1:
Which type I need here to store the information?
Can you tell me how to resolve this script?
It seems you want to count how many times an ID in combination with a test string occurs.
There can be multiple ID/count combinations associated with every test string.
This suggests that you should use a dictionary indexed by the test strings to store the results. In that dictionary I would suggest to store collections.Counter objects.
With a plain dict, you would have to add a special case to insert an empty Counter whenever a key isn't found in the results dictionary yet. This is a common problem, so there is a specialized form of dictionary in the collections module called defaultdict.
import collections
import csv

# Using a tuple for the keys so it cannot be accidentally modified
keys = ('test_string1', 'test_string2')
result = collections.defaultdict(collections.Counter)

with open('path/to/csv/file.csv', newline="") as csvfile:
    linesreader = csv.reader(csvfile, delimiter=',', quotechar="|")
    for row in linesreader:
        line = ",".join(row)  # treat the whole row as one string, like in your script
        for key in keys:
            if key in line:
                id = line[-6:]  # IDs are six digits in your example.
                # The first index is into the dict, the second into the Counter.
                result[key][id] += 1
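Afterwards you can read a count back out with two lookups (hypothetical key and ID, just to show the access pattern):
print(result['test_string1']['123456'])  # number of rows with that key/ID combination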
There is an even easier way, by using regular expressions.
Since you seem to treat every row in a CSV file as a string, there is little need to use the CSV reader, so I'll just read the whole file as text.
import re
with open('path/to/csv/file.csv') as datafile:
    text = datafile.read()
pattern = r'\?(.*)&id=(\d+)'
The pattern is a regular expression. This is a large topic in and of itself, so I'll only briefly cover what it does (you might also want to check out the relevant HOWTO). At first glance it looks like complete gibberish, but it is actually a complete language.
It looks for two things in a line: anything between ? and &id=, and a sequence of digits after &id=.
I'll be using IPython to give an example.
(If you don't know it, check out IPython. It is great for trying things out and seeing if they work.)
In [1]: import re
In [2]: pattern = r'\?(.*)&id=(\d+)'
In [3]: text = """www.test.com/?test_string1&id=123456
....: www.test.com/?test_string1&id=123456
....: www.test.com/?test_string1&id=234567
....: www.test.com/?foo&id=234567
....: www.test.com/?foo&id=123456
....: www.test.com/?foo&id=1234
....: www.test.com/?foo&id=1234
....: www.test.com/?foo&id=1234"""
The text variable points to the string which is a mock-up for the contents of your CSV file.
I am assuming that:
every URL is on its own line
ID's are a sequence of digits.
If these assumptions are wrong, this won't work.
Using findall to extract every match of the pattern from the text.
In [4]: re.findall(pattern, text)
Out[4]:
[('test_string1', '123456'),
('test_string1', '123456'),
('test_string1', '234567'),
('foo', '234567'),
('foo', '123456'),
('foo', '1234'),
('foo', '1234'),
('foo', '1234')]
The findall function returns a list of 2-tuples (that is, key/ID pairs). Now we just need to count those.
In [5]: import collections
In [6]: result = collections.defaultdict(collections.Counter)
In [7]: intermediate = re.findall(pattern, text)
Now we fill the result dict from the list of matches that is the intermediate result.
In [8]: for key, id in intermediate:
....: result[key][id] += 1
....:
In [9]: print(result)
defaultdict(<class 'collections.Counter'>, {'foo': Counter({'1234': 3, '123456': 1, '234567': 1}), 'test_string1': Counter({'123456': 2, '234567': 1})})
So the complete code would be:
import collections
import re

with open('path/to/csv/file.csv') as datafile:
    text = datafile.read()

result = collections.defaultdict(collections.Counter)
pattern = r'\?(.*)&id=(\d+)'
intermediate = re.findall(pattern, text)

for key, id in intermediate:
    result[key][id] += 1
This approach has two advantages.
You don't have to know the keys in advance.
ID's are not limited to six digits.
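If you then want output close to the layout sketched in the question, a small loop over the nested structure will do (a sketch, assuming result was filled as above):
for key, counter in result.items():
    for id, count in counter.items():
        print(key, id, count)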
A brief summary of the python data types you mentioned:
A dictionary is an associative array, aka hashtable.
A list is a sequence of values.
An array is essentially the same as a list, but limited to basic data types. My impression is that they only exist for performance reasons; I don't think I've ever used one. If performance is that critical to you, you probably don't want to use Python in the first place.
A tuple is a fixed-length, immutable sequence of values (whereas lists and arrays can grow).
Let's take them one by one.
Lists:
A list is a basic data structure, similar to arrays in other languages in terms of the way we write them:
['a','b','c']
This is a list in Python, and it looks very similar to an array.
However, there is a big difference in the way lists are used in Python compared to the usual arrays.
Lists are heterogeneous in nature. This means that we can store any kind of data in them simultaneously, like:
ls = [1,2,'a','g',True]
As you can see, we have various kinds of data within a list, and it is a valid list.
However, one important thing about lists is that we can access their items using zero-based indices. So we can write:
print(ls[0], ls[3])
output: 1 g
Dictionary:
This data structure is similar to a hash map. It contains (key, value) pairs. An empty dictionary looks like:
dc = {}
Now, to store key-value pairs, e.g. ('potato', 3) and ('tomato', 5), we can do:
dc['potato'] = 3
dc['tomato'] = 5
and we have saved the data in the dictionary dc.
The important thing is that we can even store another data structure, such as a list, within a dictionary:
dc['list1'] = ls
where ls is the list defined above.
This shows the power of using a dictionary.
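The stored list can then be read back through the dictionary, for example:
print(dc['list1'][0])
output: 1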
In your case, you have defined a dictionary like this:
data = {'type':[],'id':[]}
This means that your dictionary will consist of only two keys, and each key corresponds to a list, both of which are empty for now.
Talking a bit about your script, the expression:
current_row_string[-8:]
doesn't make sense. The index should have been -6 instead of -8; that would give you the id part of the current row.
This part is the id and should be stored in a variable, say:
id = current_row_string[-6:]
Further processing can then be performed as seen in the answer given by Roland.