Problem:
Consider a Python list of lists that contains sequences of chars:
[['A', 'B'],['A','B','C'],['B','A'],['C','A','B'],['D'],['D'],['Ao','B']]
The goal is to return the unique lists, regardless of order:
[['A','B'],['A','B','C'],['D'],['Ao','B']]
Attempt:
I'm able to achieve my goal using many if/else statements with try/except blocks. What would be the most Pythonic (and fastest) way to approach this problem? Thanks!
def check_duplicates(x, list_):
    for li in list_:
        if compare(x, li):
            return True
    return False

def compare(s, t):
    t = list(t)  # make a mutable copy
    try:
        for elem in s:
            t.remove(elem)
    except ValueError:
        return False
    return not t

vars_list = [['A', 'B'], ['A', 'B', 'C'], ['B', 'A'], ['C', 'A', 'B'], ['D'], ['D'], ['Ao', 'B']]
second_list = []
for i in vars_list:
    if check_duplicates(i, second_list):
        continue
    else:
        second_list.append(i)
        print(i)
Assuming that the elements of the nested lists are hashable, you can isolate the unique collections by constructing a set of frozensets from the nested list:
unique_sets = {frozenset(l) for l in vars_list}
# {frozenset({'D'}),
# frozenset({'A', 'B'}),
# frozenset({'A', 'B', 'C'}),
# frozenset({'Ao', 'B'})}
If you need a list-of-lists as the output, you can obtain one trivially with [list(s) for s in unique_sets].
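Note that frozenset ignores both order and multiplicity, so it would also conflate lists such as ['A', 'A'] and ['A']. If element counts matter, or if you want to preserve first-seen order, here is a minimal sketch using a sorted tuple as the key (assuming the elements are orderable):
seen = set()
unique_lists = []
for l in vars_list:
    key = tuple(sorted(l))  # order-insensitive key that keeps multiplicity
    if key not in seen:
        seen.add(key)
        unique_lists.append(l)
print(unique_lists)  # [['A', 'B'], ['A', 'B', 'C'], ['D'], ['Ao', 'B']]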
I have a text file which contains duplicate car registration numbers with different values, like so:
EDF768, Bill Meyer, 2456, Vet_Parking
TY5678, Jane Miller, 8987, AgHort_Parking
GEF123, Jill Black, 3456, Creche_Parking
ABC234, Fred Greenside, 2345, AgHort_Parking
GH7682, Clara Hill, 7689, AgHort_Parking
JU9807, Jacky Blair, 7867, Vet_Parking
KLOI98, Martha Miller, 4563, Vet_Parking
ADF645, Cloe Freckle, 6789, Vet_Parking
DF7800, Jacko Frizzle, 4532, Creche_Parking
WER546, Olga Grey, 9898, Creche_Parking
HUY768, Wilbur Matty, 8912, Creche_Parking
EDF768, Jenny Meyer, 9987, Vet_Parking
TY5678, Jo King, 8987, AgHort_Parking
JU9807, Mike Green, 3212, Vet_Parking
I want to create a dictionary from this data, which uses the registration numbers (first column) as keys and the data from the rest of the line for values.
I wrote this code:
data_dict = {}
data_list = []

def createDictionaryModified(filename):
    path = r"C:\Users\user\Desktop"  # raw string, so the backslashes aren't treated as escapes
    basename = "ParkingData_Part3.txt"
    filename = path + "\\" + basename
    file = open(filename)
    contents = file.read()
    print(contents, "\n")
    data_list = [lines.split(",") for lines in contents.split("\n")]
    for line in data_list:
        regNumber = line[0]
        name = line[1]
        phoneExtn = line[2]
        carpark = line[3].strip()
        details = (name, phoneExtn, carpark)
        data_dict[regNumber] = details
    print(data_dict, "\n")
    print(data_dict.items(), "\n")
    print(data_dict.values())
The problem is that the data file contains duplicate values for the registration numbers. When I try to store them in the same dictionary with data_dict[regNumber] = details, the old value is overwritten.
How do I make a dictionary with duplicate keys?
Sometimes people want to "combine" or "merge" multiple existing dictionaries by just putting all the items into a single dict, and are surprised or annoyed that duplicate keys are overwritten. See the related question How to merge dicts, collecting values from matching keys? for dealing with this problem.
Python dictionaries don't support duplicate keys. One way around this is to store lists or sets inside the dictionary.
One easy way to achieve this is by using defaultdict:
from collections import defaultdict
data_dict = defaultdict(list)
All you have to do is replace
data_dict[regNumber] = details
with
data_dict[regNumber].append(details)
and you'll get a dictionary of lists.
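Put together with the parsing loop from the question, a minimal sketch (assuming data_list has already been parsed as in the question) looks like this:
from collections import defaultdict

data_dict = defaultdict(list)
for line in data_list:
    regNumber = line[0]
    details = (line[1], line[2], line[3].strip())
    data_dict[regNumber].append(details)  # collects every entry for a registration number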
You can change the behavior of the built-in types in Python. For your case, it's really easy to create a dict subclass that will automatically store duplicated values in lists under the same key:
class Dictlist(dict):
    def __setitem__(self, key, value):
        try:
            self[key]
        except KeyError:
            super(Dictlist, self).__setitem__(key, [])
        self[key].append(value)
Output example:
>>> d = Dictlist()
>>> d['test'] = 1
>>> d['test'] = 2
>>> d['test'] = 3
>>> d
{'test': [1, 2, 3]}
>>> d['other'] = 100
>>> d
{'test': [1, 2, 3], 'other': [100]}
Rather than using a defaultdict or messing around with membership tests or manual exception handling, use the setdefault method to add new empty lists to the dictionary when they're needed:
results = {}  # use a normal dictionary for our output
for k, v in some_data:  # the keys may be duplicates
    results.setdefault(k, []).append(v)  # magic happens here!
setdefault checks to see if the first argument (the key) is already in the dictionary. If it doesn't find anything, it assigns the second argument (the default value, an empty list in this case) as the new value for the key. If the key does exist, nothing special is done (the default goes unused). In either case, though, the value (whether old or new) gets returned, so we can unconditionally call append on it (knowing it should always be a list).
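A quick REPL demonstration of those two cases:
>>> results = {}
>>> results.setdefault('x', [])  # key missing: the default [] is inserted and returned
[]
>>> results['x'].append(1)
>>> results.setdefault('x', [])  # key present: the default is ignored, the existing value is returned
[1]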
You can't have a dict with duplicate keys, by definition!
Instead you can use a single key and, as the value, a list of the elements that had that key.
So you can follow these steps (see the sketch after the list):
1. See if the current element's key (from your initial data) is in the final dict. If it is, go to step 3.
2. Add the key to the dict with an empty list as its value.
3. Append the new value to the list at dict[key].
4. Repeat steps 1-3 for the remaining elements.
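A minimal sketch of these steps, assuming pairs is an iterable of (key, value) tuples:
final = {}
for key, value in pairs:
    if key not in final:      # step 1
        final[key] = []       # step 2
    final[key].append(value)  # step 3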
If you want to have lists only when they are necessary, and plain values in all other cases, then you can do this:
class DictList(dict):
    def __setitem__(self, key, value):
        try:
            # Assumes there is a list on the key
            self[key].append(value)
        except KeyError:  # If it fails, because there is no key
            super(DictList, self).__setitem__(key, value)
        except AttributeError:  # If it fails because it is not a list
            super(DictList, self).__setitem__(key, [self[key], value])
You can then do the following:
dl = DictList()
dl['a'] = 1
dl['b'] = 2
dl['b'] = 3
Which will store the following {'a': 1, 'b': [2, 3]}.
I tend to use this implementation when I want to have reverse/inverse dictionaries, in which case I simply do:
my_dict = {1: 'a', 2: 'b', 3: 'b'}
rev = DictList()
for k, v in my_dict.items():
    rev[v] = k
Which will generate the same output as above: {'a': 1, 'b': [2, 3]}.
CAVEAT: This implementation relies on the non-existence of the append method (in the values you are storing). This might produce unexpected results if the values you are storing are lists. For example,
dl = DictList()
dl['a'] = 1
dl['b'] = [2]
dl['b'] = 3
would produce the same result as before, {'a': 1, 'b': [2, 3]}, but one might expect the following: {'a': 1, 'b': [[2], 3]}.
You can refer to the following article:
http://www.wellho.net/mouth/3934_Multiple-identical-keys-in-a-Python-dict-yes-you-can-.html
In a dict, if each key is a distinct object, there is no duplicate problem, because user-defined objects hash by identity by default.
For example:
class p(object):
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

    def __str__(self):
        return self.name
d = {p('k'): 1, p('k'): 2}
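Both entries survive, because each p('k') call creates a distinct object with its own identity-based hash:
>>> d = {p('k'): 1, p('k'): 2}
>>> d
{k: 1, k: 2}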
You can't have duplicate keys in a dictionary. Use a dict of lists:
for line in data_list:
    regNumber = line[0]
    name = line[1]
    phoneExtn = line[2]
    carpark = line[3].strip()
    details = (name, phoneExtn, carpark)
    if regNumber not in data_dict:  # dict.has_key was removed in Python 3
        data_dict[regNumber] = [details]
    else:
        data_dict[regNumber].append(details)
It's a pretty old question, but maybe my solution will help someone.
By overriding the __hash__ magic method, you can store equal objects under separate keys in a dict.
Example:
from random import choices

class DictStr(str):
    """
    This class behaves exactly like the str class but
    can be duplicated as a dict key
    """
    def __new__(cls, value='', custom_id='', id_length=64):
        # If you want to know why I use __new__ instead of __init__,
        # SEE: https://stackoverflow.com/a/2673863/9917276
        obj = str.__new__(cls, value)
        if custom_id:
            obj.id = custom_id
        else:
            # Make a random id string of length id_length
            choice_str = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890"
            obj.id = ''.join(choices(choice_str, k=id_length))
        return obj

    def __hash__(self) -> int:
        return self.id.__hash__()
Now let's create a dict:
>>> a_1 = DictStr('a')
>>> a_2 = DictStr('a')
>>> a_3 = 'a'
>>> a_1
'a'
>>> a_2
'a'
>>> a_1 == a_2 == a_3
True
>>> d = dict()
>>> d[a_1] = 'some_data'
>>> d[a_2] = 'other'
>>> print(d)
{'a': 'some_data', 'a': 'other'}
NOTE: This approach can be applied to any basic data type (int, float, ...).
EXPLANATION:
We can use almost any object as a key in the dict class (known as HashMap or HashTable in other languages), but there has to be a way to distinguish between keys, because a dict knows nothing about our objects.
For this purpose, an object that is to be used as a dictionary key has to provide a unique identifier (I call it uniq_id; it's really just a number produced by a hash algorithm).
Because the dictionary structure is so widely used, most programming languages hide the generation of this identifier inside a built-in hash method that the dict calls during key lookup.
So if you override the hash method of your class, you can change how it behaves as a dictionary key.
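A quick check of the class above shows the mechanism: two equal DictStr instances hash differently (their random ids will virtually never match), while two equal plain strings hash identically:
>>> hash(DictStr('a')) == hash(DictStr('a'))
False
>>> hash('a') == hash('a')
True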
A dictionary does not support duplicate keys; instead, you can use a defaultdict.
Below is an example of how to use defaultdict in Python 3.x to solve your problem:
from collections import defaultdict

sdict = defaultdict(list)
# contents holds the text of the file, as read in the question
data_list = [lines.split(",") for lines in contents.split("\n")]
for data in data_list:
    key = data.pop(0)
    detail = data
    sdict[key].append(detail)

print("\n", dict(sdict))
The above code produces the following output:
{'EDF768': [[' Bill Meyer', ' 2456', ' Vet_Parking'], [' Jenny Meyer', ' 9987', ' Vet_Parking']], 'TY5678': [[' Jane Miller', ' 8987', ' AgHort_Parking'], [' Jo King', ' 8987', ' AgHort_Parking']], 'GEF123': [[' Jill Black', ' 3456', ' Creche_Parking']], 'ABC234': [[' Fred Greenside', ' 2345', ' AgHort_Parking']], 'GH7682': [[' Clara Hill', ' 7689', ' AgHort_Parking']], 'JU9807': [[' Jacky Blair', ' 7867', ' Vet_Parking'], [' Mike Green', ' 3212', ' Vet_Parking']], 'KLOI98': [[' Martha Miller', ' 4563', ' Vet_Parking']], 'ADF645': [[' Cloe Freckle', ' 6789', ' Vet_Parking']], 'DF7800': [[' Jacko Frizzle', ' 4532', ' Creche_Parking']], 'WER546': [[' Olga Grey', ' 9898', ' Creche_Parking']], 'HUY768': [[' Wilbur Matty', ' 8912', ' Creche_Parking']]}
def bibek():
    test_list = [[]]
    x = int(input("Enter the length of String elements using enter -: "))
    for i in range(0, x):
        a = str(input())
        a = list(a)
        test_list.append(a)
    del test_list[0]

    def filt(b):
        d = ['b', 'i', 'b']
        if b in d:
            return True
        else:
            return False

    for t in test_list:
        x = filter(filt, t)
        for i in x:
            print(i)

bibek()
Suppose test_list = [['b','i','b'], ['s','i','b'], ['r','i','b']].
The output should be ib, since i and b are common among all of them.
An option is to use set and its methods:
test_list = [['b','i','b'], ['s','i','b'], ['r','i','b']]
common = set(test_list[0])
for item in test_list[1:]:
    common.intersection_update(item)
print(common)  # {'i', 'b'}
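For what it's worth, the same intersection fits in a single expression (assuming test_list is non-empty):
common = set.intersection(*map(set, test_list))
print(common)  # {'i', 'b'}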
UPDATE: now that you have clarified your question, I would do this:
from difflib import SequenceMatcher

test_list = [['b','i','b','b'], ['s','i','b','b'], ['r','i','b','b']]
# convert the lists to simple strings
strgs = [''.join(item) for item in test_list]
common = strgs[0]
for item in strgs[1:]:
    sm = SequenceMatcher(isjunk=None, a=item, b=common)
    match = sm.find_longest_match(0, len(item), 0, len(common))
    common = common[match.b:match.b + match.size]
print(common)  # 'ibb'
The trick here is to use difflib.SequenceMatcher to keep only the longest common substring at each step.
One more update after clarification of your question, this time using collections.Counter:
from collections import Counter

strgs = 'app', 'bapp', 'sardipp', 'ppa'
common = Counter(strgs[0])
print(common)
for item in strgs[1:]:
    c = Counter(item)
    for key, number in common.items():
        common[key] = min(number, c.get(key, 0))
print(common)  # Counter({'p': 2, 'a': 1})
print(sorted(common.elements()))  # ['a', 'p', 'p']
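Counter also supports multiset intersection directly via the & operator, which performs the same min-per-key computation more concisely:
from collections import Counter

strgs = 'app', 'bapp', 'sardipp', 'ppa'
common = Counter(strgs[0])
for item in strgs[1:]:
    common &= Counter(item)  # keeps min(left count, right count) for each key
print(sorted(common.elements()))  # ['a', 'p', 'p']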
I have a question for you; first of all, here is the code:
from urllib import request
from collections import Counter
from nltk import word_tokenize
URL = 'https://www.gutenberg.org/files/46/46-0.txt'
RESPONSE = request.urlopen(URL)
RAW = RESPONSE.read().decode('utf8')
print('\n')
type(RAW)
print('\n')
len(RAW)
TOKENS = word_tokenize(RAW)
print(type(TOKENS))
X = print(len(TOKENS))
print(TOKENS[:X])
print('\n')
c = Counter(RAW)
print(c.most_common(30))
Here is the first output I get. With that one I am satisfied:
['\ufeffThe', 'Project', 'Gutenberg', 'EBook', 'of', 'A', 'Christmas', 'Carol', ',', 'by', 'Charles',...]
Here is the second part of the output, which does not satisfy me:
[(' ', 28438), ('e', 16556), ('t', 11960), ('o', 10940), ('a', 10092), ('n', 8868), ('i', 8791),...]
Here is my question: as you can see, I am counting the most frequently occurring strings in a text, but the problem is that I want to count whole elements of the list of words. The final part of the second output should look something like this:
[('Dickens', 28438), ('Project', 16556), ('Gutenberg', 11960),...]
and not what you see above in the second part of the output. I want to show the 30 most frequently used words in the text, not parts of the elements of the list.
Do you know how I can solve that problem? Thanks for helping.
Try changing this line:
c = Counter(TOKENS)
Counter(RAW) iterates over the raw string character by character, so it counts single characters; Counter(TOKENS) counts whole tokens instead. Here is your full code with the change:
from urllib import request
from collections import Counter
from nltk import word_tokenize

URL = 'https://www.gutenberg.org/files/46/46-0.txt'
RESPONSE = request.urlopen(URL)
RAW = RESPONSE.read().decode('utf8')
print('\n')
print(type(RAW))
print('\n')
print(len(RAW))
TOKENS = word_tokenize(RAW)
print(type(TOKENS))
X = len(TOKENS)
print(X)
print(TOKENS[:X])
print('\n')
c = Counter(TOKENS)
print(c.most_common(30))
I am trying to produce a vector that represents the match between a string and a list's elements. I have made a function in Python 3.x:
def vector_build(docs, var):
    vector = []
    features = docs.split(' ')
    for ngram in var:
        if ngram in features:
            vector.append(docs.count(ngram))
        else:
            vector.append(0)
    return vector
It works fine:
vector_build ('hi my name is peter',['hi', 'name', 'are', 'is'])
Out: [1, 1, 0, 1]
But this function does not scale to significant data. When its string parameter docs is larger than 190 kB, it takes more time than I can afford. So I am trying to replace the for loop with the map function, like:
var = ['hi', 'name', 'are', 'is']
doc = 'hi my name is peter'
features = doc.split(' ')
vector = list(map(var,if ngram in var in features: vector.append(doc.count(ngram))))
But this return this error:
SyntaxError: invalid syntax
Is there a way to replace that for loop with map, lambda, itertools in order to make the execution faster?
You can use a list comprehension for this task. Also, doing the membership lookups against a set of features should help speed the function up as well.
var = ['hi', 'name', 'are', 'is']
doc = 'hi my name is peter'
features = doc.split(' ')
features_set = set(features) #faster lookups
vector = [doc.count(ngram) if ngram in features_set else 0 for ngram in var]
print(vector)
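Note that doc.count counts substring occurrences, so for example 'is' would also match inside 'this'. If what you actually want is whole-word counts, a Counter built from the split words does the job in one pass over the document; a minimal sketch, assuming the n-grams are single words:
from collections import Counter

def vector_build_counts(doc, var):
    counts = Counter(doc.split(' '))  # one pass over the document
    return [counts.get(ngram, 0) for ngram in var]

print(vector_build_counts('hi my name is peter', ['hi', 'name', 'are', 'is']))  # [1, 1, 0, 1]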