See if index is contained within a slice object - python-3.x

I have an enum like this:
class AgeCategory(Enum):
    Child = slice(None, 16)
    YoungAdult = slice(16, 25)
    ...
    Old = slice(85, None)
It basically provides a range of ages (in years) which go in each category.
I would like a function to check which age range a certain value corresponds to. This was my idea:
def get_category(age: int) -> AgeCategory:
    for age_range in AgeCategory:
        if age in age_range.value:  # check if it's in the slice for that enum
            return age_range
    else:
        assert False, "unreachable"
However, assert 5 in slice(1, 10) fails with a TypeError, since slice objects don't support membership tests. Of course I could do something like:
s: slice = age_range.value
if s.start <= age < s.stop:  # bounds check
    return age_range
But that ignores the step argument and feels like reinventing the wheel a bit.
What's a pythonic way to express these age ranges? They are used as slices like this:
ya_data = np.sum(some_data[AgeCategory.YoungAdult])
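One possible alternative (a sketch, not from the original post; slice_to_range and max_stop are names I am assuming here): a slice with integer fields can be mirrored by a range object, which does support fast membership tests, so the open-ended bounds just need defaults.

```python
# Hypothetical helper: convert a slice into an equivalent range,
# which supports O(1) membership tests via `in`.
def slice_to_range(s: slice, max_stop: int = 200) -> range:
    # Assumption: ages are non-negative ints bounded above by max_stop.
    start = s.start if s.start is not None else 0
    stop = s.stop if s.stop is not None else max_stop
    step = s.step if s.step is not None else 1
    return range(start, stop, step)

print(16 in slice_to_range(slice(16, 25)))  # True
print(25 in slice_to_range(slice(16, 25)))  # False
```

This keeps the enum values usable as slices for indexing, while membership checks go through the derived range.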

In your case it is not that important, but this should be faster than iterating when working with large slices:
a_slice = slice(4, 15, 3)

def is_in_slice(a_slice, idx):
    # assumes a_slice.start and a_slice.stop are not None
    if idx < a_slice.start or idx >= a_slice.stop:
        return False
    step = a_slice.step if a_slice.step else 1
    return (idx - a_slice.start) % step == 0
test_array = np.where([is_in_slice(a_slice, idx) for idx in range(20)])[0]
print(test_array)
[ 4 7 10 13]
And a test for a very big slice:
a_big_slice = slice(0, 1_000_000_000_000, 5)
print(is_in_slice(a_big_slice, 999_000_000_005))
print(is_in_slice(a_big_slice, 999_000_000_004))
True
False

With a sample slice:
In [321]: child=slice(None,16,4)
I was thinking of expanding it to a list or array. But arange can't handle the None:
In [323]: np.arange(child.start,child.stop,child.step)
Traceback (most recent call last):
File "<ipython-input-323-b2d245f287ff>", line 1, in <module>
np.arange(child.start,child.stop,child.step)
TypeError: unsupported operand type(s) for -: 'int' and 'NoneType'
np.r_ can. Here the numpy developers have gone to all the work of translating all the slice options:
In [324]: np.r_[child]
Out[324]: array([ 0, 4, 8, 12])
In [325]: 3 in _
Out[325]: False
In [327]: 4 in __
Out[327]: True
It may not be the fastest, but it appears to be the most general-purpose approach, without a lot of work on your part.


What is an efficient way to find the most common element in a Python list?
My list items may not be hashable so can't use a dictionary.
Also in case of draws the item with the lowest index should be returned. Example:
>>> most_common(['duck', 'duck', 'goose'])
'duck'
>>> most_common(['goose', 'duck', 'duck', 'goose'])
'goose'
A simpler one-liner:
def most_common(lst):
    return max(set(lst), key=lst.count)
Borrowing from here, this can be used with Python 2.7:
from collections import Counter

def Most_Common(lst):
    data = Counter(lst)
    return data.most_common(1)[0][0]
Works around 4-6 times faster than Alex's solutions, and is 50 times faster than the one-liner proposed by newacct.
On CPython 3.6+ (any Python 3.7+) the above will select the first seen element in case of ties. If you're running on older Python, to retrieve the element that occurs first in the list in case of ties you need to do two passes to preserve order:
# Only needed pre-3.6!
def most_common(lst):
    data = Counter(lst)
    return max(lst, key=data.get)
With so many solutions proposed, I'm amazed nobody's proposed what I'd consider an obvious one (for non-hashable but comparable elements): itertools.groupby. itertools offers fast, reusable functionality, and lets you delegate some tricky logic to well-tested standard library components. Consider for example:
import itertools
import operator

def most_common(L):
    # get an iterable of (item, index) pairs
    SL = sorted((x, i) for i, x in enumerate(L))
    # print('SL:', SL)
    groups = itertools.groupby(SL, key=operator.itemgetter(0))
    # auxiliary function to get "quality" for an item
    def _auxfun(g):
        item, iterable = g
        count = 0
        min_index = len(L)
        for _, where in iterable:
            count += 1
            min_index = min(min_index, where)
        # print('item %r, count %r, minind %r' % (item, count, min_index))
        return count, -min_index
    # pick the highest-count/earliest item
    return max(groups, key=_auxfun)[0]
This could be written more concisely, of course, but I'm aiming for maximal clarity. The two print statements can be uncommented to better see the machinery in action; for example, with prints uncommented:
print(most_common(['goose', 'duck', 'duck', 'goose']))
emits:
SL: [('duck', 1), ('duck', 2), ('goose', 0), ('goose', 3)]
item 'duck', count 2, minind 1
item 'goose', count 2, minind 0
goose
As you see, SL is a list of pairs, each pair an item followed by the item's index in the original list (to implement the key condition that, if the "most common" items with the same highest count are > 1, the result must be the earliest-occurring one).
groupby groups by the item only (via operator.itemgetter). The auxiliary function, called once per grouping during the max computation, receives and internally unpacks a group - a tuple with two items (item, iterable) where the iterable's items are also two-item tuples, (item, original index) - the items of SL.
Then the auxiliary function uses a loop to determine both the count of entries in the group's iterable, and the minimum original index; it returns those as combined "quality key", with the min index sign-changed so the max operation will consider "better" those items that occurred earlier in the original list.
This code could be much simpler if it worried a little less about big-O issues in time and space, e.g....:
def most_common(L):
    groups = itertools.groupby(sorted(L))
    def _auxfun(g):
        item, iterable = g
        return len(list(iterable)), -L.index(item)
    return max(groups, key=_auxfun)[0]
same basic idea, just expressed more simply and compactly... but, alas, an extra O(N) auxiliary space (to embody the groups' iterables to lists) and O(N squared) time (to get the L.index of every item). While premature optimization is the root of all evil in programming, deliberately picking an O(N squared) approach when an O(N log N) one is available just goes too much against the grain of scalability!-)
Finally, for those who prefer "oneliners" to clarity and performance, a bonus 1-liner version with suitably mangled names:-).
from itertools import groupby as g

def most_common_oneliner(L):
    return max(g(sorted(L)), key=lambda kv: (len(list(kv[1])), -L.index(kv[0])))[0]
What you want is known in statistics as mode, and Python of course has a built-in function to do exactly that for you:
>>> from statistics import mode
>>> mode([1, 2, 2, 3, 3, 3, 3, 3, 4, 5, 6, 6, 6])
3
Note that if there is no "most common element", such as cases where the top two are tied, this will raise StatisticsError on Python <= 3.7; on 3.8 onwards it will return the first one encountered.
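To make the tie behavior concrete (a small sketch, assuming Python 3.8+, where the standard library also added statistics.multimode for exactly this case):

```python
from statistics import mode, multimode

data = [1, 2, 2, 3, 3]
# On 3.8+, mode returns the first mode encountered instead of raising:
print(mode(data))       # 2
# multimode returns every value tied for most common, in first-seen order:
print(multimode(data))  # [2, 3]
```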
Without the requirement about the lowest index, you can use collections.Counter for this:
from collections import Counter
a = [1936, 2401, 2916, 4761, 9216, 9216, 9604, 9801]
c = Counter(a)
print(c.most_common(1))  # the one most common element... 2 would mean the 2 most common
[(9216, 2)]  # a list containing one (element, count) tuple
If they are not hashable, you can sort them and do a single loop over the result counting the items (identical items will be next to each other). But it might be faster to make them hashable and use a dict.
def most_common(lst):
    cur_length = 0
    max_length = 0
    cur_i = 0
    max_i = 0
    cur_item = None
    max_item = None
    for i, item in sorted(enumerate(lst), key=lambda x: x[1]):
        if cur_item is None or cur_item != item:
            if cur_length > max_length or (cur_length == max_length and cur_i < max_i):
                max_length = cur_length
                max_i = cur_i
                max_item = cur_item
            cur_length = 1
            cur_i = i
            cur_item = item
        else:
            cur_length += 1
    if cur_length > max_length or (cur_length == max_length and cur_i < max_i):
        return cur_item
    return max_item
This is an O(n) solution.
mydict = {}
cnt, itm = 0, ''
for item in reversed(lst):
    mydict[item] = mydict.get(item, 0) + 1
    if mydict[item] >= cnt:
        cnt, itm = mydict[item], item
print(itm)
(reversed is used to make sure that it returns the lowest index item)
Sort a copy of the list and find the longest run. You can decorate the list before sorting it with the index of each element, and then choose the run that starts with the lowest index in the case of a tie.
A one-liner:
def most_common(lst):
    return max(((item, lst.count(item)) for item in set(lst)), key=lambda a: a[1])[0]
I am doing this using the scipy.stats module and a lambda:
import scipy.stats
lst = [1, 2, 3, 4, 5, 6, 7, 5]
most_freq_val = lambda x: scipy.stats.mode(x)[0][0]
print(most_freq_val(lst))
Result:
5
# use Decorate, Sort, Undecorate to solve the problem

def most_common(iterable):
    # Make a list with tuples: (item, index)
    # The index will be used later to break ties for most common item.
    lst = [(x, i) for i, x in enumerate(iterable)]
    lst.sort()
    # lst_final will also be a list of tuples: (count, index, item)
    # Sorting on this list will find us the most common item, and the index
    # will break ties so the one listed first wins. Count is negative so
    # largest count will have lowest value and sort first.
    lst_final = []
    # Get an iterator for our new list...
    itr = iter(lst)
    # ...and pop the first tuple off. Set up current state vars for loop.
    count = 1
    tup = next(itr)
    x_cur, i_cur = tup
    # Loop over sorted list of tuples, counting occurrences of item.
    for tup in itr:
        # Same item again?
        if x_cur == tup[0]:
            # Yes, same item; increment count
            count += 1
        else:
            # No, new item, so write previous current item to lst_final...
            t = (-count, i_cur, x_cur)
            lst_final.append(t)
            # ...and reset current state vars for loop.
            x_cur, i_cur = tup
            count = 1
    # Write final item after loop ends
    t = (-count, i_cur, x_cur)
    lst_final.append(t)
    lst_final.sort()
    answer = lst_final[0][2]
    return answer

print(most_common(['x', 'e', 'a', 'e', 'a', 'e', 'e']))  # prints 'e'
print(most_common(['goose', 'duck', 'duck', 'goose']))   # prints 'goose'
Simple one-line solution:
moc = max((lst.count(item), item) for item in set(lst))
It will return the most frequent element as a (frequency, element) tuple.
You probably don't need this anymore, but this is what I did for a similar problem. (It looks longer than it is because of the comments.)
itemList = ['hi', 'hi', 'hello', 'bye']
counter = {}
maxItemCount = 0
for item in itemList:
    try:
        # Referencing this will cause a KeyError exception
        # if it doesn't already exist
        counter[item]
        # ... meaning if we get this far it didn't happen so
        # we'll increment
        counter[item] += 1
    except KeyError:
        # If we got a KeyError we need to create the
        # dictionary key
        counter[item] = 1
    # Keep overwriting maxItemCount with the latest number,
    # if it's higher than the existing itemCount
    if counter[item] > maxItemCount:
        maxItemCount = counter[item]
        mostPopularItem = item
print(mostPopularItem)
Building on Luiz's answer, but satisfying the "in case of draws the item with the lowest index should be returned" condition:
from statistics import mode, StatisticsError

def most_common(l):
    try:
        return mode(l)
    except StatisticsError as e:
        # will only return the first element if no unique mode found
        if 'no unique mode' in e.args[0]:
            return l[0]
        # this is for "StatisticsError: no mode for empty data"
        # after calling mode([])
        raise
Example:
>>> most_common(['a', 'b', 'b'])
'b'
>>> most_common([1, 2])
1
>>> most_common([])
StatisticsError: no mode for empty data
ans = [1, 1, 0, 0, 1, 1]
all_ans = {ans.count(ans[i]): ans[i] for i in range(len(ans))}
print(all_ans)           # {4: 1, 2: 0}
max_key = max(all_ans.keys())   # 4
print(all_ans[max_key])  # 1
# This will return the list sorted by frequency:
def orderByFrequency(list):
    listUniqueValues = np.unique(list)
    listQty = []
    listOrderedByFrequency = []
    for i in range(len(listUniqueValues)):
        listQty.append(list.count(listUniqueValues[i]))
    for i in range(len(listQty)):
        index_bigger = np.argmax(listQty)
        for j in range(listQty[index_bigger]):
            listOrderedByFrequency.append(listUniqueValues[index_bigger])
        listQty[index_bigger] = -1
    return listOrderedByFrequency

# And this will return a list with the most frequent values in a list:
def getMostFrequentValues(list):
    if len(list) <= 1:
        return list
    list_most_frequent = []
    list_ordered_by_frequency = orderByFrequency(list)
    list_most_frequent.append(list_ordered_by_frequency[0])
    frequency = list_ordered_by_frequency.count(list_ordered_by_frequency[0])
    index = 0
    while index < len(list_ordered_by_frequency):
        index = index + frequency
        if index < len(list_ordered_by_frequency):
            testValue = list_ordered_by_frequency[index]
            testValueFrequency = list_ordered_by_frequency.count(testValue)
            if testValueFrequency == frequency:
                list_most_frequent.append(testValue)
            else:
                break
    return list_most_frequent
#tests:
print(getMostFrequentValues([]))
print(getMostFrequentValues([1]))
print(getMostFrequentValues([1,1]))
print(getMostFrequentValues([2,1]))
print(getMostFrequentValues([2,2,1]))
print(getMostFrequentValues([1,2,1,2]))
print(getMostFrequentValues([1,2,1,2,2]))
print(getMostFrequentValues([3,2,3,5,6,3,2,2]))
print(getMostFrequentValues([1,2,2,60,50,3,3,50,3,4,50,4,4,60,60]))
Results:
[]
[1]
[1]
[1, 2]
[2]
[1, 2]
[2]
[2, 3]
[3, 4, 50, 60]
Here:
def most_common(l):
    max_count = 0
    maxitem = None
    for x in set(l):
        count = l.count(x)
        if count > max_count:
            max_count = count
            maxitem = x
    return maxitem
I have a vague feeling there is a method somewhere in the standard library that will give you the count of each element, but I can't find it.
This is the obvious slow solution (O(n^2)) if neither sorting nor hashing is feasible, but equality comparison (==) is available:
def most_common(items):
    if not items:
        raise ValueError
    fitems = []
    best_idx = 0
    for item in items:
        item_missing = True
        i = 0
        for fitem in fitems:
            if fitem[0] == item:
                fitem[1] += 1
                d = fitem[1] - fitems[best_idx][1]
                if d > 0 or (d == 0 and fitems[best_idx][2] > fitem[2]):
                    best_idx = i
                item_missing = False
                break
            i += 1
        if item_missing:
            fitems.append([item, 1, i])
    # best_idx indexes fitems (not the input list)
    return fitems[best_idx][0]
But making your items hashable or sortable (as recommended by other answers) would almost always make finding the most common element faster if the length of your list (n) is large. O(n) on average with hashing, and O(n*log(n)) at worst for sorting.
>>> li = ['goose', 'duck', 'duck']
>>> def foo(li):
        st = set(li)
        mx = -1
        for each in st:
            temp = li.count(each)
            if mx < temp:
                mx = temp
                h = each
        return h

>>> foo(li)
'duck'
I needed to do this in a recent program. I'll admit it, I couldn't understand Alex's answer, so this is what I ended up with.
def mostPopular(l):
    mpEl = None
    mpIndex = 0
    mpCount = 0
    curEl = None
    curCount = 0
    for i, el in sorted(enumerate(l), key=lambda x: (x[1], x[0]), reverse=True):
        curCount = curCount + 1 if el == curEl else 1
        curEl = el
        if curCount > mpCount \
           or (curCount == mpCount and i < mpIndex):
            mpEl = curEl
            mpIndex = i
            mpCount = curCount
    return mpEl, mpCount, mpIndex
I timed it against Alex's solution and it's about 10-15% faster for short lists, but once you go over 100 elements or more (tested up to 200000) it's about 20% slower.
def most_frequent(List):
    counter = 0
    num = List[0]
    for i in List:
        curr_frequency = List.count(i)
        if curr_frequency > counter:
            counter = curr_frequency
            num = i
    return num

List = [2, 1, 2, 2, 1, 3]
print(most_frequent(List))
Hi, this is a very simple solution; note, though, that because it calls list.count inside the loop, it is O(n^2), not linear:
L = ['goose', 'duck', 'duck']

def most_common(L):
    current_winner = 0
    max_repeated = None
    for i in L:
        amount_times = L.count(i)
        if amount_times > current_winner:
            current_winner = amount_times
            max_repeated = i
    return max_repeated

print(most_common(L))
'duck'
Here max_repeat_num is the element in the list that repeats most often:
numbers = [1, 3, 7, 4, 3, 0, 3, 6, 3]
max_repeat_num = max(numbers, key=numbers.count)  # which number appears most frequently
max_repeat = numbers.count(max_repeat_num)        # how many times
print(f"the number {max_repeat_num} is repeated {max_repeat} times")
def mostCommonElement(list):
    count = {}      # dict holder
    max = 0         # keep track of the count by key
    result = None   # holder when count is greater than max
    for i in list:
        if i not in count:
            count[i] = 1
        else:
            count[i] += 1
        if count[i] > max:
            max = count[i]
            result = i
    return result

mostCommonElement(["a", "b", "a", "c"])  # -> "a"
A majority element is one which appears more than N/2 times in the array, where N is len(array); note this is stricter than the most common element, and a list may have no majority element at all. The technique below finds it in O(n) time, though the Counter uses O(k) auxiliary space for k distinct elements.
from collections import Counter

def majorityElement(arr):
    majority_elem = Counter(arr)
    size = len(arr)
    for key, val in majority_elem.items():
        if val > size / 2:
            return key
    return -1
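For contrast (a sketch I am adding, not part of the original answer): a majority check that genuinely uses O(1) auxiliary space is the Boyer-Moore voting algorithm, which keeps only a candidate and a counter instead of a full frequency table.

```python
# Boyer-Moore majority vote: O(n) time, O(1) extra space.
def majority_element(arr):
    candidate, count = None, 0
    for x in arr:
        if count == 0:
            candidate = x
        count += 1 if x == candidate else -1
    # Second pass verifies the candidate really exceeds len(arr) / 2.
    if candidate is not None and arr.count(candidate) > len(arr) / 2:
        return candidate
    return -1

print(majority_element([3, 3, 4, 2, 3, 3]))  # 3 (appears 4 of 6 times)
print(majority_element([1, 2, 3]))           # -1 (no majority)
```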
def most_common(lst):
    if max([lst.count(i) for i in lst]) == 1:
        return False
    else:
        return max(set(lst), key=lst.count)
def popular(L):
    C = {}
    for a in L:
        C[a] = L.count(a)
    for b in C.keys():
        if C[b] == max(C.values()):
            return b

L = [2, 3, 5, 3, 6, 3, 6, 3, 6, 3, 7, 467, 4, 7, 4]
print(popular(L))

How to write-to/read-from Ctypes variable length array of ints

I came across this link but am still struggling to construct an answer.
This is what one of the complex structs that I have looks like. This is actually a deep nested struct within other structs :)
/*
* A domain consists of a variable length array of 32-bit unsigned integers.
* The domain_val member of the structure below is the variable length array.
* The domain_count is the number of elements in the domain_val array.
*/
typedef struct domain {
    uint32_t domain_count;
    uint32_t *domain_val;
} domain_t;
The test code in C is doing something like this:
uint32_t domain_seg[4] = { 1, 9, 34, 99 };
domain_val = domain_seg;
The struct defined in python is
class struct_domain(ctypes.Structure):
    _pack_ = True # source:False
    _fields_ = [
        ('domain_count', ctypes.c_uint32),
        ('PADDING_0', ctypes.c_ubyte * 4),
        ('domain_val', POINTER_T(ctypes.c_uint32)),
    ]
How do I populate domain_val in that struct? Can I use a Python list?
I am thinking of something along the lines of dom_val = c.create_string_buffer(c.sizeof(c.c_uint32) * domain_count), but then how do I iterate through the buffer to populate or read the values?
Will dom_val[0], dom_val[1] iterate through the buffer with the correct length? Maybe I need some typecast while iterating to write/read the correct number of bytes.
Here's one way to go about it:
import ctypes as ct

class Domain(ct.Structure):
    _fields_ = (('domain_count', ct.c_uint32),
                ('domain_val', ct.POINTER(ct.c_uint32)))

    def __init__(self, data):
        size = len(data)
        # Create array of fixed size, initialized with the data
        self.domain_val = (ct.c_uint32 * size)(*data)
        self.domain_count = size

    # Note you can slice the pointer to the correct length to retrieve the data.
    def __repr__(self):
        return f'Domain({self.domain_val[:self.domain_count]})'

x = Domain([1, 9, 34, 99])
print(x)
# Just like in C, you can iterate beyond the end
# of the array and create undefined behavior,
# so make sure to index only within the bounds of the array.
for i in range(x.domain_count):
    print(x.domain_val[i])
Output:
Domain([1, 9, 34, 99])
1
9
34
99
To make it safer, you could add a property that casts the single-element pointer to a pointer to a sized array of elements, so that length checking happens:
import ctypes as ct

class Domain(ct.Structure):
    _fields_ = (('_domain_count', ct.c_uint32),
                ('_domain_val', ct.POINTER(ct.c_uint32)))

    def __init__(self, data):
        size = len(data)
        self._domain_val = (ct.c_uint32 * size)(*data)
        self._domain_count = size

    def __repr__(self):
        return f'Domain({self._domain_val[:self._domain_count]})'

    @property
    def domain(self):
        return ct.cast(self._domain_val, ct.POINTER(ct.c_uint32 * self._domain_count)).contents

x = Domain([1, 9, 34, 99])
print(x)
for i in x.domain:  # now knows the size
    print(i)
x.domain[2] = 44    # Can mutate the array,
print(x)            # and it reflects in the data.
x.domain[4] = 5     # IndexError!
Output:
Domain([1, 9, 34, 99])
1
9
34
99
Domain([1, 9, 44, 99])
Traceback (most recent call last):
File "C:\demo\test.py", line 27, in <module>
x.domain[4] = 5
IndexError: invalid index
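On the create_string_buffer idea from the question (a sketch I am adding, assuming only documented ctypes calls): a raw buffer can be viewed as an array of uint32s with from_buffer, which makes populating and iterating it straightforward, and a plain pointer like the struct field can be recovered with cast.

```python
import ctypes as ct

n = 4
# Raw byte buffer sized for n uint32s, as the question suggests:
buf = ct.create_string_buffer(ct.sizeof(ct.c_uint32) * n)
# View the buffer as an array of n uint32s (shares memory with buf):
arr = (ct.c_uint32 * n).from_buffer(buf)
arr[:] = [1, 9, 34, 99]          # populate via normal indexing/slicing
print(list(arr))                 # [1, 9, 34, 99]
# A plain pointer to the first element, as the struct field would hold:
p = ct.cast(arr, ct.POINTER(ct.c_uint32))
print([p[i] for i in range(n)])  # reading through the pointer works too
```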

How to search an unordered list for a key using reduce?

I have a basic reduce function and I want to reduce a list in order to check if an item is in the list. I have defined the function below where f is a comparison function, id_ is the item I am searching for, and a is the list. For example, reduce(f, 2, [1, 6, 2, 7]) would return True since 2 is in the list.
def reduce(f, id_, a):
    if len(a) == 0:
        return id_
    elif len(a) == 1:
        return a[0]
    else:
        # can call these in parallel
        res = f(reduce(f, id_, a[:len(a)//2]),
                reduce(f, id_, a[len(a)//2:]))
        return res
I tried passing it a comparison function:
def isequal(x, element):
    if x == True:     # if element has already been found in list -> True
        return True
    if x == element:  # if key is equal to element -> True
        return True
    else:             # o.w. -> False
        return False
I realize this does not work because x is not the key I am searching for. I get how reduce works with summing and products, but I am failing to see how this function would even know what the key is to check if the next element matches.
I apologize, I am a bit new to this. Thanks in advance for any insight, I greatly appreciate it!
Based on your example, the problem you seem to be trying to solve is determining whether a value is or is not in a list. In that case reduce is probably not the best way to go about that. To check if a particular value is in a list or not, Python has a much simpler way of doing that:
my_list = [1, 6, 2, 7]
print(2 in my_list)
print(55 in my_list)
True
False
Edit: Given OP's comment that they were required to use reduce to solve the problem, the code below will work, but I'm not proud of it. ;^) To see how reduce is intended to be used, here is a good source of information.
Example:
from functools import reduce

def test_match(match_params, candidate):
    pattern, found_match = match_params
    if not found_match and pattern == candidate:
        match_params = (pattern, True)
    return match_params

num_list = [1, 2, 3, 4, 5]
_, found_match = reduce(test_match, num_list, (2, False))
print(found_match)
_, found_match = reduce(test_match, num_list, (55, False))
print(found_match)
Output:
True
False
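There is also a way to keep the divide-and-conquer reduce from the question (a sketch under that assumption; contains is my name): map each element to "does it equal the key?" first, so the combining function only ever sees booleans and a plain or works as f, with False as the identity.

```python
# The divide-and-conquer reduce from the question, unchanged:
def reduce(f, id_, a):
    if len(a) == 0:
        return id_
    elif len(a) == 1:
        return a[0]
    else:
        # the two halves can be combined in parallel
        return f(reduce(f, id_, a[:len(a)//2]),
                 reduce(f, id_, a[len(a)//2:]))

def contains(key, a):
    # Map to booleans first, then reduce with `or` (identity: False).
    return reduce(lambda x, y: x or y, False, [x == key for x in a])

print(contains(2, [1, 6, 2, 7]))   # True
print(contains(55, [1, 6, 2, 7]))  # False
```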

Is the rear item in a Queue the last item added or the item at the end of a Queue?

My professor wrote a Queue class that uses arrays. I was running multiple test cases against it and got confused by one specific part. I want to figure out if the last item added is the rear of the queue. Let's say I enqueued 8 elements:
[1, 2, 3, 4, 5, 6, 7, 8]
Then I dequeued. And now:
[None, 2, 3, 4, 5, 6, 7, 8]
I enqueued 9 onto the Queue and it goes to the front. However, when I called my method that returns the rear item of the queue, q.que_rear, it returned 8. I thought the rear item would be 9? Since it was the last item added.
Here is how I tested it in case anyone is confused:
>>> q = ArrayQueue()
>>> q.enqueue(1)
>>> q.enqueue(2)
>>> q.enqueue(3)
>>> q.enqueue(4)
>>> q.data
[1, 2, 3, 4, None, None, None, None]
>>> q.dequeue()
1
>>> q.enqueue(5)
>>> q.enqueue(6)
>>> q.enqueue(7)
>>> q.enqueue(8)
>>> q.data
[None, 2, 3, 4, 5, 6, 7, 8]
>>> q.enqueue(9)
>>> q.data
[9, 2, 3, 4, 5, 6, 7, 8]
>>> q.que_rear()
Rear item is 8
EDIT
I just want to know what’s supposed to be the “rear of the Queue”? The last element added, or the element at the end of the list? In this case I showed, is it supposed to be 8 or 9?
Here is my code:
class ArrayQueue:
    INITIAL_CAPACITY = 8

    def __init__(self):
        self.data = [None] * ArrayQueue.INITIAL_CAPACITY
        self.rear = ArrayQueue.INITIAL_CAPACITY - 1
        self.num_of_elems = 0
        self.front_ind = None

    # O(1) time
    def __len__(self):
        return self.num_of_elems

    # O(1) time
    def is_empty(self):
        return len(self) == 0

    # Amortized worst case running time is O(1)
    def enqueue(self, elem):
        if self.num_of_elems == len(self.data):
            self.resize(2 * len(self.data))
        if self.is_empty():
            self.data[0] = elem
            self.front_ind = 0
            self.num_of_elems += 1
        else:
            back_ind = (self.front_ind + self.num_of_elems) % len(self.data)
            self.data[back_ind] = elem
            self.num_of_elems += 1

    def dequeue(self):
        if self.is_empty():
            raise Exception("Queue is empty")
        elem = self.data[self.front_ind]
        self.data[self.front_ind] = None
        self.front_ind = (self.front_ind + 1) % len(self.data)
        self.num_of_elems -= 1
        if self.is_empty():
            self.front_ind = None
        # As with dynamic arrays, we shrink the underlying array (by half)
        # if we are using less than 1/4 of the capacity
        elif len(self) < len(self.data) // 4:
            self.resize(len(self.data) // 2)
        return elem

    # O(1) running time
    def first(self):
        if self.is_empty():
            raise Exception("Queue is empty")
        return self.data[self.front_ind]

    def que_rear(self):
        if self.is_empty():
            print("Queue is empty")
        print("Rear item is", self.data[self.rear])

    # Resizing takes time O(n) where n is the number of elements in the queue
    def resize(self, new_capacity):
        old_data = self.data
        self.data = [None] * new_capacity
        old_ind = self.front_ind
        for new_ind in range(self.num_of_elems):
            self.data[new_ind] = old_data[old_ind]
            old_ind = (old_ind + 1) % len(old_data)
        self.front_ind = 0
The que_rear function seems to be added post-hoc in an attempt to understand how the internal circular queue operates. But notice that self.rear (the variable que_rear uses to determine what the "rear" is) is a meaningless garbage variable, in spite of its promising name. In the initializer, it's set to the internal array length and never gets touched again, so it's just pure luck if it prints out the rear or anything remotely related to the rear.
The true rear is actually the variable back_ind, which is computed on the spot whenever enqueue is called, which is the only time it matters what the back is. Typically, queue data structures don't permit access to the back or rear (if it did, that would make it a deque, or double-ended queue), so all of this is irrelevant and implementation-specific from the perspective of the client (the code which is using the class to do a task as a black box, without caring how it works).
Here's a function that gives you the actual rear. Unsurprisingly, it's pretty much a copy of part of enqueue:
def queue_rear(self):
    if self.is_empty():
        raise Exception("Queue is empty")
    back_ind = (self.front_ind + self.num_of_elems - 1) % len(self.data)
    return self.data[back_ind]
Also, I understand this class is likely for educational purposes, but I'm obliged to mention that in a real application, use collections.deque for all your queueing needs (unless you need a synchronized queue).
Interestingly, CPython doesn't use a circular array to implement the deque, but Java does in its ArrayDeque class, which is worth a read.
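To make the answer to the question concrete (a small sketch I am adding, replaying the poster's test with the standard library's deque, where the rear really is the last item added):

```python
from collections import deque

q = deque([1, 2, 3, 4, 5, 6, 7, 8])
q.popleft()       # dequeue -> 1
q.append(9)       # enqueue 9
print(q[-1])      # rear is 9, the last item added
print(q[0])       # front is 2
```

There is no wraparound to reason about: deque hides the storage layout, so "rear" always means "most recently enqueued".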

Python 3.x - function args type-testing

I started learning Python 3.x some time ago and I wrote a very simple piece of code which adds numbers or concatenates lists, tuples and dicts:
X = 'sth'

def adder(*vargs):
    if len(vargs) == 0:
        print('No args given. Stopping...')
    else:
        L = list(enumerate(vargs))
        for i in range(len(L) - 1):
            if type(L[i][1]) != type(L[i + 1][1]):
                global X
                X = 'bad'
                break
        if X == 'bad':
            print('Args have different types. Stopping...')
        else:
            if type(L[0][1]) == int:      # num
                temp = 0
                for i in range(len(L)):
                    temp += L[i][1]
                print('Sum is equal to:', temp)
            elif type(L[0][1]) == list:   # list
                A = []
                for i in range(len(L)):
                    A += L[i][1]
                print('List made is:', A)
            elif type(L[0][1]) == tuple:  # tuple
                A = []
                for i in range(len(L)):
                    A += list(L[i][1])
                print('Tuple made is:', tuple(A))
            elif type(L[0][1]) == dict:   # dict
                A = L[0][1]
                for i in range(len(L)):
                    A.update(L[i][1])
                print('Dict made is:', A)

adder(0, 1, 2, 3, 4, 5, 6, 7)
adder([1,2,3,4], [2,3], [5,3,2,1])
adder((1,2,3), (2,3,4), (2,))
adder(dict(a = 2, b = 433), dict(c = 22, d = 2737))
My main issue with this is the way I am getting out of the function when args have different types, using the 'X' global. I thought a while about it, but I can't see an easier way of doing this (I can't simply put the else under the for, because the results will be printed a few times; probably I'm messing something up with the continue and break usage).
I'm sure I'm missing an easy way to do this, but I can't get it.
Thank you for any replies. If you have any advice about any other code piece here, I would be very grateful for additional help. I probably have a lot of bad non-Pythonian habits coming from earlier C++ coding.
Here are some changes I made that I think clean it up a bit and get rid of the need for the global variable.
def adder(*vargs):
    if len(vargs) == 0:
        return None  # could raise ValueError
    mytype = type(vargs[0])
    if not all(type(x) == mytype for x in vargs):
        raise ValueError('Args have different types.')
    if mytype is int:
        print('Sum is equal to:', sum(vargs))
    elif mytype is list or mytype is tuple:
        out = []
        for item in vargs:
            out += item
        if mytype is list:
            print('List made is:', out)
        else:
            print('Tuple made is:', tuple(out))
    elif mytype is dict:
        out = {}
        for i in vargs:
            out.update(i)
        print('Dict made is:', out)

adder(0, 1, 2, 3, 4, 5, 6, 7)
adder([1,2,3,4], [2,3], [5,3,2,1])
adder((1,2,3), (2,3,4), (2,))
adder(dict(a = 2, b = 433), dict(c = 22, d = 2737))
I also made some other improvements that I think are a bit more 'pythonic'. For instance
for item in lst:
    print(item)
instead of
for i in range(len(lst)):
    print(lst[i])
In a function like this, if there are illegal arguments you would commonly short-circuit and just throw a ValueError:
if bad_condition:
    raise ValueError('Args have different types.')
Just for contrast, here is another version that feels more pythonic to me (reasonable people might disagree with me, which is OK by me).
The principal differences are that a) type clashes are left to the operator combining the arguments, b) no assumptions are made about the types of the arguments, and c) the result is returned instead of printed. This allows combining different types in the cases where that makes sense (e.g, combine({}, zip('abcde', range(5)))).
The only assumption is that the operator used to combine the arguments is either add or a member function of the first argument's type named update.
I prefer this solution because it does minimal type checking, and uses duck-typing to allow valid but unexpected use cases.
from functools import reduce
from operator import add

def combine(*args):
    if not args:
        return None
    out = type(args[0])()
    return reduce((getattr(out, 'update', None) and (lambda d, u: [d.update(u), d][1]))
                  or add, args, out)

print(combine(0, 1, 2, 3, 4, 5, 6, 7))
print(combine([1,2,3,4], [2,3], [5,3,2,1]))
print(combine((1,2,3), (2,3,4), (2,)))
print(combine(dict(a = 2, b = 433), dict(c = 22, d = 2737)))
print(combine({}, zip('abcde', range(5))))
print(combine({}, zip('abcde', range(5))))
