k-combinations in Python 3.x - python-3.x

What is the most efficient way to generate k-tuples with all k-combinations from a given python set? Is there an appropriate built-in function? Something tells me it should be possible with a two-line for loop.
P.S. I did conduct a search and found various entries on the topic "combinations from lists, etc. in Python", but all proposed solutions seem rather un-pythonic. I am hoping for a mind-blowing, idiomatic Python expression.

itertools has all of those types of functions:
import itertools
for combination in itertools.combinations(iterable, k):
    print(combination)
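For example, a minimal sketch with a sample set (the set and k are assumptions for illustration; note that sets are unordered, so sort first if you need deterministic output):

```python
import itertools

# All 2-element combinations of a sample set.
# Sorting first gives a deterministic, readable order.
s = {1, 2, 3, 4}
pairs = list(itertools.combinations(sorted(s), 2))
print(pairs)  # [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Each combination is emitted as a k-tuple, and the total count is n choose k.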

Related

How did numpy add the @ operator?

How did they do it? Can I also add my own new operators to Python 3? I searched on google but I did not find any information on this.
No, you can't add your own. The numpy team cooperated with the core Python team to add @ to the core language. It's in the core Python docs (for example, in the operator precedence table), although core Python doesn't use it for anything in the standard CPython distribution. The core distribution nevertheless recognizes the operator symbol and generates an appropriate BINARY_MATRIX_MULTIPLY opcode for it:
>>> import dis
>>> def f(a, b):
...     return a @ b
>>> dis.dis(f)
  2           0 LOAD_FAST                0 (a)
              2 LOAD_FAST                1 (b)
              4 BINARY_MATRIX_MULTIPLY
              6 RETURN_VALUE
Answering your second question,
Can I also add my own new operators to Python 3?
A similar question with some very interesting answers can be found here, Python: defining my own operators?
At PyCon US 2022, Sebastiaan Zeeff delivered a talk showing how to implement a new operator. He warns that the implementation is purely educational, though. However, it turns out you actually can implement a new operator yourself! Locally, of course :). You can find his talk here, and his code repository here. And if you think your operator could enhance the Python language, why not propose a PEP for it?
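While you can't add new operator symbols, you can give the existing @ operator meaning for your own types by defining __matmul__. A minimal sketch (the Vec class here is hypothetical, purely for illustration):

```python
# Sketch: supporting @ on a custom class via the __matmul__ dunder.
class Vec:
    def __init__(self, xs):
        self.xs = xs

    def __matmul__(self, other):
        # Interpret @ as a dot product for this toy class.
        return sum(a * b for a, b in zip(self.xs, other.xs))

print(Vec([1, 2, 3]) @ Vec([4, 5, 6]))  # 32
```

This is exactly the hook numpy uses: ndarray defines __matmul__, so a @ b performs matrix multiplication.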

Way to use bisect module for sets in python

I was looking for something similar to C++'s lower_bound() function for sets, in Python.
The task is to have a data structure that inserts elements in sorted order, stores only a single instance of each distinct value, and returns the left neighbor of a given value, with both operations in O(log n) worst-case time.
Something similar to the bisect module for lists, but with efficient insertion, may work.
Sets are unordered, and the standard library does not offer tree structures. Maybe you could look at sortedcontainers (a 3rd-party lib): http://www.grantjenks.com/docs/sortedcontainers/ ; it might offer a good approach to your problem.
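A sketch using only the stdlib bisect module on a plain sorted list (the helper names are my own): lookups are O(log n), but note that insertion is O(n) because list.insert shifts elements, so for true O(log n) inserts you'd want something like sortedcontainers.SortedList.

```python
import bisect

def insert_unique(sorted_list, value):
    # Insert value keeping the list sorted; skip duplicates.
    i = bisect.bisect_left(sorted_list, value)
    if i == len(sorted_list) or sorted_list[i] != value:
        sorted_list.insert(i, value)  # O(n): shifts elements

def left_neighbor(sorted_list, value):
    # Largest element strictly less than value, or None.
    i = bisect.bisect_left(sorted_list, value)
    return sorted_list[i - 1] if i > 0 else None

xs = []
for v in [5, 2, 8, 2, 5]:
    insert_unique(xs, v)
print(xs)                    # [2, 5, 8]
print(left_neighbor(xs, 6))  # 5
```

bisect_left is the direct analogue of C++'s lower_bound; the left neighbor is simply the element just before that position.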

Time Complexity for Python built-ins?

Is there any good reference resource to know the time complexity of Python's built-in functions like dict.fromkeys(), .lower()? I found links like this UCI resource which lists time-complexity for basic list & set operations but of course, not for all built-ins. I also found Python Reference - The Right Way but most of references have #TODO for time complexity.
I also tried reading the source code of python built-ins to figure out how the functions like dict.fromkeys() are implemented but felt lost.
This is a great place to start:
https://wiki.python.org/moin/TimeComplexity
It says that Get Item is O(1) and Iteration is O(n) (average case).
So with .fromkeys() you iterate over the keys, then insert each one into a new dict while also setting values; that's O(n) iteration plus n average-case O(1) insertions, i.e. O(n) overall (a constant factor like 2n still collapses to O(n)), where n is the number of keys.
Sorry that I can't offer more than conjecture, but hopefully that link is helpful.
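To make the conjecture concrete, here is a sketch of what dict.fromkeys(iterable, value) effectively does (the helper name is my own, not CPython's actual implementation, which is written in C):

```python
# Sketch: one pass over the keys, so roughly O(n) average-case inserts.
def fromkeys_sketch(keys, value=None):
    d = {}
    for k in keys:       # O(n) iteration
        d[k] = value     # O(1) average-case insertion
    return d

print(fromkeys_sketch(["a", "b", "c"], 0))  # {'a': 0, 'b': 0, 'c': 0}
print(dict.fromkeys(["a", "b", "c"], 0))    # same result
```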

Python built-in functions time/space complexity

For python built-in functions such as:
sorted()
min()
max()
what are time/space complexities, what algorithms are used?
Is it always advisable to use the built-in functions of python?
As mentioned in comments, sorted uses Timsort (see this post), which is O(n log n) and a stable sort. max and min each run in Θ(n). But if you need both in the same pass, you can find them using about 3n/2 comparisons instead of 2n (although asymptotically both approaches are O(n)). To learn more about the method, see this post.
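The 3n/2 trick can be sketched as follows: process elements in pairs, compare the two against each other first, then compare only the smaller against the running min and the larger against the running max, about 3 comparisons per 2 elements instead of 4 (a sketch for non-empty sequences that don't contain None, since None is used as the exhaustion sentinel):

```python
def min_max(seq):
    it = iter(seq)
    lo = hi = next(it)  # raises StopIteration on empty input
    for a in it:
        b = next(it, None)      # None marks an odd leftover element
        if b is None:
            lo, hi = min(lo, a), max(hi, a)
            break
        # 1 comparison orders the pair...
        small, big = (a, b) if a <= b else (b, a)
        # ...then 1 comparison each against the running min and max.
        if small < lo:
            lo = small
        if big > hi:
            hi = big
    return lo, hi

print(min_max([3, 1, 4, 1, 5, 9, 2, 6]))  # (1, 9)
```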

What are the advantages and disadvantages of using a list comprehension in Python 2.5-2.6?

I've heard that list comprehensions can be slow sometimes, but I'm not sure why? I'm new to Python (coming from a C# background), and I'd like to know more about when to use a list comprehension versus a for loop. Any ideas, suggestions, advice, or examples? Thanks for all the help.
Use a list comprehension (LC) when it's appropriate.
For example, if you are passing any ol' iterable to a function, a generator expression (genexpr) is often more appropriate, and a LC is wasteful:
"".join([str(n) for n in xrange(10)])
# becomes
"".join(str(n) for n in xrange(10))
Or, if you don't need a full list, a for-loop with a break statement would be your choice. The itertools module also has tools, such as takewhile.
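For instance, takewhile consumes items lazily and stops as soon as the predicate fails, much like a loop with a break, without building a full list first (the sample data is an assumption for illustration; shown in Python 3 syntax):

```python
import itertools

# Take the leading elements while they satisfy the predicate,
# then stop consuming the iterable entirely.
nums = [1, 4, 6, 4, 1]
prefix = list(itertools.takewhile(lambda n: n < 5, nums))
print(prefix)  # [1, 4]
```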
