Firstly, thank you for taking the time to read and input. It is greatly appreciated.
Question: What kind of approach can we take to keep the same public API of a class that currently uses multiple mixins, but refactor it internally to be composed of objects that do the same work as the mixins? Autocomplete is a must, so runtime dynamics such as hacking things on via __getattr__ are out. I know this depends on the runtime environment, i.e. IPython vs PyCharm etc.; for the sake of this question, assume PyCharm, which I think cannot fully leverage __dir__.
Accompanying Information:
I am writing a little assertion library in python and I have a core class which is instantiated with a value and subsequently inherits various assertion capabilities against that value via a growing number of mixin classes:
class Asserto(StringMixin, RegexMixin):
    def __init__(self, value: typing.Any, type_of: str = AssertTypes.HARD, description: typing.Optional[str] = None):
        self.value = value
        self.type_of = type_of
        self.description = description
These mixin classes offer various assertion methods for particular types, here is a quick example of one:
from __future__ import annotations
class StringMixin:
    # Relies on self.value and self.error provided by the class mixing this in.
    def ends_with(self, suffix: str) -> StringMixin:
        if not self.value.endswith(suffix):
            self.error(f"{self.value} did not end with {suffix}")
        return self

    def starts_with(self, prefix: str) -> StringMixin:
        if not self.value.startswith(prefix):
            self.error(f"{self.value} did not start with {prefix}")
        return self
I would like to refactor the Asserto class to compose itself of various implementations of some sort of Assertable interface, rather than cobble together a god class here with mixins; I'm likely to have 10+ mixins by the time I am finished.
Is there a way to achieve the same public-facing API as this mixin setup, so that client code has access to everything through Asserto(value).check_something(...), but using composition internally?
I could define every single method in the Asserto class as a simple delegate to the appropriate concrete object internally, but then I am just making a massive god class anyway, and the composition feels like a pointless endeavour in that instance.
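For concreteness, a minimal sketch of that delegation approach (StringAssertions and the error method here are hypothetical stand-ins, not the real library's code):

import typing

class StringAssertions:
    """Concrete object doing the work previously done by StringMixin."""

    def __init__(self, asserto: "Asserto") -> None:
        self._asserto = asserto

    def ends_with(self, suffix: str) -> None:
        if not self._asserto.value.endswith(suffix):
            self._asserto.error(f"{self._asserto.value} did not end with {suffix}")

class Asserto:
    def __init__(self, value: typing.Any) -> None:
        self.value = value
        self._strings = StringAssertions(self)  # composed, not inherited

    def error(self, message: str) -> None:  # stand-in for the real error handling
        raise AssertionError(message)

    # One hand-written delegate per former mixin method; this boilerplate
    # is exactly what makes the composition feel pointless at scale.
    def ends_with(self, suffix: str) -> "Asserto":
        self._strings.ends_with(suffix)
        return self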
For example, in client code I'd like all the current mixin methods to be available on an Asserto instance, with autocomplete:
def test_something():
    Asserto("foo").ends_with("oo")
Thank you for your time. Perhaps using the mixin approach is the correct way here, but it feels kind of clunky.
I've built a class to ask a user a question, based on a type.
class Question:
    def __init__(self, subject):
        self.subject = subject
        self.question = f"Enter the {subject} to be created. You may end this by typing 'DONE':\n"
        self.still_needed = True

    def ask_question(self):
        ans_list = []
        running = True
        while running:
            var = input(f"Enter {self.subject}?\n")
            if var.lower() == 'done':
                running = False
            else:
                ans_list.append(var)
        return ans_list
The idea is to have a question model, to create lists of items.
This seems to work well with the following code in main.
roles = Question(subject="role").ask_question()
This creates a list from the Question class and uses its method ask_question to generate the list. As far as I can tell the object is then destroyed, as it's not saved to a variable.
My question, being new to Python and OOP, is: does this seem like a solid and non-confusing way, or should I refactor? If so, what does the community suggest?
MY OPINION
I guess it depends on you. For one, one of the main purposes of using a class is to create an instance with it later on. Classes are objects, or "categories" as I like to call them, that you use when there are distinctive types of instances in your project.
Given your code snippet, I can't really suggest anything; I don't know the usage of self.question and self.still_needed. However, if I were to base my opinion on just this part: roles = Question(subject="role").ask_question(), then I'd definitely go with using a function instead. As you've said,
As far as I can tell the object is then destroyed, as it's not saved to a variable.
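For instance, a plain-function version of the same behaviour (a sketch based on your snippet) could look like:

def ask_question(subject):
    """Collect answers about `subject` until the user types 'done'."""
    ans_list = []
    while True:
        var = input(f"Enter {subject}?\n")
        if var.lower() == 'done':
            return ans_list
        ans_list.append(var)

roles = ask_question("role")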
ALTERNATIVE SOLUTION
Use decorators → the one with the @ symbol.
In this case, @staticmethod is the way to go!
What are static methods? The staticmethod decorator is a way to create a function in a class. So instead of it becoming a method, it can be treated as a plain function (without the self parameter). This also means that a static method is bound to the class rather than to its objects. Consequently, static methods do not depend on objects (hence, you don't need to create an object to use them). Example:
class SomeMathStuff:
    @staticmethod
    def AddSomeNumbers(iterable):
        return sum(iterable)

result = SomeMathStuff.AddSomeNumbers([1, 2, 3])
# result = 6
As you can see, I did not need to create an object; I just needed to call the method through its class. Word of warning: most Python programmers argue that this is un-Pythonic, but I wouldn't worry too much about it. Hell, even I use these methods sometimes. In my defense, this is a good and efficient way to organize your project: you can use such methods anywhere and "categorize" them in whatever classes you find suitable.
Anyway, this is all I have! I apologize if I misinformed you in any way.
ADDITIONAL INFORMATION ... in case I wasn't the best teacher
https://www.programiz.com/python-programming/methods/built-in/staticmethod
Difference between staticmethod and classmethod
https://softwareengineering.stackexchange.com/questions/171296/staticmethod-vs-module-level-function
Given the following example:
class Container:
    def __init__(self, var):
        self.var = var

class Test:
    def __init__(self):
        self.var = Container("123")
Is it possible to overload the type() function such that type(Test().var) would yield str rather than Container?
EDIT: I am using the Container class in order to place restrictions on Test.var.
The idea is that Test is a class that contains many variables, some of which have similar names. The Container class is there to ensure that the right types are used (__eq__(), __str__(), __add__(), ... are overloaded in order to make the Container class as discreet as possible) so that issues are diagnosed as fast as possible (the code will be used by people with a very wide variety of expertise in Python).
The other solution would have been to use @property, but as there are many variables the code ends up being way bigger than it would otherwise and not as simple to maintain (there are close to a hundred classes which will have to implement the properties).
I would like to overload type(Test().var) so that it returns type(Test().var.var), making the class as easy to use as possible.
The short answer is "no."
From this official Python doc, it states:
Every object has an identity, a type and a value... The type() function returns an object’s type (which is an object itself). Like its identity, an object’s type is also unchangeable.
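That said, a related workaround worth knowing about (a sketch, and arguably a hack): __class__ can be overridden as a property, which changes what isinstance() reports, but, per the quote above, type() still returns Container:

class Container:
    def __init__(self, var):
        self.var = var

    # isinstance() consults __class__, but the type() builtin reads the
    # object's real type directly, so type() still reports Container.
    @property
    def __class__(self):
        return str

c = Container("123")
print(type(c))             # <class '__main__.Container'>
print(isinstance(c, str))  # True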
I am currently developing a piece of software where I have class instances that are generated from dictionaries. The way these dictionaries are structured is as follows:
layer_dict = {
    "layer_type": "Conv2D",
    "name": "conv1",
    "kernel_size": 3,
    ...
}
Then, the following code is run:
def create_layer(layer_dict):
    LayerType = getattr(layers, layer_dict['layer_type'])
    del layer_dict['layer_type']
    return LayerType(**layer_dict)
Now, I want to support the creation of new layer types (by subclassing the BaseLayer class). I've thought of a few ways to do this and thought I'd ask which way is best and why as I don't have much experience developing software (finishing an MSc in comp bio).
Method 1: Metaclasses
The first method I thought of was to have a metaclass that registers every subclass of BaseLayer in a dict, and to do a simple lookup in this dict instead of using getattr.
class MetaLayer(type):
    layers = {}

    def __init__(cls, name, bases, dct):
        super().__init__(name, bases, dct)
        if name in MetaLayer.layers:
            raise ValueError('Cannot have more than one layer with the same name')
        MetaLayer.layers[name] = cls
Benefit: The metaclass can make sure that no two classes have the same name. The user doesn't need to think about anything but subclassing when creating new layers.
Downside: Metaclasses are difficult to understand and often frowned upon
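For reference, wiring the metaclass in would presumably look like this (the BaseLayer and Conv2D definitions here are sketches):

class BaseLayer(metaclass=MetaLayer):
    pass  # note: with this implementation, BaseLayer itself is registered too

class Conv2D(BaseLayer):
    pass  # registered in MetaLayer.layers['Conv2D'] at class-creation time

LayerType = MetaLayer.layers['Conv2D']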
Method 2: Traversing the __subclasses__ tree
The second method I thought of was to use the __subclasses__ function of BaseLayer to get a list of all subclasses, then create a dict with Layer.__name__ as keys and Layer as values. See example code below:
def get_subclasses(cls):
    """Returns all classes that inherit from `cls`."""
    subclasses = {
        sub.__name__: sub for sub in cls.__subclasses__()
    }
    subsubclasses = (
        get_subclasses(sub) for sub in subclasses.values()
    )
    subsubclasses = {
        name: sub for subs in subsubclasses for name, sub in subs.items()
    }
    return {**subclasses, **subsubclasses}
Benefit: Easy to explain how this works.
Downside: We might end up with two layers having the same name.
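The lookup would then presumably be built on demand, using the BaseLayer and layer_dict from above:

layer_registry = get_subclasses(BaseLayer)  # e.g. {'Conv2D': <class 'Conv2D'>, ...}
LayerType = layer_registry[layer_dict['layer_type']]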
Method 3: Using a class decorator
The final method is my favourite as it doesn't hide any implementation details in a metaclass, and still manages to prevent multiple classes with the same name.
Here the layers module has a global variable named layers and a decorator named register_layer, which simply adds the decorated classes to the layers dict. See code below.
layers = {}

def register_layer(cls):
    if cls.__name__ in layers:
        raise ValueError('Cannot have two layers with the same name')
    layers[cls.__name__] = cls
    return cls
Benefit: No metaclasses and no way of having two layers with the same name.
Downside: Requires a global variable, which is often frowned upon.
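For completeness, registering a layer with this decorator would presumably look like:

@register_layer
class Conv2D(BaseLayer):
    pass  # added to the layers dict as soon as the class is defined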
So, my question is, which method is preferable? And more importantly, why?
Actually - that is the kind of thing metaclasses are designed for. As you can see from the options you stated above, it is the simplest and most straightforward design.
They are sometimes "frowned upon" because of a few things: (1) people don't understand them and don't care to; (2) people misuse them when they are actually not needed; (3) they are hard to combine - so if any of your classes is to be used with a mixin that has a different metaclass (say abc.ABC), you also have to produce a combining metaclass.
Method 4: __init_subclass__
Now, that said, from Python 3.6 onward there is a new feature that can cover your use case without the need for metaclasses: the __init_subclass__ class method:
it is called as a classmethod on the base class when subclasses of it are created.
All you need is to write a proper __init_subclass__ method on your BaseLayer class, and you get all the benefits you'd have from the metaclass implementation and none of the downsides.
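A minimal sketch of what that could look like, assuming the registry lives on BaseLayer itself:

class BaseLayer:
    layers = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.__name__ in BaseLayer.layers:
            raise ValueError('Cannot have two layers with the same name')
        BaseLayer.layers[cls.__name__] = cls

class Conv2D(BaseLayer):
    pass  # registered automatically when the class statement runs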
Like you, I like the class decorator approach as it is more readable.
You can avoid using a global variable by making the class decorator itself a class, and making layers a class variable instead. You can also avoid possible name collisions by joining the target class's name with its module name:
class register_layer:
    layers = {}

    def __new__(cls, target):
        cls.layers['.'.join((target.__module__, target.__name__))] = target
        return target
I'm doing this Ensime package for Atom.io https://github.com/ensime/ensime-atom and I've been thinking about the possibility of using scala.js instead of writing CoffeeScript.
Atom is a web-based editor which is scripted with JS and is Node.js based. A plugin/package defines its main entry point by pointing out a JavaScript object with a few specific functions.
I figured I should start out simple and try using scala.js to replace the simplest CoffeeScript file I have:
{View} = require 'atom-space-pen-views'

# View for the little status messages down there where messages from Ensime server can be shown
module.exports =
class StatusbarView extends View
  @content: ->
    @div class: 'ensime-status inline-block'

  initialize: ->

  serialize: ->

  init: ->
    @attach()

  attach: =>
    statusbar = document.querySelector('status-bar')
    statusbar?.addLeftTile {item: this}

  setText: (text) =>
    @text("Ensime: #{text}").show()

  destroy: ->
    @detach()
As you can see this exports a require.js module and is a class extending a class fetched with require as well.
Sooo.
I'm thinking I'd just use Dynamic for the require dep, as I've seen on SO (How to invoke nodejs modules from scala.js?):
import js.Dynamic.{global => g}
import js.DynamicImplicits._

private[views] object SpacePen {
  private val spacePenViews = require("atom-space-pen-views")
  val view = spacePenViews.view
}
But if I wanted to type the super-class, could I just make a facade-trait and do asInstanceOf?
Secondly, I wonder how I can export my class as a node module. I found this:
https://github.com/rockymadden/scala-node/blob/master/main/src/main/coffeescript/example.coffee
Is this the right way? Do I need to do the sandboxing? Couldn't I just get module imported from global and write module.exports = _some_scala_object_?
I'm also wondering how I could extend existing js classes. The same problem as asked here, but I don't really understand the answer:
https://groups.google.com/forum/#!topic/scala-js/l0gSOSiqubs
My code so far:
private[views] object SpacePen {
  private val spacePenViews = js.Dynamic.global.require("atom-space-pen-views")
  type View = spacePenViews.view
}

class StatusBarView extends SpacePen.View {
  override def content =
    super.div()
}
gives me compile errors that I can't extend sealed trait Dynamic. Of course.
Any pointers highly appreciated!
I'm not particularly expert in Node per se, but to answer your first question, yes -- if you have a pointer to a JS object, and you know the details of its type, you can pretty much always define a facade trait and asInstanceOf to use it. That ought to work.
As for the last bit, you basically can't extend JS classes in Scala.js -- it just doesn't work. The way most of us get around that is by defining implicit classes, or using implicit def's, to get the appearance of extending without actually doing so.
For example, given JS class Foo, I can write
implicit class RichFoo(foo: Foo) {
  def method1() = { ... }
}
This is actually a wrapper around Foo, but calling code can simply call foo.method1() without worrying about that detail.
You can see this approach in action very heavily in jquery-facade, particularly in the relationship between JQuery (the pure facade), JQueryTyped (some tweaked methods over JQuery to make them work better in Scala), and JQueryExtensions (some higher-level functions built around JQuery). These are held together using implicit def's in package.scala. As far as calling code is concerned, all of these simply look like methods on JQuery.
I have a library of domain objects which need to be used in the project, however we've found a couple of the classes haven't got an equals or hashCode method implemented.
I'm looking for the simplest (and Grooviest) way to add those methods. Obviously I could create a subclass which only adds the methods, but this would be confusing for developers used to the library and would mean we'd have to refactor existing code.
It is not possible to get the source changed (currently).
If I could edit the class I would just use the @EqualsAndHashCode annotation to carry out an AST transformation (at compile time?), but I can't find a way to instruct the compiler to carry out the transformation on a class which I can't directly annotate.
I'm currently trying to work up an example using the ExpandoMetaClass, so I'd do something like:
MySuperClass.metaClass.hashCode = { ->
    // Add dynamic hashCode calculation bits here
}
MySuperClass.metaClass.equals = { Object other ->
    // Add dynamic equals calculation bits here
}
I don't really want to hand-code the hashCode/equals methods for each class, so I'm looking for something dynamic (like @EqualsAndHashCode) which will work with this.
Am I on the right track? Is there a groovier way?
AST transforms are only applied at compile time, so you'll get no help from the likes of @EqualsAndHashCode. MetaClass hacks are going to be your only option. That said, there are more elegant ways to impose MetaClass behavior.
Shameless Self Plug: I did a talk about this kind of stuff last year at SpringOne 2GX: http://www.infoq.com/presentations/groovy-app-architecture
In short, you might find benefit in creating extensions (unless you're in Grails) - http://mrhaki.blogspot.com/2013/01/groovy-goodness-adding-extra-methods.html, or by explicitly adding mixins - http://groovy.codehaus.org/Runtime+mixins ... But in general, these are just cleaner ways to do the exact same thing you're already doing.