export class vs functions - node.js

In the case of a utility module, I can either have a class with static methods or just export the methods. I think the first solution is better, though I have seen a lot of implementations with the second option. Are there any "nuances" here that I am not considering?

I would argue that a class with static methods is better for the following reasons:
If your class is named Utils, consumers will by default import it as Utils too. With exported functions, however, they could be imported as Utils, yet that is only a convention, one that likely won't be followed in all the different places.
A class named Utils in a file named utils.js, with all the utility methods neatly grouped together, is aesthetically more pleasing than flat functions defined all over the place.
A class could have properties that are shared among its methods; for this you'd need @babel/plugin-proposal-class-properties, though. Again, much nicer than variables defined all over the place.
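To make that concrete, here is a minimal TypeScript sketch of the class-with-static-methods style; the Utils name, the locale property and the two helpers are purely illustrative:

// utils.ts -- one class groups the helpers plus a shared property
export class Utils {
  // shared piece of state used by several helpers (static class fields are
  // standard in modern Node/TypeScript; older plain-JS setups need the Babel plugin)
  static locale = "en-US";

  static formatDate(d: Date): string {
    return d.toLocaleDateString(Utils.locale);
  }

  static capitalize(s: string): string {
    return s.charAt(0).toUpperCase() + s.slice(1);
  }
}

// consumer.ts -- the import name is the class name, so it stays consistent everywhere
import { Utils } from "./utils";

console.log(Utils.capitalize("hello"));    // "Hello"
console.log(Utils.formatDate(new Date()));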

Exporting functions is safer because you don't give access to class properties. Note also that in JavaScript the concept of a class doesn't carry much weight; it was introduced mainly to make developers with an OO-language background feel more comfortable. Try working with object prototypes instead.
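For contrast, a rough TypeScript sketch of the plain-exports style this answer favours, reusing the same hypothetical helpers as above; note that module-level state stays private to the file rather than being reachable through a class:

// utils.ts -- plain named exports, no class wrapper
const locale = "en-US"; // module-private; consumers cannot reach or mutate this

export function formatDate(d: Date): string {
  return d.toLocaleDateString(locale);
}

export function capitalize(s: string): string {
  return s.charAt(0).toUpperCase() + s.slice(1);
}

// consumer.ts -- import individual functions, or group them under any namespace you like
import * as Utils from "./utils";
import { capitalize } from "./utils";

console.log(Utils.formatDate(new Date()));
console.log(capitalize("hello")); // "Hello"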

A couple of options I can think of. If the methods are intended to be utility methods on another class, you could use hand-baked mixins (http://coffeescriptcookbook.com/chapters/classes_and_objects/mixins) or rely on something similar in underscore/lodash.
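The linked recipe is CoffeeScript, but the hand-baked mixin idea boils down to copying methods onto a prototype; a rough TypeScript sketch of the same idea (Serializable and User are made-up names) could look like this:

// A plain bag of reusable methods rather than a class
const Serializable = {
  serialize(this: { toJSON(): unknown }): string {
    return JSON.stringify(this.toJSON());
  },
};

class User {
  constructor(public name: string) {}
  toJSON() {
    return { name: this.name };
  }
}

// "Hand-baked" mixin: copy the methods onto the target prototype at runtime
Object.assign(User.prototype, Serializable);

// Declaration merging tells the type checker about the added method
interface User {
  serialize(): string;
}

console.log(new User("Ada").serialize()); // {"name":"Ada"}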
If you want the encapsulation of methods and still have the ability to extend, you can do this:
class Foo
  foo = -> alert 'foo'
  @static: -> foo()

Foo.static()    #=> 'foo'
Foo.foo         #=> undefined
new Foo().foo   #=> undefined

class Bar extends Foo
Bar.static()    #=> 'foo'
jsfiddle: http://jsfiddle.net/4ne7ccxk/
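For readers not using CoffeeScript, roughly the same encapsulation trick, a module-private helper exposed only through a class-level method that subclasses inherit, might look like this in TypeScript (Foo, Bar and greet are only illustrative):

// foo.ts -- foo() is private to the module; only the class-level method is exposed
function foo(): void {
  console.log("foo");
}

export class Foo {
  static greet(): void {
    foo();
  }
}

export class Bar extends Foo {}

Foo.greet();         // "foo"
Bar.greet();         // statics are inherited, also "foo"
// (Foo as any).foo  // -> undefined: the helper never leaks out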

Related

Proper way to instantiate a class in Godot (preload vs class_name)

As far as I am aware, there are two main ways to instantiate a class in GDScript 2.0.
Preload/load the class script and create an instance:
var some_class = preload("res://SomeClass.gd")
...
var instance = some_class.new(...args)
Use class_name
# in SomeClass.gd
class_name some_class extends Object
# in place where class instantiated
var instance = some_class.new(...args)
Which way of creating a new instance of a class is preferred? What are the differences between the two methods? Is there any reason to avoid just using class_name everywhere?
You can use class_name everywhere in your game code.
The differences I'm aware of are:
preload can take a relative path.
class_name classes are always available.
class_name pollutes the global scope.
Thus, preload is better for reusable components: stuff that you want to be able to take from one project to another without worrying that it will conflict with something. In other words: addons.
Thus: default to class_name unless you have a reason not to (and that is probably when writing addons or editor plugins).

Switching multiple inheritance via mixins to composition but keep the same API

Firstly, thank you for taking the time to read and input. It is greatly appreciated.
Question: What kind of approach can we take to keep the same public API of a class that currently uses multiple mixins, but refactor it internally to be composed of objects that do the same work as the mixins? Autocomplete is a must, so runtime dynamics such as hacking things on via __getattr__ or similar are pretty much out. (I know this depends on the runtime environment, i.e. IPython vs PyCharm etc.; for the sake of this question, assume PyCharm, which I think cannot fully leverage __dir__.)
Accompanying Information:
I am writing a little assertion library in python and I have a core class which is instantiated with a value and subsequently inherits various assertion capabilities against that value via a growing number of mixin classes:
class Asserto(StringMixin, RegexMixin):
    def __init__(self, value: typing.Any, type_of: str = AssertTypes.HARD, description: typing.Optional[str] = None):
        self.value = value
        self.type_of = type_of
        self.description = description
These mixin classes offer various assertion methods for particular types, here is a quick example of one:
from __future__ import annotations

class StringMixin:
    def ends_with(self, suffix: str) -> StringMixin:
        if not self.value.endswith(suffix):
            self.error(f"{self.value} did not end with {suffix}")
        return self

    def starts_with(self, prefix: str) -> StringMixin:
        if not self.value.startswith(prefix):
            self.error(f"{self.value} did not start with {prefix}")
        return self
I would like to refactor the Asserto class to be composed of various implementations of some sort of Assertable interface rather than cobble together a god class here with mixins; I'm likely to have 10+ mixins by the time I am finished.
Is there a way to achieve the same public facing API as this mixins setup so that client code has access to everything through the Asserto(value).check_something(...) but using composition internally?
I could define every single method on the Asserto class and just delegate to the appropriate concrete object internally, but then I am just making a massive god class anyway and the composition feels like a pointless endeavour in that instance.
For example, in client code I'd like all the current mixin methods to be available on an Asserto instance, with autocomplete:
def test_something():
    Asserto("foo").ends_with("oo")
Thank you for your time. Perhaps using the mixin approach is the correct way here, but it feels kind of clunky.

Introspection: how do we get the name of a class within a class?

Say we have
class Foo {}
Is there a way to obtain "Foo" from within the class?
Yes.
class Foo {
    say ::?CLASS.^name; # OUTPUT: Foo
}
Kaiepi's solution has its place, which I'll get to below, but also consider:
class Foo {
    say Foo.perl;  # Foo
    say OUR.WHO;   # Foo
    ::?PACKAGE
}
Foo.perl
This provides a simple answer to your literal question (though it ignores what you're really after, as explained in your comment below and as suggested by the metaprogramming tag and your use of the word "introspection").
OUR.WHO
I think this is typically more appropriate than ::?CLASS.^name for several reasons:
Looks less line-noisy.
Works for all forms of package, i.e. ones declared with the built-in declarators package, module, grammar, or role as well as class, and also custom declarators like actor, monitor, etc.
Will lead readers to mostly directly pertinent issues if they investigate OUR and/or .WHO in contrast to mostly distracting arcana if they investigate the ::?... construct.
::?CLASS vs ::?PACKAGE
OUR.WHO only works in a value grammatical slot, not a type grammatical slot. For the latter you need a suitable ::?... form, e.g.:
class Foo { has ::?CLASS $bar }
And these ::?... forms also work as values:
class Foo { has $bar = ::?CLASS }
So, despite their relative ugliness, they're more general in this particular sense. That said, if generality is at a premium then ::?PACKAGE goes one better because it works for all forms of package.

Is it a convention to have a minimal init.pp that class-scopes the module?

I've come across the following convention: the init.pp is as minimal as possible and, for the example of a java8 module, looks like this in modules/java8/init.pp:
import "*"
class java8 {
include java8::java8
}
Then a modules/java8/java8.pp defines the actual rules/implementations:
class java8::java8 {
  # ...
}
Is this a convention? Is it an old, deprecated convention? What is, or would be, the rationale behind this?
I'm not familiar with that style as any widely-used convention, and I see only limited value to it. Specifically, it appears to serve as a compromise between code organization interests and usage interests: it allows that every class of consequence will be defined in a manifest file named after it (including the delegate main class, java8::java8, in modules/java8/manifests/java8.pp), while providing a main class for the module with a one-segment qualified name (java8), so that users can simply
include 'java8'
I think it's fairly common nowadays to keep the main class small by making it delegate the details to other, private, classes inside the module, but I don't see much value in delegating to exactly one other class for (apparently) naming purposes alone. I also think it's potentially confusing to have different classes with the same unqualified name (java8) in the same module.

How to provide and consume require.js modules in scala.js (and extending classes)

I'm working on this Ensime package for Atom.io https://github.com/ensime/ensime-atom and I've been thinking about the possibility of using Scala.js instead of writing CoffeeScript.
Atom is a web-based editor which is scripted with JS and is Node.js based. A plugin/package defines its main entry point by pointing out a JavaScript object with a few specific properties.
I figured I should start out simple and try using scala.js replacing the simplest coffeescript file I have:
{View} = require 'atom-space-pen-views'

# View for the little status messages down there where messages from Ensime server can be shown
module.exports =
class StatusbarView extends View
  @content: ->
    @div class: 'ensime-status inline-block'

  initialize: ->

  serialize: ->

  init: ->
    @attach()

  attach: =>
    statusbar = document.querySelector('status-bar')
    statusbar?.addLeftTile {item: this}

  setText: (text) =>
    @text("Ensime: #{text}").show()

  destroy: ->
    @detach()
As you can see this exports a require.js module and is a class extending a class fetched with require as well.
Sooo.
I'm thinking I'd just use Dynamic for the require dependency, as I've seen on SO (How to invoke nodejs modules from scala.js?):
import js.Dynamic.{global => g}
import js.DynamicImplicits._

private[views] object SpacePen {
  private val spacePenViews = require("atom-space-pen-views")
  val view = spacePenViews.view
}
But if I wanted to type the super-class, could I just make a facade-trait and do asInstanceOf?
Secondly, I wonder how I can export my class as a node module. I found this:
https://github.com/rockymadden/scala-node/blob/master/main/src/main/coffeescript/example.coffee
Is this the right way? Do I need to do the sandboxing? Couldn't I just get module imported from global and write module.exports = _some_scala_object_?
I'm also wondering how I could extend existing js classes. The same problem as asked here, but I don't really understand the answer:
https://groups.google.com/forum/#!topic/scala-js/l0gSOSiqubs
My code so far:
private[views] object SpacePen {
  private val spacePenViews = js.Dynamic.global.require("atom-space-pen-views")
  type View = spacePenViews.view
}
class StatusBarView extends SpacePen.View {
  override def content =
    super.div()
}
gives me compile errors that I can't extend sealed trait Dynamic. Of course.
Any pointers highly appreciated!
I'm not particularly expert in Node per se, but to answer your first question, yes -- if you have a pointer to a JS object, and you know the details of its type, you can pretty much always define a facade trait and asInstanceOf to use it. That ought to work.
As for the last bit, you basically can't extend JS classes in Scala.js -- it just doesn't work. The way most of us get around that is by defining implicit classes, or using implicit def's, to get the appearance of extending without actually doing so.
For example, given JS class Foo, I can write
implicit class RichFoo(foo: Foo) {
  def method1() = { ... }
}
This is actually a wrapper around Foo, but calling code can simply call foo.method1() without worrying about that detail.
You can see this approach in action very heavily in jquery-facade, particularly in the relationship between JQuery (the pure facade), JQueryTyped (some tweaked methods over JQuery to make them work better in Scala), and JQueryExtensions (some higher-level functions built around JQuery). These are held together using implicit def's in package.scala. As far as calling code is concerned, all of these simply look like methods on JQuery.
