I've recently started working on a non-trivial project in CoffeeScript and I'm struggling with how best to deal with registering exports etc. I'm writing it in a very 'pythonesque' manner, with individual files effectively being 'modules' of related classes and functions. What I'm looking for is the best way to define classes and functions locally AND in exports/window with as little repetition as possible.
At the moment, I'm using the following in every file, to save writing exports.X = X for everything in the file:
class module
  # All classes/functions to be included in exports should be defined with `@`
  # E.g.
  class @DatClass
exports[name] = item for own name, item of module
I've also looked at the possibility of using a function (say, publish) that puts the passed class in exports/window depending on its name:
publish = (f) ->
  throw new Error 'publish only works with named functions' unless f.name?
  ((exports ? window).namespace ?= {})[f.name] = f

publish class A
# A is now available in the local scope and in `exports.namespace`
# or `window.namespace`
This, however, does not work with functions since, as far as I know, they cannot be 'named' in CoffeeScript (f.name is always ''), so publish cannot determine the correct name.
Is there any method that works like publish but works with functions? Or any alternative ways of handling this?
It's an ugly hack but you can use the following:
class module.exports
  class @foo
    @bar = 3
And then:
require(...).foo.bar // 3
The old
(function (exports) {
  // my code
  exports.someLib = ...
})(typeof exports === "undefined" ? window : exports);
Is a neat trick that should do what you want.
If writing that wrapper boilerplate is a pain then automate it with a build script.
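For example, a minimal sketch of such a build step (the file names, paths, and dist/ output directory are placeholders, not from the answer) that wraps each compiled .js file in the boilerplate:

// build.js -- hypothetical post-compile wrapping step; adjust paths to your project
const fs = require('fs');

const header = '(function (exports) {\n';
const footer = '\n})(typeof exports === "undefined" ? window : exports);\n';

fs.mkdirSync('dist', { recursive: true });
for (const file of ['lib/moduleA.js', 'lib/moduleB.js']) { // placeholder list of compiled files
  const src = fs.readFileSync(file, 'utf8');
  fs.writeFileSync('dist/' + file.split('/').pop(), header + src + footer); // wrapped copy
}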
What I'm looking for is the best way to define classes and functions locally AND in exports/window with as little repetition as possible.
It's impossible to do something like
exports.x = var x = ...;
without writing x twice in JavaScript (without resorting to black magicks, i.e. eval), and the same goes for CoffeeScript. Bummer, I know, but that's how it is.
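For illustration, the closest you can get in plain JavaScript (or in compiled CoffeeScript output) still mentions the name twice, just condensed onto one line:

// Still two mentions of `x`, only condensed
var x = exports.x = function () { /* ... */ };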
My advice would be to not get too hung up on it; that kind of repetition is common. But do ask yourself: "Do I really need to export this function or variable and make it locally available?" Cleanly decoupled code doesn't usually work that way.
There's an exception to the "no named functions" rule: classes. This works: http://jsfiddle.net/PxBgn/
exported = (clas) ->
  console.log clas.name
  window[clas.name] = clas

...

exported class Snake extends Animal
  move: ->
    alert "Slithering..."
    super 5
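This works because a CoffeeScript class compiles to a named constructor function, so clas.name is populated. A rough plain-JavaScript sketch of the same idea (the publish helper and namespace property here are illustrative, not from the answer):

// Hypothetical publish helper relying on the constructor's `name` property
function publish(clas) {
  var target = typeof exports === 'undefined' ? window : exports;
  (target.namespace = target.namespace || {})[clas.name] = clas;
  return clas;
}

var Snake = publish(class Snake {
  move() { return 'Slithering...'; }
});
// Snake is available both locally and as namespace.Snake on exports/window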
Related
The question is simple: how do we make ES6 modules act like the ImportScript function used in the web browser?
Explanation
The main reason is to soften the blow for developers as they change their code from ES5 syntax to ES6, so that the transition does not blow up your code the moment you make the change and you find out there are a thousand errors due to missing inclusions. It also gives people the option to stay as-is indefinitely if they don't want to make the full change at all.
Desired output
ImportScript(a file path, or several); this can be applied globally (implicitly) across subsequently required code, and vice versa, inside a main file to avoid explicit inclusion in all files.
ES6 Inclusion
This still does not change the fact that all your libraries will depend on the module format as well, so it is inevitable that we will still have to include an export statement in every file we need to require. However, this should not stop us from having a main file that interconnects them all without having to explicitly add includes to every file whenever you need a certain functionality.
DISCLAIMERS (numbered):
(Security) I understand there are many reasons that modules exist, and going around them is not advisable for security reasons/load times. However, I am not sure about the risk (if any) of even using a method like eval() to include such scripts if you are only doing it once at the start of an application's life, and on a constant value that does not accept external input. The theory is that if an external entity is able to change the initial state of your program as it is launched, then your system has already been compromised. So as it stands, I think the whole argument of globalization vs. modules boils down to the project being done (security/speed needed) and preference/risk.
(Not for everyone) This is a utility; I am not implying that everyone should use this.
(Already published works) I have searched a lot for this functionality, but I am not infallible. So if a simple usage of this has already been done that follows this specification (or a simpler one), I'd love to know how/where I can obtain such code. Then I will promptly mark that as the answer or just remove this thread entirely.
Example Code
ES5 Way
const fs = require('fs');
let path = require('path');
/* only accepts the scripts with global variables and functions and
   does not work with classes unless declared as a var.
*/
function include(f) {
  eval.apply(global, [fs.readFileSync(f).toString()])
}
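As an aside, here is a quick sketch (not part of the original question) of why that comment holds: eval called through a reference is an "indirect" eval, which runs in the global scope, so var and function declarations persist on the global object while class/let/const declarations do not:

// Hypothetical demo of indirect eval scoping
const indirectEval = eval; // calling eval via a reference makes the call indirect
indirectEval("var a = 1; function f() {} class C {}");
console.log(typeof a, typeof f); // "number" "function" -- attached to the global object
console.log(typeof C);           // "undefined" -- class declarations stay local to the eval'd code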
Main file Concept example:
ImportScript("filePath1");loaded first
ImportScript("filePath2");loaded second
ImportScript("filePath3");loaded third
ImportScript("filePath4");loaded fourth
ImportScript("filePath5");loaded fifth
ImportScript("someExternalDependency");sixth
/* where "functionNameFromFile4" is a function defined in
file4 , and "variableFromFile2" is a global dynamic
variable that may change over the lifetime of the
application.
*/
functionNameFromFile4(variableFromFile2);
/* current context has access to previous scripts contexts
and those scripts recognize the current global context as
well in short: All scripts should be able to access
variables and functions from other scripts implicitly
through this , even if they are added after the fact
*/
Typical exported file example (Covers all methods of export via modules):
/*where "varFromFile1" is a dynamic variable created in file1
that may change over the lifetime of the application and "var" is a
variable of type(varFromFile4) being concatenated/added together
with "varFromFile4".
*/
functionNameFromFile4(var){
return var+varFromFile1;
}
//Typical export statement
exportAllHere;
/*
This is just an example and does not cover all usage cases, just
an example of the possible functionality.
*/
CONCLUSION
So you still need to export from the files as required by the ES6 standard; however, you only need to import them once in a main file to globalize their functionality across all scripts.
I'm not personally a fan of globalizing all the exports from a module, but here's a little snippet that shows you how one ESM module's exports can be all assigned to the global object:
Suppose you had a simple module called operators.js:
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}
You can import that module and then assign all of its exported properties to the global object with this:
import * as m from "./operators.js"
for (const [prop, value] of Object.entries(m)) {
  global[prop] = value;
}
// can now access the exports globally
add(1, 2);
FYI, I think the syntax:
include("filePath1")
is going to be tough in ESM modules because dynamic imports in an ESM module using import() (which is presumably what you would have to use to implement the include() function you show) are asynchronous (they return a promise), not synchronous like require().
I wonder if a bundler or a transpiler would be an option?
There is experimental work in nodejs related to custom loaders here: https://nodejs.org/api/esm.html#hooks.
If you can handle your include() function returning a promise, here's how you put the above code into that function:
async function include(moduleName) {
  const m = await import(moduleName);
  for (const [prop, value] of Object.entries(m)) {
    global[prop] = value;
  }
  return m;
}
I'm doing this Ensime package for Atom.io https://github.com/ensime/ensime-atom and I've been thinking about the possibility of using scala.js instead of writing CoffeeScript.
Atom is a web-based editor which is scripted with JS and is node.js based. A plugin/package defines its main entry point by pointing out a JavaScript object with a few specific functions.
I figured I should start out simple and try using scala.js replacing the simplest coffeescript file I have:
{View} = require 'atom-space-pen-views'

# View for the little status messages down there where messages from Ensime server can be shown
module.exports =
class StatusbarView extends View
  @content: ->
    @div class: 'ensime-status inline-block'

  initialize: ->

  serialize: ->

  init: ->
    @attach()

  attach: =>
    statusbar = document.querySelector('status-bar')
    statusbar?.addLeftTile {item: this}

  setText: (text) =>
    @text("Ensime: #{text}").show()

  destroy: ->
    @detach()
As you can see this exports a require.js module and is a class extending a class fetched with require as well.
Sooo.
I'm thinking I'd just use Dynamic for the require dep as I've seen on SO How to invoke nodejs modules from scala.js?:
import js.Dynamic.{global => g}
import js.DynamicImplicits._

private[views] object SpacePen {
  private val spacePenViews = require("atom-space-pen-views")
  val view = spacePenViews.view
}
But if I wanted to type the super-class, could I just make a facade-trait and do asInstanceOf?
Secondly, I wonder how I can export my class as a node module. I found this:
https://github.com/rockymadden/scala-node/blob/master/main/src/main/coffeescript/example.coffee
Is this the right way? Do I need to do the sandboxing? Couldn't I just get module imported from global and write module.exports = _some_scala_object_?
I'm also wondering how I could extend existing js classes. The same problem as asked here, but I don't really understand the answer:
https://groups.google.com/forum/#!topic/scala-js/l0gSOSiqubs
My code so far:
private[views] object SpacePen {
  private val spacePenViews = js.Dynamic.global.require("atom-space-pen-views")
  type View = spacePenViews.view
}
class StatusBarView extends SpacePen.View {
  override def content =
    super.div()
}
gives me compile errors that I can't extend sealed trait Dynamic. Of course.
Any pointers highly appreciated!
I'm not particularly expert in Node per se, but to answer your first question, yes -- if you have a pointer to a JS object, and you know the details of its type, you can pretty much always define a facade trait and asInstanceOf to use it. That ought to work.
As for the last bit, you basically can't extend JS classes in Scala.js -- it just doesn't work. The way most of us get around that is by defining implicit classes, or using implicit def's, to get the appearance of extending without actually doing so.
For example, given JS class Foo, I can write
implicit class RichFoo(foo: Foo) {
  def method1() = { ... }
}
This is actually a wrapper around Foo, but calling code can simply call foo.method1() without worrying about that detail.
You can see this approach in action very heavily in jquery-facade, particularly in the relationship between JQuery (the pure facade), JQueryTyped (some tweaked methods over JQuery to make them work better in Scala), and JQueryExtensions (some higher-level functions built around JQuery). These are held together using implicit def's in package.scala. As far as calling code is concerned, all of these simply look like methods on JQuery.
I'd like a (platform independent) way to list all classes from a package.
A possible way would be to get a list of all classes known by Haxe, then filter through it.
I made a macro to help with just this. It's in the compiletime haxelib.
haxelib install compiletime
And then add it to your build (hxml file):
-lib compiletime
And then use it to import your classes:
// All classes in this package, including sub packages.
var testcases = CompileTime.getAllClasses("my.package");
// All classes in this package, not including sub packages.
var testcases = CompileTime.getAllClasses("my.package",false);
// All classes that extend (or implement) TestCase
var testcases = CompileTime.getAllClasses(TestCase);
// All classes in a package that extend TestCase
var testcases = CompileTime.getAllClasses("my.package",TestCase);
And then you can iterate over them:
for ( testClass in testcases ) {
  var test = Type.createInstance( testClass, [] );
}
Please note, if you never have "import some.package.SomeClass" in your code, that class will never be included, because of dead code elimination. So if you want to make sure it gets included, even if you never explicitly call it in your code, you can do something like this:
CompileTime.importPackage( "mygame.levels" );
CompileTime.getAllClasses( "mygame.levels", GameLevel );
How it works
CompileTime.getAllClasses is a macro, and what it does is:
Wait until compilation is finished, so we know all of the types/classes in our app.
Go through each one and see if it is in the specified package.
See also if it extends (or implements) the specified class/interface.
If it does, add the class name to some special metadata - @classLists(...) metadata on the CompileTimeClassList file, containing the names of all the matching classes.
At runtime, use the metadata, together with Type.resolveClass(...), to create a list of all matching types.
This is one way to store and retrieve the information: https://gist.github.com/back2dos/c9410ed3ed476ecc1007
Beyond that you could use haxe -xml to get the type information you want, then transform it as needed (use the parser from haxe.rtti to handle the data) and embed the JSON encoded result with -resource theinfo.json (accessed through haxe.Resource).
As a side note: chances are you'll be better off not having any automation and just adding the classes to an array manually. Imagine you have somepackage.ClassA, somepackage.ClassB, ...; then you can do
import somepackage.*;
//...
static var CLASSES:Array<Class<Dynamic>> = [ClassA, ClassB, ...];
It gives you more flexibility: whatever you want to do, you can always add 3rd-party classes, which may not necessarily be in the same package, and you can also choose not to use a class without having to delete it.
I'm starting out a long term project, based on Node.js, and so I'm looking to build upon a solid dependency injection (DI) system.
Although Node.js at its core implies using simple module require()s for wiring components, I find this approach not best suited for a large project (e.g. requiring modules in each file is not that maintainable, testable or dynamic).
Now, I'd done my bits of research before posting this question and I've found out some interesting DI libraries for Node.js (see wire.js and dependable.js).
However, for maximal simplicity and minimal repetition I've come up with my own proposition of implementing DI:
You have a module, di.js, which acts as the container and is initialized by pointing to a JSON file storing a map of dependency names and their respective .js files.
This already provides a dynamic nature to the DI, as you may easily swap test/development dependencies.
The container can return dependencies by using an inject() function, which finds the dependency mapping and calls require() with it.
For simplicity, the module is assigned to a global variable, i.e. global.$di, so that any file in the project may use the container/injector by calling $di.inject().
Here's the gist of the implementation:
File di.js
module.exports = function(path) {
  var deps = require(path); // use a local so inject() below can see it
  return {
    inject: function(name) {
      if (!deps[name])
        throw new Error('dependency "' + name + '" isn\'t registered');
      return require(deps[name]);
    }
  };
};
Dependency map JSON file
{
  "vehicle": "lib/jetpack",
  "fuel": "lib/benzine",
  "octane": "lib/octane98"
}
Initialize the $di in the main JavaScript file, according to development/test mode:
var path = 'dep-map-' + process.env.NODE_ENV + '.json';
$di = require('di')(path);
Use it in some file:
var vehicle = $di.inject('vehicle');
vehicle.go();
So far, the only problem I could think of using this approach is the global variable $di.
Supposedly, global variables are a bad practice, but it seems to me like I'm saving a lot of repetition for the cost of a single global variable.
What can be suggested against my proposal?
Overall this approach sounds fine to me.
The way global variables work in Node.js is that when you declare a variable without the var keyword, it gets added to the global object, which is shared between all modules. You can also explicitly use global.varname. Example:
vehicle = "jetpack"
fuel = "benzine"
console.log(vehicle) // "jetpack"
console.log(global.fuel) // "benzine"
Variables declared with var will only be local to the module.
var vehicle = "car"
console.log(vehicle) // "car"
console.log(global.vehicle) // "jetpack"
So in your code if you are doing $di = require('di')(path) (without var), then you should be able to use it in other modules without any issues. Using global.$di might make the code more readable.
Your approach is a clear and simple one which is good. Whether you have a global variable or require your module every time is not important.
Regarding testability it allows you to replace your modules with mocks. For unit testing you should add a function that makes it easy for you to apply different mocks for each test. Something that extends your dependency map temporarily.
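A minimal sketch of what such a hook could look like, layered on the di.js idea above (the registerMock/reset names are made up for illustration, not part of the original proposal):

// Hypothetical di.js variant with temporary overrides for tests
module.exports = function(path) {
  var deps = require(path);
  var overrides = {};
  return {
    inject: function(name) {
      if (overrides[name]) return overrides[name]; // a registered mock wins
      if (!deps[name])
        throw new Error('dependency "' + name + '" isn\'t registered');
      return require(deps[name]);
    },
    registerMock: function(name, mock) { overrides[name] = mock; }, // call in test setup
    reset: function() { overrides = {}; } // call in test teardown
  };
};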
For further reading I can recommend a great blog article on dependency injection in Node.js as well as a talk on the future dependency injector of angular.js which is designed by some serious masterminds.
BTW, you might be interested in Fire Up! which is a dependency injection container I implemented.
I decided to start a new project to get into hacklang, and after fixing some of the problems I initially ran into transitioning from PHP habits, I ran into the following errors:
Unbound name: str_replace
Unbound name: empty
Doing some research, I found that this is due to using 'legacy' PHP, which isn't typechecked and will error with //strict.
That's fine and all; empty() was easy enough to replace, but str_replace() is a bit more difficult.
Is there an equivalent function that will work with //strict? Or at least something similar.
I'm aware that I could use //decl but I feel like that defeats the purpose in my case.
Is there at least any way to tell from the documentation which functions are implemented in Hack and which are not? I couldn't find one.
For reference (though it isn't too relevant to the question itself), here is the code:
<?hh //strict
class HackMarkdown {
  public function parse(string $content) : string {
    if($content===null){
      throw new RuntimeException('Empty Content');
    }
    $prepared = $this->prepare($content);
  }

  private function prepare(string $contentpre) : Vector<string>{
    $contentpre = str_replace(array("\r\n","\r"),"\n",$contentpre);
    //probably need more in here
    $prepared = Vector::fromArray(explode($contentpre,"\n"));
    //and here
    return $prepared;
  }
}
You don't need to change your code at all. You just need to tell the Hack tools about all the inbuilt PHP functions.
The easiest way to do this is to download this folder and put it somewhere in your project. I put it in a hhi folder in the base of my project. The files in there tell Hack about all the inbuilt PHP functions.
Most of them don't have type hints, which can lead to Hack thinking the return type of everything is mixed instead of the actual return type. That is actually correct in most cases since, for example, str_replace can return either a string or a bool. However, it does stop the "unbound name" errors, which is the main reason for adding them.