I am looking to work on a project using Node.js addons with C++. I came across two abstraction libraries, NAN and N-API, that I could use; however, I am unable to decide which one to pick, and I was not able to find a proper comparison between the two.
What are the pros, cons and differences of both? How do I choose between them?
So far I have found that NAN has more online tutorials/articles regarding async calls, but N-API is officially supported by Node (and was created after NAN as a better alternative, although I am not sure of that).
My understanding is this:
The Node-API (formerly N-API) was added to the core node.js interface in v8.0.0. "It is intended to insulate Addons from changes in the underlying JavaScript engine…" to quote the documentation. It also provides some other wrappers around things like buffers and asynchronous work (which should help avoid some of the underlying non-stable APIs noted in their Implications of ABI stability section).
nan (Native Abstractions for Node) is indeed older and so also supports older versions of node.js — back to node.js 0.8! Now despite its author claiming back in 2017:
As I mentioned somewhere else, N-API is not meant to be directly used for anything. Where has this notion come from? It is an (effectively internal) low-level infrastructure layer meant to offer ABI stability. There will be another layer on top.
…I do not see much warning to that effect in the official Node.js add-on documentation. Perhaps this other comment is a bit more insightful:
Yes, you should still use NAN for production use. It covers every relevant version of Node.js. Also note that N-API is not intended for end users. You should eventually use https://github.com/nodejs/node-addon-api.
Again, that was in June of 2017 by the maintainer of nan at the time. It seems that node-addon-api has matured in the meantime and remains active. In fact, I found a comment in the node-addon-api repo that is only a month old at present:
…part of the goal was to make it easy to transition from nan.
So I think the answer is:
use nan if you want something mature and very backwards-compatible
use node-addon-api if you want something forwards-looking in C++
use Node-API/N-API if you are comfortable working in C and dealing with possible lower-level concerns (see the sketch below)
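To give a feel for that lower-level option, a minimal addon written directly against the C-level N-API looks roughly like this (a sketch; error handling is omitted for brevity, and the exported name hello is made up for illustration):

#include <node_api.h>

static napi_value Hello(napi_env env, napi_callback_info info) {
  napi_value str;
  // Build a JS string to return to the caller (status checks omitted).
  napi_create_string_utf8(env, "hello from C", NAPI_AUTO_LENGTH, &str);
  return str;
}

static napi_value Init(napi_env env, napi_value exports) {
  napi_value fn;
  // Wrap the native function and attach it to the module's exports.
  napi_create_function(env, "hello", NAPI_AUTO_LENGTH, Hello, NULL, &fn);
  napi_set_named_property(env, exports, "hello", fn);
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)

Every one of those napi_* calls returns a napi_status that production code is expected to check; that boilerplate is much of what the C++ wrapper hides.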
You should use the node-addon-api module for new C++ code (or N-API for C code). All supported (non-EOL) versions of Node.js support it, and it makes maintaining and distributing native add-ons much easier: whereas addons using NAN require rebuilding the module for each NODE_MODULE_VERSION (major version of Node.js), modules using N-API/node-addon-api are forward-compatible:
A given version n of N-API will be available in the major version of Node.js in which it was published, and in all subsequent versions of Node.js, including subsequent major versions.
There's a somewhat confusing compatibility matrix here. N-API version 3 is compatible with Node.js v8.11.2+, v9.11.0+ and all later major versions (v10+), for example.
On top of that, node-addon-api fixes a lot of the annoying parts of NAN (like Buffers always being char* instead of, say, uint8_t*).
NAN still works, of course, and there are more learning resources online, but node-addon-api is the way forward.
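For comparison, a sketch of a minimal addon using node-addon-api's C++ wrappers (assuming the usual node-gyp build with the napi headers on the include path):

#include <napi.h>

// The same kind of export as the C version, but the status handling
// and value wrapping are done by the library.
Napi::String Hello(const Napi::CallbackInfo& info) {
  return Napi::String::New(info.Env(), "hello from C++");
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set("hello", Napi::Function::New(env, Hello));
  return exports;
}

NODE_API_MODULE(addon, Init)

From JavaScript, both versions would load the same way, e.g. require('./build/Release/addon').hello().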
Just wondering how we import a module in Node.js 8: are we still using require?
Or do we still need Babel to use import?
I have been digging around, but there seems to be no answer. If we still have to use require, why can't Node implement import yet?
UPDATE-2018.11.15 ↓
Short answer
We're still using require
Long answer
ESM loading has partially landed in node 8.5.0, which was released in September 2017. As such, it has been available as an experimental feature for a little while: see the API documentation here. Caveats include the need for the --experimental-modules flag and the use of a new .mjs extension for modules.
There are still changes that need to happen in V8 before ESM loading is stable and fully featured, so, as with my original answer, I would still advise sticking with CommonJS require if you don't already use Babel for other things.
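Concretely, the experimental flow looks something like this (a sketch; the file names are hypothetical):

// math.mjs: an ES module, note the .mjs extension
export const pi = 3.141593;

// main.mjs
import { pi } from './math.mjs';
console.log(pi);

which runs with node --experimental-modules main.mjs. Without the flag you stay on CommonJS: const { pi } = require('./math').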
See this post for a more didactic explanation
PREVIOUS ANSWER ↓
The two implementations are completely different under the hood, so there is more to it than meets the eye.
The takeaway is that there are still lingering issues/questions over the specifications (all the way to V8), and as such import cannot currently be implemented in Node without using a transpiler.
See this comment (dated February 2017) from one of the contributors:
At the current point in time, there are still a number of specification and implementation issues that need to happen on the ES6 and Virtual Machine side of things before Node.js can even begin working up a supportable implementation of ES6 modules. Work is in progress but it is going to take some time — We’re currently looking at around a year at least.
Keep in mind that transpilers simply convert the ES6 module syntax to the CommonJS module syntax, so there are currently no performance benefits. In other words, if you don't have a Babel pipeline already, there is not much incentive to create one just to use the new proposed import syntax, except from a proactive syntactic perspective.
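For instance, a transpiler rewrites the new syntax into the old, roughly like this (a sketch of the idea; real Babel output adds interop helpers):

// what you write (ES6 module syntax)
import { pi } from './math';

// roughly what the transpiler emits (CommonJS)
var _math = require('./math');
var pi = _math.pi;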
For more details on how the implementation differs, see this write up
Node.JS v0.11.3 claims to have support for ECMAScript 6 modules with the flag --harmony_modules.
I have tried various examples, such as the following.
module math {
export var pi = 3.141593;
}
What is the syntax to get modules working in Node.JS?
The modules implementation in V8 is incomplete. There is parsing support when enabled with --harmony-modules, but support for the actual functionality was put on hold. The reason is that the specification for how ES6 modules will actually work has been in flux and is still not fully nailed down.
The implementation in Continuum (the linked screenshot from Crazy Train's answer) dates back to an interim spec from November 2012 and is now woefully out of date because of the ongoing changes to the ES6 modules spec. This is why the V8 devs put development of module support on hold.
It seems like the modules spec is approaching stability (though I expect we'll see small refinements for a while) and I think (hope at least) that we'll see SpiderMonkey and V8 moving forward with implementations over the next 6 months.
Useful links:
V8 modules bug: https://code.google.com/p/v8/issues/detail?id=1569
SpiderMonkey modules bug: https://bugzilla.mozilla.org/show_bug.cgi?id=harmony%3Amodules
You can use Continuum, which is an ES6 virtual machine written in (current) JavaScript.
What is the versioning style of the project?
You should only be using even-numbered versions: x.[even].z. These are all "stable", and bug fixes will be released to them (until the next minor version). Only the latest x.[even].z version is supported at any given time.
For compatibility, you should just look at the documentation. For example, for events: http://nodejs.org/api/events.html
Stability: 4 - API Frozen
That means you can rest assured that the EventEmitter class will never change.
Then there's stuff like domains where no one is sure what they're doing, and you probably shouldn't be using it:
Stability: 1 - Experimental
Your best bet is just to stick with Stability >= 3 features and not worry about compatibility between versions.
Also, there doesn't seem to be a strict release cycle.
Node has a two-track versioning system. Even-numbered versions (0.4, 0.6, 0.8) are stable, and odd-numbered versions are unstable. The stable releases are API-stable, which means that if you are using 0.8.1 and 0.8.2 comes out, you should be able to upgrade with no issues.
On the 0.9.x stream, any update may change the API, especially in parts of the system that are under active development. When the odd-version reaches a certain level of stability and maturity, it becomes the next even-version.
There is not a strict timed release cycle. The primary maintainer of Node.JS is a guy named Isaac Schlueter, and he has been very public about his goals and targets with node. He is also open to a lot of community input on this, so they run NodeConf and Node Summer Camp and some other events to gather input.
If you have time to really dig into the community, check out the NodeUp podcast and some of Isaac's talks to get an idea about the direction they are going and APIs.
You ask about version 1.0. As far as I remember, Isaac has a couple of specific things he wants to stabilize before going to version 1.0. In particular, I remember Streams and Buffers, which have really become key to node's growth (that said, this is just from memory).
What are the current rules for writing Python code that will pass cleanly through 2to3, and what practices are best suited to writing code that will not remain mired forever in version 2?
I have read from the SciPy/NumPy forums that "100% test coverage" (unit testing) is important for many people, and I am not sure if that would apply to everybody. Certainly having a reasonable set of unit tests to try your code out with after conversion, seems a sane step.
Are there other things? What are skilled Pythonistas doing if they are writing 2.x code that they hope will come through "cleanly" in the 2to3 process?
I am looking for specific instances of "[don't] do this" as well as some more general "best-practices", but specific instances of "do's and don'ts" are helpful.
Let's assume that frameworks, libraries (Django, SciPy/NumPy), and every other C Extension we need gets ported to Python3 eventually, and I'm asking about how you write and maintain the pure python language code that you write yourself.
Update: It's possible that what I really want is the "style guide" and list of deprecated features that everybody was already staying away from. I cut my teeth on Python 1.5 and moved to 2.0, then did not really follow much of the 2.5/2.6 era; I used them, but really my code is more of the 2.1 era.
I'd say:
Read the "What's new for Python 3.0". Very informative.
In particular, if you care about Unicode or text encodings at all, take the time to understand what has changed for 3.x. That's probably one of the trickier things to change for Python 3.x.
Get Python 2.6 or 2.7, and run your code with the -3 flag. It will tell you about things in your code that will need changing.
Before using 3rd-party packages, check to see if they have a Python 3.x version. If not, check the package web site, mailing lists, version control repositories etc to see how actively the package is being developed and whether there is a roadmap towards Python 3.x support.
Download Python 3.x and try it out! Admittedly, that might not be practical if you care about code that currently depends on packages that don't yet support Python 3.x (e.g. wxPython or Django).
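As a small illustration, Python 2 code written in this style should pass through 2to3 with little or no change (a sketch):

# Opt in to Python 3 semantics that 2to3 cannot infer on its own.
from __future__ import print_function, division

def mean(values):
    # True division already behaves the Python 3 way here.
    return sum(values) / len(values)

# print is already used as a function, so 2to3 leaves it untouched.
print(mean([1, 2, 3, 4]))

Running python -3 on a file like this should report nothing, and 2to3 should produce an empty (or nearly empty) diff.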
As you know if you've read some of my other questions, I'm writing a programming language. One of my big concerns is that many languages have troubles with backwards compatibility, and I wish to avoid such issues. On one hand, I've seen a lot of pain and agony in the Python community over the switch to Python 3000 because it breaks backwards compatibility. On the other hand, I've seen C++, which started off shackled to C syntax and has never really recovered; i.e. the syntax of C is a poor fit for many C++ constructs.
My solution is to allow programmers to add a compiler directive to the file which will tell the compiler which version of the language to use when compiling. But my question is, how do other languages handle this problem? Are there any other solutions that have been tried, and how successful were those solutions?
When something is broken, the courageous language designer must not be afraid to break backward compatibility. I know of two good ways to do it:
The Glasgow Haskell Compiler typically deprecates unwanted features and then drops support after two versions.
The Lua team have a policy that each major release (there have been 5 since 1993) may break backward compatibility, but they typically provide a compatibility layer that helps users migrate to the latest version. (Plus they are scrupulous about keeping everything available; the current version is 5.1 but I have Lua 2.5 code that I still maintain, and if I find a bug in Lua 2.5, they will fix it.)
Easy: Deprecation
When new methods or functions are available, they don't simply eliminate the old ones; they just deprecate them. So developers working with new compilers know that at some point they will need to use the new versions of those functions, or in the future their program won't compile. In that way they stay "backward compatible" while at the same time enforcing the usage of the new functionality.
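C++ bakes this directly into the syntax; the attribute below has been standard since C++14 (a sketch with made-up function names):

#include <cstdio>

// The old entry point stays available but warns at every call site.
[[deprecated("use greet_v2() instead")]]
void greet() { std::puts("hello"); }

void greet_v2(const char* name) { std::printf("hello, %s\n", name); }

int main() {
    greet();            // still compiles, but emits a deprecation warning
    greet_v2("world");  // the replacement
    return 0;
}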
I think you're on the right track with a compiler directive. It may be better to package that as a command-line argument to your compiler, though.
No matter what, in your compiler logic you can test against the version, something like this:

if (language_major_version >= 2) {   // 2.00.00 and above
    // ... normal processing ...
} else {
    // ... emit compatibility/deprecation error ...
}
VoiceXML, an XML-based language for specifying voice dialogs, is one example of putting the directive in the source code:
<?xml version="1.0"?>
<vxml version="2.1">
...
</vxml>
Since the syntax is always well-formed XML, this is really easy to implement; it's almost cheating.
I'm going to be the really harsh sobering voice and say: You're never going to have enough users for it to matter. Sorry, but the statistics are against you.
In the unlikely event that it does become a problem, these are the strategies I've seen used:
Don't worry about it and just break backward compatibility
Keep old versions of the interpreter packaged in with the new versions and switch using some directive, or other kind of metadata
Make new versions a strict superset of old versions. That way, all old programs compile in the new version of the compiler/interpreter
Provide a converter to convert old style programs to new style programs.
Base the language on a virtual machine that accepts bytecode compiled from any version of the language. Ensure that there are facilities for different versions to "talk" to each other.
Compromise and end up pissing everyone off instead of just half of your audience
New versions have a loose mode by default and a "strict" mode, the former being strictly backwards compatible, the latter removing old and busted features for those who opt in (see the sketch below).
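JavaScript's strict mode is a well-known instance of that last strategy: old code keeps running under the default loose semantics, while a directive opts a file or function into the cleaned-up rules. A minimal illustration:

'use strict';  // opt in to the stricter subset

// Under the default loose semantics this silently creates a global;
// under strict mode it throws a ReferenceError instead.
undeclared = 42;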
The good news is that none of these strategies work extremely well, so you have the opportunity to be creative and mess up in a novel new way.
Generally speaking, you continue to support all the old features for at least one new version, though preferably two versions into the future. Then the feature is deprecated, and it is up to the users of your language to update their applications before the feature is dropped from your language.
I forgot one other way that languages have dealt with backward compatibility: stubbornly insist on never updating the language. See Donald Knuth's TeX for an example of this.