I am currently using CommonDomain, and it works very well, but I have a few questions:
Is CommonDomain still being actively maintained? I see on GitHub that the last work was done around 2 years ago: https://github.com/joliver/CommonDomain
If it is being maintained, is there any plan to upgrade CommonDomain to work with NEventStore 5?
Updating 'NEventStore 4.0.0.15' to 'NEventStore 5.0.1.2' failed. Unable to find a version of 'CommonDomain' that is compatible with 'NEventStore 5.0.1.2'.
Thank you very much
CommonDomain is now being actively maintained as part of NEventStore:
https://github.com/NEventStore/NEventStore/tree/master/src/NEventStore/CommonDomain
I am using the Karate-Gatling combo for performance testing. In 0.9.6, the Gatling logs included a thread index, from which I could determine how long each user took to finish the scenario. The logs no longer contain this information in 1.0.1.
Is there a way to get the information about time it took to process a single user in 1.0.1? Or am I stuck with some sort of statistics of Duration*ConcurrentUsers/TotalUsers?
This is news to me. There have been some code contributions, and no one else has reported this. The Gatling version has also been upgraded, and perhaps the thread index is no longer supported. The best thing is for you to help us understand what should be changed and do some investigation. We have a nice developer guide, so you should be able to build from source easily.
I suspect it is this commit: https://github.com/intuit/karate/pull/1335/files
If we don't get help, it is unlikely we will resolve this, as no one else has reported the problem.
I am looking to work on a project using Node.js addons with C++. I came across two abstraction libraries, NAN and N-API, that I can use. However, I am unable to decide which one I should use. I was not able to find a proper comparison between the two.
What are the pros, cons and differences of both? How to choose between them?
So far I have found that NAN has more online tutorials/articles regarding async calls, but N-API is officially supported by Node (and was created after NAN as a better alternative, although I am not sure about that).
My understanding is this:
Node-API (formerly N-API) was added to the core Node.js interface in v8.0.0. "It is intended to insulate Addons from changes in the underlying JavaScript engine…" to quote the documentation. It also provides some other wrappers around things like buffers and asynchronous work (which should help avoid some of the underlying non-stable APIs noted in the documentation's Implications of ABI Stability section).
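To make that concrete, here is a minimal sketch of an addon written against the raw Node-API; the function and file names are my own illustration, not anything from the docs:

```cpp
// hello.cc -- a minimal raw Node-API addon (illustrative sketch).
#include <node_api.h>

// Native implementation of a JS function that returns the string "world".
static napi_value Hello(napi_env env, napi_callback_info info) {
  napi_value result;
  // Real code should check the napi_status returned by every call.
  napi_create_string_utf8(env, "world", NAPI_AUTO_LENGTH, &result);
  return result;
}

// Module initializer: attaches hello() to the exports object.
static napi_value Init(napi_env env, napi_value exports) {
  napi_value fn;
  napi_create_function(env, "hello", NAPI_AUTO_LENGTH, Hello, nullptr, &fn);
  napi_set_named_property(env, exports, "hello", fn);
  return exports;
}

NAPI_MODULE(NODE_GYP_MODULE_NAME, Init)
```

Everything goes through opaque napi_value handles and plain C calls, which is what buys the ABI stability, and also why it feels low-level.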
nan (Native Abstractions for Node) is indeed older and so also supports older versions of Node.js, back to Node.js 0.8! Now, despite its author claiming back in 2017:
As I mentioned somewhere else, N-API is not meant to be directly used for anything. Where has this notion come from? It is an (effectively internal) low-level infrastructure layer meant to offer ABI stability. There will be another layer on top.
…I do not see much warning to that effect in the official Node.js add-on documentation. Perhaps this other comment is a bit more insightful:
Yes, you should still use NAN for production use. It covers every relevant version of Node.js. Also note that N-API is not intended for end users. You should eventually use https://github.com/nodejs/node-addon-api.
Again, that was in June of 2017, by the maintainer of nan at the time. It seems that node-addon-api has matured in the meantime and remains active. In fact, I found a comment in the node-addon-api repo that is only a month old at present:
…part of the goal was to make it easy to transition from nan.
So I think the answer is:
use nan if you want something mature and very backwards-compatible
use node-addon-api if you want something forwards-looking in C++
use Node-API/N-API if you are comfortable working in C and dealing with possible lower-level concerns
You should use the node-addon-api module for new C++ code (or N-API for C code). All supported (non-EOL) versions of Node.js support it, and it makes maintaining and distributing native addons much easier: whereas addons using NAN require rebuilding the module for each NODE_MODULE_VERSION (major version of Node.js), modules using N-API/node-addon-api are forward-compatible:
A given version n of N-API will be available in the major version of Node.js in which it was published, and in all subsequent versions of Node.js, including subsequent major versions.
There's a somewhat confusing compatibility matrix here. N-API version 3 is compatible with Node.js v8.11.2+, v9.11.0+ and all later major versions (v10+), for example.
On top of that, node-addon-api fixes a lot of the annoying parts of NAN (like Buffers always being char* instead of, say, uint8_t*).
NAN still works, of course, and there are more learning resources online, but node-addon-api is the way forward.
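For comparison, here is a minimal node-addon-api sketch (the same hello example as above, plus a buffer function illustrating the typed-Buffer point; the names are my own):

```cpp
// addon.cc -- the same addon sketched with node-addon-api (C++ wrapper).
#include <napi.h>

// Returns the string "world"; failures surface as exceptions rather
// than napi_status codes you must check by hand.
Napi::String Hello(const Napi::CallbackInfo& info) {
  return Napi::String::New(info.Env(), "world");
}

// Sums the bytes of a Buffer argument. Note that Data() yields uint8_t*,
// not char* as with NAN's buffer helpers.
Napi::Number SumBytes(const Napi::CallbackInfo& info) {
  Napi::Buffer<uint8_t> buf = info[0].As<Napi::Buffer<uint8_t>>();
  uint32_t sum = 0;
  for (size_t i = 0; i < buf.Length(); i++) sum += buf.Data()[i];
  return Napi::Number::New(info.Env(), sum);
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set("hello", Napi::Function::New(env, Hello));
  exports.Set("sumBytes", Napi::Function::New(env, SumBytes));
  return exports;
}

NODE_API_MODULE(addon, Init)
```

The wrapper is a header-only layer over the C API, so it keeps the same forward-compatibility guarantees while reading like ordinary C++.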
The rules for when to increase the MAJOR vs the MINOR version number in SemVer 2.0 are very compelling. Knowing whether the app/service is backwards compatible clearly has a lot of advantages.
But the site does not really give a reason for the difference between a MINOR version and what it calls a PATCH. I don't see it providing the same kind of benefit as the MAJOR vs MINOR distinction.
For reference here are the SemVer rules:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
So the only difference between MINOR and PATCH is features vs bug fixes. My company wants to do that differently.
They want MINOR to be a collection of [backwards compatible] features, and "PATCH" (which we call Incremental) to be the releases needed to get those features out. (We release bug fixes as we release features.)
For example, if we plan for 7 [backwards compatible] features in our 2.4 release then 2.4.0 may have 2 of the features, 2.4.1 would have 3 more features and 2.4.2 would have the last 2 (perhaps with a bug fix or two in each release).
I can see that this violates SemVer, but I need to know why SemVer has decided to be prescriptive on the differences between the MINOR and PATCH versions so I can know which way to push my company.
NOTE: I hope that this is not too subjective for Stack Overflow. I don't usually ask questions like this, so it is possible that this question will need to be closed...
The standard is deliberately terse. There's nothing in it that prevents you from releasing a bunch of bug fixes along with your new features; you only have to bump the minor field when you do. If the changes only involve bug fixes, refactoring, or documentation that does not add, remove, or modify any interface, then you only bump the patch. The whole point is to communicate to your consumers their level of risk when taking an update from you.
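As a toy illustration of those rules (my own sketch, not anything from the spec beyond the rules quoted above):

```cpp
// A toy model of the SemVer bump rules. Note the resets: per the spec,
// minor and patch reset to zero on a major bump, and patch resets to
// zero on a minor bump.
struct Version { int major, minor, patch; };

enum class Change { Breaking, Feature, Fix };

Version bump(Version v, Change c) {
  switch (c) {
    case Change::Breaking: return {v.major + 1, 0, 0};
    case Change::Feature:  return {v.major, v.minor + 1, 0};
    case Change::Fix:      return {v.major, v.minor, v.patch + 1};
  }
  return v;  // unreachable
}

// bump({2, 4, 1}, Change::Feature) yields {2, 5, 0}: shipping new
// features always moves the minor field, no matter how many bug
// fixes ride along in the same release.
```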
EDIT:
It is a best practice to separate bug fixes (patches) from feature work (minor) and breaking changes (major) into separate releases. This allows your consumers to automatically pick up the latest fixes without having to deal with feature bloat or breaking changes.
For example, if we plan for 7 [backwards compatible] features in our 2.4 release then 2.4.0 may have 2 of the features, 2.4.1 would have 3 more features and 2.4.2 would have the last 2 (perhaps with a bug fix or two in each release).
Nothing stops you from following the SemVer convention and releasing 2.4.0 with 2 features, 2.5.0 with 3 more features, and 2.6.0 with the last 2 features and a few bug fixes for previously released features.
It is better to follow a common convention than to reinvent the wheel. You save the time spent thinking about, discussing, and documenting a custom solution, and you avoid confusion within the team and for newcomers.
https://semver.org/
What is the versioning style of the project?
You should only be using even-numbered versions: x.[even].z. These are all "stable", and bug fixes will be released to them (until the next minor version). Only the latest x.[even].z version is supported at any given time.
For compatibility, you should just look at the documentation. For example, for events: http://nodejs.org/api/events.html
Stability: 4 - API Frozen
That means you can rest assured that the EventEmitter class will never change.
Then there's stuff like domains, where no one is sure what they're doing, and which you probably shouldn't be using:
Stability: 1 - Experimental
Your best bet is just to stick with Stability >= 3 features and not worry about compatibility between versions.
Also, there doesn't seem to be a strict release cycle.
Node has a two-track versioning system. Even-numbered versions (0.4, 0.6, 0.8) are stable, and odd-numbered versions are unstable. The stable releases are API-stable, which means that if you are using 0.8.1 and 0.8.2 comes out, you should be able to upgrade with no issues.
On the 0.9.x stream, any update may change the API, especially in parts of the system that are under active development. When the odd version reaches a certain level of stability and maturity, it becomes the next even version.
There is not a strict timed release cycle. The primary maintainer of Node.js is Isaac Schlueter, and he has been very public about his goals and targets for Node. He is also open to a lot of community input, so they run NodeConf and Node Summer Camp and some other events to gather it.
If you have time to really dig into the community, check out the NodeUp podcast and some of Isaac's talks to get an idea of the direction they are going with the project and its APIs.
You ask about version 1.0. As far as I remember, Isaac has a couple of specific things he wants to stabilize before going to version 1.0; in particular, I remember Streams and Buffers, which have really become key to Node's growth. (That said, this is just from memory.)
I need to implement a memory cache with Node, and it looks like there are currently two packages available for doing this:
node-memcached (https://github.com/3rd-Eden/node-memcached)
node-memcache (https://github.com/vanillahsu/node-memcache)
Looking at both GitHub pages, it seems that both projects are under active development with similar features.
Can anyone recommend one over the other? Does anyone know which one is more stable?
At the moment of writing this, the 3rd-Eden/node-memcached project doesn't seem to be stable, according to its GitHub issue list (e.g. see issue #46). Moreover, I found its code quite hard to read (and thus hard to update), so I wouldn't suggest using it in your projects.
The second project, elbart/node-memcache, seems to work fine, and I feel good about the way its source code is written. So if I had to choose between only these two options, I would prefer elbart/node-memcache.
But as of now, both projects have problems storing BLOBs. There's an open issue for the 3rd-Eden/node-memcached project, and elbart/node-memcache simply doesn't support the feature. (It would be fair to add that there's a fork of the project that is said to add support for storing BLOBs, but I haven't tried it.)
So if you need to store BLOBs (e.g. images) in memcached, I suggest the overclocked/mc module. I'm using it in my project now and have no problems with it. It has nice documentation and is highly customizable, yet still easy to use. And at the moment it seems to be the only module that handles storing and retrieving BLOBs without issues.
Since this is an old question/answer (from 2 years ago), and I got here by googling and then researching, I feel I should tell readers that I definitely think 3rd-Eden's memcached package is the one to go with. It seems to work fine, and based on the usage by others and the recent updates, it is the clear winner. Almost 20K downloads for the month, 1,300 just today, and the last update was made 21 hours ago. No other memcache package even comes close. https://npmjs.org/package/memcached
The best way I know of to see which modules are the most robust is to look at how many projects depend on them. You can find this on npmjs.org's search page. For example:
memcache has 3 dependent projects
memcached has 31 dependent projects
... and in the latter, I see connect-memcached, which would seem to lend some credibility there. Thus, I'd go with the latter, barring any other input or recommendations.