CouchDB - share functions across views, across design documents, across databases

Ok, here's the thing.
I have a good JS background, had my share of JS in the past, and have lots of cool bare-bones tools I take with me from project to project that act like a library.
Now I'm trying to settle into working with CouchDB.
After getting used to the luxury of cool tools you wrote yourself that smooth the language for you, I find it a little frustrating to write so many things in a bare-bones manner.
I'm looking for a way to load into the database context a limited, highly efficient and generic set of tools that focus on the pure language and make working with it much more groovy (and gosh, no, I'm not talking about jQuery or any of the even more busty libraries out there).
If, on top of that, there were a way to add some of my own logic tools (BL model functions) to the execution context of the CouchDB JS engine, it would be a great and admirable power and would make CouchDB the new home for a JavaScripter like me.
Maybe I'm aiming too low.
I'd be satisfied with a way to allocate a set of extensions even for a specific database, and I don't mind doing it for every database separately. Or worse - adding it to every design document, so I can teach, for example, several views in the same design doc what a Person is, what a Worker is, and use their methods to retrieve data according to logic in a reusably coded manner.
Can anybody point me the way?
Whatever way you can point me, I'll be very, very grateful.
If there are ways for all of these - then great.
Trust me to know which logic belongs to which layer...
You open my possibilities - I promise to use them :D

CouchDB now supports code sharing as CommonJS modules.
http://docs.couchbase.org/couchdb-release-1.1/index.html#couchdb-release-1.1-commonjs
http://caolanmcmahon.com/posts/commonjs_modules_in_couchdb
This way, you can share your JavaScript modules between views, lists, and shows in the same design doc (server-side).
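For illustration, a minimal design doc might look like the sketch below (the `person` module and its field names are made up). One caveat worth knowing: map functions can only require modules stored under the view's `lib` property, while shows and lists can require modules from anywhere in the design doc.

    {
      "_id": "_design/people",
      "views": {
        "lib": {
          "person": "exports.fullName = function (doc) { return doc.first_name + ' ' + doc.last_name; };"
        },
        "by_full_name": {
          "map": "function (doc) { if (doc.type === 'person') { var person = require('views/lib/person'); emit(person.fullName(doc), null); } }"
        }
      },
      "shows": {
        "card": "function (doc, req) { var person = require('views/lib/person'); return person.fullName(doc); }"
      }
    }

This gets you exactly the "teach several views what a Person is" reuse the question asks for, at least within one design doc.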
Also, you can load these modules on the browser side with this library:
https://github.com/couchapp/couchapp/blob/master/couchapp/templates/vendor/couchapp/_attachments/jquery.couch.app.js
You also might want to look at Kanso:
http://kansojs.org/
It does a really good job of making your JavaScript work seamlessly between the server and the client.

You can find some helpful tools here: https://github.com/vivekpathak/casters
The running examples and test cases may particularly help you.

Related

If I'm integrating two separate APIs, is a MERN stack appropriate?

I have two separate cloud-based APIs that I am working on integrating together. Neither piece of software directly talks to the other, so I am creating something in the middle to get them to communicate. I have had trouble finding examples or documentation on how exactly to do this; does anyone know of any resources that could help me out?
My plan going in was to use a MERN Stack, running on a local server to do GET and POST requests to both APIs, use some mapping and logic to transpose the data into the correct format and send it to the other software. I do not have a client per se (other than myself) on my end, so I really will be skipping the React part of MERN, at least that is what I'm thinking. I'll be using Mongo to keep track of both sets of data for redundancy. I also considered using a LAMP Stack but felt that MERN would be faster in handling the data, and Mongo is more flexible in handling different data formats. If there is another process or technology that could help me that I'm not thinking of, I would be grateful to hear about it.
Has anyone encountered something like this before? Thank you.
As with most architecture questions, there's no completely right or wrong answer here. You could certainly design a well-built system for this purpose with either stack, even more so since you mention that your front-end framework is not an important consideration. Instead, ask yourself questions like this:
Which stack do you have more experience with? Is this an appropriate time to learn a new set of technologies, or is it important to do the best work you're capable of right now (how important are time, cost, and quality in this case)?
Another generalization I'll stick my neck out for is a data-first approach: what sort of data are you dealing with from each cloud integration, and what kind of data do you need to support and/or create in order to make your system work? Mongo, being a NoSQL persistence layer, will allow you to change your data model and handle more varied data more quickly and easily than a SQL solution will. This is a double-edged sword, however, as the lack of validation and of a strongly constrained (typed?) data model will make your application harder to work with and debug as it grows. In short: how big might this application grow?
If you have a handy and familiar way to manage the three different data models you're dealing with (cloud service 1, cloud service 2, and your app) via MySQL, then that's a compelling reason to use it. However, if your style is to start dumping data into your database and you're comfortable with a more iterative approach (which may require more, albeit shorter, rounds of refactoring), then Mongo with MERN may be the preferable choice.
Finally, will others ever be working on this application? If so, which language would you prefer to collaborate with them in - PHP or JavaScript?
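For what it's worth, the glue service the question describes can stay very small in Node. A rough sketch of one sync pass - the URLs, tokens, and field names are all placeholders, and the Mongo bookkeeping the question mentions is omitted:

    // One sync pass: pull records from service A, reshape them, push to service B.
    // Runs on Node 18+ (global fetch); everything below is hypothetical.
    const SOURCE_URL = 'https://api.service-a.example/records';
    const TARGET_URL = 'https://api.service-b.example/items';

    async function syncOnce() {
      const res = await fetch(SOURCE_URL, { headers: { Authorization: 'Bearer <token-a>' } });
      const records = await res.json();

      for (const record of records) {
        // Transpose service A's shape into the shape service B expects.
        const item = {
          externalId: record.id,
          title: record.name,
          updated: record.modified_at,
        };
        await fetch(TARGET_URL, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json', Authorization: 'Bearer <token-b>' },
          body: JSON.stringify(item),
        });
      }
    }

    syncOnce().catch(console.error);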

ODM/ORM for PouchDB in JavaScript

I'm interested in using PouchDB for an offline-first mobile application (Cordova), and I'm wondering if there is any lightweight ORM/ODM for PouchDB written in JavaScript. I couldn't find one.
Is PouchDB the most common way to implement "offlinability" in JavaScript? Or are there better ways to do that (without going native)?
Another one is https://github.com/iyobo/pouchorm, claiming to be 'The definitive ORM for working with PouchDB. Native support for Typescript.'
At the time of writing it appears to be still actively developed.
I did some research and consider PouchDB and RxDB the two most feasible options if you want open source and want to avoid larger server costs. There is also Couchbase, which provides libraries for many languages.
PouchORM (https://github.com/iyobo/pouchorm) is a solid option.
It's lightweight and yet super functional.
It has all the features I would care about when using PouchDB: namely its simplicity, its data validation, and its cool approach of separating data within a single DB using collections.
The collections are also indexed by type, so this does not hurt query speed.
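That pattern is easy to see in plain PouchDB, which is roughly what PouchORM automates: every document carries a type field, and an index on that field keeps per-collection queries fast. A minimal sketch using the pouchdb-find plugin (the document shapes are made up):

    const PouchDB = require('pouchdb');
    PouchDB.plugin(require('pouchdb-find'));

    const db = new PouchDB('app');

    async function demo() {
      // Index the discriminator field so selector queries don't scan everything.
      await db.createIndex({ index: { fields: ['type'] } });

      await db.put({ _id: 'person:1', type: 'person', name: 'Ada', age: 36 });
      await db.put({ _id: 'worker:1', type: 'worker', name: 'Grace', role: 'engineer' });

      // Only "person" docs come back; other "collections" in the same DB are untouched.
      const { docs } = await db.find({ selector: { type: 'person' } });
      console.log(docs);
    }

    demo().catch(console.error);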

Node-Neo4j-Embedded limits (call for new benchmark)

Is there anybody out there using Node-Neo4j-embedded in production mode?
What kind of limits are to be expected?
Because I think this module pushes Cypher queries directly to the node-java module, which hands them straight to the Neo4j Java libs, I believe there shouldn't be any limits.
I feel it is dangerous to decide to use a lib that hasn't been maintained for about 2 years (see GitHub) - and it shouldn't be in the Neo4j docs if it isn't maintained (see the dead API-docs link in the README.md).
It looks like there could be a new trend among vendors of (in-memory) graph databases to support Node.js as a first-class-citizen language. Maybe Neo4j should also review this and the unmaintained Node module (like OrientDB did). The trend was initiated by a benchmark battle between ArangoDB and OrientDB.
I would love to see a Node-Neo4j-embedded benchmark answering ArangoDB's open-source benchmark - done by professional Neo4j people, like the OrientDB people did. But note: they weren't entirely fair (read the last lines about enabling query caches...).
Or it could be a new benchmark focused on the most first-class-citizen-like access possible from Node.js. There are three possible solutions to test. I am not experienced enough to do such a test in a way that would be widely accepted, but I would like to help by verifying it.
Please support this call for action with comments and (several types of) answers. Better, native-like access to a wider range of in-memory and graph solutions would help the Node community very much. A new benchmark would force innovation.
A short note about ArangoDB's benchmark: they tested the REST APIs. But if you care about performance, you don't want to use a REST API - you want direct library access.
#editors: you are welcome
We (ArangoDB) think that the scalability of embedded databases is too limited. An embedded approach also limits the number of databases you can compare. Users prefer to implement their solutions in their application stack of choice, so you would limit the number of people potentially interested in your comparison.
The better way of doing this is to compare each database vendor's officially supported interface into a client stack that is commonly supported by all players in the field. This is why we chose Node.js.
There is enough chatter about benchmarks and how to compare them on Stack Overflow, so if in doubt, start by creating a use case and implementing code for it, present your results in a reproducible way, and request comments, instead of demanding that others do this for you.

Desktop app with website using data

I have a plan to create a desktop app (language not chosen yet) that will be used as an admin tool to manipulate data. At the same time, the database will be used by a website.
My only concern is that I may mix technologies that aren't compatible, when the only thing tying them together is the database.
Say I use Delphi to create the desktop app to manage an Access or MSSQL/MySQL database (if possible) and then use PHP to build the website.
Are there obvious problems with this idea that I am blind to right now?
Any other ideas or suggestions are greatly appreciated.
Databases have to be one of the most common ways I see two languages communicating/cooperating. I've seen databases as a conduit between C/C++, Java, Perl, Python, C#, etc... Databases have the benefit of storing data in a pretty language agnostic way. Almost all languages have a way to talk to a database.
The main downside of using two different languages is that you won't be able to reuse code between your web project and your desktop project. That may sound fine, but every time you update your DB schema, you have to update the two code bases. Not a deal-breaker, but annoying nonetheless.
I would recommend avoiding Access if you can help it. Access works for a simple desktop application, but once you start introducing multiple users, you should go with something a little more robust (and secure). Go with something like SQL Server Compact or SQLite if you need a file database. I personally would bite the bullet and go right for MySQL.

How can I still use DDD, TDD in BizTalk?

I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract first design to be a more useful way of thinking - what messages get passed around at my interfaces). And you can also follow the 'Build the simplest possible thing that will work' and 'only build things that make tests pass' philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and causes the test to pass - all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try to develop this code directly.
There are two tools, BizUnit and BizUnit Extensions, that give you some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled and more test-driven integration tests.
The shapes that you drag onto the Orchestration design surface will largely just do their thing as one opaque unit of execution. And Orchestrations, pipelines, maps etc... all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on github. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantage of the XML-based system is that it is easier to generate test steps and there is no need to recompile when you update the steps. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful: my team could update test step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned the (dynamically generated) ID, we could update the next test step on the fly (not in the same XML file, of course) and then use it to check the database.
From a coding perspective, the IntelliSense issue should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past, and I'm hoping to get an XSD out that will aid IntelliSense. There were some snippets as well for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD - the specification- or behavior-driven element - then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that time. I wish there were some tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB guidance can also help give you a base platform to work off so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases, both in code and in Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE-integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code. This makes tests that are easier to write (using VS IntelliSense), more flexible, and, most importantly, easier to debug in case of a test failure. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, which is a Boo wrapper on top of BizUnit. It has advantages similar to our BizUnit "port", but also has the advantage of still using BizUnit instead of forking from it.