Difference between Cloudant and CouchOne? - couchdb

I wonder what the difference is between Cloudant and CouchOne.

Good question. My quick answer:
CouchOne is led by Damien Katz, the originator of the Apache CouchDB project. CouchOne is now focused squarely on scaling CouchDB down to run efficiently on mobile devices. The goal is to leverage the peer-to-peer replication model of CouchDB to solve the sync problem on mobile.
Cloudant was founded by 3 PhDs from MIT with big-data backgrounds. Cloudant is focused squarely on scaling CouchDB up (see the open-source BigCouch project) to power data-intensive applications in the cloud. Cloudant provides scalable data as a service for high-rate, large-volume online transaction processing, search and analytics.
Thus there is a real opportunity to see the CouchDB API flourish at two tremendously different scales, providing the application developer a single platform that runs on mobile devices and in the cloud, with seamless data (and CouchApp!) migration between the two.

Update (2015)
Currently, according to Professional Services on CouchDB Wiki, there are 3 CouchDB hosting services:
Smileupps
Cloudant
Iris Couch
Since this question is specifically about Cloudant and CouchOne, here's more info:
Cloudant was bought by IBM in March 2014 - see IBM Completes Acquisition of Cloudant - and continues to operate.
According to Wikipedia: "Cloudant is an IBM software product, which is primarily delivered as a cloud-based service. Cloudant is an open source non-relational, distributed database service of the same name that requires zero-configuration. Cloudant is based on the Apache-backed CouchDB project and the open source BigCouch project." (source)
CouchOne is no longer available. As of June 2015, http://www.couchone.com/ gives 404 Not Found (and has since at least March 2013). The #couchone account on Twitter posted its last tweets in May 2011 and says that "CouchOne is now Couchbase, Inc." Please note, however, that contrary to some marketing material, Couchbase Server is not a continuation of CouchDB: it has a different code base, licensing, philosophy, features, data format and protocols.
For more info on this, and an explanation of the differences between things like CouchDB, CouchIO, CouchOne, Couchbase, Couchbase Server, Couchbase Mobile, Couchbase Lite, CouchApps, BigCouch, Touchbase, Membase, Memcached, MemcacheDB etc., see the answer that I wrote in March 2013 to: Difference between CouchDB and Couchbase.
Original answer (2011)
It's a late answer to a somewhat dated thread, but it's the number one Google hit for "couchone and cloudant", so here's a little update.
A few days ago, CouchOne announced a merger with Membase to form a new company called Couchbase. Membase is known, e.g., for powering the database behind FarmVille by Zynga, so it's a mature, large-scale NoSQL solution, and Couchbase is planned to be a technology scaling from smartphones to large data-center clusters.
Cloudant, on the other hand, started from large scale (see Mike's comments), and while you can get large-scale solutions for more than $1000/mo, you can also get a 2GB database for $15/mo and even a smaller one for free.
There is a difference in managing the databases: CouchOne uses just Futon right now, while Cloudant has a custom web interface where you can set up shared databases, virtual hosts, custom domains etc.
All in all, Cloudant seems to be more mature right now, and we will have to see how Couchbase develops.
The bottom line is that both CouchOne and Cloudant can be tried for free so it's probably best to try out both and see what best suits your needs.

I would also note that CouchOne/Couchbase supports the GeoCouch extensions, while Cloudant does not. Important if you want to do bounding-box queries.

Related

Electron Desktop Application communicating with remote NoSql server

I've begun to dive into developing a desktop application with Electron. I have been interested in pairing this application with a NoSQL database to create users, display data, and do CRUD operations. I've considered databases such as MongoDB and CouchDB, and I'm curious whether creating a desktop application that communicates with a database hosted elsewhere is a feasible goal.
I'm hoping that someone here can help direct me to great resources on creating a desktop application that works with a remote NoSQL database. Any advice here would be greatly appreciated!
I recommend the use of CouchDB, which uses a JSON based document format. CouchDB bundles the server and data storage functionality in a single product, providing a REST-like HTTP interface for document insertion, updates, retrieval and deletion.
Therefore, you'll be able to interact with CouchDB directly from within the Electron desktop application. Apache CouchDB Nano is the official Node.js library for accessing CouchDB.
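For illustration, here is a minimal sketch of talking to CouchDB with Nano from the Node.js side of an Electron app; the host, credentials and database name are assumptions for the example:

// npm install nano -- the official CouchDB client for Node.js
const nano = require('nano')('http://admin:password@localhost:5984');
const db = nano.db.use('users'); // assumes a database named 'users' exists

async function demo() {
  // Create a document with an explicit _id
  const created = await db.insert({ name: 'Alice', role: 'admin' }, 'alice');
  console.log('stored revision:', created.rev);

  // Read it back over the same HTTP interface
  const doc = await db.get('alice');
  console.log(doc.name, doc.role);
}

demo().catch(console.error);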
The following additional factors speak in favor of CouchDB:
It is open source.
It has comprehensive documentation.
It is available for Linux, macOS and Windows.
It's easily installed and quickly set up.
It can be installed on your local computer (for development), on your own servers, or in the cloud.
It supports the Mango query language (inspired by MongoDB); see the sketch after this list.
It is highly scalable.
It ships with the Fauxton web interface, which lets you create, update, delete, view and query documents on the fly.
etc.
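Since CouchDB 2.x, documents can also be queried declaratively through Mango, as mentioned above. A minimal sketch with Nano, reusing the hypothetical 'users' database from the previous example:

const nano = require('nano')('http://admin:password@localhost:5984');
const db = nano.db.use('users');

// Mango _find: declarative JSON queries, no map/reduce view required
db.find({
  selector: { role: 'admin' },  // match documents whose role is 'admin'
  fields: ['_id', 'name'],      // return only these fields
  limit: 10
}).then(result => console.log(result.docs))
  .catch(console.error);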

Geolocation App Google Cloud

I've no experience with geolocation-based apps and want to build a geolocation-based app with a backend written in Node.js and running on Google Cloud.
My main problem is how to design the database, and which DB I should use (Bigtable or Datastore). The main query is to find places at a given location and radius. I have read a lot about geohashes, but the Node.js libraries aren't so good right now.
So what would you recommend for choosing and designing the database?
If you want to store the data in relational format, perform frequent joins between location/co-ordinates, and the amount of data being processed is small (<50 GB), then go for Google Cloud SQL.
Cloud Bigtable is ideal for storing very large amounts of single-keyed data with very low latency. It has great integration services with most of the Apache projects.
If there is no requirement for the data to be in relational format, and frequent insertions and updates are required on huge amounts of data, go for Google Cloud Datastore. The querying process is slightly different and can be difficult for a newcomer to understand.
You can also use Google BigQuery, which processes TBs of data within a few seconds, if frequent insertions and updates are not required. It is more of a data store.
Have a look at the following URL for better insights: https://cloud.google.com/storage-options/
Google has also announced Cloud Spanner, a relational database service that offers strong consistency and speed (still in beta). It is at an early stage, but it could revolutionise the SQL vs. NoSQL debate.
All of the above databases have querying libraries written for NodeJS.
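Whichever store you choose, the radius query itself reduces to a great-circle distance check; in production a coarse prefilter (geohash prefix or bounding box) would narrow the candidates first. A dependency-free sketch in Node.js:

// Haversine distance in kilometres between two lat/lng points
function distanceKm(lat1, lng1, lat2, lng2) {
  const toRad = d => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius in km
  const dLat = toRad(lat2 - lat1);
  const dLng = toRad(lng2 - lng1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLng / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Filter candidate places to those within radiusKm of the query point
function placesWithin(places, lat, lng, radiusKm) {
  return places.filter(p => distanceKm(lat, lng, p.lat, p.lng) <= radiusKm);
}

const places = [
  { name: 'A', lat: 52.52, lng: 13.405 },  // Berlin
  { name: 'B', lat: 48.137, lng: 11.575 }  // Munich
];
console.log(placesWithin(places, 52.5, 13.4, 50)); // -> only 'A'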
GeoMesa, an Apache licensed open source suite of tools that enables large-scale geospatial analytics, works with Cloud Bigtable. I don't know how well this will interact with node.js, but it's worth considering a framework like GeoMesa since it will likely enable you to focus more on your core product.

Benefits of using a hosted search service over building your own

I'm building a B2B Node app which has heavily related data models. We currently have our own search queries, but as we scale some of the queries appear to be becoming sluggish.
We will need to support multilingual search as well as content-based searches (searching matching content within related data).
The queries are growing more and more complicated (each has multiple joins on joins on joins) and I'm now considering a hosted search tool such as Algolia.
Given my concerns below, why should I use a hosted cloud search service rather than continue building my own queries?
Data privacy is important
Data is hosted in our own postgres DB - integrations with that are important (e.g.: will I now need to manually maintain our DB data and data in Algolia?)
Speed will be important, but not so much now
Must be able to do content-based searches across multiple languages
We are a tiny team of devs now, so dev resource time is vital
What other things should I be concerned about that can help make a decision in search capabilities?
Regarding maintenance of both DB and Cloud data, it seems it's as simple as getting all data, caching it, and storing it in the cloud:
// Assumes the algoliasearch JavaScript client (v3): npm install algoliasearch
var algoliasearch = require('algoliasearch');
var client = algoliasearch('YOUR_APP_ID', 'YOUR_ADMIN_API_KEY');
var index = client.initIndex('contacts');
var contactsJSON = require('./contacts.json');

// Push the whole array of records to the hosted index
index.addObjects(contactsJSON, function(err, content) {
  if (err) {
    console.error(err);
  }
});
Search services like Algolia, or self-hosted Elasticsearch/Solr, operate as full-text search, not relational DB queries.
But it sounds like the bottleneck is the continual re-joining. If you can make your relational data act like a full-text document DB (pre-joined, in a sense), that could be a much more efficient type of index.
You might also look into views, or a data warehouse (maybe a star schema).
But if you are going the search route, maybe investigate hosting your own Elasticsearch.
If you want more specific help, share your database, schema, SQL, index and query details.
Full disclosure: I founded a company called SearchStax on the premise that companies and developers should not spend time setting up, managing, scaling or building tools for search infrastructure (ops); they are better off investing their employees' time in building value for the company, whether that be features, capabilities, product or customers.
Open-source search solutions built on top of Lucene (Apache Solr / Elasticsearch) have what you need now, and what you might need in the near future, from a search-engine capability perspective. Find a mature as-a-service provider that specializes in open-source search and let them deal with it all. It may look like a small effort right now, but it's probably not worth your devs' time to handle the operations side of that.
For your concerns mentioned above:
Data privacy is important
Your concerns around privacy and security are addressable. There are multiple ways you can secure your Solr environment, and the right MSP or managed-solution provider should be able to address those.
a. Security at the transport layer can be addressed by SSL certificates. All the data going over the wire is encrypted.
b. IP Filtering and User Based Authentication should address who has access to what. Solr-as-a-Service offering by Measured Search supports both.
c. Security at rest can be addressed in multiple ways - OS level / File encryption, but you can even go further by ensuring not even your services provider has access to that data by using Searchable Encryption technology.
Privacy concerns are all addressed by terms and conditions; I am sure your legal department will address that from a service provider's perspective.
Data is hosted in our own postgres DB - integrations with that are important
Solr provides the ability to import data directly from a traditional relational database (MySQL, Postgres, Oracle, etc.) through its Data Import Handler (DIH). You can either use that, so Solr pulls data periodically, or write your own simple script to push data through the Solr APIs.
If you are hosted in the cloud (AWS), a tunnel can be created so only the Solr deployments have the ability to pull data from your servers and your database servers are not exposed to the world, if you choose to go the DIH route.
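If you go the push route instead, here is a minimal sketch in Node.js (assumes the pg driver, Node 18+ for the built-in fetch, and hypothetical table and core names):

// npm install pg
const { Client } = require('pg');

async function syncToSolr() {
  const db = new Client({ connectionString: 'postgres://user:pass@localhost/app' });
  await db.connect();

  // Pull the rows to index -- hypothetical 'products' table
  const { rows } = await db.query('SELECT id, name, description FROM products');
  await db.end();

  // Post them to Solr's JSON update handler -- hypothetical 'products' core
  const res = await fetch('http://localhost:8983/solr/products/update?commit=true', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(rows) // Solr accepts a JSON array of documents
  });
  if (!res.ok) throw new Error('Solr update failed: ' + res.status);
}

syncToSolr().catch(console.error);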
Speed will be important, but not so much now
Solr is built for search speed; I don't think that's where your problems are going to be. With a service offering like Measured Search's, you can spin up a cluster in any data center supported by AWS or Azure and make sure your search deployments are close to your application servers, so the latency overhead is minimal.
Must be able to do content-based searches across multiple languages
Yes, Solr supports that. More than 30 languages.
We are a tiny team of devs now, so dev resource time is vital
I am biased here, but I would not have my developers spend much time on operations; I'd let them focus on what they do best: building great product capabilities that push the limits and deliver business value.
If you are interested in doing a comparison and ROI of doing it yourself vs using a solr-as-a-service like offered by SearchStax, check this paper out - https://www.searchstax.com/white-papers/why-measured-search-is-better-than-diy-solr-infrastructure/

Solr vs. ElasticSearch [closed]

What are the core architectural differences between these technologies?
Also, what use cases are generally more appropriate for each?
Update
Now that the question scope has been corrected, I might add something in this regard as well:
There are many comparisons between Apache Solr and ElasticSearch available, so I'll reference those I found most useful myself, i.e. covering the most important aspects:
Bob Yoplait already linked kimchy's answer to ElasticSearch, Sphinx, Lucene, Solr, Xapian. Which fits for which usage?, which summarizes the reasons why he went ahead and created ElasticSearch, which in his opinion provides a much superior distributed model and ease of use in comparison to Solr.
Ryan Sonnek's Realtime Search: Solr vs Elasticsearch provides an insightful analysis/comparison and explains why he switched from Solr to ElasticSeach, despite being a happy Solr user already - he summarizes this as follows:
Solr may be the weapon of choice when building standard search applications, but Elasticsearch takes it to the next level with an architecture for creating modern realtime search applications. Percolation is an exciting and innovative feature that singlehandedly blows Solr right out of the water. Elasticsearch is scalable, speedy and a dream to integrate with. Adios Solr, it was nice knowing you. [emphasis mine]
The Wikipedia article on ElasticSearch quotes a comparison from the reputed German iX magazine, listing advantages and disadvantages, which pretty much summarize what has been said above already:
Advantages:
ElasticSearch is distributed. No separate project required. Replicas are near real-time too, which is called "push replication".
ElasticSearch fully supports the near real-time search of Apache Lucene.
Handling multitenancy is not a special configuration, whereas with Solr a more advanced setup is necessary.
ElasticSearch introduces the concept of the Gateway, which makes full backups easier.
Disadvantages:
Only one main developer [not applicable anymore according to the current elasticsearch GitHub organization, besides having a pretty active committer base in the first place]
No autowarming feature [not applicable anymore according to the new Index Warmup API]
Initial Answer
They are completely different technologies addressing completely different use cases, thus cannot be compared at all in any meaningful way:
Apache Solr - Apache Solr offers Lucene's capabilities in an easy to use, fast search server with additional features like faceting, scalability and much more
Amazon ElastiCache - Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory cache in the cloud.
Please note that Amazon ElastiCache is protocol-compliant with Memcached, a widely adopted memory object caching system, so code, applications, and popular tools that you use today with existing Memcached environments will work seamlessly with the service (see Memcached for details).
[emphasis mine]
Maybe this has been confused with the following two related technologies one way or another:
ElasticSearch - It is an Open Source (Apache 2), Distributed, RESTful, Search Engine built on top of Apache Lucene.
Amazon CloudSearch - Amazon CloudSearch is a fully-managed search service in the cloud that allows customers to easily integrate fast and highly scalable search functionality into their applications.
The Solr and ElasticSearch offerings sound strikingly similar at first sight, and both use the same backend search engine, namely Apache Lucene.
While Solr is older, quite versatile and mature and widely used accordingly, ElasticSearch has been developed specifically to address Solr shortcomings with scalability requirements in modern cloud environments, which are hard(er) to address with Solr.
As such it would probably be most useful to compare ElasticSearch with the recently introduced Amazon CloudSearch (see the introductory post Start Searching in One Hour for Less Than $100 / Month), because both claim to cover the same use cases in principle.
I see some of the above answers are now a bit out of date. From my perspective, and I work with both Solr(Cloud and non-Cloud) and ElasticSearch on a daily basis, here are some interesting differences:
Community: Solr has a bigger, more mature user, dev, and contributor community. ES has a smaller, but active community of users and a growing community of contributors
Maturity: Solr is more mature, but ES has grown rapidly and I consider it stable
Performance: hard to judge. I/we have not done direct performance benchmarks. A person at LinkedIn did compare Solr vs. ES vs. Sensei once, but the initial results should be ignored because they used a non-expert setup for both Solr and ES.
Design: People love Solr. The Java API is somewhat verbose, but people like how it's put together. Solr code is unfortunately not always very pretty. Also, ES has sharding, real-time replication and document routing built in. While some of this exists in Solr too, it feels a bit like an afterthought.
Support: there are companies providing tech and consulting support for both Solr and ElasticSearch. I think the only company that provides support for both is Sematext (disclosure: I'm Sematext founder)
Scalability: both can be scaled to very large clusters. ES is easier to scale than pre-Solr 4.0 version of Solr, but with Solr 4.0 that's no longer the case.
For more thorough coverage of Solr vs. ElasticSearch topic have a look at https://sematext.com/blog/solr-vs-elasticsearch-part-1-overview/ . This is the first post in the series of posts from Sematext doing direct and neutral Solr vs. ElasticSearch comparison. Disclosure: I work at Sematext.
I see that a lot of folks here have answered this ElasticSearch vs Solr question in terms of features and functionality but I don't see much discussion here (or elsewhere) regarding how they compare in terms of performance.
That is why I decided to conduct my own investigation. I took an already-coded heterogeneous data-source micro-service that used Solr for term search. I switched out Solr for ElasticSearch, then ran both versions on AWS with an already-coded load-test application and captured the performance metrics for subsequent analysis.
Here is what I found. ElasticSearch had 13% higher throughput when it came to indexing documents, but Solr was ten times faster. When it came to querying for documents, Solr had five times the throughput and was five times faster than ElasticSearch.
Given the long history of Apache Solr, I think one strength of Solr is its ecosystem. There are many Solr plugins for different types of data and purposes.
A search platform has the following layers, from bottom to top:
Data
Purpose: Represent various data types and sources
Document building
Purpose: Build document information for indexing
Indexing and searching
Purpose: Build and query a document index
Logic enhancement
Purpose: Additional logic for processing search queries and results
Search platform service
Purpose: Add functionality on top of the search-engine core to provide a service platform.
UI application
Purpose: End-user search interface or applications
Reference article: Enterprise search
I have been working with both Solr and Elasticsearch for .NET applications.
The major differences I have faced are:
Elasticsearch:
More code and less configuration; there are APIs to change settings, but it is still a code change.
Supports complex types, i.e. types within types (nested types), which I wasn't able to achieve in Solr.
Solr:
Less code and more configuration, and hence less maintenance.
Good for grouping results during querying (lots of work to achieve in Elasticsearch; in short, no straightforward way).
I have created a table of the major differences between Elasticsearch, Solr and Splunk; you can use it as a 2016 update.
While all of the above links have merit, and have benefited me greatly in the past, as a linguist "exposed" to various Lucene search engines for the last 15 years, I have to say that elasticsearch development is very fast in Python. That being said, some of the code felt non-intuitive to me. So, I reached out to one component of the ELK stack, Kibana, from an open-source perspective, and found that I could generate the somewhat cryptic code of elasticsearch very easily in Kibana. I could also pull Chrome Sense ES queries into Kibana. If you use Kibana to evaluate ES, it will further speed up your evaluation.
What took hours to run on other platforms was up and running in JSON in Sense on top of elasticsearch (RESTful interface) in a few minutes at worst (largest data sets), and in seconds at best. The documentation for elasticsearch, while 700+ pages, didn't answer questions I had that would normally be resolved in SOLR or other Lucene documentation, which obviously took more time to analyze. Also, you may want to take a look at aggregations in elasticsearch, which have taken faceting to a new level.
Bigger picture: if you're doing data science, text analytics or computational linguistics, elasticsearch has some ranking algorithms that seem to innovate well in the information-retrieval area. If you're using any TF/IDF algorithms (Term Frequency/Inverse Document Frequency), elasticsearch extends this 1960s-era approach to a new level, including BM25 (Best Match 25) and other relevancy-ranking algorithms. So, if you are scoring or ranking words, phrases or sentences, elasticsearch does this scoring on the fly, without the large overhead of other data-analytics approaches that take hours; another elasticsearch time savings.
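For reference, the usual form of the BM25 scoring function mentioned above, where f(q, D) is the frequency of term q in document D, |D| is the document length, avgdl is the average document length in the collection, and k_1 and b are tuning parameters:

score(D, Q) = \sum_{q \in Q} \mathrm{IDF}(q) \cdot \frac{f(q, D)\,(k_1 + 1)}{f(q, D) + k_1 \left(1 - b + b\,\frac{|D|}{\mathrm{avgdl}}\right)}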
With ES, combining some of the strengths of bucketing from aggregations with real-time JSON data relevancy scoring and ranking, you could find a winning combination, depending on either your agile (stories) or architectural (use cases) approach.
Note: I did see a similar discussion on aggregations above, but not on aggregations combined with relevancy scoring; my apologies for any overlap.
Disclosure: I don't work for Elastic and won't be able to benefit in the near future from their excellent work, due to a different architectural path, unless I do some charity work with elasticsearch, which wouldn't be a bad idea.
If you are already using SOLR, stick with it. If you are starting up, go for Elasticsearch.
Most major issues have been fixed in SOLR and it is quite mature.
Imagine the use case:
A lot (100+) of small (10 MB-100 MB, 1,000-100,000 documents) search indexes.
They are used by a lot of applications (microservices).
Each application can use more than one index.
Indexes are small in size, yes, but the load is huge (hundreds of search requests per second) and the requests are complex (multiple aggregations, conditions and so on).
Downtimes are not allowed.
All of that has been running for years and is constantly growing.
The idea of having an individual ES instance per index is huge overhead in this case.
Based on my experience, this kind of use case is very complex to support with Elasticsearch.
Why?
FIRST.
The major problem is a fundamental disregard for backward compatibility.
Breaking changes are so cool!
(Note: imagine an SQL server that required you to make a small change in all your SQL statements when upgraded... I can't imagine it. But for ES it's normal.)
Deprecations that will be dropped in the next major release are so sexy!
(Note: you know, Java contains some deprecations that are 20+ years old but still work in current Java versions...)
And not only that; sometimes you even run into behaviour that is documented nowhere (personally I came across this only once, but...).
So, if you want to upgrade ES (because you need new features for some app, or you want bug fixes), you are in hell. Especially if it is a major version upgrade.
The client API will not be backward compatible. Index settings will not be backward compatible.
And upgrading all apps/services at the same moment as the ES upgrade is not realistic.
But you must do it from time to time. There is no other way.
Are existing indexes automatically upgraded? Yes. But that does not help you when you need to change some old index's settings.
To live with that, you need to constantly invest a lot of effort in... forward compatibility of your apps/services with future releases of ES.
Or you need to build (and then constantly support) some kind of middleware between your apps/services and ES which provides you with a backward-compatible client API.
(And you can't use the Transport Client, because it requires a jar upgrade for every minor ES version upgrade, which does not make your life easier.)
Does that look simple and cheap? No, it doesn't. Far from it.
Continuous maintenance of a complex infrastructure based on ES is way too expensive in all possible senses.
SECOND.
A simple API? Well... not really.
When you are really using complex conditions and aggregations... a JSON request with 5 levels of nesting is many things, but not simple.
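To illustrate, here is a sketch of a moderately complex request (hypothetical 'orders' index and field names; Node 18+ for the built-in fetch):

// Completed orders, bucketed by country, with an average order total per bucket
const body = {
  query: {
    bool: {
      filter: [{ term: { status: 'completed' } }]
    }
  },
  size: 0, // we only want aggregations, not hits
  aggs: {
    by_country: {
      terms: { field: 'country' },
      aggs: {
        avg_total: { avg: { field: 'total' } }
      }
    }
  }
};

fetch('http://localhost:9200/orders/_search', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(body)
})
  .then(res => res.json())
  .then(json => console.log(json.aggregations.by_country.buckets))
  .catch(console.error);

And that is still only two levels of aggregation nesting.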
Unfortunately, I have no experience with SOLR, so I can't say anything about it.
But Sphinxsearch is much better in this scenario, because of its fully backward-compatible SphinxQL.
Note:
Sphinxsearch/Manticore are indeed interesting. They're not Lucene-based, and as a result seriously different. They contain several unique out-of-the-box features that ES does not have, and they are crazy fast with small/medium-size indexes.
I have used Elasticsearch for 3 years and Solr for about a month. An Elasticsearch cluster is quite easy to install compared to a Solr installation, and Elasticsearch has a pool of help documents with great explanations. One use case I was stuck on was histogram aggregation, which was available in ES but not found in Solr; see the sketch below.
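For reference, a histogram aggregation buckets numeric values by a fixed interval; a minimal request body looks like this (hypothetical 'price' field, POSTed to the index's _search endpoint as in the earlier sketch):

const body = {
  size: 0,
  aggs: {
    price_histogram: {
      histogram: { field: 'price', interval: 50 } // one bucket per 50-unit step
    }
  }
};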
Adding a nested document in Solr is very complex, and nested-data search is also very complex, but Elasticsearch makes it easy to add nested documents and search them.
I only use Elasticsearch, since I found Solr very hard to get started with.
Elasticsearch's features:
Easy to start, with very few settings. Even a newbie can set up a cluster step by step.
A simple RESTful API using NoSQL-style queries, and many language libraries for easy access.
Good documentation; you can read the book (there is a web version on the official website).

Membase can someone explain the idea behind their technology

It's already the fourth day since I started diving into CouchDB, specifically Membase (Couchbase). Membase seems like a really interesting technology to me due to its simplicity of administration; the interface is as magical as it is informal and simple. The way you add/remove buckets is just fun.
Unfortunately, I didn't manage to launch their .NET client on Mac OS X (on Windows it worked fine) and also couldn't find a way to perform Map/Reduce queries, so it seemed that the Membase Server technology is a little simpler than pure CouchDB. Anyway, everything changed when I recently stumbled upon the diagram that describes their technology:
The image is explained here.
It seems that "Couchbase Server (currently Membase Server)" plays the role of some sort of master database which isn't accessed directly, and there is also "Couchbase Single Server", which plays the role of a client database with all the features of CouchDB (such as Map/Reduce queries).
If so, then how is "CouchSync" performed? Is it possible to perform this "CouchSync" from code?
Before I describe how CouchSync works, I think it would be beneficial to describe how the Couchbase product line evolved. This will make things clearer.
About a year ago, Membase Server was first released. The idea behind Membase Server was to provide memcached with persistence (the persistence layer was sqlite) and simple-to-use clustering technology. Then, about 6 months ago, the companies Membase and CouchOne merged to form Couchbase. Directly after the merger, Couchbase continued to provide Membase Server, but now also provided Couchbase Single Server. Couchbase Single Server is essentially CouchDB with GeoCouch packaged in by default, along with many major performance improvements.
On July 29th, 2011, Couchbase announced a developer preview of the first version of Couchbase Server. Couchbase Server is the combination of Couchbase Single Server and Membase. Basically, what Couchbase did was replace sqlite with CouchDB as the persistence engine. This took the product from being a key-value store to a document-store database.
So what is CouchSync?
CouchSync is basically what Couchbase is calling CouchDB replication. It is very simple to set up in Couchbase Server, Couchbase Single Server and CouchDB. All it is is a changes feed that is streamed from one server to another.
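In stock CouchDB, by the way, that same changes-feed replication can be triggered from code by POSTing to the _replicate endpoint. A minimal sketch (hypothetical host and database names; Node 18+ for the built-in fetch):

// Kick off a continuous replication from a local database to a remote one
fetch('http://localhost:5984/_replicate', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    source: 'appdata',
    target: 'http://target-host:5984/appdata',
    continuous: true // keep streaming the changes feed
  })
})
  .then(res => res.json())
  .then(console.log)
  .catch(console.error);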
A note on using Membase: since Membase doesn't provide any of the CouchDB support, it doesn't actually fit into this diagram and therefore doesn't support CouchSync. You will actually want to look at the developer preview of Couchbase Server, since that product has both Membase and CouchDB features. In the meantime, if you are looking for something more stable to test, take a look at Couchbase Single Server, as it will give you a feel for some of the features (like CouchSync) that are in Couchbase Server.
Also, the point of this diagram is to show that you can do CouchSync across the entire Couchbase product line. You don't need to go through Couchbase Single Server to do CouchSync to Couchbase Mobile. You can do CouchSync directly from Couchbase Server.
Is it possible to perform CouchSync from code?
No. It's easier than that. You set it up in the web ui.
Hope that helps.
[EDIT]:
This diagram is now outdated. Couchbase the company no longer supports Couchbase Single Server (which is its version of CouchDB). The CouchSync feature will now sync directly with Couchbase Server.
