MarkLogic replication similar to CouchDB [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
Is it possible to set up two-way replication with MarkLogic 6, similar to CouchDB? Scenario: use the database at location B if location A is offline, with automatic resync once A is online again. Additionally, A and B are used simultaneously, with data pushed/synced automatically in both directions, A -> B and B -> A.

MarkLogic has two kinds of replication: "Flexible Replication", which replicates documents as logical units, and "Database Replication", which replicates transactional updates using journal frames.
The Flexible Replication approach is comparable to CouchDB: it writes document by document and does not group writes from a transaction on the master database into a transactional group on the replica. Since CouchDB does not have transactions in the first place, the comparison holds. Flexible Replication can replicate in both directions as long as the same documents are not updated on both sides. Database Replication cannot replicate in two directions.
Be careful: two-way replication in any system requires some solution to conflicts. MarkLogic handles this by requiring you to specify sets of master data on each server, each identified by a non-conflicting "domain" such as a collection or directory. CouchDB, by contrast, appears to keep conflicting versions without telling you which one you're getting, so there is a difference there.
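To make the conflict issue concrete, here is a toy Python sketch of a deterministic winner-picking rule in the spirit of CouchDB's revision-based resolution. The rule used here (longest revision history wins, ties broken lexicographically on the latest revision id) is an illustration of the idea, not CouchDB's exact algorithm:

```python
def pick_winner(rev_a, rev_b):
    """Deterministically choose between two conflicting revision histories.

    Each argument is a list of revision ids, oldest first. The longer
    history wins; ties are broken by comparing the latest revision id,
    so every replica picks the same winner without any coordination.
    """
    if len(rev_a) != len(rev_b):
        return rev_a if len(rev_a) > len(rev_b) else rev_b
    return rev_a if rev_a[-1] > rev_b[-1] else rev_b

# Both sites updated the same document while disconnected:
site_a = ["1-base", "2-a111"]
site_b = ["1-base", "2-b222", "3-b333"]
print(pick_winner(site_a, site_b))  # the longer history (site_b) wins
```

The point is that both replicas apply the same pure function to the same inputs, so they converge on one winner; the "losing" version can still be retained for application-level resolution, which is what CouchDB does.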

Related

Should we create one Azure CosmosDB per web application or multiple ones? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 months ago.
I set up a web app with Azure Cosmos DB. I learned that there is a limit on the number of containers per database: currently 25 containers when using shared throughput.
What is the best practice here:
Creating multiple databases per application, although the app is not microservices-based.
Using serverless Cosmos DB.
Putting the throughput at the container level.
Please advise.
I have a container for each entity, like Organizations and Users, so the number of containers reaches this limit easily.
I think you need to rethink your design. From the docs:
A container is a schema-agnostic container of items. Items in a container can have arbitrary schemas. For example, an item that represents a person and an item that represents an automobile can be placed in the same container. By default, all items that you add to a container are automatically indexed without requiring explicit index or schema management. You can customize the indexing behavior by configuring the indexing policy on a container.
Do not fall into the trap of trying to map a relational database schema to the resource model of Cosmos DB. Do not think of a container as a table. Have you read the modeling guide already?
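As a sketch of that modeling advice, different entity types can share one container, distinguished by a `type` property and a common partition key. The property names here (`type`, `partitionKey`) are assumptions for illustration, and a plain Python list stands in for the container:

```python
# One "container" holding heterogeneous items, as Cosmos DB allows.
container = [
    {"id": "org-1", "type": "organization", "partitionKey": "org-1", "name": "Acme"},
    {"id": "user-7", "type": "user", "partitionKey": "org-1", "name": "Dana"},
    {"id": "user-8", "type": "user", "partitionKey": "org-1", "name": "Lee"},
]

def items_of_type(container, type_name, partition_key):
    """Mimic a query filtered on entity type within a single partition."""
    return [item for item in container
            if item["type"] == type_name and item["partitionKey"] == partition_key]

print(len(items_of_type(container, "user", "org-1")))  # 2
```

With this shape, one container serves many entity types, the 25-container limit stops being a constraint, and related items (an organization and its users) can even share a partition key for cheap single-partition queries.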

Use NoSQL on a single box [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I am designing software that will be deployed to a single server. I will have about 1TB of data, and there will be more writing than reading.
I have the option to buy one good server, or the option to use Redis and Cassandra, but I cannot do both. I doubt that it makes sense to run NoSQL on a single node. Will I get enough speedup over a traditional SQL database?
This type of question is problematic, as it calls for an opinion, which in most cases is highly subjective.
I cannot speak on Cassandra's behalf for better or worse.
Redis is an in-memory solution, which basically means that whether reading or writing, you'll get the best performance available today. It also means that your 1TB of data will need to fit in that one good server's RAM. Also note that you'll need additional RAM to actually run the server (OS) and Redis itself. Depending on what you do and how, you could end up with a RAM requirement of up to 2.5-3x the data's size. That means roughly 3TB of RAM... and that's a lot.
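The RAM arithmetic in that answer can be checked directly; the 2.5-3x overhead factor is the answer's own rough estimate (fragmentation, Redis bookkeeping, OS headroom), not a Redis guarantee:

```python
data_tb = 1.0                             # raw dataset size, in TB
overhead_low, overhead_high = 2.5, 3.0    # rough total-RAM multipliers from the answer

# Total RAM estimate = raw data size times the overhead multiplier.
ram_low = data_tb * overhead_low
ram_high = data_tb * overhead_high
print(f"Estimated RAM needed: {ram_low:.1f}-{ram_high:.1f} TB")
# Estimated RAM needed: 2.5-3.0 TB
```

Even at the low end of the estimate, this is far beyond commodity single-server RAM, which is the answer's underlying point.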
If the single-server requirement isn't hard, I'd look into dropping it. Any setup, Redis or not, will not offer any availability on a single box. If you use a cluster, you'll be able to scale easily using cheaper, "less good" ;) servers.
If there will be more writing than reading, then Redis is probably not your answer.
Cassandra will handle heavy writes pretty well, but the key question is: do you know your read queries ahead of time? If so, then Cassandra is a good solution. However, if you plan to do ad-hoc querying then Cassandra is not the answer. This last point is actually the key one.

How data can be synchronized among multiple linux servers [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
I have 4 servers running the same project, and I want to make changes to the database from the UI.
What should I do so that all changes are reflected on all servers, so that every server contains the same data?
You can use database replication for this purpose.
You can use data replication. Replicate all the data from all four servers to one single location.
Database replication is the frequent electronic copying of data from a database on one computer or server to a database on another, so that all users share the same information. The result is a distributed database in which users can access data relevant to their tasks without interfering with the work of others.
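A minimal sketch of the pattern described above, assuming a simple change-log push from one primary to the other servers; this illustrates the concept only and is not any particular database's replication protocol:

```python
primary = {}
replicas = [{}, {}, {}]   # the other servers from the question
change_log = []

def write(key, value):
    """Apply a write on the primary and record it for replication."""
    primary[key] = value
    change_log.append((key, value))

def replicate():
    """Push every logged change to all replicas so they converge."""
    for key, value in change_log:
        for replica in replicas:
            replica[key] = value
    change_log.clear()

write("user:1", {"name": "Asha"})
write("user:2", {"name": "Ravi"})
replicate()
print(all(replica == primary for replica in replicas))  # True
```

In practice you would use your database's built-in replication (e.g. MySQL or PostgreSQL primary/replica setups) rather than hand-rolling a change log, but the shape is the same: writes go to one place and a log of changes is applied everywhere else.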

MongoDB find query taking too much time [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I worked on a MySQL and PHP project earlier for an iPhone app, but as the stored data grew over time, the client moved to NodeJS and MongoDB.
We have made a new version of the app with a Mongo database, which works fine with a few records.
But when we migrated the MySQL database into MongoDB, it consumed almost 2GB of space on the server.
Our app has a large number of users and related data.
Now we are stuck: finding records (20 records) takes so much time (4 to 5 seconds) that it causes unwanted delays in the app, and users are irritated by most activities in the app.
Take a look at this section of the MongoDB documentation on Performance Optimization.
The options the documentation lists are:
Create Indexes to Support Queries
Use Projections to Return Only Necessary Data
http://docs.mongodb.org/manual/tutorial/optimize-query-performance-with-indexes-and-projections/
You probably need to add indexes to your collection.
The ensureIndex command (replaced by createIndex in current MongoDB versions) creates an index on the collection. It will improve the speed of your queries significantly. The indexes will have to be created according to the queries you use.
Please follow this documentation:
http://docs.mongodb.org/manual/core/index-creation/
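To see why an index changes lookup time, here is a toy Python model: a dict stands in for the index, and a list comprehension stands in for a full collection scan. This illustrates the idea, not MongoDB's actual B-tree implementation:

```python
# 100,000 fake documents; each userId value occurs 100 times.
documents = [{"_id": i, "userId": i % 1000, "score": i * 7}
             for i in range(100_000)]

def find_without_index(user_id):
    """No index: every query scans the whole collection."""
    return [doc for doc in documents if doc["userId"] == user_id]

# "Index" on userId: built once, then each lookup is a single hash access.
index = {}
for doc in documents:
    index.setdefault(doc["userId"], []).append(doc)

def find_with_index(user_id):
    return index.get(user_id, [])

assert find_without_index(42) == find_with_index(42)
print(len(find_with_index(42)))  # 100
```

Both functions return the same documents, but the scan touches all 100,000 of them on every call, while the indexed lookup touches only the 100 that match, which is exactly the difference the slow `find` in the question is suffering from.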

Can you use CouchDB for web apps like eBay? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I mean, can you use CouchDB for:
CRUD for items and users
bids and auction resolution
bidirectional ratings
a forum
item comparison
You could try to use CouchDB for an application; whether you would be successful is another question.
Something on the scale of eBay will have special requirements that are not representative of a typical application. If you are building a small auction site, then perhaps CouchDB would suffice. A document-oriented database like CouchDB may not be so hot when you have to deal with transactional, records-based data like that associated with auctions.
I think CouchDB would be excellent for part of the problem, though there are a few elements for which it would not be great. In particular, eventual consistency over distributed nodes seems really bad for real-time bidding.
You could keep the item and user info in CouchDB, along with forums and a lot of that sort of stuff, but some functionality (bid tracking, search) would be better suited to other backends. As an example, the CouchDB developers are looking at tying CouchDB into other tools (like Solr) for indexing.
I would look to see how Amazon uses SimpleDB internally (or do they?). It might have some clues as to the right ways to use a document-based database.
As you can see here, they are indeed using a non-relational approach, so I guess you're heading in the correct direction (flexibility-wise, at least).
