Add new node to existing CouchDB cluster after cluster_finished - couchdb

Can I add a new node to an existing CouchDB cluster?
Basically, I have already completed the CouchDB cluster configuration using the /_cluster_setup API, with the finish_cluster action. Can I still add another new node to the cluster manually using the CouchDB REST API? I tried to use the /_cluster_setup API again to add it, but the new node is not included in the "all_nodes" section of the /_membership response.
I also tried to follow the example at http://docs.couchdb.org/en/stable/cluster/nodes.html#adding-a-node , but that did not seem to help.
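For reference, the approach from the linked docs looks roughly like this (a sketch only; the host, credentials, and node name are placeholders, and the node-local /_nodes endpoint on port 5986 applies to CouchDB 2.x):

```javascript
// Sketch: add a node by PUTting an empty document into the _nodes
// database on the node-local port, per the linked docs (CouchDB 2.x).
// Host, credentials, and node name are placeholders.
const http = require('http');

const req = http.request({
  host: '127.0.0.1',
  port: 5986,                                   // node-local port in CouchDB 2.x
  path: '/_nodes/couchdb@newnode.example.com',  // Erlang name of the node to add
  method: 'PUT',
  auth: 'admin:password',
  headers: { 'Content-Type': 'application/json' },
}, (res) => {
  res.pipe(process.stdout);                     // expect {"ok":true,...}
});

req.end('{}');                                  // an empty document is enough
```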
Any recommendation?

Related

Add custom nodes in firebase using Node-RED

I am new to Raspberry Pi and Node-RED. I want to add custom nodes to Firebase using Node-RED nodes. Here is my database screenshot:
I have used the push option of the firebase node to create multiple nodes with auto-generated IDs. I want to give them custom IDs like "User1", "User2" and so on using the firebase node in Node-RED. How can this be achieved?
push() always generates random IDs, and the names of these nodes can't be changed after they're added. If you want to assign your own names, you will need to build a path to the node you want and populate it with set().
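A minimal sketch of the difference using the firebase-admin SDK (the databaseURL and data are placeholders; the same set-vs-push distinction applies to the firebase node in Node-RED):

```javascript
// Sketch using the firebase-admin SDK; databaseURL and data are placeholders.
const admin = require('firebase-admin');

admin.initializeApp({
  credential: admin.credential.applicationDefault(),
  databaseURL: 'https://your-project.firebaseio.com',
});

// set() writes to an exact path, so the key is whatever you choose:
admin.database().ref('users/User1').set({ name: 'Alice' });

// push() appends under an auto-generated random ID instead:
admin.database().ref('users').push({ name: 'Bob' });
```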

Neo4j seraph-model: Create a node with relation to other node

Hi, I have started working with Neo4j and the seraph and seraph-model packages.
A problem popped up:
I cannot seem to find a way to create a node connected to another node in one query, for example:
Create (n:User)-[r:Has]->(p:UserImage)
I know I can do that using the native seraph.query, but then I lose some of the model features (like the timestamps).
Is there some other way to do that?
How expensive is it to do this in 3 steps (create user, create image, link them)?
seraph-model is an extension of seraph. seraph.query is for raw queries; the model features are only available when you go through the model's own methods, as in the sketch below.
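A sketch of the three-step approach, keeping the seraph-model features (like timestamps) for both nodes; the connection URL and properties are placeholders:

```javascript
// Sketch of the three-step approach; model features apply to both saves.
var db = require('seraph')('http://localhost:7474');
var model = require('seraph-model');

var User = model(db, 'User');
var UserImage = model(db, 'UserImage');

User.save({ name: 'alice' }, function (err, user) {
  if (err) throw err;
  UserImage.save({ url: 'alice.png' }, function (err, image) {
    if (err) throw err;
    // plain seraph creates the relationship between the two saved nodes
    db.relate(user, 'Has', image, function (err, rel) {
      if (err) throw err;
      console.log('created relationship', rel);
    });
  });
});
```

Cost-wise this is three HTTP round-trips instead of one Cypher statement, which is usually acceptable unless you create these pairs in bulk.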

Chef server migration: How to update the client.pem in nodes?

I am attempting to migrate from 1 chef server to another using knife-backup. However, knife-backup does not seem to update the nodes, and all my nodes are still pointing to the old server in their respective client.rb files, and their validation.pem and client.pem are still paired with the old server.
Consequently, I updated all the client.rb and validation.pem files manually.
However, I still need to update client.pem. Obviously, one way to do so would be to bootstrap the node again against the new server, but I do not want to do that: deploying to these nodes again could cause a loss of data.
Is there any way to update client.pem in the nodes without having to bootstrap or run chef-client? One way would be to get the private key and do it manually, but I am not sure how to do that.
Thanks!
PS: Please feel free to suggest any other ideas for migration as well!
It's the chef server "client" entities that contain the public keys matching the private key ("client.pem") files on each client server. The knife backup plugin reportedly restores chef clients. Have you tried just editing the chef server URL (in the "client.rb") and re-running chef-client?
Additional note:
You can discard the "validation.pem" files; these are only used during bootstrap to create new client registrations. Additionally, your new Chef server most likely has a different validation key.
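If re-pointing the nodes works for your setup, the edit on each node is roughly this (a client.rb sketch; the server URL, organization, and node name are placeholders):

```ruby
# /etc/chef/client.rb on each node, pointed at the new server.
# URL, org, and node name are placeholders.
chef_server_url "https://new-chef.example.com/organizations/myorg"
node_name       "node01.example.com"
# client.pem stays as-is: the restored client entity on the new server
# holds the matching public key.
client_key      "/etc/chef/client.pem"
```

Then re-run chef-client and confirm the node authenticates against the new server.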

Getting the next node id for a Neo4j Node using the REST API

EDIT
When I am talking about a node and a node id, I am specifically talking about the Neo4j representation of a node, not Node as in Node.js.
I am building an application on top of Neo4j with Node.js, using the thingdom wrapper over the REST API, and I am attempting to add my own custom id property that will be a hash of the id, to be used in the URL for example.
What I am currently doing is creating the node and then, once the id is returned, hashing it and saving it back to the node, so in effect I am calling the REST API twice to create a single node.
This is a long shot, but is there a way to get a reliable next id from Neo4j using the REST API, so that I can do this all in one request?
If not, does anyone know of a better approach to what I am doing?
The internal id of Neo4j nodes is not supposed to be used for external interfaces, as noted in the documentation. In particular, it is not a good idea to try to guess the next id.
It is recommended to use application-specific ids to reference nodes. If you use UUIDs (especially UUID version 4) there is only a minimal chance of collisions, and you can compute them on node creation, before storing the node in the database.
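A sketch of that approach with the thingdom wrapper: compute the UUID up front so a single REST call creates the node (the connection URL and properties are placeholders):

```javascript
// Sketch with the thingdom node-neo4j wrapper: compute the UUID up
// front so one REST call creates the node. URL and properties are
// placeholders.
var neo4j = require('neo4j');
var uuid = require('node-uuid');   // any UUID v4 generator works

var db = new neo4j.GraphDatabase('http://localhost:7474');

// the application-level id is a property like any other
var node = db.createNode({ uuid: uuid.v4(), name: 'example' });

node.save(function (err, saved) {
  if (err) throw err;
  console.log('created node with app-level id', saved.data.uuid);
});
```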
Out of curiosity, can I ask why you need to have the id stored in the node?
Anyway, it's quite common in Node.js to call a succession of APIs, and you will see that with Neo4j this is needed more than once.
If you don't already use it, I can only suggest you take a look at Async: https://github.com/caolan/async
In particular, look at the "waterfall" method, which lets you chain calls so that each one uses the result of the previous call.
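A sketch of waterfall chaining two calls, where the second receives the result of the first (createUser and createImageFor are hypothetical helpers):

```javascript
// Sketch of async.waterfall: each task receives the result of the
// previous one. createUser and createImageFor are hypothetical helpers.
var async = require('async');

async.waterfall([
  function (callback) {
    createUser({ name: 'alice' }, callback);      // -> callback(err, user)
  },
  function (user, callback) {
    createImageFor(user, 'alice.png', callback);  // -> callback(err, image)
  },
], function (err, image) {
  if (err) throw err;
  console.log('both calls done', image);
});
```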

How to get elastic search to play with MongoDb and node.js?

I am fairly new to both MongoDB and Node.js, but I recently got everything working well for me, until I reached the point where I needed to add full-text search to my website. From my research I figured out that Elasticsearch would be a good fit, but I couldn't figure out exactly how to get it to work with Node.js and MongoDB. I am currently using Heroku and MongoLab to host my application. Here are my questions.
How do I host Elasticsearch?
How do I make all my Mongo data available to Elasticsearch? Do I use a river, or do I manually insert and delete all the data?
I found one such river, but I am not quite sure how to make this happen automatically, or where to host it.
How do I query Elasticsearch from node.js? Is there a package that allows for this?
Edit:
Question 2 is really what I am struggling with. I have also included questions 1 and 3 to help people who are new to the topic and coming from Google.
1) Either on your own server/VM/whatever, or with a hosted service such as https://searchbox.io/
2) You can create a script to index your existing data and then index new data once it's created, or use a river to index your current database.
3) Elasticsearch exposes a simple HTTP API; you can make your own requests using the 'http' module, or simplify things with something like https://github.com/mikeal/request
You can also use a third-party library like https://github.com/phillro/node-elasticsearch-client (a request-based sketch follows below).
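For example, a raw search request with the request module might look roughly like this (host, index, and query are placeholders):

```javascript
// Sketch of a raw search request with the request module; host, index,
// and query are placeholders.
var request = require('request');

request({
  method: 'POST',
  url: 'http://localhost:9200/myindex/_search',
  json: { query: { match: { title: 'hello' } } },
}, function (err, res, body) {
  if (err) throw err;
  console.log(body.hits.hits);   // matching documents
});
```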
Searchly.com (aka SearchBox.io) introduced a new crawlers feature that includes a MongoDB crawler.
It fetches data from a given collection and syncs it periodically to Elasticsearch. Check http://www.searchly.com/documentation/crawler-beta/
You can host Elasticsearch on your own server, use the AWS Elasticsearch Service, or use Elastic Cloud as provided by Elastic.
Try any of the following three solutions:
i) Try the mongoosastic NPM package (a sketch follows after this list).
ii) Use mongo-connector.
iii) Use a Python script to index the data into Elasticsearch.
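A sketch of option i), mongoosastic, which indexes Mongoose saves into Elasticsearch and adds a search method to the model (the schema and query are placeholders):

```javascript
// Sketch of mongoosastic: the plugin indexes Mongoose saves into
// Elasticsearch and adds a search method. Schema and query are
// placeholders.
var mongoose = require('mongoose');
var mongoosastic = require('mongoosastic');

var ArticleSchema = new mongoose.Schema({ title: String, body: String });
ArticleSchema.plugin(mongoosastic);

var Article = mongoose.model('Article', ArticleSchema);

// after documents have been saved (and therefore indexed):
Article.search({ query_string: { query: 'hello' } }, function (err, results) {
  if (err) throw err;
  console.log(results.hits.hits);
});
```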
elasticsearch-js, the JavaScript client library.
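A sketch with the elasticsearch-js client (host, index, and query are placeholders):

```javascript
// Sketch with the elasticsearch-js client; host, index, and query are
// placeholders.
var elasticsearch = require('elasticsearch');

var client = new elasticsearch.Client({ host: 'localhost:9200' });

client.search({
  index: 'myindex',
  body: { query: { match: { title: 'hello' } } },
}, function (err, resp) {
  if (err) throw err;
  console.log(resp.hits.hits);   // matching documents
});
```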
