couchdb 2.0 installation and single node setup - couchdb

After installing CouchDB 2.0, the docs ask you to do this:
After installation and initial startup, visit Fauxton at
http://127.0.0.1:5984/_utils#setup. You will be asked to set up
CouchDB as a single-node instance or set up a cluster.
This gets in the way when automating the installation process.
What actually happens when you decide for one option or the other?
Can the same results be achieved via API calls?
Thanks for any insights
volker

Of course :)
Documentation from the couchdb-documentation repository.
The Cluster Setup API
If you would prefer to manually configure your CouchDB cluster, CouchDB exposes
the _cluster_setup endpoint for that. After installation and initial setup,
we can set up the cluster. On each node we need to run the following command to
set up the node:
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password"}'
After that we can join all the nodes together. Choose one node
as the "setup coordination node" to run all these commands on.
It manages the setup and requires all other nodes to be able to
see it and vice versa; setup will not work with unavailable nodes.
The "setup coordination node" is only a role during setup: once the
setup is finished, the cluster no longer has one.
To add a node, run these two commands:
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "port": 15984, "remote_node": "<remote-node-ip>", "remote_current_user": "<remote-node-username>", "remote_current_password": "<remote-node-password>" }'
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "add_node", "host":"<remote-node-ip>", "port": "<remote-node-port>", "username": "admin", "password":"password"}'
This will join the two nodes together.
Keep running the above commands for each
node you want to add to the cluster. Once this is done, run the
following command to complete the setup and add the missing databases:
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "finish_cluster"}'
Your CouchDB cluster is now set up.
Source: https://github.com/apache/couchdb-documentation/blob/master/src/cluster/setup.rst
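The quoted docs only cover the cluster path. For the single-node option from the original question, the same _cluster_setup endpoint accepts an enable_single_node action in recent 2.x releases; a minimal sketch along the lines of the commands above (verify against the docs for your exact version):
curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_single_node", "bind_address":"0.0.0.0", "username": "admin", "password":"password"}'
Either way, a plain GET against the same endpoint reports the setup state, and _membership lists the nodes, which is handy when checking an automated install:
curl http://admin:password@127.0.0.1:5984/_cluster_setup
curl http://admin:password@127.0.0.1:5984/_membership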

Related

How to handle Sematext becoming detached from monitoring a restarted container

I am using Sematext to monitor a small composition of Docker containers plus the Logsene feature to gather the web traffic logs from one container running Node Express web application.
It all works fine until I update and restart the web server container to pull in a new code build. At this point, Sematext Logsene seems to get detached from the container, and so I lose the HTTP log trail in the monitoring. I still see the Docker events, so it seems it's only the logs part that is broken.
I am running Sematext "manually" (i.e. it's not in my Docker Compose) like this:
sudo docker run -d --name sematext-agent --restart=always -e SPM_TOKEN=$SPM_TOKEN \
-e LOGSENE_TOKEN=$LOGSENE_TOKEN -v /:/rootfs:ro -v /var/run/docker.sock:/var/run/docker.sock \
sematext/sematext-agent-docker
And I update my application simply like this:
docker-compose pull web && docker-compose up -d
where web is the web application service name (amongst database, memcached, etc.).
This recreates the web container and restarts it.
At this point Sematext stops forwarding HTTP logs.
To fix it I can restart Sematext agent like this:
docker restart sematext-agent
And the HTTP logs start arriving in their dashboard again.
So, I know I could just append the agent restart command to my release script, but I am wondering if there's a way to prevent it from becoming detached in the first place? I guess it's something to do with it monitoring the run files.
I have searched their documentation and FAQs, but not found anything specific about this effect.
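For reference, that stopgap is just one extra step in the release script (using the service and container names above):
docker-compose pull web && docker-compose up -d && docker restart sematext-agent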
I seem to have fixed it, but not in the way I'd expected.
While looking through the documentation I found that the sematext-agent-docker package with the Logsene integration built in has been deprecated and replaced by two separate packages.
"This image is deprecated.
Please use sematext/agent for monitoring and sematext/logagent for log collection."
https://hub.docker.com/r/sematext/sematext-agent-docker/
You now have to use both Logagent https://sematext.com/docs/logagent/installation-docker/ and a new Sematext Agent https://sematext.com/docs/agents/sematext-agent/containers/installation/
With these both installed, I did a quick test by pulling a new container image, and it seems that the logs still arrive in their web dashboard.
So perhaps the problem was specific to the previous package, and this new agent can "follow" the container rebuilds better somehow.
So my new commands are (just copied from the documentation, but I'm using env-vars for the keys):
docker run -d --name st-logagent --restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-e LOGS_TOKEN=$SEMATEXT_LOGS_TOKEN \
-e REGION=US \
sematext/logagent:latest
docker run -d --restart always --privileged -P --name st-agent \
-v /:/hostfs:ro \
-v /sys/:/hostfs/sys:ro \
-v /var/run/:/var/run/ \
-v /sys/kernel/debug:/sys/kernel/debug \
-v /etc/passwd:/etc/passwd:ro \
-v /etc/group:/etc/group:ro \
-e INFRA_TOKEN=$SEMATEXT_INFRA_TOKEN \
-e CONTAINER_TOKEN=$SEMATEXT_CONTAINER_TOKEN \
-e REGION=US \
sematext/agent:latest
Where
CONTAINER_TOKEN == old SPM_TOKEN
LOGS_TOKEN == old LOGSENE_TOKEN
INFRA_TOKEN == new to me
I will see if this works in the long run (not just the quick test).

How to make a server-to-server Dialogflow API call?

I have a Docker container in a swarm running on CentOS.
I need to make a Dialogflow POST call to export a Dialogflow agent from the Docker container.
I tried using the Dialogflow developer token, but it didn't seem to work.
I also tried making the same call via the API using an auth token:
curl --request POST \
'https://dialogflow.googleapis.com/v2/projects/testProject/agent:export?key=PROJECT_KEY' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{}' \
--compressed
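For server-to-server calls, the Dialogflow V2 API generally expects an OAuth2 access token rather than an API key. A sketch of that pattern, assuming the gcloud CLI is available with an active service account that has Dialogflow permissions (project name taken from the question):
curl --request POST \
'https://dialogflow.googleapis.com/v2/projects/testProject/agent:export' \
--header "Authorization: Bearer $(gcloud auth application-default print-access-token)" \
--header 'Content-Type: application/json' \
--data '{}'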

How to back up & restore Spinnaker pipelines

I am new to Spinnaker & trying to use it for the client I am working with. I am somewhat familiar with the Spinnaker architecture.
I know the FRONT50 microservice is responsible for this task. I am not sure how I can safely back up the pipeline data and restore it into a new instance.
I want to be able to continuously back up these pipelines as they are being added, so that when I happen to recreate the Spinnaker instance (i.e. destroy the infra and recreate it from scratch) I am able to restore them.
I am currently using Azure as the cloud provider and using Azure Container service.
I found this page: https://www.spinnaker.io/setup/install/backups/
but it does not indicate whether the pipelines will also be backed up.
Many thanks in advance
I am not sure about the standard method, but you can copy the configurations for pipelines and applications from front50 manually.
For pipelines, just do a curl to http://<front50-IP>:8080/pipelines
curl http://<front50-IP>:8080/pipelines -o pipelines.json
For applications config:
curl http://<front50-IP>:8080/v2/applications -o applications.json
To push pipeline config to Spinnaker, you can do:
cat pipelines.json | curl -d@- -X POST \
--header "Content-Type: application/json" \
--header "Accept: */*" http://<Front50_URL>:8080/pipelines
P.S.: My Spinnaker version is 1.8.1, and both the v1 and v2 k8s providers are supported.
Update-2: If you are using AWS S3 or GCS, you can back up the buckets directly.
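If you are not on S3 or GCS, a crude but workable way to get the continuous backups asked about is to schedule the same curl dumps; a sketch (backup paths and schedule are placeholders):
0 * * * * curl -s http://<front50-IP>:8080/pipelines -o /backups/pipelines-$(date +\%Y\%m\%d\%H).json
0 * * * * curl -s http://<front50-IP>:8080/v2/applications -o /backups/applications-$(date +\%Y\%m\%d\%H).json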

How to call WebServices in Puppet manifests?

We need to invoke an external web service via Puppet. The steps provided to me consist of invoking web services using the curl command. Is there any way to invoke web services natively via Puppet instead of calling curl commands?
The curl example is provided below:
curl -k -v -H "Content-Type:application/json" --data @some_data_file https://service.server.com/some_service
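Puppet's core resource types include no HTTP client, so the common pattern is to wrap the curl call in an exec resource and guard it so it only runs once per node. A minimal sketch (the marker file is a hypothetical idempotency guard; replace it with a check that fits your service):
exec { 'invoke_some_service':
  command  => 'curl -k -H "Content-Type:application/json" --data @some_data_file https://service.server.com/some_service && touch /var/tmp/some_service.done',
  creates  => '/var/tmp/some_service.done',
  path     => ['/usr/bin', '/bin'],
  provider => shell,
}
There are HTTP-related modules on the Puppet Forge, but exec plus curl remains the most common approach.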

I'm unable to connect to the Docker Remote API using nodejs hosted in AWS

I have created a t1.micro instance in Amazon Web Services (AWS) and installed docker.io.
I executed the following command in an SSH client: "sudo docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &".
When I try to get all images via myipaddress:4243/images/json,
I get a "This webpage is not available" page.
Finally I found https://github.com/crosbymichael/dockerui, which exposed the Remote API on my host. :)
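If anyone hits the same symptom, it is worth checking whether the daemon is reachable locally before suspecting the client (port and endpoint from the question); if this works on the instance itself but the public address does not, the AWS security group likely needs an inbound rule for TCP 4243:
curl http://127.0.0.1:4243/images/json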
