CouchDB data migration from 1.4 to 2.0 - couchdb

I am updating CouchDB from 1.4 to 2.0 on my Windows 8 system.
I took a backup of my data and view files from /var/lib/couchdb and uninstalled 1.4.
I installed 2.0 successfully and it is running.
I then copied all the data to /var/lib/couchdb and the /data folder, but Futon is not showing any database.
I created a new database "test" and it is accessible in Futon, but I could not find it in the /data dir.
Configuration:
default.ini:
[couchdb]
uuid =
database_dir = ./data
view_index_dir = ./data
Also, I want to understand: will the upgrade require re-indexing?

You might want to look at the local port of the node you copied the data into: when you just copy data files over, it will likely work, but the databases appear on another port (5986 instead of 5984).
What this means is: when you copy a database file (those residing in the directory specified in /_config/couchdb/database_dir and ending in .couch; quoting https://blog.couchdb.org/2016/08/17/migrating-to-couchdb-2-0/ here) into the data directory of one of the nodes of the CouchDB 2.0 cluster (e.g., lib/node1/data), the database will appear at http://localhost:5986/_all_dbs. Note 5986 instead of 5984: this is the so-called local port, never intended for production use but helpful here.
As the local port is not a permanent solution, you can now start a replication from the local port to a clustered port (still quoting https://blog.couchdb.org/2016/08/17/migrating-to-couchdb-2-0/ and assuming you're dealing with a database named mydb, resulting in a filename mydb.couch):
# create a clustered new mydb on CouchDB 2.0
curl -X PUT 'http://machine2:5984/mydb'
# replicate data (local 2 cluster)
curl -X POST 'http://machine2:5984/_replicate' -H 'Content-type: application/json' -d '{"source": "http://machine2:5986/mydb", "target": "http://machine2:5984/mydb"}'
# trigger a rebuild of the index(es) of somedoc with someview;
# do this for all views to speed up first use of the application
curl -X GET 'http://machine2:5984/mydb/_design/somedoc/_view/someview?stale=update_after'
As an alternative, you could also replicate from the old CouchDB (still running) to the new one, as you can replicate between 1.x and 2.0 just as you could between 1.x and 1.x.
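For instance, a minimal sketch of that alternative, assuming the old 1.x instance is reachable as machine1 (a hypothetical host name) and using the replicator's create_target option so the target database does not have to be created first:
# replicate straight from the running 1.x instance to the 2.0 clustered port
curl -X POST 'http://machine2:5984/_replicate' -H 'Content-Type: application/json' -d '{"source": "http://machine1:5984/mydb", "target": "http://machine2:5984/mydb", "create_target": true}'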

Use this to migrate all databases residing in CouchDB's database_dir, e.g. /var/lib/couchdb:
# cd to the database dir, where all the .couch files reside
cd /var/lib/couchdb
# create the new databases in the target instance
for i in ./*.couch; do curl -X PUT "http://machine2:5986/$(basename "$i" .couch)"; done
# one-time replication of each database from the source to the target instance
for i in ./*.couch; do db=$(basename "$i" .couch); curl -X POST http://machine1:5984/_replicate -H "Content-type: application/json" -d '{"source": "'"$db"'", "target": "http://machine2:5986/'"$db"'"}'; done
If you are running both the source and the target CouchDB in Docker containers on the same Docker host, you might first check which Docker host IP is mapped into the source container, so that the source container can reach the target container:
/sbin/ip route | awk '/default/ { print $3 }'
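A hedged sketch of how that fits together (the database name and ports follow the examples above; whether the gateway IP actually reaches the target container depends on your Docker network setup):
# discover the gateway IP from inside the source container and use it as the target host
HOST_IP=$(/sbin/ip route | awk '/default/ { print $3 }')
curl -X PUT "http://$HOST_IP:5986/mydb"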

Related

postgresql pg_dump backup transfer

I need to transfer around 50 databases from one server to another. I tried two different backup options:
1. pg_dump -h XX.XXX.XX.XXX -p pppp -U username -Fc -d db -f db.cust -- all backups in the backup1 folder, total size 10GB
2. pg_dump -h XX.XXX.XX.XXX -p pppp -U username -j 4 -Fd -d db -f db.dir -- all backups in the backup2 folder, total size 10GB
I then transferred them to the other server for restoration using scp:
scp -r backup1 postgres@YY.YYYY.YY.YYYY:/backup/
scp -r backup2 postgres@YY.YYYY.YY.YYYY:/backup/
I noticed a strange thing. Though the backup folder sizes are the same, the transfer times differ: the directory-format backup takes about four times as long to transfer as the custom-format backup. Both scp runs were done on the same network and tried multiple times, but the results are the same. I also tried rsync, but there was no difference.
Please suggest what could be the reason for the slowness and how I can speed it up. I am open to using any other method for the transfer.
Thanks

pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty

I am trying to build an HA two-node cluster with Pacemaker and Corosync for postgresql-9.3. I am using the link below as my guide.
http://kb.techtaco.org/#!linux/postgresql/building_a_highly_available_multi-node_cluster_with_pacemaker_&_corosync.md
However, I cannot get past the part where I need to do pg_basebackup, as shown below.
[root@TKS-PSQL01 ~]# runuser -l postgres -c 'pg_basebackup -D /var/lib/pgsql/9.3/data -l `date +"%m-%d-%y"`_initial_cloning -P -h TKS-PSQL02 -p 5432 -U replicator -W -X stream'
pg_basebackup: directory "/var/lib/pgsql/9.3/data" exists but is not empty
/var/lib/pgsql/9.3/data on TKS-PSQL02 is confirmed empty:
[root@TKS-PSQL02 9.3]# ls -l /var/lib/pgsql/9.3/data/
total 0
Any idea why I am getting this error? And is there a better way to do PostgreSQL HA? Note: I am not using shared storage for the database, so I could not proceed with Red Hat clustering.
Appreciate all the answers in advance.

How can I get Docker Linux container information from within the container itself?

I would like to make my Docker containers aware of their configuration, the same way you can get information about EC2 instances through metadata.
I can use the following (provided Docker is listening on port 4243):
curl http://172.17.42.1:4243/containers/$HOSTNAME/json
to get some of its data, but I would like to know if there is a better way, at least to get the full ID of the container, because HOSTNAME is actually shortened to 12 characters and Docker seems to perform a "best match" on it.
Also, how can I get the external IP of the Docker host (other than accessing the EC2 metadata, which is specific to AWS)?
Unless overridden, the hostname seems to be the short container ID in Docker 1.12:
root@d2258e6dec11:/project# cat /etc/hostname
d2258e6dec11
Externally
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2258e6dec11 300518d26271 "bash" 5 minutes ago
$ docker -v
Docker version 1.12.0, build 8eab29e, experimental
I've found that the container ID can be found in /proc/self/cgroup.
So you can get the ID with:
cat /proc/self/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/"
A comment by madeddie looks most elegant to me:
CID=$(basename $(cat /proc/1/cpuset))
You can communicate with docker from inside of a container using unix socket via Docker Remote API:
https://docs.docker.com/engine/reference/api/docker_remote_api/
In a container, you can find the shortened Docker ID by examining the $HOSTNAME env var.
According to the docs, there is a small chance of collision; for a small number of containers, I don't think you have to worry about it. I don't know how to get the full ID directly.
You can inspect the container in a similar way to that outlined in banyan's answer:
GET /containers/4abbef615af7/json HTTP/1.1
Response:
HTTP/1.1 200 OK
Content-Type: application/json
{
"Id": "4abbef615af7...... ",
"Created": "2013.....",
...
}
Alternatively, you can transfer the Docker ID to the container in a file.
The file is located on a mounted volume, so it is transferred to the container:
docker run -t -i --cidfile /mydir/host1.txt -v /mydir:/mydir ubuntu /bin/bash
The container ID will be in the file /mydir/host1.txt in the container.
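A short usage sketch inside the container (the path matches the run command above):
# read the container ID back from the mounted volume
CID=$(cat /mydir/host1.txt)
echo "$CID"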
This will get the full container id from within a container:
cat /proc/self/cgroup | grep "cpu:/" | sed 's/^[0-9]*:cpu:\/docker\///'
WARNING: You should understand the security risks of this method before you consider it. John's summary of the risk:
By giving the container access to /var/run/docker.sock, it is [trivially easy] to break out of the containment provided by docker and gain access to the host machine. Obviously this is potentially dangerous.
Inside the container, the Docker ID is your hostname. So, you could:
install the docker-io package in your container, with the same version as the host
start it with --volume /var/run/docker.sock:/var/run/docker.sock --privileged
finally, run docker inspect $(hostname) inside the container
Avoid this. Only do it if you understand the risks and have a clear mitigation for them.
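For concreteness, a hedged sketch of those steps (the image name is hypothetical and must already contain a Docker CLI matching the host version):
# start a container with the Docker socket mounted (risky; see the warning above)
docker run -it -v /var/run/docker.sock:/var/run/docker.sock --privileged my-image-with-docker-cli bash
# then, inside the container:
docker inspect "$(hostname)"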
To make it simple:
the container ID is your host name inside Docker
container information is available inside /proc/self/cgroup
To get the host name:
hostname
or
uname -n
or
cat /etc/hostname
The output can be redirected to any file and read back from the application,
e.g.: # hostname > /usr/src//hostname.txt
I've found that in 17.09 the simplest way to do it from within the Docker container is:
$ cat /proc/self/cgroup | head -n 1 | cut -d '/' -f3
4de1c09d3f1979147cd5672571b69abec03d606afcc7bdc54ddb2b69dec3861c
Or, as has already been mentioned, a shorter version with
$ cat /etc/hostname
4de1c09d3f19
Or simply:
$ hostname
4de1c09d3f19
Docker sets the hostname to the container ID by default, but users can override this with --hostname. Instead, inspect /proc:
$ more /proc/self/cgroup
14:name=systemd:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
13:pids:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
12:hugetlb:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
11:net_prio:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
10:perf_event:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
9:net_cls:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
8:freezer:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
7:devices:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
6:memory:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
5:blkio:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
4:cpuacct:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
3:cpu:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
2:cpuset:/docker/7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
1:name=openrc:/docker
Here's a handy one-liner to extract the container ID:
$ grep "memory:/" < /proc/self/cgroup | sed 's|.*/||'
7be92808767a667f35c8505cbf40d14e931ef6db5b0210329cf193b15ba9d605
Some posted solutions have stopped working due to changes in the format of /proc/self/cgroup. Here is a single GNU grep command that should be a bit more robust to format changes:
grep -o -P -m1 'docker.*\K[0-9a-f]{64,}' /proc/self/cgroup
For reference, here are snippets of /proc/self/cgroup from inside Docker containers that have been tested with this command:
Linux 4.4:
11:pids:/system.slice/docker-cde7c2bab394630a42d73dc610b9c57415dced996106665d427f6d0566594411.scope
...
1:name=systemd:/system.slice/docker-cde7c2bab394630a42d73dc610b9c57415dced996106665d427f6d0566594411.scope
Linux 4.8 - 4.13:
11:hugetlb:/docker/afe96d48db6d2c19585572f986fc310c92421a3dac28310e847566fb82166013
...
1:name=systemd:/docker/afe96d48db6d2c19585572f986fc310c92421a3dac28310e847566fb82166013
I believe the "problem" with all of the above is that they depend on an implementation convention, either of Docker itself or of how it interacts with cgroups and /proc, and not on a committed, public API, protocol, or convention that is part of the OCI specs.
Hence these solutions are "brittle" and likely to break when least expected, when implementations change or conventions are overridden by user configuration.
Container and image IDs should be injected into the runtime environment by the component that initiated the container instance, if for no other reason than to permit code running therein to use that information to uniquely identify itself for logging/tracing etc.
Just my $0.02, YMMV...
There are 3 places I have seen so far that might work, each with advantages and disadvantages:
echo $HOSTNAME or hostname
cat /proc/self/cgroup
cat /proc/self/mountinfo
$HOSTNAME is easy, but it is partial, and it will also be overridden with the pod name by Kubernetes.
/proc/self/cgroup seems to work with cgroup v1 but won't be there on hosts using cgroup v2.
/proc/self/mountinfo will still contain the container ID under cgroup v2; however, the mount point will have different values under different container runtimes (see the extraction sketch after the examples below).
For example, with Docker Engine the value looks like:
678 655 254:1 /docker/containers/7a0144cee1256c539fab790199527b7051aff1b603ebcf7ed3fd436440ef3b3a/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw
With containerd (lately the default engine for Kubernetes), it looks like:
1733 1729 0:35 /kubepods/besteffort/pod3272f253-be44-4a82-a541-9083e68cf99f/7a0144cee1256c539fab790199527b7051aff1b603ebcf7ed3fd436440ef3b3a /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,blkio
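A hedged one-liner for the Docker Engine layout shown above (the containerd/Kubernetes layout would need a different pattern, and all of these paths are implementation details):
# extract the 64-hex-digit container ID from the Docker Engine mountinfo entries
grep -oP -m1 '(?<=/containers/)[0-9a-f]{64}' /proc/self/mountinfo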
Also, the biggest problem with all of the above is that they are implementation details; there is no abstraction, and they could all change over time.
There is an effort to make this standard, and I think it is worth watching:
https://github.com/opencontainers/runtime-spec/issues/1105
You can use this command line to identify the current container ID (tested with Docker 1.9):
awk -F"-|/." '/1:/ {print $3}' /proc/self/cgroup
Then, a little request to the Docker API (you can share /var/run/docker.sock) retrieves all the information:
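A hedged sketch of such a request, assuming the Docker socket has been mounted into the container and your curl supports --unix-socket (keep the security warning above in mind):
# look up this container's full details through the Docker API over the unix socket
CID=$(awk -F"-|/." '/1:/ {print $3}' /proc/self/cgroup)
curl --unix-socket /var/run/docker.sock "http://localhost/containers/$CID/json"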
A variant that prints each distinct Docker ID found in the cgroup file:
awk -F'[:/]' '(($4 == "docker") && (lastId != $NF)) { lastId = $NF; print $NF; }' /proc/self/cgroup
As an aside, if you have the PID of the container and want to get the Docker ID of that container, a good way is to use nsenter in combination with the sed magic above:
nsenter -n -m -t <pid> -- cat /proc/1/cgroup | grep -o -e "docker-.*.scope" | head -n 1 | sed "s/docker-\(.*\).scope/\\1/"
Use docker inspect.
$ docker ps # get the container ID
$ docker inspect 4abbef615af7
[{
"ID": "4abbef615af780f24991ccdca946cd50d2422e75f53fb15f578e14167c365989",
"Created": "2014-01-08T07:13:32.765612597Z",
"Path": "/bin/bash",
"Args": [
"-c",
"/start web"
],
"Config": {
"Hostname": "4abbef615af7",
...
You can get the IP as follows:
$ docker inspect --format="{{ .NetworkSettings.IPAddress }}" 2a5624c52119
172.17.0.24

mysql dump 200DB ~ 40GB from one server to another

What would be the most efficient way for me to export 200 databases with a total of 40GB of data and import them into another server? I was originally planning on running a script that would export each DB to its own SQL file and then import them into the new server. If this is the best way, are there some additional flags I can pass to mysqldump that will speed it up?
The other option I saw was to pipe mysqldump directly into an import over SSH. Would this be a better option? If so, could you provide some info on what the script might look like?
If the servers can ping each other, you could use pipes to do so:
mysqldump -hHOST_FROM -u -p db-name | mysql -hHOST_TO -u -p db-name
Straightforward!
[EDIT]
Answer to your question:
mysqldump -hHOST_FROM -u -p --all-databases | mysql -hHOST_TO -u -p
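If the servers cannot reach each other's MySQL ports directly, a hedged variant of the same idea piped over SSH (this assumes credentials are available on both ends, e.g. via option files, and compresses the stream in transit):
# stream all databases over SSH, compressing on the wire
mysqldump --all-databases | gzip -c | ssh user@HOST_TO 'gunzip -c | mysql'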
The quickest and fastest way is to use Percona XtraBackup to take hot backups. It is very fast and you can use it on a live system, whereas mysqldump can cause locking. Please avoid copying the /var/lib directory to the other server in the case of InnoDB; this can have very bad effects.
Try Percona XtraBackup; see its documentation for more information on installation and configuration.
If both MySQL servers will have the same DBs and config, I think the best method is to copy the /var/lib/mysql dir using rsync, along the lines of the sketch below. Stop both servers before doing the copy to avoid table corruption.
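A hedged sketch of that approach (the host name is hypothetical, and both servers should run the same MySQL version):
# stop MySQL on both servers first, e.g.:
systemctl stop mysql
# copy the whole data directory to the target host
rsync -avz /var/lib/mysql/ root@HOST_TO:/var/lib/mysql/
# fix ownership and start MySQL on the target
ssh root@HOST_TO 'chown -R mysql:mysql /var/lib/mysql && systemctl start mysql'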
Export the MySQL database using SSH with the command:
mysqldump -p -u username database_name > dbname.sql
Move the dump using wget from the new server's SSH session (this assumes the dump was placed in a web-accessible directory):
wget http://www.domainname.com/dbname.sql
Import the MySQL database using SSH with the command:
mysql -p -u username database_name < dbname.sql
Done!

How do I clone an OpenLDAP database

I know this is more of a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go:
I'm supposed to move an application from one Red Hat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from the one machine to the other, with schemas and all?
Which files would I need to copy over? I believe the setup is pretty standard.
The problem with SourceRebels' answer is that slapcat(8) does not guarantee that the data is ordered for ldapadd(1)/ldapmodify(1).
From man slapcat (OpenLDAP 2.3):
The LDIF generated by this tool is suitable for use with slapadd(8).
As the entries are in database order, not superior first order, they
cannot be loaded with ldapadd(1) without first being reordered.
(FYI: In OpenLDAP 2.4 that section was rephrased and expanded.)
Besides, dumping the database with a tool that reads the backend files and then loading the LDIF through the LDAP protocol is not very consistent.
I'd suggest using a combination of slapcat(8)/slapadd(8) OR ldapsearch(1)/ldapmodify(1). My preference would go to the latter, as it does not need shell access to the LDAP server or moving files around.
For example, to dump the database from a master server under dc=master,dc=com and load it into a backup server:
$ ldapsearch -Wx -D "cn=admin_master,dc=master,dc=com" -b "dc=master,dc=com" -H ldap://my.master.host -LLL > ldap_dump-20100525-1.ldif
$ ldapadd -Wx -D "cn=admin_backup,dc=backup,dc=com" -H ldap://my.backup.host -f ldap_dump-20100525-1.ldif
The -W flag above prompts for the LDAP admin_master password; however, since we are redirecting output to a file, you won't see the prompt, just an empty line. Go ahead, type your LDAP admin_master password, press Enter, and it will work. The first line of your output file (Enter LDAP Password:) will need to be removed before running ldapadd.
Last hint: ldapadd(1) is a hard link to ldapmodify(1) with the -a (add) flag turned on.
ldapsearch and ldapadd are not necessarily the best tools to clone your LDAP DB. slapcat and slapadd are much better options.
Export your DB with slapcat:
slapcat > ldif
Import the DB with slapadd (make sure the LDAP server is stopped):
slapadd -l ldif
Some notes:
Save your personalized schema and objectclass definitions on your new server. You can look at the include lines in slapd.conf to find them; for example, this is part of my slapd.conf:
include /etc/ldap/schema/core.schema
Include your personalized schemas and objectclasses in your new OpenLDAP installation.
Use the slapcat command to export your full LDAP tree to one or more LDIF files, as sketched below.
Use ldapadd to import the LDIF files into your new LDAP installation.
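A hedged sketch of the export step (the suffix and file name are hypothetical; slapcat's -b selects the suffix to dump and -l the output file):
# on the old server: dump the full tree for one suffix to an LDIF file
slapcat -b "dc=example,dc=com" -l full-tree.ldif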
I prefer to copy the database through the protocol:
First of all, make sure you have the same schemas on both servers.
Then dump the database with ldapsearch:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" > domain.ldif
and import it into the new server:
ldapmodify -Wx -D "cn=admin,dc=domain" -a -f domain.ldif
Or in one line:
ldapsearch -LLL -Wx -D "cn=admin,dc=domain" -b "dc=domain" | ldapmodify -w pass -x -D "cn=admin,dc=domain" -a
By using the bin/ldap* commands you are talking directly to the server, while with the bin/slap* commands you are dealing with the backend files.
(Not enough reputation to write a comment...)
Ldapsearch opens a connection to the LDAP server.
Slapcat instead accesses the database directly, and this means that ACLs, time and size limits, and other byproducts of the LDAP connection are not evaluated, and hence will not alter the data. (Matt Butcher, "Mastering OpenLDAP")
Thanks, Vish. Worked like a charm! I edited the commands:
ldapsearch -z max -LLL -Wx -D "cn=Manager,dc=domain,dc=fr" -b "dc=domain,dc=fr" >/tmp/save.ldif
ldapmodify -c -Wx -D "cn=Manager,dc=domain,dc=fr" -a -f /tmp/save.ldif
I just added -z max to avoid the size limitation and -c to continue even if the target domain already exists (my case).
