Easiest way to copy/duplicate a RethinkDB database? - database-administration

How can I easily duplicate my production database (mydb) to create a dev database (mydb-dev)?
The rethinkdb restore command seems to have no option to specify the name of the output database. It only has the option to select which database I'd like to restore from the dump. I'm using rethinkdb 1.16.3

You can use rethinkdb export, rename the database directory inside the exported folder, and then import it again:
$ rethinkdb export
$ cd rethinkdb_export_2015-04-05T13:54:43
$ mv mydb mydb_dev
$ rethinkdb import -d ./
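If you would rather work with the compressed archive produced by rethinkdb dump, the same rename trick applies. A minimal sketch, assuming the archive unpacks to a timestamped directory with one subdirectory per database (the names below are hypothetical):
$ rethinkdb dump
$ tar xzf rethinkdb_dump_2015-04-05T13:54:43.tar.gz
$ mv rethinkdb_dump_2015-04-05T13:54:43/mydb rethinkdb_dump_2015-04-05T13:54:43/mydb_dev
$ tar czf rethinkdb_dump_renamed.tar.gz rethinkdb_dump_2015-04-05T13:54:43
$ rethinkdb restore rethinkdb_dump_renamed.tar.gz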
The thinker tool by internalfx also lets you clone a database under a different name, using its --targetDB= option.

Related

databricks - no response when executing command in terminal 'export DATABRICKS_CONFIG_FILE="dbfs:/FileStore/tables/partition.csv'

I am trying to export the CSV file by following this guide: https://docs.databricks.com/dev-tools/cli/index.html, but there is no response when executing the command below; it looks like the command exits immediately without saying whether the export succeeded or failed.
I have finished installing the CLI and set up authentication by entering a host and token in the Mac terminal, following the guide as well.
export DATABRICKS_CONFIG_FILE="dbfs:/FileStore/tables/partition.csv"
First, I wrote the DataFrame to the file system with the code below:
df.coalesce(1).write.mode("overwrite").csv("dbfs:/FileStore/tables/partition.csv")
How can I successfully export the file from Databricks, and where is it stored locally?
Yes, you can copy the file to your local machine or move it to another destination as needed. Note that export DATABRICKS_CONFIG_FILE=... only sets a shell environment variable (it tells the CLI where to find its configuration file), which is why it returns silently without exporting anything.
Configure the Databricks CLI for Azure Databricks:
Follow these steps:
pip install databricks-cli
Run the databricks configure --token command.
Enter your Azure Databricks host name: https://adb-xxxxx.azuredatabricks.net/
Paste your personal access token.
Now you are all set to export the CSV file and store it in a destination location.
databricks fs cp dbfs:/FileStore/tables/partition.csv dbfs:/destination/your_folder/file.csv
databricks fs cp C:/folder/file.csv dbfs:/FileStore/folder
The first command copies within DBFS; the second uploads a local file into DBFS.
Or
If you have a lot of CSV files placed in a folder, you may prefer to export the entire folder rather than individual files.
Use -r to select your folder instead of the individual file.
databricks fs cp -r dbfs:/<folder> destination/folder
Alternative approach in Python:
You can use dbutils.fs.cp directly:
dbutils.fs.cp("dbfs:/FileStore/gender_submission.csv", "destination/folder")
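Keep in mind that df.coalesce(1).write.csv("dbfs:/FileStore/tables/partition.csv") creates a directory with that name containing part-*.csv files rather than a single file, so to pull the result down to your machine you can copy recursively. A minimal sketch, where the local destination ./partition_csv is a hypothetical example:
databricks fs cp -r dbfs:/FileStore/tables/partition.csv ./partition_csv
The part-00000-*.csv file inside ./partition_csv holds the actual data.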

How to create a MySQL database in a GitLab CI/CD YAML file?

I am new to GitLab CI/CD. I have created a simple YAML script to build and test my PHP application. I am using the shared runner option available in GitLab CI.
I have specified the database name with MYSQL_DATABASE, but it doesn't seem to have any effect.
How do I specify that? Is there any other way to create the database in the YAML file? When I specify create database, the build fails with:
"/bin/bash: line 78: create: command not found".
It is hard to help without knowing more about your configuration. As user JGC already stated, the main cause of the error seems to be that you are trying to run create database as a bash command.
If you want to create a MySQL database directly on a Linux command-line, you can use
mysql -u root -ppassword -e 'CREATE DATABASE `database-name`;'
However, with GitLab CI you should try to use one of the solutions described at https://docs.gitlab.com/ee/ci/services/mysql.html
With the Docker executor (e.g. with the SaaS version via gitlab.com) you can just use the following in your .gitlab-ci.yml:
services:
  - mysql:latest
variables:
  MYSQL_DATABASE: database-name
  MYSQL_ROOT_PASSWORD: mysql_strong_password
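As a sanity check, the job's script: section can then verify that the service created the database. A minimal sketch, assuming the job image has the mysql client installed (the service container is reachable under the hostname mysql):
mysql -h mysql -u root -pmysql_strong_password -e "SHOW DATABASES LIKE 'database-name';"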

Import and add index to mongodb on a Dokku install

I have a recently deployed app on an Ubuntu server using Dokku. This is a Node.js app with a MongoDB database.
For the site to work properly I need to load a GeoJSON file into the database. On my development machine this was done from the Ubuntu command line using the mongoimport command. I can't figure out how to do this in Dokku.
I also need to add a geospatial index. This was done from the mongo console on my development machine. I also can't figure out how to do that on the Dokku install.
Thanks a lot @Jonathan. You helped me solve this problem. Here is what I did.
I used mongodump on my local machine to create a backup file of the database. It defaulted to a .bson file.
I uploaded that file to my remote server. On the remote server I put the .bson file inside a folder called "dump", then tarred that folder. I initially used the -z flag out of habit, but mongo/dokku didn't like the gzip, so I used tar with no compression, like so:
tar -cvf dump.tar dump
Next I ran the dokku mongo import command:
$ dokku mongo:import mongo_claims < dump.tar
2016-03-05T18:04:17.255+0000 building a list of collections to restore from /tmp/tmp.6S378QKhJR/dump dir
2016-03-05T18:04:17.270+0000 restoring mongo_claims.docs4 from /tmp/tmp.6S378QKhJR/dump/docs4.bson
2016-03-05T18:04:20.729+0000 [############............] mongo_claims.docs4 22.3 MB/44.2 MB (50.3%)
2016-03-05T18:04:22.821+0000 [########################] mongo_claims.docs4 44.2 MB/44.2 MB (100.0%)
2016-03-05T18:04:22.822+0000 no indexes to restore
2016-03-05T18:04:22.897+0000 finished restoring mongo_claims.docs4 (41512 documents)
2016-03-05T18:04:22.897+0000 done
That did the trick. My site immediately had all the data.
mongodump will export all the data + indexes from an existing database.
https://docs.mongodb.org/manual/reference/program/mongodump/
Then mongorestore will restore a mongodump with indexes to an existing database.
https://docs.mongodb.org/manual/reference/program/mongorestore/
mongorestore recreates indexes recorded by mongodump.
You can run both commands from your dev machine against the Dokku database.
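A minimal sketch of that round trip, assuming the Dokku Mongo service has been exposed to the outside and that your-server.example.com and port 27017 are hypothetical placeholders:
# dump the local development database, including index definitions
mongodump --db mydb --out ./dump
# restore it into the remote instance
mongorestore --host your-server.example.com --port 27017 ./dump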
Importing works well, but since you mentioned the mongo console, it's nice to know that you can also connect to your Mongo instance using https://github.com/dokku/dokku-mongo's mongo:list and mongo:connect...
E.g.:
root@somewhere:~# dokku mongo:list
NAME VERSION STATUS EXPOSED PORTS LINKS
mydb mongo:3.2.1 running 1->2->3->4->5 mydb
root@somewhere:~# dokku mongo:connect mydb
MongoDB shell version: 3.2.1
connecting to: mydb
> db
mydb
Mongo shell!
> exit
bye
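From that shell you can also create the geospatial index mentioned in the question. A minimal sketch, assuming a hypothetical collection docs4 with a GeoJSON field named geometry:
> db.docs4.createIndex({ geometry: "2dsphere" })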
For dokku v0.5.0+ and dokku-mongo v1.7.0+
Use mongodump to export your data into an archive:
mongodump --db mydb --gzip --archive=mydb.archive
Use mongo:import to import your data from the archive:
dokku mongo:import mydb < mydb.archive
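The reverse direction works the same way; dokku-mongo also provides a mongo:export command, so a sketch of pulling a backup off the server would be:
dokku mongo:export mydb > mydb_backup.archive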

How to load a schema from a file in Cassandra

I am trying to load a schema into a Cassandra server from a file. As suggested by someone, I tried sstable2json and json2sstable, but I believe those export and import data files, while I am trying to load only the schema of the database. Any suggestions on possible ways to do it?
I am using Cassandra 1.2.
To get the schema file, go to the directory where Cassandra resides (not the bin directory within it) and run the following, replacing your_listen_address with your node's listen address (e.g. localhost):
echo -e "use your_keyspace;\r\n show schema;\n" | bin/cassandra-cli -h your_listen_address > mySchema.cdl
To load that file:
bin/cassandra-cli -h localhost -f mySchema.cdl
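If you are on CQL tables rather than Thrift column families, a similar round trip works with cqlsh. A minimal sketch, assuming cqlsh is available in bin/ and the node listens on localhost:
echo "DESCRIBE KEYSPACE your_keyspace;" | bin/cqlsh localhost > mySchema.cql
bin/cqlsh localhost -f mySchema.cql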

Migrate CouchDB data from 0.10.0?

For a Linux system, I've backed up an old database from CouchDB 0.10.0, basically a tar archive of the /var/lib/couchdb directory.
What is the procedure to convert this data to the format required by CouchDB 1.0.1? If I simply restore the files to their original location, they are not found. If I place them in /var/lib/couchdb/1.0.1, I get the following error:
{"error":"kill","reason":"{gen_server,call,\n [couch_server,\n {open,<<\"test\">>,\n [{user_ctx,\n {user_ctx,null,\n [<<\"_admin\">>],\n <<\"{couch_httpd_auth, default_authentication_handler}\">>}}]},\n infinity]}"}
(In this case the database is named test.couch; I placed test.couch in /var/lib/couchdb/1.0.1/test.couch and tried to open it from the URL: http://localhost:5984/test/)
edit: oops, the solution was pretty obvious. Copying was the right thing to do, but I forgot to change permissions.
So, to restore a backed-up CouchDB database, all that is needed is:
sudo chown couchdb:couchdb backup/test.couch
sudo mv backup/test.couch /var/lib/couchdb/1.0.1
You could try replication between a 0.10 and a 1.0.1 server, although I'm pretty sure that 1.0.1 can read 0.10 databases. Is there more information in couch.log?
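If you do try the replication route, you can trigger a one-off pull replication on the new server. A minimal sketch, assuming the old server is still running and reachable at the hypothetical hostname old-server:
curl -X POST http://localhost:5984/_replicate -H "Content-Type: application/json" -d '{"source":"http://old-server:5984/test","target":"test","create_target":true}'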
