Import and add index to mongodb on a Dokku install - node.js

I have a recently deployed app on an Ubuntu server using Dokku. This is a Node.js app with a Mongodb database.
For the site to work properly I need to load a GeoJSON file into the database. On my development machine this was done from the Ubuntu command line using the mongoimport command. I can't figure out how to do this in Dokku.
I also need to add a geospatial index. This was done from the mongo console on my development machine. I can't figure out how to do that on the Dokku install either.

Thanks a lot @Jonathan. You helped me solve this problem. Here is what I did.
I used mongodump on my local machine to create a backup file of the database. It defaulted to a .bson file.
I uploaded that file to my remote server. On the remote server I put the .bson file inside a folder called "dump", then tarred that folder. I initially used the -z flag out of habit, but mongo/dokku didn't like the gzip, so I used tar with no compression like so:
tar -cvf dump.tar dump
Next I ran the dokku mongo import command:
$ dokku mongo:import mongo_claims < dump.tar
2016-03-05T18:04:17.255+0000 building a list of collections to restore from /tmp/tmp.6S378QKhJR/dump dir
2016-03-05T18:04:17.270+0000 restoring mongo_claims.docs4 from /tmp/tmp.6S378QKhJR/dump/docs4.bson
2016-03-05T18:04:20.729+0000 [############............] mongo_claims.docs4 22.3 MB/44.2 MB (50.3%)
2016-03-05T18:04:22.821+0000 [########################] mongo_claims.docs4 44.2 MB/44.2 MB (100.0%)
2016-03-05T18:04:22.822+0000 no indexes to restore
2016-03-05T18:04:22.897+0000 finished restoring mongo_claims.docs4 (41512 documents)
2016-03-05T18:04:22.897+0000 done
That did the trick. My site immediately had all the data.
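For reference, the upload and import steps on their own look like this (user@myserver is a placeholder for the Dokku host; mongo_claims is the Dokku mongo service name used above):
# on the development machine: after mongodump, arrange the .bson file(s) inside a folder
# named "dump", tar it without compression, and copy it to the Dokku host
tar -cvf dump.tar dump
scp dump.tar user@myserver:~
# on the Dokku host
dokku mongo:import mongo_claims < dump.tar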

mongodump will export all the data + indexes from an existing database.
https://docs.mongodb.org/manual/reference/program/mongodump/
Then mongorestore will restore a mongodump with indexes to an existing database.
https://docs.mongodb.org/manual/reference/program/mongorestore/
mongorestore recreates indexes recorded by mongodump.
You can run both commands from your dev machine against the Dokku database.
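One way to do that from the dev machine (a sketch, assuming your dokku-mongo version provides the expose and info commands; the port number and credentials below are placeholders) is to expose the service on a host port and point mongorestore at it:
# on the Dokku host: expose the mongo service on an arbitrary host port and read its credentials
dokku mongo:expose mongo_claims 27018
dokku mongo:info mongo_claims
# on the dev machine: restore a local dump straight into the exposed service
# (you may also need --authenticationDatabase depending on how the user was created)
mongorestore --host myserver --port 27018 -u <user> -p <password> --db mongo_claims dump/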

Importing works well, but since you mentioned the mongo console, it's nice to know that you can also connect to your Mongo instance using https://github.com/dokku/dokku-mongo's mongo:list and mongo:connect...
E.g.:
root@somewhere:~# dokku mongo:list
NAME VERSION STATUS EXPOSED PORTS LINKS
mydb mongo:3.2.1 running 1->2->3->4->5 mydb
root@somewhere:~# dokku mongo:connect mydb
MongoDB shell version: 3.2.1
connecting to: mydb
> db
mydb
Mongo shell!
> exit
bye
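That connection also covers the second part of the original question: the geospatial index can be created from the same shell. A sketch using the mongo_claims/docs4 names from the import above, and assuming the GeoJSON lives in a field called geometry (substitute your own field name):
root@somewhere:~# dokku mongo:connect mongo_claims
> db.docs4.createIndex({ geometry: "2dsphere" })
> db.docs4.getIndexes()
> exit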

For dokku v0.5.0+ and dokku-mongo v1.7.0+
Use mongodump to export your data into an archive:
mongodump --db mydb --gzip --archive=mydb.archive
Use dokku mongo:import to import your data from the archive:
dokku mongo:import mydb < mydb.archive
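The plugin can also go the other way, which is handy for pulling a copy of the data back out (assuming your dokku-mongo version ships the export command):
dokku mongo:export mydb > mydb.archive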

Related

Newbie: Can't connect to database using browser

I recently installed Neo4J on a Raspberry Pi in a Docker container (Portainer). Everything seems to be working fine. I can open a terminal in Portainer and run commands. I can see there are two DBs and I can even run Cypher commands (cut and pasted the Movie entries). But I'm not able to run any commands using the browser. I seem to be able to connect with the browser (http://localhost:7474/browser/) and see the ":play movie-graph" guide run. But when I try to run the query to enter movie data, I get the following error: "ERROR: Neo.DatabaseError.General.UnknownError". Running :sysinfo doesn't return any results. Also the cursor is $ as opposed to a DB name, and I don't see any databases in the Database menu on the left.
Again, I'm able to run queries using Cypher Shell through a Portainer terminal.
Here are the container details:
IMAGE neo4j:latest@sha256:b91a4a85afb0cec9892522436bbbcb20f1d6d026c8c24cafcbcc4e27b5c8b68d
CMD neo4j
ENTRYPOINT tini -g -- /startup/docker-entrypoint.sh
ENV
JAVA_HOME /usr/local/openjdk-11
JAVA_VERSION 11.0.15
LANG C.UTF-8
NEO4J_AUTH none
NEO4J_dbms_connector_bolt_advertised__address localhost:7687
NEO4J_dbms_connector_http_advertised__address localhost:7474
NEO4J_dbms_connector_https_advertised__address localhost:7473
NEO4J_EDITION community
NEO4J_HOME /var/lib/neo4j
NEO4J_SHA256 34c8ce7edc2ab9f63a204f74f37621cac3427f12b0aef4c6ef47eaf4c2b90d66
NEO4J_TARBALL neo4j-community-4.4.8-unix.tar.gz
PATH /var/lib/neo4j/bin:/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
I'm sure it is something silly I'm missing, but reading multiple forum comments hasn't helped. Any help would be appreciated.
Thank you.
SJ
When you load the neo4j browser in your web browser at:
http://localhost:7474/browser
you need to connect to the bolt advertised address of your neo4j server at:
bolt://localhost:7687
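If the browser can reach 7474 but not 7687, the usual culprit is that the container only publishes the HTTP port. A minimal sketch, assuming you can recreate the container (adjust the name, image tag and any volumes to your setup):
docker run -d --name neo4j \
    -p 7474:7474 -p 7687:7687 \
    -e NEO4J_AUTH=none \
    neo4j:latest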

start-stop-daemon: failed to start `/usr/bin/mongod'

I have an Alpine Linux virtual machine and I want to install mongodb. I added the package for mongodb using apk add mongodb. I started the mongo daemon using the mongod command in one terminal, then opened another terminal with the mongo shell using mongo --disableJavaScriptJIT. I tried adding files and reading them from the database and that worked fine. But when I do sudo service mongodb restart I get the following output:
* Caching service dependencies ... [ ok ]
* Starting mongodb ...
* start-stop-daemon: failed to start `/usr/bin/mongod' [ !! ]
* ERROR: mongodb failed to start
The first thing you should do is read the log file. I suspect you'll find there that mongodb doesn't have the rights to access some files. When you started it manually, you didn't run it as the user mongodb, did you?
If this hypothesis is right, the solution is to fix the owner (and group) of /var/lib/mongodb (recursively).
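A sketch of that fix, assuming the Alpine package runs the service as the mongodb user and uses the default paths (check /etc/conf.d/mongodb for the actual account and directories on your system):
chown -R mongodb:mongodb /var/lib/mongodb
chown -R mongodb:mongodb /var/log/mongodb    # if the log directory exists
rc-service mongodb restart                   # same as: sudo service mongodb restart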

Openshift app with flask, sqlalchemy and sqlite - problems with database reverting

I have a problem pretty much exactly like this:
How to preserve a SQLite database from being reverted after deploying to OpenShift?
I don't understand his answer fully, and clearly not well enough to apply it to my own app, and since I can't comment on his answer (not enough rep) I figured I had to ask my own question.
The problem is that when I push my local files (not including the database file), the database on OpenShift reverts to the one I have locally (all changes made through the server are lost).
I've googled a lot and pretty much understand that the problem is the database should be located somewhere else, but I can't fully grasp where to place it and how to deploy it if it's outside the repo.
EDIT: Quick solution: If you have this problem, try connecting to your OpenShift app with rhc ssh appname
and then cp app-root/repo/database.db app-root/data/database.db
if your SQLALCHEMY_DATABASE_URI points at the OpenShift data dir. I recommend the accepted answer below though!
I've attached my filestructure and here's some related code:
config.py
import os
basedir = os.path.abspath(os.path.dirname(__file__))
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'database.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')
app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
#so that flask doesn't swallow error messages
app.config['PROPAGATE_EXCEPTIONS'] = True
app.config.from_object('config')
db = SQLAlchemy(app)
from app import rest_api, models
wsgi.py:
#!/usr/bin/env python
import os
virtenv = os.path.join(os.environ.get('OPENSHIFT_PYTHON_DIR', '.'), 'virtenv')
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
from app import app as application
## runs server locally
if __name__ == '__main__':
from wsgiref.simple_server import make_server
httpd = make_server('localhost', 4599, application)
httpd.serve_forever()
filestructure: http://sv.tinypic.com/r/121xseh/8 (can't attach image..)
Via the note at the top of the OpenShift Cartridge Guide:
"Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR."
You can keep your existing project structure as-is and just use a deploy hook to move your database to persistent storage.
Create a deploy action hook (executable file) .openshift/action_hooks/deploy:
#!/bin/bash
# This deploy hook gets executed after dependencies are resolved and the
# build hook has been run but before the application has been started back
# up again.
# if this is the initial install, copy DB from repo to persistent storage directory
if [ ! -f ${OPENSHIFT_DATA_DIR}database.db ]; then
    cp -rf ${OPENSHIFT_REPO_DIR}database.db ${OPENSHIFT_DATA_DIR}database.db 2>/dev/null
fi

# remove the database from the repo during all deploys
if [ -f ${OPENSHIFT_REPO_DIR}database.db ]; then
    rm -rf ${OPENSHIFT_REPO_DIR}database.db
fi

# create symlink from repo directory to new database location in persistent storage
ln -sf ${OPENSHIFT_DATA_DIR}database.db ${OPENSHIFT_REPO_DIR}database.db
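One thing that's easy to miss (a general requirement for OpenShift action hooks, not specific to this app): the hook file has to be executable and committed, e.g.:
chmod +x .openshift/action_hooks/deploy
git add .openshift/action_hooks/deploy
git commit -m "Add deploy hook to persist sqlite database"
git push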
As another person pointed out, also make sure you are actually committing/pushing your database (make sure your database isn't included in your .gitignore).

Easiest way to copy/duplicate a RethinkDB database?

How can I easily duplicate my production database (mydb) to create a dev database (mydb-dev)?
The rethinkdb restore command seems to have no option to specify the name of the output database. It only has the option to select which database I'd like to restore from the dump. I'm using rethinkdb 1.16.3
You can use rethinkdb export, rename the database directory inside the exported folder, and import it again:
$ rethinkdb export
$ cd rethinkdb_export_2015-04-05T13:54:43
$ mv mydb mydb_dev
$ rethinkdb import -d ./
The thinker tool by internalfx also lets you clone a database into a differently named DB, using its --targetDB= option.
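A hedged sketch of what that might look like; only the --targetDB= option is confirmed above, the clone subcommand and --sourceDB flag are my assumptions, so check thinker's README for the exact invocation:
# hypothetical invocation - verify subcommand and flag names against the thinker docs
thinker clone --sourceDB=mydb --targetDB=mydb_dev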

How to Load schema from a file in cassandra

I am trying to load a schema into a Cassandra server from a file. As suggested by someone, I tried sstable2json and json2sstable, but I guess those import and export data files, while I am trying to load only the schema of the database. Any suggestions on possible ways to do it?
I am using Cassandra 1.2.
To get the schema file, go to the directory where Cassandra resides (not the bin directory within it) and run:
echo -e "use your_keyspace;\r\n show schema;\n" | bin/cassandra-cli -h your_listen_address(e.g.localhost) > mySchema.cdl
To load that file:
bin/cassandra-cli -h localhost -f mySchema.cdl
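If the schema was created with CQL rather than the Thrift API, a similar round trip should work with cqlsh (a sketch; double-check the options against the cqlsh shipped with your 1.2 install):
echo "DESCRIBE KEYSPACE your_keyspace;" | bin/cqlsh your_listen_address > mySchema.cql
bin/cqlsh your_listen_address -f mySchema.cql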
