I have an app app_blog in a Django project. I want to delete both migration files using a Django migration command.
blog
[ ] 0001_initial
[ ] 0002_auto_20200126_0741
In your project folder, run:
./remove_migrations.sh
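The contents of remove_migrations.sh aren't shown here; a minimal sketch of such a script (my assumption, not the original) would delete every migration file except each app's __init__.py:
#!/bin/bash
# Delete all migration modules, keeping each app's migrations/__init__.py
find . -path "*/migrations/*.py" -not -name "__init__.py" -delete
# Delete any compiled migration files as well
find . -path "*/migrations/*.pyc" -delete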
Then, if you're using MySQL as your database, you can simply do this:
1. mysql -u root -p (to log in to MySQL)
2. USE foo; (foo is the name of your db)
3. DELETE FROM django_migrations; (to delete all recorded migrations)
Optionally, you can specify an app name within your project to delete the migrations of only that app, like this:
3. DELETE FROM django_migrations WHERE app = 'app_blog';
After deleting the migrations, run this in the terminal where your project resides:
python manage.py makemigrations
python manage.py migrate --fake
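To double-check the result, showmigrations lists what Django now considers applied (applied migrations are marked [X], unapplied ones [ ] as in the listing above):
python manage.py showmigrations blog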
Then try running your local server
python manage.py runserver
Or serve it for someone else to access with:
python manage.py runserver 0.0.0.0:8080 (8080 is the port to use)
MySQL 5.7.18 won't start. I installed the extracted (archive) edition.
The MySQL server could not start.
cmd: net start mysql
error: the server did not start.
MySQL installation tutorial:
1. Download the MySQL archive and extract it;
2. Create a new data folder under the MySQL root path;
3. Open a DOS prompt and cd to the bin directory of the MySQL installation;
4. Execute the command: mysqld --initialize-insecure --user=mysql
5. Execute the command: net start mysql
That completes the basic installation.
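Putting steps 3-5 together at the DOS prompt, and assuming MySQL was extracted to C:\mysql (the path is just an example), the sequence looks like:
cd C:\mysql\bin
mysqld --initialize-insecure --user=mysql
net start mysql
If the Windows service has not been registered yet, you may also need to run mysqld --install before net start mysql.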
I have a recently deployed app on an Ubuntu server using Dokku. This is a Node.js app with a MongoDB database.
For the site to work properly I need to load a geojson file into the database. On my development machine this was done from the Ubuntu command line using the mongoimport command. I can't figure out how to do this in Dokku.
I also need to add a geospatial index. This was done from the mongo console on my development machine. I also can't figure out how to do that on the Dokku install.
Thanks a lot @Jonathan. You helped me solve this problem. Here is what I did.
I used mongodump on my local machine to create a backup file of the database. It defaulted to a .bson file.
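The exact dump command isn't shown above; assuming the local database is also named mongo_claims (and given the docs4 collection in the log below), it would have been something along the lines of:
mongodump --db mongo_claims
By default mongodump writes the .bson files under a dump/ directory.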
I uploaded that file to my remote server. On the remote server I put the .bson file inside a folder called "dump", then tarred that folder. I initially used the -z flag out of habit, but mongo/dokku didn't like the gzip, so I used tar with no compression, like so:
tar -cvf dump.tar dump
next I ran the dokku mongo import command:
$ dokku mongo:import mongo_claims < dump.tar
2016-03-05T18:04:17.255+0000 building a list of collections to restore from /tmp/tmp.6S378QKhJR/dump dir
2016-03-05T18:04:17.270+0000 restoring mongo_claims.docs4 from /tmp/tmp.6S378QKhJR/dump/docs4.bson
2016-03-05T18:04:20.729+0000 [############............] mongo_claims.docs4 22.3 MB/44.2 MB (50.3%)
2016-03-05T18:04:22.821+0000 [########################] mongo_claims.docs4 44.2 MB/44.2 MB (100.0%)
2016-03-05T18:04:22.822+0000 no indexes to restore
2016-03-05T18:04:22.897+0000 finished restoring mongo_claims.docs4 (41512 documents)
2016-03-05T18:04:22.897+0000 done
That did the trick. My site immediately had all the data.
mongodump will export all the data + indexes from an existing database.
https://docs.mongodb.org/manual/reference/program/mongodump/
Then mongorestore will restore a mongodump with indexes to an existing database.
https://docs.mongodb.org/manual/reference/program/mongorestore/
mongorestore recreates indexes recorded by mongodump.
You can run both commands from your dev machine against the Dokku database.
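For example, assuming the service's port has been exposed on the server (the host name and credentials below are placeholders):
mongodump --host your-server.example.com --port 27017 --db mydb --username dbuser --password dbpass
mongorestore --host your-server.example.com --port 27017 dump/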
Importing works well, but since you mentioned the mongo console, it's good to know that you can also connect to your Mongo instance via https://github.com/dokku/dokku-mongo's mongo:list and mongo:connect...
E.g.:
root@somewhere:~# dokku mongo:list
NAME VERSION STATUS EXPOSED PORTS LINKS
mydb mongo:3.2.1 running 1->2->3->4->5 mydb
root@somewhere:~# dokku mongo:connect mydb
MongoDB shell version: 3.2.1
connecting to: mydb
> db
mydb
Mongo shell!
> exit
bye
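This also takes care of the geospatial index from the original question: you can create it from that same shell. The collection and field names here are assumptions for illustration:
> db.docs4.createIndex({ geometry: "2dsphere" })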
For dokku v0.5.0+ and dokku-mongo v1.7.0+
Use mongodump to export your data into an archive:
mongodump --db mydb --gzip --archive=mydb.archive
Use mongo:import to import your data from an archive:
dokku mongo:import mydb < mydb.archive
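To sanity-check such an archive locally first (or to restore it elsewhere), mongorestore understands the same gzipped archive format:
mongorestore --gzip --archive=mydb.archive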
I have a problem pretty much exactly like this:
How to preserve a SQLite database from being reverted after deploying to OpenShift?
I don't understand his answer fully, and clearly not well enough to apply it to my own app, and since I can't comment on his answer (not enough rep) I figured I had to ask my own question.
The problem is that when I push my local files (not including the database file), my database on OpenShift becomes the one I have locally (all changes made through the server are reverted).
I've googled a lot and understand the problem to be that the database should be located somewhere else, but I can't grasp where to place it and how to deploy it if it's outside the repo.
EDIT: Quick solution: if you have this problem, try connecting to your OpenShift app with
rhc ssh appname
and then
cp app-root/repo/database.db app-root/data/database.db
if you have the OpenShift data dir as the reference in SQLALCHEMY_DATABASE_URI. I recommend the accepted answer below, though!
I've attached my file structure and here's some related code:
config.py
import os
basedir = os.path.abspath(os.path.dirname(__file__))
SQLALCHEMY_DATABASE_URI = 'sqlite:///' + os.path.join(basedir, 'database.db')
SQLALCHEMY_MIGRATE_REPO = os.path.join(basedir, 'db_repository')
app/__init__.py
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
#so that flask doesn't swallow error messages
app.config['PROPAGATE_EXCEPTIONS'] = True
app.config.from_object('config')
db = SQLAlchemy(app)
from app import rest_api, models
wsgi.py:
#!/usr/bin/env python
import os
virtenv = os.path.join(os.environ.get('OPENSHIFT_PYTHON_DIR', '.'), 'virtenv')
#
# IMPORTANT: Put any additional includes below this line. If placed above this
# line, it's possible required libraries won't be in your searchable path
#
from app import app as application
## runs server locally
if __name__ == '__main__':
from wsgiref.simple_server import make_server
httpd = make_server('localhost', 4599, application)
httpd.serve_forever()
file structure: http://sv.tinypic.com/r/121xseh/8 (can't attach an image)
Via the note at the top of the OpenShift Cartridge Guide:
"Cartridges and Persistent Storage: Every time you push, everything in your remote repo directory is recreated. Store long term items (like an sqlite database) in the OpenShift data directory, which will persist between pushes of your repo. The OpenShift data directory can be found via the environment variable $OPENSHIFT_DATA_DIR."
You can keep your existing project structure as-is and just use a deploy hook to move your database to persistent storage.
Create a deploy action hook (executable file) .openshift/action_hooks/deploy:
#!/bin/bash
# This deploy hook gets executed after dependencies are resolved and the
# build hook has been run but before the application has been started back
# up again.
# if this is the initial install, copy DB from repo to persistent storage directory
if [ ! -f ${OPENSHIFT_DATA_DIR}database.db ]; then
  cp -f ${OPENSHIFT_REPO_DIR}database.db ${OPENSHIFT_DATA_DIR}database.db 2>/dev/null
fi
# remove the database from the repo during all deploys
# (note the -f test: database.db is a file, not a directory)
if [ -f ${OPENSHIFT_REPO_DIR}database.db ]; then
  rm -f ${OPENSHIFT_REPO_DIR}database.db
fi
# create symlink from repo directory to new database location in persistent storage
ln -sf ${OPENSHIFT_DATA_DIR}database.db ${OPENSHIFT_REPO_DIR}database.db
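Since the hook must be an executable file, remember to mark it as such before committing:
chmod +x .openshift/action_hooks/deploy
git add .openshift/action_hooks/deploy
git commit -m "add deploy hook"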
As another person pointed out, also make sure you are actually committing/pushing your database (make sure your database isn't included in your .gitignore).
I'm setting up a container with the following Dockerfile
# Start with project/baseline: an image with mongo / nodejs / sailsjs
FROM project/baseline
# Create folder that will contain all the sources
RUN mkdir -p /var/project
# Load the configuration file and the deployment script
ADD init.sh /var/project/init.sh
# src contains a list of folders, each one being a sails app
ADD src/ /var/project/
# Compile the sources / run the services / run mongodb
CMD /var/project/init.sh
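For reference, building and running this image would look like the following, using the tag that appears in the run command further down:
docker build -t projects/test .
sudo docker run -t -i projects/test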
The init.sh script is called when the container runs.
It should start a couple of web apps and mongodb.
#!/bin/bash
PROJECT_PATH=/var/project
# Start mongodb
function start_mongo {
mongod --fork --logpath /var/log/mongodb.log # attempt to have mongo running as a daemon
}
# Start services
function start {
for service in $(ls); do
  cd "$PROJECT_PATH/$service"
  npm start # runs sails lift on each service; note this blocks until the app exits
done
}
# start mongodb
start_mongo
# start web applications defined in /var/project
start
Basically, there are a couple of nodejs (sailsjs) applications in /var/project.
When I run the container, I got the following message:
$ sudo docker run -t -i projects/test
about to fork child process, waiting until server is ready for connections.
forked process: 10
and then it remains stuck.
How can mongo and the sails processes be started so that the container remains in a running state?
UPDATE
I now use this supervisord.conf file
[supervisord]
nodaemon=false
[program:mongodb]
command=/usr/bin/mongod
[program:process1]
command=/bin/bash -c "cd /var/project/service1 && node app.js"
[program:process2]
command=/bin/bash -c "cd /var/project/service2 && node app.js"
it is called in the Dockerfile like:
# run the applications (mongodb + project related services)
CMD ["/usr/bin/supervisord"]
As my services depend on mongo starting correctly, supervisord does not wait that long and the services are not started. Any idea how to solve that?
By the way, is it really a best practice to run mongo in the same container?
UPDATE 2
I went back to a service.sh script that is called when the container runs. I know this is not clean (but I'll say it's temporary while I fix the problem I have with supervisor), but I'm doing the following:
run nohup mongod &
wait 60 sec
run my node (forever) processes
The thing is, the container exits right after the forever processes are run... how can it be kept active?
If you want to cleanly start multiple services inside a container, you'll need a process supervisor of some sort. One approach, using supervisor, is documented here, in the official Docker documentation.
I've done something similar using runit. You can see my base runit image here, and a multi-service application image using that here.
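Whichever supervisor you choose, the mongo dependency usually still needs a small wait loop. A sketch of a wrapper script (my own suggestion, with paths taken from your example) that blocks until mongod answers before starting a service:
#!/bin/bash
# wait-for-mongo.sh: start a service only once mongod accepts connections
until mongo --eval "db.adminCommand('ping')" > /dev/null 2>&1; do
  echo "waiting for mongod..."
  sleep 1
done
cd /var/project/service1 && exec node app.js
Also note that when supervisord itself is the container's main process, it needs nodaemon=true in supervisord.conf; otherwise it forks into the background and the container exits.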
So I'm using PyCharm and have installed the Django framework and Postgres. When I initially set up the default Django DB to point to the Postgres database, everything works fine.
The problem occurs if I make any changes to a table (meaning the table structure) and then try to synchronize the database in PyCharm: I get an error message saying "Django default - Connection to Django default failed".
Here is my setup:
OS X Mavericks
Postgres 9.3
Django 1.5
Pycharm 3
Python 3.3
Any assistance someone can give is greatly appreciated!
It looks like Django can't connect to Postgres. You might just need to tweak your settings.py file to make sure you have the correct Postgres DB info in there.
In any event, here is a more complete guide:
To get Django working with PyCharm:
1. Make sure you have followed how to install Django correctly:
https://docs.djangoproject.com/en/dev/topics/install/
2. Go to the project drop-down box and click Edit Configuration, then make sure
host = 127.0.0.1
port = 8000
and make sure it is using Python 3.x.x as the interpreter rather than Python 2.x.
3. In PyCharm go to Preferences -> Django and make sure it is enabled, and then:
Django Project Root connects to the root of your project (usually where manage.py is)
Settings points to your settings.py file inside your project
Manage script points to manage.py
4. Make sure you have installed psycopg2. If you are using Python 3 that means using pip3, not pip (pip is for Python 2.x):
pip3 install psycopg2
5. Edit your settings.py file as below:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'polls',
        'USER': 'lms',
        'PASSWORD': '',
        'HOST': '',  # note: an empty string means localhost
        'PORT': '5432',
        'OPTIONS': {
            'autocommit': True,
        }
    }
}
Note that the default Postgres port is 5432; if it is not connecting on that, check this post:
postgresql port confusion 5433 or 5432?
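You can also take PyCharm and Django out of the picture and test the same credentials directly with psql (the values here match the settings.py above):
psql -h 127.0.0.1 -p 5432 -U lms -d polls
If that connection fails too, the problem lies with the Postgres setup rather than with Django.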
this may also be useful:
http://marcinkubala.wordpress.com/2013/11/11/postgresql-on-os-x-mavericks/