How to transfer data from old database to new modified database in django? - python-3.x

I have an old Django project and a new Django project. I created a dump file from the old project's database, and I have also changed some tables and created new ones.
Now I want to load that dump file into my new Django app. I get errors whether I migrate first and then restore, or restore first and then migrate.
When I migrate first, it says the tables already exist.
When I restore first, it says django.db.utils.ProgrammingError: relation "django_content_type" already exists
When I use migrate --fake, that error goes away, but the new tables are not created in the database.
I have spent 3-4 days on this without success.
Please help me if you can.
PS: my database is PostgreSQL

This is not straightforward, will need some manual intervention, and depends on what you want to do in the future.
If the tables that already exist in the database have a stable design and won't change, or if you can apply changes manually with SQL statements, then set managed = False in those models' Meta; this makes Django skip generating migrations for them.
If you want to keep the power of migrations in the new project for all models, then it is more complex:
Delete all your migrations.
Make your models match your database; you can set managed = False for models like Users.
Run python manage.py makemigrations; this will create the structure of the initial database.
Fake-apply the migrations: python manage.py migrate --fake
Dump the records of the django_migrations table.
Create a new empty migration (with --empty) and add the SQL statements for the django_migrations table to it using migrations.RunSQL().
Fake again so you skip that new migration.
Now you are ready to use migrations as usual.
When installing a new database, you will just need to run python manage.py migrate.

Related

Table Replication and Synchronization in AZURE

I am pretty new to Azure and stuck at a place where I want to replicate one table into another database with the same schema and table name.
By replication I mean that the new table in the other database should automatically stay synced with the original table. I can do this using an elastic table, but the queries take far too long and sometimes time out, so I am thinking of having a local table in the other database instead of an elastic table. I am not sure how to do this in Azure.
Note: both databases reside on the same DB server.
Any examples or links would be helpful.
Thanks
To achieve this you can use a DACPAC (Data-Tier Application Package). A DACPAC can be created in Visual Studio or extracted from an existing database; it contains the database creation scripts and manages your deltas for you. More information can be found here. For how to build and deploy a DACPAC, both with Visual Studio and extracted from a database, see this answer.
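Extraction from an existing database can also be scripted with the SqlPackage command-line tool. A minimal sketch (server, database, and file names are placeholders) that builds the command as an argument list:

```python
# Sketch: build a SqlPackage "Extract" invocation that pulls the schema of an
# existing database into a .dacpac file. Only constructs the command; running
# it requires SqlPackage to be installed.
def extract_cmd(server: str, database: str, out_file: str) -> list[str]:
    return [
        "sqlpackage",
        "/Action:Extract",                  # extract schema from a live database
        f"/SourceServerName:{server}",
        f"/SourceDatabaseName:{database}",
        f"/TargetFile:{out_file}",          # resulting .dacpac package
    ]

print(extract_cmd("myserver.database.windows.net", "SourceDb", "SourceDb.dacpac"))
```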

Coredata migration is really needed?

I have a SQLite database with two columns bundled in the app. There is no write or save interaction with the database; it is fixed and read-only. I read some documents and tutorials about lightweight/manual migration, and all of them make it clear that you have to preserve the user's data when migrating. That is not my case: I don't need to save user data, and I will deploy a new app version with a new database. I want to add two new attributes to my database and use them in the app. Why do I have to migrate? Why can't I just delete the three old SQLite files, add the new one, and use the new attributes as needed? I tried that and it did not work. Can anyone give me the steps to make the app recognize the new database?
Actually, deleting the SQLite database files is the right way, but you have to do it before the Core Data stack is initialized.
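The point about "the three files" matters: a Core Data SQLite store uses WAL journaling, so alongside the .sqlite file there are -shm and -wal sidecars, and all three must be removed. A sketch of the file-deletion logic (shown in Python for brevity; the app itself would do the same with FileManager before loading the store):

```python
# Illustration: a Core Data SQLite store consists of three files
# (store.sqlite, store.sqlite-shm, store.sqlite-wal). Deleting only the
# first one leaves stale journal data behind, so remove all three.
from pathlib import Path


def delete_core_data_store(store: Path) -> None:
    for suffix in ("", "-shm", "-wal"):
        # missing_ok: the sidecars may not exist for a freshly copied store
        Path(str(store) + suffix).unlink(missing_ok=True)
```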

Execute two database projects with different schemas to a single database

I have two database projects with different schemas (dbo and nts). I want to deploy both projects to a single database. When I deploy the nts project, the stored procedures are created as [nts].[StoredProcedureName]. When I then deploy the dbo project to the same database, the nts stored procedures get deleted. If I reverse the order (dbo first, then nts), the nts stored procedures end up with dbo as their schema. I am new to databases and SQL. I tried adding the nts schema to the database, but it doesn't solve my problem. Let me know if you need any other information.
When you publish your database projects, make sure these settings in the Advanced publish options are set as follows:
Always re-create database = unchecked
Drop objects in target but not in source = unchecked
Side note: you should consider combining the two projects into one. A single database project can support multiple schemas.
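If the publish is scripted rather than run from Visual Studio, the same two options map to SqlPackage deploy properties. A sketch building the command (file names and connection string are placeholders):

```python
# Sketch: a SqlPackage "Publish" invocation with the two Advanced publish
# options pinned so one project's deployment does not wipe out the other
# schema's objects. Only constructs the command list.
def publish_cmd(dacpac: str, connection: str) -> list[str]:
    return [
        "sqlpackage",
        "/Action:Publish",
        f"/SourceFile:{dacpac}",
        f"/TargetConnectionString:{connection}",
        "/p:CreateNewDatabase=false",        # "Always re-create database" unchecked
        "/p:DropObjectsNotInSource=false",   # keep the other project's objects
    ]

print(publish_cmd("Nts.dacpac", "Server=.;Database=MyDb;Integrated Security=true"))
```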

Jhipster Elasticsearch org.elasticsearch.indices.IndexMissingException

I am trying out jhipster and learning the technology stack.
Environment:
Database:
Oracle (both prod and dev)
Elasticsearch
Windows
I created a new JHipster project and copied some externally generated entities into the domain folder.
Then I wrote a parser that generates the [Entity].json file in the .jhipster folder.
I ran the entity subgenerator using this json file, which asks me whether to overwrite the existing entity file (the one I copied from the external project).
I select no, and the generator then generates the CRUD html/js files.
When I run the application, it can save/edit data correctly.
But when I search, I get IndexMissingException.
I checked the target folder and found that target/elasticsearch/data does not contain any index for this entity.
I am not very familiar with Elasticsearch and would like to know if there is any workaround for this IndexMissingException.
There are a few ways to solve this.
You can simply delete your target folder while the application isn't running, then rerun it. This will regenerate the indexes for all of your entities, but because Elasticsearch is essentially a data store, you will lose all data from it so it is not appropriate for a production environment.
I have created a Yeoman generator that will generate a service to reinsert all data from your main datastore into your elasticsearch indexes. This can help resolve the data issue from the first solution. It will also programmatically delete and recreate your indexes, so it can be used to solve your problem directly.
You can use the Create Index API while the server is running. This is important for a production environment where the data in your index is important to keep.
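The third option boils down to a single HTTP call: the Create Index API is a PUT to the index name. A sketch (the index name "entity" and host are assumptions; the request is only constructed here, not sent, and it uses urllib so there are no third-party dependencies):

```python
# Sketch: build a Create Index request for a running Elasticsearch node.
# PUT <host>/<index> creates the index without restarting the application
# or deleting existing data.
import json
import urllib.request


def create_index_request(base_url: str, index: str) -> urllib.request.Request:
    body = json.dumps({"settings": {"number_of_shards": 1}}).encode()
    return urllib.request.Request(
        f"{base_url}/{index}",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )


req = create_index_request("http://localhost:9200", "entity")
print(req.get_method(), req.full_url)
```

Sending it with urllib.request.urlopen(req) against the running node would create the missing index in place, which is why this route is safe for production.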

Did a successful Core Data migration but the existing store was deleted

I have a Core Data store managed with MagicalRecord. I did a successful migration, but lost the data in the newly created store. This is what I have:
salonbookV1.0 is the original xcdatamodel for the initial store. I added only new attributes to an existing entity, and the mapping model looks like this (a partial image):
Let me elaborate on what I did...
created the xcdatamodeld folder with both xcdatamodels in it
marked salonbookV1.0 as the current version and ran the app, creating some entries
stopped the app, marked salonbookV1.5 as the current version, and ran the app again
the data entered previously was gone! (apparently the migration did not occur?)
The migration was accomplished; I know that because I can use the new attributes. However, the existing Core Data store was deleted. I have read all I can on MagicalRecord, and there is only one method that deals with migration; MR does the rest without any coding from me.
So the question remains: why is the existing store being deleted?
I don't know about MagicalRecord, but in "normal" Core Data you have to pass the NSMigratePersistentStoresAutomaticallyOption when adding the persistent store; otherwise it will not migrate your existing data to the new store version.
