I'm building an internal web application for components of building parts. I have a table with projects which is tied to some other tables. When a user creates a new project, I want to "bootstrap" the project with a default categorization schema, which the user can then modify for their project. So I need to copy some rows from a default schema and tie the copies to the user's project.
I'm running Node.js on the backend, AngularJS on the frontend and PostgreSQL as the database. Where is the best place to put this logic? Either I use triggers in the database, where the trigger fires when a new row is inserted into the project table, or I do it with complicated queries in Node. Or is there some other way? Is there a best practice? It's probably "easier" to do a trigger, but I worry about the maintenance and testing of the app.
Since the issue that you have is related to the state of the database, you should solve it inside the database. There are basically two ways of solving this:
1. Revoke the insert privilege on the project table. Create a function new_project() that has parameters for all the required initial state of the project. Inside that function you create the schema, do the copying, set up privileges and populate the tables with the parameter values (see the sketch below).
2. Revoke the insert privilege on the project table. Create a view that has all required columns from all relevant tables to make a valid initial project and create an INSTEAD OF INSERT trigger on the view. In the trigger function you perform all the required steps as above.
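For example, a minimal sketch of the first option, assuming placeholder tables project(id, name, owner_id), category(project_id, name, sort_order) and default_category(name, sort_order) that stand in for your actual model:

-- Sketch only: adjust table and column names to your schema.
CREATE FUNCTION new_project(p_name text, p_owner integer)
RETURNS integer
LANGUAGE plpgsql
SECURITY DEFINER -- callers do not need INSERT privileges on the underlying tables
AS $$
DECLARE
  v_project_id integer;
BEGIN
  INSERT INTO project (name, owner_id)
  VALUES (p_name, p_owner)
  RETURNING id INTO v_project_id;

  -- Copy the default categorization schema and tie the copy to the new project
  INSERT INTO category (project_id, name, sort_order)
  SELECT v_project_id, name, sort_order
  FROM default_category;

  RETURN v_project_id;
END;
$$;

From Node you would then call SELECT new_project($1, $2) instead of inserting into the project table directly.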
Debugging code in PostgreSQL is not very advanced, but whether you place your code in PostgreSQL or on the application side, you will have the same issues. The advantage of PostgreSQL is that the bug, if any, is never far away from where your code operates.
I have an app out in the App Store, and I am working on a lightweight migration (adding new attributes and new entities, not deleting anything). From extensive research, I know that I need to add a new version of my current Core Data Model for the local version of the data model. Anyone who updates their app and only uses the local data will automatically be migrated over.
However, I cannot find anything about what happens when I update the iCloud schema (from icloud.developer.apple.com). Mainly, I'm concerned about users who are on older versions of the app and are using iCloud. When I update the schema on the iCloud website, will users on an older version of the app lose their current data, or be unable to sync their data since their local schema will be different from the iCloud one?
Also, I'm using an NSPersistentCloudKitContainer for syncing the Core Data with CloudKit.
Any help is greatly appreciated as I do not want to mess up anyone's data!
No, their data will still be on iCloud and they can continue to use your app.
Once your schema is deployed to the Production environment, you cannot change existing record types or delete them, so all your changes can only be additions to the current schema and will not affect users who have not updated the app yet.
I had a similar question previously and was quite anxious about updating my app's schema, but everything went well: no problems for users and no data was lost.
Do not forget to initialize your new schema from the app and deploy the changes to Production in the iCloud dashboard.
You can initialize your schema in your AppDelegate when you set up your NSPersistentCloudKitContainer, with the following code:
// Pushes the current Core Data model to the CloudKit Development environment
let options = NSPersistentCloudKitContainerSchemaInitializationOptions()
try? container.initializeCloudKitSchema(options: options)
After that you can comment these lines out until the next update of your Core Data model.
You can check that all changes were uploaded in the iCloud dashboard by clicking Deploy Schema Changes: you will see a confirmation window listing all the changes to the model that will be deployed.
It is also possible to change your schema directly in the iCloud dashboard, but it is less convenient (unless you only need to add a single record type).
Since changes to the schema do not affect existing users, I usually move them to Production before I submit the app for review, but only after all testing related to the new record types is done and I am not planning to change anything else there.
I have an ASP.NET MVC 5 project with a database and an external JSON file that is updated once a day by a third-party website.
What I'm trying to do is update my DB once a day in accordance with the JSON (the exact timing is not an issue here).
Currently I'm using a button that calls an action which parses the JSON and updates the database, and I want this to happen automatically.
As far as I understand, running the scheduled task from within the MVC application is bad practice and risky, and running an external dedicated service is preferred.
If I understand correctly, I can make a console application that will parse the JSON and update the DB automatically, but I'm not sure whether this console application can run on the Windows server and, if so, how to do it (and I'm also not sure that this is really a good idea).
So, I would be very happy if you can advise me here.
Thanks.
In the end, the solution was to build a console application that parses the JSON and updates the database.
Then I used the built-in task scheduler in my hosting control panel to run the application (in my case the control panel is Plesk).
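For reference, the console application can be as small as this sketch (the feed URL, FeedItem and MyDbContext are placeholder names; it assumes Json.NET and Entity Framework, but any JSON parser and data access layer will do):

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json;

class Program
{
    static async Task Main()
    {
        using (var http = new HttpClient())
        using (var db = new MyDbContext()) // your EF context
        {
            // Download and parse the third-party JSON feed
            var json = await http.GetStringAsync("https://example.com/feed.json");
            var items = JsonConvert.DeserializeObject<List<FeedItem>>(json);

            foreach (var item in items)
            {
                // ...insert or update the corresponding rows in db here...
            }

            await db.SaveChangesAsync();
        }
    }
}

The scheduled task then just runs the compiled .exe once a day; both Plesk's task scheduler and the Windows Task Scheduler can do that.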
Say I have one Azure web app which calls one Azure API service. Now I need to update both applications to a newer version, in the most extreme case: the database is not compatible, and the API has changes to existing method signatures that are not compatible with old-version calls either. I use Visual Studio's publish profile to deploy the update directly. The problem I've been facing is that during the publish process, although it only takes a few seconds, there are still active end users doing things on the web app and making API calls. I've personally seen the results in such situations be unstable and unpredictable, and the saved data might simply be corrupt.
So is there a better way to achieve some sort of 'flash update' that causes absolutely no side effects for end users? Thanks.
You should look at a different deployment strategy. First update the database, perhaps by accepting null values so both versions can work with it, then deploy the new API next to the current one. Validate it, then switch the traffic from the current version to the new one. Do the same for the website. This is a blue-green deployment strategy; it requires some more effort but avoids the downtime and errors: https://www.martinfowler.com/bliki/BlueGreenDeployment.html
For the web app, you should use deployment slots: deploy your new version to a staging slot and, once you are ready, swap it with production so the site URL points at the new version. This doesn't take any time at all.
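If you prefer to script the swap rather than click through the portal, the Azure CLI can do it in one command (the resource group, app and slot names below are placeholders):

az webapp deployment slot swap --resource-group MyResourceGroup --name my-web-app --slot staging --target-slot production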
For the database, I believe you should freeze updates, take a backup and let users work in read-only mode; once you finish all your DB migrations and changes, point the application to the new database and that is it.
I'm using Entity Framework with a code first model; I've got my InitialCreate migration set up and working locally, I can run code against my database context, and everything works.
But when I deploy my project to Azure, I just get a connection string error ("Format of the initialization string does not conform to specification starting at index 0.").
I can't seem to find where in the Publish dialog the options to create the Azure database are. Do I have to create the database separately and hook them up manually? If so, what exact process should I follow? Does the database need to have contents?
I thought Microsoft was making a big deal that this could all be done in a single deploy step, but that doesn't seem to be the case from my current experience.
When you publish your project from the publish dialog, there is an option for code first migrations in the Settings tab. It automatically shows your data context and gives you the option to set the remote connection string, and this adds a section to web.config specifying the data context and the migrations configuration class to run during the migration process.
It also lets you choose whether or not to run the code first migrations.
You can also take a backup from the dev database, clear the data and then upload it to Azure SQL DB; this way the code first data context will check at first connection and find that the code and the database match.
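If you would rather not depend on the publish dialog, another common option (sketched below with placeholder names, assuming EF 6 code first) is to run pending migrations from the application itself, for example in Application_Start:

using System.Data.Entity;

// MyDbContext and Configuration (the migrations configuration class
// generated by Enable-Migrations) are placeholders for your own types.
Database.SetInitializer(
    new MigrateDatabaseToLatestVersion<MyDbContext, Configuration>());

With this initializer, any pending migrations are applied the first time the context is used, whether the app runs locally or on Azure.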
Windows Azure Table has two distinct mechanisms for altering an existing entity: Update, which modifies properties in place, and Merge which replaces the entire entity.
Which of these is used when you call TableServiceContext.UpdateObject()? (I'm guessing Update.) And is the other one exposed at all through this API?
(Apologies if this is right under my nose in the docs and I'm not seeing it.)
Actually, it's Merge that modifies properties in place, and Update that replaces the entire entity.
I believe the storage client library does a merge by default, but I think you can use SaveChangesOptions.ReplaceOnUpdate to modify this behavior.
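In code the difference looks roughly like this (a sketch, assuming the v1 storage client where TableServiceContext derives from DataServiceContext):

// Default: UpdateObject followed by SaveChanges sends a MERGE.
context.UpdateObject(entity);
context.SaveChangesWithRetries();

// Force a full replace (PUT) instead of a merge:
context.UpdateObject(entity);
context.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);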
An easy way to test/verify this is to run a debugging proxy like Fiddler and just see what happens over the wire.