We are trying to implement push notifications with a webhook. We know that a GI created on top of a SQL view will not support this, but will it support a projection DAC?
Yes, it supports a projection DAC.
The Acumatica Framework monitors its cache, and as long as changes go through the Acumatica cache, push notifications will work.
But if you create a SQL view and then build a PXProjection on top of it, the Acumatica cache is still not used, so the GI will not be able to track changes and raise notifications.
IMHO, I'd suggest creating multiple GIs instead of the one based on the SQL view, and collecting the information from the underlying tables via those GIs.
BTW, the same is true for PXDatabase.Update, PXDatabase.Insert, and PXDatabase.Delete. If any of these is used anywhere in the code, it affects how the Acumatica cache refreshes, and push notifications as well, since both depend on changes going through the cache.
Is there any SuiteScript API to pull the Application Performance Monitor metrics from the NetSuite database, or any other way to get this data? We need to store the data for future reference so we can optimize transaction records. Can anyone please help with getting this data? Thanks!
It's not public, but yes, you can make requests to the underlying Suitelets that power the NetSuite APM. To see this, go to one of the APM modules, open the browser's developer tools, and choose the Network tab. Once it's open, refresh the data and then inspect the request that was made to fetch the data used to populate these metrics.
Good luck!
I have an Angular 7 project that has lots of components that talk to an API and update data from it.
The data is constantly refreshed using setTimeout, so the app is getting very busy with all the components refreshing data from the API.
I am therefore thinking of adding ngrx/Store to the project.
Is ngrx/Store a solution for this kind of issue or should I look for other solutions?
To keep it simple: I would say yes, because NgRx provides an API to break the application into smaller pieces.
If you have multiple features, you will have one root store, and for every feature a feature store connected to the root store. That way you can easily manage everything with a clean separation of concerns per feature.
Another thing is that NgRx provides effects middleware for working with side effects such as API calls, so the code that talks to the API stays completely separate.
I would recommend reading their small demo, where for every feature they create effects, actions, reducers, selectors, and services.
I'm currently learning how to use the CloudKit framework, and there is a real lack of documentation or examples showing how to sync Core Data and CloudKit.
I have watched all the WWDC videos (2014, 2015, 2016) dedicated to CloudKit, but none of them explains how to implement syncing with Core Data. I can't find any fresh examples, tutorials, or books showing how to implement this syncing.
I know that it is best to use CloudKit's operations API (not the convenience API) and to subscribe to changes, as advised in the new WWDC 2016 videos dedicated to CloudKit, but mapping to Core Data is a real problem.
For example, let's say I would like to create an app similar to the Notes app. While offline, the user can create notes and work with them, saving them to the local Core Data database. When the device comes online, the app checks what changed on the server and saves newly created records to the server (CloudKit).
When the app starts, it also fetches changes from CloudKit and, if there are any, updates the local cache (Core Data) with them.
I would appreciate a common pattern for syncing. Where should the Core Data syncing methods live, and what should they look like?
I would appreciate any information or help with this.
I'm using Swift 3, Xcode 8, and iOS 10.
As of iOS 13, there are new APIs that simplify this synchronization for developers. I would recommend watching the WWDC19 session about the new synchronization between Core Data and CloudKit. Please note that these new APIs only work on iOS 13+.
Video: https://developer.apple.com/videos/play/wwdc2019/202/
In short, you need to start using NSPersistentCloudKitContainer instead of NSPersistentContainer. This lets syncing work automatically, with automatic conflict resolution using a last-writer-wins merge strategy. If you want to build a good working app, you'll also need to make some modifications to improve the syncing for your app.
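As a rough sketch of what that swap looks like (assuming a data model file named "Model" and an app that already has the iCloud/CloudKit capability and container configured; the class name is just illustrative):

```swift
import CoreData

final class PersistenceController {
    // NSPersistentCloudKitContainer mirrors the local store to CloudKit
    // automatically, resolving conflicts with a last-writer-wins strategy.
    let container = NSPersistentCloudKitContainer(name: "Model")

    init() {
        container.loadPersistentStores { _, error in
            if let error = error {
                fatalError("Failed to load persistent store: \(error)")
            }
        }
        // Merge changes pushed down from CloudKit into the view context.
        container.viewContext.automaticallyMergesChangesFromParent = true
    }
}
```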
Official documentation can be found at:
Setting Up Core Data with CloudKit
Syncing a Core Data Store with CloudKit
Data modeling for collaboration (conflict-free replicated data type)
At the end of the session they also demonstrated an example of better sync merging than the default last-writer-wins merge strategy. Using causal trees allows multiple users to edit the same string (and, to some extent, other types of data) without losing any data. I would really recommend that everyone read this article from Archagon, which describes how this works and how to implement it (also with CloudKit syncing, but not the new automatic kind). As demonstrated in the session, you can also implement this with the new automatic syncing between Core Data and CloudKit.
Core Data already provides the user with the ability to sync to iCloud. There's no need to use CloudKit.
Design For Core Data In iCloud
But yes, Core Data with iCloud has been deprecated. Even so, it has not been discontinued, and there are no immediate plans at Apple to discontinue it; they just want to discourage its use. It also has problems with reconciling updates from multiple devices.
In any case, I have been looking into the question of how to do this with CloudKit myself. There are two answers; the first is to use the following:
Seam on GitHub
The second is to do it manually:
Designing for CloudKit
The key here is that CloudKit needs the record metadata to be able to handle record updates reliably, so you have to save that metadata in your Core Data database. The CKRecord class includes a method, encodeSystemFields(with:), which encodes those fields into a Data value that can be stored in your database; you can then use the appropriate decoder when you need to restore the CKRecord.
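For illustration, here is a small sketch of archiving and restoring that metadata with encodeSystemFields(with:). It uses the current NSKeyedArchiver API rather than the Swift 3-era one, and the function names are just placeholders; the returned Data is what you would store in a Binary Data attribute on your Core Data entity.

```swift
import CloudKit

// Archive the record's system fields (record ID, change tag, zone, etc.)
// into Data that can be stored alongside your Core Data object.
func systemFieldsData(for record: CKRecord) -> Data {
    let archiver = NSKeyedArchiver(requiringSecureCoding: true)
    record.encodeSystemFields(with: archiver)
    archiver.finishEncoding()
    return archiver.encodedData
}

// Rebuild a CKRecord from the stored metadata so it can be modified and
// saved back to CloudKit as an update rather than a new record.
func record(fromSystemFields data: Data) -> CKRecord? {
    guard let unarchiver = try? NSKeyedUnarchiver(forReadingFrom: data) else {
        return nil
    }
    unarchiver.requiresSecureCoding = true
    return CKRecord(coder: unarchiver)
}
```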
Anyway, I am about to start doing this myself. I'll update this with more information when I have it.
Apple has recently published a guide that seems to answer this question. Check out Apple's Maintaining a Local Cache of CloudKit Records to see how to store CloudKit data on device.
While this guide doesn't provide sample code for writing to the device, it does answer the rest of the question: it tells you how to fetch changes from CloudKit and produce data that can be stored on the device.
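As a hedged sketch of the fetch-changes flow that guide describes, something along these lines is typical; the zone ID, saved change token, and the Core Data update code are placeholders you would supply yourself.

```swift
import CloudKit

func fetchChanges(in zoneID: CKRecordZone.ID,
                  since savedChangeToken: CKServerChangeToken?) {
    let configuration = CKFetchRecordZoneChangesOperation.ZoneConfiguration()
    configuration.previousServerChangeToken = savedChangeToken

    let operation = CKFetchRecordZoneChangesOperation(
        recordZoneIDs: [zoneID],
        configurationsByRecordZoneID: [zoneID: configuration])

    operation.recordChangedBlock = { record in
        // Insert or update the matching Core Data object.
    }
    operation.recordWithIDWasDeletedBlock = { recordID, _ in
        // Delete the matching Core Data object.
    }
    operation.recordZoneFetchCompletionBlock = { _, newToken, _, _, error in
        // Persist newToken so the next fetch only returns newer changes.
    }
    operation.fetchRecordZoneChangesCompletionBlock = { error in
        // Handle errors such as CKError.changeTokenExpired by resetting the token.
    }

    CKContainer.default().privateCloudDatabase.add(operation)
}
```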
I have two Azure Websites set up: one that serves the client application and has no database, and another with a database and a Web API solution that the client gets data from.
I'm about to add a new table to the database and populate it with data using a temporary Seed method that I only plan on running once. I'm not sure what the best way to go about it is, though.
Right now I have the database initializer set to MigrateDatabaseToLatestVersion, and I've tested this update locally several times. Everything seems good to go, but the update/seed method takes about 6 minutes to run. I have some questions about concurrency while migrating:
What happens when someone performs CRUD operations against the database while business logic and tables are being updated in this 6-minute window? I mean the time between when I hit "Publish" in VS and when the new bits are actually deployed. What if the seed method modifies every entry in another table, and a user adds some data mid-seed that doesn't get hit by this critical update? Should I lock the site while doing it, just in case (far from ideal...)?
Any general guidance on this process would be fantastic.
Operations like creating a new table or adding new columns should have only a minimal impact on performance and be transparent to users, especially if the application applies the recommended pattern of dealing with transient faults (for instance by leveraging the Enterprise Library).
Mass updates or reindexing could cause contention and affect the application's performance or even cause errors. Depending on the case, transient fault handling could work around that as well.
Concurrent modifications to data that is being upgraded could cause problems that would be more difficult to deal with. These are some possible approaches:
Maintenance window
The simplest and safest approach is to take the application offline, back up the database, upgrade the database, update the application, test, and bring the application back online.
Read-only mode
This approach avoids making the application completely unavailable, by keeping it online but disabling any feature that changes the database. The users can still query and view data while the application is updated.
Staged upgrade
This approach is based on carefully planned sequences of changes to the database structure and data and to the application code so that at any given stage the application version that is online is compatible with the current database structure.
For example, let's suppose we need to introduce a "date of last purchase" field to a customer record. This sequence could be used:
Add the new field to the customer record in the database (without updating the application). Set the new field's default value to NULL.
Update the application so that for each new sale, the date of last purchase field is updated. For old sales the field is left unchanged, and the application at this point does not query or show the new field.
Execute a batch job on the database to update this field for all customers where it is still NULL. A delay could be introduced between updates so that the system is not overloaded.
Update the application to start querying and showing the new information.
There are several variations of this approach, such as the concept of "expansion scripts" and "contraction scripts" described in Zero-Downtime Database Deployment. This could be used along with feature toggles to change the application's behavior dynamically as the upgrade stages are executed.
New columns could be added to records to indicate that they have been converted. The application logic could be adapted to deal with records in the old version and in the new version concurrently.
Entity Framework may impose some additional limitations on your options, because it generates the SQL statements on behalf of the application, so you would have to take that into consideration when planning the stages.
Staging environment
Changing the production database structure and executing mass data changes is risky business, especially when it must be done in a specific sequence while data is being entered and changed by users. Your options to revert mistakes can be severely limited.
It would be necessary to do extensive testing and simulation in a separate staging environment before executing the upgrade procedures on the production environment.
I agree with the maintenance window idea from Fernando. But here is the approach I would take given your question.
Make sure your database is backed up before doing anything (I am assuming it's SQL Azure).
Put up a maintenance page on the Client Application
Run the migration against your database via Visual Studio (I am assuming you are doing this through the console) or a unit test.
Publish the website and Web API sites.
Verify your changes.
The main thing about working with the seed method via Entity Framework is that it's easy to get it wrong, and without a proper backup, running against prod can get you into trouble really fast. I would probably run it through your test database/environment first (if you have one) to verify that what you want to happen is actually happening.
I recently submitted an upgrade of my app which included a lightweight Core Data migration (including new fields in existing tables and a couple of new tables). I followed every tip regarding this migration, including some I found on this site.
I thoroughly tested the update on three different devices and it all went ok!!!
However, this update is crashing on all my devices and probably for all my customers. I can't explain why this is happening.
Could you please help me understand this debacle?
To truly test your app and migration, you need to run your original app to create a data store according to the original data model. Then you need to run your new app, opening the data store that was generated with the original app. This can be a real pain and is easier (at least initially) to do in the Simulator, because you have more control over the file system and can swap in a saved original data store. On a device you need to regenerate the original data store for each test.
If you are testing on your own development devices, then you have already migrated your data store. Is it possible that your test devices created their data stores with the new data model and never actually performed a migration?
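One quick way to check whether a store on a given device would actually trigger a migration is to compare its metadata with the current model. A small sketch, assuming storeURL points to the app's SQLite store and currentModel is the model shipped with the new build:

```swift
import CoreData

// Returns true if the existing store was created with an older model and
// therefore needs to be migrated before it can be opened.
func storeNeedsMigration(at storeURL: URL,
                         to currentModel: NSManagedObjectModel) throws -> Bool {
    let metadata = try NSPersistentStoreCoordinator.metadataForPersistentStore(
        ofType: NSSQLiteStoreType,
        at: storeURL,
        options: nil)
    return !currentModel.isConfiguration(withName: nil,
                                         compatibleWithStoreMetadata: metadata)
}
```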
I generally only use automatic migration during beta testing, for quick revisions; other than that I always use a mapping model, so that you have control.
The other issue is that if your model shifts far enough between releases, auto migration from v1 to v2 could be fine, and v2 to v3 could be OK, but v1 to v3 could be too drastic to be inferred. By making mapping models for those transitions, you retain control of the migration.
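As a sketch of what the mapping-model route can look like (the URLs, model instances, and error handling here are placeholders, not a drop-in implementation):

```swift
import CoreData

// Performs a manual migration using an explicit mapping model bundled with
// the app, instead of letting Core Data infer one.
func migrateStore(at sourceURL: URL,
                  to destinationURL: URL,
                  sourceModel: NSManagedObjectModel,
                  destinationModel: NSManagedObjectModel) throws {
    // Find the v1 -> v2 (etc.) mapping model shipped in the app bundle.
    guard let mapping = NSMappingModel(from: Bundle.allBundles,
                                       forSourceModel: sourceModel,
                                       destinationModel: destinationModel) else {
        throw NSError(domain: "Migration", code: 1, userInfo: nil)
    }

    let manager = NSMigrationManager(sourceModel: sourceModel,
                                     destinationModel: destinationModel)
    try manager.migrateStore(from: sourceURL,
                             sourceType: NSSQLiteStoreType,
                             options: nil,
                             with: mapping,
                             toDestinationURL: destinationURL,
                             destinationType: NSSQLiteStoreType,
                             destinationOptions: nil)
}
```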