Syncing Core Data with CloudKit publicCloudDatabase

How can I store/sync Core Data records with CloudKit's publicCloudDatabase?
Syncing works, but only with the privateCloudDatabase, which massively limits the use cases for combining CloudKit and Core Data. I hope I missed some fine-tuning…
My aim: entries in the db must be visible to all users of my app/webservice (CloudKit JS).
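(Worth noting: since iOS 14/macOS 11, NSPersistentCloudKitContainer can also mirror to the public database by setting databaseScope = .public on its NSPersistentCloudKitContainerOptions; earlier releases only supported the private database.) For the CloudKit JS side of the aim, here is a minimal sketch of writing an entry to the public database; the container identifier, API token, and the Entry record type are placeholders:

// Assumes cloudkit.js is loaded; all identifiers are placeholders.
CloudKit.configure({
    containers: [{
        containerIdentifier: 'iCloud.com.example.MyApp',
        apiTokenAuth: { apiToken: 'YOUR_API_TOKEN', persist: true },
        environment: 'development'
    }]
});

var publicDB = CloudKit.getDefaultContainer().publicCloudDatabase;

// Save one record of type 'Entry' to the public database.
// Reading the public database needs no login; writing requires a signed-in iCloud user.
publicDB.saveRecords({
    recordType: 'Entry',
    fields: { title: { value: 'Visible to everyone' } }
}).then(function (response) {
    if (response.hasErrors) {
        console.error(response.errors);
    } else {
        console.log('Saved record:', response.records[0].recordName);
    }
});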

Related

How can I do bulk inserts into the Common Data Service?

I have 1000 records that I need to sync daily from an API. I am currently bulk inserting them into a SQL Database, however I would like to use Dataverse/a Common Data Service database instead.
The Logic App connector seems to do one record at a time, and the SDK does PUTs and POSTs. How can I either insert 1000 records into the Common Data Service in bulk OR somehow synchronise my SQL DB with the CDS?
As far as I know, there is no other way to do that without programming. You can extend your Power Automate flow with Azure Functions to insert these records in a single transaction.
This link explains how it can be done:
https://learn.microsoft.com/en-us/powerapps/developer/data-platform/webapi/execute-batch-operations-using-web-api#when-to-use-batch-requests
Please let me know if anything is unclear.
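For reference, a minimal sketch of what such a function could send: a single transactional $batch request to the Dataverse Web API (up to 1000 operations fit in one batch). The org URL, the accounts entity set, and the token handling are assumptions:

// Node 18+ (global fetch). All names below are placeholders.
const orgUrl = 'https://yourorg.api.crm.dynamics.com';
const records = [{ name: 'Contoso' }, { name: 'Fabrikam' }];

function buildBatchBody() {
    let body = '--batch_001\r\nContent-Type: multipart/mixed;boundary=changeset_001\r\n\r\n';
    records.forEach((record, i) => {
        // Each part is a full HTTP request; the changeset makes them one transaction
        body += '--changeset_001\r\n' +
            'Content-Type: application/http\r\n' +
            'Content-Transfer-Encoding: binary\r\n' +
            `Content-ID: ${i + 1}\r\n\r\n` +
            `POST ${orgUrl}/api/data/v9.2/accounts HTTP/1.1\r\n` +
            'Content-Type: application/json\r\n\r\n' +
            JSON.stringify(record) + '\r\n\r\n';
    });
    return body + '--changeset_001--\r\n--batch_001--\r\n';
}

async function sendBatch(accessToken) {
    const response = await fetch(`${orgUrl}/api/data/v9.2/$batch`, {
        method: 'POST',
        headers: {
            'Content-Type': 'multipart/mixed;boundary=batch_001',
            'Authorization': `Bearer ${accessToken}`,
            'OData-Version': '4.0',
            'Accept': 'application/json'
        },
        body: buildBatchBody()
    });
    console.log('Batch HTTP status:', response.status);
}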
If you want to regularly ingest data (1000 rows) into Dataverse (CDS), then use Dataflows. The following link to MS Docs describes how to set up scheduled bulk data updates. It is therefore a pull rather than a push model.
https://learn.microsoft.com/en-us/powerapps/maker/data-platform/create-and-use-dataflows

Logic App to push data from Cosmosdb into CRM and perform an update

I have created a logic app with the goal of pulling data from a container within cosmosdb (with a query), looping over the results, and then pushing this data into CRM (or Common Data Service). When the data is pushed to CRM, an ID will be generated. I wish to then update cosmosdb with this new ID. Here is what I have so far:
The next step queries the data within our cosmosdb database, selecting all IDs with a length greater than 15. (This tells us that the ID is not yet within the CRM database.)
Then we loop over the results and push the data into CRM (Dynamics 365 or the Common Data Service).
Dilemma: The first part of this process appears to be correct, but I want to make sure that I am on the right track. Furthermore, once the data is successfully pushed, CRM automatically generates an ID for each record. How would I then update cosmosDB with the newly generated IDs?
Any suggestion is appreciated
Thanks
I see a red flag in your approach here: the query with length(c.id) > 15. This is not something I would do. I don't know how big your database is going to be, but it is generally not very performant to run high volumes of cross-partition queries, especially if the database is going to keep growing.
Cosmos DB already provides an awesome streaming capability, so rather than doing this in a batch I would use Change Feed to accomplish whatever you're doing in your Logic App. This will likely give you better control of the process and allow you to get the id back out of your CRM app to insert back into Cosmos DB.
Because you will be writing back to Cosmos DB, you will need a flag to ignore the update in Change Feed when the item is updated.
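As a sketch, the same idea as an Azure Functions Cosmos DB (Change Feed) trigger in JavaScript; the database/container names are placeholders and pushToCrm is a hypothetical helper:

// function.json (placeholder names):
// {
//   "bindings": [
//     { "type": "cosmosDBTrigger", "name": "documents", "direction": "in",
//       "connectionStringSetting": "CosmosConnection", "databaseName": "mydb",
//       "collectionName": "items", "createLeaseCollectionIfNotExists": true },
//     { "type": "cosmosDB", "name": "outputDocuments", "direction": "out",
//       "connectionStringSetting": "CosmosConnection", "databaseName": "mydb",
//       "collectionName": "items" }
//   ]
// }

module.exports = async function (context, documents) {
    const updated = [];
    for (const doc of documents) {
        if (doc.crmId) continue;           // the flag: ignore our own write-backs
        doc.crmId = await pushToCrm(doc);  // store the CRM-generated ID on the item
        updated.push(doc);                 // upserted back via the output binding
    }
    context.bindings.outputDocuments = updated;
};

// Hypothetical helper: create the record in CRM/CDS and return its generated ID
async function pushToCrm(doc) {
    // e.g. a POST to the Dataverse Web API
    return 'generated-crm-id';
}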

Sequelize - Change Watcher (one database, two different apps)

I have two different Node projects that access the same database with sequelize.
One of the Node apps (a kind of backoffice) updates some tables, and the other one uses the data in those tables to perform some operations.
The thing is that this data does not change constantly, and the second app needs to be as fast as possible. That's why the second app queries the tables once (when the app starts) and then stores the data in memory so it can do the operations faster (because there is no I/O to the database).
My problem is that sometimes this data may change through the first app, and as these two apps have no contact with each other (for security reasons), the only way I see is to have some "dirty" flag in some table of the database, have the first app set it after an update, and have the second app query it every X seconds to check whether the flag has changed.
I don't like this approach, and that's why I'm posting this question:
Does Sequelize provide a better or fancier way to do this,
like some kind of "changes/dirty" watcher?
Thanks in advance
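Sequelize hooks (afterUpdate and friends) only fire inside the process that makes the change, so they cannot notify the second app; a cross-process watcher has to go through the database or a broker. For reference, a minimal sketch of the dirty-flag polling described above; the table, the connection string, and reloadCache are made-up placeholders:

const { Sequelize, DataTypes } = require('sequelize');

const sequelize = new Sequelize('postgres://user:pass@localhost/mydb'); // placeholder

// A single-row table that the backoffice app flips whenever it changes the data
const ChangeFlag = sequelize.define('ChangeFlag', {
    dirty: { type: DataTypes.BOOLEAN, allowNull: false, defaultValue: false }
}, { tableName: 'change_flags' });

// Hypothetical helper: re-run the startup queries and rebuild the in-memory data
async function reloadCache() { /* ... */ }

// In the second app: poll every X seconds and reload only when flagged
setInterval(async () => {
    const flag = await ChangeFlag.findOne();
    if (flag && flag.dirty) {
        await reloadCache();
        await flag.update({ dirty: false }); // acknowledge the change
    }
}, 10000);

If the underlying database is PostgreSQL, the LISTEN/NOTIFY approach shown in the last answer on this page avoids the polling entirely.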

Copy documents from one DocumentCollection to another?

In my Azure CosmosDB, which I use with the Gremlin API, there is one database called graphdb with several DocumentCollections.
I would like to copy a selected set of Vertices and Edges from one collection (graphdb) to another (Tintin).
I managed to do this by transferring all the data via the client, but it would be much easier if the data stayed in Azure. Thus I tried some SQL in the Azure portal, like:
SELECT *
INTO Tintin
FROM graphdb;
However, this seems unsupported.
Right now you cannot join multiple collections, and your query violates this rule.
But +1 for your idea; you should post it on https://feedback.azure.com/
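Until then, the client-side copy can at least be scripted. A minimal sketch with the @azure/cosmos SQL SDK (the endpoint, key, and label filter are placeholders); this works because Gremlin vertices and edges are stored as plain JSON documents:

const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({
    endpoint: 'https://youraccount.documents.azure.com:443/', // placeholder
    key: 'YOUR_KEY'                                           // placeholder
});

async function copySelection() {
    const source = client.database('graphdb').container('graphdb');
    const target = client.database('graphdb').container('Tintin');

    // Select the vertices/edges to copy; the filter is an example
    const { resources } = await source.items
        .query('SELECT * FROM c WHERE c.label = "person"')
        .fetchAll();

    for (const doc of resources) {
        await target.items.upsert(doc); // keeps ids and properties intact
    }
}

copySelection().catch(console.error);

Note that the documents still travel through the client here; if they must stay inside Azure, a data-movement service such as Azure Data Factory may be worth a look.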

Real-Time Database Messaging

We've got an application in Django running against a PGSQL database. One of the functions we've grown to support is real-time messaging to our UI when data is updated in the backend DB.
So... for example we show the contents of a customer table in our UI, as records are added/removed/updated from the backend customer DB table we echo those updates to our UI in real-time via some redis/socket.io/node.js magic.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models. That actually works pretty well for our current functions, but as tables continue to grow into GBs of data, it is starting to slow down on some larger tables as our engine digs through the currently 'subscribed' UIs and works out which updates need to go to which clients.
Curious what other options might exist here. I believe MongoDB and other NoSQL engines support some constructs like this out of the box, but I'm not finding an exact hit when Googling for better solutions.
Currently we've rolled our own solution for this entire thing using overloaded save() methods on the Django table models.
Instead of working at the app level, you might want to work at the lower, database level.
Add a PostgreSQL trigger after row insertion, and use pg_notify to notify external apps of the change.
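For example, a sketch of installing such a trigger from Node with the pg package; the customer table, the connection string, and the customer_changes channel are assumptions:

const { Client } = require('pg');

async function installTrigger() {
    const client = new Client({ connectionString: 'postgres://username@localhost/databasename' });
    await client.connect();

    // The trigger function sends the changed row as JSON on the channel.
    // Note: NOTIFY payloads are capped at 8000 bytes, so send ids for big rows.
    await client.query(`
        CREATE OR REPLACE FUNCTION notify_customer_change() RETURNS trigger AS $$
        BEGIN
            PERFORM pg_notify('customer_changes', row_to_json(NEW)::text);
            RETURN NEW;
        END;
        $$ LANGUAGE plpgsql;
    `);

    // Fire it after every insert or update on the customer table
    await client.query(`
        DROP TRIGGER IF EXISTS customer_change ON customer;
        CREATE TRIGGER customer_change
            AFTER INSERT OR UPDATE ON customer
            FOR EACH ROW EXECUTE PROCEDURE notify_customer_change();
    `);

    await client.end();
}

installTrigger().catch(console.error);

(The pg-pubsub snippet below subscribes to this same channel.)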
Then in NodeJS:
var PGPubsub = require('pg-pubsub');
var pubsubInstance = new PGPubsub('postgres://username@localhost/databasename');

pubsubInstance.addChannel('customer_changes', function (channelPayload) {
    // Handle the notification and its payload
    // If the payload was JSON it has already been parsed for you
});
You can do the same in Python with https://pypi.python.org/pypi/pgpubsub/0.0.2.
Finally, you might want to use data partitioning in PostgreSQL. Long story short, PostgreSQL already has everything you need :)
