I've got a really simple Azure Function with a CosmosDBTrigger set up (taken nearly straight from the examples, just as a minimal repro):
[FunctionName("ProcessEmail")]
public static void Run([CosmosDBTrigger("mydb", "mycollection")] IReadOnlyList<Document> documents, TraceWriter log)
{
    log.Verbose("Documents modified " + documents.Count);
    log.Verbose("First document Id " + documents[0].Id);
}
This was super simple to set up and works perfectly.
However, in my case I am only interested in being notified when a record is inserted - not when it is updated.
Is it possible to have the trigger only occur when a document is inserted?
If not, is it possible to tell, per-document, whether it was an insertion or an update that triggered this run?
If not, what's my best option here? Have a flag on the document for whether or not this phase of it has been processed?
We had a similar requirement for an update-only Cosmos DB trigger in one of our function apps. However, we ended up using a flag, since controlling the change feed is not yet supported according to the docs:
Today, you see all operations in the change feed. The functionality where you can control the change feed for specific operations, such as updates only and not inserts, is not yet available. You can add a “soft marker” on the item for updates and filter based on that when processing items in the change feed.
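For illustration, here is a minimal sketch of that soft-marker approach, assuming your update code sets a boolean updated property on each document (the property name is an illustration of the convention, not something Cosmos DB sets for you):

[FunctionName("ProcessEmail")]
public static void Run([CosmosDBTrigger("mydb", "mycollection")] IReadOnlyList<Document> documents, TraceWriter log)
{
    foreach (var document in documents)
    {
        // "updated" is an assumed convention maintained by your own writers;
        // the change feed itself does not distinguish inserts from updates.
        if (document.GetPropertyValue<bool?>("updated") == true)
            continue;

        log.Verbose("Document inserted: " + document.Id);
    }
}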
I have a Flutter app, backed by Firestore, that lets users rent items from each other. In my rental document, I have a field status that determines the status of the rental (think of it like shipping an item, where an item can have a status of 'ordered', 'shipped', 'delivered', etc.). My status variable is a number between 0 and 5, and each number represents a different phase. When the status variable changes, I want to notify the other user in the rental with a push notification, but I don't know which of the following methods is best.
The first way is to use a Cloud Function that triggers every time the rental document is updated, but only checks the status field. It would look something like this:
exports.notify = functions.firestore.document('rentals/{rentalId}')
  .onUpdate(async (snapshot, context) => {
    const oldSnap = snapshot.before.data(); // previous document
    const newSnap = snapshot.after.data(); // current document

    // status changes from 0 to 1
    if (oldSnap.status === 0 && newSnap.status === 1) {
      // do something
    }
  });
The one downside I can think of is that I would have to do another read to get the device push token of the other user. Also, this Cloud Function will trigger on every rental document update, even when it ultimately doesn't need to execute in the first place.
The other way would be to have a notifications collection that stores notifications, and a Cloud Function that triggers when a new notification document is added. Then, on the client side, when the user taps a button, update the status in the rental and create a new notification document:
Firestore.instance
    .collection('rentals')
    .document(rentalId)
    .updateData({'status': newStatus});

Firestore.instance.collection('notifications').add({
  'title': title,
  'body': body,
  'pushToken': <TOKEN HERE>,
});
In comparison to method 1, this does an extra write instead of a read.
Which method is better?
Both approaches can technically work and are valid. Which one you choose depends on the use-case and, given that both can work here, on personal preference. That's why I'll simply highlight a few key differences below and explain when I personally choose to use which one.
The first approach you describe is treating your database like a state machine, where each state and state transition has specific meaning. You then use Cloud Functions to trigger code in the state transition.
The second approach treats the database as a queue, where the presence of data indicates what needs to happen. So Cloud Functions then triggers on the simple presence of the document.
I typically use a queue-based approach for production work, since it makes it very easy to see how much work is left to be done: anything in your notifications collection is a notification that still needs to be sent.
In the state-transition data model it is much harder to see this information. In fact, you'll need to add extra fields to the document to be able to get this list of "pending notifications". For example: rentals with a pending notification are rentals where the timestamp that the status changed from 0 to 1 (a field you'll need to add, e.g. status_1_timestamp) is later than the timestamp the last notification was sent (a field like notification_timestamp).
But I sometimes use the state-transition approach too, usually when I want to transform the existing document, or because it's just a cool use-case to show (as in most cases the Firebase/Firestore SDKs would not expose both the old and new state).
I'd probably pick the queue-based approach here, but as said before: that's a personal preference based on the reasoning above. If those reasons don't apply to you, or you have different reasons, that can be fine too.
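For completeness, here is a minimal sketch of the queue-based variant, assuming the firebase-admin SDK is initialized as admin and that the notification documents look like the ones created above:

exports.sendNotification = functions.firestore.document('notifications/{notificationId}')
  .onCreate(async (snapshot, context) => {
    // Assumes firebase-admin has been initialized as `admin` elsewhere.
    const notification = snapshot.data();

    // Deliver the push notification via FCM.
    await admin.messaging().send({
      token: notification.pushToken,
      notification: {
        title: notification.title,
        body: notification.body,
      },
    });

    // Delete the processed document so the collection only ever
    // contains notifications that still need to be sent.
    return snapshot.ref.delete();
  });

A nice property of this shape is that the notifications collection doubles as a dead-letter queue: anything that fails to send stays visible in it.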
Using the Manufacturing package (JAMS), I'm trying to write custom code to trigger a process after a Move transaction is released. I should be able to do a PXOverride of the MoveEntry class's Release method, but at runtime Acumatica complains that I can't, because Release is not a member of MoveEntry. The problem appears to be that MoveEntry is derived from MoveEntryBase, which is written in a way that can't be overridden.
I also tried to override the INReceiptEntry class's Release method, since releasing a Move transaction creates INReceipt records and releases them, so I thought I could trigger my process there after each INReceiptEntry Release call. However, when I override it, it isn't called when a Move transaction is released. I also thought about overriding the Persist method of INReceiptEntry and checking for Released=true, but every time Persist is called, Released=false. Possibly the cache isn't updated; I don't know.
Is there any way I can trigger a process immediately after a Move Transaction is finished Releasing?
ERP v17.210.0034
JAMS v17.210.0034.42 - 2018.06.06
You should be able to override AMReleaseProcess.ReleaseDocProc(AMBatch doc).
Just check the doc for the correct AMDocType, as this process handles all MFG transactions.
If you want to hook in on the IN side, it would be similar: override INDocumentRelease.ReleaseDocProcR(JournalEntry je, INRegister doc) and check the doc type.
Hooking the buttons on MoveEntry will not always work, as a user can also use the release process screen (the same setup found in Inventory). Likewise, the buttons on the IN entry screens are not used for releasing IN transactions (that path goes through INDocumentRelease). A sketch of the first override follows.
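Here is a hedged sketch of that override, assuming ReleaseDocProc is exposed as overridable in your JAMS build (verify the exact signature and the AMDocType constant names against your v17.210 assemblies):

public class AMReleaseProcessExt : PXGraphExtension<AMReleaseProcess>
{
    // Delegate matching the base method being overridden.
    public delegate void ReleaseDocProcDelegate(AMBatch doc);

    [PXOverride]
    public void ReleaseDocProc(AMBatch doc, ReleaseDocProcDelegate baseMethod)
    {
        // Let JAMS release the batch first.
        baseMethod(doc);

        // This process handles all MFG transactions, so filter to Move batches.
        // (AMDocType.Move is assumed; check the constant name in your build.)
        if (doc != null && doc.DocType == AMDocType.Move)
        {
            // Trigger your custom post-release process here.
        }
    }
}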
We have a C++ legacy application that connects to an Oracle 11g database. The application uses the Microsoft Data Access Objects (DAO) library to allow record browsing and modification. We also have some triggers on tables to track row updates and insertions.
The problem is that the triggers don't fire for the CLOB columns we have in our tables. The trigger fires for other columns, but for this one CLOB column it fires on neither update nor delete, even though I've created the trigger for all three operations: UPDATE, INSERT and DELETE.
Is there some option that manages triggers for CLOBs? Or some other setting that might be affecting this? Any ideas where I should look for a solution?
I found a possible explanation for this non-firing trigger:
Your trigger actually works -- WHEN it is fired!
The problem is -- you are NOT updating the table when you set the lob value. You might be modifying the lob contents but -- and this is key -- you are NOT modifying the lob locator in the table itself. The row values of the table are not changing, the trigger does NOT fire for the dbms_lob.copy (or write, or trim, or append, or ...)
I think the solution proposed there at askTom requires a specific procedure, but I haven't quite understood it.
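To make Tom's distinction concrete, here is a hedged sketch (my_table, clob_col and id are made-up names): writing through a fetched LOB locator modifies the LOB contents without updating the row, so row-level triggers stay silent, while a plain UPDATE of the column fires them.

-- Does NOT fire row-level triggers: only the LOB contents change;
-- the row itself (and the locator stored in it) is untouched.
DECLARE
  v_clob CLOB;
BEGIN
  SELECT clob_col INTO v_clob FROM my_table WHERE id = 1 FOR UPDATE;
  DBMS_LOB.WRITEAPPEND(v_clob, 9, 'more text');
  COMMIT;
END;
/

-- DOES fire row-level triggers: the column value is assigned,
-- so the row is updated and the trigger sees it.
UPDATE my_table SET clob_col = clob_col || ' more text' WHERE id = 1;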
I am trying to update an existing activity stream entry, e.g. the title of the entry.
Here I found the code for creating a new entry:
Link
But I could not find any reference on how to update an existing entry.
Additional information:
I use IBM Connections 4.5 and the IBM SBT
I create the entries with a system user in other users' streams, with the flag 'actionable' set.
Here are my questions:
Which URL?
Which method (PUT?)?
Which JSON?
And another question about the actionable flag:
How can I change the actionable flag for an entry of another user? Doing this for my own entries is described on slide 37 here: Link
Thank you so much!
Markus
OK, I think I fully understand the issue now. As suggested, this is not supported, but there is a way you can achieve the same result.
First of all, why isn't it supported . . .
Events are a point in time (and they were accurate at that point in time).
A new event on the same object supersedes it (as it's now the most interesting) but doesn't invalidate it (it can still be seen in history).
The Actionable view does not show a rolled-up view; instead it shows all events that are marked actionable (and there may be multiple actions related to any given object).
What you can do . . .
If you want to replace an entry in the Actionable view, you can remove the event from the actionable view (it is just removed from that view and could still be seen in event history)
You can then add another event to the actionable view (which as the latest event will also supersede events in other rolled up views)
Removing the actionable flag is described here: http://www-10.lotus.com/ldd/appdevwiki.nsf/xpDocViewer.xsp?lookupName=IBM+Connections+4.5+API+Documentation#action=openDocument&res_title=Support_for_Saved_and_Actionable_events&content=pdcontent
An activity stream object is treated as an immutable object in IBM Connections: you can Create, Delete and Read.
You can use a rollup-id in IBM Connections.
In order to address the scenario where a user posts a file and 200 people 'like' it, filling up their Activity Stream, rollup needs to be performed. This means:
Only the latest event on any given object is shown
The 2 most recent comments are returned.
http://www-10.lotus.com/ldd/appdevwiki.nsf/xpAPIViewer.xsp?lookupName=IBM+Connections+4.0+API+Documentation#action=openDocument&res_title=Support_for_Rollup&content=apicontent
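For illustration, here is a hedged sketch of what an event payload using a rollup id might look like, shown as a JavaScript object; the exact field names, in particular connections.rollupid, should be verified against the rollup documentation linked above:

// Example payload only; verify field names against the rollup docs above.
var event = {
  generator: { id: "myApp" },    // placeholder application id
  actor: { id: "@me" },
  verb: "post",
  title: "Entry updated",
  object: {
    id: "objectid123",           // the shared object id all related events use
    objectType: "note"
  },
  connections: {
    rollupid: "objectid123"      // events with the same rollupid are rolled up
  }
};

Posting a new event with the same rollup id as the original means users see only the latest event for that object in rolled-up views, while the older events remain in history.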
Just to extend the last answer: delete is not directly supported in the Activity Stream in IBM Connections, though a means of propagating deletes based on the deletion of an object was introduced in IBM Connections 4.5.
However, it does seem like submitting a new event with an appropriate rollup id is what you're looking for. That way users will see the latest event, but the history remains and can be seen if desired.
I need to delete some records related to the current record when it is deactivated. I can get the event when the record is deactivated, but I have looked around for some time on Google and this site for code to delete records in JavaScript and I can't find any, though I know there must be some out there.
Can anyone help?
Thanks
I would be alright with doing this with a plugin; all I would need to know is how to pick up that the record has been deactivated.
You can register a plugin on the SetState and SetStateDynamic messages (I recommend the Pre event in your scenario). Each of these messages passes an EntityMoniker in the InputParameters property bag, which refers to the record being deactivated.
In your code you will need to (see the sketch after this list):
Check that the new state in the SetState request is deactivated (since of course a record can usually be reactivated, and you presumably don't want to try deleting things then too)
Pick up the EntityMoniker from IPluginExecutionContext.InputParameters
Run your query to identify and delete related records
Exit the plugin to allow the SetState transaction to complete
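Here is a minimal sketch of those steps, assuming the related records live in a hypothetical new_relatedentity entity with a new_parentid lookup (both names are placeholders for your own schema):

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class DeleteRelatedOnDeactivate : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        // SetState passes the target record as an EntityMoniker plus the new State.
        if (!context.InputParameters.Contains("EntityMoniker") ||
            !context.InputParameters.Contains("State"))
            return;

        // Only act on deactivation (state 1 = Inactive for most entities);
        // reactivation should not delete anything.
        var state = (OptionSetValue)context.InputParameters["State"];
        if (state.Value != 1)
            return;

        var target = (EntityReference)context.InputParameters["EntityMoniker"];

        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Find and delete the related records (entity/attribute names are placeholders).
        var query = new QueryExpression("new_relatedentity");
        query.Criteria.AddCondition("new_parentid", ConditionOperator.Equal, target.Id);

        foreach (var related in service.RetrieveMultiple(query).Entities)
            service.Delete(related.LogicalName, related.Id);
    }
}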
If you really want to delete a record with JavaScript, there is a sample on MSDN.
It's a little long-winded (it's a CRUD example: create, retrieve, update & delete), but it should contain the information you need.
Note there is also an example on that page which doesn't use jQuery (if using jQuery is a problem).
That said, I think you will find this operation easier to implement, test and maintain with a plugin (so I would go for Greg's answer).
Additionally, a plugin will apply in all contexts; e.g. if you deactivate the record in a workflow, your JavaScript will not run, but a plugin will.