I'm trying to read in and loop through a configuration file that contains different MongoDB URIs, and then monitor their activity using mongo-oplog. I don't really know how to set up the listeners (such as for update, insert, and delete) for all of these databases dynamically. Any ideas as to how I can go about doing so?
This is what I ended up doing (in case it helps someone out in the future).
I stored each URI in a list, along with another list that contains its details (like DB and collection name), and it turns out that you can loop through the list and set up each mongo-oplog instance one by one.
Once that's done, the listeners remain active, and whatever actions you specified will run without you having to touch mongo-oplog again.
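For reference, here is a minimal sketch of that loop (the shape of the config entries and the handler bodies are my assumptions, not part of mongo-oplog itself):

    // One mongo-oplog instance per configured database.
    // Note: each URI must point at the `local` database, which is
    // where a replica set keeps its oplog.
    const MongoOplog = require('mongo-oplog');

    // Hypothetical config shape; adjust to however your file is laid out.
    const configs = [
      { uri: 'mongodb://host1:27017/local', ns: 'mydb.mycollection' },
      { uri: 'mongodb://host2:27017/local', ns: 'otherdb.othercollection' }
    ];

    const oplogs = configs.map(({ uri, ns }) => {
      const oplog = MongoOplog(uri, { ns }); // ns filters to one db.collection
      oplog.on('insert', doc => console.log(ns, 'insert', doc));
      oplog.on('update', doc => console.log(ns, 'update', doc));
      oplog.on('delete', doc => console.log(ns, 'delete', doc));
      oplog.on('error', err => console.error(ns, err));
      oplog.tail(); // start tailing; the listeners stay active from here on
      return oplog;
    });

Keeping the instances in a list also gives you a handle to call stop() or destroy() on each one later.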
In Maximo, can we delete a work order? The Select Action menu gives me:
BMXAA4612E - Cannot delete because it is, or had at one time been approved.
I have created numerous test work orders and it is getting difficult to track new work orders.
Short Answer: You can't.
Standard Maximo will not let you delete a work order past a certain point. It keeps this data around as a sort of audit trail (and because there could be a lot of dependencies to undo).
Bad Answer: With some database queries. You can, of course, delete just about anything if you start modifying Maximo's underlying database directly. The Work Order object has a number of related tables, though, so make sure you delete all referenced data from those as well. And there might be other places you need to update too, like a PM due date, depending on the situation.
In a normal Maximo environment, it is very common to have a lot of open work orders at any given time. You are going to need to develop ways to handle the fluff. Closing your old test work orders helps, because Maximo filters out closed work orders by default. Standard filters with some specialized data and saved queries are some other options.
Clear the attribute FIRSTAPPRSTATUS:

    update woactivity set FIRSTAPPRSTATUS = null
    where ...;  -- fill in a condition that selects your work orders

Then deleting should be possible.

(However, it is not in Maximo 7.6.)
For Maximo 7.6, change the work order status to WAPPR from the backend or through the MIF, and then use the MIF to delete the record(s).
I created an action that sets FIRSTAPPRSTATUS to null and an escalation whose condition selects the work orders in question. After the escalation has run, deletion is possible, provided the remaining conditions for deleting a work order are met.
I have a strange problem: I want to access documents in a different database (on the same server). My approach is very close to the one discussed here: http://www-10.lotus.com/ldd/nd85forum.nsf/DateAllFlatWeb/517ef6249d5b9fa6852575cc00503786?OpenDocument
I have only 3 docs in the source database: 2 were created directly, and one was copied from another database (these are just test documents). We have a generic view that lists those entries and calculates the links in a form like this:
http://localhost/database.nsf/xpMBK.xsp?action=openDocument&db=dominotest%2Ftest%2Fulcbs%21%21projects%2FFKIE%2FEinsMuB.nsf&view=AMBKEinsAll&documentId=781F14A98A699548C1257C3200316BAC
As you can see, we are using an XPage in the current database and pass it parameters that point the XPage to the source database (the notation is server!!database here), a view (the one I want to return to), and finally the UNID of the source document.
Now the strange part: I cannot open the copied document; I receive this error:

    NotesException: Invalid universal id
        lotus.domino.local.Database.getDocumentByUNID(Unknown Source)

Even better: if I copy a document that works within the same database (the current one), the copy cannot be opened anymore either!
What is going on, and can you give me a hint how to solve it?
Thanks in advance!
If, by "copied", you mean either manually copied and pasted into the target database or programmatically duplicated via copyToDatabase(), the new copy of the document will be assigned a new UNID; it is not guaranteed to have the same UNID as the original did (and, in my experience, it's rare that it preserves the original). If you're duplicating the document programmatically, be sure to check its new UNID afterward and use that ID in your URL calculation instead.
I've had a problem very similar to this in the past, and the answer turned out to be that I wasn't opening the NSF file that I thought I was opening. I was using NotesDatabase.OpenByReplicaID, and there were two replicas of the database on the server with different sets of documents. In that situation, Notes gets to pick one of the two replicas -- you have no control over it. The replica that was actually opening contained some of the documents corresponding to the UNIDs that I was trying to access, but some of them really were not there, and therefore the getDocumentByUNID() method was correct in throwing the "Invalid universal id" error. This was really, really hard to debug.
After I figured it out and removed the second replica from my server, the first thing I did (after testing and confirming the problem went away) was to write an agent that scans a server for duplicate replica IDs.
When a document is copied to another database via copyToDatabase(), its UNID is determined like this: one part of the UNID comes from the database, and one part is unique to the document. So when you copy a document from one database to another, the copy can end up with the same UNID each time: if no valid document with that combination already exists in the target database, the copy will get the same UNID every time; in other cases the document will get a new id.
More information can be found here: UNID and copyToDatabase
Thank you guys for your ideas!
But I was completely wrong #facepalm
The problem was that a colleague had coded a bean to access the other database, and I hadn't noticed that the config document pointed to a replica on another server. So when I copied the document within my database on my local server, it was fairly clear that the XPage could not find the copy, as it resided on the other machine.
Thank you anyway :)
I need to delete some records related to the current record when it is deactivated. I can catch the event when the record is deactivated, but I have searched for some time on Google and this site for code to delete records in JavaScript, and I can't find any, though I know there must be some out there.
Can anyone help?
Thanks
I would be alright with doing this with a plugin; all I would need to know is how to pick up that the record has been deactivated.
You can register a plugin on the SetState and SetStateDynamic messages (I recommend the Pre event in your scenario). Each of these messages will pass an EntityMoniker in the InputParameters property bag which refers to the record that is being deactivated.
In your code you will need to:
Check that the new state in the SetState request is deactivated (since of course a record can usually be reactivated and you don't want to try deleting things then too, presumably)
Pick up the EntityMoniker from IPluginExecutionContext.InputParameters
Run your query to identify and delete related records
Exit the plugin to allow the SetState transaction to complete
If you really want to delete a record with JavaScript, there is a sample on MSDN.
It's a little long-winded (it's a CRUD example: create, retrieve, update & delete), but it should contain the information you need.
Note there is also an example on that page which doesn't use jQuery (if using jQuery is a problem).
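For completeness, a rough sketch of the non-jQuery approach against the CRM 2011 OData endpoint (the AccountSet entity set and recordId are placeholders; the real SDK sample has much more error handling):

    // Delete one record via the OrganizationData.svc OData endpoint.
    var context = Xrm.Page.context;
    var serverUrl = context.getClientUrl ? context.getClientUrl() : context.getServerUrl();
    var recordId = "00000000-0000-0000-0000-000000000000"; // placeholder GUID

    var req = new XMLHttpRequest();
    req.open("DELETE", serverUrl + "/XRMServices/2011/OrganizationData.svc/AccountSet(guid'" + recordId + "')", true);
    req.setRequestHeader("Accept", "application/json");
    req.setRequestHeader("Content-Type", "application/json; charset=utf-8");
    req.onreadystatechange = function () {
      if (this.readyState === 4 && (this.status === 204 || this.status === 1223)) {
        // deleted (1223 is IE's rendering of 204)
      }
    };
    req.send();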
That said, I think you will find this operation easier to implement, test, and maintain with a plugin (so I would go for Greg's answer).
Additionally, a plugin will apply in all contexts: e.g. if you deactivate the record in a workflow, your JavaScript will not run, but a plugin will.
I have an NSTableView which is populated via a CoreData-backed NSArrayController. Users are able to edit any field they choose within the NSTableView. When they select the rows that they have modified and press a button, the data is sent to a third-party webservice. Provided the webservice accepts the updated values, I want to commit those values to my persistent store. If, however, the webservice returns an error (or simply fails to return), I want the edited fields to revert to their original values.
To complicate matters, I have a number of other editable controls, backed by CoreData, which do not need to resort to this behaviour.
I believe the solution to this problem revolves around creating a secondary managed object context, which I would use only for values edited within that particular NSTableView. But I'm confused as to how the two MOCs would interact with each other.
What's the best solution to this problem?
The easiest solution would be to implement Core Data's undo functionality. That way you make the changes to Core Data, but if the server returns an error, you just roll back the changes. See the Core Data docs for details.
I'm wondering what strategies people are using to handle the creation and editing of an entity in a master-detail setup. (Our app is an internet-enabled desktop app.)
Here's how we currently handle this: a form is created in a popup for the entity that needs to be edited, and we give it a copy of the object. When the user clicks the "Cancel" button, we close the window and ignore the object completely. When the user clicks the "OK" button, the master view is notified and receives the edited entity. It then copies the properties of the modified entity into the original entity using originalEntity.copyFrom(modifiedEntity). In case we want to create a new entity, we pass an empty entity to the popup, which the user can then edit as if it were an existing entity. The master view needs to decide whether to "insert" or "update" the entities it receives into the collection it manages.
I have some questions and observations on the above workflow:
who should handle the creation of the copy of the entity? (master or detail)
we use copyFrom() to prevent having to replace entities in a collection which could cause references to break. Is there a better way to do this? (implementing copyFrom() can be tricky)
new entities receive an id of -1 (which the server tier/hibernate uses to differentiate between an insert or an update). This could potentially cause problems when looking up (cached) entities by id before they are saved. Should we use a temporary unique id for each new entity instead?
Can anyone share tips & tricks or experiences? Thanks!
Edit: I know there is no absolute wrong or right answer to this question, so I'm just looking for people to share thoughts and pros/cons on the way they handle master/details situations.
There are a number of ways you could alter this approach. Keep in mind that no solution can really be "wrong" per se. It all depends on the details of your situation. Here's one way to skin the cat.
who should handle the creation of the copy of the entity? (master or detail)
I see the master as an in-memory list representation of a subset of persisted entities. I would allow the master to handle any changes to its list. The list itself could be a custom collection. Use an ItemChanged event to fire a notification to the master that an item has been updated and needs to be persisted. Fire a NewItem event to notify the master of an insert.
we use copyFrom() to prevent having to replace entities in a collection which could cause references to break. Is there a better way to do this? (implementing copyFrom() can be tricky)
Instead of using copyFrom(), I would pass the existing reference to the details popup. If you're using an enumerable collection to store the master list, you can pass the object returned from list[index] to the details window. The reference itself will be altered so there's no need to use any kind of Replace method on the list. When OK is pressed, fire that ItemChanged event. You can even pass the index so it knows which object to update.
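A bare-bones illustration of that flow (JavaScript here just for brevity; the event and method names are mine, not from any particular framework):

    // The master list hands out live references; the details popup edits
    // them in place, and the master hears about it via an ItemChanged event.
    class MasterList {
      constructor(items) {
        this.items = items;
        this.listeners = [];
      }
      onItemChanged(fn) { this.listeners.push(fn); }
      get(index) { return this.items[index]; } // pass this reference to the popup
      notifyItemChanged(index) {               // fired when OK is pressed
        this.listeners.forEach(fn => fn(this.items[index], index));
      }
    }

    const master = new MasterList([{ id: 1, name: 'foo' }]);
    master.onItemChanged(item => console.log('persist', item));
    const entity = master.get(0);
    entity.name = 'bar';          // the detail form edits the shared reference
    master.notifyItemChanged(0);  // no Replace needed; references stayed intact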
new entities receive an id of -1 (which the server tier/hibernate uses to differentiate between an insert or an update). This could potentially cause problems when looking up (cached) entities by id before they are saved. Should we use a temporary unique id for each new entity instead?
Are changes not immediately persisted? Use a Hibernate Session with the Unit of Work pattern to determine what's being inserted and what's being updated. There are more examples of Unit of Work out there. You might have to check out some blog posts by the .NET community if there's not much on the Java end. The concept is the same animal either way.
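Hibernate's Session already does this bookkeeping for you; done by hand, the pattern boils down to something like this sketch (all names hypothetical):

    // Minimal Unit of Work: track what is new vs. dirty, then flush in one go.
    class UnitOfWork {
      constructor() {
        this.newObjects = new Set();
        this.dirtyObjects = new Set();
      }
      registerNew(obj) { this.newObjects.add(obj); }
      registerDirty(obj) {
        if (!this.newObjects.has(obj)) this.dirtyObjects.add(obj);
      }
      commit(db) { // db is a placeholder persistence API
        this.newObjects.forEach(obj => db.insert(obj));
        this.dirtyObjects.forEach(obj => db.update(obj));
        this.newObjects.clear();
        this.dirtyObjects.clear();
      }
    }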
Hope this helps!
The CSLA library can help with this situation a lot.
However, if you want to implement it yourself:
You have a master object, the master object contains a list of child objects.
The detail form can edit a child object directly. Since these are all reference types, the master object is automatically updated.
The issue is knowing that the master object is dirty, and therefore should be persisted to your database or whatnot.
CSLA handles this with an IsDirty() property. In the master object you would query each child object to see if it is dirty, and if so persist everything (as well as tracking if the master object itself is dirty)
You can also handle this with the INotifyPropertyChanged interface.
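Roughly, the dirty-tracking idea looks like this (a simplified sketch; CSLA's real IsDirty machinery does much more):

    // The master is dirty if it changed itself or if any child changed.
    class Child {
      constructor(data) { Object.assign(this, data); this.dirty = false; }
      set(field, value) { this[field] = value; this.dirty = true; }
      get isDirty() { return this.dirty; }
    }

    class Master {
      constructor(children) { this.children = children; this.selfDirty = false; }
      get isDirty() {
        return this.selfDirty || this.children.some(c => c.isDirty);
      }
    }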
As for some of your other questions :
You want to separate your logic. The entity can handle storage of its own properties, and integrity rules for itself, but logic for how different object interact with each other should be separate. Look into patterns such as MVC or MVP.
In this case, creation of a new child object should either be in the master object, or should be in a separate business logic object that creates the child and then adds it to the parent.
For IDs, using GUIDs can save you quite a few problems, because you don't have to talk to the database to determine a correct ID. You can keep a flag on the object for whether it is new or not (and therefore should be inserted or updated).
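A minimal sketch of that idea (Node-flavored JavaScript; the db calls are placeholders):

    // Client-assigned GUID ids plus an isNew flag, so insert vs. update can
    // be decided without a round trip to the database for an id.
    const { randomUUID } = require('crypto'); // built in since Node 14.17

    function newEntity(props) {
      return { id: randomUUID(), isNew: true, ...props };
    }

    function save(entity, db) {
      if (entity.isNew) {
        db.insert(entity);
        entity.isNew = false;
      } else {
        db.update(entity);
      }
    }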
Again, CSLA handles all of this for you, but does have quite a bit of overhead.
Regarding undo on cancel: CSLA has n-level undo implemented, but if you are trying to do it by hand, I would either use your copyFrom() function or refresh the object's data from the persistence layer on cancel (re-fetch).
I just implemented such a model, though not using NHibernate; I use my own code to persist objects in an Oracle DB.
I used the master-detail concept in the same web form: I have a master entity grid, and on the detail action command I open a panel just below the clicked master record row.
In detail Add mode, I populate an empty entity whose id is generated as a negative number by a static field, and on the Save Detail button I save that entity into the master record's list of details in the ASP.NET session.
In detail Edit/View mode, I populate the detail panel with the selected detail through AJAX calls using jQuery and append that panel just below the clicked row.
On the Save button, I persist the master from the session (containing the list of details) to the database.
This worked well for me, as a master can have multiple details to fill in. Also, if you like, you can use a jQuery modal to pop up that panel instead of appending it below the row.
Hope it helps :)
Thanks,