Guidewire PolicyCenter: "The object you are trying to update was changed by another user. Please try your change again."

I'm getting this weird error: "The object you are trying to update was changed by another user. Please try your change again." I would like to know what causes it. There are no logs about it, no exception stack trace, and no information about this error in the documentation. I believe it has something to do with bundles, but I want to know the exact reason.

GW throws several related exceptions, for example ConcurrentDataChangeException or DBVersionConflictException, depending on the entity type. These occur when a bean is modified concurrently by two or more transactions (bundles).

This error usually happens when two transactions try to commit changes to the same data and the prior bundle has not yet been committed.
Let's understand it with an example: one user performs a policy change to add a contact (or some other business operation). At the same time, another user opens the same transaction in his GW PC UI and tries to perform some business operation; at that point GW throws this error on the UI, because the previous bundle is still not committed.
The error trace leads you to some OOTB Java classes, and I think you can get it from the PC logs (the server logs available from the PC UI).
Hope this clarifies things.

Actually, this happens because your object was updated by someone else in the DB between your read of the object and your attempt to write it back.
GW detects this by leveraging a version check on your database object.
The exception message actually tells you who made the conflicting update and when. There are no stack traces that will point you to the cause of the other update.
Root causes vary, from a distributed cache in a clustered environment going out of sync to some other party actually doing work on the same entities as you. So the fix really is case by case.
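To make the mechanism concrete, here is a minimal sketch of optimistic locking with a version column, written in C# purely for illustration. This is not Guidewire's actual code (its check lives in the OOTB persistence layer), and every name below is hypothetical:

using System;

class VersionConflictException : Exception
{
    public VersionConflictException(string message) : base(message) { }
}

class ContactRow
{
    public long Id;
    public int BeanVersion;   // bumped on every successful write
    public string Name;
}

class ContactStore
{
    // Stands in for: UPDATE contact SET ..., beanVersion = beanVersion + 1
    //                WHERE id = @id AND beanVersion = @expectedVersion
    // Returns the number of rows the UPDATE affected.
    int UpdateWhereVersionMatches(ContactRow row, int expectedVersion)
    {
        return 0; // pretend another transaction already bumped the version
    }

    public void Save(ContactRow row)
    {
        int affected = UpdateWhereVersionMatches(row, row.BeanVersion);
        if (affected == 0)
            throw new VersionConflictException(
                "The object you are trying to update was changed by another user.");
        row.BeanVersion++; // our write won; track the new version locally
    }
}

If the row's version in the database no longer matches the version we read, someone else got there first, and the only safe move is to re-read and retry, which is exactly what the error message asks the user to do.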

Related

Error issuing part using Maximo integration framework MXINVISSUE

We are upgrading from Maximo 7.5 to 7.6.1. Our web service that uses MXINVISSUEInterface is throwing an exception when we try to issue a part that is marked as a spare part and the work order has an asset. The exception says "BMXAA4195 - A value is required for the Organization field on the SPAREOBJECT object." The part is not in the SPAREPART table for the asset so it is trying to add it, but for some reason the ORGID is not populated from the MXINVISSUE_MATUSETRANSType object.
I re-generated the WSDL on the new server and rebuilt the solution, but after populating the new required field, I still get the same error.
Is there a system property that must be set? It works in 7.5, writing the record to MATUSETRANS and SPAREPART.
This sounds like a bug, so you might raise a Support Case with IBM about it. For a workaround until IBM releases a fix and you install said fix, consider the following options.
Can you set the Default Insert Site for the user using the web service?
Is it practical to put a Default Value on SPAREPART.ORGID?
Create an automation script called SPAREPART.NEW that will somehow figure out an ORGID to use. To "figure out", my first attempt would be to check whether the mbo has an owner with an ORGID and, assuming it does, use that.

XPages OpenLog - Logging to wrong database

Apologies Paul, this is a duplicate of the post I put on OpenNTF; however, the site has not allowed me to log in for the last 2 days to follow up, plus the wider audience of Stack might help me find someone with an identical issue.
To keep it short.
I have one OpenLog database in a folder structure: logs/xpageslog.nsf
During development, I could log to this database, for example using Paul Withers' XPages OpenLog Logger to log uncaught exceptions, with the following settings:
private String logDbName = "logs\\xpageslog.nsf"; // in OpenLogItem.java from OpenLogClass library
logDbName = "logs/xpages.nsf" // in OpenLogFunctions.ls
xsp.openlog.filepath=log/xpageslog.nsf // in xsp.properties
However, if I then change all of the above to simply point to xpageslog.nsf in the root of the server (a 2nd OpenLog database), errors still get logged to the first database.
I've tried building, cleaning, and re-compiling, all to no avail. It seems that somewhere, or somehow, the references to the original database are not being overwritten.
Any ideas?
It is good practice to use restart task http instead of tell http restart. The two commands have different effects.
As confirmed in comments, this solved the problem.
Some use tell http quit followed by load http; the effect is the same as with restart task http. On the other hand, a simple tell http restart does not fully initialize the http task; it's a kind of soft reset, and I recommend not using it.

Syncing Problems with Xamarin Forms and Azure Easy Tables

I've been working on a Xamarin.Forms application in Visual Studio using Azure for the backend for a while now, and I've come across a really strange issue.
Please note that I am following the methods mentioned in this blog.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution. What I mean is that if I create another solution that accesses the exact same backend, it can create and sync its own data, but it will not bring over the data generated by the other solution, even though they both seem to have the exact same access. This appears to be some kind of security feature/issue, but I can't quite make sense of it.
Has anyone else encountered this? Is there a workaround? This could potentially cause problems down the road if I ever want to create another solution that accesses the same system/data for whatever reason.
For some strange reason the PullAsync() method seems to have some bizarre problems. Any data that I create and sync will only be pulled by PullAsync() from that solution.
According to the tutorial you provided, I found that the related PullAsync is using Incremental Sync.
await coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());
Incremental Sync:
The first parameter to the pull operation is a query name that is used only on the client. If you use a non-null query name, the Azure Mobile SDK performs an incremental sync. Each time a pull operation returns a set of results, the latest updatedAt timestamp from that result set is stored in the SDK local system tables. Subsequent pull operations retrieve only records after that timestamp.
Here is my test, you could refer to it for a better understanding of Incremental Sync:
Client : await todoTable.PullAsync("todoItems-02", todoTable.CreateQuery());
The client SDK checks whether there is a record whose id equals deltaToken|{table-name}|{query-id} in the __config table of your SQLite local store.
If there is no record, the SDK sends a request like the following to pull your records:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'1970-01-01T00%3A00%3A00.0000000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Note: the $filter would be set as (updatedAt ge datetimeoffset'1970-01-01T00:00:00.0000000+00:00')
If there is such a record, the SDK picks up its value as the latest updatedAt timestamp and sends a request like this:
https://{your-mobileapp-name}.azurewebsites.net/tables/TodoItem?$filter=(updatedAt%20ge%20datetimeoffset'2017-06-26T02%3A44%3A25.3940000%2B00%3A00')&$orderby=updatedAt&$skip=0&$top=50&__includeDeleted=true
Per my understanding, if you run the same logical query with the same (non-null) query ID in different mobile clients, you need to make sure the local database is newly created by each client. Also, if you want to opt out of incremental sync, pass null as the query ID. In this case, all records are retrieved on every call to PullAsync, which is potentially inefficient. For more details, you could refer to How offline synchronization works.
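As a sketch of the two options, assuming the standard sync-table setup from the blog (the Coffee type and the wrapper class here are illustrative):

using Microsoft.WindowsAzure.MobileServices;
using Microsoft.WindowsAzure.MobileServices.Sync;
using System.Threading.Tasks;

public class Coffee
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class CoffeeSync
{
    private readonly IMobileServiceSyncTable<Coffee> coffeeTable;

    public CoffeeSync(MobileServiceClient client)
    {
        coffeeTable = client.GetSyncTable<Coffee>();
    }

    // Incremental sync: the non-null query ID keys a deltaToken in the
    // local store, so later pulls only fetch records updated after the
    // stored timestamp -- tracked per client database, not per backend.
    public Task PullIncrementalAsync() =>
        coffeeTable.PullAsync("allCoffees", coffeeTable.CreateQuery());

    // Full sync: a null query ID skips the deltaToken and pulls every
    // record on each call -- less efficient, but consistent when several
    // solutions share the same backend.
    public Task PullEverythingAsync() =>
        coffeeTable.PullAsync(null, coffeeTable.CreateQuery());
}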
Additionally, you could leverage Fiddler to capture network traces when you invoke PullAsync, in order to troubleshoot your issue.

Azure failed request error details

I've got an Azure app up and running, but various requests generate a 500 error. There are no other details that come back from the server to let me know exactly what the problem is. No stack trace, no error message. The only thing I get back from the server are the http headers indicating I've got an error.
I've done a little looking around but can't seem to find a way to retrieve the error details that I'm looking for. I've seen some articles that suggest that I enable logging, but I'm not sure 1) how to do that, 2) where those log files would go and 3) how to access said log files. I've seen posts that say to add a whole bunch of code to my application to enable logging, but all I'm looking for is an error message and a stack trace from a 500 error. Do I really have to add a bunch of code to my app to see that information? If not, how can I get at it?
Thanks!
Chris
The best long-term solution is to enable Azure Diagnostics, which I think is what you're referring to. If you want a quick-and-dirty solution, you can log errors out to a file and then RDP into the role instances to view them. This is very similar to what you would do on a server in your own datacenter.
You can create the logs however you like. I've used log4net and RollingFileAppenders with some success. Setting the logfile path to something like "\logs\mylog.txt" will place the logs in the E: drive of the VM. Note you'll still need code somewhere in your app to capture the error and write it to the log - typically the global error handler in Global.asax is a good place for that.
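A minimal sketch of that pattern, assuming log4net is configured with a RollingFileAppender elsewhere (the names here are illustrative):

// Global.asax.cs -- catch anything that escapes the app and write it to
// the rolling log file so you can read it later over RDP.
using System;
using System.Web;
using log4net;

public class Global : HttpApplication
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Global));

    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        Log.Error("Unhandled exception (surfaced to the client as HTTP 500)", ex);
    }
}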
You'll also have to enable RDP access to your role instances. There are many articles detailing how to do that. Here's one.
This is not a generally recommended approach, because the logs may disappear when the role recycles or is recreated. It's also a pain to RDP into all those different servers just to keep an eye on the logs.
One other warning: it's possible that the 500 error is due to some failure in your web.config. If that is the case, all the application-level error logging in the world isn't going to help you. So be sure that your web.config is valid, and also check the Windows Event Logs while you're RDP'd into the server.
A 500 internal server error is most often caused by some problem on the server: it was not able to understand the incoming request, or there was some problem in the configuration. So try to run the app locally and see if there is a problem. You can record errors in a database in your catch blocks / Application_Error, and you can also use tracing. Believe me, they are very helpful and worth a few extra lines of code.
For tracing, have a look here: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx
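For instance, a minimal System.Diagnostics-style trace call might look like the sketch below; where the output lands (file, Azure diagnostics, etc.) is decided by the trace listeners you configure in web.config, and the helper class here is illustrative:

using System;
using System.Diagnostics;

public static class ErrorTracer
{
    // Call this from your catch blocks or Application_Error.
    public static void Report(Exception ex)
    {
        Trace.TraceError("Request failed: {0}", ex);
    }
}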

Syncing Local Domain Entity Changes When Using CQRS

Let's suppose I have a basic CustomerEntity which has the following attributes:
Name
Surname
IsPreferred
Taking CQRS in its simplest form, I would have the following services:
CustomerCommandService
CustomerQueryService
If I call UpgradeToPreferred(CustomerEntity) on the CustomerCommandService, the store behind it will update and any queries will reflect this. So far so good.
My question is: how do I sync this back to the local entity I have? I have called the UpgradeToPreferred() method on the service, not on the entity, so the change will not be reflected in the local copy unless I query the CustomerQueryService and get the update, which seems a tad redundant.
..Or am I doing it wrong?
EDIT:
To clarify, the question is: if I am going through a command service to modify the entity in storage, and not calling the command on the entity directly or editing its properties, how should I handle the same modification on the entity I have in memory?
A few things are wrong here. Your command service takes a command, not an entity. So if you want to upgrade that customer to preferred, the command would be the intent (MakeCustomerPreferred) plus the data needed to perform the command (a customer identification would suffice). The service would load up the entity using that identification and invoke the MakePreferred behavior on the entity. The entity would be changed internally. Persistence would map it back to the database. Ergo, no need to resync with the database.
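A rough sketch of that flow, with hypothetical names:

using System;

// The command carries intent plus the data needed to perform it --
// just the customer's identity, not the whole entity.
public class MakeCustomerPreferredCommand
{
    public Guid CustomerId { get; set; }
}

public interface ICustomerRepository
{
    CustomerEntity Load(Guid id);
    void Save(CustomerEntity customer);
}

public class CustomerEntity
{
    public Guid Id { get; private set; }
    public bool IsPreferred { get; private set; }

    public CustomerEntity(Guid id) { Id = id; }

    // The behavior lives on the entity; state changes internally.
    public void MakePreferred() => IsPreferred = true;
}

public class CustomerCommandService
{
    private readonly ICustomerRepository repository;

    public CustomerCommandService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public void Handle(MakeCustomerPreferredCommand command)
    {
        var customer = repository.Load(command.CustomerId); // load by identity
        customer.MakePreferred();                           // invoke behavior
        repository.Save(customer);                          // persistence maps it back
    }
}

Since the caller only holds an identity and an intent, there is no separate in-memory copy to keep in sync; the next read goes through the query side.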
