VdmComplex Changes Not Working with PATCH - sap-cloud-sdk

Using the SAP B1 .edmx with version 3.39.0 of the SAP Cloud SDK, I am trying to update DeliveryNotes with new DocumentPackages. However, the list of DocumentPackages that is eventually sent by the update operation is empty.
Code:
var packagesUpdateDocument = new Document();
packagesUpdateDocument.setDocEntry(1);
var documentPackages = new ArrayList<DocumentPackage>();
var documentPackage = new DocumentPackage();
documentPackage.setNumber(10);
documentPackages.add(documentPackage);
packagesUpdateDocument.setDocumentPackages(documentPackages);
var updateDeliveryPackagesRequest = service.withServicePath("etc")
.updateDeliveryNotes(packagesUpdateDocument);
var updateDeliveryPackagesResponse = updateDeliveryPackagesRequest.tryExecute(serviceLayerDestination);
Looking at the logs of the service layer I can see this is the request which was eventually sent by the client:
PATCH /b1s/v2/DeliveryNotes(1)
{"DocEntry":1,"DocumentPackages":[{}],"#odata.type":"SAPB1.Document"}
From my understanding, PATCH requests will automatically disregard anything the generated client deems as 'unchanged.'
Printing the changed fields:
System.out.println(packagesUpdateDocument.getChangedFields());
Yields:
{
DocEntry=175017,
DocumentPackages=
[DocumentPackage
(
super=VdmObject(customFields={},
changedOriginalFields={}),
odataType=SAPB1.DocumentPackage,
number=10,
)
]
.....
}
I believe the DocumentPackage is not recording the fields which have changed, although I am not certain.
Is there a step I am missing or is this a feature gap?

As of SAP Cloud SDK 3.42.0 we support updating complex properties with PATCH out-of-the-box. See the release notes for more details.

Yes, this is currently a feature gap. PATCH will only consider properties of the root entity and navigation properties, while disregarding changes in complex properties.
Until that is supported, updating with PUT instead via the .replacingEntity() option should work, as sketched below.
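A minimal sketch of that workaround, assuming the same service, entity, and destination as in the question:
// Sketch only: .replacingEntity() switches the update strategy from PATCH to PUT,
// so the full entity payload (including complex properties) is sent.
var updateDeliveryPackagesRequest = service.withServicePath("etc")
    .updateDeliveryNotes(packagesUpdateDocument)
    .replacingEntity();
var updateDeliveryPackagesResponse = updateDeliveryPackagesRequest.tryExecute(serviceLayerDestination);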

Related

Microsoft.TeamFoundation.Build.WebApi Get Build Status Launched by PR policy

In our pipeline we programmatically create a pull request (PR). The branch being merged into has a policy on it that launches a build. This build takes a variable amount of time. I need to query the build status until it is complete (or long timeout) so that I can complete the PR, and clean up the temp branch.
I am trying to figure out how to get the build that was kicked off by the PR so that I can inspect its status using Microsoft.TeamFoundation.Build.WebApi. However, all overloads of BuildHttpClientBase.GetBuildAsync require a build ID, which I don't have. I would like to avoid using the Azure Build REST API. Does anyone know how I might get the build kicked off by the PR, without the build ID, using BuildHttpClientBase?
Unfortunately the documentation doesn't offer a lot of detail about functionality.
Answering the question you asked:
Finding a call that provides the single deterministic build ID for a pull request doesn't seem to be readily available.
As mentioned, you can use BuildHttpClient.GetBuildsAsync() to filter builds based on branch, repository, requesting user, and reason.
Adding the BuildReason.PullRequest value to the request is probably redundant, given the branch you will need to pass.
var pr = new GitPullRequest(); // the PR you've received after creation
var requestedFor = pr.CreatedBy.DisplayName;
var repo = pr.Repository.Id.ToString();
var branch = $"refs/pull/{pr.PullRequestId}/merge";
var reason = BuildReason.PullRequest;
var buildClient = connection.GetClient<BuildHttpClient>();
var blds = await buildClient.GetBuildsAsync("myProject",
branchName: branch,
repositoryId: repo,
requestedFor: requestedFor,
reasonFilter: reason,
repositoryType: "TfsGit");
In your question you mentioned wanting the build (singular) for the pull request, which implies that you only have one build definition acting as the policy gate. This method can return multiple builds, depending on the policy configurations on your target branch. If that were your setup, however, your question would presumably ask for all of those related builds, which you would wait on before completing the PR.
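From there, a hedged sketch of the polling loop the question describes, assuming the GetBuildsAsync call above returned exactly one gate build (the timeout and interval are illustrative):
var build = blds.Single(); // assumes exactly one policy build was found
var deadline = DateTime.UtcNow.AddMinutes(60); // illustrative long timeout
while (build.Status != BuildStatus.Completed && DateTime.UtcNow < deadline)
{
    await Task.Delay(TimeSpan.FromSeconds(30)); // arbitrary poll interval
    build = await buildClient.GetBuildAsync("myProject", build.Id); // re-fetch by the id we now have
}
// If the loop exited on completion, build.Result indicates whether the gate succeeded.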
I was looking into Policy Evaluations to see if there was a more straight forward way to get the id of the build being run via policy, but I haven't been able to format the request properly as per:
Evaluations are retrieved using an artifact ID which uniquely identifies the pull request. To generate an artifact ID for a pull request, use this template:
vstfs:///CodeReview/CodeReviewId/{projectId}/{pullRequestId}
Even using the value that is returned in the artifactId field on the PR with the GetById method results in a "Doesn't exist or Don't have access" response, so if someone else knows how to use this method, and whether it gives the exact build IDs being evaluated for the policy configurations, I'd be glad to hear it.
An alternative to get what you actually desire
It sounds like the only use you have for the branch policy is to run a "gate build" before completing the merge.
Why not create the PR with auto-complete?
autoCompleteSetBy (IdentityRef): If set, auto-complete is enabled for this pull request and this is the identity that enabled it.
var me = new IdentityRef(); // you obviously need to populate this with real values
var prClient = connection.GetClient<GitHttpClient>();
await prClient.CreatePullRequestAsync(new GitPullRequest()
{
CreatedBy = me,
AutoCompleteSetBy = me,
Commits = new GitCommitRef[0],
SourceRefName = "feature/myFeature",
TargetRefName = "master",
Title = "Some good title for my PR"
},
"myBestRepository",
true);

How can I manipulate the Audit screen (SM205510) through code

I'm trying to manipulate the Audit screen (SM205510) through code, using a graph object. The screen has processes that seem to run automatically when a screen ID is selected in the header. This is my code to create a new record:
using PX.Data;
using PX.Objects.SM;
var am = PXGraph.CreateInstance<AUAuditMaintenance>();
AUAuditSetup auditsetup = new AUAuditSetup();
auditsetup.ScreenID = "GL301000";
auditsetup = am.Audit.Insert(auditsetup);
am.Actions.PressSave();
Now, when I execute the code above, it creates a record in the AUAuditSetup table just fine. However, it doesn't automatically create the AUAuditTable records the way they are auto-generated in the screen (I realize that the records aren't in the database yet). How can I get the graph object to auto-generate the AUAuditTable records in the cache, the way the screen does?
I've tried looking at the source code for the Audit screen - but it just shows blank, like there's nothing there. I look in the code repository in Visual Studio and I don't see any file for AUAuditMaintenance either, so I can't see any process that I could run in the graph object that would populate those AUAuditTable records.
Any help would be appreciated.
Thanks...
If I had such a need to manipulate Audit screen records, I'd rather create my own graph and probably generate a DAC class. I'd also add one more column, UsrIsArtificial, set to false by default, and then manage the records as ordinary ones; each time my code added something, I'd set UsrIsArtificial to true.
You can hardly find how those records are managed at graph level, because they are created and handled not at graph level but at framework level. Also, think twice or even more about the design: writing directly into the audit history may confuse users about what was caused by a user and what was caused by your code. From that point of view, I would rather add one more table than add confusion to the existing one. A sketch of such a table's DAC follows.
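A minimal sketch of what that DAC could look like (the table and all field names, including UsrIsArtificial, are illustrative rather than an existing Acumatica DAC; older Acumatica versions declare the abstract field classes with IBqlField instead of the BQL base classes shown here):
using PX.Data;
using PX.Data.BQL;

[Serializable]
[PXCacheName("My Audit Record")]
public class MyAuditRecord : IBqlTable
{
    // Screen whose change is being recorded, e.g. "GL301000"
    [PXDBString(8, IsKey = true, IsFixed = true)]
    [PXUIField(DisplayName = "Screen ID")]
    public virtual string ScreenID { get; set; }
    public abstract class screenID : BqlString.Field<screenID> { }

    // Distinguishes records written by code from records entered by users
    [PXDBBool]
    [PXDefault(false)]
    [PXUIField(DisplayName = "Is Artificial")]
    public virtual bool? UsrIsArtificial { get; set; }
    public abstract class usrIsArtificial : BqlBool.Field<usrIsArtificial> { }
}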
Acumatica support provided this solution, which works beautifully (Hat tip!):
var screenID = "GL301000"; //"SO303000";
var g = PXGraph.CreateInstance<AUAuditMaintenance>();
//Set Header Current
g.Audit.Current = g.Audit.Search<AUAuditSetup.screenID>(screenID);
if (g.Audit.Current == null) //If no Current then insert
{
var header = new AUAuditSetup();
header.ScreenID = screenID;
header.Description = "Test Audit";
header = g.Audit.Insert(header);
}
foreach (AUAuditTable table in g.Tables.Select())
{
table.IsActive = true;
//Sets Current for Detail
g.Tables.Current = g.Tables.Update(table);
foreach (AUAuditField field in g.Fields.Select())
{
field.IsActive = false;
g.Fields.Update(field);
}
}
g.Actions.PressSave();

How to load document out of database instead of memory

Using Raven client and server #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
model.UpdateEntity(document); // overwrite document property values with those of the edit model
document.Update(store); // tell document to update itself if it passes some conflict checking
}
Then in document.Update, I try do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
// Resolve conflicts that occur by moving document period
}
store.Update(this);
Now, I run into the problem that old gets loaded from memory instead of the database and already contains the updated values. Thus, it never enters the conflict check.
I tried working around the problem by changing the Controller.Update method into:
public ActionResult Update(string id, EditModel model)
{
var store = provider.StartTransaction(false);
var document = store.Load<T>(id);
store.Dispose();
model.UpdateEntity(document); // overwrite document property values with those of the edit model
store = provider.StartTransaction(false);
document.Update(store); // tell document to update itself if it passes some conflict checking
}
This results in a Raven.Client.Exceptions.NonUniqueObjectException with the text: Attempted to associate a different object with id
Now, the questions:
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB leans on being able to serve documents from memory (which is faster). By refusing to associate a second object with the same id, hard-to-debug errors are prevented.
EDIT: See the comment of Rayen below. If you enable concurrency checking / provide an e-tag in the Store call, you can bypass the error.
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to also have an optional parameter to receive the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box. The only thing you need to do is to enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
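For example, a minimal sketch of a session with optimistic concurrency turned on (the document type and id are illustrative):
using (var session = documentStore.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;
    var doc = session.Load<Document>(id); // tracks the document together with its e-tag
    doc.Date = newDate;                   // apply the edit
    session.SaveChanges();                // throws ConcurrencyException if the document
                                          // was changed on the server in the meantime
}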

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script to, e.g., use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
var queryComponents = query.getComponents();
console.log('query components: ', queryComponents); // useful to see all information
var top = queryComponents.take;
var skip = queryComponents.skip;
// do whatever you want with those values, then call request.respond(...)
}
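For instance, a sketch of a read script that honors those values while paging through a different table with mssql (the table and SQL are made up for illustration; OFFSET/FETCH requires SQL Server 2012 or later):
function read(query, user, request) {
    var components = query.getComponents();
    var top = components.take;
    var skip = components.skip;
    // Illustrative only: page through another table using the same top/skip
    mssql.query(
        'SELECT * FROM OtherTable ORDER BY id OFFSET ? ROWS FETCH NEXT ? ROWS ONLY',
        [skip, top],
        {
            success: function (results) {
                request.respond(statusCodes.OK, results);
            }
        });
}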
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.

What to do when get "The model used to open the store is incompatible with the one used to create the store"?

I had a Core Data entity description and I created data with it. Then I changed the entity description, added a new one, and deleted the old one using the editor for the .xcdatamodeld file.
Now any of my Core Data code causes the error "The model used to open the store is incompatible with the one used to create the store". The detail is below. What should I do? I would prefer to remove everything in the data model and start a new one.
Thanks for any suggestion!
reason=The model used to open the store is incompatible with the one used to create the store}, {
metadata = {
NSPersistenceFrameworkVersion = 320;
NSStoreModelVersionHashes = {
Promotion = <472663da d6da8cb6 ed22de03 eca7d7f4 9f692d88 a0f273b7 8db38989 0d34ba35>;
};
NSStoreModelVersionHashesVersion = 3;
NSStoreModelVersionIdentifiers = (
);
NSStoreType = SQLite;
NSStoreUUID = "9D6F4C7E-53E2-476A-9829-5024691CED03";
"_NSAutoVacuumLevel" = 2;
};
Or if you're in dev mode, you can also just delete the app and run it again.
Deleting the app is sometimes not an option! Suppose your app has already been published: you can't just add a new entity to the database and go ahead; you need to perform a migration!
For those who don't want to dig into the documentation and are searching for a quick fix:
Open your .xcdatamodeld file
click on Editor
select Add model version...
Add a new version of your model (a new group of data models is added)
select the main file, open the file inspector (right-hand panel) and under Versioned Core Data Model select your new version of the data model as the current one
THAT'S NOT ALL! You should also perform a so-called "lightweight migration".
Go to your AppDelegate and find where the persistentStoreCoordinator is being created
Find this line: if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil URL:storeURL options:nil error:&error])
Replace the nil options with @{NSMigratePersistentStoresAutomaticallyOption: @YES, NSInferMappingModelAutomaticallyOption: @YES} (this is actually provided in the commented code in that method)
Here you go, have fun!
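Put together, the relevant fragment looks roughly like this (storeURL and error come from the surrounding generated boilerplate):
NSDictionary *options = @{NSMigratePersistentStoresAutomaticallyOption: @YES,
                          NSInferMappingModelAutomaticallyOption: @YES};
if (![_persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                               configuration:nil
                                                         URL:storeURL
                                                     options:options
                                                       error:&error]) {
    // Handle the migration failure; in development, logging and aborting is common
    NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
    abort();
}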
P.S. This only applies to lightweight migration. For your migration to qualify as a lightweight migration, your changes must be confined to this narrow band:
Add or remove a property (attribute or relationship).
Make a nonoptional property optional.
Make an optional attribute nonoptional, as long as you provide a default value.
Add or remove an entity.
Rename a property.
Rename an entity.
Answer borrowed from Stas
If this is a non-production app, just delete your local database (appname.sqlite) and restart the app.
I find I'm always doing this, and so provide the following additional detail:
Under Xcode 4 (4.3.2) you should find your data store here:
/Users/~/Library/Application Support/iPhone Simulator/simulatorVersion/Applications/yourAppIdentifier/Documents
Or you can use Spotlight, if you first enable searching for System Files; I've found it fastest to save such a search to the menu bar.
Delete your app on the simulator and restart:
On the simulator, go to Hardware -> Home.
Click and hold the mouse button on your application icon.
Click on the "X" in the app icon to delete it.
Go back to Xcode and restart your application (Command+R).
P.S.: If the error appears again, review your code; the problem is likely a discrepancy between what you are trying to list and the data model you have.
Reset your simulator and run again. Running with a different device in the simulator also works without resetting: if you are running with an iPhone 6s simulator and you switch to a 6s Plus, it will still work.
If running on a phone, make sure to delete the app and rerun it.
I faced the same issue using Xcode 7 beta 1, and the following action resolved it:
From the menu, click Window > Projects, select the project on the left-hand side, and click the Delete button on the right side.
If it still doesn't work, reset the simulator and run the app.
