I need to have a menu structure that changes depending on what page the user is currently viewing. Hence I need to disable caching for certain nodes as these may change for every request. How do I do this?
I have tried setting up the DynamicNode in the following way:
var dynamicNode = new DynamicNode()
{
    Title = title,
    Action = actionName,
    Controller = controllerName,
    RouteValues = routeValues,
    Attributes = attributes,
    ChangeFrequency = ChangeFrequency.Always,
    LastModifiedDate = DateTime.Now,
    UpdatePriority = UpdatePriority.Automatic,
};
But that seems to have no effect.
I have also set cacheDuration="0" in the Web.config file, with no effect.
I've also set the following in the GetCacheDescription method of my DynamicNodeProvider:
return new CacheDescription("GuideDynamicNodeProvider")
{
    AbsoluteExpiration = DateTime.Now,
};
Also with no effect.
Am I using these settings incorrectly? The documentation on this aspect is rather lacking.
Disabling caching for specific nodes is not supported. However, you can disable caching for the entire sitemap by setting the cache duration to 0.
If what you are trying to do is refresh nodes when the data changes, you can use the SiteMapCacheReleaseAttribute or call SiteMaps.ReleaseSiteMap() when the data is updated.
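For example, a minimal sketch assuming the v4 API (the controller, action, and OnGuideDataChanged hook are illustrative):

using System.Web.Mvc;
using MvcSiteMapProvider;
using MvcSiteMapProvider.Web.Mvc.Filters;

public class GuideController : Controller
{
    // Releases the cached sitemap after this action completes, so the next
    // request rebuilds the dynamic nodes.
    [HttpPost]
    [SiteMapCacheRelease]
    public ActionResult Edit(int id)
    {
        // ... persist the changes that affect the menu ...
        return RedirectToAction("Index");
    }

    // Or release it imperatively wherever the data is updated:
    public void OnGuideDataChanged()
    {
        SiteMaps.ReleaseSiteMap();
    }
}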
On the other hand, if data is updated in your database from a source that is not under your control, you can implement ICacheDependency yourself to create a SQL-based cache dependency and then inject it using DI. Have a look at the RuntimeFileCacheDependency class to see how that can be done; a rough sketch follows.
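Here is a rough sketch (the single-member ICacheDependency shape and the class name RuntimeSqlCacheDependency are assumptions mirroring the file-based implementation; verify against the source of your MvcSiteMapProvider version):

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Runtime.Caching;
using MvcSiteMapProvider.Caching;

// Wiring up the SqlDependency itself (SqlDependency.Start, an executed
// SqlCommand, Service Broker) is omitted here.
public class RuntimeSqlCacheDependency : ICacheDependency
{
    private readonly SqlDependency sqlDependency;

    public RuntimeSqlCacheDependency(SqlDependency sqlDependency)
    {
        this.sqlDependency = sqlDependency;
    }

    public object Dependency
    {
        get
        {
            // Wrap the SqlDependency in a ChangeMonitor and return it in a
            // list, mirroring what RuntimeFileCacheDependency does.
            return new List<ChangeMonitor>
            {
                new SqlChangeMonitor(this.sqlDependency)
            };
        }
    }
}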
Note that the reason the ChangeMonitor is put into a list is so it will support the RuntimeCompositeCacheDependency, which allows you to configure multiple cache dependencies for the same cache.
Using the SAP B1 .edmx with version 3.39.0 of the SAP Cloud SDK and trying to update DeliveryNotes with new DocumentPackages. However, the list of DocumentPackages that eventually gets passed by the execution of the update operation is empty.
Code:
var packagesUpdateDocument = new Document();
packagesUpdateDocument.setDocEntry(1);
var documentPackages = new ArrayList<DocumentPackage>();
var documentPackage = new DocumentPackage();
documentPackage.setNumber(10);
documentPackages.add(documentPackage);
packagesUpdateDocument.setDocumentPackages(documentPackages);
var updateDeliveryPackagesRequest = service.withServicePath("etc")
.updateDeliveryNotes(packagesUpdateDocument);
var updateDeliveryPackagesResponse = updateDeliveryPackagesRequest.tryExecute(serviceLayerDestination);
Looking at the logs of the service layer I can see this is the request which was eventually sent by the client:
PATCH /b1s/v2/DeliveryNotes(1)
{"DocEntry":1,"DocumentPackages":[{}],"#odata.type":"SAPB1.Document"}
From my understanding, PATCH requests will automatically disregard anything the generated client deems as 'unchanged.'
Printing the changed fields:
System.out.println(packagesUpdateDocument.getChangedFields());
Yields:
{
DocEntry=175017,
DocumentPackages=
[DocumentPackage
(
super=VdmObject(customFields={},
changedOriginalFields={}),
odataType=SAPB1.DocumentPackage,
number=10,
)
]
.....
}
I believe the DocumentPackage is not recording which fields have changed, although I am not certain.
Is there a step I am missing or is this a feature gap?
As of SAP Cloud SDK 3.42.0 we support updating complex properties with PATCH out-of-the-box. See the release notes for more details.
Yes, this is currently a feature gap. PATCH only considers properties of the root entity and navigation properties, while disregarding changes in complex properties.
Until that is supported, updating with PUT via the .replacingEntity() option should work instead, as sketched below.
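For example, reusing the request from the question (only the replacingEntity() call is new; this assumes the generated fluent update helper for your service exposes it):

var updateRequest = service.withServicePath("etc")
        .updateDeliveryNotes(packagesUpdateDocument)
        .replacingEntity(); // switches the HTTP method from PATCH to PUT
var updateResponse = updateRequest.tryExecute(serviceLayerDestination);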
I'm trying to manipulate the Audit screen (SM205510) through code, using a graph object. The screen has processes that seem to run when a screen ID is selected in the header. This is my code to create a new record:
using PX.Data;
using PX.Objects.SM;
var am = PXGraph.CreateInstance<AUAuditMaintenance>();
AUAuditSetup auditsetup = new AUAuditSetup();
auditsetup.ScreenID = "GL301000";
auditsetup = am.Audit.Insert(auditsetup);
am.Actions.PressSave();
Now, when I execute the code above, it creates a record in the AUAuditSetup table just fine, but it doesn't automatically create the AUAuditTable records the way the screen auto-generates them (I realize the records aren't in the database yet). How can I get the graph object to auto-generate the AUAuditTable records in the cache the way the screen does?
I've tried looking at the source code for the Audit screen, but it just shows blank, as if there's nothing there. I've also looked in the code repository in Visual Studio and don't see any file for AUAuditMaintenance either, so I can't find any process that I could run in the graph object to populate those AUAuditTable records.
Any help would be appreciated.
Thanks...
If I had such a need, to manipulate Audit screen records, I'd rather create my own graph and probably generate a DAC class. I'd also add one more column, UsrIsArtificial, set to false by default, and then manage the records as ordinary ones; each time I added a record from code, I'd set UsrIsArtificial to true.
You can hardly find how those records are managed at graph level, because they are created and handled not at graph level but at framework level. Also, think twice (or even more) about the design: writing directly into audit history may confuse users about what was caused by a user and what was caused by your code. From that point of view, I would rather add one more table than add confusion to the existing one.
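A hypothetical sketch of such a flag on a custom DAC (the table and field names are illustrative, not from the product):

using System;
using PX.Data;

[Serializable]
public class MyAuditRecord : IBqlTable
{
    public abstract class usrIsArtificial : IBqlField { }

    // Marks records created from code rather than by a user.
    [PXDBBool]
    [PXDefault(false)]
    [PXUIField(DisplayName = "Created By Code")]
    public virtual bool? UsrIsArtificial { get; set; }
}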
Acumatica support provided this solution, which works beautifully (Hat tip!):
var screenID = "GL301000"; //"SO303000";
var g = PXGraph.CreateInstance<AUAuditMaintenance>();
// Set header current.
g.Audit.Current = g.Audit.Search<AUAuditSetup.screenID>(screenID);
if (g.Audit.Current == null) // If no current record, insert one.
{
    var header = new AUAuditSetup();
    header.ScreenID = screenID;
    header.Description = "Test Audit";
    header = g.Audit.Insert(header);
}
foreach (AUAuditTable table in g.Tables.Select())
{
    table.IsActive = true;
    // Sets current for detail.
    g.Tables.Current = g.Tables.Update(table);
    foreach (AUAuditField field in g.Fields.Select())
    {
        field.IsActive = false;
        g.Fields.Update(field);
    }
}
g.Actions.PressSave();
I'm using Raven client and server build #30155. I'm basically doing the following in a controller:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    document.Update(store); // tell document to update itself if it passes some conflict checking
}
Then in document.Update, I try to do this:
var old = store.Load<T>(this.Id);
if (old.Date != this.Date)
{
    // Resolve conflicts that occur by moving the document period.
}
store.Update(this);
Now, I run into the problem that old is loaded from the session's in-memory cache instead of from the database, so it already contains the updated values. Thus, it never enters the conflict check.
I tried working around the problem by changing the Controller.Update method into:
public ActionResult Update(string id, EditModel model)
{
    var store = provider.StartTransaction(false);
    var document = store.Load<T>(id);
    store.Dispose();
    model.UpdateEntity(document); // overwrite document property values with those of the edit model
    store = provider.StartTransaction(false);
    document.Update(store); // tell document to update itself if it passes some conflict checking
}
This results in me getting a Raven.Client.Exceptions.NonUniqueObjectException with the text: Attempted to associate a different object with id
Now, the questions:
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
Why would Raven care if I try and associate a new object with the id as long as the new object carries the proper e-tag and type?
RavenDB relies on being able to serve documents from memory (which is faster). By refusing to associate a second object with an id it is already tracking, it prevents hard-to-debug errors.
EDIT: See Rayen's comment below. If you enable concurrency checking / provide an etag in the Store call, you can bypass the error.
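A sketch of that approach, assuming the RavenDB 3.x client and a plain IDocumentStore rather than the wrapper from the question (the etag must have been captured when the document was originally loaded):

using (var session = store.OpenSession())
{
    // Re-associates the detached object while keeping the concurrency check;
    // the write fails if the server-side etag differs.
    session.Store(document, etag, id);
    session.SaveChanges();
}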
Is it possible to load a document in its database state (overriding default behavior to fetch document from memory if it exists there)?
Apparently not.
What is a good solution to getting the document.Update() to work (preferably without having to pass the old object along)?
I went with refactoring the document.Update method to also have an optional parameter to receive the old date period, since #1 and #2 don't seem possible.
RavenDB supports optimistic concurrency out of the box. The only thing you need to do is enable it:
session.Advanced.UseOptimisticConcurrency = true;
See:
http://ravendb.net/docs/article-page/3.5/Csharp/client-api/session/configuration/how-to-enable-optimistic-concurrency
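A minimal sketch of that setting in context (the document type and property names are illustrative):

using (var session = store.OpenSession())
{
    session.Advanced.UseOptimisticConcurrency = true;

    var document = session.Load<MyDocument>(id);
    document.Date = newDate;

    // Throws ConcurrencyException if the document changed on the server
    // between Load and SaveChanges.
    session.SaveChanges();
}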
Is there a means of reconfiguring the GridCacheConfiguration at runtime for GridGain?
The end goal is to be able to add a grid cache at runtime after having started up the Grid.
final GridConfiguration gridConfiguration = new GridConfiguration();
gridConfiguration.setMarshaller(new GridOptimizedMarshaller());
Grid grid = GridGain.start(gridConfiguration);
...
// later on
GridCacheConfiguration newCacheConfig = ...; // defines newConfig
grid.configuration().setCacheConfiguration(newCacheConfig);
grid.cache("newConfig"); // <-- throws a cache not defined error!
Adding caches usually has to do with handling different data types (generics), which GridGain addresses with GridCacheProjections, like so:
GridCacheProjection<Integer, MyType> prj = cache.projection(Integer.class, MyType.class);
You can create as many different projections from the same cache as needed. In addition to specifying data types, you can also use projections to turn cache flags on and off, or to provide a filtered view of the cache with projection filters.
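For instance, a hedged sketch of a filtered projection (GridBiPredicate and the isActive() predicate are assumptions against the GridGain 6.x API; verify the names for your version):

// Builds a filtered view over the typed projection from above.
GridCacheProjection<Integer, MyType> filtered = prj.projection(
    new GridBiPredicate<Integer, MyType>() {
        @Override public boolean apply(Integer key, MyType val) {
            return val.isActive(); // isActive() is a hypothetical predicate
        }
    });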
Given the following code:
listView.ItemsSource =
App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server side script to e.g. use mssql to query another table instead. What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
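If the script queries another table, a hedged sketch of honoring those values with the mssql object available to Azure Mobile Services server scripts ('OtherTable' and the ordering column are hypothetical):

function read(query, user, request) {
    var components = query.getComponents();
    var top = components.take || 50;  // default page size if none was sent
    var skip = components.skip || 0;
    mssql.query(
        'SELECT * FROM OtherTable ORDER BY id ' +
        'OFFSET ? ROWS FETCH NEXT ? ROWS ONLY',
        [skip, top],
        {
            success: function (results) {
                request.respond(statusCodes.OK, results);
            }
        });
}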
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.