Persistent Store: save boolean values as objects

I am developing a BlackBerry App and saving a lot of values as String and boolean in the Persistent Store. I am aware that String and boolean values do not get deleted from the Persistent Store when an App is deleted/uninstalled from the handset. I am also aware that in order to have these values deleted, I need to store them in a project-specific class. I have been struggling with that, so I would like a temporary workaround. I am saving a boolean value in order to determine which screen the App should load:
PersistentStoreHelper.persistentHashtable.put("flagged", Boolean.TRUE);

if (PersistentStoreHelper.persistentHashtable.containsKey("flagged")) {
    boolean booleanVal = ((Boolean) PersistentStoreHelper.persistentHashtable.get("flagged")).booleanValue();
    if (booleanVal) {
        pushScreen(new MyScreen());
    }
} else {
    pushScreen(new MyScreen(false));
}
Is it possible to store this boolean value as an Object so that it gets deleted when the App is uninstalled/deleted? Please help, and comment if I am missing anything.

Once again, I'd recommend you change your PersistentStoreHelper to the version I posted online.
You certainly can get Boolean and String values to be deleted from the persistent store when your app is uninstalled, but they need to be inside an object that can only exist in your app.
For example:
PersistentStoreHelper store = PersistentStoreHelper.getInstance();
store.put("flagged", Boolean.TRUE);
// commit will save changes to the `flagged` variable
store.commit();
and then retrieve it later with:
PersistentStoreHelper store = PersistentStoreHelper.getInstance();
boolean isFlagged = ((Boolean)store.get("flagged")).booleanValue();
The key that makes this work is that, inside my PersistentStoreHelper class, everything is saved in a subclass of Hashtable that is unique to my/your app (MyAppsHashtable). You need to store your String or Boolean objects inside that app-unique Hashtable subclass, not in a normal java.util.Hashtable.
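For illustration, here is a minimal sketch of what such a helper can look like, assuming a singleton PersistentStoreHelper wrapping an app-unique Hashtable subclass. This is not the exact code I posted; the store KEY value below is a placeholder, and you should generate your own unique long:

// Minimal sketch: a singleton helper that keeps everything inside an
// app-unique Hashtable subclass, so the persistent store entry is
// removed when the app is uninstalled.
import java.util.Hashtable;
import net.rim.device.api.system.PersistentObject;
import net.rim.device.api.system.PersistentStore;

// This subclass exists only in this app, which is what ties the stored
// data to the app's lifetime.
class MyAppsHashtable extends Hashtable {
}

public class PersistentStoreHelper {
    // Placeholder key: generate your own unique long value.
    private static final long KEY = 0x2ba5f8081e6a548cL;
    private static PersistentStoreHelper instance;

    private PersistentObject persistentObject;
    private MyAppsHashtable persistentHashtable;

    private PersistentStoreHelper() {
        persistentObject = PersistentStore.getPersistentObject(KEY);
        Object contents = persistentObject.getContents();
        if (contents instanceof MyAppsHashtable) {
            persistentHashtable = (MyAppsHashtable) contents;
        } else {
            persistentHashtable = new MyAppsHashtable();
            persistentObject.setContents(persistentHashtable);
            persistentObject.commit();
        }
    }

    public static synchronized PersistentStoreHelper getInstance() {
        if (instance == null) {
            instance = new PersistentStoreHelper();
        }
        return instance;
    }

    public void put(String key, Object value) {
        persistentHashtable.put(key, value);
    }

    public Object get(String key) {
        return persistentHashtable.get(key);
    }

    public void commit() {
        persistentObject.commit();
    }
}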
Again, please make this easy on yourself, and use the code I posted.
Note: also, you probably know this, but you may need to reboot your device to see the app, and its persistent store data, fully deleted.
Update
If you had changed the original PersistentStoreHelper class that I posted online because you wanted access to the containsKey() method, or other methods of the Hashtable class, you can solve that problem by simply adding code like this:
public boolean containsKey(String key) {
    return persistentHashtable.containsKey(key);
}
to the PersistentStoreHelper class. Please don't make persistentHashtable a public static member. As you need to use more Hashtable methods, just add wrappers for them, as shown above with containsKey(). Of course, you can achieve the same thing as containsKey() by simply using this code:
boolean containsFlagged = (store.get("flagged") != null);
Update 2
If you get stuck with old persistent data that you need to clean out, you can modify PersistentStoreHelper to detect and correct the situation, like this (hat tip to @adwiv for suggestions):
private PersistentStoreHelper() {
    persistentObject = PersistentStore.getPersistentObject(KEY);
    Object contents = persistentObject.getContents();
    if (contents instanceof MyAppsHashtable) {
        persistentHashtable = (MyAppsHashtable) contents;
    } else {
        // store might be empty, or contents could be the wrong type
        persistentHashtable = new MyAppsHashtable();
        persistentObject.setContents(persistentHashtable);
        persistentObject.commit();
    }
}


Fetching OpenIdConnectConfiguration while offline/no connection to AuthServer

I've been working on how to save the OpenIdConnectConfiguration locally for the odd case that the AuthServer is not reachable but the frontend client (e.g. a phone) still has a valid refresh token, which still needs to be validated again when signing in. It also needs to be saved locally to a file for the case that the backend (e.g. WCF) has restarted due to an update or the frequent restarts it has (once a day).
What I've done so far: I've saved the JSON object of the ".well-known/openid-configuration" to a file/variable, and now I want to create the OpenIdConnectConfiguration object from it.
OpenIdConnectConfiguration.Create(json) does a lot of the work, but the signingKeys do not get created. I think maybe it's because the authorization endpoint needs to be created in some other manner?
Or maybe I'm doing this the wrong way and there is another solution to this issue. I'm working in C#.
Edit: I know there are some caveats to what I'm doing. I need to check once in a while to see if the public key has been changed, but security-wise it should be fine to save the configuration because it's already public. I only need the public key to validate/sign the JWT I get from the user and nothing more.
Figured out a solution after looking through OpenIdConnectConfiguration.cs on the official GitHub.
When fetching the OpenIdConnectConfiguration the first time, use Write() to serialize it to a JSON string and save that to a file.
Afterwards, when loading the file, use Create() to build the OpenIdConnectConfiguration again from the JSON string. (This had the issue of not restoring the signingKeys, as mentioned in the question, but alas! there is a fix.)
Lastly, to fix the issue with the signingKeys not being created (this is what I found out from the GitHub class): all we need to do is loop through the JsonWebKeySet and create them the same way the class does. We already have all the information needed from the initial load and therefore only need to create them again.
I'll leave the code example below of what I did. I still need to handle checking whether the key has been changed/expired, which is the next step I'll be tackling.
interface IValidationPersistence
{
    void SaveOpenIdConnectConfiguration(OpenIdConnectConfiguration openIdConfig);
    OpenIdConnectConfiguration LoadOpenIdConnectionConfiguration();
}

class ValidationPersistence : IValidationPersistence
{
    private readonly string _windowsTempPath = Path.GetTempPath();
    private readonly string _fileName = "TestFileName";
    private readonly string _fullFilePath;

    public ValidationPersistence()
    {
        _fullFilePath = _windowsTempPath + _fileName;
    }

    public OpenIdConnectConfiguration LoadOpenIdConnectionConfiguration()
    {
        FileService fileService = new FileService();
        OpenIdConnectConfiguration openIdConfig = OpenIdConnectConfiguration.Create(
            fileService.LoadFromJSONFile<string>(_fullFilePath));

        // Recreate the signing keys from the JsonWebKeySet, since Create()
        // does not restore them on its own.
        foreach (SecurityKey key in openIdConfig.JsonWebKeySet.GetSigningKeys())
        {
            openIdConfig.SigningKeys.Add(key);
        }

        return openIdConfig;
    }

    public void SaveOpenIdConnectConfiguration(OpenIdConnectConfiguration openIdConfig)
    {
        FileService fileService = new FileService();
        fileService.WriteToJSONFile(OpenIdConnectConfiguration.Write(openIdConfig), _fullFilePath);
    }
}
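For context, here is a hypothetical sketch of how the class above could be wired up. It assumes the usual usings for Microsoft.IdentityModel.Protocols and Microsoft.IdentityModel.Protocols.OpenIdConnect; the metadata URL and method name are placeholders, not from the original post:

// Hypothetical wiring of the persistence class above; error handling is
// reduced to the offline fallback described in the question.
public async Task<OpenIdConnectConfiguration> GetConfigWithFallbackAsync()
{
    IValidationPersistence persistence = new ValidationPersistence();
    try
    {
        var manager = new ConfigurationManager<OpenIdConnectConfiguration>(
            "https://authserver.example.com/.well-known/openid-configuration",
            new OpenIdConnectConfigurationRetriever());
        OpenIdConnectConfiguration liveConfig =
            await manager.GetConfigurationAsync(CancellationToken.None);

        // AuthServer reachable: refresh the cached copy on disk.
        persistence.SaveOpenIdConnectConfiguration(liveConfig);
        return liveConfig;
    }
    catch (Exception)
    {
        // AuthServer unreachable: fall back to the locally saved configuration,
        // which restores the signing keys as shown above.
        return persistence.LoadOpenIdConnectionConfiguration();
    }
}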

Data View Customization in Extension

I have overridden the data view for a custom graph in an extension, which returns the correct data without issue, both by re-declaring the view and by using the delegate object technique. The issue is that when I do, the AllowSelect/AllowDelete modifications on the view in the primary graph stop working; once I comment out the override, the logic works as normal.
Not sure what I'm missing, but any thoughts would be appreciated.
Edit: To clarify, on the main graph, without the extension, the data retrieval and the Allow... settings work without issue:
public class FTTicketEntry : PXGraph<FTTicketEntry, UsrFTHeader>
{
    public PXSelect<UsrFTHeader> FTHeader;
    public PXSelect<UsrFTGridLabor,
        Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>>> FTGridLabor;
And with the extension, the data is returned correctly from the modified view, but the Allow... settings do not work from the main graph, only when set in the extension:
public class FTTicketEntryExtension : PXGraphExtension<FTTicketEntry>
{
    public PXSelect<UsrFTGridLabor,
        Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>,
            And<UsrFTGridLabor.projectID, Equal<Current<UsrFTHeader.projectID>>,
            And<UsrFTGridLabor.taskID, Equal<Current<UsrFTHeader.taskID>>>>>> FTGridLabor;
I have also tried the other approach on the extension, with the same results: the data is filtered correctly, but the Allow... commands fail.
public PXSelect<UsrFTGridLabor,
    Where<UsrFTGridLabor.ticketNbr, Equal<Current<UsrFTHeader.ticketNbr>>>> FTGridLabor;

public virtual IEnumerable fTGridLabor()
{
    foreach (PXResult<UsrFTGridLabor> record in Base.FTGridLabor.Select())
    {
        UsrFTGridLabor p = (UsrFTGridLabor)record;
        if (p.ProjectID == Base.FTHeader.Current.ProjectID && p.TaskID == Base.FTHeader.Current.TaskID)
        {
            yield return record;
        }
    }
}
My main concern with not wanting to use PXSelectReadOnly is that there is a status field on the header which drives when certain combinations of the conditions are required; they are applied in the RowSelected events, sometimes all and sometimes none. The main issue is that I obviously don't want to have to replicate all of the UI logic in the extension, when overriding the view was the main intent of the extension for this screen.
Appreciate the assistance, and hopefully you see something I'm overlooking or have missed
Thanks
Every BLC instance stores all actual data views and actions within two collections: Views and Actions. Whenever you customize a data view or an action with a BLC extension, the original data view/action gets replaced in the appropriate collection by your custom object declared within the extension class. After the original data view or action has been removed from the appropriate collection, it's quite obvious that any change made to the original object will have no effect, since the original object is not used by the BLC anymore.
The easiest way to access the actual object from either of these two collections is as follows: Views["FTGridLabor"].Allow... = value;
Alternatively, you might operate with AllowInsert, AllowUpdate and AllowDelete properties on the cache level: FTGridLabor.Cache.Allow... = value;
By changing AllowXXX properties on the cache level, you completely eliminate the need to set AllowXXX on the data view, since PXCache.AllowXXX properties have higher priority than the identical properties on the data view level:
public class PXView
{
    ...
    protected bool _AllowUpdate = true;
    public bool AllowUpdate
    {
        get
        {
            if (_AllowUpdate && !IsReadOnly)
            {
                return Cache.AllowUpdate;
            }
            return false;
        }
        set
        {
            _AllowUpdate = value;
        }
    }
    ...
}
With all that said, to resolve your issue with the UI logic not applying to the modified view, please consider one of the following options (a combined sketch follows below):
Set AllowXXX property values in both the original BLC and its extensions via the object obtained from the Views collection: Views["FTGridLabor"].Allow... = value;
Or operate with AllowXXX property values on the cache level: FTGridLabor.Cache.Allow... = value;
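As a hedged illustration only (the Status field and the "Open" condition are placeholders, not taken from the original code), applying either option from a RowSelected handler in the extension might look like this:

// Sketch: DAC/view names mirror the question; the status check stands in
// for the real header-driven UI logic.
public class FTTicketEntryExtension : PXGraphExtension<FTTicketEntry>
{
    protected virtual void UsrFTHeader_RowSelected(PXCache cache, PXRowSelectedEventArgs e)
    {
        UsrFTHeader header = (UsrFTHeader)e.Row;
        if (header == null) return;

        bool editable = header.Status == "Open"; // placeholder condition

        // Option 1: go through the Views collection, so the change hits the
        // data view instance that actually replaced the original one.
        Base.Views["FTGridLabor"].AllowUpdate = editable;

        // Option 2: set the flags at the cache level, which takes priority
        // over the data view level anyway.
        Base.FTGridLabor.Cache.AllowInsert = editable;
        Base.FTGridLabor.Cache.AllowUpdate = editable;
        Base.FTGridLabor.Cache.AllowDelete = editable;
    }
}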
First, check whether your DataView should or should not be a variant of PXSelectReadonly.
Without more information, my advice would be to set the Allow properties in the Initialize method of your extension:
public override void Initialize()
{
    // This is similar to PXSelectReadonly
    DataView.AllowDelete = false;
    DataView.AllowInsert = false;
    DataView.AllowUpdate = false;
}

Orchard CMS front-end all possible content filtering by user permissions

Good day!
In my Orchard site, I have several content types, all with my custom part attached. This part defines which users this content is available to. For each logged-in user there is an external service which defines what content the user can or cannot access. Now I need the access restriction to apply everywhere Orchard displays content lists; this includes results for a specific tag from a tag cloud, or results listed for a taxonomy term. I can't seem to find any good way to do it except modifying the TaxonomyServices code as well as the TagCloud services, to also join my part and filter by it. Is this indeed the only way to do it, or are there other solutions? I would like to avoid making changes to built-in modules if possible but cannot find another way.
Thanks in advance.
I'm currently bumbling around with the same issue. One way I'm currently looking at is to hook into the content manager:
[OrchardSuppressDependency("Orchard.ContentManagement.DefaultContentManager")]
public class ModContentManager : DefaultContentManager, IContentManager
{
    //private readonly Lazy<IShapeFactory> _shapeFactory;
    private readonly IModAuthContext _modAuthContext;

    public ModContentManager(IComponentContext context,
        IRepository<ContentTypeRecord> contentTypeRepository,
        IRepository<ContentItemRecord> contentItemRepository,
        IRepository<ContentItemVersionRecord> contentItemVersionRepository,
        IContentDefinitionManager contentDefinitionManager,
        ICacheManager cacheManager,
        Func<IContentManagerSession> contentManagerSession,
        Lazy<IContentDisplay> contentDisplay,
        Lazy<ISessionLocator> sessionLocator,
        Lazy<IEnumerable<IContentHandler>> handlers,
        Lazy<IEnumerable<IIdentityResolverSelector>> identityResolverSelectors,
        Lazy<IEnumerable<ISqlStatementProvider>> sqlStatementProviders,
        ShellSettings shellSettings,
        ISignals signals,
        //Lazy<IShapeFactory> shapeFactory,
        IModAuthContext modAuthContext)
        : base(context,
            contentTypeRepository,
            contentItemRepository,
            contentItemVersionRepository,
            contentDefinitionManager,
            cacheManager,
            contentManagerSession,
            contentDisplay,
            sessionLocator,
            handlers,
            identityResolverSelectors,
            sqlStatementProviders,
            shellSettings,
            signals)
    {
        //_shapeFactory = shapeFactory;
        _modAuthContext = modAuthContext;
    }

    public new dynamic BuildDisplay(IContent content, string displayType = "", string groupId = "")
    {
        // So you could do something like...
        // var myPart = content.As<MyAuthoPart>();
        // if (!myPart.IsUserAuthorized)...
        //   then display something else or display nothing (I think returning null
        //   works for this, but don't quote me on that; you can always return a
        //   random empty shape)
        // ever want to display a shape based on the name...
        //dynamic shapes = _shapeFactory.Value;
        return base.BuildDisplay(content, displayType, groupId);
    }
}
You could also hook into IAuthorizationServiceEventHandler, which is invoked before the main ItemController runs, and do a check to see if you are rendering a projection or taxonomy list; if so, set a value telling your content manager to perform checks, otherwise just let items through. Might help :)
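A rough sketch of that alternative, assuming Orchard 1.x's IAuthorizationServiceEventHandler interface and the IModAuthContext dependency from the snippet above (the detection logic and the IsListRequest flag are placeholders):

// Rough sketch only: flags the request on a shared context object so a
// custom content manager (like ModContentManager above) can filter later.
public class ModAuthorizationEventHandler : IAuthorizationServiceEventHandler
{
    private readonly IModAuthContext _modAuthContext;

    public ModAuthorizationEventHandler(IModAuthContext modAuthContext)
    {
        _modAuthContext = modAuthContext;
    }

    public void Checking(CheckAccessContext context)
    {
        // Placeholder: inspect context.Content here to detect a projection
        // or taxonomy list request, then record it, e.g.:
        // _modAuthContext.IsListRequest = true;
    }

    public void Adjusting(CheckAccessContext context) { }

    public void Complete(CheckAccessContext context) { }
}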

What is the best way to recycle Domino objects in Java Beans

I use a function to get access to a configuration document:
private Document lookupDoc(String key1) {
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        Database wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        View wView = wDb.getView(this.viewname1);
        Document wDoc = wView.getDocumentByKey(key1, true);
        this.debug("Got a doc for key: [" + key1 + "]");
        return wDoc;
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    }
    return null;
}
In another method I use this function to get the document:
Document wDoc = this.lookupDoc(key1);
if (wDoc != null) {
    // do things with the document
    wDoc.recycle();
}
Should I be recycling the Database and View objects when I recycle the Document object? Or should those be recycled before the function returns the Document?
The best practice is to recycle all Domino objects within the scope in which they are created. However, recycling any object automatically recycles all objects "beneath" it. Hence, in your example method, you can't recycle wDb, because that would cause wDoc to be recycled as well, so you'd be returning a recycled Document handle.
So if you want to make sure that you're not leaking memory, it's best to recycle objects in reverse order (e.g., document first, then view, then database). This tends to require structuring your methods such that you do whatever you need to do with a Domino object inside the method that obtains the handle on it.
For instance, I'm assuming the reason you defined a method to get a configuration document is so that you can pull the value of configuration settings from it. So, instead of a method to return the document, perhaps it would be better to define a method to return an item value:
private Object lookupItemValue(String configKey, String itemName) {
    Object result = null;
    Database wDb = null;
    View wView = null;
    Document wDoc = null;
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        wView = wDb.getView(this.viewname1);
        wDoc = wView.getDocumentByKey(configKey, true);
        this.debug("Got a doc for key: [" + configKey + "]");
        result = wDoc.getItemValue(itemName);
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    } finally {
        incinerate(wDoc, wView, wDb);
    }
    return result;
}
There are a few things about the above that merit an explanation:
Normally in Java, we declare variables at first use, not "Table of Contents" style. But with Domino objects, it's best to revert to TOC style so that, whether or not an exception was thrown, we can try to recycle them when we're done... hence the use of finally.
The return Object (which should be an item value, not the document itself) is also declared in the TOC, so we can return that Object at the end of the method - again, whether or not an exception was encountered (if there was an exception, presumably it will still be null).
This example calls a utility method that allows us to pass all Domino objects to a single method call for recycling.
Here's the code of that utility method:
private void incinerate(Object... dominoObjects) {
    for (Object dominoObject : dominoObjects) {
        if (null != dominoObject) {
            if (dominoObject instanceof Base) {
                try {
                    ((Base) dominoObject).recycle();
                } catch (NotesException recycleSucks) {
                    // optionally log exception
                }
            }
        }
    }
}
It's private, as I'm assuming you'll just define it in the same bean, but lately I tend to define this as a public static method of a Util class, allowing me to follow this same pattern from pretty much anywhere.
One final note: if you'll be retrieving numerous item values from a config document, it would obviously be expensive to establish a new database, view, and document handle for every item value you want to return. So I'd recommend overloading this method to accept a List<String> (or String[ ]) of item names and return a Map<String, Object> of the resulting values. That way you can establish a single handle for the database, view, and document, retrieve all the values you need, then recycle the Domino objects before actually making use of the item values returned.
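A minimal sketch of that bulk variant, reusing the same fields and the incinerate() helper shown below (the method name is an assumption; it needs java.util.List, java.util.Map and java.util.HashMap imports):

// Sketch: one set of handles for many item values; mirrors lookupItemValue().
private Map<String, Object> lookupItemValues(String configKey, List<String> itemNames) {
    Map<String, Object> result = new HashMap<String, Object>();
    Database wDb = null;
    View wView = null;
    Document wDoc = null;
    try {
        Session sess = ExtLibUtil.getCurrentSession();
        wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
        wView = wDb.getView(this.viewname1);
        wDoc = wView.getDocumentByKey(configKey, true);
        for (String itemName : itemNames) {
            result.put(itemName, wDoc.getItemValue(itemName));
        }
    } catch (NotesException ne) {
        if (this.DispLookupErrors)
            ne.printStackTrace();
        this.lastErrorMsg = ne.text;
        this.debug(this.lastErrorMsg, "error");
    } finally {
        incinerate(wDoc, wView, wDb);
    }
    return result;
}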
Here's an idea I'm experimenting with. Tim's answer is excellent; however, in my case I really needed the document itself for other purposes, so I've tried this:
Document doc = null;
View view = null;
try {
    Database database = ExtLibUtil.getCurrentSessionAsSigner().getCurrentDatabase();
    view = database.getView("LocationsByLocationCode");
    doc = view.getDocumentByKey(code, true);
    // need to get this via the db object directly so we can safely recycle the view
    String id = doc.getNoteID();
    doc.recycle();
    doc = database.getDocumentByID(id);
} catch (Exception e) {
    log.error(e);
} finally {
    JSFUtils.incinerate(view);
}
return doc;
You then have to make sure you're recycling the doc object safely in whatever method calls this one.
I have some temporary documents which exist for a while as config docs and are then no longer needed, so they get deleted. This is kind of enforced by an existing Notes client application: they have to exist to keep that application happy.
I've written a class which has a HashMap of Java Date, String and Double values, with the item name as the key. So now I have a serializable representation of the document, plus the original doc's NoteID so that the document can be found quickly and amended/deleted when it's no longer needed.
That means the config doc can be collected, a standard routine creates a map of all items for the Java representation (taking the item type into account), and the doc object can then be recycled right away.
The return object is the Java class representation of the document, with getValue(String name) and setValue(String name, val), where val can be a Double, String or Date. NB: this structure has no need for rich text or attachments, so it's kept to simple field values.
It works well although if the config doc has lots of Items, it could mean holding a lot of info in memory unnecessarily. Not so in my particular case though.
The point is: the Java object is now serializable so it can remain in memory, and as Tim's brilliant reply suggests, the document can be recycled right away.
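A hedged sketch of what such a wrapper could look like (the class and method names are assumptions based on the description above; only simple item types are handled):

// Sketch: serializable snapshot of a config document, so the Document
// handle can be recycled immediately after construction.
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import lotus.domino.DateTime;
import lotus.domino.Document;
import lotus.domino.Item;
import lotus.domino.NotesException;

public class ConfigDocSnapshot implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String noteId; // kept so the doc can be found again quickly
    private final Map<String, Object> values = new HashMap<String, Object>();

    public ConfigDocSnapshot(Document doc) throws NotesException {
        this.noteId = doc.getNoteID();
        for (Object itemObj : doc.getItems()) {
            Item item = (Item) itemObj;
            java.util.Vector<?> itemValues = item.getValues();
            if (itemValues != null && !itemValues.isEmpty()) {
                Object raw = itemValues.firstElement();
                if (raw instanceof DateTime) {
                    // convert to a serializable java.util.Date
                    values.put(item.getName(), ((DateTime) raw).toJavaDate());
                } else {
                    values.put(item.getName(), raw); // String or Double
                }
            }
        }
    }

    public String getNoteId() { return noteId; }
    public Object getValue(String name) { return values.get(name); }
    public void setValue(String name, Object val) { values.put(name, val); }
}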

Subsonic - Where do I include my business logic or custom validation

I'm using SubSonic 2.2.
I tried asking this question another way but didn't get the answer I was looking for.
Basically, I usually include validation at page level or in the code-behind for my user controls or aspx pages. However, I have seen some small bits of info advising that this can be done within the partial classes generated by SubSonic.
So my question is: where do I put these? Are there particular events I add my validation/business logic into, such as inserting or updating? If so, and validation isn't met, how do I stop the insert or update? And if anyone has a code example of how this looks, it would be great to start me off.
Any info greatly appreciated.
First you should create a partial class for the DAL object you want to extend.
In my project I have a folder Generated where the generated classes live, and I have another folder Extended.
Let's say you have a SubSonic-generated class Product. Create a new file Product.cs in your Extended (or whatever) folder and create a partial class Product, ensuring that the namespace matches the SubSonic-generated class's namespace.
namespace Your.Namespace.DAL
{
    public partial class Product
    {
    }
}
Now you have the ability to extend the Product class. The interesting part is that SubSonic offers some methods to override:
namespace Your.Namespace.DAL
{
    public partial class Product
    {
        public override bool Validate()
        {
            ValidateColumnSettings();

            if (string.IsNullOrEmpty(this.ProductName))
                this.Errors.Add("ProductName cannot be empty");

            return Errors.Count == 0;
        }

        // another way
        protected override void BeforeValidate()
        {
            if (string.IsNullOrEmpty(this.ProductName))
                throw new Exception("ProductName cannot be empty");
        }

        protected override void BeforeInsert()
        {
            this.ProductUUID = Guid.NewGuid().ToString();
        }

        protected override void BeforeUpdate()
        {
            this.Total = this.Net + this.Tax;
        }

        protected override void AfterCommit()
        {
            DB.Update<ProductSales>()
                .Set(ProductSales.ProductName).EqualTo(this.ProductName)
                .Where(ProductSales.ProductId).IsEqualTo(this.ProductId)
                .Execute();
        }
    }
}
In response to Dan's question:
First, have a look here: http://github.com/subsonic/SubSonic-2.0/blob/master/SubSonic/ActiveRecord/ActiveRecord.cs
In this file lives the whole logic I showed in my other post.
Validate: Is called during Save(); if Validate() returns false, an exception is thrown. It only gets called if the property ValidateWhenSaving (which is a constant, so you have to recompile SubSonic to change it) is true (the default).
BeforeValidate: Is called during Save() when ValidateWhenSaving is true. Does nothing by default.
BeforeInsert: Is called during Save() if the record is new. Does nothing by default.
BeforeUpdate: Is called during Save() if the record already exists. Does nothing by default.
AfterCommit: Is called after successfully inserting/updating a record. Does nothing by default.
In my Validate() example, I first let the default ValidateColumnSettings() method run, which will add errors like "Maximum String length exceeded for column ProductName" if the product name is longer than the value defined in the database. Then I add another error string if ProductName is empty, and return false if the overall error count is bigger than zero.
This will throw an exception during Save(), so you can't store the record in the DB.
I would suggest you call Validate() yourself and, if it returns false, display the elements of this.Errors at the bottom of the page (the easy way), or (more elegantly) create a Dictionary<string, string> where the key is the column name and the value is the reason:
private Dictionary<string, string> CustomErrors = new Dictionary<string, string>();

public override bool Validate()
{
    this.CustomErrors.Clear();
    ValidateColumnSettings();

    if (string.IsNullOrEmpty(this.ProductName))
        this.CustomErrors.Add(this.Columns.ProductName, "cannot be empty");

    if (this.UnitPrice < 0)
        this.CustomErrors.Add(this.Columns.UnitPrice, "has to be 0 or bigger");

    return this.CustomErrors.Count == 0 && Errors.Count == 0;
}
Then, if Validate() returns false, you can add the reason directly beside/below the right field in your webpage.
If Validate() returns true, you can safely call Save(), but keep in mind that Save() could still throw other errors during persistence, like "Duplicate Key ...".
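As a purely hypothetical sketch of the calling side (btnSave_Click, txtProductName and lblError are placeholder names, not from the original post), the page-level flow could look like this:

// Hypothetical ASP.NET code-behind sketch; control names are placeholders.
protected void btnSave_Click(object sender, EventArgs e)
{
    Product product = new Product();
    product.ProductName = txtProductName.Text;

    if (!product.Validate())
    {
        // Show the collected reasons; with the CustomErrors dictionary you
        // could instead place each message next to its field.
        foreach (string error in product.Errors)
            lblError.Text += error + "<br/>";
        return;
    }

    try
    {
        product.Save();
    }
    catch (Exception ex)
    {
        // e.g. a duplicate key error during persistence
        lblError.Text = ex.Message;
    }
}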
Thanks for the response, but can you confirm something for me, as I'm a little confused: if you're validating the column (ProductName) value within Validate() or BeforeValidate() (checking whether the string is empty or null), doesn't this mean that the insert/update has already been actioned? Otherwise how would it know that you've tried to insert or update a null value from the UI/aspx fields on the page to the column?
Also, within ASP.NET insert or updating events we use e.Cancel = true to stop the insert/update; if BeforeValidate fails, does it automatically stop the action to insert or update?
If this is the case, isn't it easier to add page-level validation to stop the insert or update being fired in the first place?
I guess I'm a little confused about the lifecycle of these methods and when they come into play.
