When releasing documents, the scan operator should get logged to a file. I know this is a Kofax system variable, but how do I get it from the ReleaseData object?
Maybe this value is held by the Values collection? What is the key then? I would try to access it by using
string scanOperator = documentData.Values["?scanOperator?"].Value;
Kofax's weird naming convention strikes again - during setup, said items are referred to as BatchVariableNames. However, during release they are KFX_REL_VARIABLEs (an enum named KfxLinkSourceType).
Here's how you can add all available items during setup:
foreach (var item in setupData.BatchVariableNames)
{
    setupData.Links.Add(item, KfxLinkSourceType.KFX_REL_VARIABLE, item);
}
The following sample iterates over the DocumentData.Values collection, storing each BatchVariable in a Dictionary<string, string> named BatchVariables.
foreach (Value v in DocumentData.Values)
{
    switch (v.SourceType)
    {
        case KfxLinkSourceType.KFX_REL_VARIABLE:
            BatchVariables.Add(v.SourceName, v.Value);
            break;
    }
}
You can then access any of those variables by key - for example Scan Operator's User ID yields the scan user's domain and name.
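For instance, once the loop above has filled the dictionary (and assuming the setup form linked that variable as shown earlier):

string scanOperator = BatchVariables["Scan Operator's User ID"]; // e.g. DOMAIN\jdoe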
In my setup form I configure some settings for my custom module. The settings are stored in the custom storage of the batch class. Given the variable IBatchClass batchClass I can access the data by executing
string data = batchClass.get_CustomStorageString("myKey");
and set the data by executing
batchClass.set_CustomStorageString("myKey", "myValue");
When the custom module gets executed I want to access this data from the storage. The value returned is the key into the batch field collection, index field collection, or batch variables collection. When creating Kofax Export Connector scripts I would have access to the ReleaseSetupData object holding these collections.
Is it possible to access these fields during runtime?
private string GetFieldValue(string fieldName)
{
    string fieldValue = string.Empty;
    try
    {
        IIndexFields indexFields = null; // how do I access them here?
        fieldValue = indexFields[fieldName].ToString();
    }
    catch (Exception)
    {
        // not an index field - fall through to the next collection
    }
    try
    {
        IBatchFields batchFields = null; // how do I access them here?
        fieldValue = batchFields[fieldName].ToString();
    }
    catch (Exception)
    {
        // not a batch field - fall through to the next collection
    }
    try
    {
        dynamic batchVariables = null; // how do I access them here?
        fieldValue = batchVariables[fieldName].ToString();
    }
    catch (Exception)
    {
        // not a batch variable either
    }
    return fieldValue;
}
The format contains a string like
"{#Charge}; {Current Date} {Current Time}; Scan Operator: {Scan
Operator's User ID}; Page: x/y"
and each field wrapped by {...} represents a field from one of these 3 collections.
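A minimal sketch of resolving such a template at runtime, assuming a GetFieldValue helper like the one above (the regex and method name are illustrative, not a Kofax API):

using System.Text.RegularExpressions;

private string ResolveTemplate(string format)
{
    // replace each {placeholder} with the value looked up in the three collections
    return Regex.Replace(format, @"\{([^}]+)\}", m => GetFieldValue(m.Groups[1].Value));
}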
Kofax exposes a batch as XML, and DBLite is basically a wrapper for said XML. The structure is explained in AcBatch.htm and AcDocs.htm (to be found under the CaptureSV directory). Here's the basic idea (just documents are shown):
AscentCaptureRuntime
  Batch
    Documents
      Document
For a standard server installation, the file would be located here: \\servername\CaptureSV\AcBatch.htm. A single document has child elements itself such as index fields, and multiple properties such as Confidence, FormTypeName, and PDFGenerationFileName.
Here's how to extract the elements from the active batch (your IBatch instance) and how to access all batch fields:
var runtime = activeBatch.ExtractRuntimeACDataElement(0);
var batch = runtime.FindChildElementByName("Batch");
foreach (IACDataElement item in batch.FindChildElementByName("BatchFields").FindChildElementsByName("BatchField"))
{
    // each BatchField element exposes its attributes by name, e.g. item["Name"] and item["Value"]
}
The same is true for index fields. However, as those reside at document level, you need to drill down to the Documents element first and then retrieve all Document children. The following example accesses all index fields as well, storing them in a dictionary named IndexFields:
var documents = batch.FindChildElementByName("Documents").FindChildElementsByName("Document");
foreach (IACDataElement document in documents)
{
    var indexFields = document.FindChildElementByName("IndexFields").FindChildElementsByName("IndexField");
    foreach (IACDataElement indexField in indexFields)
    {
        IndexFields.Add(indexField["Name"], indexField["Value"]);
    }
}
With regard to Batch Variables such as {Scan Operator's User ID}, I am not sure. Worst case scenario is to assign them as default values to index or batch fields.
Is there a relation between Retrieve and Update in a Dynamics CRM plugin?
For example if I am retrieving only one field:
Entity e = (Entity)service.Retrieve("EntityLogicalName", EntityGuid,
new ColumnSet(new string[] {"entityid"}));
Can I update another field in the Entity e that has NOT been retrieved?
For example:
e.Attributes["AnotherEntityField1] = "test1";
e.Attributes["AnotherEntityField2] = "test2";
service.update(e);
By NOT including all fields that to be updated in the Retrieve, may this cause some hidden issues?
Assuming, as it appears, that you are just retrieving the entity's primary key, entityid, you won't need to do the retrieve.
Entity e = new Entity("EntityLogicalName") { Id = EntityGuid };
e.Attributes.Add("AnotherEntityField1", "test1");
e.Attributes.Add("AnotherEntityField2", "test2");
service.Update(e);
If you are doing a retrieve to confirm the record exists, you need a try/catch or a RetrieveMultiple, since a Retrieve throws an exception if the record does not exist.
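For reference, a minimal existence check via RetrieveMultiple might look like this (a sketch assuming the usual IOrganizationService and the Microsoft.Xrm.Sdk.Query types):

var query = new QueryExpression("EntityLogicalName")
{
    ColumnSet = new ColumnSet(false) // no data columns needed, existence is all we care about
};
query.Criteria.AddCondition("entityid", ConditionOperator.Equal, EntityGuid);
bool exists = service.RetrieveMultiple(query).Entities.Count > 0;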
What you are trying to do is perfectly acceptable and won't cause any problems. Since you obtained the Entity instance via a Retrieve operation, the required LogicalName and Id will be set correctly for an update.
Your code would need to read as below when adding new attributes that were not retrieved initially; otherwise you'll get a KeyNotFoundException, as the Entity type is essentially a wrapper over Dictionary<string, object>.
e.Attributes.Add("AnotherEntityField2","test2");
When you're updating an entity, the field does not need to already exist in the Attributes collection, but to avoid the exception "The given key is not present in the dictionary" it is good practice to first check whether the Attributes collection contains the field you want to update. If it does, just update it; otherwise, add it to the Attributes collection of the entity.
if (e.Attributes.Contains("AnotherEntityField1"))
{
    e.Attributes["AnotherEntityField1"] = "test1";
}
else
{
    e.Attributes.Add("AnotherEntityField1", "test1");
}
// now update operation
I'm trying to store a "Role" object and then get a list of Roles, as shown here:
public class Role
{
    public Guid RoleId { get; set; }
    public string RoleName { get; set; }
    public string RoleDescription { get; set; }
}

// Function that stores a role:
private void StoreRole(Role role)
{
    using (var docSession = docStore.OpenSession())
    {
        docSession.Store(role);
        docSession.SaveChanges();
    }
}

// It then returns, and another function calls this:
public List<Role> GetRoles()
{
    using (var docSession = docStore.OpenSession())
    {
        var Roles = from roles in docSession.Query<Role>() select roles;
        return Roles.ToList();
    }
}
However, in GetRoles I am missing the last inserted record/document. If I wait 200 ms and then call this function, the item is there.
So I am out of sync?!
How can I solve this, or alternately how could I know when the result is in the document store for querying?
I've used transactions, but cannot figure this out. Update and delete are just fine, but when inserting I need to delay my 'List' call.
You are treating RavenDB as if it is a relational database, and it isn't. Load and Store are ACID operations in RavenDB, Query is not. Indexes (necessary for queries) are updated asynchronously, and in fact, temporary indexes may have to be built from scratch when you do a session.Query<T>() without a durable index specified. So, if you are trying to query for information you JUST stored, or if you are doing the FIRST query that requires a temporary index to be created, you probably won't get the data you expect.
There are methods of customizing your query to wait for non-stale results, but you shouldn't lean on these too much, because they're indicative of a bad design. It is better to figure out a way to do the same thing that embraces eventual consistency: either change your model so you get consistency via Load/Store (perhaps one document could define ALL of the roles in a list, as sketched below?), or change the application flow so you don't need to Store and then immediately Query.
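For illustration, that single-document model could look something like this (the type and id are hypothetical):

public class RoleRegistry
{
    public string Id { get; set; } // a well-known id, e.g. "roles/registry"
    public List<Role> Roles { get; set; }
}

// docSession.Load<RoleRegistry>("roles/registry") is ACID, so it never returns stale data.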
An additional way of solving this is to query the index with WaitForNonStaleResultsAsOfLastWrite() turned on inside the save function. That way when the save is completed the index will be updated to at least include the change you just made.
You can read more about this here
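As a sketch of that approach (assuming an older RavenDB LINQ client where query customizations are exposed via Customize):

using (var docSession = docStore.OpenSession())
{
    // blocks until the index has caught up with the session's last write
    var roles = docSession.Query<Role>()
        .Customize(x => x.WaitForNonStaleResultsAsOfLastWrite())
        .ToList();
}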
I have a SharePoint field in a list that can be either a user or a group. Using the Server Object Model, I can easily identify whether the user is a group or not.
However, I cannot find a way to achieve this using the Managed Client Object Model. Is there a way to know?
I only managed to make it work by looping over the list of groups and checking whether there is a group with that name. However, this is not exactly correct or efficient. Maybe there is a way to find out using the ListItem of the user, but I did not see any field that shows whether the item is a user or a group. I have also tried EnsureUser, which crashes if the user is not a group, so I could work it out with a try/catch, but that would be very bad programming.
Thanks,
Joseph
To do this get the list of users from ClientContext.Current.Web.SiteUserInfoList and then check the ContentType of each item that is returned to determine what it is.
Checking the content type is not very direct though, because all you actually get back from each item is a ContentTypeID, which you then have to look-up against the content types of the user list at ClientContext.Current.Web.SiteUserInfoList.ContentTypes. That look-up will return a ContentType object, and you can read from the Name property of that object to see what the list item is.
So an oversimplified chunk of code to do this would be:
using Microsoft.SharePoint.Client;
...
ClientContext context = ClientContext.Current;
var q = from i in context.Web.SiteUserInfoList.GetItems(new CamlQuery()) select i;
IEnumerable<ListItem> Items = context.LoadQuery(q);
context.ExecuteQueryAsync((s, e) =>
{
    foreach (ListItem i in Items)
    {
        // This is the important bit:
        ContentType contenttype = context.Web.SiteUserInfoList.ContentTypes.GetById(i["ContentTypeId"].ToString());
        context.Load(contenttype); // it's another query, so it has to be loaded
                                   // (and executed) before contenttype.Name is populated
        switch (contenttype.Name)
        {
            case "SharePointGroup":
                // It's a SharePoint group
                break;
            case "Person":
                // It's a user
                break;
            case "DomainGroup":
                // It's an Active Directory group or membership role
                break;
            default:
                // It's a mystery
                break;
        }
    }
},
(s, e) => { /* Query failed */ });
You didn't specify your platform, but I did all of this in Silverlight using the SharePoint client object model. It stands to reason that the same would be possible in JavaScript as well.
Try Microsoft.SharePoint.Client.Utilities.Utility.SearchPrincipals(...):
var resultPrincipals = Utility.SearchPrincipals(clientContext, clientContext.Web, searchString, PrincipalType.All, PrincipalSource.All, null, maxResults);
The return type, PrincipalInfo, conveniently has a PrincipalType property which you can check for Group.
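For example (a sketch; PrincipalInfo exposes a PrincipalType property):

foreach (var principal in resultPrincipals)
{
    if (principal.PrincipalType == PrincipalType.SharePointGroup ||
        principal.PrincipalType == PrincipalType.SecurityGroup)
    {
        // it's a group (SharePoint or Active Directory)
    }
    else if (principal.PrincipalType == PrincipalType.User)
    {
        // it's a single user
    }
}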
I'm getting a strange error while trying to select a row from a table under Windows Azure Table Storage. The exception "An item with the same key has already been added." is being thrown even though I'm not inserting anything. The query that is causing the problem is as follows:
var ids = new HashSet<string>() { id };
var fields = new HashSet<string> {"#all"};
using (var db = new AzureDbFetcher())
{
    var result = db.GetPeople(ids, fields, null);
}
public Dictionary<string, Person> GetPeople(HashSet<String> ids, HashSet<String> fields, CollectionOptions options)
{
    var result = new Dictionary<string, Person>();
    foreach (var id in ids)
    {
        var p = db.persons.Where(x => x.RowKey == id).SingleOrDefault();
        if (p == null)
        {
            continue;
        }
        // do something with the result
    }
    return result;
}
As you can see, there's only one id, and the error is thrown right at the top of the loop even though nothing is being modified.
However, I'm using "" as the Partition Key for this particular row. What gives?
You probably added an object with the same row key (and no partition key) to your DataServiceContext before performing this query. Then you're retrieving the conflicting object from the data store, and it can't be added to the context because of the collision.
The context tracks all objects retrieved from the tables. Since entities are uniquely identified by their partition key/row key combination, a context, like the tables themselves, cannot contain duplicate partition key/row key combinations.
Possible causes of such a collision are:
Retrieving an entity, modifying it, and then retrieving it again using the same context.
Adding an entity to the context, and then retrieving one with the same keys.
In both cases, the context encounters a different object that nevertheless has the same keys as one it is already tracking. This is not something the context can sort out by itself, hence the exception.
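If you don't need change tracking for a read-only query, one common workaround is to turn tracking off before querying (a sketch, assuming your AzureDbFetcher exposes the underlying DataServiceContext/TableServiceContext via a hypothetical Context property):

// entities returned by NoTracking queries are not added to the context's identity map,
// so key collisions like this one cannot occur (at the cost of losing UpdateObject support)
db.Context.MergeOption = MergeOption.NoTracking;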
Hope this helps. If you could give a little more information, that would be helpful.