How to return an output from a JavaScript function using Invoke JavaScript in Blue Prism

I want to read multiple key-value pairs from a webpage and write them to a collection using Blue Prism.
I want to use JavaScript.
I am able to read text from the webpage, but I can't work out how to write that data into a Blue Prism data item or collection.

Blue Prism provides no facility to return data directly from a JavaScript call back into the calling Object. Your best bet is to use a script that creates a hidden input element in the DOM and writes the data you want to extract into it:
var hiddenElement = document.querySelector('#bp-output');
// querySelector returns null (not undefined) when no match is found
if (hiddenElement === null) {
    hiddenElement = document.createElement('input');
    hiddenElement.type = 'hidden';
    hiddenElement.id = 'bp-output';
    document.body.appendChild(hiddenElement);
}
hiddenElement.value = /* some functionality to set the value of the newly-created hidden element */;
You'll need to model this element in your object's Application Modeller, but it's fairly simple to do: you don't need to match on any attribute other than "ID" or "Web ID", and the match is simply the string bp-output.
From there, you can use a typical Read stage to read the value out of the value attribute of your element.
For more complex data structures like Collections, you'll need some serialization trickery. For example, if you're trying to read a table into a Collection via JavaScript, the /* functionality to set the value of the newly-created hidden element */ in the example above could borrow code from this SO thread to serialize the table to a CSV string. Once you've read that string from the hidden element's value attribute, you can use the CSV-related actions in the vendor-provided Utility - Strings VBO to parse it into a proper Collection for use in your Objects/Processes.
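As a rough sketch of that value-setting step (the table selector, the use of innerText, and the CSV quoting rules below are assumptions for illustration, not part of the original answer):
// Hypothetical example: serialize the first HTML table on the page to a CSV string
// and store it in the hidden element created above.
var table = document.querySelector('table');
var rows = [];
for (var r = 0; r < table.rows.length; r++) {
    var cells = [];
    for (var c = 0; c < table.rows[r].cells.length; c++) {
        // Quote each cell and escape any embedded double quotes
        cells.push('"' + table.rows[r].cells[c].innerText.replace(/"/g, '""') + '"');
    }
    rows.push(cells.join(','));
}
hiddenElement.value = rows.join('\n');
From there, the Read stage and the Utility - Strings CSV actions mentioned above take over.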

Related

How to add collections in transformations when writing (creating) a document in MarkLogic

I wrote a transformation in XQuery which unquotes an XML string and inserts an element with its content. This works fine.
I also need to create a collection dependent on the root element of this element. I can't do this on new documents, as xdmp:document-add-collections() is not working. How do I add the collection to new documents in transformations?
Here is my server-side XQuery code:
xquery version "1.0-ml";
module namespace transform = "http://marklogic.com/rest-api/transform/smtextdocuments";
import module namespace mem = "http://xqdev.com/in-mem-update" at '/MarkLogic/appservices/utils/in-mem-update.xqy';

declare function transform:transform(
  $context as map:map,
  $params as map:map,
  $content as document-node()
) as document-node()
{
  let $uri := base-uri($content)
  let $doccont := $content/smtextdocuments/documentcontent
  let $newcont := xdmp:unquote($doccont)
  let $contname := node-name($newcont/*)
  let $result :=
    if (exists($content/smtextdocuments/content))
    then mem:node-replace($content/smtextdocuments/content, <content>11{$newcont}</content>)
    else mem:node-insert-after($doccont, <content>{$newcont}</content>)
  let $log := xdmp:log($content)
  return (
    $result,
    xdmp:document-add-collections($uri, fn:string($contname)),
    xdmp:document-remove-collections($uri, "raw")
  )
};
The script is run with the Java API (4.0.4) create method, via the ServerTransform transform parameter. As per the documentation, the transformation script runs before the document is stored in the database.
It's a new document; I need to transform the content and then create the collection.
I can see the document after the create and the content is available; just the collection is missing. I could try the xdmp:document-insert method, but is it correct to write the document while create is running?
The transform mechanism of the Java API / REST API takes responsibility for the document write. At present, there's no way for the transform to supply collections to the writer. That would be a reasonable request for enhancement.
The transform shouldn't attempt to write the document, because the writer would also attempt to write the same document.
One alternative would be to transform the document in Java before writing it and specify the collection as part of the write request.
Another alternative would be to rewrite the transform as a resource service extension, implement the write within the resource service extension, and modify the Java client to send the document to the resource service extension.
Depending on the model, a final alternative might be to use a range index on an element within the document to collect documents into sets instead of using a collection on the document.
Hoping that helps,
What do you mean by "new documents"? Is the document already inserted into the MarkLogic database at the time you are adjusting its collections? If not, you may want to modify your return to ($result, xdmp:document-insert($uri, $result, xdmp:default-permissions(), fn:string($contname))) for that case.
Otherwise, can you edit your question to describe the error or problem you are facing more specifically?
It is a pity that REST transforms do not allow this, as MLCP transforms do. Until that changes, you have the options outlined by ehennum, or you can consider deferring the adding of collections to a pre- or post-commit trigger. That carries some overhead, but it can make perfect sense to do this kind of thing in a trigger, since it ensures the rule is always enforced, and a trigger is also a good place for content validation, audit logging, and the like.
HTH!

ServiceStack: deserialize Redis response from GetAllItemsFromList

So, using lists with ServiceStack/Redis, when pulling them back from the server I am getting a list of strings (each of which is the same class, just with different data).
I did not see a way of using "typed" lists, which would allow ServiceStack to serialize/deserialize as I add and get items from the list. So my question is:
List<string> resp = rc.GetAllItemsFromList (key);
This gives me back a List (collection) of strings, each one being a JSON representation of class ABC.
I'd rather have a List<ABC> returned. If not, I know I can iterate through the collection of strings, deserializing each one, but I want to know whether there is a better way of doing this.
To get a List of Types back you'd use the IRedisTypedClient API and access the Typed List APIs in IRedisList by accessing the Lists[] collection, e.g:
var redisAbc = redis.As<Abc>();
List<Abc> results = redisAbc.Lists[key].GetAll();

Referencing external doc in CouchDB view

I am scraping a 90K-record database using JSON-RPC and I am trying to put in some basic error checking. I want to start by scraping the database twice using two different settings and adding a prefix to the second scrape. This way I can check that the two settings are not producing different records (due to dropped updates, etc.). I want to implement the comparison using a view which compares each document from the first scrape with its twin produced by the second scrape and then emits the names of records that differ.
However, I cannot quite figure out how to pull another doc into the view; everything I have read only discusses external docs using the emit() function, which is too late to permit me to compare them. In the example below, the lookup() function would grab the referenced document.
Is this just not possible?
function(doc) {
    if (doc._id.slice(0,1) !== '$' && doc._id.slice(0,1) !== "_") {
        var otherDoc = lookup('$test' + doc._id);
        if (otherDoc) {
            var keys = doc.value.keys();
            var same = true;
            keys.forEach(function(key) {
                if ((key.slice(0,1) !== '_') && (key.slice(0,1) !== '$') && (key !== 'expires')) {
                    if (!Object.equal(otherDoc[key], doc[key])) {
                        same = false;
                    }
                }
            });
            if (!same) {
                emit(doc._id, 1);
            }
        }
    }
}
Context
You are correct that this is not possible in CouchDB. The whole point of the map function is that it must be idempotent (a pure function of the document), otherwise you lose all the other nice benefits of a pre-calculated index.
This is why you cannot access external resources in the map function, whether they be other records or the clock. Any time you run a map over a given record, you must always get the same result. Since there are no relationships between records in CouchDB, that cannot be guaranteed.
Solution
However, you can still achieve your end goal, just by different means. Some possibilities...
Assuming there is some meaningful numeric value in each doc, you could use a view to take the sum of all those values and group them by which import you did ({key: <batch id>, value: <meaningful number>}). Then compare the two numbers in your client or the browser to see if they match.
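A minimal sketch of that view, assuming each doc carries a batch field and a numeric total field (both names are assumptions):
// Map function: key by batch id, value is the doc's meaningful number
function(doc) {
    if (doc.batch && typeof doc.total === 'number') {
        emit(doc.batch, doc.total);
    }
}
// Reduce: use the built-in "_sum" reduce, then query with group=true
// to get one summed row per batch to compare.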
A brute force approach would be to use a view to pair the docs that should match. Each doc is on a different row, but they're grouped by a common field. Then iterate through the entire index comparing the pairs. This would certainly be the quickest to code and doesn't depend on your application or data.
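A sketch of that pairing view, assuming the docs share a common identifier field and a batch marker (both names are assumptions):
// Key by the shared identifier so the two scrapes of the same record sort adjacently
function(doc) {
    if (doc.common_id && doc.batch) {
        emit([doc.common_id, doc.batch], null);
    }
}
Querying with include_docs=true then lets the client walk the index and compare each adjacent pair.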
Implement a validation function to enforce a schema on your data. Just be warned that this will reduce your write throughput, since each written record will be piped out of Erlang and into the JS engine. Also, this is only applicable if you're worried about properly formed records rather than their precise content, which might not be the case.
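A minimal validate_doc_update sketch for that approach (the required field names are assumptions; it goes in the design doc's validate_doc_update slot):
// Reject writes that are missing the fields the scrape is expected to produce
function(newDoc, oldDoc, userCtx) {
    if (!newDoc._deleted && (!newDoc.batch || !newDoc.value)) {
        throw({ forbidden: 'Document must contain "batch" and "value" fields' });
    }
}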
Instead of having your different batch jobs create different docs, have them write into the same doc. The structure might look like this: { "_id": "something meaningful", "batch_one": { ..data.. }, "batch_two": { ..data.. } }. Then your validation function could compare them, or you could create a view that indexes all the docs that don't match. It all depends on where in your pipeline you want to do the error checking and correction.
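For that last option, a map function flagging mismatched docs might look roughly like this (batch_one/batch_two come from the structure above; the naive JSON.stringify comparison, which is key-order sensitive, is an assumption standing in for a real deep-equality check):
// Emit the ids of combined docs whose two batches differ
function(doc) {
    if (doc.batch_one && doc.batch_two) {
        if (JSON.stringify(doc.batch_one) !== JSON.stringify(doc.batch_two)) {
            emit(doc._id, 1);
        }
    }
}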
Personally I like the last option best, but only if you don't plan to use the database as-is in production, i.e. you wouldn't want to carry around all that extra data in each record.
Hope that helps.
Cheers.

Retrieving all possible values for a field via a RESTlet

Is there an API call that will retrieve all possible values for a field via a RESTlet script for NetSuite?
For example, I want to return all of the possible class field values (Class 1, Class 2, ...) for an inventory item.
I have already tried nlapiGetFieldValues('class'), but without success. I'm guessing that is a client-side-only call?
Similar to what Suite Resources said, but use some pre-existing records for the classes you want to evaluate:
switch (true) {
    case req.type == 'customer':
        var x = nlapiLoadRecord('class', 1000);
        // and either:
        return x; // OR return x.getAllFields(); OR return JSON.stringify(x);
    case req.type == 'salesorder':
        // ...... etc.
}
I'd personally just return the whole record to get subfields and prototype functions.
RESTlets are written in SuiteScript, so look at the supported record types. The one you want here is Class (classification).
You can write up a saved search within the UI, then use nlapiSearchRecord in your RESTlet. Loop through the results of the search, appending to an array of objects representing the records, then JSON.stringify the array and return the result. Pretty easy.
Try coding it up and post the code if you have issues.
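If it helps as a starting point, here is a rough SuiteScript 1.0 sketch of that shape (the record type string, the column names, and the response shape are assumptions for illustration, not tested code):
// Hypothetical RESTlet GET handler: return all Class (classification) records.
function getClasses(request) {
    var columns = [new nlobjSearchColumn('name'), new nlobjSearchColumn('isinactive')];
    var results = nlapiSearchRecord('classification', null, null, columns) || [];
    var classes = [];
    for (var i = 0; i < results.length; i++) {
        classes.push({
            id: results[i].getId(),
            name: results[i].getValue('name'),
            isInactive: results[i].getValue('isinactive')
        });
    }
    // nlapiSearchRecord returns at most 1000 results per call
    return JSON.stringify(classes);
}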

CRM 2011 JavaScript How to access data stored in an entity passed from a lookup control?

As the question suggests, I need to find out how to access entity data that has been passed into a JavaScript function via a lookup.
JavaScript Code Follows:
// function to generate the correct Weighting Value when these parameters change
function TypeAffectedOrRegionAffected_OnChanged(ExecutionContext, Type, Region, Weighting, Potential) {
    var type = Xrm.Page.data.entity.attributes.get(Type).getValue();
    var region = Xrm.Page.data.entity.attributes.get(Region).getValue();
    // if we have values for both fields
    if (type != null && region != null) {
        // create the weighting variable
        var weighting = type[0].name.substring(4) + "-" + region;
        // recreate the Weighting Value
        Xrm.Page.data.entity.attributes.get(Weighting).setValue(weighting);
    }
}
As you can see from the following line, using the name property I can access my Type entity's Type field.
// create the weighting variable
var weighting = type[0].name.substring(4) + "-" + region;
I am now looking for a way to access the other values stored on my type object. It has the following fields: new_type, new_description, new_value and new_kind.
I guess I'm looking for something like this:
// use value of entity to assign to our form field
Xrm.Page.data.entity.attributes.get(Potential).setValue(type[0].getAttribute("new_value"));
Thanks in advance for any help.
Regards,
Comic
OData REST calls are definitely the way to go in this case. You already have the id, and you just need to retrieve some additional values. Here is a sample to get you started. The hardest part of working with OData, IMHO, is creating the request URLs. There are a couple of tools you can find on CodePlex, but my favorite is actually to use LINQPad: connect to your org's OData URL and it will retrieve all of your entities and let you write a LINQ statement that generates the URL for you, which you can test right in the browser.
For your instance, it'll look something like this (it is case sensitive, so double-check that if it doesn't work):
"OdataRestURL/TypeSet(guid'" + type[0].Id.replace(/{/gi, "").replace(/}/gi, "") + "'select=new_type,new_description,new_value,new_kind"
Replace OdataRestURL with whatever your OData REST endpoint is, and you should be all set.
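To show how such a URL might be used from the form script, here is a hedged sketch (the TypeSet entity set name and new_* fields are carried over from the question; the rest is generic CRM 2011 REST retrieval, not verified against your org):
// Hypothetical sketch: retrieve the looked-up Type record via the OData endpoint
// and copy its new_value onto the Potential field.
function setPotentialFromType(typeId, Potential) {
    var url = Xrm.Page.context.getServerUrl() +
        "/XRMServices/2011/OrganizationData.svc/TypeSet(guid'" +
        typeId.replace(/[{}]/g, "") +
        "')?$select=new_type,new_description,new_value,new_kind";

    var req = new XMLHttpRequest();
    req.open("GET", url, true);
    req.setRequestHeader("Accept", "application/json");
    req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status === 200) {
            var result = JSON.parse(req.responseText).d;
            Xrm.Page.data.entity.attributes.get(Potential).setValue(result.new_value);
        }
    };
    req.send();
}
It could be called from the OnChanged handler above, e.g. setPotentialFromType(type[0].id, Potential);.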
Yes, Guido Preite is right. You need to retrieve the entity by the id that comes from the lookup, via REST, either synchronously or asynchronously, and then read the returned JSON object. To keep the returned object light, you can specify which fields should be included in the JSON; you can then access just the fields you want.
