I have a document that I'd like to pre-populate. In the document, the person's name is repeated multiple times, so I've set up multiple text fields, given them all the same label (e.g. "CandidateName"), and set them all to not required and read-only.
The reason they are all read-only is that I set them programmatically when I call DocuSign via the API (textTabs). As a result, only the final field is populated; all the previous ones are blank.
Cheers
If your template contains multiple fields that have the same tabLabel and you want to populate all of those fields with the same value by using the API, you need to prefix the tabLabel value with \\*.
For example, here's the JSON for the tabs portion of a CreateEnvelope request that would populate every field that has the label CandidateName with the value John Smith.
"tabs":
{
"textTabs": [
{
"tabLabel": "\\*CandidateName",
"value": "John Smith"
}
]
}
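The same payload can be built programmatically. A minimal sketch in Python (just the request body, not the full API call; the helper name is mine, not DocuSign's):

```python
import json

def build_shared_text_tab(label, value):
    """Return the tabs dict that targets every text tab sharing the given label.

    The "\\*" prefix on the label tells DocuSign to populate all tabs
    with that label, not just the last one.
    """
    return {
        "textTabs": [
            {
                "tabLabel": "\\*" + label,  # wildcard prefix: match every tab with this label
                "value": value,
            }
        ]
    }

tabs = build_shared_text_tab("CandidateName", "John Smith")
print(json.dumps({"tabs": tabs}, indent=2))
```

The resulting dict serializes to the same JSON shown above and can be dropped into the tabs portion of a CreateEnvelope request.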
I am new to constructing REST API GET queries.
I need help with a REST API GET request, passing fields via sysparm_fields.
I am trying to query a table via the REST API by passing only a limited number of the table's columns using sysparm_fields.
However, a few column names have spaces and parentheses in them, so the JSON result set excludes those columns and their data when I do a GET request.
So if I query with the column names "assigned_to (need the empid)" and "Number", I only get "Number" data.
If I don't pass sysparm_fields, it returns the result set with all the columns, including "assigned_to (need the empid)".
My endpoint looks like this:
https://MyInstance.service-now.com/api/now/table/ticket?sysparm_exclude_reference_link=true&sysparm_fields=number,assigned_to (need the empid)&sysparm_query=sys_updated_onBETWEENjavascript:gs.dateGenerate('2020-09-01','00:00:00')#javascript:gs.dateGenerate('2020-09-01','23:59:59')
The result is only:
{
  "result": [
    {
      "number": "TK00001"
    }
  ]
}
If my URI is:
https://MyInstance.service-now.com/api/now/table/ticket?sysparm_exclude_reference_link=true&sysparm_query=sys_updated_onBETWEENjavascript:gs.dateGenerate('2020-09-01','00:00:00')#javascript:gs.dateGenerate('2020-09-01','23:59:59')
Then I get the result:
{
  "result": [
    {
      "number": "TK00001",
      "assigned_to (need the empid)": "MYQ001",
      "Other_Field1": "Other Value 1",
      ...
      "Other_FieldN": "Other Value N"
    }
  ]
}
So how do I pass column names that have spaces and parentheses in them, like "assigned_to (need the empid)"?
Thanks in advance for the help.
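Two things are worth checking here, sketched below as an assumption rather than a confirmed fix: sysparm_fields normally takes the table's internal field names (e.g. assigned_to), not display labels like "assigned_to (need the empid)", and any value that genuinely contains spaces or parentheses must be percent-encoded in the URL. Python's urlencode handles the encoding:

```python
from urllib.parse import urlencode

# Build the query string with internal field names and proper percent-encoding.
# "MyInstance" is a placeholder, as in the question.
base = "https://MyInstance.service-now.com/api/now/table/ticket"
params = {
    "sysparm_exclude_reference_link": "true",
    # Internal field names, comma-separated -- not display labels:
    "sysparm_fields": "number,assigned_to",
}
url = base + "?" + urlencode(params)
print(url)
```

If the returned assigned_to value is a reference, the employee ID would then be read from the referenced record rather than from a label-named column.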
We ran into a unique scenario while using Azure Search on one of our projects. Our client wanted to respect users' privacy, so we have a feature where a user can restrict searches over any of their PII data. If a user has opted for privacy, we can only search for them by UserId; otherwise we can search using Name, Phone, City, UserId, etc.
JSON where Privacy is opted:
{
  "Id": "<Any GUID>",
  "Name": "John Smith",      // searchable
  "Phone": "9987887856",     // searchable
  "OtherInfo": "some info",  // non-searchable
  "Address": {},             // searchable
  "Privacy": "yes",          // searchable
  "UserId": "XXX1234",       // searchable
  ...
}
JSON where Privacy is not opted:
{
  "Id": "<Any GUID>",
  "Name": "Tom Smith",       // searchable
  "Phone": "7997887856",     // searchable
  "OtherInfo": "some info",  // non-searchable
  "Address": {},             // searchable
  "Privacy": "no",           // searchable
  "UserId": "XXX1234",       // searchable
  ...
}
Now we provide a search service that takes any searchText as input and fetches all data that matches it (across all searchable fields).
With the above scenario:
We need to remove results that have "Privacy" set to "yes" if searchText does not match UserId.
If searchText matches UserId, we include the record in the result.
If "Privacy" is set to "no" and searchText matches any searchable field, the record is included in the result.
So we have gone with Lucene analyzers to check this at query time, resulting in a very long query, as shown below. Let us assume searchText = "abc":
((Name: abc OR Phone: abc OR UserId: abc ...) AND Privacy: no) OR
((UserId: abc ) AND Privacy: yes)
This is done because we show paginated results, i.e. we bring data in batches (1-10, 11-20, and so on); hence each query fetches the top 10 records along with the total result count.
Is there any more optimised approach to do this?
Or does the Azure Search service provide any internal mechanism for conditional queries?
If I understand your requirement correctly, it can be solved quite easily. You determine which properties are searchable in your data model, so you don't need to construct a complicated query that repeats the end user's input for every property, and you don't need any batching or post-processing of results.
If searchText is your user's input, you can use this:
(*searchText* AND Privacy:false)
This will search all searchable fields, but it will only return records that have allowed search in PII data.
You also have a requirement that allows the users to search for userid in all records regardless of the PII setting for the record. To support this, extend the query to:
(*searchText* AND Privacy:false) OR (UserId:*searchText*)
This allows users to search all fields in records where Privacy is false, and for all other records it allows search in the UserId only. This query pattern will solve all of your requirements with one optimized query.
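Composing that single query from user input can be sketched as follows (an illustration, not the Azure SDK; it assumes Privacy is indexed as a boolean and that searchText has already been sanitized for Lucene special characters):

```python
def build_privacy_query(search_text: str) -> str:
    """Compose the one-query pattern described above.

    Records with Privacy:false are searchable on all fields; records with
    Privacy:true are only reachable through a UserId match.
    """
    return f"(*{search_text}* AND Privacy:false) OR (UserId:*{search_text}*)"

print(build_privacy_query("abc"))
```

The same string would be passed as the full Lucene query, replacing the long per-field OR chain from the question.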
From the client side, you could dynamically add the "SearchFields" parameter to the query; that way, if the user has the Privacy flag set to true, only UserId is included in the available search fields.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.search.models.searchparameters.searchfields?view=azure-dotnet
I'm trying to create a simple mobile app that queries an API and parses the response to display certain values.
The mobile app has two controls:
A button to query the API
A large text box to display the contents
In my LiveCode stack, I have the following inclusions:
JSON Library
mergJSON
tsNet
The api response is as follows:
{
  "data": [
    {
      "id": 1,
      "date_created": "2021-11-08T17:12:03Z",
      "date_updated": "2021-11-22T16:08:55Z",
      "first_name": "John",
      "last_name": "Doe",
      "email": "john.doe@unknown.com",
      "phone": "9876543210",
      "dob": "1980-01-01",
      "password": "xxxxxxxxx",
      "plan_start": "2021-11-22T16:07:46Z",
      "plan_expiry": "2021-12-21T16:06:25Z"
    }
  ]
}
I want to parse the JSON to display the email field value in the textbox.
In my livecode stack:
The button is named as "getdata"
The textbox is named as "flddata"
In the button script, I've added the following code:
put "<api url endpoint>" into tUrl
put "Authorization: Bearer xxxxxxxxx" into tHeaders
put tsNetGetSync(tUrl, tHeaders, tRecvHeaders, tResult, tBytes) into tData
put JSONToArray(tData) into tDataArray
put tDataArray["email"] into field "flddata"
But this doesn't work; nothing happens. For the life of me, I can't figure out what's wrong. Any help would be appreciated. Thanks a ton!
To access the "email" key of the array that is built from the JSON you shared, you must first access the "data" key and then key 1. So the last line of your code would be as follows:
put tDataArray["data"][1]["email"] into field "flddata"
Tips:
Put a breakpoint on that line. This will let you inspect the contents of the variables so you can see the structure of the array.
It looks like it might be a multidimensional array. Here's a simple way to get a look at how it's structured:
Drag a Tree View widget onto your card.
Set the arrayData property of the widget to your array tDataArray. Like this:
set the arrayData of widget "Tree View" to tDataArray
You should see the structure of the array in your Tree View widget. Depending on how the JSON was converted, the email value may instead be accessible like this:
put tDataArray[1]["email"] into field "flddata"
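For comparison, here is the same nested access in Python (an illustration of the data's shape, not LiveCode). One difference worth noting: Python lists are 0-indexed, whereas LiveCode's JSONToArray numbers array elements from 1, which is why the LiveCode answer uses [1].

```python
import json

# A trimmed version of the API response from the question.
response = '''
{
  "data": [
    { "id": 1, "first_name": "John", "email": "john.doe@unknown.com" }
  ]
}
'''
parsed = json.loads(response)
# "data" is a list; its first element is index 0 in Python (index 1 in LiveCode).
email = parsed["data"][0]["email"]
print(email)
```

Walking the structure one key at a time like this is the same debugging step the Tree View widget gives you visually.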
According to DocuSign: How to prefill multiple text tabs with the same label?, prefixing my text tab label with \\* will make it work, and it does.
However, when I have text tab labels that end with the same character sequence, the incorrect value is set: the ServiceName and Name text tabs both get populated with the value I set for Name.
Is there a way to prevent this?
The \\* syntax is a wildcard, so if you want your API request to populate the Name field but not the ServiceName field, have your API request specify tabLabel = Name\\* (this will populate any field whose label starts with "Name").
For example, this JSON within my Create Envelope API request...
"tabs": {
"textTabs": [
{
"tabLabel": "Name\\*",
"value": "value_inserted_via_API"
}
]
}
...populates the Name field in the envelope, but not the ServiceName field.
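The wildcard behavior described in this thread can be sketched as follows. This is an assumption pieced together from the two answers (leading \\* matches labels ending in the text, trailing \\* matches labels starting with it), not DocuSign's documented matcher:

```python
def label_matches(pattern: str, label: str) -> bool:
    """Sketch of the assumed tabLabel wildcard semantics."""
    if pattern.startswith("\\*"):
        return label.endswith(pattern[2:])   # "\*Name" matches labels ending in "Name"
    if pattern.endswith("\\*"):
        return label.startswith(pattern[:-2])  # "Name\*" matches labels starting with "Name"
    return label == pattern

# "\*Name" hits both fields -- the bug described in the question:
print(label_matches("\\*Name", "Name"), label_matches("\\*Name", "ServiceName"))
# "Name\*" hits only the Name field:
print(label_matches("Name\\*", "Name"), label_matches("Name\\*", "ServiceName"))
```

Under these semantics, switching from \\*Name to Name\\* is exactly what keeps ServiceName out of the match.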
How do I update the text of the second comment to "new content"?
{
  name: 'Me',
  comments: [{
    "author": "Joe S.",
    "text": "I'm Thirsty"
  },
  {
    "author": "Adder K.",
    "text": "old content"
  }]
}
Updating the embedded array basically involves two steps:
1.
You create a modified version of the whole array. There are multiple operations that you can use to modify an array, and they are listed here: http://www.rethinkdb.com/api/#js:document_manipulation-insert_at
In your example, if you know that the document that you want to update is the second element of the array, you would write something like
oldArray.changeAt(1, oldArray.nth(1).merge({text: "new content"}))
to generate the new array. Here, 1 is the index of the second element, as indexes start at 0. If you do not know the index, you can use the indexesOf function to search for a specific entry in the array. Multiple things are happening here: changeAt replaces an element of the array; the element at index 1 is replaced by the result of oldArray.nth(1).merge({text: "new content"}). In that value, we first pick the element we want to base our new element on, using oldArray.nth(1). This gives us the JSON object
{
  "author": "Adder K.",
  "text": "old content"
}
By using merge, we can replace the text field of this object by the new value.
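In plain-Python terms (just an analogy, not ReQL), merge here behaves like a shallow dict merge that overrides one key and keeps the rest:

```python
# Shallow merge: replace the "text" field, keep everything else.
old_comment = {"author": "Adder K.", "text": "old content"}
new_comment = {**old_comment, "text": "new content"}
print(new_comment)
```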
2.
Now that we can construct the new object, we still have to actually store it in the original row. For this, we use update and just set the "comments" field to the new array. We can access the value of the old array in the row through the ReQL r.row variable. Overall, the query will look as follows:
r.table(...).get(...).update({
  comments: r.row('comments').changeAt(1,
    r.row('comments').nth(1).merge({text: "new content"}))
}).run(conn, callback)
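The two steps above can be simulated in plain Python on an in-memory document (a sketch of the logic only, not the ReQL driver):

```python
def change_at(array, index, new_value):
    """Return a copy of array with the element at index replaced (like ReQL's changeAt)."""
    copy = list(array)
    copy[index] = new_value
    return copy

doc = {
    "name": "Me",
    "comments": [
        {"author": "Joe S.", "text": "I'm Thirsty"},
        {"author": "Adder K.", "text": "old content"},
    ],
}

# Step 1: build the new element -- nth(1).merge({text: "new content"}).
merged = {**doc["comments"][1], "text": "new content"}
# Step 2: store the modified array back on the document -- changeAt(1, ...).
doc["comments"] = change_at(doc["comments"], 1, merged)
print(doc["comments"][1]["text"])
```

In ReQL the same two steps happen server-side inside the update, so the document is never round-tripped through the client.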
Daniel's solution is correct. However, there are several open issues on Github for planned enhancements, including:
generic object and array modification (https://github.com/rethinkdb/rethinkdb/issues/895)
being able to specify optional arguments for merge and update (https://github.com/rethinkdb/rethinkdb/issues/872)
being able to specify a conflict resolution function for merge (https://github.com/rethinkdb/rethinkdb/issues/873)
...among other related issues. Until those are introduced into ReQL (particularly #895), Daniel's approach is the correct one.