display computed text in JSON format - xpages

In an XPage I have several calls that collect data in JSON format from several Notes views via Java class files.
To check or visualize the data I have a "debug mode" option that displays this data in computed fields.
The data is JSON, but I would like to have it formatted in the computed text so it is easier to read.
Does anyone know how I can format the display so it is easier to read instead of being one line of text?
e.g. from
{"locationName":"","gender":"Male","companyName":"","name":"Patrick Kwinten","docUNID":"845AB7AF45FF1260C1257E88003DACFA","notesName":"CN=Patrick Kwinten\/O=quintessens","branchName":"Quintessens Global Services","phone": ["+49 1525 161 223"],"info": ["IT Specialsit"],"sourceUNID":"","pictureURL":"http:\/\/dev1\/apps\/banking\/ServiceData.nsf\/0\/845AB7AF45FF1260C1257E88003DACFA\/$FILE\/PortalPicture.jpg","mail": ["patrickkwinten#ghotmail.com"],"reportsTo":"CN=Eva Fahlgren\/O=quintessens","job":"Managaer","departmentName":"Collaboration Services"}
to
{
  "locationName": "",
  "gender": "Male",
  "companyName": "",
  "name": "Patrick Kwinten",
  "docUNID": "845AB7AF45FF1260C1257E88003DACFA",
  "notesName": "CN=Patrick Kwinten\/O=quintessens",
  "branchName": "Quintessens Global Services",
  "phone": [
    "+49 1525 161 223"
  ],
  "info": [
    "IT Specialsit"
  ],
  "sourceUNID": "",
  "pictureURL": "http:\/\/dev1\/apps\/banking\/ServiceData.nsf\/0\/845AB7AF45FF1260C1257E88003DACFA\/$FILE\/PortalPicture.jpg",
  "mail": [
    "patrickkwinten#ghotmail.com"
  ],
  "reportsTo": "CN=Eva Fahlgren\/O=quintessens",
  "job": "Managaer",
  "departmentName": "Collaboration Services"
}
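If you want to do this inside the application, one approach is a small helper in the Java layer that re-serializes the compact string with indentation. A minimal sketch, assuming the Gson library is available on your server's classpath (it is not part of the standard XPages runtime) and is version 2.8.6 or later for JsonParser.parseString:

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonElement;
import com.google.gson.JsonParser;

public class JsonDebugUtil {
    // Parses the compact JSON string and re-serializes it with indentation.
    public static String prettyPrint(String compactJson) {
        JsonElement parsed = JsonParser.parseString(compactJson);
        Gson gson = new GsonBuilder().setPrettyPrinting().create();
        return gson.toJson(parsed);
    }
}

Bind the computed field to the helper's output and render it inside a pre tag (or otherwise preserve whitespace), since the browser will collapse plain line breaks back onto one line.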

I do it in a different way. I use Google Postman to fire the request (with headers or whatever you need), get the result back in Postman, and view it as "pretty". This way I don't have to build anything like this into the application. I also prefer to see the "raw" data and not risk changing anything by manipulating it prior to displaying it the way you suggest :-)
Really can't live without this utility now that I've discovered it.
/John

Related

Convert to JSON and map to new JSON object in Alteryx

I am using Alteryx to take an Excel file and convert it to JSON. The JSON output I'm getting looks different from what I was expecting, and the object starts with "JSON":, which I don't want. I would also like to know which components I would use to map fields to specific JSON fields instead of key-value pairs, if I need to later in the flow.
I have attached my sample workflow and Excel file:
Excel screenshot
Alteryx test flow
JSON output I am seeing:
[
  {
    "JSON": "{\"email\":\"test123#test.com\",\"startdate\":\"2020-12-01\",\"isEnabled\":\"0\",\"status\":\"active\"}"
  },
  {
    "JSON": "{\"email\":\"myemail#emails.com\",\"startdate\":\"2020-12-02\",\"isEnabled\":\"1\",\"status\":\"active\"}"
  }
]
What I expected:
[
  {
    "email": "test123#test.com",
    "startdate": "2020-12-01",
    "isEnabled": "0",
    "status": "active"
  },
  {
    "email": "myemail#emails.com",
    "startdate": "2020-12-02",
    "isEnabled": "1",
    "status": "active"
  }
]
Also, what component would I use if I wanted to map the structure above to another JSON structure similar to this one:
[
  {
    "name": "MyName",
    "accounType": "array",
    "contactDetails": {
      "email": "test123#test.com",
      "startDate": "2020-12-01"
    }
  }
]
Thanks
In the workflow that you have built, you are essentially creating the JSON twice. The JSON Build tool creates the JSON structure, so if you then want to output it, select your file in the Output tool and change the file format dropdown to CSV with delimiter \0 and no headers.
However, try putting an output straight after your Excel file and output to JSON, the Output Tool will build the JSON for you.
In answer to your second question, build the JSON for Contact Details first as a field (remember to rename JSON to contactDetails). Then build from there with one of the above options.
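To see why the original output looks wrong, note that each record came out as an escaped JSON string stored under the key "JSON", not as a JSON object. A small sketch (in Java with Gson, purely to illustrate the double encoding) that unwraps one level:

import com.google.gson.JsonArray;
import com.google.gson.JsonElement;
import com.google.gson.JsonParser;

public class UnwrapAlteryxJson {
    public static void main(String[] args) {
        // Shortened version of the output shown above: the value of "JSON" is a string.
        String doubleEncoded =
            "[{\"JSON\": \"{\\\"email\\\":\\\"test123#test.com\\\",\\\"status\\\":\\\"active\\\"}\"}]";
        JsonArray outer = JsonParser.parseString(doubleEncoded).getAsJsonArray();
        JsonArray records = new JsonArray();
        for (JsonElement wrapper : outer) {
            // Parse the embedded string a second time to recover the real record.
            String inner = wrapper.getAsJsonObject().get("JSON").getAsString();
            records.add(JsonParser.parseString(inner));
        }
        System.out.println(records); // [{"email":"test123#test.com","status":"active"}]
    }
}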

How to page-wise index a blob document in Azure Cognitive Search?

I am new to Azure Search. I am indexing a few PDF documents using this method.
But I want to get search results page-wise. Currently the results come from the whole document; instead, I want the results to be shown per page, and I also need the file name and page number of the page with the highest score.
As you have noticed, document cracking by default shoves all text into one field (content). If you have an OCR skill involved (assuming you have images within the PDF that contain text), it does the same thing by default in merged_content. I do not believe there is a way to force these two tasks to break your data out into pages.
I say "believe" because it is difficult to find documentation on the shape of the document object that is input into your skillsets. For example, look at the input to this merge skillset. It uses /document/content and other document-related data and pushes it all into a field called merged_content. If you could find documentation on all the fields in document, it MIGHT have your pages broken down.
{
  "@odata.type": "#Microsoft.Skills.Text.MergeSkill",
  "name": "#BookMergeSkill",
  "description": "Some description",
  "context": "/document",
  "insertPreTag": " ",
  "insertPostTag": " ",
  "inputs": [
    {
      "name": "text",
      "source": "/document/content"
    },
    {
      "name": "itemsToInsert",
      "source": "/document/normalized_images/*/text"
    },
    {
      "name": "offsets",
      "source": "/document/normalized_images/*/contentOffset"
    }
  ],
  "outputs": [
    {
      "name": "mergedText",
      "targetName": "merged_content"
    }
  ]
},
The only way I know to approach this is to use a custom skill, which would reside in an Azure Function and be called as part of the document skillset pipeline. Inside that Azure Function, you would have to use a PDF reader, like iText7, crack open the documents yourself, and return data that you would place in the index document as an array of text or custom objects.
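As a rough sketch of just the cracking step (assuming the iText7 kernel module; the Azure Function plumbing and the custom skill's request/response envelope are omitted):

import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfReader;
import com.itextpdf.kernel.pdf.canvas.parser.PdfTextExtractor;

import java.util.ArrayList;
import java.util.List;

public class PdfPageCracker {
    // Returns one text entry per page. A custom skill could return this list
    // so the indexer can store pages as a collection field, with the page
    // number recoverable from the array position.
    public static List<String> extractPages(String pdfPath) throws Exception {
        List<String> pages = new ArrayList<>();
        try (PdfDocument doc = new PdfDocument(new PdfReader(pdfPath))) {
            for (int i = 1; i <= doc.getNumberOfPages(); i++) {
                pages.add(PdfTextExtractor.getTextFromPage(doc.getPage(i)));
            }
        }
        return pages;
    }
}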
We were going to go down a custom cracking process with a client (not to do this but for other reasons), but the project was canned due to the cost of holding large amounts of data within an index.

Instagram API: .caption.created_time vs .created_time

What use case would create a difference between .caption.created_time and .created_time in the metadata objects from the JSON response? My app has been monitoring media recent data from the tags endpoint for about a week, collecting 50 data points, and those two properties have always been the exact same Epoch time. However, these properties are different in the example response in Instagram's docs, albeit the difference is only four seconds. Copied below:
"caption": {
"created_time": "1296703540",
"text": "#Snow",
"from": {
"username": "emohatch",
"id": "1242695"
},
"id": "26589964"
},
"created_time": "1296703536",
The user may have created the post with the original caption, but then edited the caption and saved it 4 seconds after posting the original, e.g. to fix a typo.
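If you want to flag that case in your own collected data, comparing the two epoch values is enough. A trivial sketch using the values from the documented example response:

public class CaptionCheck {
    public static void main(String[] args) {
        long mediaCreated = Long.parseLong("1296703536");   // .created_time
        long captionCreated = Long.parseLong("1296703540"); // .caption.created_time
        // A later caption timestamp suggests the caption was saved after the post.
        boolean captionEdited = captionCreated > mediaCreated;
        System.out.println("caption edited after posting: " + captionEdited); // true
    }
}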

Looking to convert a list of PMIDs and DOIs to bibliometric Excel files

So I'm not sure if I'm posting this to the right group, but hopefully someone can point me into the right direction.
Basically, I have a list of many PMID and DOI numbers for online journals. I'd like to be able to import these into Mendeley and create an output of tab-delimited Excel files with each heading (author name, issue, year, etc.) separated into different columns. Is there any way to do this?
Thanks,
I would recommend the API for this task. With your list of DOIs or PMIDs, you can query the catalog and get back all the bibliographic data. The data will be returned as a JSON object that looks like this:
[
  {
    "title": "Bayesian network meta-analysis to evaluate interferon-free treatments in naive patients with genotype 1 hepatitis C virus infection",
    "type": "journal",
    "authors": [
      {
        "first_name": "Sabrina",
        "last_name": "Trippoli"
      },
      {
        "first_name": "Valeria",
        "last_name": "Fadda"
      },
      {
        "first_name": "Dario",
        "last_name": "Maratea"
      },
      {
        "first_name": "Andrea",
        "last_name": "Messori"
      }
    ],
    "year": 2015,
    "source": "European Journal of Gastroenterology & Hepatology",
    "identifiers": {
      "doi": "10.1097/MEG.0000000000000389",
      "issn": "0954-691X"
    },
    "id": "f7d2d642-fe21-3a0c-b1fb-a5db3ec55b41",
    "link": "http://www.mendeley.com/research/bayesian-network-metaanalysis-evaluate-interferonfree-treatments-naive-patients-genotype-1-hepatitis"
  }
]
If you want it in a form amenable to manipulation in a spreadsheet, you'll need to convert the structured JSON into a flatfile format. Here's an example using R to create a dataframe from a DOI input. The dataframe can then be written out as a tab-delimited file using write.delim.
Of course, once you have an R dataframe, you may find it easier to do your analysis within R, as there are packages for doing bibliometric analysis available.
You may have been hoping to use the desktop client to avoid having to write any code, but R is pretty easy to pick up, and all the tricky parts of the code are already written for you, so I hope this gets you started.
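If you would rather not use R, the same flattening step can be done in any language. A rough sketch in Java (assuming Gson and the response shape shown above; the file names are illustrative):

import com.google.gson.JsonArray;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;

public class CatalogToTsv {
    public static void main(String[] args) throws Exception {
        // Read a catalog response previously saved from the API.
        String json = new String(Files.readAllBytes(Paths.get("catalog.json")));
        JsonArray docs = JsonParser.parseString(json).getAsJsonArray();
        try (PrintWriter out = new PrintWriter("catalog.tsv")) {
            out.println("title\tyear\tsource\tdoi\tauthors");
            for (JsonElement el : docs) {
                JsonObject d = el.getAsJsonObject();
                // Join the authors array into a single tab-safe column.
                StringBuilder authors = new StringBuilder();
                for (JsonElement a : d.getAsJsonArray("authors")) {
                    JsonObject author = a.getAsJsonObject();
                    if (authors.length() > 0) authors.append("; ");
                    authors.append(author.get("last_name").getAsString())
                           .append(", ")
                           .append(author.get("first_name").getAsString());
                }
                out.println(String.join("\t",
                        d.get("title").getAsString(),
                        d.get("year").getAsString(),
                        d.get("source").getAsString(),
                        d.getAsJsonObject("identifiers").get("doi").getAsString(),
                        authors.toString()));
            }
        }
    }
}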

Comment blocks around JSON responses

I've noticed that some web applications return AJAX responses with JSON data embedded within a comment block. For example, this would be a sample response:
/*{
  "firstName": "John",
  "lastName": "Smith",
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": 10021
  },
  "phoneNumbers": [
    "212 555-1234",
    "646 555-4567"
  ]
}*/
What is the benefit of embedding the JSON data in a comment block? Is there some sort of security exploit which is avoided by doing this?
It's done to avoid a third party site hijacking your data using a <script> tag and overriding the Object constructor to grab the data as it is built.
When the JSON data is surrounded by comments, it is no longer directly executable via a <script> tag, and is thereby "more secure".
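The legitimate consumer (same-origin code that fetched the response itself) then strips the wrapper before parsing. A minimal sketch in Java with Gson, assuming the payload is wrapped in exactly one comment block:

import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public class CommentWrappedJson {
    // Removes a single leading "/*" and trailing "*/" before parsing.
    public static JsonObject parse(String body) {
        String json = body.trim()
                          .replaceFirst("^/\\*", "")
                          .replaceFirst("\\*/$", "");
        return JsonParser.parseString(json).getAsJsonObject();
    }
}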
See the PDF at http://www.fortifysoftware.com/servlet/downloads/public/JavaScript_Hijacking.pdf for more information (with examples)
