Encoding Pagination Rules in Azure Data Factory Dataflow Source Options

I'm trying to use pagination rules in the source options for a DataFlow source that uses a DataSet backed by a REST linked service. Pagination works in this particular API by returning a header with a cursor value, and that value is used in the next call's query parameter list: cursor=xxxx. It looks like the API rejects the calls because the cursors it returns aren't URI-encoded, and I can't figure out how to set up encoding in the pagination rule or anywhere else. I assumed that encoding would be automatic, but comparing the number of results I get back from the ADF DataFlow and from Postman, it appears that when I put a non-encoded cursor value in my parameter list, I get an empty result as well as an empty next-cursor header, and that seems to be what is ending the pagination.
So how do I encode URI values, whether in the pagination rules of the DataFlow source, the REST DataSet, or the REST linked service? There don't seem to be any functions available for that in the dynamic content options.
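For reference, the encoding being asked about is ordinary percent-encoding of the cursor value before it goes into the query string. A minimal Python sketch of what that transformation looks like (the cursor value is hypothetical):

```python
from urllib.parse import quote

# Hypothetical cursor value as the API might return it; '+' and '='
# are not safe in a query-string value and must be percent-encoded.
cursor = "abc+def=="
encoded = quote(cursor, safe="")
print(f"cursor={encoded}")  # cursor=abc%2Bdef%3D%3D
```

This is the encoding the question reports the DataFlow is not applying automatically.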

How to execute the same GET request using multiple data in postman?

I have a request as below:
somehost:port/sometext?type={{type}}
I want to replace {{type}} with three different values.
How do I implement this using Postman scripts?
You need to use the Postman Collection Runner with a data file (CSV or JSON) containing the values of "type": in the Runner, select the collection, then the data file, and enter the number of iterations required.
Refer to the documentation.
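For example, a minimal CSV data file for the Runner might look like this (the values are hypothetical; the column header must match the variable name type):

```csv
type
alpha
beta
gamma
```

On each iteration the Runner substitutes the next row's value into {{type}} in the request URL.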

Nifi-Loading XML data into Cassandra

I am trying to insert XML data into a Cassandra DB. Can somebody please suggest the flow in NiFi? I have JMS, on which the message data is posted, and then I need to consume it and insert the data into Cassandra.
I'm not sure if you can directly ingest XML into Cassandra. However, you could convert the XML to JSON using the TransformXml processor (with an XML-to-JSON XSLT stylesheet), or, as of NiFi 1.2.0, you can use ConvertRecord by specifying the input and output schemas.
If there are multiple XML records per flow file and you need one CQL statement per record, you may need SplitJson or SplitRecord after the XML-to-JSON conversion has taken place.
Then you can use ReplaceText to form a CQL statement that inserts the JSON, and PutCassandraQL to push it to Cassandra. Alternatively, you can use CQL map syntax to insert into a map field, etc.
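Outside NiFi, the XML-to-JSON step those processors perform amounts to something like this Python sketch (flat, single-level records only; the sample XML is hypothetical):

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical single-record XML message as it might arrive from JMS
xml_data = "<event><id>1</id><payload>hello</payload></event>"

# Map each child element to a JSON field, which is roughly what an
# XML-to-JSON XSLT or a ConvertRecord schema pair does per record
root = ET.fromstring(xml_data)
record = {child.tag: child.text for child in root}
print(json.dumps(record))  # {"id": "1", "payload": "hello"}
```

The resulting JSON is what the downstream SplitJson/ReplaceText/PutCassandraQL steps would operate on.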

DocumentDB REST API: x-ms-documentdb-partitionkey is invalid

I am attempting to get a Document from DocumentDB using the REST API. I am using a partitioned Collection and therefore need to add the "x-ms-documentdb-partitionkey" header. If I add this, I get "Partition key abc is invalid". I can't find anywhere in the documentation that says the key must be in a specific format, and simply supplying the expected string value does not work. Does anyone know the expected format?
The partition key must be specified as a JSON array (with a single element). For example:
x-ms-documentdb-partitionkey: [ "abc" ]
The partition key for a partitioned collection is actually the path to a property in DocumentDB. Thus you would need to specify it in the following format:
/{path to property name}, e.g. /department
From Partitioning and scaling in Azure DocumentDB:
You must pick a JSON property name that has a wide range of values and
is likely to have evenly distributed access patterns. The partition
key is specified as a JSON path, e.g. /department represents the
property department.
More examples are listed in the link as well.

Schema crawler reading data from table

I understand that we can read data from a table using a command in SchemaCrawler.
How can I do that programmatically in Java? I have seen examples of reading schemas, tables, etc., but how do I get the data?
Thanks in advance.
SchemaCrawler allows you to obtain database metadata, including result set metadata. Standard JDBC provides you a way to get data by using java.sql.ResultSet, and you can use SchemaCrawler to obtain result set metadata using schemacrawler.utility.SchemaCrawlerUtility.getResultColumns(ResultSet).
Sualeh Fatehi, SchemaCrawler
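The pattern the answer describes, data from the result set itself and column metadata from a separate call, can be sketched in Python with sqlite3 standing in for the Java JDBC code (the table and rows are hypothetical):

```python
import sqlite3

# Hypothetical in-memory table standing in for a real database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE books (id INTEGER, title TEXT)")
conn.execute("INSERT INTO books VALUES (1, 'Dune')")

cur = conn.execute("SELECT id, title FROM books")
# Result set metadata: the column information, which is the part
# SchemaCrawlerUtility.getResultColumns(ResultSet) exposes on the Java side
columns = [d[0] for d in cur.description]
# The data itself: iterate the result set, as with java.sql.ResultSet.next()
rows = cur.fetchall()
print(columns)  # ['id', 'title']
print(rows)     # [(1, 'Dune')]
```

The point of the answer is that SchemaCrawler handles only the metadata half; the data half is plain JDBC iteration.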