ActivePivot: retrieve CSV output by sending an MDX query through Python?

How can I retrieve a CSV output file by sending an MDX query to ActivePivot using Python, instead of XMLA or web services?

There is a POST endpoint to get CSV content from an MDX query, available since ActivePivot 5.4.
By calling http://<host>/<app>/pivot/rest/v3/cube/export/mdx/download with the following JSON payload:
{
  "jsonMdxQuery": {
    "mdx": "<your MDX query>",
    "context": {}
  },
  "separator": ";"
}
you will receive the content of the answer as CSV with fields separated by ;.
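For example, from Python you could call this endpoint with the requests library. This is a minimal sketch, assuming basic authentication; the host, application name, credentials and MDX query are placeholders to adapt to your setup:
import requests

URL = "http://<host>/<app>/pivot/rest/v3/cube/export/mdx/download"
payload = {
    "jsonMdxQuery": {
        "mdx": "SELECT {[Measures].[contributors.COUNT]} ON COLUMNS FROM [cube]",
        "context": {},
    },
    "separator": ";",
}

# POST the MDX query; the response body is the CSV content.
resp = requests.post(URL, json=payload, auth=("user", "password"), timeout=60)
resp.raise_for_status()

with open("export.csv", "w", encoding="utf-8") as fh:
    fh.write(resp.text)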
However, note that the form of your MDX will impact the form of the CSV. For good results, I suggest you write MDX queries in the form of:
SELECT
  // Measures as columns
  {[Measures].[contributors.COUNT], ...} ON COLUMNS,
  // Members on rows
  [Currency].[Currency].[Currency].Members ON ROWS
FROM [cube]
It will generate a CSV as below:
[Measures].[Measures];[Currency].[Currency].[Currency];VALUE
contributors.COUNT;EUR;170
pnl.SUM;EUR;-8413.812452550741
...
Cheers

You can use the ActivePivot web services or RESTful services, then write a Python client and fire your MDX query:
With web services: http://host:port/webapp/webservices
Look for IQueriesService; the executeMDX method should help.
or
with RESTful services: http://host:port/webapp/pivot/rest/v3/cube/query?_wadl
Look for
<resource path="mdx">
  <method name="POST">
    <request>
      <representation mediaType="application/json"/>
    </request>
    <response>
      <representation mediaType="application/json"/>
    </response>
  </method>
</resource>
You'll get the query result as JSON; loop over the retrieved records and build your own CSV.
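A hedged Python sketch of that approach: the request payload below mirrors the jsonMdxQuery shape used by the export endpoint above, and extract_rows is a placeholder, since the exact JSON layout of the answer depends on your ActivePivot version:
import csv
import requests

QUERY_URL = "http://host:port/webapp/pivot/rest/v3/cube/query/mdx"
payload = {"mdx": "SELECT {[Measures].[contributors.COUNT]} ON COLUMNS FROM [cube]",
           "context": {}}

# POST the MDX query; payload shape and response handling are assumptions,
# check the WADL of your ActivePivot version.
resp = requests.post(QUERY_URL, json=payload, auth=("user", "password"), timeout=60)
resp.raise_for_status()
answer = resp.json()

def extract_rows(data):
    """Placeholder: turn the JSON answer into flat (member, measure, value) rows."""
    raise NotImplementedError("adapt to the JSON returned by your ActivePivot version")

with open("result.csv", "w", newline="", encoding="utf-8") as fh:
    writer = csv.writer(fh, delimiter=";")
    for row in extract_rows(answer):
        writer.writerow(row)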
Another option (still with RESTful services) is to use the following endpoint
http://host:port/webapp/pivot/rest/v3/cube/export?_wadl
that allows you to export a query result in CSV directly.

Related

Import data from DBPedia into GraphDB

I am basically looking to use a SPARQL CONSTRUCT query to retrieve data from DBpedia into a local instance of GraphDB. The CONSTRUCT query should capture as many music-related relations and as much music-related data as possible. I have tried running CONSTRUCT queries within the GraphDB Workbench, but I am not exactly sure how to go about it.
In the online tutorials for GraphDB, data is always imported from a file or an online resource, and I could not find any example where data is pulled directly into the database by using a CONSTRUCT query.
Any advice regarding this would be much appreciated. Thanks for taking the time to help.
GraphDB supports importing data that is already in RDF format. The easiest way to import data from an external endpoint like DBpedia is to use SPARQL federation. Here is a sample query that takes data from a remote endpoint and inserts it into the currently selected GraphDB repository:
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
INSERT {
  ?s ?p ?o2
}
WHERE {
  # Execute the query against DBPedia's endpoint
  SERVICE <http://dbpedia.org/sparql> {
    SELECT ?s ?p ?o2
    {
      # Select all triples for Madonna
      ?s ?p ?o
      FILTER (?s = <http://dbpedia.org/resource/Madonna_(entertainer)>)
      # Hacky function to rewrite all Literals of type rdf:langString without a language tag
      BIND (
        IF (
          (isLiteral(?o) && datatype(?o) = rdf:langString && lang(?o) = ""),
          (STRDT(STR(?o), xsd:string)),
          ?o
        )
        AS ?o2
      )
    }
  }
}
Unfortunately, DBpedia and its underlying database engine are notorious for not strictly complying with the SPARQL 1.1 and RDF 1.1 specifications. The service returns RDF literals of type rdf:langString without a proper language tag:
...
<result>
  <binding name="s"><uri>http://dbpedia.org/resource/Madonna_(entertainer)</uri></binding>
  <binding name="p"><uri>http://dbpedia.org/property/d</uri></binding>
  <binding name="o"><literal datatype="http://www.w3.org/1999/02/22-rdf-syntax-ns#langString">Q1744</literal></binding>
</result>
...
The only way to overcome this is to add the extra BIND shown above, which rewrites such literals on the fly.
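If you want to trigger this update from outside the Workbench, here is a minimal Python sketch. It assumes a local GraphDB at http://localhost:7200 and a repository named "music" (both placeholders), and uses the standard SPARQL 1.1 Update protocol endpoint that RDF4J-based stores such as GraphDB expose at /repositories/<repo>/statements:
import requests

# Host, port and repository name are assumptions; adjust them to your setup.
GRAPHDB_UPDATE_URL = "http://localhost:7200/repositories/music/statements"

update = """
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
INSERT { ?s ?p ?o2 }
WHERE {
  SERVICE <http://dbpedia.org/sparql> {
    SELECT ?s ?p ?o2 {
      ?s ?p ?o
      FILTER (?s = <http://dbpedia.org/resource/Madonna_(entertainer)>)
      BIND (IF ((isLiteral(?o) && datatype(?o) = rdf:langString && lang(?o) = ""),
                STRDT(STR(?o), xsd:string), ?o) AS ?o2)
    }
  }
}
"""

# Send the federated INSERT over the SPARQL 1.1 Update protocol.
resp = requests.post(GRAPHDB_UPDATE_URL,
                     data=update.encode("utf-8"),
                     headers={"Content-Type": "application/sparql-update"})
resp.raise_for_status()
print("Update accepted:", resp.status_code)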

Iterating over payload using special expression and construct dynamic input parameters for sql query

I am using Spring Integration in our project. My payload is as given below:
<?xml version="1.0" encoding="UTF-8"?>
<checkClaimSources>
  <claimsCopy>
    <ClmNum>07-111315-001-001</ClmNum>
    <AltClaim />
  </claimsCopy>
  <claimsCopy>
    <ClmNum>07-111315-001-002</ClmNum>
    <AltClaim>test</AltClaim>
  </claimsCopy>
</checkClaimSources>
I want to use int-jdbc:outbound-gateway to execute a SELECT query where I have to provide all the <ClmNum> and <AltClaim> values as input parameters to the SQL query. The number of <ClmNum> and <AltClaim> elements is dynamic here. Is there any way to iterate over all of them and pass them to the query?

Forcing a string field to DateTime in WCF query with Azure Table Storage

So, a quick overview of what I'm doing:
We're currently storing events to Azure Table storage from a Node.js cloud service using the "azure-storage" npm module. We're storing our own timestamps for these events in storage (as opposed to using the Azure defined one).
Now, we have coded a generic storage handler script that for the moment just stores all values as strings. To save refactoring this script, I was hoping there would be a way to tweak the query instead.
So, my question is: is it possible to query by datetime where the stored value is not actually a datetime field but a string?
My original query included the following:
.where( "_timestamp ge datetime'?'", timestamp );
In the above code I need to somehow have the query treat _timestamp as a datetime instead of a string...
Would something like the following work, or what's the best way to do it?
.where( "datetime _timestamp ge datetime'?'", timestamp );
AFAIK, if the attribute type is String in an Azure Table, you can't convert that to DateTime. Thus you won't be able to use .where( "_timestamp ge datetime'?'", timestamp );
If you're storing your _timestamp in yyyy-MM-ddTHH:mm:ssZ format, then you could simply do a string-based query like
.where( "_timestamp ge '?'", timestamp );
and that should work just fine, other than the fact that this query is going to do a full table scan rather than an optimized query. However, if you're storing in some other format, you may get different results.
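The reason this works is that ISO-8601 timestamps (fixed-width, zero-padded, most significant unit first) sort lexicographically in the same order as chronologically, so the string comparison behaves like a date comparison. A quick illustration in plain Python, for demonstration only and not tied to any Azure SDK:
# ISO-8601 timestamps compare lexicographically in chronological order,
# which is why a plain string 'ge' filter behaves like a datetime filter.
earlier = "2015-03-01T09:30:00Z"
later = "2015-11-20T17:05:00Z"

print(earlier < later)  # True: string order matches time order
print(sorted([later, earlier, "2014-12-31T23:59:59Z"]))
# ['2014-12-31T23:59:59Z', '2015-03-01T09:30:00Z', '2015-11-20T17:05:00Z']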

MarkLogic - node.js Client API - queryBuilder query array of IDs

This question is similar to:
MarkLogic - XQuery - cts:element-range-query using variable length sequence or map
But this time I need to do the query using the queryBuilder in the node.js client API.
I have a collection of 100,000 records structured like this:
<record>
  <pk>1</pk>
  <id>1234</id>
</record>
<record>
  <pk>2</pk>
  <id>1234</id>
</record>
<record>
  <pk>3</pk>
  <id>5678</id>
</record>
<record>
  <pk>4</pk>
  <id>5678</id>
</record>
I have setup a range index on id.
I want to write a query using the queryBuilder node.js client API that will allow me to pass in an array of IDs and get out a list of records.
It needs to:
1) query a specific collection
2) leverage the range indexes for performance
Never mind, I figured out the problem.
// Assuming db is a DatabaseClient and q is the query builder
const marklogic = require('marklogic');
const db = marklogic.createDatabaseClient({ /* connection settings */ });
const q = marklogic.queryBuilder;

db.documents.query(
  q.where(
    q.collection('Records'),
    q.or(
      q.value('id', ['1', '2'])
    )
  ).slice(1, 99999999)  // without an explicit slice, only 10 results come back
)
I originally tried to pass an array into q.value and was only getting limited results (I got 10 when I expected 20), so I was under the impression that I was doing it wrong.
It turns out I just needed to slice the where clause to include everything. Apparently, if you don't specify how many results to take, it defaults to 10.
Also note that when I tried .slice(0), which would have been preferred, I got an exception.

What is the Azure Table Storage query equivalent of T-SQL's LIKE command?

I'm querying Azure table storage using the Azure Storage Explorer. I want to find all messages that contain the given text, like this in T-SQL:
message like '%SysFn%'
Executing the T-SQL gives "An error occurred while processing this request"
What is the equivalent of this query in Azure?
There's no direct equivalent, as there is no wildcard searching. All supported operations are listed here. You'll see eq, gt, ge, lt, le, etc. You could make use of these, perhaps, to look for specific ranges.
Depending on your partitioning scheme, you may be able to select a subset of entities based on specific partition key, and then scan through each entity, examining message to find the specific ones you need (basically a partial partition scan).
While an advanced wildcard search isn't strictly possible in Azure Table Storage, you can use a combination of the "ge" and "lt" operators to achieve a "prefix" search. This process is explained in a blog post by Scott Helme here.
Essentially this method uses ASCII incrementing to query Azure Table Storage for any rows whose property begins with a certain string of text. I've written a small Powershell function that generates the custom filter needed to do a prefix search.
Function Get-AzTableWildcardFilter {
    param (
        [Parameter(Mandatory=$true)]
        [string]$FilterProperty,
        [Parameter(Mandatory=$true)]
        [string]$FilterText
    )
    Begin {}
    Process {
        # Increment the last character of the search text to build the
        # exclusive upper bound of the range, e.g. 'foo' -> 'fop'.
        $SearchArray = ([char[]]$FilterText)
        $SearchArray[-1] = [char](([int]$SearchArray[-1]) + 1)
        $SearchString = ($SearchArray -join '')
    }
    End {
        # Emit a filter of the form: (Prop ge 'foo') and (Prop lt 'fop')
        Write-Output "($($FilterProperty) ge '$($FilterText)') and ($($FilterProperty) lt '$($SearchString)')"
    }
}
You could then use this function with Get-AzTableRow like this (where $CloudTable is your Microsoft.Azure.Cosmos.Table.CloudTable object):
Get-AzTableRow -Table $CloudTable -CustomFilter (Get-AzTableWildcardFilter -FilterProperty 'RowKey' -FilterText 'foo')
Another option would be to export the logs from Azure Table Storage to CSV. Once you have the CSV, you can open it in Excel or any other app and search for the text.
You can export table storage data using TableXplorer (http://clumsyleaf.com/products/tablexplorer); it has an option to export the filtered data to CSV.
