ServiceStack.OrmLite get multiple result sets from a stored procedure

I've been using SqlList() to receive result sets from stored procedures, and it is handy:
var people = db.SqlList<Person>("EXEC GetRockstarsAged @age", new { age = 42 });
But how can I use OrmLite to get multiple result sets from a single stored procedure? The approach above only seems to retrieve the first result set.

Unfortunately, ServiceStack.OrmLite does not support multiple result sets unless combined with Dapper; see "ServiceStack MARS (Multiple Active Result Sets) using ORMLite and Output Parameters".
Alternatively, you can drop down to the raw .NET SqlCommand; see "Return multiple recordsets from stored proc in C#".
ServiceStack.OrmLite V4 notes: https://github.com/ServiceStack/ServiceStack.OrmLite
ServiceStack.OrmLite V3 notes: https://github.com/ServiceStack/ServiceStack.OrmLite/tree/v3
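Since OrmLite's db is a plain IDbConnection, Dapper's QueryMultiple extension can run against the same connection. A minimal sketch, assuming the Dapper package is installed; the Person and RockstarAlbum types standing in for the two result sets are hypothetical:

using System.Data;
using System.Linq;
using Dapper; // Dapper extends the same IDbConnection that OrmLite uses

using (var db = dbFactory.Open()) // OrmLite connection factory
using (var multi = db.QueryMultiple(
    "GetRockstarsAged",
    new { age = 42 },
    commandType: CommandType.StoredProcedure))
{
    var people = multi.Read<Person>().ToList();        // first result set
    var albums = multi.Read<RockstarAlbum>().ToList(); // second result set
}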

Related

[Shopware6]: How can I add SQL Filter to Criteria?

The Criteria object is already quite powerful, yet I came across a case I cannot seem to replicate with it.
I needed to filter out all entries that are no longer within the relevant time window.
In a world where you could mix SQL with the field definition, it would look like this:
...->addFilter(
    new RangeFilter('DATEDIFF(NOW(), INTERVAL createdAt DAY)', [RangeFilter::LTE => 1])
)
Unfortunately, that doesn't work in our world.
When I pass the criteria to a search function, I only get:
"DATEDIFF(NOW(), INTERVAL createdAt DAY)" is not a field on xyz
I tried ->addExtensions and several other experiments, but I couldn't get it to work. I resorted to using Doctrine's QueryBuilder with query parts, but the data I'm getting is not very clean and is not mapped to an ORM entity like it should be.
Is it possible to write a criteria that incorporates native SQL filtering?
The DAL is deliberately designed not to accept raw SQL statements, as that is a core concept of the abstraction. Since the DAL offers extensibility for third-party extensions, it should be preferred over raw SQL in most cases. I would suggest writing a lightweight query that only fetches the IDs using your SQL query, and then using these pre-filtered IDs to fetch complete data sets through the DAL:
$ids = (new QueryBuilder($connection))
    ->select(['LOWER(HEX(id))'])
    ->from('product')
    ->where('...')
    ->execute()
    ->fetchFirstColumn();

$criteria = new Criteria($ids);
This should offer the best of both worlds: the freedom of raw SQL and the extensibility features of the DAL.
In your specific case you could also just take the current day, subtract the number of days that should have passed, and compare this threshold date to the creation date:
$now = new \DateTimeImmutable();
$dateInterval = new \DateInterval('P1D');
$thresholdDate = $now->sub($dateInterval);

// filter to get all entries with a creation date greater than now - 1 day
$filter = new RangeFilter(
    'createdAt',
    [RangeFilter::GTE => $thresholdDate->format(Defaults::STORAGE_DATE_TIME_FORMAT)]
);

Access CosmosDB from Azure Function (without input binding)

I have 2 collections in CosmosDB, Stocks and StockPrices.
StockPrices collection holds all historical prices, and is constantly updated.
I want to create Azure Function that listens to StockPrices updates (CosmosDBTrigger) and then does the following for each Document passed by the trigger:
Find stock with matching ticker in Stocks collection
Update stock price in Stocks collection
I can't do this with a CosmosDB input binding, as the CosmosDBTrigger passes a List (the input binding only works when the trigger passes a single item).
The only way I see this working is to foreach over the CosmosDBTrigger List and access CosmosDB from my function body to perform steps 1 and 2 above.
Question: How do I access CosmosDB from within my function?
One of the CosmosDB binding forms is to get a DocumentClient instance, which provides the full range of operations on the container. This way you can combine the change feed trigger and the item manipulation in the same function, like:
[FunctionName("ProcessStockChanges")]
public async Task Run(
[CosmosDBTrigger(/* Trigger params */)] IReadOnlyList<Document> changedItems,
[CosmosDB(/* Client params */)] DocumentClient client,
ILogger log)
{
// Read changedItems,
// Create/read/update/delete with client
}
It's also possible with .NET Core to use dependency injection to provide a full-fledged custom service/repository class to your function instance to interface to Cosmos. This is my preferred approach, because I can do validation, control serialization, etc with the latest version of the Cosmos SDK.
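A rough sketch of that DI approach for the in-process model, assuming the Microsoft.Azure.Functions.Extensions and Microsoft.Azure.Cosmos packages; the app setting name CosmosConnection and the database/container names are placeholders:

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(MyApp.Startup))]

namespace MyApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register one CosmosClient for the whole function app
            builder.Services.AddSingleton(_ =>
                new CosmosClient(Environment.GetEnvironmentVariable("CosmosConnection")));
        }
    }

    public class ProcessStockChanges
    {
        private readonly CosmosClient _cosmos;

        public ProcessStockChanges(CosmosClient cosmos) => _cosmos = cosmos;

        [FunctionName("ProcessStockChanges")]
        public async Task Run(
            [CosmosDBTrigger("db", "StockPrices",
                ConnectionStringSetting = "CosmosConnection",
                LeaseCollectionName = "leases")] IReadOnlyList<Document> changedItems)
        {
            var stocks = _cosmos.GetContainer("db", "Stocks");
            foreach (var doc in changedItems)
            {
                // Look up the stock by ticker and upsert the new price, e.g.:
                // await stocks.UpsertItemAsync(stock, new PartitionKey(ticker));
            }
        }
    }
}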
You may have done so intentionally, but just mentioning to consider combining your data into a single container partitioned by, for example, a combination of record type (Stock/StockPrice) and identifier. This simplifies things and can be more cost/resource efficient relative to multiple containers.
Ended up going with @Noah Stahl's suggestion. Leaving this here as an alternative.
Couldn't figure out how to do this directly, so came up with a work-around (sketched below):
1. Add a function with a CosmosDBTrigger on the StockPrices collection and a Queue output binding
2. foreach over the Documents from the trigger, serialize them and add them to the Queue
3. Add a function with a QueueTrigger, a CosmosDB input binding for the Stocks collection (with PartitionKey and Id set to the stock ticker), and a CosmosDB output binding for the Stocks collection
4. Update the Stock from the CosmosDB input binding with values from the QueueTrigger
5. Assign the updated Stock to the CosmosDB output binding parameter (this updates the record in the DB)
This said, I'd like to hear about more straightforward ways of doing this, as my approach seems like a hack.
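A rough sketch of that two-function pipeline; the Stock/StockPrice types, queue name, and binding settings are illustrative placeholders:

using System.Collections.Generic;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;

public class Stock      { public string id { get; set; } public string Ticker { get; set; } public decimal Price { get; set; } }
public class StockPrice { public string Ticker { get; set; } public decimal Price { get; set; } }

public static class StockPriceFunctions
{
    [FunctionName("EnqueuePriceChanges")]
    public static void EnqueuePriceChanges(
        [CosmosDBTrigger("db", "StockPrices",
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases")] IReadOnlyList<Document> changes,
        [Queue("price-updates")] ICollector<string> queue)
    {
        foreach (var doc in changes)
            queue.Add(doc.ToString()); // Document.ToString() yields its JSON
    }

    [FunctionName("UpdateStock")]
    public static void UpdateStock(
        [QueueTrigger("price-updates")] StockPrice price,
        [CosmosDB("db", "Stocks", ConnectionStringSetting = "CosmosConnection",
            Id = "{Ticker}", PartitionKey = "{Ticker}")] Stock stock,
        [CosmosDB("db", "Stocks",
            ConnectionStringSetting = "CosmosConnection")] out Stock updated)
    {
        stock.Price = price.Price; // copy the latest price onto the stock record
        updated = stock;           // the output binding persists the change
    }
}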

How to write a LIKE query in Azure CosmosDB?

I want to retrieve data from Cosmos DB with the following query:
SELECT * FROM c WHERE c.pi like '09%001'
(This is a SQL query, which I can use in MySQL)
Here, pi is a string value, which can be 09001001 or 09025001.
Is there a way to use a LIKE command in Cosmos DB?
I know that Cosmos DB supports CONTAINS, but that cannot be used to match specifically the beginning or end of a string.
UPDATE:
You can now use the LIKE keyword to do text searches in the Azure Cosmos DB SQL (core) API!
EXAMPLE:
SELECT *
FROM c
WHERE c.description LIKE "%cereal%"
OLD Answer:
This can be achieved in two ways:
(i) Azure Cosmos DB supports the CONTAINS, STARTSWITH, and ENDSWITH built-in functions, which cover the common LIKE patterns. The closest equivalent to LIKE in Cosmos DB is CONTAINS:
SELECT * FROM c WHERE CONTAINS(c.pi, '09')
So, in your case, if you want to match the pattern 09%001, you need to use:
SELECT * FROM c WHERE STARTSWITH(c.pi, '09') AND ENDSWITH(c.pi, '001')
(ii) As 404 mentioned in the answer below, you can use a SQL API User Defined Function, which supports regex:
function executeRegex(str, pattern) {
    let regex = RegExp(pattern);
    return regex.test(str);
}
SELECT udf.EXECUTE_REGEX("foobar", ".*bar")
(UDFs are referenced by the id they are registered under, here EXECUTE_REGEX.)
Another possibility is creating your own User Defined Function. As an example, here's a regex check:
function matchRegex(str, pattern) {
    let regex = RegExp(pattern);
    return regex.test(str);
}
Registered under the name MATCH_REGEX, it can then be used like:
SELECT udf.MATCH_REGEX("09001001", "^09.*001$")
Note: this kills any index optimization that, for instance, STARTSWITH would have, although it allows much more complex patterns. It can therefore be beneficial to add filters that can use the index to narrow down the search, e.g. adding STARTSWITH(c.property1, '09') to the WHERE clause alongside the UDF.
UPDATE:
Cosmos now has a RegexMatch function that can do the same. While the documentation for the Mongo API mentions that $regex can use the index to optimize your query if it adheres to certain rules, this does not seem to be the case for the SQL API (at this moment).
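For example, the original 09%001 pattern could be expressed as:
SELECT * FROM c WHERE RegexMatch(c.pi, "^09.*001$")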

Paging through Cassandra using QueryBuilder

The DataStax documentation says that to page through all data, the following CQL query is useful:
SELECT * FROM test WHERE token(k) > token(42);
Is it possible to build this query using the QueryBuilder? It provides a token method, but that seems to work only on column names, not on values.
Ideally, the value (in the example: 42) is of type Object, just like in the eq/gte/lte functions.
Try using automatic paging with the .setFetchSize method; it uses token under the hood:
Automatic paging was introduced in Cassandra 2.0. It allows the developer to iterate over an entire ResultSet without having to care about its size: extra rows are fetched as the client code iterates over the results, while the old ones are dropped. The number of rows to retrieve can be parameterized at query time. In the Java driver this looks like:
Statement stmt = new SimpleStatement("SELECT * FROM images");
stmt.setFetchSize(100);
ResultSet rs = session.execute(stmt);
Source: http://www.datastax.com/dev/blog/client-side-improvements-in-cassandra-2-0
QueryBuilder.fcall("token", value)
can solve the problem!
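For instance, with the DataStax Java driver's QueryBuilder the token-range query from the question could be built roughly like this (a sketch; the table and column names are taken from the question):

import com.datastax.driver.core.Statement;
import static com.datastax.driver.core.querybuilder.QueryBuilder.*;

// Builds: SELECT * FROM test WHERE token(k) > token(42)
Statement stmt = select().all()
        .from("test")
        .where(gt(token("k"), fcall("token", 42)));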

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
    App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server-side script, e.g. to use mssql to query another table instead? What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
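A read override that queries another table with mssql while honoring those values might look roughly like this (a sketch; OtherTable is a placeholder and error handling is omitted):

function read(query, user, request) {
    var components = query.getComponents();
    var top = components.take || 50;  // page size requested by the client
    var skip = components.skip || 0;  // offset requested by the client
    var sql = "SELECT * FROM OtherTable ORDER BY id " +
              "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
    mssql.query(sql, [skip, top], {
        success: function (results) {
            request.respond(statusCodes.OK, results);
        }
    });
}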
The way it's implemented on the client is via a class that implements the ISupportIncrementalLoading interface. You can see it (and the full source code of the client SDKs) in the GitHub repository, specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method; as you can see in the extension class, it's only added to the typed version of the table.
