LookUpRows on rowset created with function BuildRowSetFromString - exacttarget

Is it possible to apply a function like LookupRows or Lookup to a rowset created with BuildRowSetFromString?
I have this:
SET @rowSet = BuildRowSetFromString(@ItemsString2, '|')
I'd like to know if there's a function on which I can do:
SET @var = LookupRows(@rowSet, ITEM_ID, ... )
I am already trying this with a FOR loop. I want to know if there's a function that can do it.

No. I wish.
Best bet would be to use arrays in Server-Side JavaScript or possibly GTL.
If you want to over-engineer it, you can use XML and XPATH to do some array functions in AMPScript. I've written up a use-case with examples here on my personal blog.
Also, there is a lot more SFMC discussion going on over at http://salesforce.stackexchange.com.
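For illustration, here is a minimal SSJS sketch of the array approach suggested above. The variable names (@ItemsString2, @ItemId, @found) and the matching logic are assumptions based on the question, not tested code:

<script runat="server">
  Platform.Load("core", "1.1.1");

  // Pull the delimited string that was built in AMPScript
  var itemsString = Variable.GetValue("@ItemsString2");
  var items = itemsString ? itemsString.split("|") : [];

  // Hypothetical lookup: check whether a given item id exists in the array
  var targetId = Variable.GetValue("@ItemId");
  var found = false;
  for (var i = 0; i < items.length; i++) {
    if (items[i] == targetId) {
      found = true;
      break;
    }
  }

  // Hand the result back to AMPScript
  Variable.SetValue("@found", found ? "true" : "false");
</script>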

Related

How to work with result set of meta-functions in Vertica

I want to use the result set of the meta-function get_node_dependencies as a subquery. Is there some way to do it?
Something like this:
select v_txtindex.StringTokenizerDelim(dep, chr(10)) over () as words
from (
    select get_node_dependencies() as dep
) t;
This query throws the error "Meta-function ("get_node_dependencies") can be used only in the Select clause".
I know that there is a view vs_node_dependencies that returns the same data in a more readable way, but the question is generic, not related to any specific meta-function.
Most Vertica meta-functions that return a report are meant for ad-hoc informational use and can only appear at the outermost level of a query, so you can't apply another function to their output.
But since you're already prepared to put in the development work to split that output into tokens, you're often better off querying vs_node_dependencies directly. You'll also be more flexible that way - that's my take on it.
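For completeness, a minimal sketch of that direct query - unlike the meta-function, the view's output can be filtered, joined, or tokenized like any other relation:

-- the system view can be wrapped in regular SQL, which the meta-function cannot
select *
from vs_node_dependencies;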

Is there a way to pass a parameter to google bigquery to be used in their "IN" function

I'm currently writing an app that accesses Google BigQuery via their "@google-cloud/bigquery": "^2.0.6" library. In one of my queries I have a WHERE clause where I need to pass a list of ids. If I use UNNEST like in their example and pass an array of strings, it works fine.
https://cloud.google.com/bigquery/docs/parameterized-queries
However, I have found that UNNEST can be really slow, and I just want to use IN on its own and pass in a string list of ids. No matter what format of string list I send, the query returns null results. I think this is because of the way they convert parameters in order to avoid SQL injection. I have to use a parameter because I myself want to avoid SQL injection attacks on my app. If I pass just one id it works fine, but if I pass a list it blows up, so I figure it has something to do with formatting; yet I know my format is correct in terms of what IN would normally expect, i.e. IN ('', '').
Has anyone been able to just pass a param to IN and have it work? i.e. IN (@idParam)?
We declare params like this at the beginning of the script:
DECLARE var_country_ids ARRAY<INT64> DEFAULT [1,2,3];
and use it like this:
WHERE if(var_country_ids is not null,p.country_id IN UNNEST(var_country_ids),true) AND ...
As you can see, we allow for both NULL and the array notation. We don't see issues with speed.
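If the query has to stay parameterized from the Node.js client rather than declared in a script, a similar pattern is to pass the ids as an array query parameter and keep UNNEST in the SQL. A minimal sketch, not taken from the answer above; the table and column names are placeholders:

// Minimal sketch with @google-cloud/bigquery: pass the ids as an array parameter
// and expand it with UNNEST inside IN.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function queryByIds(ids) {
  const query = `
    SELECT *
    FROM \`my_project.my_dataset.my_table\`
    WHERE id IN UNNEST(@ids)`;
  const [rows] = await bigquery.query({
    query,
    params: {ids},   // e.g. ids = ['1234', '4567']
  });
  return rows;
}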

nodejs bigquery parameterised query inside IN() expression

I am trying to run a parameterised query using the npm module @google-cloud/bigquery.
Something like this:
SELECT * FROM myTable WHERE id IN (@ids);
I have no idea how BigQuery expects the parameter ids to be formatted.
My options.params looks something like this:
{ ids: '"1234", "4567"' }
But I don't get any results back. I know there are results - I can see them in BigQuery, and if I remove the parameter and just inject the string, it works just fine.
It seems pretty easy, but I can't figure out why it doesn't work. Is anyone willing to help me out?
Thank you in advance
Of course I found the solution as soon as I posted the question...
Thanks to this thread! You need to do some gymnastics...
So provided that the parameter is a string like:
'1234,5678'
We need to do:
WHERE id IN UNNEST(REGEXP_EXTRACT_ALL(@ids, "[0-9a-zA-Z]+"))
REGEXP_EXTRACT_ALL - returns an array
UNNEST - flattens the array for the IN clause as stated in the link above.
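Put together, the call from the Node.js client might look like the following sketch (the table name is a placeholder):

// Minimal sketch: the parameter stays a plain comma-separated string and the
// query splits it back into an array for IN.
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

const options = {
  query: `
    SELECT *
    FROM \`my_project.my_dataset.myTable\`
    WHERE id IN UNNEST(REGEXP_EXTRACT_ALL(@ids, "[0-9a-zA-Z]+"))`,
  params: {ids: '1234,5678'},
};

bigquery.query(options).then(([rows]) => console.log(rows));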

How to check if a table is already constructed?

Now I'm using Slick with Spray. I have to say Slick on its own works much more nicely and unobtrusively with Spray than it does with Play (which is really troublesome).
However, I still can't solve a huge problem: database construction.
Is there a way for me to pass a list of TableQuery objects to a function that matches the variables I pass in against the tables in the database, and only creates the ones that don't exist yet?
That would be really neat.
Assume I have two tables:
val articles = TableQuery[ArticleTable]
val users = TableQuery[UserTable]
I'm creating a function that may look like this:
def createDatabase(list: List[TableQuery[_]]) {
//.... (something like: (Article.articles.ddl ++ User.users.ddl).create)
}
Something like someTableQuery.baseTableRow.tableName should give you the table name. MTable.apply allows you to query for tables. Search the Slick code on GitHub for examples of MTable.
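A minimal sketch of that idea, written against Slick 2.x to match the .ddl syntax in the question; treat it as a starting point rather than tested code:

// Create only the tables whose names are not yet present in the database.
import scala.slick.driver.PostgresDriver.simple._   // adapt the driver to your database
import scala.slick.jdbc.meta.MTable

def createIfNotExists(tables: TableQuery[_ <: Table[_]]*)(implicit session: Session): Unit =
  tables.foreach { table =>
    if (MTable.getTables(table.baseTableRow.tableName).list.isEmpty)
      table.ddl.create
  }

// usage: createIfNotExists(articles, users)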

CouchDB Reduce Function

I've defined a map function in CouchDB that produces data that looks like this:-
var data =
{"total_rows":10,"offset":0,"rows":[
{"id":"338b79f07dfe8b3877b3aa41a5bb8a58","key":"2000-06-23T23:59:00+00:00","value":{"country":"United States"}},
{"id":"338b79f07dfe8b3877b3aa41a5bb983e","key":"2000-06-23T23:59:00+00:00","value":{"country":"Norway"}},
{"id":"338b79f07dfe8b3877b3aa41a5ddfefe","key":"2000-06-23T23:59:00+00:00","value":{"country":"Hungary"}},
{"id":"338b79f07dfe8b3877b3aa41a5fe29d7","key":"2000-06-23T23:59:00+00:00","value":{"country":"United States"}},
{"id":"b6ed02fb38d6506d7371c419751e8a14","key":"2000-06-23T23:59:00+00:00","value":{"country":"Germany"}},
{"id":"b6ed02fb38d6506d7371c419753e20b6","key":"2000-06-23T23:59:00+00:00","value":{"country":"Hungary"}},
{"id":"b6ed02fb38d6506d7371c419755f34ad","key":"2000-06-23T23:59:00+00:00","value":{"country":"United States"}},
{"id":"b6ed02fb38d6506d7371c419755f3e17","key":"2000-06-23T23:59:00+00:00","value":{"country":"Germany"}},
{"id":"338b79f07dfe8b3877b3aa41a506082f","key":"2003-01-08T19:34:00+00:00","value":{"country":"United Kingdom"}},
{"id":"9366afb036bf8b63c9f45379bbe29509","key":"2003-01-08T19:34:00+00:00","value":{"":"United Kingdom"}}
]
}
I'm now trying to create a reduce function that produces data that looks like this:-
{"rows":[
{"key":"United States","value":2},
{"key":"Norway","value":1},
{"key":"Hungary","value":2},
{"key":"Germany","value":2}
{"key":"United Kingdom","value":1}
]}
I need the ability to use the startkey and endkey parameters in CouchDB to define the date ranges. This is proving trickier than I expected. Can somebody show me what my reduce function should look like to handle this?
Regards,
Carlskii
I'm afraid this is a common anti-pattern in using map/reduce.
Please read Example 3, "Don't use this, it's an example broken on purpose", in the CouchDB views documentation.
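To make that concrete, one common workaround (not part of the answer above, and the document field names are assumptions based on the sample rows) is to emit a composite [date, country] key and use the built-in _count reducer:

// map function: a composite key lets startkey/endkey restrict the date range
function (doc) {
  if (doc.date && doc.country) {
    emit([doc.date, doc.country], null);
  }
}

// reduce function: the built-in reducer
_count

Querying with group_level=2 plus startkey=["2000-01-01"] and endkey=["2003-12-31", {}] then returns one count per (date, country) pair inside the range; collapsing those into a single count per country still has to happen on the client (or in a list function), which is exactly the limitation the "broken on purpose" example warns about.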
