This appears to be pretty basic but I am unable to find a suitable pipeline expression function to achieve this.
I have set an array variable VAR1 with the following value, which is an output from a SQL Lookup activity in an ADF pipeline:
[
    {
        "Code1": "1312312"
    },
    {
        "Code1": "3524355"
    }
]
Now, I need to convert this into a comma separated string so I can pass it to a SQL query in the next activity - something like:
"'1312312','3524355'"
I am unable to find an expression function that iterates over the array elements, or one that converts an array to a string. The only pipeline expression functions I can see convert a string to an array, not the other way around.
Am I missing something basic? How can this be achieved?
Use the 'join' function, found under 'Collection' functions in 'Add dynamic content'. For example:
join(variables('ARRAY_VARIABLE'), ',')
I had this same issue and was not totally satisfied just using the join function, because it keeps the keys from the JSON objects. Also, an iterator approach can work but is needlessly expensive and slow if you have a long list. Here was my approach, using join and replace:
replace(replace(join(variables('VAR1'), ','), '{"Code1":', ''), '}', '')
This strips the keys and braces, leaving the values as a comma-separated string (the values keep their double quotes; one more replace can swap them for single quotes if the SQL query needs those).
I got it working using a ForEach activity to iterate over my array and a Set Variable activity with a concat expression to build my comma-separated string.
I wish they had an iterator function in the expression language itself; that would have made this much easier.
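For reference, a minimal sketch of that pattern, assuming a string variable CSV (initially empty) and a helper variable TEMP (both names are mine, not from the original pipeline; ADF does not let a Set Variable activity reference the variable it is setting, hence the helper). With the ForEach set to Sequential and its items set to @variables('VAR1'), the two activities inside the loop would be:
Set variable TEMP: @concat(variables('CSV'), ',''', item().Code1, '''')
Set variable CSV: @variables('TEMP')
This builds the string with a leading comma, which can be trimmed afterwards, for example with @substring(variables('CSV'), 1, sub(length(variables('CSV')), 1)).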
In case you have just two elements in the array, you can do something like:
@concat(variables('variable_name')[0].Code1, ',', variables('variable_name')[1].Code1)
Related
I am building an Azure Data Factory. Inside a Data Flow I have an array of strings.
I wish to merge that array of strings into one single string.
ie. [ "value1", "value2" ] into "value1, value2"
Is that even possible? I can't find any function helping me out here.
I wish there were a join function or a foreach, but I can't find any.
Here you go:
dropLeft(toString(reduce(['value1','value2','value3','value4'], '', #acc + ', ' + #item, #result)), 2)
Results:
value1, value2, value3, value4
Does toString(myArray) do what you are looking for?
You can use the join function which is present in collection functions in Add dynamic content.
Syntax:
join([<collection>], '<delimiter>')
Example:
join(variables('ARRAY_VARIABLE'), ',')
Refer to the documentation to learn more about join.
Also, you can do it by using a ForEach activity to iterate over the array and a Set Variable activity with a concat expression to build the comma-separated string.
In case you have just two elements in your array, you can do it like this:
@concat(variables('variable_name')[0].Code1, ',', variables('variable_name')[1].Code1)
I have a list of strings I get as a result of splitting a string. I need to remove the surrounding quotes from the strings in the list. How can I achieve this using method chaining? I tried the below, but it doesn't work; it says type inference failed.
val splitCountries: List<String> = countries.split(",").forEach{it -> it.removeSurrounding("\"")}
forEach doesn't return the values you generate in it; it's really just a replacement for a for loop that performs the given action. What you need here is map:
val splitCountries: List<String> = countries.split(",").map { it.removeSurrounding("\"") }
Also, a single parameter in a lambda is implicitly named it; you only have to name it explicitly if you wish to change that.
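For example, with an assumed sample string (the question's actual data isn't shown):
fun main() {
    // Assumed sample input: three quoted country names separated by commas
    val countries = "\"India\",\"Japan\",\"Brazil\""
    val splitCountries: List<String> = countries.split(",").map { it.removeSurrounding("\"") }
    println(splitCountries)  // [India, Japan, Brazil]
}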
I have a string /sample/data. When I split it using split, I get the following result:
["","sample","data"]
I want to ignore the empty string(s), so I tried the following code:
"/sample/data".split('/').findAll(it != "")
It gives me an error "cannot call String[] findAll with argument bool".
How can I split and get a List without empty string in it?
The split method returns an array.
If you need a List, use tokenize:
"/sample/data".tokenize('/')
Also, you don't need to use findAll in this case.
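For the string in the question this yields the list without the empty element:
assert "/sample/data".tokenize('/') == ['sample', 'data']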
You can do as below:
println "/sample/data".split('/').findAll {it}
findAll {it} would fetch all the non-empty values.
Parens would work (see comments on question). So your solution is already close:
"/a/b".split("/").findAll()
This works because findAll() called with no arguments uses the identity closure, and since Groovy considers an empty string falsey, the empty entries are filtered out.
The findvalue function in HTML::TreeBuilder::XPath returns a concatenation of any values found by the xpath query.
Why does it do this, and how could a concatenation of the values be useful to anyone?
Why does it do this?
When you call findvalue, you're requesting a single scalar value. If there are multiple matches, they have to be combined into a single value somehow.
From the documentation for HTML::TreeBuilder::XPath:
findvalue ($path)
...If the path returns a NodeSet, $nodeset->xpath_to_literal is called automatically for you (and thus a Tree::XPathEngine::Literal is returned).
And from the documentation for Tree::XPathEngine::NodeSet:
xpath_to_literal()
Returns the concatenation of all the string-values of all the nodes in the list.
An alternative would be to return the Tree::XPathEngine::NodeSet object so the user could iterate through the results himself, but the findvalues method already returns a list.
How could a concatenation of the values be useful to anyone?
For example:
use strict;
use warnings 'all';
use 5.010;
use HTML::TreeBuilder::XPath;
my $content = do { local $/; <DATA> };
my $tree = HTML::TreeBuilder::XPath->new_from_content($content);
say $tree->findvalue('//p');
__DATA__
<p>HTML is just text.</p>
<p>It can still make sense without the markup.</p>
Output:
HTML is just text.It can still make sense without the markup.
Usually, though, it makes more sense to get a list of matches and iterate through them instead of doing dumb concatenation, so you should use findvalues (plural) if you could have multiple matches.
Use
( $tree->findvalues('//p') )[0] ;
instead.
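Or, to handle every match rather than just the first (reusing the tree built in the example above), a sketch could be:
say $_ for $tree->findvalues('//p');
This prints each paragraph's text on its own line instead of concatenating them.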
I have a dataframe with a list of strings in it
df$a
=========
"4343-2"
"7889-5"
"4-3456"
"334-45"
"8765-4"
I'd like to perform a string operation on the list to remove the dash sign, so I did this..
df$a <- lapply(df$a, sub, "-","", df$a)
..which only produces a set of completely empty strings. What did I get wrong?
You can just use sub directly:
df$a <- sub('-', '', df$a)
This avoids the convoluted lapply call, since sub is "vectorized" over the whole column. (In the lapply version, each element of df$a is passed as sub's first argument, the pattern, and the literal "" ends up as its x argument, which is why every result was an empty string.) You can also use gsub if you think there may be more than one dash per entry.
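For instance, a quick sketch with made-up sample data (the multi-dash value is invented to show the difference):
# gsub removes every dash, not just the first one per string
df <- data.frame(a = c("4343-2", "7889-5", "4-34-56"), stringsAsFactors = FALSE)
df$a <- gsub("-", "", df$a)
df$a
# [1] "43432" "78895" "43456"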