How to execute a script (say Python, Java, etc.) based on the value of a property in Apache NiFi - python-3.x

I have a flow file containing key-value pairs; it will be read from Kafka and sent to NiFi. Using GetFile I receive the file, and then, by configuring a processor, we can extract the contents of the flow file and keep them as flow file attributes by adding a matching regex.
After that I need to take the specific attribute(s) obtained in the step above and compare their value(s) either with a particular string or with the corresponding property value read from nifi.properties or a custom properties file.
Based on that check I need to execute a script using ExecuteStreamCommand: if the extracted attribute value matches the nifi.properties or custom properties value, then the script should run.
The question is how to compare the property values and then execute the script.
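As a rough sketch of the comparison itself (plain Python, not a NiFi-specific API): assuming ExecuteStreamCommand's Command Arguments property is set to ${my.attribute} so the attribute value arrives as the first command argument, and assuming a hypothetical custom properties file path and key, the called script could look like this:

import sys
import configparser

# Hypothetical path to a custom properties file; adjust for your environment.
PROPS_FILE = "/opt/nifi/conf/custom.properties"

def read_property(key):
    # Java-style .properties files have no section header, so prepend a
    # dummy section before handing the content to configparser.
    parser = configparser.ConfigParser()
    with open(PROPS_FILE) as f:
        parser.read_string("[props]\n" + f.read())
    return parser["props"].get(key)

if __name__ == "__main__":
    # The flow file attribute value is passed in as the first argument
    # (e.g. Command Arguments configured as ${my.attribute}).
    attribute_value = sys.argv[1]
    expected_value = read_property("my.expected.value")
    if attribute_value == expected_value:
        print("values match - run the real work here")
        sys.exit(0)
    else:
        print("values do not match", file=sys.stderr)
        sys.exit(1)

Alternatively, the comparison can be done before the script ever runs, for example with a RouteOnAttribute processor whose routing rule uses a NiFi Expression Language check such as ${my.attribute:equals('expected-value')}, so that only matching flow files reach ExecuteStreamCommand.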

Related

Copy CSV File with Multiline Attribute with Azure Synapse Pipeline

I have a CSV file in the following format which I want to copy from an external share to my data lake:
Test; Text
"1"; "This is a text
which goes on on a second line
and on on a third line"
"2"; "Another Test"
I now want to load it with a Copy Data task in an Azure Synapse pipeline. The result is the following:
Test; Text
"1";" \"This is a text"
"which goes on on a second line";
"and on on a third line\"";
"2";" \"Another Test\""
So, as you can see, it is not handling the multi-line text correctly. I also do not see an option to handle multiline text within a Copy Data task. Unfortunately I'm not able to use a Data Flow task, because it does not allow running with an external Azure runtime, which I'm forced to use for security reasons.
Of course I'm not speaking about this single test file; I have thousands of such files.
My settings for the CSV file look as follows:
(screenshot: CSV connector settings)
Can someone tell me how to handle this kind of multiline data correctly?
Do I have any other options within Synapse (apart from the Dataflows)?
Thanks a lot for your help
Well, it turns out this is not possible with a CSV file.
The pragmatic solution is to use "binary" files instead to transfer the CSV files, and only load and transform them later on with a Python notebook in Synapse.
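As a sketch of what that later transformation could look like in the notebook (assuming the files keep the ';' delimiter and double-quoted multi-line fields shown in the sample, and a hypothetical local file name), Python's csv module parses quoted newlines correctly out of the box:

import csv

# Hypothetical local path; in Synapse this would be the file that was
# transferred as-is via the binary dataset.
with open("multiline_sample.csv", newline="", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter=";", quotechar='"', skipinitialspace=True)
    header = next(reader)
    for row in reader:
        # Each row is one logical record, even when a quoted field spans
        # several physical lines in the file.
        print(row)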
You can achieve this in Azure Data Factory by iterating through all lines and checking for the delimiter in each line, and then using string manipulation functions with Set Variable activities to convert the multi-line data back to single lines.
Look at the following example. I have a Set Variable activity that initializes the req variable with an empty value (taken from a parameter).
In the Lookup activity, create a dataset with the following configuration for the multiline CSV:
In the ForEach activity, I iterate over each row by setting the items value to @range(0,sub(activity('Lookup1').output.count,1)). Inside the ForEach, I have an If activity with the following condition:
@contains(activity('Lookup1').output.value[item()]['Prop_0'],';')
If this is true, then I concatenate the current line onto the req variable using 2 Set Variable activities.
temp: @if(contains(activity('Lookup1').output.value[add(item(),1)]['Prop_0'],';'),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],decodeUriComponent('%0D%0A')),concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' '))
actual (req variable): @variables('val')
For the false case, I have handled the concatenation as follows:
temp1: @concat(variables('req'),activity('Lookup1').output.value[item()]['Prop_0'],' ')
actual1 (req variable): @variables('val2')
Now, I have used a final variable to handle the last line of the file, with the following dynamic content:
@if(contains(last(activity('Lookup1').output.value)['Prop_0'],';'),concat(variables('req'),decodeUriComponent('%0D%0A'),last(activity('Lookup1').output.value)['Prop_0']),concat(variables('req'),last(activity('Lookup1').output.value)['Prop_0']))
Finally, I have added a Copy Data activity with a sample source file containing 1 column and 1 row (this is used to copy our actual data).
Now, configure the source file as shown below:
Create an additional column whose value is the final variable:
Create a sink with the following configuration and select mapping only for the column created above:
When I run the pipeline, I get the data as required (see the output screenshot for reference).
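For reference, the join-on-missing-delimiter idea the pipeline implements can be condensed into a short Python sketch (a simplified equivalent, not a line-by-line translation of the expressions above, and assuming ';' as the delimiter):

# A physical line containing the delimiter starts a new logical record;
# a line without it is a continuation of the previous record.
def merge_multiline(lines, delimiter=";"):
    records = []
    for line in lines:
        if delimiter in line or not records:
            records.append(line)
        else:
            records[-1] += " " + line
    return records

sample = [
    'Test; Text',
    '"1"; "This is a text',
    'which goes on on a second line',
    'and on on a third line"',
    '"2"; "Another Test"',
]
for record in merge_multiline(sample):
    print(record)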

Cucumber - Can't retrieve parameter value using .yaml file

I need to pass multiple parameters in Cucumber; one of them is a path parameter, and the second is a query parameter.
Both values are stored in a .yaml file. When I pass a parameter through the data table, Cucumber resolves it and I get the value. However, I need to pass the path parameter directly to the step definition.
Is it possible?
I've been searching the Cucumber docs and it seems like it is not.
Example of my step definition:
Given Send some request with specified item "${something.custom.path-parameter}" and query parameter
| query_parameter | ${value} |

JDBC variable names when called in the Groovy script not giving the correct value for a CSV parameterization

I have a JMeter test where a CSV file contains multiple rows of comma-separated values, for example: internalID,drivername,usreg,canadareg. I am using the CSV file to compare its values with the database table values. To do that comparison, I added a JDBC Request with the query select internalID,drivername,usreg,canadareg from data where internalid ='${internalID}' and provided variable names to store the column data of the result. I then use a JSR223 (Groovy) sampler and call the variable names in the script by declaring String a = vars.get("dintID_${counter}"), where dintID is the variable name provided in the JDBC Request. The issue is that when I run the test, the first line of data in the CSV file is processed successfully, and the second line of CSV data is passed to the SQL statement correctly; however, vars.get("dintid_${counter}") always returns the previous record, i.e. it does not move on to the next internalID (dintID). I have checked that my counter is incrementing. I have no idea how to resolve the issue. Does anyone know what mistake I am making?
If you take a look at the JSR223 Sampler documentation you will see that:
The JSR223 test elements have a feature (compilation) that can significantly increase performance. To benefit from this feature:
Use Script files instead of inlining them. This will make JMeter compile them if this feature is available on ScriptEngine and cache them.
Or Use Script Text and check Cache compiled script if available property.
When using this feature, ensure your script code does not use JMeter variables or JMeter function calls directly in script code as caching would only cache first replacement. Instead use script parameters.
So if the counter is referenced as the ${counter} JMeter variable directly in the script, it will always hold its initial value and won't increment on subsequent iterations.
So you need to change the line to:
String a = vars.get('dintID_' + vars.get('counter'))
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

Skip element in BizTalk flat file assembly?

I've been tasked to map an input XML (actually an SAP IDoc XML) and to generate a number of flat files. Each input XML may yield multiple output files (one output file per lot number), so I will be using xsl:key and the key() function in my mapping, based on the lot number.
The thing is, the lot number itself will not be in the file content, but the output file name needs to contain that lot number value.
So the question really is: can I map the lot number into the XML and have the flat file assembler skip it when it produces the file? Or is there another way the lot number can be applied as the file name by the assembler without having it inside the file itself?
In your orchestration you can set a context property for each output message:
msgOutput(FILE.ReceivedFileName) = "DynamicStuff";
msgOutput then goes to the send shape.
In your send port you set the output file like this:
FixedStuff_%SourceFileName%.xml
The result:
FixedStuff_DynamicStuff.xml
If the value is not required in the message content, don't map it. That's it.
To insert a value in the file name, the lot number in this case, you will need to promote that value to the FILE.ReceivedFileName context property. Then you can use the %SourceFileName% macro as part of the name setting in the Send Port. You can set FILE.ReceivedFileName by either property promotion or xpath() in an orchestration.
Bonus: sorting and grouping in XSLT is rather unwieldy, which is why I don't do that anymore. Instead, you can use SQL: BizTalk: Sorting and Grouping Flat File Data In SQL Instead of XSL

Pass DB Connection parameters to a Kettle a.k.a PDI table Input step dynamically from Excel

I have a requirement such that whenever I run my Kettle job, the database connection parameters must be taken dynamically from an Excel source.
Say I have an Excel file with the column names: HostName, Username, Database, Password.
I want to pass these connection parameters to my Table Input step dynamically whenever the job runs.
This is what I was trying to do.
You can achieve this by
reading the DB connection parameters from a source (e.g. Excel or in my example a CSV file)
storing the parameters in variables
using the variables in your connection setting.
Proceed as follows
Create another transformation for setting the variables (you cannot do this in the same transformation that uses it):
In the Set Variables element configure the variables:
In the element reading/writing your data create a new connection and set the connection parameters using ${variable_name}. Note that you have to blindly write ${password} into the appropriate field. Also note that this may be a security issue because the value may show up as plain text in log files!
In your job call the variable transformation first and then the functional part:
All you need is the XLS input and the Set Variables step. Define your variables as valid in the root job, and you can use them when defining the connection in subsequent jobs, as long as they're called by the same root job.
The "Copy rows to result" and "Get rows from result" steps are used to send information (rows of data) from one transformation to the next transformation or job within the same parent job. They're not used to send data between steps; that's what the hops are for.
