Azure Table Storage as Sink in Data Factory: Row Key

I want to pass a value for the row key as a parameter, just like the partition key. But the UI only gives me the option to use a unique identifier (GUID) or a source column. I actually need to use this same entity somewhere else: how will I query it if the row key is a random value?

Based on the official documentation, the partition key can be set to a custom value, but the row key can only be set to a source column name or to the default GUID value.
I think this is because of the guaranteed-uniqueness constraint. So if you want to control the row key value, you could add a row key column to your source data.
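For the querying concern: once the row key is carried in your source data, reading the entity back is a cheap point lookup. A minimal sketch, assuming the azure-data-tables Python SDK and hypothetical names:

from azure.data.tables import TableClient

# Hypothetical connection string and table name.
conn_str = "<storage account connection string>"
client = TableClient.from_connection_string(conn_str, table_name="orders")

# A point lookup by PartitionKey + RowKey is the cheapest query Table
# Storage offers, which is why a known, non-random RowKey matters.
entity = client.get_entity(partition_key="2023-10", row_key="order-001")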
Hope it helps you.

Related

Passing the Data Flow Parameter to the Sink Key Column in Azure Data Factory

I wanted to implement SCD Type 2 logic using dynamic tables and dynamic key fields from a config table, but I am having trouble passing a Data Flow parameter as the sink key column for my Alter Row activity. It does not accept the parameter value and always fails with an "invalid key column name" error. I tried picking the Data Flow parameter in the expression builder for the sink key column, passing the value from the Alter Row transformation, and naming the field with the parameter in the Select statement as well. Any help or suggestion is highly appreciated.
(Screenshots omitted: a sample of how I wanted to pass dynamic values in the sink mapping, and the attempt to give the dynamic value to the key column.)
You have "List of columns" selected, so ADF is looking for a column in your target table that is literally called "$TargetPK1Parameter".
Change the selector to "Custom expression" and enter a string array parameter. The parameter can be an array of strings that represent names of key columns in your target table.
It should look something like this:
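A sketch of the two pieces, with hypothetical parameter and column names:

Data Flow parameter: keyColumns as string[], e.g. defaulted to ['ProductID', 'Version']
Sink > Settings > Key columns, with "Custom expression" selected: $keyColumns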
I encountered a similar problem when trying to pass a parameterized composite key as part of the update method to the sink. The approach below lets me fully parameterize my data flow, and it handles both composite keys and single-column keys.
Here's how the data looks in my config table:
UpsertKeyColumn = DOMNAME,DDLANGUAGE,AS4LOCAL,VALPOS,AS4VERS
A parameter value is set in the Execute Data Flow activity (e.g. inside a ForEach over the config table):
upsert_key_column = @item().UpsertKeyColumn
Finally, in the Sink settings, "Custom expression" is selected for Key columns, and the following expression is entered:
split($upsert_key_column, ',')
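Put together, the moving parts are (names from the example above):

Config table row: UpsertKeyColumn = 'DOMNAME,DDLANGUAGE,AS4LOCAL,VALPOS,AS4VERS'
Execute Data Flow activity: upsert_key_column = @item().UpsertKeyColumn
Sink > Key columns (Custom expression): split($upsert_key_column, ',')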

How to pass a Data Flow Parameter as the Key Column in a Sink Transformation while updating data?

I am implementing SCD Type 2 through a Data Flow. I have created a parameter in it to which I pass a column name, and I am using this parameter as the key column in the Sink transformation.
(Screenshot: passing a parameter as the key column in the Data Flow.)
I selected Add Dynamic Content and then Parameter, and picked the parameter I had created in the Data Flow. It then shows as "$Key_col".
But when I run the pipeline, it gives me this error:
{"message":"at Sink 'sink1'(Line 56/Col 6): Column operands are not allowed in literal expressions. Details:at Sink 'sink1'(Line 56/Col 6): Column operands are not allowed in literal expressions","failureType":"UserError","target":"Update_Existing_Records","errorCode":"DFExecutorUserError"}
Can anyone please tell me how to resolve this error, or suggest a workaround for this problem?
Yes, this works. You just need to put single quotes around the parameter value, like this:
"'$Key_col'"
Double quotes are used for string interpolation in this solution, so paste it into your expression exactly as shown.
The key column doesn't support being set with a parameter. You can only choose an existing column in the sink.
The column name that you pick as the key here will be used by ADF as part of the subsequent update, upsert, or delete. Therefore, you must pick a column that exists in the sink mapping. If you do not wish to write the value to this key column, click "Skip writing key columns".
Please reference: Mapping data flow properties.
The parameter Key_col does not exist in the sink, even if a sink column has the same name.
Update:
1. Create the Data Flow parameter.
2. To perform an update, you must add an Alter Row transformation.
3. In the sink, choose the existing column 'name' as the key column.
4. The pipeline then runs successfully.
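A sketch of those settings, assuming an updateable sink with an existing column literally named 'name':

Alter Row > Alter row conditions: Update if -> true()
Sink > Settings > Update method: Allow update
Sink > Settings > Key columns: name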
Hope this helps.

How to have a unique key other than the primary key in Cassandra?

My English is not good!
I have a table in Cassandra 3.5 where the columns of a row do not all arrive at the same time. Uniqueness in this table is defined by a combination of columns, but some of them are null at first, so I cannot make them the primary key. Instead, I have defined an id column of type uuid.
How can I enforce uniqueness over those columns together in Cassandra?
Is my data model right?
How can I solve this problem?
You can't. Cassandra is not a relational DB. The partition and clustering keys are the only way to get a uniqueness constraint.
See this answer
To store unique values, create a separate table with your unique value as its key, and check whether the value exists by querying that table before inserting a row. But beware: even so, you cannot guarantee the value will be unique in your final table if there are two concurrent inserts.
Basically, I would recommend using Cassandra for what it really is, a data store, and finding a way to implement your business logic where it belongs: in your code.
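A minimal sketch of that lookup-table pattern, assuming the DataStax cassandra-driver and hypothetical keyspace/table/column names; the IF NOT EXISTS shown in the comments is the usual way to close the concurrency gap, at the cost of a lightweight transaction:

from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect('my_keyspace')  # hypothetical

# The combination of columns that must be unique, packed into one value.
unique_value = 'colA|colB|colC'

# Check-then-insert against the lookup table, as described above.
# This alone is still racy with concurrent writers.
exists = session.execute(
    "SELECT value FROM unique_values WHERE value = %s", (unique_value,)
).one()
if exists is None:
    # IF NOT EXISTS (a lightweight transaction) closes the remaining race,
    # at the cost of extra coordination between replicas.
    applied = session.execute(
        "INSERT INTO unique_values (value) VALUES (%s) IF NOT EXISTS",
        (unique_value,),
    ).was_applied
    if applied:
        session.execute(
            "INSERT INTO my_table (id, val) VALUES (uuid(), %s)",
            (unique_value,),
        )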

Time UUID type in pycassa

I'm having problems with using the time_uuid type as a key in my columnfamily. I want to store my records, and have them ordered by when they were inserted, and then I figured that the time_uuid is a good way to go. This is how I've set up my column family:
sys.create_column_family("keyspace", "records", comparator_type=TIME_UUID_TYPE)
When I try to insert, I do this:
q=pycassa.ColumnFamily(pycassa.connect("keyspace"), "records")
myKey=pycassa.util.convert_time_to_uuid(datetime.datetime.utcnow())
q.insert(myKey, {'somedata': 'somevalue'})
However, when I insert data, I always get an error:
Argument for a v1 UUID column name or value was neither a UUID, a datetime, or a number.
If I change the comparator_type to UTF8_TYPE, it works, but the order of the items when returned are not as they should be. What am I doing wrong?
The problem is that in your data model, you are using the time as a row key. Although this is possible, you won't get a meaningful ordering unless you also use the ByteOrderedPartitioner.
For this reason, most people insert time-ordered data using the time as a column name, not a row key. In this model, your insert statement would look like:
q.insert(someKey, {datetime.datetime.utcnow(): 'somevalue'})
where someKey is a key that relates to the entire time series that you're inserting (for example, a username). (Note that you don't have to convert the time to UUID, pycassa does it for you.) To store something more than a single value, use a supercolumn or a composite key.
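A short sketch of reading the row back, reusing the column family from above: because the comparator orders the TimeUUID column names, the columns come back in insertion-time order.

import datetime
import pycassa

q = pycassa.ColumnFamily(pycassa.connect("keyspace"), "records")
q.insert('myuser', {datetime.datetime.utcnow(): 'event 1'})
q.insert('myuser', {datetime.datetime.utcnow(): 'event 2'})

# Columns are returned ordered by the TIME_UUID_TYPE comparator,
# oldest first; pass column_reversed=True for newest first.
for when, value in q.get('myuser', column_count=10).items():
    print(when, value)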
If you really want to store the time in your row keys, then you need to specify key_validation_class, not comparator_type. comparator_type sets the type of the column names, while key_validation_class sets the type of the row keys.
sys.create_column_family("keyspace", "records", key_validation_class=TIME_UUID_TYPE)
Remember the rows will not be sorted unless you also use the ByteOrderedPartitioner.
The comparator for a column family is used for ordering the columns within each row. You are seeing that error because 'somedata' is valid utf-8 but not a valid uuid.
The ordering of the rows stored in cassandra is determined by the partitioner. Most likely you are using RandomPartitioner which distributes load evenly across your cluster but does not allow for meaningful range queries (the rows will be returned in a random order.)
http://wiki.apache.org/cassandra/FAQ#range_rp

Insert rows into Access db from C# using Microsoft.Jet.OLEDB.4.0, autonumber column is set to zero

I'm using C# and Microsoft.Jet.OLEDB.4.0 provider to insert rows into an Access mdb.
Yes, I know Access sucks. It's a huge legacy app, and everything else works OK.
The table has an autonumber column. I insert the rows, but the autonumber column is set to zero.
I Googled the question and read all the articles I could find on this subject. One suggested inserting -1 for the autonumber column, but this didn't work. None of the other suggestions I could find worked.
I am using OleDbParameter's, not concatenating a big SQL text string.
I've tried the insert with and without a transaction. No difference.
How do I get this insert to work (i.e. set the autonumber column contents correctly)?
Thanks very much in advance,
Adam Leffert
In Access it is possible to INSERT an explicit value into an IDENTITY (a.k.a. Autonumber) column. If you (or your middleware) are writing the value zero to the IDENTITY column and there is no unique constraint on it, that would explain what you are seeing.
Just to be clear, you should be using the syntax
INSERT INTO <table> (<column list>) ...
where the column list omits the IDENTITY column. Jet SQL will allow you to omit the entire column list, but it then implicitly includes the IDENTITY column, so always list the columns explicitly.
In Access/Jet, you can write explicit values to the IDENTITY column, in which case the value will obviously not be auto-generated. Therefore, ensure that neither you nor your middleware (ADO.NET, etc.) is explicitly writing a zero value to the IDENTITY column.
BTW, just for fun: the IDENTITY column in the table below will auto-generate the value zero on every second INSERT:
CREATE TABLE Test1
(
  ID INTEGER IDENTITY(0, -2147483648) NOT NULL,
  data_col INTEGER
);
When doing the insert, you need to be sure that you are NOT specifying a value for the Autonumber column, just as in SQL Server you don't insert a value for an identity column.
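A minimal sketch of that pattern, shown in Python with pyodbc for brevity (hypothetical .mdb path, using the Test1 table from above; the same idea applies to OleDbCommand in C#): omit the Autonumber column from the column list, then read the generated value back with SELECT @@IDENTITY on the same connection.

import pyodbc

# Hypothetical path; assumes the Access ODBC driver is installed.
conn = pyodbc.connect(
    r"Driver={Microsoft Access Driver (*.mdb)};DBQ=C:\data\legacy.mdb"
)
cur = conn.cursor()

# The column list omits the Autonumber column, so Jet generates it.
cur.execute("INSERT INTO Test1 (data_col) VALUES (?)", 42)

# Jet 4.0 supports @@IDENTITY: the last Autonumber generated on
# this connection.
new_id = cur.execute("SELECT @@IDENTITY").fetchone()[0]
conn.commit()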
