SSIS: Change data on import from Excel to SQL Server table

I'm converting all of my Excel data to numbers before importing it into my SQL table with SSIS.
This is my original table:
I need to achieve this:
Is it possible to do this with SSIS?

In your data flow, you need to provide a translation or mapping from your source item's value to the target's value. This is generally accomplished through the use of a Lookup Transformation. By default, a Lookup will fail the component if no match is found, and it performs case-sensitive matches, so an EX value would not match ex. An example of using the Lookup follows.
Since the Lookup gives you the ability to write a query, if you don't have these values in a table, you can "cheat" and write the query like this:
-- Inline lookup values: each Cut code paired with the CutId it should become
SELECT D.CutId, D.Cut
FROM
(
    VALUES
        (1, 'EX')
    ,   (2, 'ZZ')
) D(CutId, Cut);
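If the match also needs to be case-insensitive, one option (a sketch; it assumes your source column is named Cut and that you add a Derived Column upper-casing it with UPPER([Cut]) before the Lookup) is to store only upper-cased keys in the lookup query:

-- Upper-case the lookup keys so an upper-cased source value ('ex' -> 'EX') will match
SELECT D.CutId, UPPER(D.Cut) AS Cut
FROM
(
    VALUES
        (1, 'EX')
    ,   (2, 'ZZ')
) D(CutId, Cut);

With both sides normalized the same way, the Lookup's default exact match behaves as a case-insensitive one.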

Related

Azure Data Factory - Exists transformation in Data Flow with generic dataset

I'm having issues using the Exists transformation within a Data Flow with a generic dataset.
I have two sources (one from a staging table, "sourceStg", and one from a DWH table, "sourceDwh") and want to check whether the values in the UniqueIdentifier column of the staging table exist in the UniqueIdentifier column of the DWH table. For that I have a generic dataset which I query with a SQL statement containing parameters.
When I open the "Exists settings" I cannot choose any column from the source in the conditions, since the source is generic and has no projection until I run the data flow. However, I have a parameter passed in from the parent pipeline which provides the name of the column containing the UniqueIdentifier (the column names in staging and DWH are the same).
I tried adding the statement "byName($UniqueIdentifier)" in both the left and right column fields, but the engine resolves both as the sourceStg column, since the prefix of the source transformation is missing and it defaults to the first one. What I'm basically trying to achieve now is a statement like the following, which specifies the correct source transformation and the column containing the unique identifier via a parameter:
exists(sourceStg#$UniqueIdentifier == sourceDwh#$UniqueIdentifier)
But either the expression cannot be parsed, or the result does not retrieve the actual UniqueIdentifier value from the column and instead writes the statement itself (e.g. sourceStg#$UniqueIdentifier) as the column value.
The only workaround I have found so far is to add two Derived Columns that append a suffix to the UniqueIdentifier column in one source, plus a new parameter $UniqueIdentifierDwh which is populated with the parameter $UniqueIdentifier and the same suffix used in the derived columns.
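For reference, with that workaround in place the custom Exists condition can stay fully parameter-driven; a minimal sketch (assuming $UniqueIdentifierDwh is set in the pipeline to the $UniqueIdentifier value plus the same suffix added by the Derived Column) would be:

byName($UniqueIdentifier) == byName($UniqueIdentifierDwh)

Because the two column names are now distinct, byName() no longer resolves both sides to the sourceStg column.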
Any Azure Data Factory experts out there to help?
Thanks in advance!

Power Query: how to make a table with multiple values a parameter that uses OR

I have a question regarding Power Query and tables as parameters in Excel.
Right now I can create a table and use it as a parameter for Power Query via Drill Down.
But I'm unsure how I would proceed with a table that has multiple values. How can a table with multiple "values" be recognized as a parameter?
For example:
I have the following raw data and parameter tables:
Rawdata+parametertables
Now, if I wanted to filter Value2 with a parameter table, I would do a drill down of the parameter table and load it to Excel.
After that I have two tables, and I can filter Value2 with an OR function by 1 and 2.
Is it possible to somehow combine this into one table so that it still uses an OR function to search Value2?
I'm asking because I want it to be possible to just add more and more parameters into the table without creating a new table every time. Basically just copy-paste some parameters into the parameter table and be done with it.
Thanks for any help in advance
Assuming you use the parameters only for filtering: there are other ways, but this one looks best from a performance point of view.
You may create a Parameters table, so that you have tables like these:
Note that it's handy to have the same name (Value2) for the key column in both tables; otherwise Table.Join will create additional column(s) after merging the tables.
Add a step like this to filter the RawData table:
join = Table.Join(RawData, "Value2", Parameters, "Value2")
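Put together, the whole query might look something like this (a sketch; the workbook table names RawData and Parameters are assumptions, and the Parameters table is assumed to contain only the Value2 column):

let
    // Load both Excel tables from the current workbook
    RawData    = Excel.CurrentWorkbook(){[Name="RawData"]}[Content],
    Parameters = Excel.CurrentWorkbook(){[Name="Parameters"]}[Content],
    // Inner join: keeps only the RawData rows whose Value2 appears anywhere in the
    // Parameters table, which behaves like an OR over every listed parameter value
    Filtered   = Table.Join(RawData, "Value2", Parameters, "Value2")
in
    Filtered

Adding more values to the Parameters table then widens the filter without changing the query.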

How do I create a measure in Power Pivot that pulls a value from another table?

I have two tables that use a unique concatenated column for their relationship. I simply want to make a measure that uses the values from C4 of Table1. I thought I could use a simple formula like =VALUES(Table1[C4]), but I get the error "A table of multiple values was supplied where a single value was expected."
Side note: I realize the concatenation is unnecessary in this simple example, but it is necessary in the full data I am working with, which is why I added it to this example.
Here's a simplified set of tables for what I am trying to do:
Table1
Table2
Relationships
First, ask yourself: do I really need a calculated column? Can't this be calculated at runtime?
But if you still want to do it, you can use RELATED or RELATEDTABLE.
Keep in mind that RELATEDTABLE returns many values, so you should apply some type of aggregation such as SUMX or MAXX.
You can use context transition to retrieve the value:
= CALCULATE(MAX(Table1[C4]))
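A minimal sketch of both patterns mentioned above, written as calculated columns (the direction of each, and the Table2[C2] column in the second one, are assumptions based on this simplified example):

// Calculated column on Table2 (the many side): follow the relationship up to Table1
= RELATED ( Table1[C4] )

// Calculated column on Table1 (the one side): RELATEDTABLE returns many Table2 rows,
// so wrap it in an aggregation such as MAXX
= MAXX ( RELATEDTABLE ( Table2 ), Table2[C2] )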

Power Query: how can I make Power Query tolerate columns with the same name?

I have a table with several columns having the same name.
This table is updated and provided to Power BI regularly.
It has several columns with the same name, such as "Result", "Result", and so on.
However, Power BI automatically adds a number after my columns each time.
When I try to "force" Power BI not to add a number, I get the following message:
"The name "Result" is already used for a column..."
How could I change this?
The only way I see would be for the people using my file to extract the data and correct the names manually in Excel, which is not great.
You cannot change this behavior: Power BI needs a unique identifier to reference the data, so the column name must be unique within a table (the complete identifier is given by table + column); otherwise the tool won't be able to reference the data.
This rule usually applies to any tool that manages data, and sometimes to the data themselves (that depends on the format, though). How could the tool get data from "Result" if more than one column has this identifier? Which one is the right one? The tool does not know, and depending on the context it will either give you an error or fix the issue itself by making the names unique.
Note that Excel will also append numbers to columns with the same name if you put the data into a proper table (Insert > Table); in fact, an Excel sheet can be considered unstructured free-form data, whereas an Excel table enforces the data structure.
Most tools (like Power BI) will also enforce data types.

How do we create a generic mapping data flow in Data Factory that will dynamically extract data from different tables with different schemas?

I am trying to create an Azure Data Factory mapping data flow that is generic for all tables. I am going to pass the table name, the primary column for join purposes, and the other columns to be used in groupBy and aggregate functions as parameters to the data flow.
parameters to df
I am unable to reference this parameter in groupBy:
Error: DF-AGG-003 - Groupby should reference atleast one column -
MapDrifted1 aggregate(
) ~> Aggregate1,[486 619]
Has anyone tried this scenario? Please help if you have some knowledge of this, or if it can be handled in a U-SQL script.
You first need to look up your parameter's string name in your incoming source data to locate the column metadata and assign it.
Just add a Derived Column before your Aggregate and it will work. Call the column 'groupbycol' in your Derived Column and use this formula: byName($group1).
In your Aggregate, select 'groupbycol' as your group-by column.
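In data flow script terms, the pattern looks roughly like this (a sketch; the count() aggregate and the stream names DerivedColumn1/Aggregate1 are illustrative assumptions):

MapDrifted1 derive(groupbycol = byName($group1)) ~> DerivedColumn1
DerivedColumn1 aggregate(
    groupBy(groupbycol),
    rowcount = count()
) ~> Aggregate1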
