I have my source table in SAP HANA with almost 80 columns; the destination table on Azure DW (per the mapping document) has 60 columns, which are mapped to source columns with different names. When I try to create a source dataset, I select the linked service that my TL created, but once I select that HANA linked service it does not show me any table option under the linked service to select and import schemas. Why is that? It throws a gateway timeout error.
PS: The linked service was created by my manager and I don't know any of its credentials.
The SAP HANA connector only supports a query, not a table name.
Create a pipeline first and drag a copy activity into the pipeline. Reference your dataset in your copy activity.
Then you will see a Browse table button. You can construct your query there.
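As an illustration of the kind of query you can build there (the schema, table, and column names below are hypothetical), it can also project and rename just the columns your destination needs:
-- hypothetical HANA source query: keep only the mapped columns and rename them to the destination names
SELECT "CUSTOMER_ID"   AS "CustomerId",
       "CUSTOMER_NAME" AS "CustomerName",
       "ORDER_DATE"    AS "OrderDate"
FROM "MY_SCHEMA"."MY_SOURCE_TABLE"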
When you want to edit your JSON, use the button at the pipeline level instead of the one on the activity node.
I solved it. The issues were that the source had column concatenation, the destination table didn't have the correct concatenation logic, and a few columns were not present in the source. I did explicit mapping and made it a one-to-one relation from source to destination in Azure. It works now. Thank you all!
I have a requirement where I need to move data from multiple tables in Oracle to ADLS.
The size of the data is around 5 TB. I might use these files in ADLS later to connect Power BI.
Is there any easy and efficient way to do this?
Thanks in advance!
You can do this by using a Lookup activity and a ForEach activity in Azure Data Factory.
Create a table or file to store the list of table names that need to be extracted.
Use a Lookup activity to get the list of tables.
Pass the list to the ForEach activity and, looping over each table, copy the current item() from Oracle to ADLS.
In the ForEach activity's Settings -> Items, add the following expression via Add dynamic content:
@activity('Get-Tables').output.value
Add a Copy activity inside the ForEach activity.
In the Copy activity, choose Source > Query and enter the following:
SELECT * FROM @{item().Table_Name}
Now add the sink dataset (ADLS) and execute your pipeline.
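If you prefer to drive the list from Oracle's data dictionary instead of a hand-maintained table or file, the Lookup's source query could be something along these lines (a sketch; the schema name is a placeholder, and the quoted alias matches the Table_Name property used in the Copy query above):
-- hypothetical Lookup source query: one row per table to copy
SELECT table_name AS "Table_Name"
FROM all_tables
WHERE owner = 'MY_SCHEMA'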
Please refer to the Microsoft documentation on creating a linked service for Oracle.
Please go through this article by Sean Forgatch in MODERN DATA ENGINEERING if you face any issues in the process.
I am using Excel as the data source. These are my columns:
I am dropping the Attribute Name column in my Select transformation.
But the values are not dropped. Only the column names are dropped.
Can you please advise how I can drop the Attribute Name column?
Thanks
This is a known issue with data misalignment in the data flow data preview. We have fixed it and are waiting for the current deployment to take effect. It only impacts the data preview, so you can ignore the incorrect data shown there and run the data flow pipeline directly (debug or trigger run), which works correctly.
I am trying to do something that should be fairly simple, but I get no results.
I want to read an Excel file from our SharePoint site (O365) and insert the data from the first worksheet into a table in SQL Server.
Actually quite simple and straightforward. Well, it sounds like that...
Apparently there is more to it than reading the file and inserting the data into SQL Server.
Who can provide me with info, tutorials or (even better) step-by-step instructions?
A bonus would be looping through the (online) folder and importing all Excel files, creating a table for each worksheet.
Edit: I am able to collect the Excel file and email it to me as an attachment.
I just have no clue how to insert it in SQL Server.
This is possible in 4 steps:
Trigger (duh)
Excel Business component - Get Tables
For each component - Value (list of items)
SQL Server component - Insert Row (V2) - make sure you create parameters for all the columns and map them to the dynamic content offered
(I inserted a SQL Server component - Execute query after step 2 to truncate the destination table; see the sketch below)
Not looping yet, but this at least enables me to insert rows from an online Excel file into an Azure SQL database.
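For reference, the Insert Row (V2) step writes into a table that must already exist with one column per mapped parameter, and the truncate mentioned above is an ordinary statement against it. A minimal sketch, with made-up table and column names:
-- hypothetical destination table, one column per mapped Excel column
CREATE TABLE dbo.ExcelImport (
    CustomerId   INT,
    CustomerName NVARCHAR(200),
    OrderDate    DATE
);
-- statement for the Execute query step that clears the table before each run
TRUNCATE TABLE dbo.ExcelImport;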
You can use Azure Data Factory for that: create the connection to your Excel file, then to SQL, and use a Copy & transform pipeline.
I am trying to copy data from SAP HANA to Azure Data Lake Store (DLS) using a Copy activity in a data pipeline via Azure Data Factory.
Our copy activity runs fine and we can see that rows made it from HANA to the DLS, but they don't appear to have column names (instead they are just given 0-indexed numbers).
This link says "For structured data sources, specify the structure section only if you want to map source columns to sink columns, and their names are not the same."
We are fine using the original column names from the SAP HANA table, so it seems like we shouldn't need to specify the structure section in our dataset. However, even when we do, we still just see numbers for column names.
We have also seen the translator property at this link, but are not sure if that is the route we need to go.
Can anyone tell me why we aren't seeing the original column names copied into DLS and how we can change that? Thank you!
UPDATE
Setting the firstRowAsHeader property of the format section on our dataset to true basically solved the problem. The console still shows the numerical indices, but now includes the headers we are after as the first row. Upon downloading and opening the file, we can see the numbers are not there (the console just shows them for whatever reason), and it is a standard comma-delimited file with a header row and one row entry per line.
Example:
COLUMNA,COLUMNB
aVal1,bVal1
aVal2,bVal2
We can now tell our sources and sinks to write and expect this format when reading.
BONUS UPDATE:
To get rid of the numerical indices and see the proper column headers in the console, click Format in the top-left corner, and then check the "First row is a header" box toward the bottom of the resulting blade
See the update above.
The format.firstRowAsHeader property needed to be set to true.
I have a SharePoint list which I have linked to in MS Access.
The information in this table needs to be compared to information in our data warehouse based on keys that both sets of data have.
I want to be able to create a query which will upload the ishare data into our data warehouse under my login, run the comparison, and then export the details to Excel somewhere. MS Access seems to be the way to go here.
I have managed to link the ishare list (with difficulties due to the attachment fields) and then create a local table based on it.
I have managed to create the temp table in my Volatile space.
How do I append the newly created table that I created from the list into my temporary space?
I am using Access 2010 and SharePoint 2007.
Thank you for your time
If you can avoid using Access, I'd recommend doing so, since it is an extra step for what you are trying to do. You can easily manipulate or mesh data within the Teradata session and export the results.
You can run the following types of queries using the standard Teradata SQL Assistant:
CREATE VOLATILE TABLE NewTable (
column1 DEC(18,0),
column2 DEC(18,0)
)
PRIMARY INDEX (column1)
ON COMMIT PRESERVE ROWS;
Change SQL Assistant to Import Mode (File -> Import Data).
INSERT INTO NewTable (?,?)
Browse for your file; this example expects a comma-delimited file with two numeric columns, with column one being the index.
You can now query this table or join it to any information in the data warehouse.
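For example, the comparison could be a query like this (the warehouse table and key column names below are made up):
-- hypothetical comparison: list uploaded keys that are missing from the warehouse
SELECT n.column1
FROM NewTable n
LEFT JOIN warehouse_db.dim_customer w
  ON w.customer_key = n.column1
WHERE w.customer_key IS NULL;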
When you are finished you can drop with:
DROP TABLE NewTable;
You can export results using File -> Export Data as well.
If this is something you plan on running frequently, there are many ways to easily do these types of imports and exports. The Python module pandas has simple functionality for reading a query directly into DataFrame objects and writing those objects to Excel through the pandas.io.sql.read_frame() and .to_excel() functions.