In my current project I need to get an Excel file, which receives values from a machine, into an Access database so I can work with the values and import them into the data model.
The problem is that some of the values give invalid results because of the way they are saved. For example, the timestamp is saved as
030420 instead of 03:04:20, and Access can't handle that and gives me a #Num! error.
I cannot simply change the data type in Excel, because the whole workbook is refreshed every hour by a source that I can't influence.
Any help appreciated.
If Erik's proposal does not work, you can
- create a backup copy of your Excel source
- tweak the file: enter text in the first row of the problematic columns
- link the tweaked file into Access
- put the real file back in place.
Now the problematic columns should be read as Text, and you can build a query that handles any remaining issues like conversion and null handling.
Link, don't import, the Excel file, and you have a linked table.
Now, use this linked table as the source in a simple select query where you modify the data and alias the fields as needed. For example:
Select
F1 As SomeName,
F2 As OtherName,
TimeSerial(Mid([F5],1,2),Mid([F5],3,2),Mid([F5],5,2)) As TrueTime
From
LinkedTable
Where
F7 Is Not Null
Then use this query for your import.
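For instance, if the select above is saved as qryImport (a hypothetical name) and the target table is MyTable, the import can be an append query:

Insert Into MyTable (SomeName, OtherName, TrueTime)
Select SomeName, OtherName, TrueTime
From qryImport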
Consider querying the Excel file instead of using a linked table.
The query can read an Excel range directly:
SELECT * FROM
[Excel 12.0 XML;DATABASE=PathToMyExcel;HDR=Yes;IMEX=1].[MyRange] t
Then, you can use functions like TimeSerial to cast numbers to time values.
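Tying this back to the timestamp question above (a sketch; PathToMyExcel and MyRange come from the snippet above, and F5 is an assumed column name):

SELECT t.F1, t.F2,
       TimeSerial(Mid(t.F5,1,2), Mid(t.F5,3,2), Mid(t.F5,5,2)) AS TrueTime
FROM [Excel 12.0 XML;DATABASE=PathToMyExcel;HDR=Yes;IMEX=1].[MyRange] t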
I am using the Toad Automation Designer to export data from a table to Excel. Unfortunately, my table contains more than 65,000 rows (100k), and a single Excel file can only hold 65k entries.
My workaround was to write two SQL statements to create two Excel files:
select * from my_table offset 0 rows FETCH NEXT 65000 ROWS ONLY
select * from my_table offset 65000 rows
That is OK for the moment, but I am looking for a more dynamic way to export the whole table into several Excel files without writing multiple SQL statements, because in the future I may have 300k entries in my database table.
So I am looking for a way to write something like a little script in the Automation Designer, or something similar. Any ideas?
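One dynamic possibility (a sketch, assuming an Oracle back end, which the OFFSET/FETCH syntax above suggests, and keeping the 65,000-row page size) is to generate the paged statements rather than hand-writing them:

select 'select * from my_table offset ' || ((level - 1) * 65000)
       || ' rows fetch next 65000 rows only' as stmt
from (select ceil(count(*) / 65000) as pages from my_table)
connect by level <= pages

Each row of the result is one export statement, so the number of statements grows with the table instead of being fixed at two.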
I had a similar issue: the Toad .xls download populates one tab (64K rows). The only difference is that my Toad (version 11) would then populate another tab with the residual data (the next 64K rows), then another tab, and so on until all the data was downloaded; it sometimes took many tabs.
I changed the download file type to .xlsx and then all the data downloaded to one tab.
A safety tip: after the download, Excel will have no problem opening the downloaded .xlsx, but I have to open/save/close the downloaded .xlsx in Excel before using an MS Access import spec (otherwise Access's import spec complains the file is unreadable).
I'm provided with a folder of Excel files. Each represents one form with data entered in specific cells. Each file has the same format, and each would form ONE row of information to be imported into my SQL Server database.
I believe I can loop through each Excel file in the folder; however, I am having trouble finding the right tools to extract these specific cells and merge them into a single row to insert into the table.
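For illustration only, specific cells of a single workbook can be read as one row with SQL Server's OPENROWSET (a sketch; it assumes the ACE OLE DB provider is installed, 'Ad Hoc Distributed Queries' is enabled, the cells of interest sit in one contiguous range, and the table, path, and range names are placeholders):

INSERT INTO dbo.FormData (FieldA, FieldB, FieldC)
SELECT F1, F2, F3
FROM OPENROWSET(
    'Microsoft.ACE.OLEDB.12.0',
    'Excel 12.0;Database=C:\Forms\Form001.xlsx;HDR=NO',
    'SELECT * FROM [Sheet1$B2:D2]')

With HDR=NO the provider auto-names the columns F1, F2, F3; looping over the folder would still need an outer driver such as a script or SSIS.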
Power Query to the rescue! :)
http://excelunplugged.com/2015/02/10/get-data-from-folder-in-power-query/
I ended up writing some VBA instead to move the data into tabular/list form on one Excel sheet, then used that document to feed SSIS. So far, it does not seem like SSIS can do that initial part.
We need to upload a small number of additional records to a table from an Excel sheet. Is there a way to use the Access import function to add the additional data to the table without truncating it? The table was created by uploading the same Excel sheet, but now, when records are added, we need to append them to the table. The tables are linked to SQL Server, but I do not want to use SSIS because there are only a few records, and there must be a way to do this with Access functions. Suggestions, please.
It may be easiest to link the Excel sheet and run an append query to add the data from Excel to the existing table. Once linked, this can be done in the query design window.
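A minimal sketch of such an append query (assuming the linked sheet appears as ExcelLinked, the target table is MyTable, and the field names are placeholders):

INSERT INTO MyTable (Field1, Field2)
SELECT Field1, Field2
FROM ExcelLinked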
You did not specify versions of Excel or Access.
I did this with a test Excel 2003 sheet with cells containing 1000+ characters. An import into Access 2003 detects the data type as a Memo field, which is correct when there are that many characters, so it should work for you. Your Excel data may have other ingredients causing an import issue. How is the Excel data derived?
Have you tried importing to Access? It should work fine. If your ultimate target is another database why use Access as an intermediary?
I agree a linked table seems like a really simple method to update a table if you are using Access, but that is your choice.
When an Excel data source is used in SSIS, the data types of the individual columns are derived from the data in those columns. Is it possible to override this behaviour?
Ideally we would like every column delivered from the excel source to be string data type, so that data validation can be performed on the data received from the source in a later step in the data flow.
Currently, the Error Output tab can be used to ignore conversion failures - the data in question is then null, and the package will continue to execute. However, we want to know what the original data was so that an appropriate error message can be generated for that row.
According to this blog post, the problem is that the SSIS Excel driver determines the data type for each column based on reading the values of the first 8 rows:
- If the top 8 records contain an equal number of numeric and character types, then the priority is numeric.
- If the majority of the top 8 records are numeric, it assigns the data type as numeric, and all character values are read as NULLs.
- If the majority of the top 8 records are of character type, it assigns the data type as string, and all numeric values are read as NULLs.
The post outlines two things you can do to fix this:
- Add IMEX=1 to the end of your Excel driver connection string (see the example connection string below). This makes the driver treat columns with intermixed types as text. However, this is not sufficient if the data in the first 8 rows is all numeric.
- In the registry, change the value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows to 0. This ensures that the driver looks at all the rows to determine the data type for the column.
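For reference, a connection string carrying the IMEX flag might look like this (a sketch; the ACE provider version and the path are assumptions, not taken from the post):

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\MyFile.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";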
Yes, you can. Just go into the output columns list on the Excel source and set the type for each of the columns.
To get to the output columns list, right-click the Excel source, select 'Show Advanced Editor', and click the tab labeled 'Input and Output Properties'.
A potentially better solution is to use the Derived Column component, where you can build "new" columns for each column in Excel. This has several benefits:
- You have more control over what you convert to.
- You can put in rules that control the change (e.g. if null, give me an empty string, but if there is data, give me the data as a string); see the expression sketch below.
- Your data source is not tied directly to the rest of the process (i.e. you can change the source, and the only place you will need to do work is the derived column).
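The null-handling rule from the second bullet could look like this in the Derived Column expression language (a sketch; MyCol and the 255 length are placeholders):

ISNULL(MyCol) ? "" : (DT_WSTR, 255)MyCol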
If your Excel file contains a number in the first row of data for the column in question, it seems that the SSIS engine will reset the type to a numeric type. It kept resetting mine. I went into my Excel file and changed the numbers to "numbers stored as text" by placing a single quote in front of them. They are now read as text.
I also noticed that SSIS uses the first row to IGNORE what the programmer has indicated is the actual type of the data (I even told Excel to format the entire column as Text, but SSIS still used the data, which was a bunch of digits) and reset the type. Once I fixed that by putting a single quote in front of the number in the first row of data, I thought it would get it right, but no, there was additional work.
In fact, even though the SSIS external data source column now has the type DT_WSTR, it will still read 43567192 as 4.35671E+007. So you have to go back into your Excel file and put single quotes in front of all the numbers.
Pretty LAME, Microsoft! But there's your solution. I have no idea what to do if the Excel file is not under your control.
I was looking for a solution to a similar issue but didn't find anything on the internet. Although most of the solutions I found work at design time, they don't work when you want to automate your SSIS package.
I resolved the issue and made it work by changing the properties of the Excel Source. By default, the AccessMode property is set to OpenRowSet; if you change it to SQL Command, you can write your own SQL to convert any column as you wish.
For me, SSIS was treating the NDCCode column as float, but I needed it as a string, so I used the following SQL:
Select [Site], CStr([NDCCode]) As NDCCode From [Sheet1$]
The Excel source in SSIS behaves erratically. SSIS determines the type of data in a particular column by reading the first 10 rows; hence the issue. If you have a text column with null values in the first 10 rows, SSIS takes the data type as Int. With a bit of struggle, here is a workaround:
- Insert a dummy row (preferably as the first row) in the worksheet. I prefer doing this through a Script Task; you may consider using some service to preprocess the file before SSIS connects to it.
- With the dummy row, you are sure that the data types will be set as you need.
- Read the data using the Excel source and filter out the dummy row before you take it for further processing (see the sketch below).
I know it is a bit shabby, but it works :)
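If the source is switched to SQL Command access mode (described in an earlier answer), the filtering step could be a simple WHERE clause (a sketch; the 'DUMMY' marker value and the F1 column name are assumptions):

SELECT * FROM [Sheet1$] WHERE F1 <> 'DUMMY'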
I was able to fix this issue. While creating the SSIS package, I manually changed the specific column to text (open the Excel file, select the column, right-click the column, select Format Cells, select Text on the Number tab, and save the Excel file).
Now create the SSIS package and test it; it works. Then try using an Excel file where this column was not set as text: it worked for me, and I could execute the package successfully.
This can be resolved simply: just untick the box "First row as column names" and all data will be collected as the text data type. The only downside of this choice is that you have to manage the column names yourself (replacing the auto names Column 1, Column 2, etc.) and handle the first row, which contains the column names.
I had trouble implementing the solution here; I could follow the instructions, but it only gave new errors.
I solved my conversion issues by using a Data Conversion component, which can be found in the SSIS Toolbox under Data Flow Transformations. I placed the Data Conversion between my Excel Source and OLE DB Destination, linked Excel to Data Conversion and Data Conversion to OLE DB, then double-clicked Data Conversion to bring up the list of data columns. I gave the problem column a new alias and changed the Data Type column.
Lastly, in the Mappings of the OLE DB Destination, use the alias column name rather than the original Excel column name. Job done.
You can use a Data Conversion component to convert to the desired data types.