I am trying to export some data to Excel via SQL-Server SSIS.
Everything works so far, but the data in Excel is not recognized as dates or decimal numbers, so the user of the sheet first has to reformat everything by hand.
Is there a way to export the data directly so that the correct format is recognized?
I select the data via an OLE DB Source from SQL Server 2014 and then I do a data conversion to map the fields to the SSIS data types:
VARCHAR to DT_WSTR,
DECIMAL to DT_R8,
DATE to DT_DATE
Finally I use the Excel export task to create an .xlsx file dynamically.
That works, but I want the data to arrive in a format that lets the user of the Excel file filter and summarize it directly. That part doesn't work.
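For reference, the source query looks roughly like this, with the casts made explicit so the Data Conversion mappings stay predictable. The table dbo.Orders and the column names here are only placeholders:

-- Hypothetical source query: cast explicitly so the SSIS pipeline types are predictable
SELECT
    CAST(CustomerName AS NVARCHAR(100)) AS CustomerName, -- arrives as DT_WSTR
    CAST(Amount       AS FLOAT)         AS Amount,       -- arrives as DT_R8
    CAST(OrderDate    AS DATE)          AS OrderDate     -- arrives as DT_DBDATE; the Data Conversion then makes it DT_DATE
FROM dbo.Orders;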
I'm using the Dutch version of Word and Excel 2016 to fill in data from an Excel table in a Word document using Mail Merge. When doing so, my dates are represented as numbers. I tried using the \# format switch, both in English and in Dutch, but nothing is working. I checked the Excel file and the data is properly formatted as a date. So far, I tried the following formats in my Word document, including adding and removing spaces before and after the quotation marks:
{MERGEFIELD FieldName \# "dd-mm-jjjj"}
{MERGEFIELD FieldName \# "dd-MM-yyyy"}
{MERGEFIELD FieldName .\# "dd-MM-yyyy"} (adding the dot was only mentioned on one website)
I import the data using the 'Use an Existing List' and 'Insert Merge Fields' function in Word.
Does anyone know what I should change to get a proper Date format in my Word document?
FYI, other numbering formats are working fine.
If the dates are being represented as numbers, that means you have mixed data types in the Excel column.
By default, Word 2002 & later use the OLE DB provider to get records from the data source. Because the OLE DB provider is designed to return data in a way that is compatible with databases, it requires a specific data type for each field, and every record in that field must be of that data type. When using other data sources, the OLE DB provider queries the first 8 records to determine the data type for each field (the 8 can be changed in the Windows Registry, but it’s not advisable to do so). This can lead to unexpected results with data sources such as Excel workbooks, where rows (records) in a column (field) can have different data types.
When the OLE DB provider gets data from a column with mixed data types, records that don't conform to the determined data type for the column are liable to not be handled correctly. The most common mailmerge issues arising out of this include:
numbers, but not text or dates, being output; and
dates being output as numbers,
for some records.
Ideally, one would ensure each field has only one data type. Workarounds include:
Inserting a dummy first record containing data in the format that is not being output correctly; or
Reordering the data so the first record has content in the format that is not otherwise being output correctly.
If you're unable to do either, see Importing Date and Time Values From Excel and Access in my Microsoft Word Date Calculation Tutorial, available at:
http://www.msofficeforums.com/word/38719-microsoft-word-date-calculation-tutorial.html
or:
http://www.gmayor.com/downloads.htm#Third_party
Do read the document's introductory material.
I am having an issue with a program. The macro in Access imports an Excel file, appends the imported data to an already created table, and then runs a check to make sure that the table was appended correctly.
I have tried for hours to figure out why my imported tables are not being appended. I know that this is a formatting issue. There are six columns in my Excel table. The first three columns have the Text data type. The fourth has the Number data type. These are fine, since the field properties match exactly between the imported table and the already existing table.
So my issue is with the last two columns. The last two columns in the existing table in Access have the Date/Time data type. The format of these columns as shown in Field Properties is m/d/yy;# (I'm not sure what this means, and an explanation of it might be helpful).
The imported table has the Date/Time data type with the format m/d/yyyy. I have tried changing the format in Excel to m/d/yy and to m/d/yy;#. Both got the format to show correctly in Excel, just as it does in Access for the correct table, but when I import, this formatting is not kept.
The check does not register the rows with new dates and the table does not get appended. I am looking for an easy fix in Excel to change my data's formatting, as there are 17 separate macros that I have to run each month.
I have a field in a SQL table which is of data type money. I'm loading the contents of this table into an Excel destination using SSIS. The Excel destination needs this money column to have a $ symbol, i.e. basically formatted as currency, but that doesn't seem to work: I have to manually format the output each time. I can convert it using a Derived Column in SSIS and add a $ symbol; however, this would convert the field to a string and load it into Excel as text, which is not what I want.
Any inputs?
I tried adding a sample row with proper formatting and hiding the row in the Excel destination before loading it. That doesn't work either.
Thanks.
A hidden sample row works for me. Here are the steps I suggest:
Create the Excel file with the column's data type defined as Currency with $ at the start.
Add the column name, e.g. "Total". In the second row put 0 in the cell and ensure that it is shown as "$0,00".
Create a data flow with your SQL Server table as the source and an Excel destination created based on your Excel file. Define the mapping, etc.
Right click on the created Excel destination, click "Show Advanced Editor", and go to "Input and Output Properties". Open the "External Columns" list and ensure that the "Total" column is picked up automatically as currency [DT_CY].
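Alternatively, if you let SSIS create the destination sheet for you, the generated table definition can declare the column as Currency, and that Currency type is what maps to DT_CY in the data flow. A rough sketch of that Jet/ACE-style DDL, with placeholder sheet and column names:

CREATE TABLE `Sheet1` (
    `Total` Currency
)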
Check this post as well: http://sqlserversolutions.blogspot.com/2011/10/numeric-gets-converted-to-text-in-excel.html
When an Excel data source is used in SSIS, the data types of each individual column are derived from the data in the columns. Is it possible to override this behaviour?
Ideally we would like every column delivered from the excel source to be string data type, so that data validation can be performed on the data received from the source in a later step in the data flow.
Currently, the Error Output tab can be used to ignore conversion failures - the data in question is then null, and the package will continue to execute. However, we want to know what the original data was so that an appropriate error message can be generated for that row.
According to this blog post, the problem is that the SSIS Excel driver determines the data type for each column based on reading values of the first 8 rows:
If the top 8 records contain an equal number of numeric and character values, then the priority is numeric.
If the majority of the top 8 records are numeric, it assigns the column a numeric data type and all character values are read as NULLs.
If the majority of the top 8 records are of character type, it assigns the column a string data type and all numeric values are read as NULLs.
The post outlines two things you can do to fix this:
First, add IMEX=1 to the Extended Properties of your Excel driver connection string (see the sketch after this list). This tells the driver to import columns with intermixed data types as text. However, this alone is not sufficient if the data in the first 8 rows is all numeric.
In the registry, change the value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows to 0. This will ensure that the driver looks at all the rows to determine the data type for the column.
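As a rough illustration of the first fix, the IMEX flag sits inside the Extended Properties section of the connection string. The provider version, file path and other properties here are placeholders; adjust them to whatever your package already uses:

Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Files\MyWorkbook.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";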
Yes, you can. Just go into the output columns list on the Excel source and set the type for each of the columns.
To get to the output columns list, right click on the Excel source, select 'Show Advanced Editor', and click the tab labeled 'Input and Output Properties'.
A potentially better solution is to use the Derived Column component, where you can actually build "new" columns for each column in Excel. This has the following benefits:
You have more control over what you convert to.
You can put in rules that control the change (i.e. if null, give me an empty string, but if there is data then give me the data as a string; see the expression sketch after this list).
Your data source is not tied directly to the rest of the process (i.e. you can change the source and the only place you will need to do work is in the derived column).
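As a minimal sketch of that null-handling rule, a Derived Column expression along these lines emits an empty string for NULLs and otherwise casts the value to a string (the column name MyColumn and the length 255 are just placeholders):

ISNULL([MyColumn]) ? "" : (DT_WSTR, 255)[MyColumn]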
If your Excel file contains a number in the column in question in the first row of data, it seems that the SSIS engine will reset the type to a numeric type. It kept resetting mine. I went into my Excel file and changed the numbers to "Numbers stored as text" by placing a single quote in front of them. They are now read as text.
I also noticed that SSIS uses the first row and IGNORES what the programmer has indicated is the actual type of the data (I even told Excel to format the entire column as TEXT, but SSIS still looked at the data, which was a bunch of digits, and reset the type). Once I fixed that by putting a single quote in front of the number in the first row of data in my Excel file, I thought it would get it right, but no, there is additional work.
In fact, even though the SSIS External DataSource Column now has the type DT_WSTR, it will still read 43567192 as 4.35671E+007. So you have to go back into your Excel file and put single quotes in front of all the numbers.
Pretty LAME, Microsoft! But there's your solution. I have no idea what to do if the Excel file is not under your control.
I was looking for a solution to a similar issue, but didn't find anything on the internet. Most of the solutions I found work at design time, but they don't work when you want to automate your SSIS package.
I resolved the issue and made it work by changing the properties of "Excel Source". By default the AccessMode property is set to OpenRowSet. If you change it to SQL Command, you can write your own SQL to convert any column as you wish.
For me SSIS was treating the NDCCode column as float, but I needed it as a string and so I used following SQL:
Select [Site], Cstr([NDCCode]) as NDCCode From [Sheet1$]
The Excel source in SSIS behaves strangely. SSIS determines the type of data in a particular column by reading the first 8 rows by default, hence the issue. If you have a text column with null values in those first rows, SSIS takes the data type as Int. With a bit of struggle, here is a workaround:
Insert a dummy row (preferably as the first row) in the worksheet. I prefer doing this through a Script Task; you may consider using some service to preprocess the file before SSIS connects to it.
With the dummy row, you are sure that the data types will be set as you need.
Read the data using the Excel source and filter out the dummy row before you take it for further processing (see the sketch after these steps).
I know it is a bit shabby, but it works :)
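As a small sketch of the filtering step, the Excel source's SQL command (or a downstream Conditional Split) can drop the dummy row again once it has steered the type guessing; the row stays physically in the sheet, the query just keeps it out of the data flow. The sheet name Sheet1$, the Flag column and the 'DUMMY' marker value are all hypothetical:

SELECT * FROM [Sheet1$] WHERE [Flag] <> 'DUMMY'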
I could fix this issue. While creating the SSIS package, I manually changed the specific column to text (open the Excel file, select the column, right click on the column, select Format Cells, on the Number tab select Text, and save the Excel file).
Now create the SSIS package and test it. It works. Then try using an Excel file where this column was not set as text.
It worked for me and I could execute the package successfully.
This can be resolved simply: just untick the box "First row has column names" and all data will be collected as the text data type. The only downside of this choice is that you have to manage the column names yourself, working from the auto-generated names (column 1, 2, etc.), and handle the first row, which contains the column names.
I had trouble implementing the solution here - I could follow the instructions, but it only gave new errors.
I solved my conversion issues by using a Data Conversion component. This can be found in the SSIS Toolbox under Data Flow Transformations. I placed the Data Conversion between my Excel Source and OLE DB Destination, linked the Excel Source to the Data Conversion and the Data Conversion to the OLE DB Destination, then double clicked the Data Conversion to bring up the list of data columns. I gave the problem column a new alias and changed its entry in the Data Type column.
Lastly, in the Mappings of the OLE DB Destination, use the Alias column name, rather than the original Excel column name. Job done.
You can use a Data Conversion component to convert to the desired data types.