I'm importing a table from SQL Server into Excel with VBA.
Here is the relevant part of the code:
For Each fld In rs.Fields
    Sheet1.Cells(row, col).NumberFormat = "#"
    Sheet1.Cells(row, col).Value = fld
    col = col + 1
Next
I have a field in SQL Server which is DateTime, but it is imported into Excel as an integer (some weird number appears). I've explicitly specified
Sheet1.Cells(row, col).NumberFormat = "#"
but that didn't help. How can I import a DateTime field from SQL Server into Excel either as DateTime or as text?
I want to keep the same format as in SQL Server, which is 2017-11-01 00:00:00.000.
I was able to fix it by converting the datetime to varchar in the query inside the VBA:
convert(varchar(55), runDate, 121)
Excel somehow can't understand the DateTime data type.
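For reference, here is a minimal VBA sketch of how that CONVERT call could sit in the recordset query; the connection string, table name and the "@" (text) number format are illustrative assumptions, not part of the original code:

' Minimal sketch: server, database and table names are placeholders.
Dim cn As Object, rs As Object, fld As Object
Dim row As Long, col As Long

Set cn = CreateObject("ADODB.Connection")
cn.Open "Provider=SQLOLEDB;Data Source=myServer;Initial Catalog=myDb;Integrated Security=SSPI"

' CONVERT(..., 121) returns the datetime as text in yyyy-mm-dd hh:mi:ss.mmm format,
' so Excel receives a plain string instead of a date serial number.
Set rs = CreateObject("ADODB.Recordset")
rs.Open "SELECT convert(varchar(55), runDate, 121) AS runDate FROM myTable", cn

row = 1
Do Until rs.EOF
    col = 1
    For Each fld In rs.Fields
        Sheet1.Cells(row, col).NumberFormat = "@"   ' text format, so the string is kept as-is
        Sheet1.Cells(row, col).Value = fld.Value
        col = col + 1
    Next
    rs.MoveNext
    row = row + 1
Loop

rs.Close
cn.Close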
I need your help.
I have an .xlsx file that is laid out in repeating sections of rows.
My goal is to create an SSIS package which pushes this data into a DB table.
Now, col1 to col5 are fine, but each section has a name on top of it, and that name is supposed to become column 6 in the table.
So the final destination table looks like:
col1 | col2 | col3 | col4 | col5 | col6 (Firstname Lastname from the top of each section)
So far I have tried:
Creating a recordset out of the Excel sheet
Reading the recordset row by row using an ADO Foreach enumerator
Within the Foreach enumerator, mapping each row into a set of variables that represent the columns
Passing these variables into a data flow task, which turns the variables back into columns using a Derived Column and pushes them into the ODBC destination
Obviously this did not work out for me; I always get the message "0 rows inserted in the ODBC destination" when I run the package.
To be honest I am not really sure how to solve this problem.
Any help is highly appreciated!!!
Thanks in Advance!!!
Edit:
PS: I cannot use any one-time or Power BI / Power Query tricks here. It has to be pure SSIS.
The solution I can think of goes through a Script Component, in these steps:
Read the Excel file from row 4
Use a Script Component to add the name to the data rows
Remove the column headings via a Conditional Split
For the first step, open the Excel Source properties, untick "column names in the first row", and set the OpenRowset to read from row four (Sheet1$A4:E).
For the second step, create a Script Component transformation. Select the 5 columns as input and create a new output column (Name, in this case).
The script itself:
public class ScriptMain : UserComponent
{
    // Holds the most recently seen name so it can be copied onto the data rows that follow it.
    string keepname;

    public override void Input0_ProcessInputRow(Input0Buffer Row)
    {
        if (Row.F5_IsNull)
        {
            // A row where col5 is null is a "name row": remember the name from the first column.
            keepname = Row.F1;
        }
        else if (Row.F5 != "col5")
        {
            // A data row (not the repeated heading row): attach the remembered name.
            Row.Name = keepname;
        }
    }
}
Explanation: the script checks whether the row is one that contains a name (col5 is null) and, if so, saves the name to the variable keepname.
If it is not a name row and not a heading row (col5 = 'col5'), it sets the output column Name to the remembered value.
The last step is just a clean-up via Conditional Split.
What this does is split off the rows that are name rows or headings; then you just need to carry on with the default output.
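For the Conditional Split, an expression along the lines of ISNULL(F5) || F5 == "col5" (column names assumed from the script above) should catch the name and heading rows to split off; everything else flows through the default output to the destination.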
I think that the best way to import a file like this one is to use a Script Task to convert the Excel file into a CSV, and then load the CSV into the database as a simple Flat File Source.
Create two variables: VarDataPath with the path of the Excel file and VarCSVPath with the path of the generated CSV file.
Create a Script Task with the following VB.NET snippet. Unfortunately I don't have SSIS on my computer at the moment, so I can't test this code.
Imports System
Imports System.Data
Imports System.Math
Imports Microsoft.SqlServer.Dts.Runtime
Public Class ScriptMain
    Public Sub Main()
        Dim xl As Object
        Dim wb As Object
        Dim sh As Object
        Dim fs As Object
        Dim conv_file As Object
        Dim line As String
        Dim strDataPath As String = Dts.Variables("VarDataPath").Value.ToString() 'This variable contains the Excel path
        Dim strFileName As String = Dts.Variables("VarCSVPath").Value.ToString() 'This variable contains the CSV path
        Dim myArray As Integer() = New Integer() {4, 10, 16} 'Row numbers of the FirstName/LastName cells

        xl = CreateObject("Excel.Application")
        wb = xl.WorkBooks.Open(strDataPath)
        sh = wb.Sheets(1)
        fs = CreateObject("Scripting.FileSystemObject")
        conv_file = fs.CreateTextFile(strFileName, True)

        ' CSV file header
        conv_file.writeline("Name,Col1,Col2,Col3,Col4,Col5")

        For Each val As Integer In myArray
            For a As Integer = 2 To 5 'Loop over the 4 data rows of each table, starting 2 rows below the FirstName/LastName cell
                ' Each line is written as Name,Col1,Col2,Col3,Col4,Col5
                line = sh.Cells(val, 1).Value.ToString() & ","            'Name
                line = line & sh.Cells(val + a, 1).Value.ToString() & "," 'Col1
                line = line & sh.Cells(val + a, 2).Value.ToString() & "," 'Col2
                line = line & sh.Cells(val + a, 3).Value.ToString() & "," 'Col3
                line = line & sh.Cells(val + a, 4).Value.ToString() & "," 'Col4
                line = line & sh.Cells(val + a, 5).Value.ToString()       'Col5
                conv_file.writeline(line)
            Next
        Next

        conv_file.close()
        sh = Nothing
        wb.Close(False)
        wb = Nothing
        xl.Quit()
        xl = Nothing

        Dts.TaskResult = Dts.Results.Success
    End Sub
End Class
Execute the package to let SSIS create a CSV file.
Add a data flow to the package to load the generated CSV into the database.
Also, you can have a look at these articles:
Converting Excel files into CSV and uploading files using SSIS
SSIS Script task to convert XLSX files with multiple sheets to CSV file
I would use Power Query for this. It's built into Excel, also in Power BI and Power Automate, and has more flexible data transformation than SSIS data flows. You can code in "M" (the Power Query language), but I very rarely do - the UI does almost anything you could want.
For example, once you come up with a calculated column to create "column6", it has a "Fill Down" function that can push each group's value down onto the detailed rows.
This would be my approach for calculating "column6":
Data ribbon / From Table/range. Specify sheet and range
In the Power Query Editor, select [Column1], then Add Column / Extract / First Characters / 1
Select [First Characters], then Transform / Data Type / Whole Number
Select [First Characters], then Replace Values / Replace Errors / null
Add Column / Conditional Column: set New column name = "Column6"; If [Column1] equals "col1" (the heading text) then null, Else If [First Characters] equals null then (Select a column) [Column1], Else null
Select [Column6], then Transform / Fill / Down
Once you have a working query, you can copy the PQ code into the PQ Source for SSIS:
https://learn.microsoft.com/en-us/sql/integration-services/data-flow/power-query-source
Data is read from an Excel sheet and stored in a DataTable, and then I insert the data into an Oracle database. The TERM column in the database only accepts 7 bytes, like mm/yyyy, but my input data comes in as a date-time. How can I convert the date-time to mm/yyyy?
Below is the Oracle insert query:
strQuery &= "'" + String.Format("{%M/%y}", dt1.Rows(row)("TERM")) + "',"
When you read the date/time from Excel, assign the value to a DateTime variable; then you can pull the month and year from it.
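A minimal sketch of that idea, reusing the variable names from the question (and assuming dt1.Rows(row)("TERM") comes back as a date/time value rather than text):

' Assumes the TERM value in the DataTable is a date/time (variable names from the question above).
Dim termDate As DateTime = Convert.ToDateTime(dt1.Rows(row)("TERM"))

' Format as month/year, e.g. 11/2017, which fits the 7-character mm/yyyy column.
Dim termText As String = termDate.ToString("MM/yyyy")

strQuery &= "'" & termText & "',"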
I have an Excel sheet which has a time column. The time column currently uses the 'Time' cell format (6:00:00 PM); I've tried a 'Custom' format (6:00 PM) as well.
I read this cell value using the OpenXML library as follows:
row.XCells[9].GetValue()
The value I read is .75, which is also the value I see when I change the cell format to Number. I want to convert this to a TimeSpan in my C# backend. How do I do that?
var ts = TimeSpan.Parse(row.XCells[9].GetValue());
doesn't work.
Excel dates and times can be converted to a C# DateTime using DateTime.FromOADate. Once you have the DateTime you can use the TimeOfDay property to get a TimeSpan (the DateTime will have a date of 30-Dec-1899, which is the OLE automation base date).
// Convert.ToDouble makes sure FromOADate gets a double whether GetValue() returns the serial value as text or as a number
TimeSpan t = DateTime.FromOADate(Convert.ToDouble(row.XCells[9].GetValue())).TimeOfDay;
Console.WriteLine(t.ToString(@"hh\:mm\:ss")); // prints 18:00:00
I am importing an Excel spreadsheet into Access 2010. I created a Saved Import that will import a column of data intentionally out of order. The query I created needs to take the exact order of this data, return a value associated with it from a master table in our company's DB, then allow me to export both of these fields to Excel. I need to do this because I need to copy and paste the export on top of the values of another spreadsheet.
The problem is that when I import into Access, the FG column is sorted A-Z. This can be seen in both the imported table and the results of the query. How do I keep my data in the original, mixed-up order throughout the whole process?
Prepared Import sheet
FG
D
B
E
A
C
After importing
FG
A
B
C
D
E
Query
FG | Description
A | descript of A
B | descript of B
C | descript of C
D | descript of D
E | descript of E
To solve this problem, I had to use the option "Let Access create a primary key for me" when importing. This allowed the table to be populated with the data in the same order as was on the import spreadsheet. To make sure the query also keeps this same order, I had to include the new "ID" field as part of the results.
Final SQL Code
SELECT FGImport.ID, FGImport.FG, dbo_Active_Part_Number_List_Syteline.description
FROM dbo_Active_Part_Number_List_Syteline RIGHT JOIN FGImport ON dbo_Active_Part_Number_List_Syteline.item = FGImport.FG
GROUP BY FGImport.ID, FGImport.FG, dbo_Active_Part_Number_List_Syteline.description;
This may be an OLD post, but I would suggest opening Excel and creating a simple macro to enumerate the records before importing into Access; once you've imported the file, this enumeration (we'll call it "MacroField") will be part of the data set you import (we'll call this table "Table1"). You'll still retain the records' original order by querying the table and ordering by Table1.MacroField asc/desc (an example query follows the macro below). There has to be a better way of doing it, but if you're in a pinch this does the trick.
Sub liminal()
    Dim i As Long, p As Long, ct As Long, en As Long
    ' Number the rows of every sheet except "PrimaryTab" so the original order survives the import.
    For i = 1 To ThisWorkbook.Sheets.Count
        If Sheets(i).Name <> "PrimaryTab" Then
            ct = 0
            Sheets(i).Activate
            ' Find the last used row in column B
            Range("B1000000").Select
            Selection.End(xlUp).Select
            en = ActiveCell.Row
            ' Write a sequential number into column A of each row
            For p = 1 To en
                ct = ct + 1
                Sheets(i).Range("A" & p).Value = ct
            Next p
        End If
    Next i
    Sheets("PrimaryTab").Activate
    Range("A1").Select
    MsgBox "Data Set, please open and load to MS ACCDB"
End Sub
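After the import, a query along these lines restores the original order (Table1 and MacroField are the names suggested above):

SELECT * FROM Table1 ORDER BY Table1.MacroField;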
If you want to import an Excel table (let's say with three columns) and you want the first row of the Access table you are importing the data into to contain the column titles, then you create an Access table "myTable" with these fields:
F1 (Data Type: whatever)
F2 (Data Type: whatever)
F3 (Data Type: whatever)
Order (Data Type: AutoNumber)
In Access VBA you will use the following command:
ImportFile = "C:\myExcelFile"
DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel12, "myTable", ImportFile, False
When the Excel file is imported, the data will be alphabetically sorted by the first column, but the field "Order" will hold the sequential order of the data as in the Excel file. You can use a query on the table to sort the data by the field "Order", thus getting back the same data arrangement as in the Excel table.
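For example (field names from the table above; Order needs square brackets because it is a reserved word in Access SQL):

SELECT F1, F2, F3 FROM myTable ORDER BY [Order];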
I have to import data from an Excel file into the database with SSIS, but I am facing a problem with a date column: in the Excel sheet the date format is yyyy/mm/dd, but when it gets uploaded to the database it changes to yyyy/dd/mm format.
How do I fix this?
Use the SUBSTRING function in a Derived Column while importing the date:
(LEN(TRIM(SUBSTRING(ReceivedDateTime,1,8))) > 0 ? (DT_DBDATE)(SUBSTRING(ReceivedDateTime,1,4) + "-" + SUBSTRING(ReceivedDateTime,5,2) + "-" + SUBSTRING(ReceivedDateTime,7,2)) : (DT_DBDATE)NULL(DT_WSTR,5))
If the data is there, the SUBSTRING calls extract the date parts in the exact format the DB expects; if the date does not exist, NULL is inserted into the DB instead.
I see two options:
Use a Data Conversion Transformation to convert to a text string in the appropriate format, using SSIS data types.
Add a Script Component that converts the data type using VB data types (see the sketch after this list).
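As a rough sketch of the second option, here is what a Script Component version might look like; it assumes the incoming value arrives as text in yyyy/MM/dd format in a column named ReceivedDateTime (borrowed from the expression above) and that a date-typed output column ReceivedDate has been added to the component; both names are assumptions:

Imports System.Globalization

' Minimal sketch: ReceivedDateTime (input, text, yyyy/MM/dd) and ReceivedDate (output, date) are assumed column names.
Public Class ScriptMain
    Inherits UserComponent

    Public Overrides Sub Input0_ProcessInputRow(ByVal Row As Input0Buffer)
        If Not Row.ReceivedDateTime_IsNull AndAlso Row.ReceivedDateTime.Trim().Length > 0 Then
            ' Parse with an explicit format so month and day cannot be swapped by locale settings.
            Row.ReceivedDate = DateTime.ParseExact(Row.ReceivedDateTime.Trim(), "yyyy/MM/dd", CultureInfo.InvariantCulture)
        Else
            Row.ReceivedDate_IsNull = True
        End If
    End Sub
End Class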
First, create the table in your database using the command below:
CREATE TABLE [dbo].[Manual] (
[Name] nvarchar(255),
[Location] nvarchar(255),
[Date] datetime
)
SET DATEFORMAT YDM
By using DATEFORMAT YDM, the date will import in YYYY/DD/MM format. Before running the package, modify it and, at the time of column mapping, select the check box "Delete rows in destination table".
Then execute the package. It will work.