I have the following Data Flow Task setup (see image).
It takes the correct number of rows from the OLE DB Source and passes everything through the Data Conversion item. However, the process then gets stuck at 10,104 of the 29,379 rows on the Sort and Excel Destination items (I'm sorting alphabetically by one column only).
Why is it getting stuck and what can I do to move it out of this rut?
Thanks
I'd need to see the properties on your Sort transformation, but this could be the issue: make sure the "Remove rows with duplicate sort values" option isn't checked.
Thanks.
Gav
The issue was that when inserting into an Excel destination the maximum size of each column is 255 characters, but the values from the mapped SQL Server column were on average longer than 700 characters.
So it was necessary to set the maximum size of the large column to 255 in the Data Conversion item to match the Excel column limit. SSIS then truncates the column naturally.
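If you'd rather make the truncation explicit at the source instead of relying on the Data Conversion item, a cast in the source query achieves the same thing; this is just a sketch with made-up table and column names:

```sql
-- Trim the oversized text column to the Excel destination's
-- 255-character column limit before it enters the data flow.
SELECT CAST(LEFT(LongTextColumn, 255) AS nvarchar(255)) AS LongTextColumn
FROM dbo.SourceTable;
```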
Is it possible to somehow overwrite existing counter fields when loading data with COPY FROM (from CSV), or to completely delete rows from the database?
When I COPY FROM data into existing rows, the counters are summed together.
I can't completely DELETE these rows either: although the rows appear to be deleted, when I re-run COPY FROM with the data from the CSV, the counter fields continue to increase.
You can't set counters to a specific value; the only supported operations on them are increment and decrement. To set a counter to a specific value you either need to decrement it by its current value and then increment it by the desired amount, which requires reading the value first, or you need to delete the corresponding cells (or the whole row) and then perform an increment with the desired number.
The second approach is easier to implement, but requires that you first generate a file of CQL DELETE commands based on the content of your CSV file and then use COPY FROM. If nobody has incremented the values since the deletion, the counters will end up with the correct values.
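A minimal sketch of that delete-then-reload idea, using a hypothetical counter table (the keyspace, table, and column names are illustrative, not from the question):

```sql
-- Hypothetical schema:
--   CREATE TABLE ks.page_stats (page text PRIMARY KEY, views counter);

-- Step 1: generated from the CSV, one DELETE per row to be reloaded.
DELETE FROM ks.page_stats WHERE page = 'home';

-- Step 2: reload. COPY FROM applies increments, so after the delete
-- each counter ends up at exactly the CSV value:
--   COPY ks.page_stats (page, views) FROM 'stats.csv';
-- For a single row, an explicit increment is equivalent:
UPDATE ks.page_stats SET views = views + 42 WHERE page = 'home';
```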
I am trying to create a query based on the requirement below. So far I have only been able to build the query on a single criterion. Your help would be much appreciated.
1) I have opened a target workbook where I want to fetch matching values for its two columns (temperature and density) from multiple workbooks saved in a particular folder. Here I am referring to the New Query > From File > From Folder option.
2) So in my target workbook I have Observed Density and Observed Temperature, and now I want to extract volume and weight correction factors from the pool of multiple workbooks picked up in step 1 (all the workbooks in the folder not only have Observed Density and Observed Temperature but also columns containing the corresponding weight and volume correction factors).
That's it. I just want to know if this can be achieved using Power Query, or is VBA a must to get results? If so, any hints would be much appreciated.
I think you are on the right track. I would finish building the Query using the From Folder option to turn the data from all the workbooks into a single dataset. This query does not need to be loaded into an Excel table.
Then I would start a new query based on the 2 columns in your target workbook; they will need to be in a table or named range. In that query you can add a Merge step to match rows in the From Folder query, then Expand the results to add the columns you need from the matching rows.
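A rough Power Query (M) sketch of that merge, assuming the From Folder query is named FolderData and the target columns live in a table named Targets (all names here are illustrative):

```m
let
    // The two lookup columns from the target workbook (table "Targets").
    Source = Excel.CurrentWorkbook(){[Name = "Targets"]}[Content],

    // Match each row against the combined From Folder query ("FolderData")
    // on both temperature and density.
    Merged = Table.NestedJoin(
        Source, {"Observed Temperature", "Observed Density"},
        FolderData, {"Observed Temperature", "Observed Density"},
        "Matches", JoinKind.LeftOuter),

    // Pull just the correction-factor columns out of the matching rows.
    Expanded = Table.ExpandTableColumn(
        Merged, "Matches",
        {"Volume Correction Factor", "Weight Correction Factor"})
in
    Expanded
```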
I have a query in an ACCDB that works fine in Access.
I can successfully copy/paste its data to Excel.
However, from Excel, if I try to insert a Pivot Table using External Data Source, pointing to the very same query, then some numeric fields have weird formatting and some calculated numeric columns (formula in the query) have their value divided by 100 compared to the source.
I've never seen that behaviour. Any suggestions?
The whole MS Office setup is 2010.
What I have already done in the source query (without visible improvement):
- used CCur() to make sure the figures are in a coherent data type
- set the Format property of the culprit columns to "Standard"
The behaviour is exactly the same on other PCs in the same bank.
I managed to solve the problem, which was due to 2 different bugs, probably in JetOLEDB.
1) Like is not handled properly by Excel
The query contained some formulae using Like:
iif(someField Like "XX*";0;anotherField).
Changing this to iif(Left(someField;2) = "XX";0;anotherField) solved the calculation differences between Excel and Access.
2) A reference to another calculated column is handled differently
Say you have 2 query columns:
Rate: i.Rate *100 (i is a table alias)
Amount: Rate*Price
Access calculates Amount using the Rate calculated column, while Excel uses the Rate field from table i. Therefore I had to change the Amount expression to:
Rate: i.Rate *100
Amount: i.Rate *100*Price
since Excel seems to always use Rate from the table (i.Rate) rather than the calculated column.
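Written out as a full statement, the change looks roughly like this (the table name and the Price column are made up for illustration):

```sql
SELECT i.Rate * 100 AS Rate,
       i.Rate * 100 * i.Price AS Amount  -- expression inlined rather than Rate * Price
FROM tblRates AS i;
```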
Use the query in Access to first make a table in Access (a Make Table query), then import that table into Excel.
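For example, the make-table step can be a one-liner (names are illustrative):

```sql
-- Materialise the query's output into a plain table, which Excel
-- can then import without re-evaluating any of the expressions.
SELECT q.* INTO tblForExcel
FROM MyProblemQuery AS q;
```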
I am trying to generate an Access database from information that is currently spread across endless sheets and tables in Excel.
I would like to know if there is any way to add a field to one table that is a calculation (an average) based on several other cells.
I need to calculate the running 6 months average value of another field which contains 1 value per month.
Hopefully the previous image shows what I mean.
What is the best approach to import this functionality into access?
You wouldn't normally store a calculated field in Access; you would run a query that provides the calculation on the fly.
Without seeing your data structure it is impossible to tell you how to calculate the answer you need, but you would need your data correctly normalised in order to make this simple.
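As a rough sketch, assuming a hypothetical table Readings with one row per month (ReadingMonth as a date, Value as the monthly figure), a correlated subquery can return the running 6-month average on the fly:

```sql
-- For each month, average that month's value together with the
-- five preceding months' values.
SELECT r.ReadingMonth,
       r.Value,
       (SELECT Avg(r2.Value)
        FROM Readings AS r2
        WHERE r2.ReadingMonth BETWEEN DateAdd("m", -5, r.ReadingMonth)
                                  AND r.ReadingMonth) AS SixMonthAvg
FROM Readings AS r;
```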
I have a table in Excel that I want to filter. It will have a maximum of 1 million rows and 80 columns. All the calculations etc. are done programmatically in arrays to cut down processing time. However, I also want to filter the results to display only certain rows based on one column's value, followed by the top 5% based on another column's value.
When I first built the sheet, it was limited to 65,000 rows, so there were no problems with the size of the data set: I just invoked the worksheet filter functions from code. Can I do it that way with a larger data set, or is there a way to filter an array the way you would a dataset on a sheet?
Thanks
As already mentioned by everyone, Excel 2007 will take you to a million rows, but it's slower than the Excel 2003 that I presume you're using at the moment, so filtering with it wouldn't be advisable.
Along with MySQL, MS Access is also an option.
You really should put that data in an Access table and use Excel's Database Query to do the job. Since it can also filter retrieved data based on a cell value, it's a great combination.
Storing the data in a database brings you another interesting option (depending on what you want to do): to query your database using PowerPivot.
Although using a relational DB would be preferable in many ways, if you don't have any formulas then filtering your data (1 million rows by 80 columns) in Excel will be reasonably fast (under 1 or 2 seconds depending on the sort of filtering, and probably faster than an un-indexed DB table), assuming you have enough RAM. If you do have formulas, you will probably need to be in Manual calculation mode to stop the filtering from triggering multiple recalculations.
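If you do stay with worksheet filters, a minimal VBA sketch of the two-stage filter (the sheet name, column positions, and criteria below are assumptions, not from the question):

```vba
Sub FilterTopFivePercent()
    ' Assumes headers in row 1 of Sheet1, the category in column 3,
    ' and the ranking value in column 10; adjust to your layout.
    Dim rng As Range
    Set rng = Worksheets("Sheet1").Range("A1").CurrentRegion

    ' Stage 1: keep only rows matching the category value.
    rng.AutoFilter Field:=3, Criteria1:="SomeValue"

    ' Stage 2: top 5% by the value in column 10. Note that Excel
    ' evaluates the percentage over the whole column, not just the
    ' rows that survived stage 1.
    rng.AutoFilter Field:=10, Criteria1:="5", Operator:=xlTop10Percent
End Sub
```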