SSRS - Columns Download to .CSV Despite Visibility Status

I have a Reporting Services 2012 table that hides certain columns based on parameter choices, since some choices will cause the dataset to exclude certain columns when run. So, a column such as PassportID would have a hiding criteria expression such as:
=IIF(Parameters!TransitMode.Value = "bus"
OR Parameters!TransitMode.Value = "train",True,False)
The columns are indeed hidden when the report is rendered, and when it is downloaded to Excel. The problem is that I need to download it to a .CSV file. The CSV exporter in SSRS is a data renderer, not a layout renderer, so it does not honor the visibility expressions the way the Excel renderer does.
I looked at the DataElementOutput property, but changing this from the "Auto" default only appears to give the options of including or excluding the column unconditionally, rather than based on its visibility at render time.
Is there a way to exclude the entire rendered column from the downloaded .CSV?

The easy answer is to set the displayed value with an expression that checks the render format (Globals!RenderFormat). If the render format is CSV, set the displayed value to an empty string. The field will still be exported, but it won't contain data.
That is, set the value of the textbox to something like:
=iif(Globals!RenderFormat.Name="CSV", "", Fields!MyDataField.Value)
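Applied to the question's scenario, the same pattern can be combined with the original parameter condition on the PassportID detail textbox. A sketch, reusing the field and parameter names from the question (adjust them to your actual dataset):
=IIF(Globals!RenderFormat.Name = "CSV" AND
     (Parameters!TransitMode.Value = "bus" OR Parameters!TransitMode.Value = "train"),
     "", Fields!PassportID.Value)
The CSV will still contain a PassportID column (named after the textbox or its DataElementName), but its values will be blank for the parameter choices that hide the column; removing the column entirely is what the DataElementOutput approach below addresses.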
A little bit more info:
SSRS 2012 and CSV export

Change the DataElementOutput from Auto to NoOutput.
DataElementOutput controls whether or not the data is included in the export. Column headers are already excluded by CSV, and the CSV column names are derived from the textbox name (the DataElementName) of each data element. Visibility properties are not considered in the CSV export, since visibility is a formatting feature.

Related

JasperReport: footer field in xls participate in sorting

I have an xls file generated by JasperReports. The bold sum row comes from the <pageFooter> band, and after sorting in Excel it gets mixed up with the data rows.
Is there any way to make the footer fields fixed so that they do not participate in column sorting?
No, unfortunately not, as this is Excel behaviour, not JasperReports behaviour, and it can't be controlled by the jasperreports.export configuration.
You could, instead, prevent the footer from being exported and convert the data to a table once in Excel which, although it's a manual step for the end-user, should resolve your problem.
The only other way is to ensure that, once exported to Excel, only the data rows are selected for sorting.
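If you do go the route of keeping the footer out of the Excel export, JasperReports exposes band-exclusion export hints that can be set as report properties. A minimal sketch, assuming the standard XLS exporter property name (check it against your JasperReports version); it goes at the <jasperReport> level of the JRXML and only affects the Excel output:
<property name="net.sf.jasperreports.export.xls.exclude.origin.band.1" value="pageFooter"/>
Other export formats (PDF, HTML) still render the footer as before.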

SSRS how to set a merged column to be split in excel

I have an issue that I know is solvable; I just can't find the setting or work out how to do it. I have a report where I have merged two columns, let's say columns A and B. When the report is exported to Excel, I want to be able to click into column A without it being merged with column B; this would allow you to filter etc. by the data under column A. The reason columns A and B are merged in the first place is that the heading needs to span two cells due to its size.
I know this is doable, as it exists on a report I inherited; I just can't find the setting.
This is usually due to misalignment of your header cells with your table cells. The Excel export tries to have everything formatted the same as in the report, so it will sometimes use two columns for the table cells and merge them so that they align with the header columns. This is problematic when it comes to manipulating, filtering and sorting the spreadsheet.
The best way to avoid this is to create an Excel renderer that doesn't render the header part of the report as described in my answer here.
However, if the cells need to be merged in your report deliberately then you aren't going to be able to do what you want to do using your current report as Excel will duplicate the formatting, including the merged cells.
Probably the only way to get something like what you are after is to create another report that is formatted the way you would like it to be in Excel. In the header of your original report, put a text box (or an image with an Excel icon) with an Action on it to open the new, properly formatted report in Excel, passing across parameters as appropriate. Now the user just needs to click the Action link in the original report to open the more user-friendly report in Excel.

Wrapping text in crystal so that it does not add extra row in excel

I have a field in Crystal Reports that has a lot of text. For that, I went into the Format Editor and allowed wrapping of text for that field. Now when I export my report to PDF it looks fine, but when I export it to Excel it adds an extra row below any record that has more text in that field. How can I make it so that it does not add an extra row, but wraps the text within a single cell only?
I have the same problem with extra rows and columns in my Crystal Reports exports. From what I can tell from my searching, it is caused by fields not lining up or being different sizes. You might be able to fix your issue by making all the fields in the row larger, so they are the same height as the field with the large amount of text.
See this page:
Blank columns appear when exporting to Excel.

Export and customize a crystal report in excel

I am having an issue: while exporting a report to an Excel sheet, there are lots of spaces and empty cells between the data, and some cells are merged.
Is there a way to export the report so that each field ends up in its own cell, or to otherwise control the export? Suppose my report looks like this:
No Trans_No
1 123
2 333
In my excel sheet, I would like
A B
No Trans_No
1 123
2 333
But currently it shows merged cells and extra spaces, so instead of Trans_No being in cell B, it is in D.
So, is there a way to control the export?
Crystal Reports and Excel have very different methods and data structures. When exporting a .rpt into .xls format, Crystal has to make many compromises and judgement calls. Here are some suggestions:
1. Do you absolutely need to use Crystal in this process?
A. You can import data directly from your data source into Excel (without using Crystal) using Data->Import External Data.
B. You can export from Crystal into CSV format. If the Excel file is being made just for a machine to read it, CSV is a better option.
2. Keep your Crystal Report very simple.
A. After you drag & drop fields onto your design, do not resize or overlap them.
B. Make sure in your options, you have snap to grid checked.
C. Are your fields horizontally aligned? If not, they will probably be put on different rows.
D. If you are grouping data, you may want to suppress the group headers & footers.
3. If you are finding empty rows between your data, you can filter these out in Excel:
Select column
Data > Filter (Excel 2010)
Dropdown > uncheck 'Blanks'
I don't use Crystal Reports, but could you export to a CSV file and then import it into Excel? The import will allow you to specify the delimiters and should format your data better.
From experience with exporting from older versions of Crystal to Excel, a couple of options:
(1) Export to CSV and open the CSV file in Excel.
This had the disadvantage that instead of appearing at the top of the report above the data values, the column headings would appear on every line of the output before the column values - like so:
No Trans_No 1 123
No Trans_No 2 333
This issue may have been resolved in CR XI - if not, the workaround we used was to suppress the column headings (so that only the values were included in the output), then copy and paste a standard spreadsheet heading for the report into the output in Excel.
(2) Consistently format all fields to the same, minimum size (typically, two grid widths), with columns aligned by snapping the left edge of fields to guidelines.
This produces output which is almost unreadable in the standard report viewer, but which should align correctly in Excel.

SSIS Excel Data Source - Is it possible to override column data types?

When an Excel data source is used in SSIS, the data types of each individual column are derived from the data in the columns. Is it possible to override this behaviour?
Ideally we would like every column delivered from the excel source to be string data type, so that data validation can be performed on the data received from the source in a later step in the data flow.
Currently, the Error Output tab can be used to ignore conversion failures - the data in question is then null, and the package will continue to execute. However, we want to know what the original data was so that an appropriate error message can be generated for that row.
According to this blog post, the problem is that the SSIS Excel driver determines the data type for each column based on reading values of the first 8 rows:
If the top 8 records contain an equal number of numeric and character types, then the priority is numeric.
If the majority of the top 8 records are numeric, it assigns the data type as numeric and all character values are read as NULLs.
If the majority of the top 8 records are of character type, it assigns the data type as string and all numeric values are read as NULLs.
The post outlines two things you can do to fix this:
First, add IMEX=1 to the Extended Properties of your Excel connection string (see the connection-string sketch below). This tells the driver to treat columns with intermixed types as text. However, this is not sufficient if the data in the first 8 rows happens to be all numeric.
Second, in the registry, change the value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows to 0. This will ensure that the driver looks at all the rows to determine the data type for the column.
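For reference, IMEX=1 sits inside the Extended Properties portion of the connection string. A sketch of what the full Excel connection string then looks like for an .xlsx file read through the ACE provider (the file path is a placeholder):
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Data\MyWorkbook.xlsx;Extended Properties="Excel 12.0 Xml;HDR=YES;IMEX=1";
For an old-style .xls file read through the Jet 4.0 provider, the Extended Properties value would be "Excel 8.0;HDR=YES;IMEX=1" instead.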
Yes, you can. Just go into the output column list on the Excel source and set the type for each of the columns.
To get to this column list, right-click on the Excel source, select 'Show Advanced Editor', and click the tab labeled 'Input and Output Properties'.
A potentially better solution is to use the Derived Column component, where you can build "new" columns for each column in Excel. This has the following benefits:
You have more control over what you convert to.
You can put in rules that control the change (i.e. if null, give me an empty string, but if there is data, then give me the data as a string; see the expression sketch after this list).
Your data source is not tied directly to the rest of the process (i.e. you can change the source and the only place you will need to do work is in the derived column).
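As an illustration of the second point, the Derived Column expression can cast everything to a Unicode string and map NULLs to an empty string. A sketch, where [ExcelColumn] and the length 50 are placeholders for your actual column name and width:
ISNULL([ExcelColumn]) ? "" : (DT_WSTR, 50)[ExcelColumn]
The result goes into a new column (or replaces the existing one), and later steps in the data flow validate against that string column rather than against the Excel source column directly.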
If your Excel file contains a number in the column in question in the first row of data, it seems that the SSIS engine will reset the type to a numeric type. It kept resetting mine. I went into my Excel file and changed the numbers to "Numbers stored as text" by placing a single quote in front of them. They are now read as text.
I also noticed that SSIS uses the first row and IGNORES what the programmer has indicated is the actual type of the data (I even told Excel to format the entire column as Text, but SSIS still used the data, which was a bunch of digits) and resets the type. Once I fixed that by putting a single quote in front of the number in the first row of data in my Excel file, I thought it would get it right, but no, there is additional work.
In fact, even though the SSIS External DataSource Column now has the type DT_WSTR, it will still read 43567192 as 4.35671E+007. So you have to go back into your Excel file and put single quotes in front of all the numbers.
Pretty LAME, Microsoft! But there's your solution. I have no idea what to do if the Excel file is not under your control.
I was looking for a solution to a similar issue but didn't find anything on the internet. Although most of the solutions I found work at design time, they don't work when you want to automate your SSIS package.
I resolved the issue and made it work by changing the properties of "Excel Source". By default the AccessMode property is set to OpenRowSet. If you change it to SQL Command, you can write your own SQL to convert any column as you wish.
For me, SSIS was treating the NDCCode column as a float, but I needed it as a string, so I used the following SQL:
SELECT [Site], CStr([NDCCode]) AS NDCCode FROM [Sheet1$]
The Excel source in SSIS behaves strangely: SSIS determines the type of data in a particular column by reading only the first few rows (8 by default), hence the issue. If you have a text column with null values in those first rows, SSIS takes the data type as Int. With a bit of struggle, here is a workaround:
Insert a dummy row (preferably as the first row) in the worksheet. I prefer doing this through a Script task; you may consider using some service to preprocess the file before SSIS connects to it.
With the dummy row, you are sure that the data types will be set as you need.
Read the data using the Excel source and filter out the dummy row (as sketched below) before you take it for further processing.
I know it is a bit shabby, but it works :)
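One way to do the filtering step is a Conditional Split placed right after the Excel source, with a condition that recognizes the dummy row by a sentinel value. A sketch, where [FirstColumn] and the sentinel text are placeholders to match whatever your dummy row contains:
[FirstColumn] == "ZZ_DUMMY_ROW"
Rows matching the condition go to an output you simply leave unconnected; the default output carries the real rows on for further processing.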
I could fix this issue. While creating the SSIS package, I manually changed the specific column to text (open the Excel file, select the column, right-click it, select Format Cells, choose Text on the Number tab, and save the file).
Now create the SSIS package and test it. It works. Then try to use an Excel file where this column was not set as text.
It worked for me, and I could execute the package successfully.
This can be resolved simply: just untick the "First row has column names" box and all data will be collected as the text data type. The only downside of this choice is that you are left with the auto-generated column names (column 1, 2, etc.) to manage, and you have to handle the first row, which contains the actual column names.
I had trouble implementing the solution here - I could follow the instructions, but it only gave new errors.
I solved my conversion issues by using a Data Conversion component. This can be found in the SSIS Toolbox under Data Flow Transformations. I placed the Data Conversion between my Excel Source and OLE DB Destination, linked the Excel Source to the Data Conversion and the Data Conversion to the OLE DB Destination, then double-clicked the Data Conversion to bring up the list of data columns. I gave the problem column a new alias and changed its data type.
Lastly, in the Mappings of the OLE DB Destination, use the Alias column name, rather than the original Excel column name. Job done.
You can use a Data Conversion component to convert to the desired data types.
