I'm making a bar graph listing the number of candidates per interview status, e.g. 1st Phase: 2, Hired: 7, 3rd Round: 4, etc. I have cases where some columns do not have any candidates in the current phase (Offer Accepted: 0 or blank).
I made the bar graph using a range of all my headers (statuses) and the values (# of candidates) beneath the headers. How would I go about making my graph not display the headers whose values are blank?
I can't just leave out the unused headers, as it is a dynamic array pulling data from a separate table. Every time a candidate and their status in the interviewing process are entered, the values in my range for the bar graph update.
The issue isn't that the chart fails to skip blank values; rather, it still uses the header in my range even when the value beneath that header is blank. Is it even possible to fix this?
Reference Photo
You can use Name Manager with the OFFSET function to create a dynamic range. This walks you through it:
https://excelchamps.com/excel-charts/dynamic-chart-range/
He uses COUNTA, but I have done this with COUNTIF; I don't know which will be better for your situation:
=OFFSET(Dropdowns!$K$17,0,0,COUNTIF(Dropdowns!$L$17:$L$21,">1")*1,1)
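As a rough sketch (assuming your statuses sit in Dropdowns!K17:K21 with their counts next to them in L17:L21, that the statuses with data are contiguous from the top as the OFFSET trick requires, and that ChartLabels/ChartValues are just names I'm making up), you could define two names in Name Manager:
=OFFSET(Dropdowns!$K$17,0,0,COUNTIF(Dropdowns!$L$17:$L$21,">0"),1)
=OFFSET(Dropdowns!$L$17,0,0,COUNTIF(Dropdowns!$L$17:$L$21,">0"),1)
The first would be ChartLabels (the status headers), the second ChartValues (the counts); the ">0" criterion is only my guess at what to count so that statuses with 0 or blank drop out. Then point the chart series at those names instead of the fixed ranges (Select Data > Edit, entering something like =Dropdowns!ChartLabels for the category labels and =Dropdowns!ChartValues for the values), and the plotted range shrinks and grows with the number of statuses that actually have candidates.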
I inherited a rather large worksheet and the fundamental task of my application is to insert a variable number of rows in it. Notice the empty template below:
As you can see, the original designer decided that the minimum number of preconfigured rows should be TWO. This design decision probably makes sense in an interactive environment. Let's say we need 3 rows. In order not to break anything, one row is added between rows 15 and 16. Row 18 is shifted down but, most importantly, its contents are updated accordingly.
I am working on a new, simplified template and am trying to keep only one preconfigured row (15). Row 16 (preconfigured Item No. 2) was removed from the template. This is what happens at run time: All additional rows under it are fine BUT the row with the totals (18) is not updated.
Perhaps the empty row (17) breaks the continuity and Excel gets lost?
Is this "sandwich" style required for a template?
TIA
Addendum: The formula in cell J18 is: =SUM(J15:J16). With the sandwich approach, the end of that range becomes J17, J18, J19, J20, ... as my app adds new rows.
I'm trying to read an Excel sheet from an XLS or XLSX file in memory using Delphi 7. When possible I use automation to read the cells one by one, but when Excel is not installed, I revert to using the ADO/ODBC Jet driver.
I connect using either
Provider=Microsoft.Jet.OLEDB.4.0; Data Source=file.xls;Extended Properties="Excel 8.0;Persist Security Info=False;IMEX=1;HDR=No";
Provider=Microsoft.ACE.OLEDB.12.0; Data Source=file.xlsx;Extended Properties="Excel 12.0;Persist Security Info=False;IMEX=1;HDR=No";
My problem then is that when I use the following query:
SELECT * FROM [SheetName$]
the returned results do not contain the empty rows or empty columns, so if the sheet contains such rows or columns, the cells after them are shifted and do not end up in their correct positions. I need the sheet to be loaded "as is", i.e. to know exactly which cell position each value comes from.
I tried to read the cells one by one by issuing one query of the form
SELECT F1 FROM `SheetName$A1:A1`
but now the driver returns an error saying "There is data outside the selected region". BTW, I had to use backticks to enclose the name, because using brackets like [SheetName$A1:A1] gave a syntax error.
Is there a way to tell the driver to select the sheet as-is, without skipping blanks? Or maybe a way to know from which cell position each value is returned?
For internal policy reasons (I know they are bad, but I do not decide these), it is not possible to use a third-party library; I really need this to work with standard Delphi 7 components.
I assume that if your data is, say, in the range B2:D10, you want to include column A as an empty column? Maybe? Is that correct? If that's the case, then your data set, when you read the whole sheet (SELECT * FROM [SheetName$]), would also have to return 1 million rows by 16K columns!
Can you not execute a query like SELECT * FROM [SheetName$B2:D10] and use the ADO GetRows function to get an array, which will give you the size of the data? Then you can index into the array to get the data you want.
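Roughly, in Delphi 7 that could look something like the sketch below (the connection string, sheet name, range and the ProcessCell routine are only placeholders; GetRows is the standard ADO Recordset method and returns a two-dimensional variant array indexed [column, row]):
procedure ReadFixedRange;
var
  Conn: TADOConnection;
  Qry: TADOQuery;
  Data: OleVariant;
  Col, Row: Integer;
begin
  // requires ADODB, ADOInt and Variants in the uses clause
  Conn := TADOConnection.Create(nil);
  Qry := TADOQuery.Create(nil);
  try
    Conn.LoginPrompt := False;
    Conn.ConnectionString :=
      'Provider=Microsoft.Jet.OLEDB.4.0;Data Source=file.xls;' +
      'Extended Properties="Excel 8.0;IMEX=1;HDR=No";';
    Conn.Open;
    Qry.Connection := Conn;
    // query an explicit range instead of the whole sheet
    Qry.SQL.Text := 'SELECT * FROM [SheetName$B2:D10]';
    Qry.Open;
    // copy the whole result set into a 2-D variant array:
    // dimension 1 = column index, dimension 2 = row index
    Data := Qry.Recordset.GetRows(adGetRowsRest, EmptyParam, EmptyParam);
    for Row := VarArrayLowBound(Data, 2) to VarArrayHighBound(Data, 2) do
      for Col := VarArrayLowBound(Data, 1) to VarArrayHighBound(Data, 1) do
        // Col/Row are offsets from the top-left of the queried range (B2 here);
        // ProcessCell is a placeholder for whatever you do with the value
        ProcessCell(Col, Row, Data[Col, Row]);
  finally
    Qry.Free;
    Conn.Free;
  end;
end;
Empty cells inside the result should come back as Null variants, so you can still tell which positions were blank; whether entirely blank rows inside the range survive may still depend on the driver.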
OK, the correct answer is:
Use a third-party library no matter what your boss says. Do not even try ODBC/ADO to load arbitrary Excel files; you will hit a wall sooner or later.
It may work for Excel files that contain a single data table, but not when you want to cherry-pick data in a sheet primarily made for human consumption (i.e. where a single column contains some cells with introductory text, some with numerical data, some with comments, etc.).
Using IMEX=1, the driver ignores empty rows and empty columns.
Using IMEX=0, it sometimes no longer ignores empty rows, but then some of the first non-empty cells are treated as field names instead of data, even though HDR=No. It would not work anyway, since values in a column are of mixed types.
Explicitly looping across cells and issuing a SELECT * FROM [SheetName$A1:A1] for each one works until you reach an empty cell; then you get access violations (see below):
Access violation at address 1B30B3E3 in module 'msexcl40.dll'. Read of address 00000000
I'm too old to want to try and guess the appropriate value to use so it works until someone comes with yet another mix of data in a column. Sorry for having wasted everybody's time.
I'm modifying (but not adding or removing) objects in a Core Data store upon selection of a UITableViewCell. However, whenever I do so, I get the following error:
CoreData: error: Serious application error. An exception was caught from the delegate of
NSFetchedResultsController during a call to -controllerDidChangeContent:. Invalid update:
invalid number of rows in section 0. The number of rows contained in an existing section
after the update (5) must be equal to the number of rows contained in that section before
the update (5), plus or minus the number of rows inserted or deleted from that section (1
inserted, 0 deleted) and plus or minus the number of rows moved into or out of that
section (0 moved in, 0 moved out). with userInfo (null)
...except that I've not inserted or deleted any rows; I've just modified a property on an existing object that backs the row. How can I fix this error?
I am developing an SSIS package, trying to update an existing SQL table from a CSV flat file. All of the columns are updating successfully except for one column. If I set this column to ignore truncation, my package completes successfully, so I know this is a truncation problem and not some other error.
This column is empty for almost every row. However, there are a few rows where this field is 200-300 characters long. My data conversion task identified this field as DT_WSTR, but from what I've read elsewhere, maybe this should be DT_NTEXT. I've tried both, and I even set the DT_WSTR length to 500, but none of this fixed my problem. How can I fix this? What data type should this column be in my SQL table?
Error: 0xC02020A1 at Data Flow Task 1, Source - Berkeley812_csv [1]: Data conversion failed. The data conversion for column "Reason for Delay in Transition" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
Error: 0xC020902A at Data Flow Task 1, Source - Berkeley812_csv [1]: The "output column "Reason for Delay in Transition" (110)" failed because truncation occurred, and the truncation row disposition on "output column "Reason for Delay in Transition" (110)" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
Error: 0xC0202092 at Data Flow Task 1, Source - Berkeley812_csv [1]: An error occurred while processing file "D:\ftproot\LocalUser\RyanDaulton\Documents\Berkeley Demographics\Berkeley812.csv" on data row 758.
One possible reason for this error is that your delimiter character (comma, semi-colon, pipe, whatever) actually appears in the data in one column. This can give very misleading error messages, often with the name of a totally different column.
One way to check this is to redirect the 'bad' rows to a separate file and then inspect them manually. Here's a brief explanation of how to do that:
http://redmondmag.com/articles/2010/04/12/log-error-rows-ssis.aspx
If that is indeed your problem, then the best solution is to fix the files at the source to quote the data values and/or use a different delimiter that isn't in the data.
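As a made-up illustration (the column name is taken from the error above; the values and the other columns are invented), an unquoted comma inside the text pushes every later field one position to the right:
Id,Name,Reason for Delay in Transition
758,Smith,Waiting on signatures, then final review
The parser now sees four fields where the header declares three, so the complaint can land on a column other than the one that actually contains the stray comma.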
I've had this issue before; it is likely that the default column size for the file is incorrect. It will use a default size of 50 characters, but the data you are working with is larger. In the advanced settings for your data file, adjust the column size from 50 to the table's column size.
I suspect the
or one or more characters had no match in the target code page
part of the error.
If you remove the rows with values in that column, does it load?
Can you identify, in other words, the rows which cause the package to fail?
It could be that the data is too long, or it could be that there's some funky character in there that SQL Server doesn't like.
If this is coming from the SQL Server Import Wizard, try editing the definition of the column on the Data Source; it is 50 characters by default, but it can be longer.
Data Source -> Advanced -> Look at the column that is in error -> change OutputColumnWidth to 200 and try again.
I've had this problem before. You can go to the "Advanced" tab of the "Choose a Data Source" page, click the "Suggest Types" button, and set the "Number of rows" as high as you want. After that, the types and the text-qualified setting are set to the true values.
I applied the above solution and was able to convert my data to SQL.
In my case, some of my rows didn't have the same number of columns as the header. Example: the header has 10 columns, and one of your rows has 8 or 9 columns. (Columns = the count of delimiter characters in each line.)
If all other options have failed, try recreating the data import task and/or the connection manager. If you've made any changes since the task was originally created, this can sometimes do the trick. I know it's the equivalent of rebooting, but, hey, if it works, it works.
I had the same problem, and it was due to a column with very long data.
When I mapped it, I changed it from DT_STR to Text_Stream, and it worked.
In the destination, under Advanced, check that the length of the column is equal to the source's.
The OutputColumnWidth of the column must be increased.
Path: Source Connection Manager --> Advanced --> OutputColumnWidth