I have a large data set in SPSS (v20) with null values for some observations.
I tried saving as an Excel 2007 file, but when I open the Excel file, "#NULL!" appears in the cells where values are null. I run out of RAM when trying to use Excel's Find and Replace to clean them up.
I tried saving as a CSV file, but then I got a space in the cells where values are null.
Could anybody advise on this, please?
I typically save as CSV and then in Excel save as .xlsx. All missing values are then, as you noted, allocated a space, which I accept as representing sysmis values.
When I work with a file that has been saved directly to Excel (i.e. many "#NULL!" values), I use a VBA macro which does a find/replace row by row. The macro is quicker than replacing across the whole sheet at once, which typically slows to infuriating speeds. The macro is still not as fast as one would want, which is why I go via CSV.
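For reference, a minimal sketch of such a macro (the sheet name is a placeholder, not from the original file) could look like this:

Sub ReplaceNullsRowByRow()
    ' Replace "#NULL!" one row at a time; on very large sheets this tends
    ' to stall less than a single whole-sheet replace.
    Dim ws As Worksheet
    Dim r As Long
    Set ws = ThisWorkbook.Worksheets("Sheet1")   ' placeholder sheet name
    Application.ScreenUpdating = False
    For r = 1 To ws.UsedRange.Rows.Count
        ws.UsedRange.Rows(r).Replace What:="#NULL!", Replacement:="", LookAt:=xlWhole
    Next r
    Application.ScreenUpdating = True
End Sub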
According to the Command Syntax Reference, #NULL! values occur only for system-missing values. So to prevent that, you need to assign the system missings a value - for that you can use the RECODE command (e.g. RECODE MyVar (SYSMIS = -9)(ELSE = COPY). would work for a numeric field in which -9 cannot be a valid value).
Depending on what you want the value to be when written to the sheet, you can then either use the /CELLS=VALUES subcommand on SAVE TRANSLATE to save the assigned missing numeric category (IMO a bad idea for spreadsheets), or assign the missing value a label with VALUE LABELS and use /CELLS=LABELS to save the string label in the cell.
I want to use xlsread in MATLAB R2017b to read from an externally supplied data file. Usually, this works fine for me. However, in this case I get data I can't find in the .xls file and I don't know what happened.
Here is a screenshot of the .xls:
and here is the corresponding raw output from xlsread:
Note that there is data in MATLAB (e.g. 'Report tem...') that cannot be found in Excel, that the columns are in a different order and that their headers also differ.
The data file is from Svenska Kraftnät, the Swedish Transmission System Operator and contains the generation and consumption of electrical energy for a certain year. You can find it here.
I use the following line to import the data in question (I am only interested in the numerical data and the timestamps, but used the raw to try to understand what is going on here):
[num,~,raw] = xlsread('n_fot2013-01-12.xls');
I am sorry if this is a bad format for the question or if this is a dupe, but I didn't have a clue how to make this question more general. Please feel free to suggest improvements!
Your workbook has a hidden sheet in it, and it is that sheet that is being read.
To read the visible sheet, specify the sheet name:
[num,~,raw] = xlsread('n_fot2013-01-12.xls','Förb + prod i Sverige');
To view the hidden sheet, on the Home tab, in the Cells group, click Format > Visibility > Hide & Unhide > Unhide Sheet. Then select the hidden sheet.
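If you prefer to unhide it programmatically from within Excel, a short VBA routine along these lines should also work:

Sub UnhideAllSheets()
    ' Make every worksheet in the active workbook visible.
    Dim ws As Worksheet
    For Each ws In ActiveWorkbook.Worksheets
        ws.Visible = xlSheetVisible
    Next ws
End Sub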
There isn't a way to tell xlsread to only read visible sheets, and by default it reads the first sheet (hidden or not).
I'm having a problem and haven't managed to find a solution online, so I was hoping I'd get lucky and someone could help.
I have a database application that exports a large dataset to .xlsx
A VBA application then maps this data into another Excel application.
The export from the original database application is outside of my control. All the cells have a 'General' cell format, and we have some large numbers, such as 172627108914, which is the serial number for a piece of equipment. In the exported xlsx file, this serial number is represented as 1.72627E+11.
The next stage of the process copies this data into another worksheet which has all cells formatted as Text. The value is copied over but stays the same, and the format of the cell changes from Text to General.
Does anyone know what I have to do to remove the scientific notation?
I'm using Microsoft Excel 2010.
Thanks
Prepend a single apostrophe to the number. That will force Excel to read the number as text automatically, and the apostrophe will not show up at the front of the number when it's displayed.
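If the copy is done by your VBA application, the same trick can be applied in code; a rough sketch (the sheet and cell references are placeholders) might be:

Sub CopySerialAsText()
    ' Prefixing the assigned string with an apostrophe makes Excel store
    ' the cell as text; the apostrophe itself is not displayed.
    ' Sheet and cell references are placeholders.
    Dim src As Range, dst As Range
    Set src = ThisWorkbook.Worksheets("Export").Range("A2")
    Set dst = ThisWorkbook.Worksheets("Target").Range("A2")
    dst.Value = "'" & CStr(src.Value)   ' e.g. shows 172627108914 as text
End Sub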
Thanks for the help, everyone. In the end, some VBA code was written to convert all fields in the original export to text, then iterate over each cell and rewrite the value from the formula bar. This resulted in a correctly formatted worksheet to pass to the second application.
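For anyone after the same fix, a rough sketch of that kind of conversion (the worksheet name is a placeholder) might look like:

Sub ConvertExportToText()
    ' Format the exported cells as Text, then rewrite each value so long
    ' serial numbers keep all their digits instead of showing as 1.72627E+11.
    ' The worksheet name is a placeholder.
    Dim ws As Worksheet, c As Range
    Set ws = ThisWorkbook.Worksheets("Export")
    ws.UsedRange.NumberFormat = "@"              ' "@" = Text format
    For Each c In ws.UsedRange.Cells
        If Not IsEmpty(c.Value) Then c.Value = CStr(c.Value)
    Next c
End Sub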
While exporting an SPSS (.sav) data file to Excel, the blank cells come out as #NULL!. Is there any way to overcome this?
We can replace the #NULL! with Find and Replace in Excel, but I need to do this on the SPSS end itself.
Please assist.
Regards
Satheeshkumar
You may consider saving a copy of your original SPSS file and using syntax to change the variables into strings - use the ALTER TYPE command. More details here: http://www.spss-tutorials.com/changing-variable-properties-3-type/
As I wrote, you probably save your file as a CSV or Excel file. If the variable is a string, it will be output without #NULL!, but when it is numeric and the cell is empty, Excel shows the #NULL! marker.
To avoid #NULL!, we can export to a .csv file, then save that as an Excel file.
It works.
I have an Excel spreadsheet that generates CSV scripts used in an application. The scripts must be in a very specific format, so I save a master in XLSX format with protected sheets and data validation and generate the CSVs from it, rather than edit the CSVs directly, which can lead to mistakes.
The issue is that the scripts can be of nearly any length. The left column of each line can only be one of a certain set of values, and the last line has to say "END". The only way I can do this without VBA is to put the following formula in column A, from row 7 (the first 6 rows are header information) to row 1048576 (the last Excel row), and protect the sheet with column A locked:
=IF(AND(ISBLANK(B368),NOT(ISBLANK(B367))),"END",IF(ISBLANK(B368),"",A367))
This makes the last row say "END" in column A and leaves all rows after it blank, which is what is desired. The problem is that when the CSV file is saved, it will always have 1048576 rows, with all the bottom rows containing only the delimiters ",,,,". This won't work; the CSV file needs to stop after the "END" row. Is there a way to write the formula so that Excel ignores the cells which evaluate to blank when saving to CSV, or an alternate way to save to CSV in Excel that ignores all the rows that evaluate to blank?
Note: I have a solution in VBA already that I can use on my own machine (it copies the data up to "END", pastes it into a new sheet in text-only format, then saves as CSV with the name of the original worksheet). I want to share this sheet, however, and getting around the security constraints to share macros at my company is a pain. So I'm looking for a way this might be done without macros, if it's possible at all.
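For reference, the macro does something roughly along these lines (the sheet name and path here are placeholders, and it assumes the "END" marker exists in column A):

Sub ExportScriptToCsv()
    ' Copy everything down to the "END" row into a new workbook as values,
    ' then save that workbook as a CSV named after the source worksheet.
    Dim src As Worksheet, out As Workbook, lastRow As Long
    Set src = ThisWorkbook.Worksheets("Script")      ' placeholder sheet name
    lastRow = src.Columns("A").Find(What:="END", LookAt:=xlWhole).Row
    Set out = Workbooks.Add(xlWBATWorksheet)
    src.Range(src.Cells(1, 1), src.Cells(lastRow, src.UsedRange.Columns.Count)).Copy
    out.Worksheets(1).Range("A1").PasteSpecial Paste:=xlPasteValues
    Application.CutCopyMode = False
    out.SaveAs Filename:=ThisWorkbook.Path & "\" & src.Name & ".csv", FileFormat:=xlCSV
    out.Close SaveChanges:=False
End Sub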
In looking for an answer I found this link, which is similar, but not the same:
Saving Excel data as csv with VBA - removing blank rows at end of file to save
As the "blanks" I have are active rows because they contain formulas, this method will not work.
Manually deleting the rows/columns will work to reset the size, as GSerg noted in the other question. Alternatively, also as suggested by GSerg, you can copy the data to a new sheet before saving.
Otherwise, an easy fix might be to create a small post-Excel / pre-processing script - perhaps using a batch file (see Batch / Find And Edit Lines in TXT file) - or a similar solution in any small scripting language to remove the extra rows.
I have a spreadsheet where I can enter a reference number and Excel generates a table based on the data tied to that reference number.
Rather than enter each reference number individually and copy the resulting table manually, is it possible to automatically iterate the process of entering the reference numbers (from an existing list) and exporting the results separately?
My Idea:
I didn't quite understand HOW you need to enter those references automatically, but a good idea would be to write all the numbers you need in a .txt file, then program a button which opens an msoFileDialogFilePicker to select your file. When the file is selected, you can open it with a new FileSystemObject, read it with ReadLine, and fill an array with all the numbers you have written in the .txt file. Then, with a For cycle from LBound to UBound of that array, you can enter those numbers automatically into your Range.
To export your results table each time (which, again, I didn't quite understand HOW you want it exported), an idea would be a Worksheet_Change event (if the results table appears in your spreadsheet) in which you write some code to copy-paste the results table into a new worksheet and then save it, or make a .pdf of the results table from its area.
The possibilities are really too many to write a single piece of code to show you :)
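That said, a very rough sketch of the first idea (with a hard-coded file path instead of the FileDialog, and with placeholder sheet, cell and range names) could look like this:

Sub RunAllReferences()
    ' Read reference numbers from a text file (one per line), drop each one
    ' into the input cell, and export the resulting table as a PDF.
    ' All names, cells and paths below are placeholders.
    Dim fso As Object, ts As Object
    Dim ws As Worksheet
    Dim refNum As String
    Set ws = ThisWorkbook.Worksheets("Report")
    Set fso = CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile("C:\refs\numbers.txt", 1)   ' 1 = ForReading
    Do Until ts.AtEndOfStream
        refNum = Trim(ts.ReadLine)
        If Len(refNum) > 0 Then
            ws.Range("B1").Value = refNum                 ' reference number input cell
            Application.Calculate
            ws.Range("A3:F20").ExportAsFixedFormat Type:=xlTypePDF, _
                Filename:="C:\refs\table_" & refNum & ".pdf"
        End If
    Loop
    ts.Close
End Sub

The Worksheet_Change idea would work in much the same way, just with the export code moved into the event handler.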