Trailing Newlines in Excel using MATLAB's writetable()

I'm trying to write a MATLAB table containing strings to Excel using writetable(). I'd like the text in Excel to end in a newline, that is, a blank line with nothing written on it. However, I can't seem to get trailing newlines to write.
format_spec = 'r %6.4f\ng %6.4f\nb %6.4f\n';
vals = rand(4,5,3);
temp_str = cellfun(@(x) sprintf(format_spec,x), ...
    squeeze(mat2cell(permute(vals,[3 1 2]), ...
        [size(vals,3)],[ones(1,size(vals,1))],[ones(1,size(vals,2))])), ...
    'UniformOutput',false);
temp_table = cell2table( temp_str );
writetable(temp_table,'test_table.xlsx'); % where's my trailing newline?
xlswrite('test_cell.xlsx',table2cell(temp_table)); % trailing newline preserved
With xlswrite the trailing newline is handled correctly, but to get the same functionality as writetable I have to add extra code to write the RowNames and VariableNames and to offset the Excel location of the table contents by one row and one column. I already have a workaround using xlswrite, but my question is whether and how this can be done with writetable.

I reached out to MathWorks regarding this issue and was told it is a bug. For now, xlswrite must be used to preserve trailing newlines.

Replace non-printable characters with " (Inch sign) VBA Excel

I need to replace non-printable characters with " (inch sign).
I tried to use Excel's CLEAN function and other UDFs, but they just remove the characters rather than replace them.
Note: the non-printable characters are highlighted in blue in the original screenshot, and their position within the cells is random.
This is a sample string file: Link
The expected correct output should be: 12"x14" LPG . OUTLET OCT-SEP# process
Thanks in advance for any useful comments and answers.
As per my comment, you can try:
=SUBSTITUTE(A1,CHAR(25)&CHAR(25),CHAR(34))
Or the VBA pseudo-code:
[A1] = [A1].Replace(Chr(25) & Chr(25), Chr(34))
Where [A1] is the obvious placeholder for the range-object you would want to use with proper and absolute referencing.
With the newest Microsoft 365 functions, we could also use:
=TEXTJOIN(CHAR(34),,TEXTSPLIT(A1,CHAR(25)))
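Expanding the VBA pseudo-code above into a runnable sketch (the sheet name and range are assumptions, not part of the original answer):
Sub ReplaceNonPrintable()
    Dim c As Range
    ' Replace the double Chr(25) sequence with a double quote in each cell of an assumed range
    For Each c In ThisWorkbook.Worksheets("Sheet1").Range("A1:A100")
        If Not IsEmpty(c.Value) Then
            c.Value = Replace(CStr(c.Value), Chr(25) & Chr(25), Chr(34))
        End If
    Next c
End Sub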
You can use Regular Expressions within a UDF to create a flexible method to replace "bad" characters, when you don't know exactly what they are.
In the UDF below, I show two pattern options, but others are possible.
One is to replace all characters with a character code greater than 127; the second is to replace all characters with a character code greater than 255.
Option Explicit
Function ReplaceBadChars(str As String, replWith As String) As String
    Dim RE As Object
    Set RE = CreateObject("Vbscript.Regexp")
    With RE
        .Pattern = "[\u0080-\uFFFF]"    'to replace all characters with code >127, or
        '.Pattern = "[\u0100-\uFFFF]"   'to replace all characters with code >255
        .Global = True
        ReplaceBadChars = .Replace(str, replWith)
    End With
End Function
On the worksheet you can use, for example:
=ReplaceBadChars(A1,"""")
Or you could use it in a macro if you wanted to process a column of data without adding an extra column.
Note: I am uncertain whether there might be an efficiency difference from using a smaller negated character class (e.g. [^\x00-\x7F]) instead of the character class I showed in the code, but if execution seems slow as written, I'd try that change.
You can try this:
Cells.Replace What:="[the character to replace]", Replacement:=""""
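A minimal sketch of that whole-sheet approach, assuming the offending characters are the Chr(25) pair discussed in the answers above:
Sub ReplaceOnWholeSheet()
    ' Assumes the non-printable characters are the Chr(25) pair; swap in whatever
    ' character you actually need to replace.
    ActiveSheet.Cells.Replace What:=Chr(25) & Chr(25), Replacement:=Chr(34), LookAt:=xlPart
End Sub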

Split, escaping certain splits

I have a cell that contains multiple questions and answers and is organised like a CSV. So to get all these questions and answers separated a simple split using the comma as the delimiter should separate this easily.
Unfortunately, there are some values that use the comma as the decimal separator. Is there a way to escape the split for those occurrences?
Fortunately, my data can be split using ", " as the separator, but if that weren't the case, would there still be a solution besides manually changing the decimal separator from a comma to a dot?
Example:
"Price: 0,09,Quantity: 12,Sold: Yes"
Using Split("Price: 0,09,Quantity: 12,Sold: Yes",",") would yield:
Price: 0
09
Quantity: 12
Sold: Yes
One possibility, given this test data, is to loop through the array after splitting, and whenever there's no : in the string, add this entry to the previous one.
The function that does this might look like this:
Public Function CleanUpSeparator(celldata As String) As String()
    Dim ret() As String
    Dim tmp() As String
    Dim i As Integer, j As Integer

    tmp = Split(celldata, ",")
    For i = 0 To UBound(tmp)
        If InStr(1, tmp(i), ":") < 1 Then
            ' Put this value on the previous line, and restore the comma
            tmp(i - 1) = tmp(i - 1) & "," & tmp(i)
            tmp(i) = ""
        End If
    Next i

    j = 0
    ReDim ret(j)
    For i = 0 To UBound(tmp)
        If tmp(i) <> "" Then
            ret(j) = tmp(i)
            j = j + 1
            ReDim Preserve ret(j)
        End If
    Next i
    ReDim Preserve ret(j - 1)

    CleanUpSeparator = ret
End Function
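A quick usage sketch for the function above, using the example string from the question; the comment shows what ends up in the returned array:
Sub DemoCleanUpSeparator()
    Dim parts() As String
    Dim i As Integer
    parts = CleanUpSeparator("Price: 0,09,Quantity: 12,Sold: Yes")
    For i = 0 To UBound(parts)
        Debug.Print parts(i)   ' Price: 0,09 / Quantity: 12 / Sold: Yes
    Next i
End Sub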
Note that there's room for improvement, for instance by making the separator characters : and , into parameters.
I spent the last 24 hours or so puzzling over what I THINK is a completely analogous problem, so I'll share my solution here. Forgive me if I'm wrong about the applicability of my solution to this question. :-)
My Problem: I have a SharePoint list in which teachers (I'm an elementary school technology specialist) enter end-of-year award certificates for me to print. Teachers can enter multiple students' names for a given award, separating each name using a comma. I have a VBA macro in Access that turns each name into a separate record for mail merging. Okay, I lied. That was more of a story. HERE'S the problem: How can teachers add a student name like Hank Williams, Jr. (note the comma) without having the comma cause "Jr." to be interpreted as a separate student in my macro?
The full contents of the (SharePoint exported to Excel) field "Students" are stored within the macro in a variable called strStudentsBeforeSplit, and this string is eventually split with this statement:
strStudents = Split(strStudentsBeforeSplit, ",", -1, vbTextCompare)
So there's the problem, really. The Split function is using a comma as a separator, but poor student Hank Williams, Jr. has a comma in his name. What to do?
I spent a long time trying to figure out how to escape the comma. If this is possible, I never figured it out.
Lots of forum posts suggested using a different character as the separator. That's okay, I guess, but here's the solution I came up with:
1. Replace only the special commas preceding "Jr" with a different, uncommon character BEFORE the Split function runs.
2. Swap back to the commas after Split runs.
That's really the end of my post, but here are the lines from my macro that accomplish step 1. This may or may not be of interest because it really just deals with the minutiae of making the swap. Note that the code handles several different (mostly wrong) ways my teachers might type the "Jr" part of the name.
'Dealing with the comma before Jr. This will handle ", Jr." and ", Jr" and " Jr." and " Jr".
'Replaces the comma with ~ because commas are used to separate fields in Split function below.
'Will swap ~ back to comma later in UpdateQ_Comma_for_Jr query.
strStudentsBeforeSplit = Replace(strStudentsBeforeSplit, "Jr", "~ Jr.") 'Every Jr gets this treatment regardless of what else is around it.
'Note that because of previous Replace functions a few lines prior, the space between the comma and Jr will have been removed. This adds it back.
strStudentsBeforeSplit = Replace(strStudentsBeforeSplit, ",~ Jr", "~ Jr") 'If teacher had added a comma, strip it.
strStudentsBeforeSplit = Replace(strStudentsBeforeSplit, " ~ Jr", "~ Jr") 'In cases when teacher added Jr but no comma, remove the (now extra)...
'...space that was before Jr.
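For completeness, a minimal sketch of step 2 (the post does the swap back in an Access query called UpdateQ_Comma_for_Jr, which isn't shown, so the loop and sample value below are assumptions about how it could look in VBA):
Sub SwapTildeBackToComma()
    Dim strStudentsBeforeSplit As String
    Dim strStudents() As String
    Dim i As Long
    ' Assumed sample value after the step-1 replacements
    strStudentsBeforeSplit = "Jane Doe,Hank Williams~ Jr.,John Smith"
    strStudents = Split(strStudentsBeforeSplit, ",", -1, vbTextCompare)
    For i = 0 To UBound(strStudents)
        strStudents(i) = Replace(strStudents(i), "~", ",")   ' swap ~ back to a comma
        Debug.Print strStudents(i)   ' Jane Doe / Hank Williams, Jr. / John Smith
    Next i
End Sub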

Removing unwanted data from text file

I have a large text file exported from an application that has three unwanted zeros in each row. The text file needs to be imported into another application and the zeros cause a problem.
Basically, the unwanted three zeros in each row need to be deleted. These zeros are always in the same location (the same number of characters counting from the left), but that location is somewhere in the middle of the row. I have tried various things like importing the file into Excel, removing the zeros, and then exporting as a text file, but I always have formatting problems with the exported text file.
Can someone suggest a solution or point me in the right direction?
Something like this? (quickly done)
Sub replaceInTx()
    Dim inFile As String, outFile As String
    Dim curLine As String
    inFile = "x:\Documents\test.txt"
    outFile = inFile & ".new.txt"
    Open inFile For Input As #1
    Open outFile For Output As #2
    Do Until EOF(1)
        Line Input #1, curLine
        ' Remove the first "000" found at or after position 6.
        ' Calling Replace(curLine, "000", "", 6, 1) directly would also drop the
        ' first five characters, because Replace returns the string starting at
        ' the Start position.
        Print #2, Left$(curLine, 5) & Replace(Mid$(curLine, 6), "000", "", 1, 1, vbTextCompare)
    Loop
    Close #1
    Close #2
End Sub
Alternatively, you can do that with any text editor that allows block selection (I like Notepad2, tiny, fast and portable)
I see you use Excel a lot.
When you import the text file into Excel, do you use the import function, and do you push the data into separate cells?
If the cell is numeric, you could do the following:
=LEFT(TEXT(G5,"#"),LEN(TEXT(G5,"#"))-3)
If the cell is text:
=LEFT(G5,LEN(G5)-3)
G5 would be the cell the data row/field is in.
curLine = Left(curLine, 104)
This will keep only the first 104 characters of each line.
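If instead you want to delete the three zeros at their known, fixed position rather than truncate the line, a minimal sketch looks like this (the column number and sample line are assumptions):
Sub RemoveZerosByPosition()
    ' Assumes the three unwanted zeros always start at column 42 (1-based);
    ' adjust zeroStart to the real position for your file.
    Const zeroStart As Long = 42
    Dim curLine As String
    curLine = String(41, "x") & "000" & "rest of the line"
    curLine = Left$(curLine, zeroStart - 1) & Mid$(curLine, zeroStart + 3)
    Debug.Print curLine   ' the three zeros are gone, everything else is untouched
End Sub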

Adding a newline character within a cell (CSV)

I would like to import product descriptions that need to be logically broken up by things like description, dimensions, finishes, etc. How can I insert a line break so that the breaks show up when I import the file?
This question was answered well at Can you encode CR/LF in into CSV files?.
Consider also reverse engineering multiple lines in Excel. To embed a newline in an Excel cell, press Alt+Enter. Then save the file as a .csv. You'll see that the double-quotes start on one line and each new line in the file is considered an embedded newline in the cell.
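As a minimal sketch of what that reverse-engineered file looks like when written programmatically (the path and field text are assumptions), the quoted field below contains an embedded line feed that Excel reads as an in-cell line break:
Sub WriteMultilineCsvField()
    Dim f As Integer
    f = FreeFile
    ' Assumed output path; adjust as needed
    Open "C:\temp\multiline.csv" For Output As #f
    Print #f, "Product,Description"
    ' The second field is quoted and contains Chr(10), i.e. an embedded newline
    Print #f, "Widget," & Chr(34) & "Dimensions: 12 x 14" & Chr(10) & "Finish: matte" & Chr(34)
    Close #f
End Sub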
I struggled with this as well, but here's the solution. If you add " before and at the end of the csv string you are trying to display, it will consolidate the lines into one cell while honoring the newlines.
csvString += "\"" + "Date Generated: \n";
csvString += "Doctor: " + "\n" + "\"" + "\n";
I had the same issue when trying to export the content of an email to CSV while keeping the line breaks when importing into Excel.
I export the content as this: ="Line 1"&CHAR(10)&"Line 2"
When I import it into Excel (Google Sheets), it is treated as a string, so the line still does not break.
We need to trigger the sheet to treat it as a formula via:
Format -> Number | Scientific.
This is not a nice way to do it, but it resolved my issue.
Supposing you have a text variable containing:
const text = 'wonderful text with \n newline'
the newline in the csv file is correctly interpreted when the string is enclosed in double quotes and spaces:
'" ' + text + ' "'
On Excel for Mac 2011, the newline had to be a \r instead of an \n
So
"\"first line\rsecond line\""
would show up as a cell with 2 lines
I was concatenating a variable and adding multiple items to the same row, so the code below worked for me. The "\n" newline has to be added at both the start and the end of each line; if you add it only at the end, the last one or two characters get pushed onto new lines.
$itemCode = '';
foreach ($returnData['repairdetail'] as $checkkey => $repairDetailData) {
    if ($checkkey > 0) {
        $itemCode .= "\n" . trim(@$repairDetailData['ItemMaster']->Item_Code) . "\n";
    } else {
        $itemCode .= "\n" . trim(@$repairDetailData['ItemMaster']->Item_Code) . "\n";
    }
    $repairDetaile[] = array(
        $itemCode,
    );
}
// pass the whole array to fputcsv here
foreach ($repairDetaile as $csvData) {
    fputcsv($csv_file, $csvData, ',', '"');
}
fclose($csv_file);
I converted a pandas DataFrame to a CSV string using DataFrame.to_csv() and then looked at the results. It included \r\n as the end-of-line character(s). I suggest inserting these into your CSV string as your row separator.
Depending on the tools used to generate the CSV string, you may need to escape the \ character (i.e. write \\r\\n).

R reading Excel files with carriage returns

I created a routine in R to import multiple Excel files that I need to merge into one big txt file. I use the read.xls function. Some of these xls files have carriage returns in cells ("\n"). Then, when I write the txt files (write.table), R interprets this "\n" as a new line.
How can I clean the xls files, or read them properly, to remove the unnecessary "\n"?
Thanks!
The columns in your table are almost certainly factors (that's the default for character columns in R). So, we can just change the factors in each column.
First some dummy data
R> dd = data.frame(d1 = c("1", "2\n", "33"),
                   d2 = c("1\n", "2\n", "33"))
##Default, factor
R> levels(dd[,1])
[1] "1" "2\n" "33"
Next, we use a for loop to go over the column names:
for(i in 1:ncol(dd))
    levels(dd[,i]) = gsub("\n", "", levels(dd[,i]))
If you want to remove the for loop and use sapply, then this should work
##Can this be improved?
sapply(1:ncol(dd),
       function(i) levels(dd[,i]) <<- gsub("\n", "", levels(dd[,i])))
