How can we remove the ' characters from the beginning and end of strings in a cell array in MATLAB R2015a? Suppose that we have this cell array:
When we open one of the cells, we see this:
I want to convert the whole cell array to double (numbers). Suppose that our cell array is final. Using cellfun(@str2double, final) returns NaN for all cells, and str2double(final) returns NaN too.
PS.
The last 10 elements of final at the command prompt have this structure:
ans =
''2310''
''2319''
''2313''
''2318''
''2301''
''2302''
''2303''
''2312''
''2304''
''2309''
You can replace all of the apostrophe characters with nothing, then apply str2double to each cell in your cell array.
Given that your cell array is stored in final, do something like this:
final_rep = strrep(final, '''', '');
out = cellfun(@str2double, final_rep);
Basically, use strrep to replace all of the apostrophe characters with nothing, then apply str2double to each cell in your cell array via cellfun.
Given your example above:
final = {'''2310'''
'''2319'''
'''2313'''
'''2318'''
'''2301'''
'''2302'''
'''2303'''
'''2312'''
'''2304'''
'''2309'''};
We now get this:
>> out =
2310
2319
2313
2318
2301
2302
2303
2312
2304
2309
>> class(out)
ans =
double
As you can see, the output of the array is double, as we expect.
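If you specifically want to strip apostrophes only from the very beginning and end of each string (as the question asks), a regexprep call would also work. This is just an alternative sketch, not part of the answer above:

final_rep = regexprep(final, '^''+|''+$', '');  % drop leading/trailing apostrophes only
out = cellfun(@str2double, final_rep);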
Related
Is there any way to replace the dot in a float with a comma and keep a precision of 2 decimal places?
Example 1 : 105 ---> 105,00
Example 2 : 99.2 ---> 99,20
I used a lambda function df['abc']= df['abc'].apply(lambda x: f"{x:.2f}".replace('.', ',')). But then I have an invalid format in Excel.
I'm updating a specific sheet in Excel, so I'm using:
wb = load_workbook(filename)
ws = wb["FULL"]
for row in dataframe_to_rows(df, index=False, header=True):
    ws.append(row)
Let us try
out = (s//1).astype(int).astype(str)+','+(s%1*100).astype(int).astype(str).str.zfill(2)
0 105,00
1 99,20
dtype: object
Input data
s=pd.Series([105,99.2])
s = pd.Series([105, 99.22]).apply(lambda x: f"{x:.2f}".replace('.', ','))
First, .apply takes a function, and
the f-string f"{x:.2f}" turns the float into a 2-decimal-place string with '.'.
After that, .replace('.', ',') simply replaces '.' with ','.
You can change pd.Series([105, 99.22]) to match your dataframe column.
I think you're mixing something up here. In Excel you can set the display format, i.e. the format in which numbers are printed (the icon with +/-0).
But that is not the format of the cell's value; the cell is numeric either way. Your approach changes only the cell value, not its formatting, and since you save it as a string, it is read back from Excel as a string.
Having said this, don't format the value; upgrade your pandas (if you haven't done so already) and try something along these lines: https://stackoverflow.com/a/51072652/11610186
To elaborate, try replacing your for loop with:
i = 1
for row in dataframe_to_rows(df, index=False, header=True):
    ws.append(row)
    # replace D with the letter of the column you want to see formatted:
    ws[f'D{i}'].number_format = '#,##0.00'
    i += 1
Well, I found another way to specify the float format directly in Excel using this code:
for col_cell in ws['S':'CP']:
    for i in col_cell:
        i.number_format = '0.00'
Given a MATLAB table that contains many NaN values, how can I write the table to an Excel or CSV file where the NaN values are replaced by blanks?
I use the following function:
T = table(NaN(5,2),'VariableNames',{'A','C'})
writetable(T, filename)
I do not want to replace them with zeros. I want the output file to:
have blanks for the NaN values, and
include the variable names.
You just need xlswrite for that; it replaces NaN values with blanks itself. Use table2cell, or the combination of table2array and num2cell, to convert your table to a cell array first. Then use the VariableNames property of the table to retrieve the variable names and prepend them to the cell array.
data= [T.Properties.VariableNames; table2cell(T)];
%or data= [T.Properties.VariableNames; num2cell(table2array(T))];
xlswrite('output',data);
Sample run for:
T = table([1;2;3],[NaN; 410; 6],[31; NaN; 27],'VariableNames',{'One' 'Two' 'Three'})
T =
3×3 table
    One    Two    Three
    ___    ___    _____

      1    NaN       31
      2    410      NaN
      3      6       27
yields an Excel file with blank cells where the NaN values were.
Although the above solution is simpler in my opinion, if you really want to use writetable then:
tmp = table2cell(T); %Converting the table to a cell array
tmp(isnan(T.Variables)) = {[]}; %Replacing the NaN entries with []
T = array2table(tmp,'VariableNames',T.Properties.VariableNames); %Converting back to table
writetable(T,'output.csv'); %Writing to a csv file
I honestly think the most straightforward way to output the data in the format you describe is to use xlswrite as Sardar did in his answer. However, if you really want to use writetable, the only option I can think of is to encapsulate every value in the table in a cell array and replace the NaN entries with empty cells. Starting with this sample table T with random data and NaN values:
T = table(rand(5,1), [nan; rand(3,1); nan], 'VariableNames', {'A', 'C'});
T =
            A                      C
    _________________    _________________

    0.337719409821377                  NaN
    0.900053846417662    0.389738836961253
    0.369246781120215    0.241691285913833
    0.111202755293787    0.403912145588115
    0.780252068321138                  NaN
Here's a general way to do the conversion:
for name = T.Properties.VariableNames   % Loop over variable names
    temp = num2cell(T.(name{1}));       % Convert numeric array to cell array
    temp(cellfun(@isnan, temp)) = {[]}; % Set cells with NaN to empty
    T.(name{1}) = temp;                 % Place back into table
end
And here's what the table T ends up looking like:
T =
             A                       C
    ___________________    ___________________

    [0.337719409821377]    []
    [0.900053846417662]    [0.389738836961253]
    [0.369246781120215]    [0.241691285913833]
    [0.111202755293787]    [0.403912145588115]
    [0.780252068321138]    []
And now you can output it to a file with writetable:
writetable(T, 'sample.csv');
I would like to search for a specific string in a MATLAB cell array. For example, my cell array contains a column of strings like this:
variable(:,5) = {'10';'10;20';'20';'10;20';'10';'10';'20'};
I would like to search for all cells that have only '10' and delete them.
I tried using this statement for searching:
is10 = ~cellfun(@isempty, strfind(variable(:,5), '10'));
But this returns all cells with '10' (including the ones with '10;20').
I would like to get just the cells with the exact '10' value.
What is the best way to do this?
It is not working as you expect because strfind allows for a partial string match. What you want is an exact match, which you can get using strcmp. Also, the input to strcmp can be a cell array of strings, so you can use it in the following way:
A = {'10';'10;20';'20';'10;20';'10';'10';'20'};
is10 = strcmp(A, '10');
%// 1 0 0 0 1 1 0
You could also use ismember to do the same thing.
is10 = ismember(A, '10');
%// 1 0 0 0 1 1 0
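The question also asks to delete those cells. Assuming you simply want to drop the matching entries from the cell array itself (my reading of the question, not part of the original answer), one way is:

A = {'10';'10;20';'20';'10;20';'10';'10';'20'};
is10 = strcmp(A, '10');  % exact matches only
A(is10) = []             % A is now {'10;20';'20';'10;20';'20'}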
As a side note, most string functions (including strfind) can actually accept a cell array of strings as input. So in your initial post, the wrapping of strfind inside of cellfun is unnecessary.
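For instance, here is a small illustration of that side note using the same data (my own example, not from the original post):

A = {'10';'10;20';'20';'10;20';'10';'10';'20'};
idx = strfind(A, '10');               % cell array of match positions, no cellfun needed
hasSubstr = ~cellfun('isempty', idx)  % 1 1 0 1 1 1 0 (partial matches included)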
Hello,
I have a little problem.
I have a txt file of over 200 MB.
It looks like:
%Hello World
%second sentences
%third;
%example
12.02.2014
;-400;-200;200
;123;233;434
%Hello World
%second sentences
%third
%example
12.02.2014
;-410;200;20300
;63;23;43
;23;44;78213
..
... ...
I need only the values after the semicolons, like:
Value1{1,1}=[-400]; Value1{1,2}=[-200]; and Value1{1,3}=[200]
Value2{1,1}=[123]; Value2{1,2}=[233]; and Value2{1,3}=[434]
and so on.
Does someone have an idea how I can split the values into a cell array or vector?
Thus, the variables must be:
Var1 = [-400 -200   200;
         123  233   434];
Var2 = [-410  200 20300;
          63   23    43;
          23   44 78213];
I want to separate the data after every date into a new variable. For example, when I have 55 dates, I will have 55 variables.
This could be one approach, assuming uniformly structured data (3 valid numbers per row):
%// Read in entire text data into a cell array
data = importdata('sample.txt','');
%// Remove empty lines
data = data(~cellfun('isempty',data))
%// Find boundaries based on delimiter "%example"
exmp_delim_matches = arrayfun(@(n) strcmp(data{n},'%example'),1:numel(data))
bound_idx = [find(exmp_delim_matches) numel(exmp_delim_matches)]
%// Find lines that start with delimiter ";"
matches_idx = find(arrayfun(@(n) strcmp(data{n}(1),';'),1:numel(data)))
%// Select lines that start with character ";" and split lines based on it
%// Split selected lines based on the delimiter ";"
split_data = regexp(data(matches_idx),';','split')
%// Collect all cells data into a 1D cell array
all_data = [split_data{:}]
%// Select only non-empty cells and convert them to a numeric array.
%// Finally reshape into a format with 3 numbers per row as final output
out = reshape(str2double(all_data(~cellfun('isempty',all_data))),3,[]).'
%// Separate out lines based on the earlier set bounds
out_sep = arrayfun(@(n) out(matches_idx>bound_idx(n) & ...
    matches_idx<bound_idx(n+1),:),1:numel(bound_idx)-1,'Uni',0)
%// Display results for verification
celldisp(out_sep)
Code run -
out_sep{1} =

    -400   -200     200
     123    233     434

out_sep{2} =

    -410    200   20300
      63     23      43
      23     44   78213
A brute force approach would be to open up your file, then read each line one at a time. With each line, you check to see if the first character is a semi-colon and if it is, split up the string by the ; delimiter from the second character of the line up until the end. You will receive a cell array of strings, so you'd have to convert this into an array of numbers. Because you will probably have each line containing a different amount of numbers, let's store each array into a cell array where each element in this cell array will contain the numbers per line. As such, do something like this. Let's assume your text file is stored in text.txt:
fid = fopen('text.txt');
if fid == -1
    error('Cannot find file');
end

nums = {};
while true
    st = fgetl(fid);
    if st == -1
        break;
    end
    if st(1) == ';'
        st_split = strsplit(st(2:end), ';');
        arr = cellfun(@str2num, st_split);
        nums = [nums arr];
    end
end
fclose(fid);  % close the file when we are done reading
Let's go through the above code slowly. We first use fopen to open up the file for reading. We check to see if the ID returned from fopen is -1; if that's the case, we couldn't find or open the file, so spit out an error. Next, we declare an empty cell array called nums, which will store the numbers we get when parsing your text file.
Now, until we reach the end of the file, get one line of text starting from the top of the file and proceed to the end; we use fgetl for this. If we read a -1, we have reached the end of the file, so get out of the loop. Otherwise, we check whether the first character is ;. If it is, then we take a look at the second character until the end of the line, and split the string based on the ; character with strsplit. The result of this will be a cell array of strings where each element is the string representation of a number. You need to convert this cell array back into a numeric array, so you would need to apply str2num to each element in the cell array. You can either use a loop to go through each cell, or you can conveniently use cellfun (http://www.mathworks.com/help/matlab/ref/cellfun.html) to go through each element in the cell array and convert the string representation into a numeric value. The resulting output of cellfun will give you a numeric array representation of each value delimited by the ; character for that line. We then place this array into a single cell stored in nums.
The end result of this entire code will give you numeric arrays that are based on what you are looking for stored in nums.
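For the sample file in the question, the first few cells of nums would look like this (a quick check assuming that exact input; note the code above collects every semicolon line, it does not yet group them by date):

celldisp(nums)
% nums{1} -> [-400 -200 200]
% nums{2} -> [123  233  434]
% nums{3} -> [-410 200 20300], and so on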
Warning
I am assuming that your text file only has numbers delimited by ; characters if we encounter a line that starts with ;. If this is not the case, then my code will not work. I'm assuming this isn't the case!
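If that assumption might not hold, a slightly more defensive parse of a single line (my sketch, not part of the answer above) swaps str2num for str2double and drops anything that fails to convert:

st = ';-400;abc;-200;200';             % hypothetical line containing a stray token
st_split = strsplit(st(2:end), ';');   % split on the ';' delimiter
vals = str2double(st_split);           % non-numeric tokens become NaN
vals = vals(~isnan(vals))              % keep only the valid numbers -> [-400 -200 200]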
I have a cell array, something like this: P = {Face1 Face6 Scene6 Both9 Face9 Scene11 Both12 Face15}. I would like to count how many Face values, Scene values, and Both values there are in P. I don't care about the numeric values after the string (i.e., Face1 and Face23 would be counted as two Face values). I've tried the following (for Face), but I got the error "If any of the input arguments are cell arrays, the first must be a cell array of strings and the second must be a character array".
strToSearch='Face';
numel(strfind(P,strToSearch));
Does anyone have any suggestions? Thank you!
Use regexp to find strings that start (^) with the desired text (such as 'Face'). The result will be a cell array where each cell contains 1 if there is a match, or [] otherwise. So determine which cells are nonempty (~cellfun('isempty', ...) gives a logical 1 for nonempty cells and 0 for empty cells), and sum the results (sum):
>> P = {'Face1' 'Face6' 'Scene6' 'Both9' 'Face9' 'Scene11' 'Both12' 'Face15'};
>> sum(~cellfun('isempty', regexp(P, '^Face')))
ans =
4
>> sum(~cellfun('isempty', regexp(P, '^Scene')))
ans =
2
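If you want all three counts at once, one compact variant building on the same regexp idea (my sketch, not from the original answer) is:

P = {'Face1' 'Face6' 'Scene6' 'Both9' 'Face9' 'Scene11' 'Both12' 'Face15'};
prefixes = {'Face', 'Scene', 'Both'};
counts = cellfun(@(p) nnz(~cellfun('isempty', regexp(P, ['^' p]))), prefixes)
% counts = [4 2 2]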
Your example should work with some small tweaks, provided all of P contains strings, but may give the error you get if there are any non-string values in the cell array.
P= {'Face1' 'Face6' 'Scene6' 'Both9' 'Face9' 'Scene11' 'Both12' 'Face15'};
strToSearch='Face';
n = strfind(P,strToSearch);
numel([n{:}])
(returns 4)