If I am given a string of letters 'abcd' and I want to convert this to a vector of numbers V = [1,2,3,4] which corresponds to the position of letters in the alphabet table, how can I do this?
Just subtract 'a', then add one so that 'a' maps to 1. The subtraction automatically converts the result to a double:
V = C - 'a' + 1;
For example,
C = 'helloworld';
C - 'a' + 1
ans =
8 5 12 12 15 23 15 18 12 4
To map 'a' to 1, 'b' to 2, etc., use the DOUBLE function to recast the character back to its ASCII code number, then shift the value:
V = double(charString)-96;
EDIT: Actually, you don't even need the call to DOUBLE. Characters will automatically be converted into double-precision numbers when you perform any arithmetic with another double-precision number (the default type for MATLAB variables). So, the following is an even simpler answer:
V = charString-96;
Use uint8 to recast the string, subtract the char value of 'a', and add one; the result is already a numeric vector.
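The same ASCII-offset trick works in most languages. Here is a minimal Python sketch for comparison (the function name letter_positions is my own, not from the answers above):

```python
def letter_positions(s):
    """Map each lowercase letter to its 1-based position in the alphabet."""
    # ord() gives the character's code point, so ord(c) - ord('a') + 1
    # maps 'a'..'z' to 1..26, just like C - 'a' + 1 in MATLAB.
    return [ord(c) - ord('a') + 1 for c in s]

print(letter_positions('abcd'))        # [1, 2, 3, 4]
print(letter_positions('helloworld'))  # [8, 5, 12, 12, 15, 23, 15, 18, 12, 4]
```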
You are given two strings A and B, and an empty string C. In one operation you can remove any number of characters (from anywhere, preserving their order) from string B and append them to string C. Find the minimum number of operations required to make string C equal to string A.
For example, if
A is "ABCDE" and
B is "ABDEC", then
in the 1st operation you choose the subsequence ABC from B, and in the 2nd operation DE.
So two operations are required.
If
A is "ABCDE" and
B is "EDCBA", then
5 operations are required.
Linear complexity is expected: O(n).
Just use a greedy algorithm.
1 - Let i = 0
2 - Let j = 0
3 - Search for the first A[i] in B after j
4 - If it exists, let j be its index in B, remove it from B, append it to C, increment i, and repeat from 3
5 - If it doesn't exist, repeat from 2
Each time you reach step 5 starts a new operation, so the total number of operations is one more than the number of times step 5 fires.
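The five steps above can be sketched directly in Python. This is a naive version (the deletions and rescans make it quadratic rather than linear), but it shows the greedy idea:

```python
def min_operations_greedy(a, b):
    """Count operations: each pass over b grabs the longest run of
    characters that continue matching a in order (one operation)."""
    b = list(b)
    i = 0          # next index of a that C still needs
    ops = 0
    while i < len(a):
        ops += 1   # start a new operation
        j = 0
        # one left-to-right pass over the remaining characters of b
        while j < len(b) and i < len(a):
            if b[j] == a[i]:
                del b[j]   # removed from b, appended to C conceptually
                i += 1     # next element of b is now at index j
            else:
                j += 1
    return ops

print(min_operations_greedy("ABCDE", "ABDEC"))  # 2
print(min_operations_greedy("ABCDE", "EDCBA"))  # 5
```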
Assuming all the characters of A (and B) are distinct, here is a solution with linear complexity. You need a hashmap or something similar, as well as an array of indices Y of the same length as A and B.
1 - Put each character of A in the hashmap as key, with its index as value.
2 - Look up each character of B in the hashmap to get the value i, and put its index into Y at the position i.
3 - Go through Y counting the number of times that Y[i] < Y[i-1]. The number of operations is that count plus one (each descent marks the boundary between two operations, and the first operation has no preceding descent).
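A sketch of this hashmap approach in Python, assuming the characters of A are distinct and B is a permutation of A. The operation count is the number of descents in Y plus one:

```python
def min_operations(a, b):
    """Linear-time operation count, assuming a has distinct characters
    and b is a permutation of a."""
    pos_in_a = {ch: i for i, ch in enumerate(a)}   # step 1
    y = [0] * len(a)
    for j, ch in enumerate(b):                     # step 2
        y[pos_in_a[ch]] = j
    # step 3: one operation, plus one more per descent in y
    return 1 + sum(1 for i in range(1, len(y)) if y[i] < y[i - 1])

print(min_operations("ABCDE", "ABDEC"))  # 2
print(min_operations("ABCDE", "EDCBA"))  # 5
```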
I am trying to understand the maths in this code that converts binary to decimal. I was wondering if anyone could break it down so that I can see the working of a conversion. Sorry if this is too newb, but I've been searching for an explanation for hours and can't find one that explains it sufficiently.
I know the conversion is decimal*2 + int(digit), but I still can't break it down to understand exactly how it's converting to decimal.
binary = input('enter a number: ')
decimal = 0
for digit in binary:
    decimal = decimal*2 + int(digit)
print(decimal)
Here's an example with the small binary number 10 (which is 2 in decimal):
binary = '10'
decimal = 0
for digit in binary:
    decimal = decimal*2 + int(digit)
(Note that binary must be a string; you can't loop over the digits of an int.)
The for loop takes the 1 from the binary number first, so digit = '1' for the 1st iteration.
It overwrites the value of decimal, which is initially 0:
decimal = 0*2 + 1 = 1
For the 2nd iteration, digit = '0'.
It again calculates the value of decimal:
decimal = 1*2 + 0 = 2
So your decimal number is 2.
The for loop and syntax are hiding a larger pattern. First, consider the same base-10 numbers we use in everyday life. One way of representing the number 237 is 200 + 30 + 7. Breaking it down further, we get 2*10^2 + 3*10^1 + 7*10^0 (note that ** is the exponent operator in Python, but ^ is used nearly everywhere else in the world).
There's this pattern of exponents and coefficients with respect to the base 10. The exponents are 2, 1, and 0 for our example, and we can represent fractions with negative exponents. The coefficients 2, 3, and 7 are the same as from the number 237 that we started with.
It winds up being the case that you can do this uniquely for any base. I.e., every real number has a unique representation in base 10, base 2, and any other base you want to work in. In base 2, the exact same pattern emerges, but all the 10s are replaced with 2s. E.g., in binary consider 101. This is the same as 1*2^2 + 0*2^1 + 1*2^0, or just 5 in base-10.
What the algorithm you have does is make that a little more efficient. It's pretty wasteful to compute 2^20, 2^19, 2^18, and so on when you're basically doing the same operations in each of those cases. With our same binary example of 101, it can be re-written as (1*2+0)*2+1. Notice that if you distribute the second 2 into the parentheses, you get the same representation we started with.
What if we had a larger binary number, say 11001? The same trick still works: (((1*2+1)*2+0)*2+0)*2+1.
With that last example, what is your algorithm doing? It first computes (1*2+1). On the next loop, it takes that number, multiplies it by 2, and adds the next digit to get ((1*2+1)*2+0), and so on. After just two more iterations your entire decimal number has been computed.
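The nested form described above is known as Horner's scheme, and it is exactly what the loop computes. A quick check, also comparing against Python's built-in int(s, 2):

```python
def to_decimal(binary):
    """Convert a binary string to an int via repeated doubling."""
    decimal = 0
    for digit in binary:
        decimal = decimal * 2 + int(digit)
    return decimal

print(to_decimal('101'))     # 5
# The nested (Horner) form, the loop, and int(s, 2) all agree:
print((((1*2+1)*2+0)*2+0)*2+1, to_decimal('11001'), int('11001', 2))  # 25 25 25
```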
Effectively, this takes each binary digit, multiplies it by 2^n where n is the place of that digit, and sums the results. The confusion comes from this being done almost in reverse. Let's step through an example:
binary = "11100"
First it takes the digit '1' and adds it onto 0*2 = 0, so we have decimal = '1'.
Next it takes the second digit '1' and adds it to 1*2 = 2, giving decimal = '1'*2 + '1'.
Same again, giving decimal = '1'*2^2 + '1'*2 + '1'.
Then the two zeros add nothing but double the result twice, so finally decimal = '1'*2^4 + '1'*2^3 + '1'*2^2 + '0'*2 + '0' = 28.
(I've left quotes around the digits to show where they came from.)
As you can see, the end result in this format is a pretty simple binary to decimal conversion.
I hope this helped you understand a bit :)
I will try to explain the logic:
Consider the binary number 11001010. When looping in Python, the first digit 1 comes in first, and so on.
To convert it to decimal, we multiply the first digit by 2^7, the next by 2^6, and so on down to the last digit multiplied by 2^0.
And then we add (sum) them.
Here a digit is added whenever it is taken, and the running total is then multiplied by 2 on every later iteration. For example, 1*(2^7) is performed here as decimal = 0(decimal) + 1(digit), which then gets multiplied by 2 seven times over the remaining iterations. When the next digit (1) comes in the second iteration, it is added as decimal = 1(decimal)*2 + 1(digit). During the third iteration of the loop, decimal = 3(decimal)*2 + 0(digit), and
3*2 = (2+1)*2 = (first_digit) 1*2*2 + (second_digit) 1*2.
It continues so on for all the digits.
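The doubling described above can be watched directly by printing the running value at each iteration (this is just the question's loop with a print added):

```python
binary = '11001010'
decimal = 0
for digit in binary:
    decimal = decimal * 2 + int(digit)
    print(digit, decimal)  # show how the total doubles, then absorbs the digit

# The final value is 202, i.e. 1*2^7 + 1*2^6 + 1*2^3 + 1*2^1
```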
I have 2 integer variables that I want to combine into a single decimal number. The result should be like this:
a = 10
b = 12
c = 10.12
I can concatenate them into a decimal string, but then I cannot use math functions on the result. I have tried to use tonumber() on the string but I got a nil value.
I assume that a and b start out as integers and you want to join them so that a is the integer part of the resulting number and b is the fractional part - the part after the dot in a double or float.
This is the string-concatenation solution you suggested, which works fine for me:
a = 10
b = 12
c = tonumber(a..'.'..b)
print(c) -- prints 10.12
Alternatively, here we use math to calculate the power of 10 to divide b by so that it becomes the correct fractional part, and then add it to a. The code for determining the power of 10 was found here: How can I count the digits in an integer without a string cast?
a = 10
b = 12
c = a + b / math.pow(10, b == 0 and 1 or math.floor(math.log(math.abs(b), 10))+1)
print(c) -- prints 10.12
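The same join-as-decimal idea, sketched in Python for comparison (the helper name join_decimal is my own): divide b by the smallest power of 10 that exceeds it, so it becomes the fractional part.

```python
def join_decimal(a, b):
    """Join integers a and b into a number with a before the dot
    and b after it, e.g. join_decimal(10, 12) -> 10.12."""
    # len(str(b)) counts b's digits, like the floor(log10) trick in Lua
    digits = len(str(b)) if b != 0 else 1
    return a + b / 10 ** digits

print(join_decimal(10, 12))  # 10.12
```

Note that, as with the Lua version, leading zeros in the fractional part (e.g. b intended as "05") cannot be represented this way, since b is just an integer.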
I have a string which is a sequence of numbers between 0-9, I would like to replace some of the numbers in that string with random numbers, but I'm unable to do with with either rand or randi as I keep getting the following error:
Conversion to char from cell is not possible.
Error in work (line 58)
mut(i,7:14) = {mute_by};
Here's what I'm currently doing to try and alter some digits of my string:
% mutate probability 0.2
if (rand < 0.2)
    % pick best chromosome to mutate
    mut = combo{10};
    mute_by = rand([0,9]);
    for i = 1:5
        mut(i) = {mute_by};
    end
end
mut represents the string 110202132224154246176368198100
How would I go about doing this? I assumed it would be fairly simple but I've been going over the documentation for a while now and I can't find the solution.
What I would do is generate a logical array that is true wherever you want to replace a position in your string and false otherwise. You determine this by generating random floating-point numbers of the same size as your string and checking whether their values are < 0.2. For any positions in your string that satisfy this constraint, replace them with another random integer.
The best way to do this is to convert your string into actual numbers so that you can actually modify the values numerically, create the logical array, then replace the values at those positions with random integers.
Something like this:
rng(123); %// Set seed for reproducibility
mut = '110202132224154246176368198100';
mut_num = double(mut) - 48;
n = numel(mut_num);
vec = rand(1, n) < 0.4;
num_elem = sum(vec);
mut_num(vec) = randi(10,1,num_elem) - 1;
mut_final = char(mut_num + 48);
Let's go through this code slowly. rng is a random seed generator, and I set it to 123 so that you're able to reproduce the same results that I have made as random number generation... is of course random. I declare the number you have made as a string, then I have a nifty trick of turning each character in your string into a single element of a numeric array. I do this by casting to double, then subtracting by 48. By casting to double, each element gets converted into its ASCII code, and so 0 would be 48, 1 would be 49 and so on. Therefore, we need to subtract the values by 48 so that we can bring this down to a range of [0-9]. I then count how long your string is with numel, then figure out which values I want to replace by generating that logical vector we talked about.
We then count up how many elements we need to change, then generate a random integer vector that is exactly this size, and we use this logical vector to index into our recently converted string (being stored as a numeric array) with random integers. Note that randi generates values from 1 up to whatever maximum you want. Because this starts at 1, I have to generate up to 10, then subtract by 1. The output of randi gets placed into our numeric array, and then we convert our numeric array into a string with char. Note that we need to add by 48 to convert the numbers into their ASCII equivalents before creating our string.
I've changed the probability to 0.4 to actually see the changes better. Setting this to 0.2 I could barely notice any changes. mut_final would contain the changed string. Here is what they look like:
>> mut
mut =
110202132224154246176368198100
>> vec
vec =
Columns 1 through 13
0 1 1 0 0 0 0 0 0 1 1 0 0
Columns 14 through 26
1 1 0 1 1 0 0 0 0 0 0 0 1
Columns 27 through 30
1 1 1 0
>> mut_final
mut_final =
104202132444143248176368195610
vec contains those positions in the string you want to change, starting from the 2nd position, 3rd position, etc. The corresponding positions in mut change with respect to vec while the rest of the string is untouched and is finally stored in mut_final.
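For comparison, here is a minimal sketch of the same mutate-random-positions idea in Python (the function name mutate and the seeded rng are my own choices, mirroring the rng(123) seeding above):

```python
import random

def mutate(digits, p=0.4, rng=random.Random(123)):
    """Replace each character of a digit string with a random digit,
    independently with probability p."""
    out = []
    for ch in digits:
        if rng.random() < p:
            out.append(str(rng.randrange(10)))  # random digit 0-9
        else:
            out.append(ch)
    return ''.join(out)

s = '110202132224154246176368198100'
print(mutate(s))  # same length, with some digits replaced
```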
In matlab, how can I turn a string or cell of digits into a vector of numbers, where each digit in the string is an element in the vector.
That is, for e.g., how to turn this:
A = '3141592';
(where class(A) = char)
into this:
A = [3 1 4 1 5 9 2];
(where class(A) = double)
This is related to this question
Subtract the ASCII value of '0' from each of the characters that constitute the string in A to get the double array -
A-'0'
Straight away plugging in the ascii value would work too -
A-48
Output -
ans =
3 1 4 1 5 9 2
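The same '0'-offset trick, sketched in Python for comparison (or, since Python iterates strings character by character, int() works directly):

```python
s = '3141592'
# Subtracting the code point of '0' mirrors the MATLAB A - '0' idiom
print([ord(c) - ord('0') for c in s])  # [3, 1, 4, 1, 5, 9, 2]
# The more idiomatic Python version gives the same result
print([int(c) for c in s])             # [3, 1, 4, 1, 5, 9, 2]
```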