Assembly shift right with carry in? - nasm

My class is using NASM assembly and I was trying to figure out different ways to shift. We know the instructions shr/sar, shl/sal, ror, rcr, rol, rcl. But how would I shift right and set the leftmost bit to whatever I want?
For example:
I have 11010011, and shifting right would produce _1101001 cf=1,
is there a shift in which I can carry in a number to the leftmost bit?
Thanks!
edit:
My only thought is to use bitwise operations: if the leftmost bit isn't what I want, I can flip it using the not operator.
For example the number ends up as 1 1101001 and I wanted 0 1101001,
1 1101001 & 01101001 = 01101001
or,
0 1101001 | 11101001 = 11101001

The easiest way would be to simply set the bit to what you want using AND or OR operations.
If you want the high bit set to 1, use input OR 10000000.
If you want it set to 0, use input AND 01111111.
The remaining bits will be unchanged.
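For instance, here is a small sketch of that mask approach, written in Python just to show the bit arithmetic (the function name shift_right_with_top_bit is made up; in NASM the same thing is a shr followed by an or or and with an immediate mask):

def shift_right_with_top_bit(value, new_msb):
    # Works on an 8-bit value; the low bit that falls out is what CF would hold.
    carry_out = value & 1
    shifted = (value >> 1) & 0xFF
    if new_msb:
        shifted |= 0b10000000   # force the leftmost bit to 1 (OR with the mask)
    else:
        shifted &= 0b01111111   # force the leftmost bit to 0 (AND with the mask)
    return shifted, carry_out

print(shift_right_with_top_bit(0b11010011, 0))   # (105, 1), i.e. 01101001 with CF = 1
print(shift_right_with_top_bit(0b11010011, 1))   # (233, 1), i.e. 11101001 with CF = 1

Also note that rcr, already in your list, shifts the carry flag into the leftmost bit, so setting CF with stc or clc before an rcr by 1 gives the same effect in a single shift.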

Any algorithm that uses a number as a seed for generating a random string?

I want to generate a random string of any fixed length (N) of my choice. With the same number as a seed, this algorithm should generate the same string; and with a small change to the number, like number+1, it should generate a completely different string (difficult to relate to the previous seed). It's OK if more than one number results in the same string. Any approaches for doing this?
By the way, I have a set of characters that I want to appear in the string, like A-Z a-z 0-9.
For example
Algorithm(54893450,4,"ABCDEFG0") -> A0GF
Algorithm(54893451,4,"ABCDEFG0") -> BDCG
I could generate each character randomly one by one, but that would need N different seeds, one per character. If I want to do it this way, the question might become "how to generate N numbers from one number" for the seeds.
The end goal is that I want to convert a GUID to something shorter and more readable on printed media. I don't care much about conflicts. (If a conflict does happen, I can still check the GUID for resolution.)
Ok, thanks for the guidance @Jim Mischel. I read all the related pages and have come to understand more about this.
http://blog.mischel.com/2017/05/30/how-not-to-generate-unique-codes/
http://blog.mischel.com/2017/06/02/a-broken-unique-key-generator/
http://blog.mischel.com/2017/06/10/how-did-this-happen/
http://blog.mischel.com/2017/06/20/how-to-generate-random-looking-keys/
https://ericlippert.com/2013/11/12/math-from-scratch-part-thirteen-multiplicative-inverses/
https://ericlippert.com/2013/11/14/a-practical-use-of-multiplicative-inverses/
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
In short, first I should use a sequential number, that is 1, 2, 3, 4, ... Very predictable, but it can be turned into something that looks random and is hard to guess.
(Note that in my case this is not entirely possible, since each user will be generating his own ID locally, so I cannot run a global sequential counter; hence I use a GUID. But I will make my own workaround to fit the GUID into this solution, probably with a simple modulo on the GUID to fit it into my desired range.)
With a sequential integer n I can get another, seemingly unrelated integer with a multiplication followed by a modulo. This might look like (n * x) % m, with x and m of my choice. Of course m has to be larger than the largest number that I want to use, since the multiplication wraps around at the modulo.
This alone is a good start, as close values of n do not produce similar output. But we cannot be so sure about that. For example, if my x is 4 and m is 16 then the input can only produce 0, 4, 8, 12. To avoid this we choose x and m that are coprime to each other (having a greatest common divisor of 1). There are many obvious candidates, such as 100000 as m (which limits my output to 99999) and 2429 as x. If we choose two coprime numbers like this, not only do collisions happen as rarely as possible, it is also guaranteed that each input in that range produces a unique output.
We can learn from this example:
(n * 5) % 16
As 5 and 16 are coprime, we get the maximum-length sequence of unique numbers before it wraps around (length = 16) if we input numbers sequentially from 0 to 16:
Input: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
Output: 0, 5, 10, 15, 4, 9, 14, 3, 8, 13, 2, 7, 12, 1, 6, 11, 0
We can see that the output is a not-so-predictable sequence, and none of the outputs repeat except the last one, which wraps back to the start. It visits every possible number in the range.
Now my very predictable sequential running number produces a sufficiently different-looking number, and it is also guaranteed not to collide with the output of any other input as long as it is in the range of m. What's left is to convert this number to a string of my choice via base conversion. If I have 5 characters "ABCDE" then I will use base-5.
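As a concrete sketch of the two steps described so far (the multiply-then-modulo scramble plus base conversion), here in Python: the function name encode is made up, the constant 2429 is reused from above, and m is taken as len(charset) ** length so every code fits exactly in the requested width (a small adaptation of the m = 100000 example).

def encode(n, length, charset, x=2429):
    m = len(charset) ** length             # works when x and m are coprime (gcd == 1)
    value = (n * x) % m                    # the multiply-then-modulo scramble
    digits = []
    for _ in range(length):
        value, remainder = divmod(value, len(charset))
        digits.append(charset[remainder])  # base conversion into the chosen character set
    return "".join(reversed(digits))

print(encode(54893450, 4, "ABCDEFG0"))
print(encode(54893451, 4, "ABCDEFG0"))     # a neighbouring seed gives an unrelated-looking code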
Only this is enough for my use case. But with the concept of the multiplicative inverse I can also find one more integer y which reverses that multiply-modulo transformation back to the original number. I still haven't fully understood that part, but it uses the extended Euclidean algorithm to find y.
Since my application does not need the reverse direction yet, I am fine with not understanding it for now. I will definitely try to understand that part.
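For reference, a sketch of that last step: the multiplicative inverse y found with the extended Euclidean algorithm (mod_inverse is a made-up helper name; in Python 3.8+ the same value is simply pow(x, -1, m)).

def mod_inverse(x, m):
    # Extended Euclidean algorithm: find y with (x * y) % m == 1 (requires gcd(x, m) == 1).
    old_r, r = x, m
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % m

x, m = 5, 16
y = mod_inverse(x, m)      # 13, since (5 * 13) % 16 == 1
code = (7 * x) % m         # the forward transform of n = 7 gives 3
print((code * y) % m)      # 7: the inverse undoes the scramble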

Bitwise operations Python

This is a first run-in with not only bitwise ops in python, but also strange (to me) syntax.
for i in range(2**len(set_)//2):
    parts = [set(), set()]
    for item in set_:
        parts[i&1].add(item)
        i >>= 1
For context, set_ is just a list of 4 letters.
There's a bit to unpack here. First, I've never seen [set(), set()]. I must be using the wrong keywords, as I couldn't find it in the docs. It looks like it creates a matrix in pythontutor, but I cannot say for certain. Second, while parts[i&1] is a slicing operation, I'm not entirely sure why a bitwise operation is required. For example, 0&1 should be 1 and 1&1 should be 0 (carry the one), so binary 10 (or 2 in decimal)? Finally, the last bitwise operation is completely bewildering. I believe a right shift is the same as dividing by two (I hope), but why i>>=1? I don't know how to interpret that. Any guidance would be sincerely appreciated.
[set(), set()] creates a list consisting of two empty sets.
0&1 is 0, 1&1 is 1. There is no carry in bitwise operations. parts[i&1] therefore refers to the first set when i is even, the second when i is odd.
i >>= 1 shifts right by one bit (which is indeed the same as dividing by two), then assigns the result back to i. It's the same basic concept as using i += 1 to increment a variable.
The effect of the inner loop is to partition the elements of set_ into two subsets, based on the bits of i. If the limit in the outer loop had been simply 2 ** len(set_), the code would generate every possible such partitioning. But since that limit was divided by two, only half of the possible partitions get generated - I couldn't guess what the point of that might be, without more context.
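A self-contained version of the snippet, with a three-letter set so the output is easy to eyeball:

set_ = ["a", "b", "c"]

for i in range(2 ** len(set_) // 2):       # 4 iterations instead of 8
    parts = [set(), set()]
    for item in set_:
        parts[i & 1].add(item)             # the low bit of i decides which side this item joins
        i >>= 1                            # bring the next bit down for the next item
    print(parts)

One plausible reason for the halving: flipping every bit of i merely swaps the two sides of a split, so restricting i to the lower half of the range produces each unordered partition once instead of twice.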
I've never seen [set(), set()]
This isn't anything interesting, just a list with two new sets in it. So you have seen it, because it's not new syntax. Just a list and constructors.
parts[i&1]
This tests the least significant bit of i and selects either parts[0] (if the lsb was 0) or parts[1] (if the lsb was 1). Nothing fancy like slicing, just plain old indexing into a list. The thing you get out is a set, .add(item) does the obvious thing: adds something to whichever set was selected.
but why i>>=1? I don't know how to interpret that
Take the bits in i and move them one position to the right, dropping the old lsb and keeping the sign.
Of course in Python you have arbitrary-precision integers, so the value is however many bits it needs to be rather than a fixed width such as 8 bits.
For positive numbers, the part about copying the sign is irrelevant.
You can think of right shift by 1 as a flooring division by 2 (this is different from truncation, negative numbers are rounded towards negative infinity, eg -1 >> 1 = -1), but that interpretation is usually more complicated to reason about.
Anyway, the way it is used here is just a way to loop through the bits of i, testing them one by one from low to high, but instead of changing which bit it tests it moves the bit it wants to test into the same position every time.

Encoding name strings into an unique number

I have a large set of names (millions in number). Each of them has a first name, an optional middle name, and a last name. I need to encode these names into a number that uniquely represents each name. The encoding should be one-to-one, that is, a name should be associated with only one number, and a number should be associated with only one name.
What is a smart way of encoding this? I know it is easy to tag each letter of the name according to its position in the alphabet (a -> 1, b -> 2, and so on), so a name like Deepa would become 455161, but then I cannot tell whether the '16' is really 16 or a combination of 1 and 6.
So, I am looking for a smart way of encoding the names.
Furthermore, the encoding should be such that the number of digits in the output for any name is fixed, i.e., independent of the name's length. Is this possible?
Thanks
Abhishek S
To get the same width numbers, can't you just zero-pad on the left?
Some options:
1. Sort them. Count them. The 10th name is number 10.
2. Treat each character as a digit in a base 26 (case insensitive, no digits), base 52 (case significant, no digits), base 36 (case insensitive with digits) or base 62 (case significant with digits) number. Compute the value in an int. E.g., for a name of "abc", you'd have 0 * 26^2 + 1 * 26^1 + 2 * 26^0. Sometimes Chinese names may use digits to indicate tonality.
3. Use a "perfect hashing" scheme: http://en.wikipedia.org/wiki/Perfect_hash_function
4. This one's mostly suggested in fun: use Goedel numbering :). So "abc" would be 2^0 * 3^1 * 5^2 - it's a product of powers of primes. Factoring the number gives you back the characters. The numbers could get quite large though.
5. Convert to ASCII, if you aren't already using it. Then treat the ordinal of each character as a digit in a base-256 numbering system. So "abc" is 97*256^2 + 98*256^1 + 99*256^0.
If you need to be able to update your list of names and numbers from time to time, #2, #4 and #5 should work. #1 and #3 would have problems. #5 is probably the most future-proofed, though you may find you need unicode at some point.
I believe you could do unicode as a variant of #5, using powers of 2^32 instead of 2^8 == 256.
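Here is a small sketch of option 5 combined with the zero-padding suggestion above, in Python, where int.from_bytes and int.to_bytes do the base-256 arithmetic; the function names and the 30-digit width are arbitrary illustrative choices.

def name_to_number(name, width=30):
    n = int.from_bytes(name.encode("utf-8"), "big")   # each byte is a base-256 digit
    return str(n).zfill(width)                        # zero-pad on the left for a fixed width

def number_to_name(number):
    n = int(number)
    return n.to_bytes((n.bit_length() + 7) // 8, "big").decode("utf-8")

code = name_to_number("Deepa")
print(code)                   # 30 digits, most of them leading zeros
print(number_to_name(code))   # Deepa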
What you are trying to do there is actually hashing (at least if you have a fixed number of digits). There are some good hashing algorithms with few collisions. Try out sha1 for example; it is well tested and available in modern languages (see http://en.wikipedia.org/wiki/Sha1) - it seems to be good enough for git, so it might work for you.
There is of course a small possibility of identical hash values for two different names, but that's always the case with hashing and can be taken care of. With sha1 and such you won't have any obvious connection between names and IDs, which can be a good or a bad thing, depending on your problem.
If you really want unique ids for sure, you will need to do something like NealB suggested, create IDs yourself and connect names and IDs in a Database (you could create them randomly and check for collisions or increment them, starting at 0000000000001 or so).
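As a sketch of that hashing route in Python (hash_full_name and the 20-digit width are made-up choices; truncating the hash raises the collision probability slightly, which is the usual hashing trade-off noted above):

import hashlib

def hash_full_name(first, middle, last, digits=20):
    full = "|".join([first, middle or "", last])           # separator keeps the name parts distinct
    h = hashlib.sha1(full.encode("utf-8")).hexdigest()     # 160-bit hash as hex
    return str(int(h, 16) % 10 ** digits).zfill(digits)    # fixed number of decimal digits

print(hash_full_name("Abhishek", None, "S"))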
(improved answer after giving it some thought and reading the first comments)
You can use BigInteger to encode arbitrary strings like this:
BigInteger bi = new BigInteger("some string".getBytes());
And for getting the string back use:
String str = new String(bi.toByteArray());
I've been looking for a solution to a problem very similar to the one you proposed and this is what I came up with:
def hash_string(value):
    score = 0
    depth = 1
    for char in value:
        score += ord(char) * depth
        depth /= 256.
    return score
If you are unfamiliar with Python, here's what it does.
The score is initially 0 and the depth is set to 1.
For every character, add the ord value * the depth.
The ord function returns the character's code point (0-255 for ASCII and Latin-1 characters).
Then it's multiplied by the 'depth'.
Finally the depth is divided by 256.
Essentially, the way it works is that the initial characters add more to the score while later characters contribute less and less. If you need an integer, multiply the end score by 2**64. Otherwise you will have a decimal value between 0 and 256. This encoding scheme also works for binary data, since there are only 256 possible values in a byte/char.
This method works great for smaller string values, however, for longer strings you will notice that the decimal value requires more precision than a regular double (64-bit) can provide. In Java, you can use the 'BigDecimal' and in Python use the 'decimal' module for added precision. A bonus to using this method is that the values returned are in sorted order so they can be searched 'efficiently'.
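For example, assuming plain ASCII names, the sorted-order property can be checked directly:

names = ["Abhishek", "Deepa", "Dee", "Zoe"]
print(sorted(names) == sorted(names, key=hash_string))   # True: the scores order like the strings
print(hash_string("Deepa"))                              # about 68.396, dominated by the first letter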
Take a look at https://en.wikipedia.org/wiki/Huffman_coding. That is the standard approach.
You can translate it if every character (plus blank, at least) occupies a position.
Then ABC, which is 1, 2, 3, is translated to
1*(2*26+1)² + 2*(53) + 3
This way you could encode arbitrary strings, but if the length of the input isn't limited (and why should it be?), you aren't guaranteed an upper limit on the length of the output.
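A sketch of this positional scheme, under the assumption that blank maps to 0, A-Z to 1-26 and a-z to 27-52 (53 symbols in total):

import string

ALPHABET = " " + string.ascii_uppercase + string.ascii_lowercase   # 53 symbols, blank first

def encode_base53(name):
    n = 0
    for ch in name:
        n = n * 53 + ALPHABET.index(ch)   # each position is a base-53 digit
    return n

print(encode_base53("ABC"))   # 1*53**2 + 2*53 + 3 = 2918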

scale 14 bit word to an 8 bit word

I'm working on a project where I sample a signal with an ADC that represents values as 14-bit words. I need to scale the values to 8-bit words. What's a good way to go about this in general? By the way, I'm using an FPGA, so I'd like to do it in "hardware" rather than as a software solution. Also, in case you're wondering, the chain of events will be: sample the analog signal, represent the sample value with a 14-bit word, scale the 14-bit word to an 8-bit word, transmit the 8-bit word over UART to the PC's COM1.
I've never done this before. I was assuming you use quantization levels, but I'm not sure what an efficient circuit for this operation would be. Any help would be appreciated.
Thanks
You just need an add and a shift:
val_8 = (val_14 + 32) >> 6;
(The + 32 is necessary to get correct rounding - you can omit it but you will get more truncation noise in your signal if you do.)
I think you just drop the six lowest resolution bits and call it good, right? But I might not fully understand the problem statement.
Paul's algorithm is correct, but you'll need some bounds checking.
assign val_8 = (&val_14[13:5]) ? //Make sure your sum won't overflow
               8'hFF :           //Assign all 1's if it will
               val_14[13:6] + val_14[5];
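A quick Python sanity check of the add/shift answer combined with the saturation check (scale_14_to_8 is just an illustrative name):

def scale_14_to_8(val_14):
    rounded = (val_14 + 32) >> 6      # add half of 2^6 so the truncation becomes rounding
    return min(rounded, 255)          # saturate: inputs near 16383 would otherwise hit 256

print(scale_14_to_8(0))        # 0
print(scale_14_to_8(8191))     # 128
print(scale_14_to_8(16383))    # 255 (clamped)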

Why do prevailing programming languages like C use array starting from 0? [duplicate]

Possible Duplicate:
Why does the indexing start with zero in 'C'?
Why do prevailing programming languages like C use arrays starting from 0? I know some programming languages like PASCAL have arrays starting from 1. Are there any good reasons for doing so? Or is it merely a historical reason?
Because you access array elements by offset relative to the beginning of the array.
First element is at offset 0.
Later, more complex array data structures appeared (such as SAFEARRAY) that allowed an arbitrary lower bound.
In C, the name of an array is essentially a pointer, a reference to a memory location, and so the expression array[n] refers to a memory location n elements away from the starting element. This means that the index is used as an offset. The first element of the array is exactly the memory location that the array name refers to (0 elements away), so it is denoted array[0]. Most programming languages have been designed this way, so indexing from 0 is pretty much inherent to the language.
However, Dijkstra explains why we should index from 0. The problem is how to denote a subsequence of natural numbers, say for example 1, 2, 3, ..., 10. We have four conventions available:
a. 0 < i < 11
b. 1<= i < 11
c. 0 < i <= 10
d. 1 <= i <= 10
Dijkstra argues that the proper notation should be able to denote naturally the two following cases:
The subsequence includes the smallest natural number, 0
The subsequence is empty
Requirement 1. leaves out a. and c. since they would have the form -1 < i which uses a number not lying in the natural number set (Dijkstra says this is ugly). So we are left with b. and d. Now requirement 2. leaves out d. since for a set including 0 that is shrunk to the empty one, d. takes the form 0 <= i <= -1, which is a little messed up! Subtracting the ranges in b. we also get the sequence length, which is another plus. Hence we are left with b. which is by far the most widely used notation in programming now.
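Python's built-in range follows convention b., which makes both requirements easy to see:

print(list(range(0, 4)))    # [0, 1, 2, 3]  -- includes the smallest natural number, 0
print(list(range(4, 4)))    # []            -- the empty sequence needs no awkward bounds
print(len(range(1, 11)))    # 10            -- the length is simply upper minus lower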
Now you know. So, remember and take pride in the fact that each time you write something like
for( i=0; i<N; i++ ) {
    sum += a[i];
}
you are not just following the rules of language notation. You are also promoting mathematical beauty!
In assembly and C, arrays were implemented as memory pointers, with the first element stored at offset 0 from the pointer.
In C, arrays are tied to pointers. The array index is a number that you add to the pointer to the array's initial element. This is tied to one of the addressing modes of the PDP-11, where you could specify a base address and place an offset to it in a register to simulate an array. By the way, this is also where ++ and -- came from: the PDP-11 provided so-called auto-increment and auto-decrement addressing modes.
P.S. I think Pascal used 1 by default; generally, you were allowed to specify the range of your array explicitly, so you could start it at -10 and end at +20 if you wanted.
Suppose you can store only two bits. That gives you four combinations:
00 01 10 11
Now, assign integers to those 4 values. Two reasonable mappings are:
00->0
01->1
10->2
11->3
and
11->-2
10->-1
00->0
01->1
(Another idea is to use signed magnitude and use the mapping:
11->-1 10->-0 00->+0 01->+1)
It simply does not make sense to use 00 to represent 1 and use 11 to represent 4. Counting from 0 is natural. Counting from 1 is not.
