What are the left and right shift operators and how do they work? - c#-4.0

Hello everybody! I am a C# beginner. Can anyone tell me what the left and right shift operators do, and how they work with respect to the following program? I read about it somewhere but am confused.
Thanks.
using System;

class clc
{
    public static void Main() // the Main method
    {
        int x = 7, y = 2, z, r;
        z = x << y; // left shift operator
        r = x >> y; // right shift operator
        Console.WriteLine("\n z={0}\tr={1}", z, r); // format indices must be {0} and {1} to match the two arguments
    }
}

To understand the shift operations you must understand binary numbers.
Let's take your example for the left shift:
z = 7 << 2;
The 32-bit integer 7 is 0000 0000 0000 0000 0000 0000 0000 0111 in binary. A left shift moves every bit toward the most significant (left) end; bits shifted out of either end are discarded, and the vacated positions on the right are filled with zeros.
Shifting by 1 gives 0000 0000 0000 0000 0000 0000 0000 1110, which is 14.
Shifting by 1 one more time gives 0000 0000 0000 0000 0000 0000 0001 1100, which is 28 in integer representation. In general, x << n multiplies x by 2^n.
The right shift works the same way in the other direction: 7 >> 2 turns 0111 into 0001, so r is 1. For non-negative values, x >> n divides x by 2^n, discarding the remainder.
Read this good Wikipedia article: Binary number.
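A quick way to see this for yourself, sketched in Python (the << and >> operators behave the same way in C# for these values):

x, y = 7, 2
z = x << y  # 0b0111 -> 0b11100, i.e. 7 * 2**2
r = x >> y  # 0b0111 -> 0b0001,  i.e. 7 // 2**2
print(f"z={z} ({z:#b})")  # z=28 (0b11100)
print(f"r={r} ({r:#b})")  # r=1 (0b1)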

Related

jumping indices of values in buffer

I am currently testing some things with an accelerometer and its iio buffer and there is something that confuses me.
The sensor does have four different scan elements: x, y, z and a timestamp.
The indices of those values are:
x = 0, y = 1, z = 2 and time = 3. So far so good.
If I enable all available scan elements the order of the entries is set according to the description.
everything enabled:
0000010 f758 011c 3f64 c0b0 be90 0bfe 499f 0004
0000020 f724 0134 3f58 c0b0 3f2f 10ab 499f 0004
But once I have gaps, for example if I disable the scan element for y, the z value jumps to index 1 and my buffer looks like this:
x, z and time:
0000010 f720 3f70 0000 0000 722a 5c13 4946 0004
0000020 f728 3f74 0000 0000 0958 60c0 4946 0004
z and time:
0000010 3f6c 0000 0000 0000 ca0b 6ef1 48be 0004
0000020 3f44 0000 0000 0000 edf7 739e 48be 0004
only x and z:
0000010 f720 3f48 f748 3f54 f744 3f5c f75c 3f68
0000020 f750 3f78 f738 3f80 f718 3f64 f700 3f50
I could not find further information on this, but I am a bit confused and surprised that the scan elements do not respect their given index once the timestamp is activated and there is an index gap. Is this the normal behavior, or is the current sensor driver mixing something up?
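The dumps above are consistent with a layout in which the enabled channels are packed in index order with no gaps, padded only where a field needs alignment (the 64-bit timestamp sits on an 8-byte boundary, hence the zero words in front of it). Assuming that layout, one record of the "x, z and time" case could be decoded as follows, sketched in Python; the field sizes (s16 samples, s64 timestamp) and the packing rule are assumptions, not something confirmed here:

import struct

# Hypothetical record layout for the "x, z and time" case:
#   <    little-endian
#   h h  x and z as signed 16-bit samples, packed in index order
#   4x   padding so the s64 timestamp lands on an 8-byte boundary
#   q    signed 64-bit timestamp
record_fmt = "<hh4xq"
print(struct.calcsize(record_fmt))  # 16 bytes per record, matching the 16-byte dump rows

def parse_records(buf):
    # Yield (x, z, timestamp) tuples from a raw buffer of whole records.
    for x, z, ts in struct.iter_unpack(record_fmt, buf):
        yield x, z, ts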

bit representation in python

Hi, I have a question about bit representation in Python.
When I use the bit operation 1 << 31, the bits are
1000 0000 0000 0000 0000 0000 0000 0000
and Python will print this value as 2147483648.
But when I give a variable a value like a = -2**31, the bits are also
1000 0000 0000 0000 0000 0000 0000 0000
yet Python will print -2147483648.
So if the bits are the same, how does Python decide whether to use 2147483648 or -2147483648?
In Python, integers do not have limited precision, which means, among other things, that numbers are not stored in two's complement binary. The sign is NOT stored in the bit representation of the number.
So -2**31, 2**31 and 1<<31 all have the same bit representation for the magnitude of the number. The sign of -2**31 is not part of the bitwise representation; the sign is stored separately.
You can see this if you try this:
>>> bin(5)
'0b101'
>>> bin(-5)
'-0b101'
The representation isn't really the same. You can use int.to_bytes to check it:
(1 << 31).to_bytes(32, 'big', signed=True)
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\x00\x00\x00'
(-2 ** 31).to_bytes(32, 'big', signed=True)
b'\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\xff\x80\x00\x00\x00'
Also, be careful with the - operator, which has lower precedence than ** here:
-2 ** 31 == -(2 ** 31)
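If you want the fixed-width two's-complement bit pattern that a 32-bit language would store, you can force one by masking; a minimal sketch, assuming a 32-bit width:

MASK32 = 0xFFFFFFFF  # keep only the low 32 bits

print(bin((1 << 31) & MASK32))  # 0b10000000000000000000000000000000
print(bin((-2**31) & MASK32))   # the same pattern: 0b1 followed by 31 zeros
print((-2**31) & MASK32)        # 2147483648: the masked value reads as unsigned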

How to specify 3GB using bit shift in python?

I am allocating memory on the Jetson TX2. It has 8GB of RAM.
I need to specify the maximum GPU memory size available for TensorRT.
max_workspace_size_bytes = (has to be an integer)
I have seen some examples using these "values":
1<<20 = 1048576 (decimal)
= 0001 0000 0000 0000 0000 0000
1<<30 = 1073741824
= 0100 0000 0000 0000 0000 0000 0000 0000
But if I have 8GB of RAM, how can "1048576" or "1073741824" represent a part of RAM?
I have used this to allocate 3GB:
3*(10**9)
But I would like to understand the other way of representing a number.
Perhaps you're having a "GB vs GiB" problem. Usually, "3 gigs" of RAM refers to 3,221,225,472 bytes rather than 3,000,000,000.
The first value is 3 * (2^10) * (2^10) * (2^10), a nice 11 (binary for 3) followed by 30 zeros in binary representation, while the second is 3 * (10^3) * (10^3) * (10^3), which is a mess in binary.
This convention of using powers of 2 instead of powers of 10 is the reason why you'll see people writing 3 GiB as 3 << 30:
3 << 30 == 3 * (1 << 10) * (1 << 10) * (1 << 10)
== 3 * (2**10 * 2**10 * 2**10)
== 3 * (2**30)
There's a related question and a good Wikipedia article about this issue if you want to learn more.
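A quick check of both conventions in Python:

print(3 * 2**30)       # 3221225472 -- 3 GiB, the powers-of-two convention
print(3 * 10**9)       # 3000000000 -- 3 GB, the powers-of-ten convention
print(bin(3 * 2**30))  # 0b11 followed by 30 zeros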
You can sum them up.
((1<<30)+(1<<31))
Or bitwise OR them.
((1<<30) | (1<<31))
Or shift a larger value than 1, e.g. 3.
(3<<30)
3 GiB = 3,221,225,472
1100 0000 0000 0000 0000 0000 0000 0000
3<<30 = 3 GiB
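All three spellings produce the same number, which you can verify directly:

three_gib = 3 * 2**30
assert (1 << 30) + (1 << 31) == three_gib  # 1 GiB + 2 GiB
assert (1 << 30) | (1 << 31) == three_gib  # OR works too, since the bits don't overlap
assert 3 << 30 == three_gib
print(three_gib)  # 3221225472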

n-th bit of the binary representation

I have a problem that asks
"find the K-th bit of the binary representation of an integer N"
where 0 <= K <= 31.
The answer states that when N=1 and K=0, the K-th bit is 1,
and also that when N=2 and K=1, the K-th bit is 1 as well. How is this so?
Bits are numbered from the least significant (rightmost) end, starting at K = 0. The binary representation of 1 is 0000 0000 0000 0001, so bit 0 is 1; the binary representation of 2 is 0000 0000 0000 0010, so bit 1 is 1.
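A minimal sketch of extracting bit K in Python, shifting N right so the bit of interest lands in position 0:

def kth_bit(n: int, k: int) -> int:
    return (n >> k) & 1  # move bit k to position 0, then mask off the rest

print(kth_bit(1, 0))  # 1
print(kth_bit(2, 1))  # 1
print(kth_bit(2, 0))  # 0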

Memory Mapping Large File Haskell

I am experimenting with the Haskell mmap package and I am quite new to Haskell, so I am trying to get started by writing a little program to write a small amount of data to a memory mapped file.
This code correctly creates the file and sets its size, but it doesn't seem to flush the data from the vector to the memory-mapped file; I verified this using hexdump - it's just all 0s.
What is going wrong?
import Control.Monad
import Data.Vector.Storable
import Foreign.Marshal.Array
import System.Directory
import System.IO
import System.IO.MMap

createFile :: FilePath -> Integer -> IO ()
createFile path size = do
    h <- openBinaryFile path WriteMode
    hSetFileSize h size
    hClose h -- release the handle before the file is mmapped

n = 10
size = 10 * 8
path = "test.dat" :: FilePath

main :: IO ()
main = do
    createFile "signal.ml" size
    let v = generate n (\i -> i) :: Vector Int
    putStrLn $ show v
    (ptr, s, _, _) <- mmapFilePtr path ReadWrite Nothing
    unsafeWith v (\srcPtr -> copyArray ptr srcPtr n)
    munmapFilePtr ptr s
Many thanks.
Looks like a typo. If I replace this:
createFile "signal.ml" size
with this:
createFile path size
I get the correct result:
$ xxd test.dat
0000000: 0000 0000 0000 0000 0100 0000 0000 0000 ................
0000010: 0200 0000 0000 0000 0300 0000 0000 0000 ................
0000020: 0400 0000 0000 0000 0500 0000 0000 0000 ................
0000030: 0600 0000 0000 0000 0700 0000 0000 0000 ................
0000040: 0800 0000 0000 0000 0900 0000 0000 0000 ................
