How to make Vim handle 64-bit int numbers? - vim

In Vim, :h Number says:
Number A 32 or 64 bit signed number.
For reference, 2^31 and 2^63 (one past the signed 32/64-bit maximums of 2^31 - 1 and 2^63 - 1) are:
2^31 = 2147483648
2^63 = 9223372036854775808
Now if I call str2nr() in Vim:
:echo str2nr('2147483648')
-> -2147483648
:echo str2nr('112147483648')
-> 478333952
Or evaluate the expression directly:
:echo 2147483648 + 7777
-> -2147475871
So the question is: the Vim docs say a Number is a 32- or 64-bit signed number. How can I make Vim handle 64-bit Numbers, so that 2147483648 + 7777 = 2147491425?
P.S. The above test was done on:
Archlinux 64bit
Vim 7.4.135 (from Arch 64bit repo)

According to this vim_dev thread, https://groups.google.com/forum/#!topic/vim_dev/2pXdRG7yW0Y, you can't.
Bram says that a Number is an int, so if your platform declares int as having 64 bits you will have a 64-bit Number in Vim. However, your compiler chose to make int 32 bits, so you only get 32 bits for a Number.
In :h limits I find the following.
Range of a Number variable: -2147483648 to 2147483647 (might be more on 64
bit systems)
The "might be more" is telling you that the limits could be larger on a 64-bit platform, but they might just as well stay the same; it is entirely up to the compiler.
I think the only ways to change this would be to go through all of the source code and change how Numbers are represented internally, or to find a compiler that represents int as 64 bits (I don't know of one offhand).
If you want to find out how many bits your compiler uses for an int, you can run the following code.
#include "limits.h"
#include "stdio.h"
int main() {
printf("sizeof int in bits %lu\n", sizeof(int) * CHAR_BIT);
}
On my 64-bit machine this prints (I suspect your machine will do the same):
sizeof int in bits 32
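Incidentally, the same wraparound is easy to reproduce in C. Here is a minimal sketch of mine (not from the thread); the out-of-range conversion is implementation-defined, though gcc on x86 wraps as shown:
#include <stdio.h>

int main(void) {
    /* 2147483648 does not fit in a 32-bit signed int; converting it is
     * implementation-defined, and gcc on x86 wraps it to INT_MIN. */
    int n = (int)2147483648u;
    printf("%d\n", n);        /* -2147483648, like str2nr('2147483648') */
    printf("%d\n", n + 7777); /* -2147475871, like :echo 2147483648 + 7777 */
    return 0;
}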

Related

Why is the size of Option<f64> 16 bytes on 64bit Linux?

I tried this on 64bit Linux and it gives 16:
println!("Results: {}", mem::size_of::<Option<f64>>())
As I understand it, this is because of:
pub struct Discriminant<T>(u64, PhantomData<fn() -> T>);
What is the sense in a 64-bit discriminant? For code written by hand, 256 variants would be enough; for generated code, 2^16 would already be a huge number, and I cannot imagine why anyone would need 2^32. Why does it use 64 bits for this?
Why does the compiler not optimize it for the Option case? 8 bits at the end of the structure should be enough.
f64 has a 64-bit alignment, so a type containing an f64 needs an alignment of at least 64 bits. The size of a type is guaranteed to be a multiple of its alignment. Since Option<f64> cannot fit in 64 bits (it also needs room for the discriminant), its size goes up to the next multiple: 128 bits, i.e. 16 bytes.
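The same size-is-a-multiple-of-alignment rule is visible in plain C. A minimal sketch, assuming a typical 64-bit Linux target (the struct below is a stand-in of mine for Option<f64>'s tag-plus-payload layout, not anything from the question):
#include <stdio.h>
#include <stdalign.h>

/* One byte of tag plus a double: the double forces 8-byte alignment,
 * so the 9 bytes of payload are padded up to the next multiple, 16. */
struct tagged_f64 {
    unsigned char tag;
    double value;
};

int main(void) {
    printf("size = %zu, alignment = %zu\n",
           sizeof(struct tagged_f64), alignof(struct tagged_f64));
    /* On x86-64 Linux: size = 16, alignment = 8 */
    return 0;
}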

Calculating percentage in C in Windows

I have the following lines of code to calculate the percentage of count/total.
In Microsoft Visual C++ 2005
printf("Count = %lu (%.2lf%%)\n", count, (double)count/(double)total*100.0);
In gcc
printf("Count = %lu (%.2lf\%)\n", count, (double)count/(double)total*100.0);
Both "count" and "total" are unsigned 64-bit integers. For some reason, I'm able to get the correct value in gcc, but always get 0.00% in Visual C++.
Why can't I get the correct value in Windows?
%lu prints an unsigned long int. In Visual C++, that's a 32-bit integer, not a 64-bit integer. You said that "count" is a 64-bit integer, so it is not a "long int". (You must have declared it as __int64 count; or long long int count;, right? How else would you declare a 64-bit integer variable in Visual C++ 2005?)
Since printf has no idea what arguments you've passed or how big they are, it relies on the format string to tell it how many bytes of data to pull off the stack for each argument. By writing %lu, you've told it to remove 4 bytes (32 bits) of data for "count", but you passed 8 bytes (64 bits) of data when you passed the 64-bit integer variable "count" to printf. That leaves 4 bytes still sitting on the stack. Then when printf goes to remove the 8 bytes (64 bits) of "double" for the percentage, it first ends up with 4 bytes leftover from "count". Since Intel x86 CPUs are little-endian, those bytes are almost certainly 00000000.
Short answer: write %I64u or %llu to print an unsigned 64-bit value in Visual C++.
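If portability matters more than the exact MSVC spelling, the fixed-width macros from <inttypes.h> avoid the guesswork entirely (MSVC only shipped this header starting with Visual C++ 2013, so %I64u remains the fallback on older versions). A sketch with made-up values:
#include <stdio.h>
#include <inttypes.h>

int main(void) {
    uint64_t count = 2500, total = 10000; /* hypothetical values */
    /* PRIu64 expands to the correct conversion for a 64-bit unsigned
     * integer on each compiler, so printf consumes exactly 8 bytes. */
    printf("Count = %" PRIu64 " (%.2f%%)\n",
           count, (double)count / (double)total * 100.0);
    return 0;
}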

Weird syscall numbers on Linux 32 bits

The story
I have a C program which automatically generates a list of syscall numbers, as I prefer automated generation from a real-world reference over hand-written files when applicable. The target is an Ada package. I ran a test with the classic “Hello world” involving the common write syscall… it failed. I figured out the syscall number was wrong: 64 instead of 4.
I generated the list from a C program including <asm-generic/unistd.h>. The platform is 32-bit, and no toolchain targeting a 64-bit platform was ever installed.
Example definitions from this unistd.h: #define __NR_write 64 (should be 4), #define __NR_read 63 (should be 3), #define __NR_getuid 174 (should be 24), and so on…
I ran a text search through all files under /usr/** for occurrences of __NR_write that would be part of the expected definition, and found none.
The question
Why does this header specify weird syscall numbers? Why are the expected definitions found nowhere? Is this a new ABI?
Note: the platform is Ubuntu 12.04, 32-bit.
Update
I figured something out by running this command:
find /usr/include/ -name "unistd*" -exec nano '{}' \;
It shows that the header /usr/include/i386-linux-gnu/asm/unistd_32.h contains the correct numbers, and that this header is included from /usr/include/i386-linux-gnu/asm/unistd.h, but many of the symbols are not defined when <asm/unistd.h> is included.
Update 2
Not only do the numbers differ, but many names do too, e.g. __NR_socket vs. __NR_socketcall. The start of an explanation may be given in the possible duplicate: arch/x86/include/asm/unistd.h vs. include/asm-generic/unistd.h.
If you start with /usr/include/sys/syscall.h (as indicated in syscall(2)) and repeatedly follow the include directives you arrive at /usr/include/asm/unistd_32.h. Hence I recommend you use this header.
From the source of asm-generic/unistd.h:
/*
 * This file contains the system call numbers, based on the
 * layout of the x86-64 architecture, which embeds the
                 ^^^^^^
 * pointer to the syscall in the table.
 *
 ...
 */
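As a sanity check, generating the list from <sys/syscall.h> instead yields the per-architecture numbers. A small sketch of mine (the values in the comments assume 32-bit x86):
#include <stdio.h>
#include <sys/syscall.h> /* indirectly pulls in asm/unistd_32.h on i386 */

int main(void) {
    printf("write = %d\n", SYS_write); /* 4 on i386, not asm-generic's 64 */
    printf("read  = %d\n", SYS_read);  /* 3 on i386, not asm-generic's 63 */
    return 0;
}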

vmalloc_to_pfn returns a 32-bit address on 32-bit Linux. Why does it chop off the higher bits of a PAE physical address?

I'm using vmalloc_to_pfn() to get the physical address on a 32-bit PAE Linux system. vmalloc_to_pfn() returns an "unsigned long", which means it is 32 bits on a 32-bit system and 64 bits on a 64-bit system. On 64-bit Linux, unsigned long is 64 bits and I have no issues.
Problem:
Using this function to convert virtual to physical:
VA: 0xf8ab87fc
PA using vmalloc_to_pfn: 0x36f7f7fc, but I'm actually expecting 0x136f7f7fc.
The physical address falls between 4 and 5 GB, but I can only get the chopped-off 32-bit address. Is there another way to get the true physical address?
I am studying this myself and am on 32-bit, so this is not exactly an answer. But digging through the same stuff, I can see the source for vmalloc_to_pfn says:
/*
 * Map a vmalloc()-space virtual address to the physical page frame number.
 */
unsigned long vmalloc_to_pfn(const void *vmalloc_addr)
{
        return page_to_pfn(vmalloc_to_page(vmalloc_addr));
}
EXPORT_SYMBOL(vmalloc_to_pfn);
So, it should not actually return an address - it should return a "page frame number" (PFN). In relation to that:
http://www.tldp.org/LDP/tlk/mm/memory.html
Using the above example again, process Y's virtual page frame number 1 is mapped to physical page frame number 4 which starts at 0x8000 (4 x 0x2000). Adding in the 0x194 byte offset gives us a final physical address of 0x8194.
So apparently one should multiply the PFN by PAGE_SIZE to get an actual address, which makes it strange that you got "returns 32 bit address on Linux 32 system" to work at all (then again, I'm no expert; maybe a PFN is equivalent to an address on your 32-bit setup?). A minimal working example of a module in the question, with output on both platforms for comparison, would have been in order.
In any case, I have just noticed the same thing you have: Physical Address Extension (PAE) may make a difference in paging. Apparently, the value stored as a PFN in the Page Global Directory (PGD) is architecture-specific and is defined differently depending on it:
typedef unsigned long pgdval_t; // arch/x86/include/asm/pgtable-2level_types.h
typedef u64 pgdval_t; // arch/x86/include/asm/pgtable-3level_types.h
typedef unsigned long pgdval_t; // arch/x86/include/asm/pgtable_64_types.h
To summarize: just using vmalloc_to_pfn() is probably not the whole story in getting the physical address.
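If the goal is the full PAE physical address, one approach is to widen the PFN to 64 bits before shifting, so the upper bits are not truncated by 32-bit unsigned long arithmetic. A kernel-module sketch of mine (vaddr_to_phys is a hypothetical helper name, not a kernel API):
#include <linux/mm.h>      /* page_to_pfn(), PAGE_SHIFT, PAGE_MASK */
#include <linux/vmalloc.h> /* vmalloc_to_pfn() */

static u64 vaddr_to_phys(const void *vaddr)
{
        u64 pfn = vmalloc_to_pfn(vaddr);

        /* Widen to 64 bits *before* shifting: a PFN for a page between
         * 4 and 5 GB still fits in 32 bits, but (pfn << PAGE_SHIFT)
         * computed in unsigned long would drop the PAE bits. */
        return (pfn << PAGE_SHIFT) | ((unsigned long)vaddr & ~PAGE_MASK);
}
For the example above, PFN 0x136f7f with offset 0x7fc would then yield 0x136f7f7fc instead of the truncated 0x36f7f7fc.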

printf ptr: can the leading 0x be eliminated?

On Linux, printf renders %p arguments as hex digits with a leading 0x. Is there a way to make it not print the 0x? (It needs to work on both 32-bit and 64-bit.)
You can use the format specifier for uintptr_t from <inttypes.h>:
#include <inttypes.h>
[...]
printf("%"PRIxPTR"\n", (uintptr_t) p);
This works like %x for the uintptr_t type, which is an integer type capable of roundtrip conversion from/to any pointer type.
Use %llx; it will work on 64-bit for sure. Tried and tested.
Use %lx or %08lx. It works on both 32- and 64-bit Linux with gcc, because there long int is always the same width as void *. It doesn't work with MSVC, because long int is always 32 bits in MSVC.
If you want it to work on all compilers, you can use %llx and cast your pointer to unsigned long long int; it's not efficient on 32-bit, though.
If you want efficiency as well, define a different macro for the different cases, as in the sketch below.
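A minimal sketch of that macro approach, assuming C99 <stdint.h> is available (PTR_FMT and ptr_print_t are hypothetical names of mine):
#include <stdio.h>
#include <stdint.h>

/* Pick a format wide enough for a pointer on this target: %llx where
 * pointers exceed 32 bits (covers LLP64 Windows too), %lx otherwise. */
#if UINTPTR_MAX > 0xFFFFFFFFUL
#  define PTR_FMT "%llx"
typedef unsigned long long ptr_print_t;
#else
#  define PTR_FMT "%lx"
typedef unsigned long ptr_print_t;
#endif

int main(void) {
    int x = 0;
    /* Prints the address in hex with no leading 0x on 32- and 64-bit. */
    printf(PTR_FMT "\n", (ptr_print_t)(uintptr_t)&x);
    return 0;
}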
