When should I use VARINT instead of BIGINT?
What are the limitations of each, and how much space does VARINT use?
As per the documentation, bigint is a 64-bit signed integer, whereas varint is an arbitrary-precision integer, corresponding to the BigInteger type in Java.
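For illustration, a hypothetical CQL table using both types (the table and column names are made up):

CREATE TABLE example (
    id bigint PRIMARY KEY,   -- fixed 8 bytes, range -2^63 .. 2^63 - 1
    huge_value varint        -- arbitrary precision; storage grows with the value
);

So bigint is the cheaper choice whenever you know the values fit in 64 bits; varint trades fixed size for unbounded range.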
There is a big hex value:
var Hex = "ad6eb61316ff805e9c94667ab04aa45aa3203eef71ba8c12afb353a5c7f11657e43f5ce4483d4e6eca46af6b3bde4981499014730d3b233420bf3ecd3287a2768da8bd401f0abd7a5a137d700f0c9d0574ef7ba91328e9a6b055820d03c98d56943139075d";
How can I convert it to a big integer in Node.js? I tried searching, but all I found was
var integer = parseInt(Hex, 16);
which doesn't work for a big hex value. The result is
1.1564501846672726e+243
How can I get a normal big integer back? I want to use this value as the modulus for RSA encryption. Actually, I don't know whether I have to convert it or not.
You need precise integers to do the modular arithmetic for RSA, but the largest integer JavaScript can represent without losing precision is 9007199254740991; you cannot represent a larger integer as a Number. You would need to devise a way to do modular arithmetic over chunks of the large integer, or simply use one of the available libraries, such as the big-number arithmetic in JSBN, which also provides a full implementation of RSA including PKCS#1 v1.5 padding.
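If a modern runtime is acceptable, native BigInt (Node.js 10.4+) handles this without a library; a minimal sketch, where the modPow helper is my own illustration rather than part of any package:

const Hex = "ad6eb61316ff805e9c94667ab04aa45aa3203eef71ba8c12afb353a5c7f11657e43f5ce4483d4e6eca46af6b3bde4981499014730d3b233420bf3ecd3287a2768da8bd401f0abd7a5a137d700f0c9d0574ef7ba91328e9a6b055820d03c98d56943139075d";
const n = BigInt("0x" + Hex);  // exact arbitrary-precision integer
console.log(n.toString());     // full decimal expansion, no rounding

// Square-and-multiply modular exponentiation, the core RSA operation:
function modPow(base, exp, mod) {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
        if (exp & 1n) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1n;
    }
    return result;
}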
column_with_very_big_numerics is numeric(30,0)
SELECT
column_with_very_big_numerics
FROM some_table
I am using the pg Node.js client. I would like the result to come back as a numeric, because this number is so large that I can't use int, e.g.
SELECT
column_with_very_big_numerics::int
FROM some_table
The query above throws an error because the value is out of range. Is there any way to get around this? What is the largest numeric that PostgreSQL and the pg Node.js client can return?
If you want column_with_very_big_numerics to come to you as NUMERIC, don't cast it to int.
You've written:
SELECT
column_with_very_big_numerics::int
FROM some_table;
Instead, just write:
SELECT
column_with_very_big_numerics
FROM some_table;
The largest integer (2147483647) and largest bigint (9223372036854775807) PostgreSQL can represent are given in the documentation. These limits don't apply to numeric, as it's a different type.
Your client library must understand how to deal with numeric; otherwise you'll probably just get the number back as a string. If your programming language doesn't support such numbers natively, you'll probably need an arbitrary-precision decimal library to work with them. E.g. in Java you'd use BigDecimal, a built-in type; in Python you'd use the decimal module (automatically supported by psycopg2, IIRC); etc. I don't use Node.js, which it sounds like you're using, so I can't help you with that.
If Node.js's pg driver doesn't cope with numeric then either (a) write a patch to fix that, or (b) change your schema to avoid numeric, using bigint and (where necessary for non-integers) fixed-point multipliers.
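For what it's worth, the node-postgres ("pg") driver already returns NUMERIC values as strings by default, precisely because they can exceed what a JavaScript Number can hold. A sketch that parses such a string, assuming a runtime with native BigInt (the table and column names are from the question):

const { Client } = require("pg");

async function main() {
    const client = new Client();  // connection settings taken from the environment
    await client.connect();
    const res = await client.query(
        "SELECT column_with_very_big_numerics FROM some_table"
    );
    // pg hands NUMERIC back as a string; numeric(30,0) has no fractional part,
    // so the value converts losslessly to a native BigInt:
    const big = BigInt(res.rows[0].column_with_very_big_numerics);
    console.log(big.toString());
    await client.end();
}

main().catch(console.error);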
I just discovered (the hard way, of course) that all of the integer datatypes I'm using in my XSD schemas are allowing 64-bit values to pass validation instead of 32-bit values. Yes, I know -- my bad for not diving deeply into the W3C specifications for datatypes and just assuming that INT would be 32-bit.
So is there an easy way (as in a DOCTYPE declaration, a namespace, etc.) to enforce a 32-bit limit on any "xs:" datatypes involving integers? Right now I'm going through and declaring my own derived datatypes with 32-bit min/max values and fgrep'ing the XSD files, but it would be rather nice if there were an easy (obvious) tweak I'm unaware of.
There is a built-in xs:int type, which is derived from xs:integer and has a signed 32-bit range.
3.3.17 int
[Definition:] int is ·derived· from long by setting the value of ·maxInclusive· to be 2147483647 and ·minInclusive· to be -2147483648. The ·base type· of int is long.
(and the base type of xs:long is xs:integer)
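So the fix is usually just replacing xs:integer (or xs:long) with xs:int in your declarations; a sketch, with a made-up element name:

<xs:element name="quantity" type="xs:int"/>
<!-- xs:int enforces -2147483648 .. 2147483647; xs:integer is unbounded -->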
I have to multiply two big numbers, saved as strings. Any hint how to do that?
Think back to grade school, and how you would solve the problem long-hand.
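For instance, a sketch of that long-hand method in JavaScript, assuming non-negative decimal integer strings:

// Grade-school long multiplication on decimal strings.
function multiply(a, b) {
    const digits = new Array(a.length + b.length).fill(0);
    for (let i = a.length - 1; i >= 0; i--) {
        for (let j = b.length - 1; j >= 0; j--) {
            // multiply one pair of digits, adding into the right position
            const sum = Number(a[i]) * Number(b[j]) + digits[i + j + 1];
            digits[i + j + 1] = sum % 10;
            digits[i + j] += Math.floor(sum / 10);
        }
    }
    // strip leading zeros, keeping a single "0" for a zero product
    return digits.join("").replace(/^0+(?=\d)/, "");
}

console.log(multiply("999", "999")); // "998001"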
It depends on the language and how large the numbers are. For example, in C you can convert a string to an int with atoi and then multiply, if the product will fit in a 32-bit int. If the numbers are too large for 32 bits you'll probably have to use a third-party BigInt library. Some languages (Python, Haskell) have built-in support for bigints, so you can multiply numbers of any size.
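JavaScript has since joined the built-in camp as well: with native BigInt (ES2020, Node.js 10.4+) the string-to-bigint conversion and the multiplication are one line; a sketch with made-up inputs:

const product = (BigInt("123456789012345678901234567890") * BigInt("98765432109876543210")).toString();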
We have an alphanumeric string (up to 32 characters) and we want to transform it into an integer (bigint). Now we're looking for an algorithm to do that. Collisions aren't bad (that's why we use a bigint, to reduce them somewhat); what matters is that the calculated integers are uniformly distributed over the bigint range and that the calculated integer is always the same for a given string.
This page has a few. You'll need to port them to 64-bit, but that should be trivial. A C# port of the SDBM hash is here. Another page of hash functions is here.
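As a sketch of what such a 64-bit hash looks like, here is FNV-1a (a different well-known function from the ones linked above) in JavaScript, using BigInt to emulate unsigned 64-bit arithmetic:

// 64-bit FNV-1a: XOR in each input byte, multiply by the FNV prime, wrap to 64 bits.
function fnv1a64(str) {
    const PRIME = 1099511628211n;        // 64-bit FNV prime
    const MASK = 0xffffffffffffffffn;    // keep only the low 64 bits
    let hash = 14695981039346656037n;    // 64-bit FNV offset basis
    for (const byte of Buffer.from(str, "utf8")) {
        hash ^= BigInt(byte);
        hash = (hash * PRIME) & MASK;
    }
    return hash; // unsigned 64-bit value as a BigInt
}

// Map into the signed range if the target column is a signed 64-bit bigint:
console.log(BigInt.asIntN(64, fnv1a64("PX38IEK")).toString());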
Most programming languages come with a built-in construct or a standard library call to do this. Without knowing the language, I don't think anyone can help you.
Yes, a "hash" should be the right description for my problem. I know, that there is CRC32, but it only provides an 32-bit int (in PHP) and this 32-bit integers are at least 10 characters long, so a huge range of integer number is unused!?
Mostly we have a short string like "PX38IEK" or a 36-character UUID like "24868d36-a150-11df-8882-d8d385ffc39c", so the strings are arbitrary, yes.
It doesn't have to be reversible (so collisions aren't bad). It also doesn't matter what int a string is converted to; my only wish is that the full bigint range is used as well as possible.