I know that Gforth stores characters as their codepoints on the stack, but the material I'm learning from doesn't show any word that helps to convert each character to its codepoint.
I also want to sum the codepoints of the string. What should I use to do that?
In Forth we distinguish between primitive characters (usually an octet, which covers ASCII) and extended characters (usually Unicode).
Any character is always represented on the stack as its code point, but how extended characters are represented in memory is implementation dependent.
See also Extended-Character word set:
Extended characters are stored in memory encoded as one or more primitive characters (pchars).
So to convert a character into a code point, it's enough to read this character from memory.
To read a primitive character, we use c@ ( c-addr -- char ):
: sum-codes ( c-addr u -- sum ) 0 -rot over + swap ?do i c@ + 1 chars +loop ;
\ test
"test passed" sum-codes .
NB: native string literals are supported in recent versions of Gforth. Before that, you need to use the word s", as in s" test passed".
To read an extended character, we can use xc@+ ( xc-addr1 -- xc-addr2 xchar ):
: sum-xcodes ( c-addr u -- sum )
over + >r 0 swap
begin ( sum xc-addr ) dup r@ u< while
xc#+ ( sum xc-addr2 xchar ) swap >r + r>
repeat drop rdrop
;
\ test
"test ⇦ ⇨ ⇧ ⇩" 2dup dump cr sum-xcodes . cr
dump shows that in Gforth extended characters are stored in memory in UTF-8 encoding.
The specification for RDF N-Triples states that string literals must be encoded.
https://www.w3.org/TR/n-triples/#grammar-production-STRING_LITERAL_QUOTE
Does this "encoding" have a name I can look up to use it in my programming language? If not, what does it mean in practice?
The grammar productions that you need are right in the document that you linked to:
[9] STRING_LITERAL_QUOTE ::= '"' ([^#x22#x5C#xA#xD] | ECHAR | UCHAR)* '"'
[141s] BLANK_NODE_LABEL ::= '_:' (PN_CHARS_U | [0-9]) ((PN_CHARS | '.')* PN_CHARS)?
[10] UCHAR ::= '\u' HEX HEX HEX HEX | '\U' HEX HEX HEX HEX HEX HEX HEX HEX
[153s] ECHAR ::= '\' [tbnrf"'\]
This means that a string literal begins and ends with a double quote ("). Inside of the double quotes, you can have:
any character except #x22, #x5C, #xA, #xD, that is, the double quote, backslash, line feed, and carriage return, each of which can be written with an escape instead;
a Unicode character represented as \u followed by four hex digits, or \U followed by eight hex digits; or
an escape sequence, which is a \ followed by any of t, b, n, r, f, ", ', and \, representing various control and quoting characters (a minimal escaping sketch follows this list).
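In practice, only those four characters must be escaped; everything else may appear literally or via a UCHAR. Here is a minimal sketch in Python; escape_ntriples is a hypothetical helper name, not something from a library:
# Escape only the characters that STRING_LITERAL_QUOTE forbids:
# #x22 ("), #x5C (\), #xA (LF) and #xD (CR).
def escape_ntriples(s):
    table = {'"': '\\"', '\\': '\\\\', '\n': '\\n', '\r': '\\r'}
    return "".join(table.get(ch, ch) for ch in s)

print('"' + escape_ntriples('This "Literal" needs escaping!') + '"')
# prints: "This \"Literal\" needs escaping!"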
You could use Literal#n3()
e.g.
# pip install rdflib
>>> from rdflib import Literal
>>> lit = Literal('This "Literal" needs escaping!')
>>> s = lit.n3()
>>> print(s)
"This \"Literal\" needs escaping!"
In addition to Josh's answer, it is almost always a good idea to normalize Unicode data to NFC, e.g. in Java you can use the following routine:
String normalized = java.text.Normalizer.normalize("rdf literal", Normalizer.Form.NFC);
For more information see: http://www.macchiato.com/unicode/nfc-faq
What is NFC?
For various reasons, Unicode sometimes has multiple representations of the same character. For example, each of the following sequences (the first two being single-character sequences) represents the same character:
U+00C5 ( Å ) LATIN CAPITAL LETTER A WITH RING ABOVE
U+212B ( Å ) ANGSTROM SIGN
U+0041 ( A ) LATIN CAPITAL LETTER A + U+030A ( ̊ ) COMBINING RING ABOVE
These sequences are called canonically equivalent. The first of these forms is called NFC, for Normalization Form C, where the C is for composition. For more information on these, see the introduction of UAX #15: Unicode Normalization Forms. A function transforming a string S into the NFC form can be abbreviated as toNFC(S), while one that tests whether S is in NFC is abbreviated as isNFC(S).
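To see the canonical equivalence and the two operations (toNFC and isNFC) in code, here is a small Python sketch using the standard unicodedata module (not part of the FAQ quoted above):
import unicodedata

precomposed = "\u00C5"   # LATIN CAPITAL LETTER A WITH RING ABOVE
angstrom    = "\u212B"   # ANGSTROM SIGN
combining   = "A\u030A"  # LATIN CAPITAL LETTER A + COMBINING RING ABOVE

# toNFC(S): all three canonically equivalent sequences normalize to U+00C5
assert unicodedata.normalize("NFC", angstrom) == precomposed
assert unicodedata.normalize("NFC", combining) == precomposed

# isNFC(S): unicodedata.is_normalized is available on Python 3.8+
print(unicodedata.is_normalized("NFC", combining))    # False
print(unicodedata.is_normalized("NFC", precomposed))  # True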
Is there a built in way, or reasonably standard package that allows you to convert a standard UUID into a short string that would enable shorter URL's?
I.e. taking advantage of using a larger range of characters such as [A-Za-z0-9] to output a shorter string.
I know we can use base64 to encode the bytes, as follows, but I'm after something that creates a string that looks like a "word", i.e. no + and /:
id = base64.StdEncoding.EncodeToString(myUuid.Bytes())
A universally unique identifier (UUID) is a 128-bit value, which is 16 bytes. For human-readable display, many systems use a canonical format using hexadecimal text with inserted hyphen characters, for example:
123e4567-e89b-12d3-a456-426655440000
This has length 16*2 + 4 = 36. You may choose to omit the hyphens, which gives you:
fmt.Printf("%x\n", uuid)
fmt.Println(hex.EncodeToString(uuid))
// Output: 32 chars
123e4567e89b12d3a456426655440000
123e4567e89b12d3a456426655440000
You may choose to use base32 encoding (which encodes 5 bits with 1 symbol in contrast to hex encoding which encodes 4 bits with 1 symbol):
fmt.Println(base32.StdEncoding.EncodeToString(uuid))
// Output: 26 chars
CI7EKZ7ITMJNHJCWIJTFKRAAAA======
Trim the trailing = signs when transmitting, so this will always be 26 chars. Note that you have to append "======" before decoding the string with base32.StdEncoding.DecodeString().
If this is still too long for you, you may use base64 encoding (which encodes 6 bits with 1 symbol):
fmt.Println(base64.RawURLEncoding.EncodeToString(uuid))
// Output: 22 chars
Ej5FZ-ibEtOkVkJmVUQAAA
Note that base64.RawURLEncoding produces a base64 string (without padding) which is safe for URL inclusion, because the 2 extra chars in the symbol table (beyond [0-9a-zA-Z]) are - and _, both which are safe to be included in URLs.
Unfortunately for you, the base64 string may contain 2 extra chars beyond [0-9a-zA-Z]. So read on.
Interpreted, escaped string
If these 2 extra characters are still not acceptable to you, you may choose to turn your base64 string into an interpreted, escaped string, similar to the interpreted string literals in Go. For example, if you want to include a backslash in an interpreted string literal, you have to double it, because backslash is a special character that starts an escape sequence, e.g.:
fmt.Println("One backslash: \\") // Output: One backslash: \
We may choose to do something similar to this. We have to designate a special (escape) character: let it be 9.
Reasoning: base64.RawURLEncoding uses the charset A..Za..z0..9-_, so 9 is the alphanumeric character with the highest code (61 decimal = 111101b). See the advantage of this below.
So whenever the base64 string contains a 9, replace it with 99. And whenever the base64 string contains the extra characters, use a sequence instead of them:
9 => 99
- => 90
_ => 91
This is a simple replacement table which can be captured by a value of strings.Replacer:
var escaper = strings.NewReplacer("9", "99", "-", "90", "_", "91")
And using it:
fmt.Println(escaper.Replace(base64.RawURLEncoding.EncodeToString(uuid)))
// Output:
Ej5FZ90ibEtOkVkJmVUQAAA
This will slightly increase the length as sometimes a sequence of 2 chars will be used instead of 1 char, but the gain will be that only [0-9a-zA-Z] chars will be used, as you wanted. The average length will be less than 1 additional character: 23 chars. Fair trade.
Logic: for simplicity, let's assume all possible UUIDs have equal probability (a UUID is not completely random, so this is not quite the case, but let's set that aside as this is just an estimation). The last base64 symbol encodes only 2 bits of the 128-bit value, so it is always one of A, Q, g or w and will never be a replaceable char (that's why we chose the special char to be 9 rather than something like A); the other 21 chars may each turn into a replaceable sequence. The chance of one being replaceable is 3/64 ≈ 0.047, so on average 21*3/64 ≈ 0.98 chars turn into a 2-char sequence, which equals the number of extra characters.
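If you want to check that estimate empirically, here is a quick sketch in Python (not part of the original answer) that applies the same replacement scheme to the unpadded URL-safe base64 form of random UUIDs and averages the resulting length:
import base64, uuid

def escape(s):
    # same replacement table as the escaper above: 9 -> 99, - -> 90, _ -> 91
    return s.replace("9", "99").replace("-", "90").replace("_", "91")

N = 100_000
total = 0
for _ in range(N):
    b64 = base64.urlsafe_b64encode(uuid.uuid4().bytes).decode().rstrip("=")
    total += len(escape(b64))

print(total / N)  # ~23 chars on average, i.e. about 1 more than the 22-char base64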
To decode, use an inverse decoding table captured by the following strings.Replacer:
var unescaper = strings.NewReplacer("99", "9", "90", "-", "91", "_")
Example code to decode an escaped base64 string:
fmt.Println("Verify decoding:")
s := escaper.Replace(base64.RawURLEncoding.EncodeToString(uuid))
dec, err := base64.RawURLEncoding.DecodeString(unescaper.Replace(s))
fmt.Printf("%x, %v\n", dec, err)
Output:
123e4567e89b12d3a456426655440000, <nil>
Try all the examples on the Go Playground.
As suggested here, if you just want a fairly random string to use as a slug, it's better not to bother with a UUID at all.
You can simply use Go's native math/rand package to make random strings of the desired length:
import (
    "encoding/hex"
    "math/rand"
)
b := make([]byte, 4) // 4 random bytes = 8 hex characters
rand.Read(b)
s := hex.EncodeToString(b)
Another option is math/big. While base64 has a constant output of 22 characters, math/big can get down to 2 characters, depending on the input:
package main

import (
    "encoding/base64"
    "fmt"
    "math/big"
)

type uuid [16]byte

func (id uuid) encode() string {
    return new(big.Int).SetBytes(id[:]).Text(62)
}

func main() {
    var id uuid
    for n := len(id); n > 0; n-- {
        id[n-1] = 0xFF
        s := base64.RawURLEncoding.EncodeToString(id[:])
        t := id.encode()
        fmt.Printf("%v %v\n", s, t)
    }
}
Result:
AAAAAAAAAAAAAAAAAAAA_w 47
AAAAAAAAAAAAAAAAAAD__w h31
AAAAAAAAAAAAAAAAAP___w 18owf
AAAAAAAAAAAAAAAA_____w 4GFfc3
AAAAAAAAAAAAAAD______w jmaiJOv
AAAAAAAAAAAAAP_______w 1hVwxnaA7
AAAAAAAAAAAA_________w 5k1wlNFHb1
AAAAAAAAAAD__________w lYGhA16ahyf
AAAAAAAAAP___________w 1sKyAAIxssts3
AAAAAAAA_____________w 62IeP5BU9vzBSv
AAAAAAD______________w oXcFcXavRgn2p67
AAAAAP_______________w 1F2si9ujpxVB7VDj1
AAAA_________________w 6Rs8OXba9u5PiJYiAf
AAD__________________w skIcqom5Vag3PnOYJI3
AP___________________w 1SZwviYzes2mjOamuMJWv
_____________________w 7N42dgm5tFLK9N8MT7fHC7
https://golang.org/pkg/math/big
How to split binary Erlang string treating its data as UTF8 characters?
Let's say we have a binary which should be split into two parts, where the first part should contain the first two UTF-8 characters. Here are a few examples:
<<"ąčęė">> should become [<<"ąč">>, <<"ęė">>]
<<"あぁぅうぁ">> should become [<<"あぁ">>, <<"ぅうぁ">>]
To just split a UTF-8 encoded binary string into two parts, with the first part containing the first two characters and the second part the rest, you could use the function:
split_2(<<One/utf8,Two/utf8,Rest/binary>>) ->
    %% One and Two are now the unicode codepoints of the first 2 characters.
    [<<One/utf8,Two/utf8>>,Rest].
Matching a binary against a utf8 segment extracts the first UTF-8 encoded character and yields its Unicode codepoint as an integer, which is why we must rebuild the resultant binary from the first two characters. This function will fail if the binary does not start with 2 UTF-8 encoded characters.
The difference between a bitstring and a binary is that the size of a binary must be a multiple of 8 bits while a bitstring can be any size.
Still, it's unclear to me, but I think this would do the trick:
Eshell V6.2 (abort with ^G)
1> Input = <<"ąčęė">>.
<<"ąčęė">>
2> L = [X || <<X:2/binary>> <= Input].
[<<"ąč">>,<<"ęė">>]
3>
UPDATE: This one will split it into S, TheRest:
%% S is the number of characters you want
split_it(S, Bin) when S > 0 ->
    case Bin of
        <<P:S/binary, R/binary>> -> [P | split_it(infinity, R)];
        <<>> -> [];
        _ -> [Bin]
    end.
I happened to need a function like this, and here is what I ended up with:
trunc_utf8(Utf8s, Count) ->
trunc_utf8(Utf8s, Count, <<>>).
trunc_utf8(<<>>, _Count, Acc) -> Acc;
trunc_utf8(_Utf8s, 0, Acc) -> Acc;
trunc_utf8(<<H/utf8, T/binary>> = _Utf8s, Count, Acc) ->
trunc_utf8(T, Count - 1, <<Acc/binary, H/utf8>>).
I have the following piece of code which converts one char to hex at a time. I want to convert two chars at a time, i.e. 99ab should be treated as '99', 'ab' and converted to its equivalent hex.
Current implementation is as follows
$final =~ s/(.)/sprintf("0x%X ",ord($1))/eg;
chop($final);
TIA
Your question doesn't make much sense. Hex is a string representation of a number. You can't convert a string to hex.
You can convert individual characters of a string to hex since characters are merely numbers, but that's clearly not what you want. (That's what your code does.)
I think you are trying to convert from hex to chars.
6 chars "6a6b0a" ⇒ 3 chars "\x6a\x6b\x0a"
If so, you can use your choice of
$final =~ s/(..)/ chr(hex($1)) /seg;
or
$final = pack 'H*', $final;
The other possibility I can think of is that you want to unpack 16-bit integers.
6 chars "6a6b" ⇒ 13 chars "0x6136 0x6236" (LE byte order)
-or-
6 chars "6a6b" ⇒ 13 chars "0x3661 0x3662" (BE byte order)
If so, you can use
my @nums = unpack 'S<*', $packed; # For 16-bit ints, LE byte order
-or-
my @nums = unpack 'S>*', $packed; # For 16-bit ints, BE byte order
my $final = join ' ', map sprintf('0x%04X', $_), @nums;
I can't find a basic description of how string data is stored in Perl! It's like all the documentation assumes I already know this for some reason. I know about encode(), decode(), and I know I can read raw bytes into a Perl "string" and output them again without Perl screwing with them. I know about open modes. I also gather Perl must use some internal format to store character strings and can differentiate between character and binary data. Please, where is this documented?
Equivalent question is; given this perl:
$x = decode($y);
Decode to WHAT and from WHAT??
As far as I can figure there must be a flag on the string data structure that says this is binary XOR character data (of some internal format which BTW is a superset of Unicode -http://perldoc.perl.org/Encode.html#DESCRIPTION). But I'd like it if that were stated in the docs or confirmed/discredited here.
This is a great question. To investigate, we can dive a little deeper by using Devel::Peek to see what is actually stored in our strings (or other variables).
First, let's start with an ASCII string:
$ perl -MDevel::Peek -E 'Dump "string"'
SV = PV(0x9688158) at 0x969ac30
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0x969ea20 "string"\0
CUR = 6
LEN = 12
Then we can turn on Unicode IO layers and do the same:
$ perl -MDevel::Peek -CSAD -E 'Dump "string"'
SV = PV(0x9eea178) at 0x9efcce0
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0x9f0faf8 "string"\0
CUR = 6
LEN = 12
From there, let's try to manually add some wide characters:
$ perl -MDevel::Peek -CSAD -e 'Dump "string \x{2665}"'
SV = PV(0x9be1148) at 0x9bf3c08
REFCNT = 1
FLAGS = (POK,READONLY,pPOK,UTF8)
PV = 0x9bf7178 "string \342\231\245"\0 [UTF8 "string \x{2665}"]
CUR = 10
LEN = 12
From that you can clearly see that Perl has interpreted this correctly as UTF-8. The problem is that if I don't give the octets using the \x{} escaping, the representation looks more like the regular string:
$ perl -MDevel::Peek -CSAD -E 'Dump "string ♥"'
SV = PV(0x9143058) at 0x9155cd0
REFCNT = 1
FLAGS = (POK,READONLY,pPOK)
PV = 0x9168af8 "string \342\231\245"\0
CUR = 10
LEN = 12
All Perl sees is bytes, and it has no way to know that you meant them as a Unicode character, unlike when you entered the escaped octets above. Now let's use decode and see what happens:
$ perl -MDevel::Peek -CSAD -MEncode=decode -E 'Dump decode "utf8", "string ♥"'
SV = PV(0x8681100) at 0x8683068
REFCNT = 1
FLAGS = (TEMP,POK,pPOK,UTF8)
PV = 0x869dbf0 "string \342\231\245"\0 [UTF8 "string \x{2665}"]
CUR = 10
LEN = 12
Ta-da! Now you can see that the string is correctly represented internally, matching what you entered when you used the \x{} escaping.
The actual answer is it is "decoding" from bytes to characters, but I think it makes more sense when you see the Peek output.
Finally, you can make Perl see your source code as UTF-8 by using the utf8 pragma, like so:
$ perl -MDevel::Peek -CSAD -Mutf8 -E 'Dump "string ♥"'
SV = PV(0x8781170) at 0x8793d00
REFCNT = 1
FLAGS = (POK,READONLY,pPOK,UTF8)
PV = 0x87973b8 "string \342\231\245"\0 [UTF8 "string \x{2665}"]
CUR = 10
LEN = 12
Rather like the fluid string/number status of its scalar variables, the internal format of Perl's strings is variable and depends on the contents of the string.
Take a look at perluniintro, which says this.
Internally, Perl currently uses either whatever the native eight-bit character set of the platform (for example Latin-1) is, defaulting to UTF-8, to encode Unicode strings. Specifically, if all code points in the string are 0xFF or less, Perl uses the native eight-bit character set. Otherwise, it uses UTF-8.
What that means is that a string like "I have £ two" is stored as (bytes) I have \x{A3} two. (The pound sign is U+00A3.) Now if I append a multi-byte unicode string such as U+263A - a smiling face - Perl will convert the whole string to UTF-8 before it appends the new character, giving (bytes) I have \xC2\xA3 two\xE2\x98\xBA. Removing this last character again leaves the string UTF-8 encoded, as I have \xC2\xA3 two.
But I wonder why you need to know this. Unless you are writing an XS extension in C the internal format is transparent and invisible to you.
Perl's internal string format is implementation-dependent, but usually a superset of UTF-8. It doesn't matter what it is, because you use decode and encode to convert strings between the internal format and other encodings.
decode converts to Perl's internal format, encode converts from Perl's internal format.
Binary data is stored internally the same way characters 0 through 255 are.
encode and decode just convert between formats. For example, UTF-8 encoding means each character of the result is just an octet (a Perl character with value 0 through 255), i.e. the string consists of UTF-8 octets.
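For comparison (not from the original answer), the same bytes-vs-characters distinction is easy to see in Python, where the two directions have the same names:
# decode: bytes in a known encoding -> abstract characters
# encode: abstract characters -> bytes in a chosen encoding
raw = b"\xc2\xa3"                   # two UTF-8 octets
text = raw.decode("utf-8")          # one character, U+00A3 (the pound sign)
print(len(raw), len(text))          # 2 1
print(text.encode("utf-8") == raw)  # True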
Short answer: It's a mess
Slightly longer: The difference isn't visible to the programmer.
Basically, you have to remember whether your string contains bytes or characters, where characters are Unicode codepoints. If you only ever encounter ASCII, the difference is invisible, which is dangerous.
Data itself and the representation of such data are distinct, and should not be confused. Strings are (conceptually) a sequence of codepoints, but are represented as a byte array in memory, and represented as some byte sequence when encoded. If you want to store binary data in a string, you re-interpret the number of a codepoint as a byte value, and restrict yourself to codepoints in 0–255.
(E.g. a file has no encoding. The information in that file has some encoding (be it ASCII, UTF-16 or EBCDIC at a character level, and Perl, HTML or .ini at an application level))
The exact storage format of a string is irrelevant, but you can store complete integers inside such a string:
# this will work if your perl was compiled with large integers
my $string = chr 2**64; # this is so not unicode
say ord $string; # 18446744073709551615
The internal format is adjusted accordingly to accommodate such values; normal strings won't take up one integer per character.
Perl can handle more than Unicode can, so it's very flexible. Sometimes you want to interface with something that cannot, so you can use encode(...) and decode(...) to handle those transformations. See http://perldoc.perl.org/utf8.html