The following example defines a series of port numbers starting at 3333 using iota.
package main

import (
    "fmt"
)

const (
    FirstPort = iota + 3333
    SecondPort
    ThirdPort
)

func main() {
    hostAndPort := "localhost:" + fmt.Sprint(SecondPort)
    fmt.Printf("%s", hostAndPort)
    // Output:
    // localhost:3334
}
When combining a hostname and a port, I'd like to avoid having to wrap the port constant in fmt.Sprint and simply write, for example, "localhost:"+SecondPort. Is there a way to use iota to define the port numbers as string constants, e.g. "3334"?
The following doesn't work:
FirstPort = string(iota + 3333)
Neither does
FirstPort = fmt.Sprintf("%d", iota + 3333)
Quoting from Spec: Iota:
Within a constant declaration, the predeclared identifier iota represents successive untyped integer constants.
So iota provides you with integer constants. If we want string constants, we need a way to convert an integer to its base-10 string representation, and that conversion must itself be a constant expression, else we can't use it in a constant declaration.
Unfortunately for us, a simple type conversion from integer to string will not yield the base-10 representation of the numerical value, but:
Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer.
So the result will be a string holding a single rune, whose value (the Unicode codepoint) is the source number.
Calling "converter" functions such as strconv.Itoa() or fmt.Sprint() is also out of the question: function calls cannot be part of a constant expression, so the result could only be used in a variable declaration (not to mention that we couldn't use iota there, as it is only allowed in constant declarations).
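To illustrate both points, here is a minimal, hypothetical sketch (the names are made up for the example):
package main

import (
    "fmt"
    "strconv"
)

// A conversion is a valid constant expression, but it yields the rune
// with that codepoint, not the decimal digits:
const NotWhatWeWant = string(rune(65)) // "A", not "65"

// A function call works only in a variable declaration:
var AsVariable = strconv.Itoa(65) // "65"

// const Fails = strconv.Itoa(65) // does not compile: function calls are not constant

func main() {
    fmt.Println(NotWhatWeWant, AsVariable)
}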
But there is still a solution.
I don't think it is worth the hassle and the loss of readability, but actually you can define string constants holding increasing decimal numbers using iota.
The solution builds the "complete" numbers from digits. We can obtain the base-10 string representation by concatenating the digits (as string values) of the number.
The last question to solve is how to "list" the digits of a number. This is simple arithmetic:
The last digit (in base 10) of a number is i % 10.
The preceding digit is i / 10 % 10.
The one before that is i / 100 % 10.
And so on...
And to obtain the rune for a digit (which is in the range 0..9), we can simply add '0' to it and convert it to a string. And that's all.
This is how we can code this for a 1-digit string number:
n0 = string('0'+iota%10)
For a 2-digit number:
n00 = string('0'+iota/10%10) + string('0'+iota/1%10)
For a 3-digit number:
n000 = string('0'+iota/100%10) + string('0'+iota/10%10) + string('0'+iota/1%10)
Let's see it in action:
const (
P00 = string('0'+iota/10%10) + string('0'+iota/1%10)
P01
P02
P03
P04
P05
P06
P07
P08
P09
P10
P11
P12
P13
P14
P15
P16
P17
P18
P19
P20
)
Printing the results:
fmt.Printf("%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n%v\n",
P00, P01, P02, P03, P04, P05, P06, P07, P08, P09,
P10, P11, P12, P13, P14, P15, P16, P17, P18, P19, P20)
Output (try it on the Go Playground):
00
01
02
03
04
05
06
07
08
09
10
11
12
13
14
15
16
17
18
19
20
So far so good, but how do we make it start at 3333?
That's not a problem either; it can be achieved easily. We simply shift iota by adding an "initial" number to it. And that's all it takes.
Let's see an example where the first number will be 3339:
const (
    P3339 = string('0'+(iota+3339)/1000%10) +
        string('0'+(iota+3339)/100%10) +
        string('0'+(iota+3339)/10%10) +
        string('0'+(iota+3339)/1%10)
    P3340
    P3341
)

func main() {
    fmt.Println(P3339)
    fmt.Println(P3340)
    fmt.Println(P3341)
}
Output of the above is the expected (try it on the Go Playground):
3339
3340
3341
You're creating untyped numeric constants. When in doubt, check the spec. To create a string containing the host and port number, you can simply use fmt.Sprintf like so:
package main

import "fmt"

const (
    FirstPort = iota + 3333
    SecondPort
    ThirdPort
)

func main() {
    hostPort := fmt.Sprintf("localhost:%d", FirstPort)
    fmt.Println(hostPort)
}
That's all there is to it: Demo
If I create a subprogram of type function that, for instance, asks you to type a string of a particular length, and you type Overflow, it's supposed to print the last half of the string, so in this case it would be flow. But on the other hand, if I type an odd number of characters like Stack, it's supposed to print the last half of the string plus the middle letter, so in this case it would be "ack".
Let me make it clearer (text in bold is user input):
Type a string that's not longer than 7 characters: Candy
The other half of the string is: ndy
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Integer_Text_IO; use Ada.Integer_Text_IO;
function Split_String (S : in String) return String is
begin
Mid := 1 + (S'Length / 2);
return S(Mid .. S'Last);
end Split_String;
S : String(1 .. 7);
I : Integer;
begin
Put("Type a string that's no longer than 7 characters: ");
Get_Line(S, I);
Put(Split_String(S));
end Split;
Let me tell you how I've been thinking. I do a Get_Line to see how many characters the string contains. I then pass I to my subprogram to determine whether it's evenly divisible by two or not. If it's divisible by two, the remainder will be 0, which means printing the other half of the string plus THE MIDDLE CHARACTER is not needed. In all other cases, when it's not divisible by two, I have to print the other half of the string plus the middle character. But now I've stumbled upon a big problem in my main program: I don't know how to print the other half of a string. If a string contains 4 characters I can just write Put(S(3 .. 4)); but the thing is that I don't know a general formula for this. Help is appreciated! :) Have a good day!
You need a more general approach to your problem. Also, try to understand how Get_Line works for you.
For example, if you declare an input string with a large size such as
Input : String (1..1024);
You will have a string large enough to work with any likely input values.
Next, you need a variable to indicate how many characters were actually read by Get_Line.
Length : Natural;
The data returned by Get_Line will then be in the slice of the input string designated as
Input (1 .. Length);
Pass that slice to your function to return the second half of the string.
function last_half(S : string) return string;
last_half(Input(1..Length));
Now all you need is to calculate the last half of the string passed to the function last_half. The function will output a slice of the string passed to it. To find the first index of the last half of the input string you must perform the calculation
mid : Positive := 1 + (S'length / 2);
Then simply return the string S(mid .. S'Last).
It appears that the goal of this exercise is to learn how to use array slices. Concentrate on how slices work for you in the problem and the solution will be very simple.
One possible solution is
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   Input  : String (1 .. 1_024);
   Length : Natural;

   function last_half (S : in String) return String is
      Mid : Positive := 1 + (S'Length / 2);
   begin
      return S (Mid .. S'Last);
   end last_half;

begin
   Put ("Enter a string: ");
   Get_Line (Input, Length);
   Put_Line (Input (1 .. Length) & " : " & last_half (Input (1 .. Length)));
end Main;
Study how the solution uses array slices on the return value of Get_Line, on the parameter for the function last_half, and in its return statement. It is also important to remember that the type String is defined as an unconstrained array of Character. This means that every slice of a String is also a String.
type String is array ( Positive range <> ) of Character;
Aside from being an untidy mess, your latest code edit (as of 20:11 GMT on 15 Nov 2021) doesn’t even compile. Please don’t show us code like this! (unless, of course, that’s the problem).
I’d like to strongly suggest this alternate way of inputting strings:
declare
   S : constant String := Get_Line;
begin
   -- do things with S, which is exactly as long as
   -- the input you typed: no undefined characters at
   -- the end to confuse the result, no need to worry
   -- about overrunning an input buffer
end;
With this change, and obvious syntactic changes, your current code will do what you want.
I'm trying to convert from hex to ASCII and I'm getting the output below. I would like to understand how to interpret it in the correct way.
0x2b6162630704fe17
Using the npm module hex2ascii it returns this:
"+abc\u0007\u0004þ\u0017"
if I convert from an online converter, it returns :
+abcþ
Could someone help me to interpret this? I am using node.
Am I doing something wrong?
Appreciate the help!
If you look at the strings in the console, you will notice that the two strings you've posted are actually the same.
The gist is, the string contains non-printable Unicode characters, which get escaped by the hex2ascii module.
The online converter you are using tries to display those characters. Since they are not printable, you simply cannot see them.
Let's convert the hex string
var conv = "2b6162630704fe17".match(/(..)/g).reduce((a, c) => a + String.fromCharCode(parseInt(c, 16)), "")
conv //"+abcþ"
It looks just like the String from the converter! Let's compare it to the other string
conv === "+abc\u0007\u0004þ\u0017" // true
Are you sure it's 8-digit ASCII?
If it is, each pair of hex characters represents one ASCII code.
So:
2b6162630704fe17
First 2b, which is 2 * 16 + 11 = 43 - which is a plus sign
61, which is 6 * 16 + 1 = 97 = lowercase a
62, which is 6 * 16 + 2 = 98 = lowercase b
63, which is 6 * 16 + 3 = 99 = lowercase c
07, which is 0 * 16 + 7 = 7 - that's a special unprintable character.
Reference for converting the numbers to characters: asciitable.com
Based on the 07, I wonder if your data is truly ASCII, or a different encoding.
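For comparison, here is a minimal Go sketch (not from the original answers) that decodes the same hex string and prints it with escapes, so the unprintable bytes become visible:
package main

import (
    "encoding/hex"
    "fmt"
)

func main() {
    raw, err := hex.DecodeString("2b6162630704fe17")
    if err != nil {
        panic(err)
    }
    // %q escapes the non-printable bytes, similar to what hex2ascii showed.
    fmt.Printf("%q\n", raw) // "+abc\x07\x04\xfe\x17"
}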
Is there a built in way, or reasonably standard package that allows you to convert a standard UUID into a short string that would enable shorter URL's?
I.e. taking advantage of using a larger range of characters such as [A-Za-z0-9] to output a shorter string.
I know we can use base64 to encode the bytes, as follows, but I'm after something that creates a string that looks like a "word", i.e. no + and /:
id = base64.StdEncoding.EncodeToString(myUuid.Bytes())
A universally unique identifier (UUID) is a 128-bit value, which is 16 bytes. For human-readable display, many systems use a canonical format using hexadecimal text with inserted hyphen characters, for example:
123e4567-e89b-12d3-a456-426655440000
This has length 16*2 + 4 = 36. You may choose to omit the hyphens, which gives you:
fmt.Printf("%x\n", uuid)
fmt.Println(hex.EncodeToString(uuid))
// Output: 32 chars
123e4567e89b12d3a456426655440000
123e4567e89b12d3a456426655440000
You may choose to use base32 encoding (which encodes 5 bits with 1 symbol in contrast to hex encoding which encodes 4 bits with 1 symbol):
fmt.Println(base32.StdEncoding.EncodeToString(uuid))
// Output: 26 chars
CI7EKZ7ITMJNHJCWIJTFKRAAAA======
Trim the trailing = signs when transmitting, so this will always be 26 chars. Note that you have to append "======" before decoding the string with base32.StdEncoding.DecodeString().
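A minimal sketch of that round trip (assuming a 16-byte UUID, which always encodes to 26 base32 symbols plus 6 padding characters):
package main

import (
    "encoding/base32"
    "fmt"
    "strings"
)

func main() {
    uuid := []byte{0x12, 0x3e, 0x45, 0x67, 0xe8, 0x9b, 0x12, 0xd3,
        0xa4, 0x56, 0x42, 0x66, 0x55, 0x44, 0x00, 0x00}

    // Encode and drop the padding before transmitting.
    short := strings.TrimRight(base32.StdEncoding.EncodeToString(uuid), "=")

    // Re-append the padding before decoding.
    dec, err := base32.StdEncoding.DecodeString(short + "======")
    fmt.Println(short)              // CI7EKZ7ITMJNHJCWIJTFKRAAAA
    fmt.Printf("%x %v\n", dec, err) // 123e4567e89b12d3a456426655440000 <nil>
}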
If this is still too long for you, you may use base64 encoding (which encodes 6 bits with 1 symbol):
fmt.Println(base64.RawURLEncoding.EncodeToString(uuid))
// Output: 22 chars
Ej5FZ-ibEtOkVkJmVUQAAA
Note that base64.RawURLEncoding produces a base64 string (without padding) which is safe for URL inclusion, because the 2 extra chars in the symbol table (beyond [0-9a-zA-Z]) are - and _, both which are safe to be included in URLs.
Unfortunately for you, the base64 string may contain 2 extra chars beyond [0-9a-zA-Z]. So read on.
Interpreted, escaped string
If these 2 extra characters bother you, you may choose to turn your base64 string into an interpreted, escaped string, similar to the interpreted string literals in Go. For example, if you want to insert a backslash in an interpreted string literal, you have to double it, because backslash is a special character introducing a sequence, e.g.:
fmt.Println("One backslash: \\") // Output: One backslash: \
We may choose to do something similar to this. We have to designate a special character: let it be 9.
Reasoning: base64.RawURLEncoding uses the charset A..Za..z0..9-_, so 9 is the alphanumeric character with the highest code (61 decimal = 111101b). See the advantage below.
So whenever the base64 string contains a 9, replace it with 99. And whenever the base64 string contains the extra characters, use a sequence instead of them:
9 => 99
- => 90
_ => 91
This is a simple replacement table which can be captured by a value of strings.Replacer:
var escaper = strings.NewReplacer("9", "99", "-", "90", "_", "91")
And using it:
fmt.Println(escaper.Replace(base64.RawURLEncoding.EncodeToString(uuid)))
// Output:
Ej5FZ90ibEtOkVkJmVUQAAA
This will slightly increase the length as sometimes a sequence of 2 chars will be used instead of 1 char, but the gain will be that only [0-9a-zA-Z] chars will be used, as you wanted. The average length will be less than 1 additional character: 23 chars. Fair trade.
Logic: For simplicity, let's assume all possible UUIDs have equal probability (a UUID is not completely random, so this is not the case, but let's set that aside as this is just an estimation). The last base64 symbol will never be a replaceable char (that's why we chose the special char to be 9 instead of, say, A), so 21 chars may turn into a replaceable sequence. The chance of one being replaceable is 3 / 64 ≈ 0.047, so on average 21*3/64 ≈ 0.98 chars turn into a 2-char sequence, and this equals the expected number of extra characters.
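If you want to sanity-check that estimate, here is a minimal, hypothetical sketch that escapes random 16-byte values (as stand-ins for UUIDs) and averages the resulting lengths; it should print a value close to 23:
package main

import (
    "crypto/rand"
    "encoding/base64"
    "fmt"
    "strings"
)

func main() {
    escaper := strings.NewReplacer("9", "99", "-", "90", "_", "91")

    const n = 100000
    total := 0
    buf := make([]byte, 16)
    for i := 0; i < n; i++ {
        rand.Read(buf) // random bytes as a stand-in for a UUID
        total += len(escaper.Replace(base64.RawURLEncoding.EncodeToString(buf)))
    }
    fmt.Printf("average escaped length: %.2f\n", float64(total)/n)
}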
To decode, use an inverse decoding table captured by the following strings.Replacer:
var unescaper = strings.NewReplacer("99", "9", "90", "-", "91", "_")
Example code to decode an escaped base64 string:
fmt.Println("Verify decoding:")
s := escaper.Replace(base64.RawURLEncoding.EncodeToString(uuid))
dec, err := base64.RawURLEncoding.DecodeString(unescaper.Replace(s))
fmt.Printf("%x, %v\n", dec, err)
Output:
123e4567e89b12d3a456426655440000, <nil>
Try all the examples on the Go Playground.
As suggested here, if you want just a fairly random string to use as a slug, it's better not to bother with a UUID at all.
You can simply use Go's native math/rand library to make random strings of the desired length:
package main

import (
    "encoding/hex"
    "fmt"
    "math/rand"
)

func main() {
    b := make([]byte, 4) // 4 bytes = 8 hex characters
    rand.Read(b)
    s := hex.EncodeToString(b)
    fmt.Println(s)
}
Another option is math/big. While base64 has a constant output of 22 characters, math/big can get down to 2 characters, depending on the input:
package main

import (
    "encoding/base64"
    "fmt"
    "math/big"
)

type uuid [16]byte

func (id uuid) encode() string {
    return new(big.Int).SetBytes(id[:]).Text(62)
}

func main() {
    var id uuid
    for n := len(id); n > 0; n-- {
        id[n-1] = 0xFF
        s := base64.RawURLEncoding.EncodeToString(id[:])
        t := id.encode()
        fmt.Printf("%v %v\n", s, t)
    }
}
Result:
AAAAAAAAAAAAAAAAAAAA_w 47
AAAAAAAAAAAAAAAAAAD__w h31
AAAAAAAAAAAAAAAAAP___w 18owf
AAAAAAAAAAAAAAAA_____w 4GFfc3
AAAAAAAAAAAAAAD______w jmaiJOv
AAAAAAAAAAAAAP_______w 1hVwxnaA7
AAAAAAAAAAAA_________w 5k1wlNFHb1
AAAAAAAAAAD__________w lYGhA16ahyf
AAAAAAAAAP___________w 1sKyAAIxssts3
AAAAAAAA_____________w 62IeP5BU9vzBSv
AAAAAAD______________w oXcFcXavRgn2p67
AAAAAP_______________w 1F2si9ujpxVB7VDj1
AAAA_________________w 6Rs8OXba9u5PiJYiAf
AAD__________________w skIcqom5Vag3PnOYJI3
AP___________________w 1SZwviYzes2mjOamuMJWv
_____________________w 7N42dgm5tFLK9N8MT7fHC7
https://golang.org/pkg/math/big
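If you go this route, you will likely also need to decode the base-62 text back into the 16 bytes. A minimal sketch of that direction (the decode helper is hypothetical, not part of the original answer; SetString with base 62 uses the same 0-9a-zA-Z digit order that Text(62) produces):
package main

import (
    "fmt"
    "math/big"
)

func decode(s string) ([16]byte, error) {
    var id [16]byte
    n, ok := new(big.Int).SetString(s, 62)
    if !ok {
        return id, fmt.Errorf("invalid base-62 string: %q", s)
    }
    n.FillBytes(id[:]) // zero-extends on the left; panics if the value needs more than 16 bytes
    return id, nil
}

func main() {
    id, err := decode("7N42dgm5tFLK9N8MT7fHC7")
    fmt.Printf("%x %v\n", id, err) // ffffffffffffffffffffffffffffffff <nil>
}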
I am doing some classification and needed to convert an integer code to strings for that reason. I wrote something like this:
s(1).class = 1;
s(2).class = 7;
s(3).class = 9;
[s([find([s.class] == 1)]).class] = deal('c1'); %first conversion
[s([find([s.class] > 1)]).class] = deal('c2'); %second conversion
and was surprised to find s being a 1x4 struct array after the second conversion instead of the expected 1x3 struct array with the values.
Now, after some research, I understand that after the first conversion the value of s(1).class is 'c1' and the argument to find in the second conversion is not what I assumed it would be. The [s.class] statement actually returns something like the string 'c1\a\t' with ASCII escape sequences for bell and horizontal tab.
As the comparison does work (returning the matrix [1 1 1 1] and thus expanding my structure) I assume that matlab converts either the operand [s.class] or the operand 1.
Which is it? What actually is compared here numbers or characters?
And on the other hand, is there a built-in way to make > more restrictive, i.e. to require the operands to be of the same type and, if not, to throw an error?
When you do the comparison 'ab' > 1, the char array 'ab' gets converted to a double array, namely the ASCII codes of the characters. So 'ab' > 1 is equivalent to double('ab') > 1, which gives [1 1].
To get the behaviour you want (issue an error if one of the arguments is char) you could define a function:
function z = greaterthan(x,y)
if ischar(x) || ischar(y)
error('Invalid comparison: one of the input arguments is of type char')
else
z = x>y;
end
so that
>> greaterthan([0 1 2], 1)
ans =
0 0 1
>> greaterthan('ab', 1)
??? Error using ==> greaterthan at 3
Invalid comparison: one of the input arguments is of type char
Because you have not provided any expected output yet, I am going with the observations.
You are using a comprehension method (by invoking find) to determine which locations of struct s you will be populating with the results from deal (which takes the arguments 'c1' and 'c2'). You have already set the type for s(whatever).class in the first snippet you provided, which means it is numbers you are comparing, not characters.
There is the isa function to see which class your variable belongs to. Use it to see what you are actually putting in (it should say int32 in your case).
TL;DR: how do I convert ints into hex? Also, how would I convert a 1-character string into a hex value (i.e. 'F' -> 0xF)?
I'm looking to convert a character to a hex value, do some math, then convert back into a character.
so I have something like this:
addBits: aValue move: action
    "aValue is always either 5 or 10 (0xA)"
    "move is either 'a' for add or 's' for subtract"
    | sum conversion |
    "self stringToMakeHex is the string; it's always either an 'F', 'A', '0',
     or '5'. I need to turn it into either 0xF, 0xA, 0x0, or 0x5"
    conversion := (self stringToMakeHex) asInteger.
    (action = 's')
        ifTrue: [sum := conversion - aValue]
        ifFalse: [sum := conversion + aValue].
    self stringToMakeHex: (sum asString).
I know I shouldn't be doing asInteger, as it converts 'F' into a zero somehow, so I'm wondering if there's a nice way to get 0xF or even 15 from it. My other problem is that aValue is coming in as an integer (5 and 10 base10) so I'll need a way to get the hex values 0x5 and 0xA.
All this data is retrieved via TCP/IP from a different program, so the format I receive is out of my control... It doesn't help that I need to send back a string in order for the communication to be handled across the connection.
In Pharo (and Squeak, if I'm not mistaken) you can use the class-side #readFrom:base: method of Integer:
Integer readFrom: self stringToMakeHex base: 16.
This gives you an integer of value 15 in the case of 'F'.
If I were you I'd encapsulate the reading and printing in a new class, e.g. HexString. You could implement on: on the class side and +, -, and printOn: on the instance side.
Update:
To get a String from an integer in a specific base use #printStringBase: (in Pharo) or #printOn:base: (which is probably more portable) like so: 12 printStringBase: 16. This evaluates to 'C'.
Try this:
(Compiler evaluate: '16r', self stringToMakeHex)
To convert back:
(sum printStringRadix: 16)