Transforming Go's PutUint16 to Python - python-3.x

I want to get the equivalent of the Go code given below in Python:
func Make(op Opcode, operands ...int) []byte {
    def, ok := definitions[op]
    if !ok {
        return []byte{}
    }
    instructionLen := 1
    for _, w := range def.OperandWidths {
        instructionLen += w
    }
    instruction := make([]byte, instructionLen)
    instruction[0] = byte(op)
    offset := 1
    for i, o := range operands {
        width := def.OperandWidths[i]
        switch width {
        case 2:
            binary.BigEndian.PutUint16(instruction[offset:], uint16(o))
        case 1:
            instruction[offset] = byte(o)
        }
        offset += width
    }
    return instruction
}
func ReadOperands(def *Definition, ins Instructions) ([]int, int) {
    operands := make([]int, len(def.OperandWidths))
    offset := 0
    for i, width := range def.OperandWidths {
        switch width {
        case 2:
            operands[i] = int(ReadUint16(ins[offset:]))
        case 1:
            operands[i] = int(ReadUint8(ins[offset:]))
        }
        offset += width
    }
    return operands, offset
}
op above is any of:
type Opcode byte

const (
    OpConstant Opcode = iota
    OpAdd
    OpPop
    OpSub
    OpMul
    OpDiv
)
The code above comes from the book Writing a Compiler in Go and can be found here
I am not exactly sure what is going on here with the byte transformations and packing, but in order to understand it better I am rewriting the whole thing in Python. Can someone help me translate those two functions into Python?

You can use the to_bytes method of integers: o.to_bytes(2, byteorder='big') has the same effect as PutUint16. Likewise, int.from_bytes can be used for reading. There is also struct.pack, which handles the same kind of thing in a format-string style.
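For example (258, i.e. 0x0102, is an arbitrary value chosen only to make the byte order visible):

import struct

(258).to_bytes(2, byteorder="big")   # b'\x01\x02' -- what PutUint16 writes
int.from_bytes(b"\x01\x02", "big")   # 258         -- what ReadUint16 reads back
struct.pack(">H", 258)               # b'\x01\x02' -- '>' = big-endian, 'H' = unsigned 16-bit
struct.unpack(">H", b"\x01\x02")[0]  # 258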
Instead of allocating the buffer up front and writing into offsets, as the Go code does, it is simpler to start with an empty bytes object and append to it with +.
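Putting that together, a minimal sketch of the two functions in Python could look like the following (the shape of the definitions table and the names OP_CONSTANT, operand_widths, make and read_operands are assumptions for illustration, not taken from the book):

OP_CONSTANT, OP_ADD, OP_POP, OP_SUB, OP_MUL, OP_DIV = range(6)

definitions = {
    OP_CONSTANT: {"name": "OpConstant", "operand_widths": [2]},
    OP_ADD:      {"name": "OpAdd",      "operand_widths": []},
    OP_POP:      {"name": "OpPop",      "operand_widths": []},
    OP_SUB:      {"name": "OpSub",      "operand_widths": []},
    OP_MUL:      {"name": "OpMul",      "operand_widths": []},
    OP_DIV:      {"name": "OpDiv",      "operand_widths": []},
}

def make(op, *operands):
    d = definitions.get(op)
    if d is None:
        return b""                        # like the Go `return []byte{}`
    instruction = bytes([op])             # the opcode is the first byte
    for width, o in zip(d["operand_widths"], operands):
        # to_bytes covers both cases: width 2 is the PutUint16 branch,
        # width 1 is the single-byte branch
        instruction += o.to_bytes(width, byteorder="big")
    return instruction

def read_operands(d, ins):
    operands, offset = [], 0
    for width in d["operand_widths"]:
        operands.append(int.from_bytes(ins[offset:offset + width], byteorder="big"))
        offset += width
    return operands, offset

print(make(OP_CONSTANT, 65534))                                               # b'\x00\xff\xfe'
print(read_operands(definitions[OP_CONSTANT], make(OP_CONSTANT, 65534)[1:]))  # ([65534], 2)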

Related

Can I make a prefilled string in golang with make or new?

I am trying to optimize my stringpad library in Go. So far the only way I have found to fill a string (actually a bytes.Buffer) with a known character value (e.g. 0 or " ") is with a for loop.
The snippet of code is:
// PadLeft pads string on left side with p, c times
func PadLeft(s string, p string, c int) string {
    var t bytes.Buffer
    if c <= 0 {
        return s
    }
    if len(p) < 1 {
        return s
    }
    for i := 0; i < c; i++ {
        t.WriteString(p)
    }
    t.WriteString(s)
    return t.String()
}
I believe that the larger the pad, the more memory copies of the t buffer are made. Is there a more elegant way to create a buffer of known size, filled with a known value, on initialization?
You can only use make() and new() to allocate buffers (byte slices or arrays) that are zeroed. You may use composite literals to obtain slices or arrays that initially contain non-zero values, but you can't describe the initial values dynamically (indices must be constants).
Take inspiration from the similar but very efficient strings.Repeat() function. It repeats the given string the given number of times:
func Repeat(s string, count int) string {
    // Since we cannot return an error on overflow,
    // we should panic if the repeat will generate
    // an overflow.
    // See Issue golang.org/issue/16237
    if count < 0 {
        panic("strings: negative Repeat count")
    } else if count > 0 && len(s)*count/count != len(s) {
        panic("strings: Repeat count causes overflow")
    }
    b := make([]byte, len(s)*count)
    bp := copy(b, s)
    for bp < len(b) {
        copy(b[bp:], b[:bp])
        bp *= 2
    }
    return string(b)
}
strings.Repeat() does a single allocation to obtain a working buffer (a byte slice []byte) and uses the builtin copy() function to copy the repeated string. One noteworthy detail is that it copies the already-filled part of the buffer onto itself, doubling the filled portion on each pass: if the string has already been copied 4 times, the next copy() makes it 8 times, and so on. This minimizes the number of calls to copy(). The solution also takes advantage of the fact that copy() can copy bytes from a string without first converting it to a byte slice.
What we want is something similar, but with the result prepended to a string.
We can account for that by simply allocating a buffer whose size is what Repeat() would use plus the length of the string we're left-padding.
The result (without checking the count param):
func PadLeft(s, p string, count int) string {
    ret := make([]byte, len(p)*count+len(s))
    b := ret[:len(p)*count]
    bp := copy(b, p)
    for bp < len(b) {
        copy(b[bp:], b[:bp])
        bp *= 2
    }
    copy(ret[len(b):], s)
    return string(ret)
}
Testing it:
fmt.Println(PadLeft("aa", "x", 1))
fmt.Println(PadLeft("aa", "x", 2))
fmt.Println(PadLeft("abc", "xy", 3))
Output (try it on the Go Playground):
xaa
xxaa
xyxyxyabc
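(For comparison with the Python side of the main question above: none of this machinery is needed there, since string repetition and concatenation are built in. pad_left below is just an illustrative name.)

def pad_left(s: str, p: str, count: int) -> str:
    # p * count builds the repeated prefix in a single step
    return p * count + s if count > 0 else s

print(pad_left("abc", "xy", 3))  # xyxyxyabc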
See similar / related question: Is there analog of memset in go?

golang: optimal sorting and joining strings

This short method in Go's source code has a comment that implies it's not allocating memory in an optimal way.
... could do better allocation-wise here ...
This is the source code for the Join method.
What exactly is inefficiently allocated here? I don't see a way around allocating the source string slice and the destination byte slice: the source being the slice of keys, the destination being the slice of bytes.
The code referenced by the comment is memory efficient as written. Any allocations are in strings.Join which is written to minimize memory allocations.
I suspect that the comment was accidentally copied and pasted from this code in the net/http package:
// TODO: could do better allocation-wise here, but trailers are rare,
// so being lazy for now.
if _, err := io.WriteString(w, "Trailer: "+strings.Join(keys, ",")+"\r\n"); err != nil {
    return err
}
This snippet has the following possible allocations:
[]byte created in strings.Join for constructing the result
string conversion result returned by strings.Join
string result for expression "Trailer: "+strings.Join(keys, ",")+"\r\n"
The []byte conversion result used in io.WriteString
A more memory efficient approach is to allocate a single []byte for the data to be written.
n := len("Trailer: ") + len("\r\n")
for _, s := range keys {
    n += len(s) + 1
}
p := make([]byte, 0, n-1) // subtract 1 for len(keys) - 1 commas
p = append(p, "Trailer: "...)
for i, s := range keys {
    if i > 0 {
        p = append(p, ',')
    }
    p = append(p, s...)
}
p = append(p, "\r\n"...)
w.Write(p)

Bitmasking conversion of CPU ids with Go

I have a mask that encodes cpu_ids as set bits (0xA00000800000 for 3 CPUs) which I want to convert into a string of comma-separated cpu_ids: "0,2,24".
I did the following Go implementation (I am a Go beginner). Is it the best way to do it? In particular, the handling of byte buffers seems inefficient!
package main

import (
    "bytes"
    "fmt"
    "strconv"
)

func main() {
    cpuMap := "0xA00000800000"
    cpuIds := getCpuIds(cpuMap)
    fmt.Println(cpuIds)
}

func getCpuIds(cpuMap string) string {
    // getting the cpu ids
    cpu_ids_i, _ := strconv.ParseInt(cpuMap, 0, 64) // int from string
    cpu_ids_b := strconv.FormatInt(cpu_ids_i, 2)    // binary as string
    var buff bytes.Buffer
    for i, runeValue := range cpu_ids_b {
        // take care! ranging over a string yields code points (runes), not bytes
        if runeValue == '1' {
            buff.WriteString(fmt.Sprintf("%d", i))
        }
        if (i+1 < len(cpu_ids_b)) && (runeValue == '1') {
            buff.WriteString(",")
        }
    }
    cpuIds := buff.String()
    // remove last comma
    cpuIds = cpuIds[:len(cpuIds)-1]
    return cpuIds
}
Returns:
"0,2,24"
What you're doing is essentially outputting the indices of the "1"s in the binary representation from left to right, counting the indices from the left (which is unusual).
You can achieve the same using bitmasks and bitwise operators, without converting the number to a binary string. I would also return a slice of indices instead of a formatted string; it is easier to work with.
To test whether the lowest (rightmost) bit is 1, you can use x&0x01 == 1, and to shift a whole number one bit to the right: x >>= 1. After a shift, the rightmost bit "disappears" and the previously 2nd bit becomes the 1st, so you can test again with the same logic. You may loop until the number is greater than 0 (which means it still has 1-bits).
See this question for more examples of bitwise operations: Difference between some operators "|", "^", "&", "&^". Golang
Of course, if we test the rightmost bit and shift right, we get the bits (indices) in reverse order compared to what you want, and the indices are counted from the right, so we have to correct this before returning the result.
So the solution looks like this:
func getCpuIds(cpuMap string) (r []int) {
    ci, err := strconv.ParseInt(cpuMap, 0, 64)
    if err != nil {
        panic(err)
    }
    count := 0
    for ; ci > 0; count, ci = count+1, ci>>1 {
        if ci&0x01 == 1 {
            r = append(r, count)
        }
    }
    // Indices are from the right, correct it:
    for i, v := range r {
        r[i] = count - v - 1
    }
    // Result is in reverse order:
    for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
        r[i], r[j] = r[j], r[i]
    }
    return
}
Output (try it on the Go Playground):
[0 2 24]
If for some reason you need the result as a comma-separated string, this is how you can obtain it:
buf := &bytes.Buffer{}
for i, v := range cpuIds {
    if i > 0 {
        buf.WriteString(",")
    }
    buf.WriteString(strconv.Itoa(v))
}
cpuIdsStr := buf.String()
fmt.Println(cpuIdsStr)
Output (try it on the Go Playground):
0,2,24
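(Sticking with the main question's Go-to-Python theme, the same left-indexed bit walk is only a few lines in Python; get_cpu_ids is an illustrative name, not a library function.)

def get_cpu_ids(cpu_map: str) -> list[int]:
    n = int(cpu_map, 0)     # base 0 accepts the "0x" prefix, like ParseInt(cpuMap, 0, 64)
    width = n.bit_length()  # number of binary digits
    # set-bit positions, counted from the left as in the question
    return sorted(width - 1 - i for i in range(width) if (n >> i) & 1)

print(get_cpu_ids("0xA00000800000"))                            # [0, 2, 24]
print(",".join(str(i) for i in get_cpu_ids("0xA00000800000")))  # 0,2,24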

Go: convert rune (string) to string representation of the binary

This is just in case someone else is learning Golang and is wondering how to convert from a string to a string representation in binary.
Long story short, I have been looking at the standard library without being able to find the right call. So I started with something similar to the following:
func RuneToBinary(r rune) string {
    var buf bytes.Buffer
    b := []int64{128, 64, 32, 16, 8, 4, 2, 1}
    v := int64(r)
    for i := 0; i < len(b); i++ {
        t := v - b[i]
        if t >= 0 {
            fmt.Fprintf(&buf, "1")
            v = t
        } else {
            fmt.Fprintf(&buf, "0")
        }
    }
    return buf.String()
}
This is all well and dandy, but after a couple of days of looking around I found that I should have been using the fmt package instead and just formatting the rune with %b:
var r rune
fmt.Printf("input: %b ", r)
Is there a better way to do this?
Thanks
Standard library support
fmt.Printf("%b", r) - this solution is already very compact and easy to write and understand. If you need the result as a string, you can use the analog Sprintf() function:
s := fmt.Sprintf("%b", r)
You can also use the strconv.FormatInt() function which takes a number of type int64 (so you first have to convert your rune) and a base where you can pass 2 to get the result in binary representation:
s := strconv.FormatInt(int64(r), 2)
Note that in Go rune is just an alias for int32; the two types are one and the same (you may simply refer to the type by two names).
Doing it manually ("Simple but Naive"):
If you want to do it "manually", there is a much simpler solution than your original. You can test the lowest bit with r&1 and shift all the bits to the right with r >>= 1. Just "loop" over all the bits and prepend either "1" or "0" depending on the bit:
Note this is just for demonstration, it is nowhere near optimal regarding performance (generates "redundant" strings):
func RuneToBin(r rune) (s string) {
    if r == 0 {
        return "0"
    }
    for digits := []string{"0", "1"}; r > 0; r >>= 1 {
        s = digits[r&1] + s
    }
    return
}
Note: negative numbers are not handled by the function. If you also want to handle negative numbers, you can check for that first, proceed with the positive value, and start the return value with a minus '-' sign. This also applies to the other manual solution below.
Manual Performance-wise solution:
For a fast solution we shouldn't append strings. Since strings in Go are just UTF-8 encoded byte slices, appending a digit is just appending the byte value of the rune '0' or '1', which is a single byte (not multiple). So we can allocate a big enough array (rune is 32 bits, so at most 32 binary digits), fill it backwards so we won't even have to reverse it at the end, and return the used part of the array converted to a string. Note that I don't even call the built-in append function to append the binary digits; I just set the respective element of the array in which I build the result:
func RuneToBinFast(r rune) string {
    if r == 0 {
        return "0"
    }
    b, i := [32]byte{}, 31
    for ; r > 0; r, i = r>>1, i-1 {
        if r&1 == 0 {
            b[i] = '0'
        } else {
            b[i] = '1'
        }
    }
    return string(b[i+1:])
}
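(For the Python side of the main question, the same binary formatting is a one-liner via format() or an f-string; shown here only for comparison.)

r = ord('ß')    # 223
format(r, "b")  # '11011111' -- the counterpart of fmt.Sprintf("%b", r)
f"{r:b}"        # '11011111'
f"{r:032b}"     # zero-padded to 32 digits, similar to the fixed-size buffer above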

Overhead of converting from []byte to string and vice-versa

I always seem to be converting strings to []byte and back to string again, over and over. Is there a lot of overhead in this? Is there a better way?
For example, here is a function that accepts a UTF-8 string, normalizes it, removes accents, then converts special characters to their ASCII equivalents:
var transliterations = map[rune]string{'Æ':"AE",'Ð':"D",'Ł':"L",'Ø':"OE",'Þ':"Th",'ß':"ss",'æ':"ae",'ð':"d",'ł':"l",'ø':"oe",'þ':"th",'Œ':"OE",'œ':"oe"}
func RemoveAccents(s string) string {
    b := make([]byte, len(s))
    t := transform.Chain(norm.NFD, transform.RemoveFunc(isMn), norm.NFC)
    _, _, e := t.Transform(b, []byte(s), true)
    if e != nil {
        panic(e)
    }
    r := string(b)
    var f bytes.Buffer
    for _, c := range r {
        temp := rune(c)
        if val, ok := transliterations[temp]; ok {
            f.WriteString(val)
        } else {
            f.WriteRune(temp)
        }
    }
    return f.String()
}
So I'm starting with a string because that's what I get, then converting it to a byte array, then back to a string, then to a byte array again, then back to a string again. Surely this is unnecessary, but I can't figure out how to avoid it. And does it really have a lot of overhead, or do I not need to worry about slowing things down with excessive conversions?
(Also, if anyone has the time: I've not yet figured out how bytes.Buffer actually works. Would it not be better to initialize a buffer at 2x the size of the string, which is the maximum output size of the return value?)
In Go, strings are immutable, so any change creates a new string. As a general rule, convert from a string to a byte or rune slice once, and convert back to a string once. If you don't know the exact size, over-allocate small and transient allocations to provide a safety margin and avoid reallocations.
For example,
package main

import (
    "bytes"
    "fmt"
    "unicode"
    "unicode/utf8"

    "code.google.com/p/go.text/transform"
    "code.google.com/p/go.text/unicode/norm"
)

var isMn = func(r rune) bool {
    return unicode.Is(unicode.Mn, r) // Mn: nonspacing marks
}

var transliterations = map[rune]string{
    'Æ': "AE", 'Ð': "D", 'Ł': "L", 'Ø': "OE", 'Þ': "Th",
    'ß': "ss", 'æ': "ae", 'ð': "d", 'ł': "l", 'ø': "oe",
    'þ': "th", 'Œ': "OE", 'œ': "oe",
}

func RemoveAccents(b []byte) ([]byte, error) {
    mnBuf := make([]byte, len(b)*125/100)
    t := transform.Chain(norm.NFD, transform.RemoveFunc(isMn), norm.NFC)
    n, _, err := t.Transform(mnBuf, b, true)
    if err != nil {
        return nil, err
    }
    mnBuf = mnBuf[:n]
    tlBuf := bytes.NewBuffer(make([]byte, 0, len(mnBuf)*125/100))
    for i, w := 0, 0; i < len(mnBuf); i += w {
        r, width := utf8.DecodeRune(mnBuf[i:])
        if s, ok := transliterations[r]; ok {
            tlBuf.WriteString(s)
        } else {
            tlBuf.WriteRune(r)
        }
        w = width
    }
    return tlBuf.Bytes(), nil
}

func main() {
    in := "test stringß"
    fmt.Println(in)
    inBytes := []byte(in)
    outBytes, err := RemoveAccents(inBytes)
    if err != nil {
        fmt.Println(err)
    }
    out := string(outBytes)
    fmt.Println(out)
}
Output:
test stringß
test stringss
There is no answer to this question. If these conversions are a performance bottleneck in your application, you should fix them. If not: not.
Did you profile your application under realistic load, and is RemoveAccents the bottleneck? No? Then why bother?
Really: I assume one could do better (in the sense of less garbage, fewer iterations and fewer conversions), e.g. by chaining in some "TransliterationTransformer". But I doubt it would be worth the hassle.
There is a small overhead in converting a string to a byte slice (not an array; that's a different type), namely allocating the space for the byte slice.
Strings are their own type and are an interpretation of a sequence of bytes. But not every sequence of bytes is a useful string. Strings are also immutable. If you look at the strings package, you will see that strings get sliced a lot.
In your example you can omit the second conversion back to string. You can also range over a byte slice.
As with every question about performance: you will probably need to measure. Is the allocation of byte slices really your bottleneck?
You can initialize your bytes.Buffer like so:
f := bytes.NewBuffer(make([]byte, 0, len(s)*2))
where you have a size of 0 and a capacity of 2x the size of your string. If you can estimate the size of your buffer, it is probably good to do that. It will save you a few reallocations of the underlying byte slices.
