In Go, I am trying to create a function that reads and processes the next line of input:
// Read a string of hex from stdin and parse to an array of bytes
func ReadHex() []byte {
	r := bufio.NewReader(os.Stdin)
	t, _ := r.ReadString('\n')
	data, _ := hex.DecodeString(strings.TrimSpace(t))
	return data
}
Unfortunately, this only works the first time it is called. It captures the first line but is unable to capture subsequent lines piped via standard input.
I suspect that if the same persistent bufio.Reader were reused on each call it would work, but I haven't been able to achieve this without passing the reader manually on every call.
Yes. The problem is that each call to ReadHex creates a new bufio.Reader, which reads ahead and buffers more than one line from stdin; whatever it buffered beyond the first line is lost when that reader is discarded. Keep a single bufio.Reader alive across calls, for example by returning a closure over it:
package main

import (
	"bufio"
	"encoding/hex"
	"fmt"
	"log"
	"os"
	"strings"
)

func ReadFunc() func() []byte {
	r := bufio.NewReader(os.Stdin)
	return func() []byte {
		t, err := r.ReadString('\n')
		if err != nil {
			log.Fatal(err)
		}
		data, err := hex.DecodeString(strings.TrimSpace(t))
		if err != nil {
			log.Fatal(err)
		}
		return data
	}
}

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		log.Fatal(err)
	}
	os.Stdin = r
	w.Write([]byte(`ffff
cafebabe
ff
`))
	w.Close()

	ReadHex := ReadFunc()
	fmt.Println(ReadHex())
	fmt.Println(ReadHex())
	fmt.Println(ReadHex())
}
Output:
[255 255]
[202 254 186 190]
[255]
Using a struct, try this:
package main

import (
	"bufio"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
)

// InputReader wraps a bufio.Reader.
type InputReader struct {
	bufio.Reader
}

// New creates an InputReader.
func New(rd io.Reader) *InputReader {
	return &InputReader{Reader: *bufio.NewReader(rd)}
}

// ReadHex reads a line of hex from the reader and parses it to a slice of bytes.
func (r *InputReader) ReadHex() []byte {
	t, err := r.ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	data, err := hex.DecodeString(strings.TrimSpace(t))
	if err != nil {
		log.Fatal(err)
	}
	return data
}

func main() {
	r, w, err := os.Pipe()
	if err != nil {
		log.Fatal(err)
	}
	os.Stdin = r
	w.Write([]byte(`ffff
cafebabe
ff
`))
	w.Close()

	rdr := New(os.Stdin)
	fmt.Println(rdr.ReadHex())
	fmt.Println(rdr.ReadHex())
	fmt.Println(rdr.ReadHex())
}
I am trying to convert raw byte strings into UUIDs in my program as follows:
Case1:
package main

import (
	"fmt"
	"strconv"

	"github.com/google/uuid"
)

func main() {
	s := `"\220\254\0021\265\235O~\244\326\216\"\227c\245\002"`
	s2, err := strconv.Unquote(s)
	if err != nil {
		panic(err)
	}
	by := []byte(s2)
	u, err := uuid.FromBytes(by)
	if err != nil {
		panic(err)
	}
	fmt.Println(u.String())
}
output:
90ac0231-b59d-4f7e-a4d6-8e229763a502
Case2:
package main

import (
	"fmt"
	"strconv"

	"github.com/google/uuid"
)

func main() {
	s := `"\235\273\'\021\003\261#\022\226\275o\265\322\002\211\263"`
	s2, err := strconv.Unquote(s)
	if err != nil {
		panic(err)
	}
	by := []byte(s2)
	u, err := uuid.FromBytes(by)
	if err != nil {
		panic(err)
	}
	fmt.Println(u.String())
}
output:
panic: invalid syntax
goroutine 1 [running]:
main.main()
/tmp/sandbox1756881518/prog.go:14 +0x149
The above program works with the string "\220\254\0021\265\235O~\244\326\216\"\227c\245\002" but fails to convert the string "\235\273\'\021\003\261#\022\226\275o\265\322\002\211\263" into a UUID. How can I convert such strings into UUIDs?
It fails because of the \'. Inside backticks, all backslashes are literal characters rather than escape sequences, so you are passing a raw backslash \ followed by a single quote ' to strconv.Unquote. In a double-quoted Go string literal, \' is not a valid escape (it is only allowed in rune literals), so Unquote reports invalid syntax. There are two workarounds here:
First
Just replace this line:
s := `"\235\273\'\021\003\261#\022\226\275o\265\322\002\211\263"`
with this:
s := `"\235\273'\021\003\261#\022\226\275o\265\322\002\211\263"`
So there is a plain ' instead of \'. If you need to convert such strings programmatically, use the second approach.
Second
Import "strings":
import (
	"fmt"
	"strconv"
	"strings"

	"github.com/google/uuid"
)
And replace \' with ':
s = strings.ReplaceAll(s, `\'`, `'`)
So the full code looks like this now:
package main

import (
	"fmt"
	"strconv"
	"strings"

	"github.com/google/uuid"
)

func main() {
	s := `"\235\273\'\021\003\261#\022\226\275o\265\322\002\211\263"`
	s = strings.ReplaceAll(s, `\'`, `'`)
	s2, err := strconv.Unquote(s)
	if err != nil {
		fmt.Println(err)
	}
	by := []byte(s2)
	u, err := uuid.FromBytes(by)
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(u.String())
}
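As an aside, the escaping rule is easy to verify on its own. This small illustrative program (separate from the fix above) just feeds both forms to strconv.Unquote:

package main

import (
	"fmt"
	"strconv"
)

func main() {
	// \' is not a valid escape inside a double-quoted string literal
	// (it is only allowed in rune literals), so Unquote rejects it.
	_, err := strconv.Unquote(`"\'"`)
	fmt.Println(err) // invalid syntax

	// With a plain ' the same string unquotes fine.
	s, err := strconv.Unquote(`"'"`)
	fmt.Println(s, err) // ' <nil>
}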
I am trying to automate a process in Go. I have been able to run the tasks concurrently with goroutines, but their output comes out mixed together.
I was wondering if there is a way to show each task's output as one block, in the order the tasks finish. So if task A completes before task B, we show A's output before B's, or vice versa.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"sync"
)

var url string
var wg sync.WaitGroup

func nikto() {
	cmd := exec.Command("nikto", "-h", url)
	cmd.Stdout = os.Stdout
	err := cmd.Run()
	if err != nil {
		log.Fatal(err)
	}
	wg.Done()
}

func whois() {
	cmd := exec.Command("whois", "google.co")
	cmd.Stdout = os.Stdout
	err := cmd.Run()
	if err != nil {
		log.Fatal(err)
	}
	wg.Done()
}

func main() {
	fmt.Printf("Please input URL")
	fmt.Scanln(&url)

	wg.Add(1)
	go nikto()

	wg.Add(1)
	go whois()

	wg.Wait()
}
In your program, you pass os.Stdout directly to the commands you invoke to run your child processes. This means the STDOUT of each child process is connected directly to your Go program's standard output, so the output will likely be interleaved if both child processes write at the same time.
The simplest way to fix this is to buffer the output of each child process in your Go program, so you can intercept the output and control when it is printed.
The Cmd type in the os/exec package provides a method called Output() which runs the child process and returns the contents of its STDOUT as a byte slice. Your code can easily be adapted to this pattern and process the results, for example:
func whois() {
	cmd := exec.Command("whois", "google.co")
	out, err := cmd.Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
	wg.Done()
}
Interleaving of output
If you use functions in the fmt package to print output, there is no guarantee that concurrent calls to fmt.Println will not interleave their output.
To prevent interleaving, you can serialize access to STDOUT yourself, or use a logger that is safe for concurrent use (such as the log package; a short sketch of that follows the example below). Here is an example of serializing access to STDOUT in the Go process:
package main

import (
	"fmt"
	"log"
	"os/exec"
)

var url string

func nikto(outChan chan<- []byte) {
	cmd := exec.Command("nikto", "-h", url)
	bs, err := cmd.Output()
	if err != nil {
		log.Fatal(err)
	}
	outChan <- bs
}

func whois(outChan chan<- []byte) {
	cmd := exec.Command("whois", "google.com")
	bs, err := cmd.Output()
	if err != nil {
		log.Fatal(err)
	}
	outChan <- bs
}

func main() {
	outChan := make(chan []byte)

	fmt.Printf("Please input URL")
	fmt.Scanln(&url)

	go nikto(outChan)
	go whois(outChan)

	for i := 0; i < 2; i++ {
		bs := <-outChan
		fmt.Println(string(bs))
	}
}
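For completeness, here is a minimal sketch of the logger alternative mentioned above. It still collects each command's full output first; log.Logger serializes concurrent writes, and each command's output goes out in a single Print call, so the blocks cannot interleave. The command list and the "example.com" target here are placeholders:

package main

import (
	"log"
	"os"
	"os/exec"
	"sync"
)

// logger is safe for concurrent use; each Print call is written as one unit.
var logger = log.New(os.Stdout, "", 0)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).Output()
	if err != nil {
		logger.Print(err)
		return
	}
	logger.Print(string(out)) // one call per command, so blocks do not interleave
}

func main() {
	var wg sync.WaitGroup
	for _, c := range [][]string{{"whois", "google.com"}, {"nikto", "-h", "example.com"}} {
		wg.Add(1)
		go func(c []string) {
			defer wg.Done()
			run(c[0], c[1:]...)
		}(c)
	}
	wg.Wait()
}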
I have the following Go code which will eventually fill the disk and fail with ENOSPC (it is just a proof of concept). How can I determine from the err returned by the Write call that it indeed failed because of ENOSPC? In other words, I need a way to get at the errno after the write operation.
package main

import (
	"log"
	"os"
)

func main() {
	fd, _ := os.Create("dump.txt")
	defer fd.Close()
	for {
		buf := make([]byte, 1024)
		_, err := fd.Write(buf)
		if err != nil {
			log.Fatalf("%T %v", err, err)
		}
	}
}
EDIT: Updated the program as #FUZxxl suggested:
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	fd, _ := os.Create("dump.txt")
	defer fd.Close()
	for {
		buf := make([]byte, 1024)
		_, err := fd.Write(buf)
		if err != nil {
			log.Printf("%T %v\n", err, err)
			errno, ok := err.(syscall.Errno)
			if ok {
				log.Println("type assert ok")
				if errno == syscall.ENOSPC {
					log.Println("got ENOSPC")
				}
			} else {
				log.Println("type assert not ok")
			}
			break
		}
	}
}
However, I'm not getting the expected result. Here is the output:
2015/02/15 10:13:27 *os.PathError write dump.txt: no space left on device
2015/02/15 10:13:27 type assert not ok
File operations generally return an *os.PathError; type-assert err to *os.PathError and examine its Err field to find the underlying cause, like this:
patherr, ok := err.(*os.PathError)
if ok && patherr.Err == syscall.ENOSPC {
	log.Println("Out of disk space!")
}
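On Go 1.13 and later there is a simpler option: *os.PathError implements Unwrap, so errors.Is can walk the error chain for you and no type assertion is needed. A minimal sketch (it assumes the "errors" and "syscall" packages are imported):
// errors.Is unwraps the *os.PathError and compares the underlying
// error against syscall.ENOSPC.
if errors.Is(err, syscall.ENOSPC) {
	log.Println("Out of disk space!")
}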
I am trying to create a simple program in Go that reads lines from a text file and prints them out to the console. I have spent a lot of time going over my code and I simply can't understand why only the last line is being printed to the screen. Can anyone tell me where I am going wrong here? Everything here should compile and run.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func Readln(r *bufio.Reader) (string, error) {
	var (
		isPrefix bool  = true
		err      error = nil
		line, ln []byte
	)
	for isPrefix && err == nil {
		line, isPrefix, err = r.ReadLine()
		ln = append(ln, line...)
	}
	return string(ln), err
}

func main() {
	f, err := os.Open("tickers.txt")
	if err != nil {
		fmt.Printf("error opening file: %v\n", err)
		os.Exit(1)
	}
	r := bufio.NewReader(f)
	s, e := Readln(r)
	for e == nil {
		fmt.Println(s)
		s, e = Readln(r)
	}
}
Your Readln function and the loop in main look correct: against a file with ordinary \n line endings they print every line. I therefore suspect that the problem is in your tickers.txt file's line endings: if the file uses bare carriage returns (\r) as separators, ReadLine never sees a \n, the whole file comes back as a single long "line", and when it is printed each \r moves the cursor back to the start of the line, so every line overwrites the previous one and only the last line stays visible. The docs for ReadLine() also indicate that for most situations a Scanner is more suitable.
The following SO question has some useful information on alternative implementations: reading file line by line in go
I then used the example in the above question to re-implement your main function as follows:
f, err := os.Open("tickers.txt")
if err != nil {
	fmt.Printf("error opening file: %v\n", err)
	os.Exit(1)
}

scanner := bufio.NewScanner(f)
for scanner.Scan() {
	fmt.Println(scanner.Text())
}
if err := scanner.Err(); err != nil {
	fmt.Println(err)
}
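Note that bufio.Scanner's default split function also only recognizes \n (dropping a preceding \r), so a file with bare \r line endings would show the same symptom. If that turns out to be the case, one option is a custom split function; this is a rough sketch (the helper name scanAnyLine is mine, and it needs the "bytes" import):

// scanAnyLine splits on "\r\n", "\n", or a bare "\r".
func scanAnyLine(data []byte, atEOF bool) (advance int, token []byte, err error) {
	if atEOF && len(data) == 0 {
		return 0, nil, nil
	}
	if i := bytes.IndexAny(data, "\r\n"); i >= 0 {
		adv := i + 1
		if data[i] == '\r' {
			if i == len(data)-1 && !atEOF {
				return 0, nil, nil // need more data to see whether a '\n' follows
			}
			if i+1 < len(data) && data[i+1] == '\n' {
				adv++ // treat "\r\n" as a single line ending
			}
		}
		return adv, data[:i], nil
	}
	if atEOF {
		return len(data), data, nil // final line without a terminator
	}
	return 0, nil, nil // request more data
}

Install it with scanner.Split(scanAnyLine) before the Scan loop.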
I already found the encoding/binary package for this, but it depends on the reflect package, so it doesn't work with uncapitalized (that is, unexported) struct fields. It took me a week to figure that out, and I still have a question: if the struct fields should not be exported, how do I dump them easily into binary data?
EDIT: Here's an example. If you capitalize the names of the fields of the Data struct, it works properly. But the Data struct is intended to be an abstract type, so I don't want to export these fields.
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

type Data struct {
	id   int32
	name [16]byte
}

func main() {
	d := Data{id: 1}
	copy(d.name[:], []byte("tree"))
	buffer := new(bytes.Buffer)

	// with exported (capitalized) fields d is written properly;
	// with the unexported fields above it is not
	binary.Write(buffer, binary.LittleEndian, d)
	fmt.Println(buffer.Bytes())

	// try to read...
	buffer = bytes.NewBuffer(buffer.Bytes())
	var e = new(Data)
	err := binary.Read(buffer, binary.LittleEndian, e)
	fmt.Println(e, err)
}
Your best option would probably be to use the gob package and let your struct implement the GobEncoder and GobDecoder interfaces in order to serialize and deserialize its private fields.
This is safe, platform independent, and efficient. And you only have to add the GobEncode and GobDecode methods to structs with unexported fields, so the rest of your code stays uncluttered.
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
)

// Data has only unexported fields, as in the question.
type Data struct {
	id   int32
	name [16]byte
}

func (d *Data) GobEncode() ([]byte, error) {
	w := new(bytes.Buffer)
	encoder := gob.NewEncoder(w)
	err := encoder.Encode(d.id)
	if err != nil {
		return nil, err
	}
	err = encoder.Encode(d.name)
	if err != nil {
		return nil, err
	}
	return w.Bytes(), nil
}

func (d *Data) GobDecode(buf []byte) error {
	r := bytes.NewBuffer(buf)
	decoder := gob.NewDecoder(r)
	err := decoder.Decode(&d.id)
	if err != nil {
		return err
	}
	return decoder.Decode(&d.name)
}

func main() {
	d := Data{id: 7}
	copy(d.name[:], []byte("tree"))
	buffer := new(bytes.Buffer)

	// writing; pass a pointer so gob uses the pointer-receiver GobEncode method
	enc := gob.NewEncoder(buffer)
	err := enc.Encode(&d)
	if err != nil {
		log.Fatal("encode error:", err)
	}

	// reading
	buffer = bytes.NewBuffer(buffer.Bytes())
	e := new(Data)
	dec := gob.NewDecoder(buffer)
	err = dec.Decode(e)
	fmt.Println(e, err)
}
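If what you actually need is the fixed little-endian layout from your original example rather than gob's own wire format, another option is to write the unexported fields one at a time from inside the package, for instance behind the standard encoding.BinaryMarshaler/BinaryUnmarshaler interfaces. A rough sketch, reusing the Data type above (it needs the "bytes" and "encoding/binary" imports):

// MarshalBinary writes id and name in little-endian order, field by field.
// binary.Write works here because each field is passed as a plain value,
// not reached through reflection on the struct.
func (d *Data) MarshalBinary() ([]byte, error) {
	w := new(bytes.Buffer)
	if err := binary.Write(w, binary.LittleEndian, d.id); err != nil {
		return nil, err
	}
	if err := binary.Write(w, binary.LittleEndian, d.name); err != nil {
		return nil, err
	}
	return w.Bytes(), nil
}

// UnmarshalBinary is the inverse of MarshalBinary.
func (d *Data) UnmarshalBinary(data []byte) error {
	r := bytes.NewReader(data)
	if err := binary.Read(r, binary.LittleEndian, &d.id); err != nil {
		return err
	}
	return binary.Read(r, binary.LittleEndian, &d.name)
}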