I have a thermal printer connected via USB at /dev/usb/lp0. I want to print a label by writing my EPL (Zebra commands) to this file, but the printer doesn't receive all the written bytes. I suspect some kind of buffering but don't know how to avoid it.
I have a working Java example:
FileOutputStream fo = new FileOutputStream("/dev/usb/lp0");
fo.write(bytes, 0, bytes.length);
fo.flush();
fo.close();
But my Go code doesn't work:
f, err := os.OpenFile("/dev/usb/lp0", os.O_RDWR, 0644) // the mode is octal and is ignored here anyway, since the device node already exists
if err != nil {
    return err
}
n, err := f.Write(bytes)
if err != nil {
    f.Close()
    return err
}
log.Printf("Wrote %d bytes", n)
return f.Close()
Putting the printer into Trace mode, I found out that the printer receives only the first 128 of 181 bytes. When I add a second write after the first one, like this
f.WriteString(" ")
the sequence gets printed as submitted. Using f.Sync() to flush doesn't work, because it isn't supported for device files like /dev/usb/*. How do I correctly write to the printer without using a second write call?
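For reference, one workaround I could try (an untested sketch using golang.org/x/sys/unix, not a confirmed fix) is to issue the write(2) calls myself and resubmit whatever the driver did not accept:
package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

// printRaw writes data to the device at the syscall level, resubmitting
// whatever the driver did not accept in the previous write(2) call.
func printRaw(device string, data []byte) error {
    fd, err := unix.Open(device, unix.O_RDWR, 0)
    if err != nil {
        return err
    }
    defer unix.Close(fd)

    for len(data) > 0 {
        n, err := unix.Write(fd, data)
        if err != nil {
            return fmt.Errorf("write to %s: %w", device, err)
        }
        data = data[n:] // keep only the bytes not yet accepted
    }
    return nil
}
If the driver really accepts only 128 bytes per write, the loop should push the remaining bytes through on the next call.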
I want to create an empty file with a size of 1 GB without initializing its contents. Is there a way to quickly create a file of a specified size, like the following Go code does?
Go code:
func CreateFileBySize(file string, size int64) error {
    f, err := os.Create(file)
    if err != nil {
        return err
    }
    defer f.Close()
    if err := f.Truncate(size); err != nil {
        return err
    }
    return nil
}
Truncate changes the size of the file. It does not change the I/O offset.
Is there an existing similar method in Rust?
As far as I know there is no support for sparse files in the Rust standard library (assuming that's what you mean; I have no idea if it is).
According to https://www.systutorials.com/handling-sparse-files-on-linux/ and https://users.rust-lang.org/t/rust-create-sparse-file/57276/2 you can just seek forward and write a single byte, and you'll get a sparse file.
Alternatively, you can use the libc crate and call fallocate(2) and/or pwrite(2), depending on your requirements.
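For comparison with the question's Go version, the same seek-forward-and-write-one-byte trick looks like this in Go (a sketch assuming size >= 1; whether the file ends up sparse depends on the filesystem):
package main

import (
    "io"
    "os"
)

// createSparse makes a file of exactly `size` bytes by seeking past the
// end and writing a single byte. On filesystems with sparse-file support,
// the skipped range occupies no disk blocks.
func createSparse(path string, size int64) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()
    if _, err := f.Seek(size-1, io.SeekStart); err != nil {
        return err
    }
    _, err = f.Write([]byte{0}) // file length is now exactly `size`
    return err
}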
I just read some Go code that does something along the following lines:
type someType struct {
    ...
    ...
    rpipe io.ReadCloser
    wpipe io.WriteCloser
}
var inst someType
inst.rpipe, inst.wpipe, _ = os.Pipe() // `=`, not `:=`: struct fields cannot be newly declared
cmd := exec.Command("some_binary", args...)
cmd.Stdout = inst.wpipe
cmd.Stderr = inst.wpipe
if err := cmd.Start(); err != nil {
    ....
}
inst.wpipe.Close()
inst.wpipe = nil
some_binary is a long-running process.
Why is inst.wpipe closed and set to nil? What would happen if it's not closed? Is it common/necessary to close inst.wpipe?
Is dup2(pipe_fd[1], 1) the C analogue of cmd.Stdout = inst.wpipe; inst.wpipe.Close()?
That code is typical of a program that wants to read output generated by some other program. The os.Pipe() function returns a connected pair of os.File entities (or, on error—which should not be simply ignored—doesn't) where a write on the second (w or wpipe) entity shows up as readable bytes on the first (r / rpipe) entity. But—this is the key to half the answer to your first question—how will a reader know when all writers are finished writing?
For a reader to get an EOF indication, all writers that have or had access to the write side of the pipe must call the close operation. By passing the write side of the pipe to a program that we start with cmd.Start(), we allow that command to access the write side of the pipe. When that command closes that pipe, one of the entities with access has closed the pipe. But another entity with access hasn't closed it yet: we have write access.
To see an EOF, then, we must close off our own access to wpipe, with wpipe.Close(). So that answers the first half of:
Why is inst.wpipe closed and set to nil?
The set-to-nil part may or may not have any function; you should inspect the rest of the code to find out if it does.
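To make the EOF mechanics concrete, here is a minimal, self-contained version of the pattern (ls -l is a stand-in for some_binary):
package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
    "os/exec"
)

func main() {
    r, w, err := os.Pipe()
    if err != nil {
        log.Fatal(err)
    }

    cmd := exec.Command("ls", "-l") // stand-in for some_binary
    cmd.Stdout = w
    cmd.Stderr = w
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    w.Close() // drop our write handle; the child now holds the only one

    sc := bufio.NewScanner(r)
    for sc.Scan() { // Scan returns false at EOF, i.e. once the child closes its side
        fmt.Println("got:", sc.Text())
    }
    if err := cmd.Wait(); err != nil {
        log.Fatal(err)
    }
}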
Is dup2(pipe_fd[1], 1) the C analogue of cmd.Stdout = inst.wpipe; inst.wpipe.Close()?
Not precisely. The dup2 level is down in the POSIX OS area, while cmd.Stdout is at a higher (OS-independent) level. The POSIX implementation of cmd.Start() will wind up calling dup2 (or something equivalent) like this after calling fork (or something equivalent). The POSIX equivalent of inst.wpipe.Close() is close(wfd), where wfd is the POSIX file number in wpipe.
In C code that doesn't have any higher level wrapping around it, we'd have something like:
int fds[2];
if (pipe(fds) < 0) ... handle error case ...
pid = fork();
switch (pid) {
case -1: ... handle error ...
case 0: /* child */
    if (dup2(fds[1], 1) < 0 || dup2(fds[1], 2) < 0) ... handle error ...
    close(fds[0]); /* the child needs neither original descriptor: */
    close(fds[1]); /* stdout and stderr now hold the write side */
    if (execve(prog, args, env) < 0) ... handle error ...
    /* NOTREACHED */
default: /* parent */
    if (close(fds[1]) < 0) ... handle error ...
    ... read from fds[0] ...
}
(although if we're careful enough to check for an error from close, we probably should be careful enough to check whether the pipe system call gave us back descriptors 0 and 1, or 1 and 2, or 2 and 3, here—though perhaps we handle this earlier by making sure that 0, 1, and 2 are at least open to /dev/null).
The capture stream writes NetFlow v5 dumps to the server. When reading them from Go, decoding either fails outright, because the first two bytes are not a NetFlow version, or, if I slice the packet from byte 11 onward, it sometimes decodes the packet completely and sometimes still errors. Example of the output it should produce:
0.0.0.10
157.121.64.85
138.166.72.38
107.207.29.71
108.177.99.171
243.117.126.147
But the addresses NetFlow writes to the dump are the ones behind NAT.
When working with flow-tools everything is displayed correctly. I have used various libraries to unmarshal into the structure, but the output is the same everywhere: the one shown above.
go func() {
    for f := range fileNamesCatalog {
        file, err := ioutil.ReadFile(f)
        if err != nil {
            log.Printf("failed to open file %v: %v", f, err) // fmt.Errorf only builds an error, it does not print
            continue
        }
        decoder := netflow.NewDecoder(session.New())
        body, err := decoder.Read(bytes.NewReader(file[11:]))
        if err != nil {
            log.Printf("failed to decode %v: %v", f, err)
            continue
        }
        switch packet := body.(type) {
        case *netflow5.Packet:
            filter(packet, net.ParseIP("157.121.64.85"), packetsChannel)
        }
    }
}()
I am using two packages:
https://github.com/tehmaze/netflow
https://github.com/VerizonDigital/vflow
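Since a raw NetFlow v5 packet begins with a big-endian uint16 version field, one debugging step (a sketch; inspectHeader is a made-up helper needing only encoding/binary and log) is to print what the dump actually starts with:
package main

import (
    "encoding/binary"
    "log"
)

// inspectHeader prints the first two bytes of a dump and the value they
// would decode to if they were a NetFlow version field.
func inspectHeader(raw []byte) {
    if len(raw) < 2 {
        log.Println("dump is shorter than a NetFlow version field")
        return
    }
    log.Printf("leading bytes: % x", raw[:2])
    log.Printf("as big-endian version: %d (raw NetFlow v5 would give 5)",
        binary.BigEndian.Uint16(raw[:2]))
}
If the dumps come from flow-tools, they may start with flow-tools' own file header rather than a raw NetFlow PDU, which would explain why the first two bytes are not a version and why the code above has to skip leading bytes with file[11:].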
How would a process read its own output stream? I am writing automated tests which start a few application sub-processes (applications) in the same process as the test. Therefore, the standard out is a mix of test output and application output.
I want to read the output stream at runtime and fail the test if I see errors from the application. Is this possible/feasible? If so, how do I do it?
Note: I know I could start the applications as their own separate processes and then read their output streams. That's a lot of work from where I am now.
Also note, this is not a dupe of How to test a function's output (stdout/stderr) in Go unit tests, although that ticket is similar and helpful. The other ticket is about capturing output for a single function call. This ticket is about reading the entire stream, continuously. The correct answer is also a little different - it requires a pipe.
Yes. You may use os.Pipe(), then process it yourself:
tmp := os.Stdout // keep the real stdout so it can be restored later
r, w, err := os.Pipe()
if err != nil {
    panic(err)
}
os.Stdout = w // anything printed through os.Stdout now lands in the pipe
Or divert os.Stdout to another file, or collect the output into a strings.Builder.
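For instance, a minimal file-diversion sketch (the path is hypothetical):
f, err := os.Create("/tmp/captured.log") // hypothetical destination file
if err != nil {
    panic(err)
}
os.Stdout = f // fmt.Println and friends now write to this file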
Here is the detailed answer:
In Go, how do I capture stdout of a function into a string?
A slightly modified version of an answer given in In Go, how do I capture stdout of a function into a string? using an os.Pipe (a form of IPC):
Pipe returns a connected pair of Files; reads from r return bytes written to w. It returns the files and an error, if any.
As os.Stdout is an *os.File, you could replace it with any file.
package main

import (
    "bytes"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    old := os.Stdout
    r, w, _ := os.Pipe() // TODO: handle error.
    os.Stdout = w

    // All stdout will be captured from here on.
    fmt.Println("this will be captured")

    // Access output and restore previous stdout.
    outc := make(chan string)
    go func() {
        var buf bytes.Buffer
        io.Copy(&buf, r) // TODO: handle error
        outc <- buf.String()
    }()
    w.Close()
    os.Stdout = old
    out := <-outc
    log.Printf("captured: %s", out)
}
One caveat: a pipe's kernel buffer is finite (typically 64 KiB on Linux), so if the code under capture prints more than that before the draining goroutine starts, the writes will block. For large outputs, start the reader goroutine before producing output.
I need to write bytes1 to file1 and bytes2 to file2, and to make sure I will not hit a "no space left on device" error partway through the writes.
Or maybe someone knows how SQL databases implement the integrity of their files?
I've found a way to implement a transaction but don't know the pitfalls of this method. The key element is the truncate syscall, which lets us implement rollback logic. Here is pseudocode:
file1Pos = file1.tell()
file2Pos = file2.tell()
err = file1.write(bytes1)
if err != nil {
    // rollback to previous position
    file1.truncate(file1Pos)
    // the file offset is not changed after truncation
    file1.seek(file1Pos, SEEK_SET)
}
err = file2.write(bytes2)
if err != nil {
    file1.truncate(file1Pos)
    file1.seek(file1Pos, SEEK_SET)
    file2.truncate(file2Pos)
    file2.seek(file2Pos, SEEK_SET)
}
According to Is file append atomic in UNIX?, a single write is atomic on Linux if the file is opened with O_DIRECT. There might be more developments out there if you search for "atomic write".