Go 1.6: frequent File.WriteString calls lead to a large system cache - Linux

In Go 1.6, frequent calls to the File method WriteString lead to a large system file cache.
How can I solve this problem?
go env: linux amd64.
Is this a problem with the Linux system?
code:
package main
import (
"fmt"
"net/http"
"os"
"time"
)
var logCtxCh chan *http.Request
var accessLogFile *os.File
type HandlerHttp struct{}
func (this *HandlerHttp) ServeHTTP(w http.ResponseWriter, req *http.Request) {
sendAccessLog(req)
w.Write([]byte("Hello World"))
}
func main() {
s := &http.Server{
Addr: ":8012",
Handler: &HandlerHttp{},
}
logCtxCh = make(chan *http.Request, 500)
go startAcessLog()
err:= s.ListenAndServe()
fmt.Println(err.Error())
}
func startAcessLog() {
for {
select {
case ctx := <-logCtxCh:
handleAccessLog(ctx)
}
}
}
func sendAccessLog(req *http.Request) {
logCtxCh <- req
}
func handleAccessLog(req *http.Request) {
uri := req.RequestURI
ip := req.RemoteAddr
agent := req.UserAgent()
refer := req.Referer()
method := req.Method
now := time.Now().Format("2006-01-02 15:04:05")
logText := fmt.Sprintf("%s %s %s %s %s %s\n",
now,
ip,
method,
uri,
agent,
refer,
)
fileName := fmt.Sprintf("/data/logs/zyapi/access_zyapi%s.log",
time.Now().Format("2006010215"),
)
writeLog(fileName, logText)
}
func writeLog(fileName, logText string) {
var err error
var exist = true
if _, err = os.Stat(fileName); os.IsNotExist(err) {
exist = false
}
if exist == false {
if accessLogFile != nil {
accessLogFile.Sync()
accessLogFile.Close()
}
accessLogFile, err = os.OpenFile(fileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err == nil {
_, err = accessLogFile.WriteString(logText)
}
if err != nil {
fmt.Errorf(err.Error())
}
} else {
if accessLogFile == nil {
accessLogFile, err = os.OpenFile(fileName, os.O_WRONLY|os.O_APPEND, 0666)
if err != nil {
fmt.Errorf(err.Error())
return
}
}
_, err = accessLogFile.WriteString(logText)
if err != nil {
fmt.Errorf(err.Error())
}
}
}
test:
ab -n100000 -c10 -k "http://127.0.0.1:8012/"
ab -n100000 -c10 -k "http://127.0.0.1:8012/"
ab -n100000 -c10 -k "http://127.0.0.1:8012/"
ab -n100000 -c10 -k "http://127.0.0.1:8012/"
ab -n100000 -c10 -k "http://127.0.0.1:8012/"
After running it several times, the system file cache becomes very large:
CONTAINER CPU % MEM USAGE/LIMIT MEM % NET I/O BLOCK I/O
api_8011 38.47% 6.442GB/6.442GB 100.00% 0B/0B 0B/115.4MB
api_8012 36.90% 6.442GB/6.442GB 99.99% 0B/0B 0B/115.6 MB

There are a bunch of things going on and I can't spot the bug right away, but these things will help:
Use a bufio.Writer as much as possible if you are calling file.WriteString; otherwise every single write is a syscall, which hurts performance.
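For example, a minimal sketch of the buffered approach (not your code; the file name and line count are arbitrary):

package main

import (
    "bufio"
    "log"
    "os"
)

func main() {
    f, err := os.OpenFile("/tmp/access.log", os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    w := bufio.NewWriter(f) // batches small writes: one syscall per full buffer, not per line
    defer w.Flush()         // make sure buffered data reaches the file on exit
    for i := 0; i < 1000; i++ {
        if _, err := w.WriteString("2016-01-02 15:04:05 127.0.0.1 GET / agent referer\n"); err != nil {
            log.Println(err)
        }
    }
    // In a long-running logger you would also Flush periodically or on file rotation.
}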
You don't need to use select in your startAcessLog function:
func startAcessLog() {
for ctx := range logCtxCh {
handleAccessLog(ctx)
}
}
Change your error checks from:
if err != nil {
fmt.Errorf(err.Error())
}
to:
if err != nil {
fmt.Println(err)
}
otherwise you are not printing the errors. fmt.Errorf formats a string, just as fmt.Sprintf does, and returns it as an error; it doesn't print anything at all.
You should guard accessLogFile with a sync.Mutex or write to it via a channel. Why? Because more than one goroutine may end up working with accessLogFile, and you don't want data races.
Doing it via a channel would also simplify your writeLog function a lot. Currently the logic is hard to follow; I initially thought you weren't closing the file properly.
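A minimal sketch of the mutex variant (assuming writes may come from more than one goroutine; the rotation logic is dropped for brevity, and the names mirror your code):

package main

import (
    "os"
    "sync"
)

var (
    logMu         sync.Mutex // serializes every access to accessLogFile
    accessLogFile *os.File
)

func writeLog(fileName, logText string) error {
    logMu.Lock()
    defer logMu.Unlock()
    if accessLogFile == nil {
        f, err := os.OpenFile(fileName, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0644)
        if err != nil {
            return err
        }
        accessLogFile = f
    }
    _, err := accessLogFile.WriteString(logText)
    return err
}

func main() {
    if err := writeLog("/tmp/access.log", "hello\n"); err != nil {
        os.Stderr.WriteString(err.Error() + "\n")
    }
}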

Related

Golang SSH tunneling and ProxyJump

What I need is to perform the equivalent of the following command, but in Go code:
ssh -L 9999:192.168.1.1:80 -J root@[IPv6 address] myuser@100.1.1.100
I'm not even sure where to start with this one.
I haven't been able to find any examples online and I'm at a loss.
Does anyone know how this could be done in Go?
package main
import (
"io"
"log"
"net"
"golang.org/x/crypto/ssh"
)
func main() {
client, err := ssh.Dial("tcp", "100.1.1.100:22", &ssh.ClientConfig{
User: "root",
Auth: []ssh.AuthMethod{ssh.Password("")},
HostKeyCallback: ssh.InsecureIgnoreHostKey(),
})
if err != nil {
log.Panicln(err)
return
}
log.Println("init ssh client")
ln, err := net.Listen("tcp", ":9999")
if err != nil {
log.Panicln(err)
return
}
log.Println("local listen")
for {
localconn, err := ln.Accept()
if err != nil {
log.Panicln(err)
return
}
sshconn, err := client.DialTCP("", nil, &net.TCPAddr{IP: net.ParseIP("192.168.1.1"), Port: 80})
if err != nil {
log.Panicln(err)
return
}
// local <--> remote
go func() {
errc := make(chan error, 1)
spc := switchProtocolCopier{user: localconn, backend: sshconn}
go spc.copyToBackend(errc)
go spc.copyFromBackend(errc)
log.Printf("stop conn error: %v\n", <-errc)
}()
}
}
// switchProtocolCopier exists so goroutines proxying data back and
// forth have nice names in stacks.
type switchProtocolCopier struct {
user, backend io.ReadWriter
}
func (c switchProtocolCopier) copyFromBackend(errc chan<- error) {
_, err := io.Copy(c.user, c.backend)
errc <- err
}
func (c switchProtocolCopier) copyToBackend(errc chan<- error) {
_, err := io.Copy(c.backend, c.user)
errc <- err
}
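Note that the code above dials 100.1.1.100 directly and skips the -J jump host. If the jump hop is also required, one possible approach (a sketch; the addresses, users, and auth methods are placeholders) is to open a second SSH connection tunnelled through the first:

package main

import (
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    // 1) Connect to the jump host (the -J root@[IPv6 address] part).
    jump, err := ssh.Dial("tcp", "[2001:db8::1]:22", &ssh.ClientConfig{
        User:            "root",
        Auth:            []ssh.AuthMethod{ssh.Password("")},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    })
    if err != nil {
        log.Fatal(err)
    }
    // 2) Dial the final host through the jump host.
    raw, err := jump.Dial("tcp", "100.1.1.100:22")
    if err != nil {
        log.Fatal(err)
    }
    // 3) Run the SSH handshake for the final host over that tunnelled connection.
    cc, chans, reqs, err := ssh.NewClientConn(raw, "100.1.1.100:22", &ssh.ClientConfig{
        User:            "myuser",
        Auth:            []ssh.AuthMethod{ssh.Password("")},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    })
    if err != nil {
        log.Fatal(err)
    }
    client := ssh.NewClient(cc, chans, reqs)
    // client can now replace the one in the answer above, e.g.
    // client.Dial("tcp", "192.168.1.1:80") for the -L forwarding part.
    _ = client
}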

Achieving high throughput with Go TCP client-server

I'm going to develop a simple TCP client and server and I want to achieve high throughput (300,000 requests/second), which is easy to reach with a C or C++ TCP client and server on server hardware, by which I mean a server with 48 cores and 64 GB of memory.
On my testbed, both client and server have 10G network interface cards; receive-side scaling is enabled on the server and transmit packet steering on the client.
I configure the client to send 10 thousand requests per second, and I just run multiple instances of the Go client (go run client.go) from a bash script to increase the throughput. However, this way Go creates lots of threads at the operating-system level, and a large number of threads results in high context-switching cost, so I could not approach such throughput. I suspected the number of Go instances I was running from the command line. The code below is the snippet for the client in this approach:
func Main(cmd_rate_int int, cmd_port string) {
//runtime.GOMAXPROCS(2) // set maximum number of processes to be used by this applications
//var rate float64 = float64(rate_int)
rate := float64(cmd_rate_int)
port = cmd_port
conn, err := net.Dial("tcp", port)
if err != nil {
fmt.Println("ERROR", err)
os.Exit(1)
}
var my_random_number float64 = nextTime(rate) * 1000000
var my_random_int int = int(my_random_number)
var int_message int64 = time.Now().UnixNano()
byte_message := make([]byte, 8)
go func(conn net.Conn) {
buf := make([]byte, 8)
for true {
_, err = io.ReadFull(conn, buf)
now := time.Now().UnixNano()
if err != nil {
return
}
last := int64(binary.LittleEndian.Uint64(buf))
fmt.Println((now - last) / 1000)
}
return
}(conn)
for true {
my_random_number = nextTime(rate) * 1000000
my_random_int = int(my_random_number)
time.Sleep(time.Microsecond * time.Duration(my_random_int))
int_message = time.Now().UnixNano()
binary.LittleEndian.PutUint64(byte_message, uint64(int_message))
conn.Write(byte_message)
}
}
So I tried to run all my Go goroutines by calling go client() in main, so that I do not run multiple instances from the Linux command line. I thought it might be a better idea, and it basically is: the number of threads in the operating system no longer climbs toward 700 or so. But the throughput is still low, and it seems not to use the full capability of the underlying hardware. Here is the code I ran for the second approach:
func main() {
//runtime.GOMAXPROCS(2) // set maximum number of processes to be used by this applications
args := os.Args[1:]
rate_int, _ := strconv.Atoi(args[0])
client_size, _ := strconv.Atoi(args[1])
port := args[2]
i := 0
for i <= client_size {
go client.Main(rate_int, port)
i = i + 1
}
for true {
}
}
I was wondering what the best practice is for reaching high throughput. I have always heard that Go is lightweight and performant and pretty comparable with C/C++ pthreads, yet in terms of performance C/C++ still seems far better to me. I might be doing something really wrong here, so I would be happy if anybody can help me achieve high throughput with Go.
This is a quick rework of the OP's code.
Since the original source code does not run as posted, this is not a drop-in solution; rather, it illustrates token bucket usage and a few other small Go tips.
It reuses default values similar to the OP's source code.
It demonstrates that you do not need two files/programs to provide both client and server.
It demonstrates usage of the flag package.
It shows how to parse a unix nano timestamp appropriately using time.Unix(x, y).
It shows how to take advantage of io.Copy to write-what-you-read on the same net.Conn, rather than writing manually.
Still, this is improper for production delivery.
package main
import (
"encoding/binary"
"flag"
"fmt"
"io"
"log"
"math"
"math/rand"
"net"
"os"
"sync/atomic"
"time"
"github.com/juju/ratelimit"
)
var total_rcv int64
func main() {
var cmd_rate_int float64
var cmd_port string
var client_size int
flag.Float64Var(&cmd_rate_int, "rate", 400000, "change rate of message reading")
flag.StringVar(&cmd_port, "port", ":9090", "port to listen")
flag.IntVar(&client_size, "size", 20, "number of clients")
flag.Parse()
t := flag.Arg(0)
if t == "server" {
server(cmd_port)
} else if t == "client" {
for i := 0; i < client_size; i++ {
go client(cmd_rate_int, cmd_port)
}
// <-make(chan bool) // infinite wait.
<-time.After(time.Second * 2)
fmt.Println("total exchanged", total_rcv)
} else if t == "client_ratelimit" {
bucket := ratelimit.NewBucketWithQuantum(time.Second, int64(cmd_rate_int), int64(cmd_rate_int))
for i := 0; i < client_size; i++ {
go clientRateLimite(bucket, cmd_port)
}
// <-make(chan bool) // infinite wait.
<-time.After(time.Second * 3)
fmt.Println("total exchanged", total_rcv)
}
}
func server(cmd_port string) {
ln, err := net.Listen("tcp", cmd_port)
if err != nil {
panic(err)
}
for {
conn, err := ln.Accept()
if err != nil {
panic(err)
}
go io.Copy(conn, conn)
}
}
func client(cmd_rate_int float64, cmd_port string) {
conn, err := net.Dial("tcp", cmd_port)
if err != nil {
log.Println("ERROR", err)
os.Exit(1)
}
defer conn.Close()
go func(conn net.Conn) {
buf := make([]byte, 8)
for {
_, err := io.ReadFull(conn, buf)
if err != nil {
break
}
// int_message := int64(binary.LittleEndian.Uint64(buf))
// t2 := time.Unix(0, int_message)
// fmt.Println("ROUDNTRIP", time.Now().Sub(t2))
atomic.AddInt64(&total_rcv, 1)
}
return
}(conn)
byte_message := make([]byte, 8)
for {
wait := time.Microsecond * time.Duration(nextTime(cmd_rate_int))
if wait > 0 {
time.Sleep(wait)
fmt.Println("WAIT", wait)
}
int_message := time.Now().UnixNano()
binary.LittleEndian.PutUint64(byte_message, uint64(int_message))
_, err := conn.Write(byte_message)
if err != nil {
log.Println("ERROR", err)
return
}
}
}
func clientRateLimite(bucket *ratelimit.Bucket, cmd_port string) {
conn, err := net.Dial("tcp", cmd_port)
if err != nil {
log.Println("ERROR", err)
os.Exit(1)
}
defer conn.Close()
go func(conn net.Conn) {
buf := make([]byte, 8)
for {
_, err := io.ReadFull(conn, buf)
if err != nil {
break
}
// int_message := int64(binary.LittleEndian.Uint64(buf))
// t2 := time.Unix(0, int_message)
// fmt.Println("ROUDNTRIP", time.Now().Sub(t2))
atomic.AddInt64(&total_rcv, 1)
}
return
}(conn)
byte_message := make([]byte, 8)
for {
bucket.Wait(1)
int_message := time.Now().UnixNano()
binary.LittleEndian.PutUint64(byte_message, uint64(int_message))
_, err := conn.Write(byte_message)
if err != nil {
log.Println("ERROR", err)
return
}
}
}
func nextTime(rate float64) float64 {
return -1 * math.Log(1.0-rand.Float64()) / rate
}
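Assuming the rework above is saved as main.go (and github.com/juju/ratelimit has been fetched with go get), one way to exercise it is to start the echo server in one terminal and the rate-limited clients in another; the flag values below are just the defaults:

go run main.go server
go run main.go -rate 400000 -size 20 client_ratelimit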
Edit: This is a pretty bad answer. Check mh-cbon's comments for the reasons.
I don't fully understand how you're trying to do it, but if I want to control the rate in Go, I usually use two nested for loops:
for ;; time.Sleep(time.Second) {
go func (){
for i:=0; i<rate; i++ {
go func (){
// Do whatever
}()
}
}()
}
I'm starting a goroutine inside each loop:
on the outer loop, to ensure there is only 1 second between iterations
on the inner loop, to ensure I can start all the requests I want
Putting this on a problem like yours, it would look something like this:
package main
import (
"net"
"os"
"time"
)
const (
rate = 100000
address = "localhost:8090"
)
func main() {
conn, err := net.Dial("tcp", address)
if err != nil {
os.Stderr.Write([]byte(err.Error() + "\n"))
os.Exit(1)
}
for ; err == nil; time.Sleep(time.Second) {
go func() {
for i := 0; i < rate; i++ {
go func(conn net.Conn) {
if _, err := conn.Write([]byte("01234567")); err != nil {
os.Stderr.Write([]byte("\nConnection closed: " + err.Error() + "\n"))
}
}(conn)
}
}()
}
}
To verify that this is actually sending the target request rate, you can have a test TCP listener like this:
package main
import (
"fmt"
"net"
"os"
"time"
)
const (
address = ":8090"
payloadSize = 8
)
func main() {
count := 0
b := make([]byte, payloadSize)
l, err := net.Listen("tcp", address)
if err != nil {
fmt.Fprintf(os.Stdout, "\nCan't listen to address %v: %v\n", address, err)
return
}
defer l.Close()
go func() {
for ; ; time.Sleep(time.Second) {
fmt.Fprintf(os.Stdout, "\rRate: %v/s ", count)
count = 0
}
}()
for {
conn, err := l.Accept()
if err != nil {
fmt.Fprintf(os.Stderr, "\nFailed to accept connection: %v\n", err)
}
for {
_, err := conn.Read(b)
if err != nil {
fmt.Fprintf(os.Stderr, "\nConnection closed: %v\n", err)
break
}
count = count + 1
}
}
}
I found some issues when writing concurrently into the connection: the error inconsistent fdMutex. This comes from exceeding 0xfffff concurrent operations on a single connection, which fdMutex does not support. To mitigate it, make sure you don't go over that number of concurrent writes. On my system that happened above 100k/s. This is not the 300k/s you're expecting, but my system is not prepared for that.
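One way to stay under that limit (a sketch, assuming that bounding in-flight writes is acceptable; the maxInFlight value is arbitrary) is to use a buffered channel as a semaphore around the writes:

package main

import (
    "net"
    "os"
    "time"
)

const (
    rate        = 100000
    address     = "localhost:8090"
    maxInFlight = 1024 // stays far below the fdMutex limit on concurrent operations
)

func main() {
    conn, err := net.Dial("tcp", address)
    if err != nil {
        os.Stderr.Write([]byte(err.Error() + "\n"))
        os.Exit(1)
    }
    sem := make(chan struct{}, maxInFlight) // counting semaphore for in-flight writes
    for ; ; time.Sleep(time.Second) {
        for i := 0; i < rate; i++ {
            sem <- struct{}{} // blocks once maxInFlight writes are outstanding
            go func() {
                defer func() { <-sem }()
                if _, err := conn.Write([]byte("01234567")); err != nil {
                    os.Stderr.Write([]byte("connection closed: " + err.Error() + "\n"))
                }
            }()
        }
    }
}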

How to test a function that watches a file in a goroutine

I have a function that watches a certain file via fsnotify and calls a callback when the file changes. If the callback returns false, the watching ends:
import (
"github.com/golang/glog"
"github.com/fsnotify/fsnotify"
)
type WatcherFunc func(err error) bool
func WatchFileChanges(filename string, watcherFunc WatcherFunc) {
watcher, err := fsnotify.NewWatcher()
if err != nil {
glog.Errorf("Got error creating watcher %s", err)
}
defer watcher.Close()
done := make(chan bool)
go func() {
for {
select {
case event := <-watcher.Events:
glog.Infof("inotify event %s", event)
if event.Op&fsnotify.Write == fsnotify.Write {
glog.Infof("modified file %s, calling watcher func", event.Name)
if !watcherFunc(nil) {
close(done)
}
}
case err := <-watcher.Errors:
glog.Errorf("Got error watching %s, calling watcher func", err)
if !watcherFunc(err) {
close(done)
}
}
}
}()
glog.Infof("Start watching file %s", filename)
err = watcher.Add(filename)
if err != nil {
glog.Errorf("Got error adding watcher %s", err)
}
<-done
}
Then I thought it would be nice to have a test for that, so I started out with a simple test case:
import (
"io/ioutil"
"os"
"testing"
)
func TestStuff(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "test")
if err != nil {
t.Fatal("Failed to create tmp file")
}
defer os.Remove(tmpfile.Name())
watcherFunc := func (err error) bool {
return false
}
WatchFileChanges(tmpfile.Name(), watcherFunc)
}
What I wanted to do here is make a few modifications to the file, collect the events in an array, then return false from the watcherFunc and assert on the array. The thing is, of course, that the test just hangs and waits for events, as the goroutine was started.
Is there any way I can test a function like this, like … starting a different thread (?) that updates/modifies the file?
Is there any way I can test a function like this, like … starting a different thread (?) that updates/modifies the file?
Of course... start a goroutine that does the updates you want.
func TestStuff(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "test")
if err != nil {
t.Fatal("Failed to create tmp file")
}
defer os.Remove(tmpfile.Name())
watcherFunc := func (err error) bool {
return false
}
go func() {
// Do updates here
}()
WatchFileChanges(tmpfile.Name(), watcherFunc)
}
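For example, the complete test could look roughly like this (a sketch; the time.Sleep is a crude way to let watcher.Add run before the write, and it assumes the single write produces at least one event; "time" has to be added to the test imports):

func TestWatchFileChanges(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "test")
    if err != nil {
        t.Fatal("Failed to create tmp file")
    }
    defer os.Remove(tmpfile.Name())
    var events []error // collects what the callback sees
    watcherFunc := func(err error) bool {
        events = append(events, err)
        return false // stop watching after the first event
    }
    go func() {
        time.Sleep(100 * time.Millisecond) // give WatchFileChanges time to add the watch
        if err := ioutil.WriteFile(tmpfile.Name(), []byte("changed"), 0644); err != nil {
            t.Error(err) // Error (not Fatal) is safe from a non-test goroutine
        }
    }()
    WatchFileChanges(tmpfile.Name(), watcherFunc) // returns once watcherFunc returns false
    if len(events) == 0 {
        t.Fatal("expected at least one event")
    }
}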

Run Golang as www-data

When I run a Node HTTP server app I usually call a custom function
function runAsWWW()
{
try
{
process.setgid('www-data');
process.setuid('www-data');
} catch (err)
{
console.error('Cowardly refusal to keep the process alive as root.');
process.exit(1);
}
}
from server.listen(8080,'localhost',null,runAsWWW);
so the server is actually running as the www-data user, to offer a modicum of security. Is there something similar I can do when I start up a Golang web server by issuing go run index.go?
No. You can't reliably setuid or setgid in Go, because that doesn't work for multithreaded programs.
You need to start the program as the intended user, either directly, through a supervisor of some sort (e.g. supervisord, runit, monit), or through your init system.
Expanding on @JimB's answer:
Use a process supervisor to run your application as a specific user (and handle restarts/crashes, log redirection, etc.). setuid and setgid are universally bad ideas for multi-threaded applications.
Either use your OS's process manager (Upstart, systemd, sysvinit) or a standalone process manager (Supervisor, runit, monit, etc.).
Here's an example for Supervisor:
[program:yourapp]
command=/home/yourappuser/bin/yourapp # the location of your app
autostart=true
autorestart=true
startretries=10
user=yourappuser # the user your app should run as (i.e. *not* root!)
directory=/srv/www/yourapp.com/ # where your application runs from
environment=APP_SETTINGS="/srv/www/yourapp.com/prod.toml" # environmental variables
redirect_stderr=true
stdout_logfile=/var/log/supervisor/yourapp.log # the name of the log file.
stdout_logfile_maxbytes=50MB
stdout_logfile_backups=10
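If you use systemd instead, a roughly equivalent unit (paths and names are placeholders) sets the user the same way:

[Unit]
Description=yourapp

[Service]
User=yourappuser
Group=yourappuser
WorkingDirectory=/srv/www/yourapp.com/
Environment=APP_SETTINGS=/srv/www/yourapp.com/prod.toml
ExecStart=/home/yourappuser/bin/yourapp
Restart=on-failure

[Install]
WantedBy=multi-user.target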
Further: if you're not reverse proxying and your Go application needs to bind to a port < 1024 (e.g. port 80 or 443) then use setcap - for example: setcap cap_net_bind_service=+ep /home/yourappuser/bin/yourapp
PS: I wrote a little article on how to run Go applications with Supervisor (starting from "I don't have Supervisor installed").
You can check if the program is running under a certain user with the os/user package:
curr, err := user.Current()
// Check err.
www, err := user.Lookup("www-data")
// Check err.
if *curr != *www {
panic("Go away!")
}
This is not exactly what you want, but it does prevent it from running under any other user. You can run it as www-data by running it with su:
su www-data -c "myserver"
A way to achieve this safely would be to fork yourself.
This is a raw, untested example of how you could achieve a safe setuid:
1) Make sure you are root.
2) Listen on the wanted port (as root).
3) Fork as the www-data user.
4) Accept and serve requests.
http://play.golang.org/p/sT25P0KxXK
package main
import (
"flag"
"fmt"
"log"
"net"
"net/http"
"os"
"os/exec"
"os/user"
"strconv"
"syscall"
)
var listenFD = flag.Int("l", 0, "listen pid")
func handler(w http.ResponseWriter, req *http.Request) {
u, err := user.Current()
if err != nil {
log.Println(err)
return
}
fmt.Fprintf(w, "%s\n", u.Name)
}
func lookupUser(username string) (uid, gid int, err error) {
u, err := user.Lookup(username)
if err != nil {
return -1, -1, err
}
uid, err = strconv.Atoi(u.Uid)
if err != nil {
return -1, -1, err
}
gid, err = strconv.Atoi(u.Gid)
if err != nil {
return -1, -1, err
}
return uid, gid, nil
}
// FDListener .
type FDListener struct {
file *os.File
}
// Accept .
func (ln *FDListener) Accept() (net.Conn, error) {
fd, _, err := syscall.Accept(int(*listenFD))
if err != nil {
return nil, err
}
conn, err := net.FileConn(os.NewFile(uintptr(fd), ""))
if err != nil {
return nil, err
}
return conn.(*net.TCPConn), nil
}
// Close .
func (ln *FDListener) Close() error {
return ln.file.Close()
}
// Addr .
func (ln *FDListener) Addr() net.Addr {
return nil
}
func start() error {
u, err := user.Current()
if err != nil {
return err
}
if u.Uid != "0" && *listenFD == 0 {
// we are not root and we have no listen fd. Error.
return fmt.Errorf("need to run as root: %s", u.Uid)
} else if u.Uid == "0" && *listenFD == 0 {
// we are root and we have no listen fd. Do the listen.
l, err := net.Listen("tcp", "0.0.0.0:80")
if err != nil {
return fmt.Errorf("Listen error: %s", err)
}
f, err := l.(*net.TCPListener).File()
if err != nil {
return err
}
uid, gid, err := lookupUser("guillaume")
if err != nil {
return err
}
// First extra file: fd == 3
cmd := exec.Command(os.Args[0], "-l", fmt.Sprint(3))
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
cmd.ExtraFiles = append(cmd.ExtraFiles, f)
cmd.SysProcAttr = &syscall.SysProcAttr{
Credential: &syscall.Credential{
Uid: uint32(uid),
Gid: uint32(gid),
},
}
if err := cmd.Run(); err != nil {
return fmt.Errorf("cmd.Run error: %s", err)
}
return nil
} else if u.Uid != "0" && *listenFD != 0 {
// We are not root and we have a listen fd. Do the accept.
ln := &FDListener{file: os.NewFile(uintptr(*listenFD), "net")}
if err := http.Serve(ln, http.HandlerFunc(handler)); err != nil {
return err
}
}
return fmt.Errorf("setuid fail: %s, %d", u.Uid, *listenFD)
}
func main() {
flag.Parse()
if err := start(); err != nil {
log.Fatal(err)
}
}
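To try it (a sketch of usage; the binary name is arbitrary, and you would replace the hard-coded "guillaume" with the user you actually want, e.g. www-data), build a binary and start it as root; the root process binds port 80, re-executes itself as the unprivileged user with the listener passed as extra file descriptor 3, and the child process serves the requests:

go build -o setuid-demo .
sudo ./setuid-demo
curl http://localhost/    # the handler reports which user served the request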

How to make reading and writing to a file concurrent in Golang?

I set up a webserver and I use my own package where I read from and write to files. When the server gets a TCP connection, I start a different goroutine to handle the request for each connection. In the request handler func, I call the func DoSomething() of some_package.
Here's the code for web_server.go:
package main
import (
sp "./some_package"
"log"
"net"
"os"
"net/http"
)
func main() {
l, err := net.Listen("tcp", "0.0.0.0" + ":" + "4567")
if err != nil {
log.Println("Error listening:", err.Error())
os.Exit(1)
}
defer l.Close()
log.Println("Listening on 0.0.0.0:4567")
go func() {
for {
// Listen for an incoming connection.
conn, err := l.Accept()
if err != nil {
log.Println("Error accepting: ", err.Error())
os.Exit(1)
}
// Handle connections in a new goroutine.
go handlerFunction(conn)
}
}()
log.Printf("Setting up the Webserver...")
err = http.ListenAndServe("0.0.0.0:"+"4568", nil)
if err != nil {
log.Fatal(err)
}
}
func handlerFunction(conn net.Conn) {
defer conn.Close()
sp.DoSomething()
}
The function DoSomething() reads from and writes to a file. You can see the code of the package where it is declared:
package some_package
import (
"io/ioutil"
"strconv"
"os"
"log"
)
func IncrementValue(pastValue string)(newValue string){
newValueInt, _ := strconv.Atoi(pastValue)
return strconv.Itoa(newValueInt + 1)
}
func DoSomething() (err error){
initialValue := "1"
filename := "myFile.txt"
if _, err := os.Stat(filename); err == nil {
someText, err := ioutil.ReadFile(filename)
if err != nil {
log.Printf("Error reading")
return err
}
newValue := IncrementValue(string(someText))
err = ioutil.WriteFile(filename,[]byte(newValue), 0644)
if err != nil {
return err
}
}else{
err = ioutil.WriteFile(filename,[]byte(initialValue), 0644)
if err != nil {
return err
}
}
return
}
How can I use a locking mechanism like mutex.Lock and mutex.Unlock in this case, so that a goroutine that is currently writing stops the others from reading until it has written to the file successfully?
Is my example suitable for concurrent reading and writing to the file?
Is this the right approach? Thank you.
You can't make the reading and writing of a file concurrent (well, it's possible, but not with the access pattern you're describing). Use a single mutex to serialize all access to your file:
var fileMutex sync.Mutex
func DoSomething() {
fileMutex.Lock()
defer fileMutex.Unlock()
//...
}
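Applied to the DoSomething from the question, that would look roughly like this (a sketch; only the locking is new, the read-increment-write logic is unchanged, and "sync" has to be added to the package imports):

var fileMutex sync.Mutex

func DoSomething() error {
    fileMutex.Lock()
    defer fileMutex.Unlock() // released when DoSomething returns, even on error
    initialValue := "1"
    filename := "myFile.txt"
    if _, err := os.Stat(filename); err == nil {
        someText, err := ioutil.ReadFile(filename)
        if err != nil {
            log.Printf("Error reading")
            return err
        }
        newValue := IncrementValue(string(someText))
        return ioutil.WriteFile(filename, []byte(newValue), 0644)
    }
    return ioutil.WriteFile(filename, []byte(initialValue), 0644)
}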

Resources