I am working on a project where, during startup, I need to read certain files and store their contents in an in-memory map, then periodically look for new files and, if any arrive, replace the earlier in-memory state with the new data. Each new file represents the full state, so whenever one appears I want to refresh my in-memory map rather than append to it.
The method loadAtStartupAndProcessNewChanges below is called during server startup; it reads the files and stores the data in memory. It also starts a goroutine, detectNewFiles, which periodically checks whether there are any new files and puts their paths on the deltaChan channel, which is later consumed by another goroutine, processNewFiles, to read each new file and store its data in the same map. If there is any error, it is sent on the err channel. loadFiles is the function that reads the files and stores their contents in the map.
type customerConfig struct {
    deltaChan chan string
    err       chan error
    wg        sync.WaitGroup
    data      *cmap.ConcurrentMap
}
// this is called during server startup.
func (r *customerConfig) loadAtStartupAndProcessNewChanges() error {
    path, err := r.GetPath("...", "....")
    if err != nil {
        return err
    }

    r.wg.Add(1)
    go r.detectNewFiles(path)

    err = r.loadFiles(4, path)
    if err != nil {
        return err
    }

    r.wg.Add(1)
    go r.processNewFiles()
    return nil
}
This method figures out whether there are any new files that need to be consumed; if so, it puts them on the deltaChan channel, which is later consumed by the processNewFiles goroutine to read the files into memory. If there is any error, it is added to the error channel.
func (r *customerConfig) detectNewFiles(rootPath string) {
}
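The body is elided in the original. For illustration, here is a minimal sketch of what such a poller might look like; the one-minute interval and the listNewFiles helper are assumptions, not part of the original code:

func (r *customerConfig) detectNewFiles(rootPath string) {
    defer r.wg.Done()
    ticker := time.NewTicker(1 * time.Minute) // assumed polling interval
    defer ticker.Stop()
    for range ticker.C {
        // listNewFiles is a hypothetical helper returning paths that
        // appeared since the previous scan.
        paths, err := listNewFiles(rootPath)
        if err != nil {
            r.err <- err
            continue
        }
        for _, p := range paths {
            r.deltaChan <- p
        }
    }
}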
This reads all S3 files, stores them in memory, and returns an error if any. In this method I clear the previous state of my map so that it holds fresh state from the new files. It is called during server startup and also whenever new files need to be processed from the processNewFiles goroutine.
func (r *customerConfig) loadFiles(workers int, path string) error {
    var err error
    ...
    var files []string
    files = .....

    // reset the map so that it can have fresh state from new files.
    r.data.Clear()

    g, ctx := errgroup.WithContext(context.Background())
    sem := make(chan struct{}, workers)

loop:
    for _, file := range files {
        select {
        case <-ctx.Done():
            break loop // note: a bare "break" here would only exit the select
        case sem <- struct{}{}:
        }
        file := file
        g.Go(func() error {
            defer func() { <-sem }()
            return r.read(file, bucket)
        })
    }

    if err := g.Wait(); err != nil {
        return err
    }
    return nil
}
This method reads the files and adds the rows to the data concurrent map.
func (r *customerConfig) read(file string, bucket string) error {
    // read file and store it in "data" concurrent map
    // and if there is any error then return the error
    var err error
    fr, err := pars3.NewS3FileReader(context.Background(), bucket, file, r.s3Client.GetSession().Config)
    if err != nil {
        return errs.Wrap(err)
    }
    defer xio.CloseIgnoringErrors(fr)

    pr, err := reader.NewParquetReader(fr, nil, 8)
    if err != nil {
        return errs.Wrap(err)
    }

    if pr.GetNumRows() == 0 {
        spn.Infof("Skipping %s due to 0 rows", file)
        return nil
    }

    for {
        rows, err := pr.ReadByNumber(r.cfg.RowsToRead)
        if err != nil {
            return errs.Wrap(err)
        }
        if len(rows) <= 0 {
            break
        }

        byteSlice, err := json.Marshal(rows)
        if err != nil {
            return errs.Wrap(err)
        }

        var invMods []CompModel
        err = json.Unmarshal(byteSlice, &invMods)
        if err != nil {
            return errs.Wrap(err)
        }

        for i := range invMods {
            key := strconv.FormatInt(invMods[i].ProductID, 10) + ":" + strconv.Itoa(int(invMods[i].Iaz))
            hasInventory := invMods[i].Available > 0
            r.data.Set(key, hasInventory)
        }
    }
    return nil
}
This method picks up whatever is on the delta channel; if there are any new files, it starts reading them by calling the loadFiles method. If there is any error, it is added to the error channel.
// processNewFiles - load new files found by detectNewFiles
func (r *customerConfig) processNewFiles() {
    // find new files on delta channel
    // and call "loadFiles" method to read it
    // if there is any error, then it will add it to the error channel.
}
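Again the body is elided; a minimal sketch consistent with the described behavior (the worker count of 4 mirrors the startup call and is an assumption here):

func (r *customerConfig) processNewFiles() {
    defer r.wg.Done()
    for path := range r.deltaChan {
        if err := r.loadFiles(4, path); err != nil {
            r.err <- err
        }
    }
}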
If there is any error on the error channel, it is logged by the method below:
func (r *customerConfig) handleError() {
    // read error from error channel if there is any
    // then log it
}
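Elided as well; the obvious sketch would be:

func (r *customerConfig) handleError() {
    for err := range r.err {
        log.Printf("error while processing files: %v", err)
    }
}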
Problem Statement
The logic above works for me without any issues, but there is one small bug in my code that I am not able to figure out how to solve. As you can see, I have a concurrent map that I populate in my read method and clear entirely in the loadFiles method. Whenever there is a new file on the delta channel I don't want to keep the previous state in the map, which is why I remove everything from it and then add the new state from the new files.
Now, if there is any error in the read method, the bug appears: because I have already cleared all the data in my data map, I am left with an empty map, which is not what I want. If there is any error, I would like to preserve the previous state in the data map. How can I resolve this issue within my current design?
Note: I am using a Go concurrent map.
I think your design is overcomplicated. The problem can be solved much more simply, with all the benefits you desire:
safe for concurrent access
detected changes are reloaded
accessing the config gives you the most recent, successfully loaded config
the most recent config is always, immediately accessible, even if loading a new config due to detected changes takes long
if loading a new config fails, the previous "snapshot" is kept and remains the current one
as a bonus, it's much simpler and doesn't even use 3rd-party libs
Let's see how to achieve this:
Have a CustomerConfig struct holding everything you want to cache (this is the "snapshot"):
type CustomerConfig struct {
    Data map[string]bool

    // Add other props if you need:
    LoadedAt time.Time
}
Provide a function that loads the config you wish to cache. Note: this function is stateless; it does not access or operate on package-level variables:
func loadConfig() (*CustomerConfig, error) {
    cfg := &CustomerConfig{
        Data:     map[string]bool{},
        LoadedAt: time.Now(),
    }

    // Logic to load files, and populate cfg.Data
    // If an error occurs, return it

    // If loading succeeds, return the config
    return cfg, nil
}
Now let's create our "cache manager". The cache manager stores the actual/current config (the snapshot) and provides access to it. For safe concurrent access (and update), we use a sync.RWMutex. It also has means to stop the manager (to stop the concurrent refreshing):
type ConfigCache struct {
    configMu sync.RWMutex
    config   *CustomerConfig
    closeCh  chan struct{}
}
Creating a cache loads the initial config. It also launches a goroutine responsible for periodically checking for changes:
func NewConfigCache() (*ConfigCache, error) {
    cfg, err := loadConfig()
    if err != nil {
        return nil, fmt.Errorf("loading initial config failed: %w", err)
    }

    cc := &ConfigCache{
        config:  cfg,
        closeCh: make(chan struct{}),
    }

    // launch goroutine to periodically check for changes, and load new configs
    go cc.refresher()

    return cc, nil
}
The refresher() periodically checks for changes, and if changes are detected, calls loadConfig() to load new data to be cached, and stores it as the current / actual config (while locking configMu). It also monitors closeCh to stop if that is requested:
func (cc *ConfigCache) refresher() {
    ticker := time.NewTicker(1 * time.Minute) // Every minute
    defer ticker.Stop()

    for {
        select {
        case <-ticker.C:
            // Check if there are changes
            changes := false // logic to detect changes
            if !changes {
                continue // No changes, continue
            }

            // Changes! load new config:
            cfg, err := loadConfig()
            if err != nil {
                log.Printf("Failed to load config: %v", err)
                continue // Keep the previous config
            }

            // Apply / store new config
            cc.configMu.Lock()
            cc.config = cfg
            cc.configMu.Unlock()

        case <-cc.closeCh:
            return
        }
    }
}
Closing the cache manager (the refresher goroutine) is as easy as:
func (cc *ConfigCache) Stop() {
    close(cc.closeCh)
}
The last missing piece is how you access the current config. That's a simple GetConfig() method (that also uses configMu, but in read-only mode):
func (cc *ConfigCache) GetConfig() *CustomerConfig {
    cc.configMu.RLock()
    defer cc.configMu.RUnlock()
    return cc.config
}
This is how you can use this:
cc, err := NewConfigCache()
if err != nil {
    // Decide what to do: retry, terminate etc.
}

// Wherever, whenever you need the actual (most recent) config in your app:
cfg := cc.GetConfig()
// Use cfg
Before you shut down your app (or whenever you want to stop the refreshing), you may call cc.Stop().
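As a side note on the design: because readers only ever receive a pointer to an immutable snapshot, the same publish/consume pattern could also be built on sync/atomic instead of an RWMutex. This is a minimal sketch, assuming Go 1.19+ for atomic.Pointer:

package main

import (
    "sync/atomic"
    "time"
)

type CustomerConfig struct {
    Data     map[string]bool
    LoadedAt time.Time
}

func main() {
    var current atomic.Pointer[CustomerConfig]

    // Publish an initial snapshot.
    current.Store(&CustomerConfig{Data: map[string]bool{}, LoadedAt: time.Now()})

    // A refresher builds a brand-new snapshot and swaps it in atomically;
    // readers never observe a partially built map.
    next := &CustomerConfig{Data: map[string]bool{"1:2": true}, LoadedAt: time.Now()}
    current.Store(next)

    _ = current.Load() // readers just Load() the latest pointer
}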
Added an RWMutex to protect collectedData against concurrent writes by the worker goroutines:
type customerConfig struct {
    ...
    m sync.RWMutex
}
Instead of updating the map in the read method, let read just return the data and an error:
func (r *customerConfig) read(file string, bucket string) ([]CompModel, error) {
    // read file data and return with error if any
    var err error
    fr, err := pars3.NewS3FileReader(context.Background(), bucket, file, r.s3Client.GetSession().Config)
    if err != nil {
        return nil, errs.Wrap(err)
    }
    defer xio.CloseIgnoringErrors(fr)

    pr, err := reader.NewParquetReader(fr, nil, 8)
    if err != nil {
        return nil, errs.Wrap(err)
    }

    if pr.GetNumRows() == 0 {
        spn.Infof("Skipping %s due to 0 rows", file)
        return nil, errors.New("no data")
    }

    var invMods = []CompModel{}
    for {
        rows, err := pr.ReadByNumber(r.cfg.RowsToRead)
        if err != nil {
            return nil, errs.Wrap(err)
        }
        if len(rows) <= 0 {
            break
        }

        byteSlice, err := json.Marshal(rows)
        if err != nil {
            return nil, errs.Wrap(err)
        }

        var jsonData []CompModel
        err = json.Unmarshal(byteSlice, &jsonData)
        if err != nil {
            return nil, errs.Wrap(err)
        }
        invMods = append(invMods, jsonData...)
    }
    return invMods, nil
}
Then, in loadFiles, you can collect the data returned by the read method; only if there was no error, clear and update the map, otherwise leave the old data as it was before:
func (r *customerConfig) loadFiles(workers int, path string) error {
    var err error
    ...
    var files []string
    files = .....

    // reset the map so that it can have fresh state from new files.
    // r.data.Clear() <- remove the clear from here

    g, ctx := errgroup.WithContext(context.Background())
    sem := make(chan struct{}, workers)

    collectedData := []CompModel{}

loop:
    for _, file := range files {
        select {
        case <-ctx.Done():
            break loop // a bare "break" would only exit the select
        case sem <- struct{}{}:
        }
        file := file
        g.Go(func() error {
            defer func() { <-sem }()
            data, err := r.read(file, bucket)
            if err != nil {
                return err
            }
            r.m.Lock()
            collectedData = append(collectedData, data...)
            r.m.Unlock()
            return nil
        })
    }

    if err := g.Wait(); err != nil {
        return err
    }

    r.data.Clear()
    for i := range collectedData {
        key := strconv.FormatInt(collectedData[i].ProductID, 10) + ":" + strconv.Itoa(int(collectedData[i].Iaz))
        hasInventory := collectedData[i].Available > 0
        r.data.Set(key, hasInventory)
    }
    return nil
}
Note: since the code is not runnable, only the updated methods are shown for reference; the mutex added above is what protects the concurrent appends to the slice.
The same can be achieved with just three functions: detect, read, and load. detect checks for new files at an interval and pushes to the delta channel if it finds any; load gets the file path to read from the delta channel and calls the read method to get the data and error, then, if there was no error, clears the map and updates it with the new content, otherwise it logs the error. So you would have two goroutines and one function, which is called by the load goroutine.
package main

import (
    "fmt"
    "math/rand"
    "os"
    "os/signal"
    "time"
)

func main() {
    fmt.Println(">>>", center("STARTED", 30), "<<<")
    c := &Config{
        InitialPath:    "Old Path",
        DetectInterval: 3000,
    }
    c.start()
    fmt.Println(">>>", center("ENDED", 30), "<<<")
}

// https://stackoverflow.com/questions/41133006/how-to-fmt-printprint-this-on-the-center
func center(s string, w int) string {
    return fmt.Sprintf("%[1]*s", -w, fmt.Sprintf("%[1]*s", (w+len(s))/2, s))
}

type Config struct {
    deltaCh        chan string
    ticker         *time.Ticker
    stopSignal     chan os.Signal
    InitialPath    string
    DetectInterval time.Duration
}

func (c *Config) start() {
    c.stopSignal = make(chan os.Signal, 1)
    signal.Notify(c.stopSignal, os.Interrupt)

    c.ticker = time.NewTicker(c.DetectInterval * time.Millisecond)
    c.deltaCh = make(chan string, 1)

    go c.detect()
    go c.load()

    if c.InitialPath != "" {
        c.deltaCh <- c.InitialPath
    }

    <-c.stopSignal
    c.ticker.Stop()
}

// Detect New Files
func (c *Config) detect() {
    for {
        select {
        case <-c.stopSignal:
            return
        case <-c.ticker.C:
            fmt.Println(">>>", center("DETECT", 30), "<<<")
            c.deltaCh <- fmt.Sprintf("PATH %f", rand.Float64()*1.5)
        }
    }
}

// Read Files
func read(path string) (map[string]int, error) {
    data := make(map[string]int)
    data[path] = 0
    fmt.Println(">>>", center("READ", 30), "<<<")
    fmt.Println(path)
    return data, nil
}

// Load Files
func (c *Config) load() {
    for {
        select {
        case <-c.stopSignal:
            return
        case path := <-c.deltaCh:
            fmt.Println(">>>", center("LOAD", 30), "<<<")
            data, err := read(path)
            if err != nil {
                fmt.Println("Log Error")
            } else {
                fmt.Println("Success", data)
            }
            fmt.Println()
        }
    }
}
Note: the map is not included in the sample code, but it can easily be updated to include it.
Just allocate a new map, like this:
var mu sync.Mutex
before := map[string]string{} // Some map before reading
after := make(map[string]string)
// Read files and fill `after` map
mu.Lock()
before = after
mu.Unlock()
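Spelled out a bit more, with the reader side using the same mutex, the swap-on-success pattern looks roughly like this (a sketch; loadIntoNewMap is a hypothetical stand-in for your file-reading logic):

var mu sync.RWMutex
var data = map[string]string{} // current snapshot

func refresh() error {
    next := make(map[string]string)
    if err := loadIntoNewMap(next); err != nil {
        return err // the old snapshot in data stays untouched
    }
    mu.Lock()
    data = next // swap only after a fully successful load
    mu.Unlock()
    return nil
}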
Instead of clearing the map in the loadFiles method, do something like this in read:
func (r *customerConfig) read(file string, bucket string) error {
    m := cmap.New() // create a new map
    // ...
    for {
        rows, err := pr.ReadByNumber(r.cfg.RowsToRead)
        if err != nil {
            return errs.Wrap(err)
        }
        if len(rows) <= 0 {
            break
        }

        byteSlice, err := json.Marshal(rows)
        if err != nil {
            return errs.Wrap(err)
        }

        var invMods []CompModel
        err = json.Unmarshal(byteSlice, &invMods)
        if err != nil {
            return errs.Wrap(err)
        }

        for i := range invMods {
            key := strconv.FormatInt(invMods[i].ProductID, 10) + ":" + strconv.Itoa(int(invMods[i].Iaz))
            hasInventory := invMods[i].Available > 0
            m.Set(key, hasInventory)
        }
    }

    r.data = &m // use the new map
    return nil
}
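One caveat with this variant: since read may run from several goroutines at once, the final swap of r.data itself needs synchronization (or should happen in the caller, once all reads have succeeded); otherwise concurrent loads can overwrite each other with partial state.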
I'm going to develop a simple TCP client and server, and I want to achieve high throughput (300,000 requests/second), which is easy to reach with a C or C++ TCP client and server on server hardware; I mean a server with 48 cores and 64 GB of memory.
On my testbed, both client and server have 10G network interface cards, and I have receive-side scaling enabled at the server and transmit packet steering enabled at the client.
I configure the client to send 10,000 requests per second, and I run multiple instances of go run client.go from a bash script to increase the throughput. However, this way Go creates lots of threads in the operating system, and a large number of threads results in high context-switching cost, so I could not approach such throughputs. I suspected the number of Go instances I was running from the command line. The code below is the snippet for the client in this approach:
func Main(cmd_rate_int int, cmd_port string) {
    //runtime.GOMAXPROCS(2) // set maximum number of processes to be used by this application
    //var rate float64 = float64(rate_int)
    rate := float64(cmd_rate_int)
    port = cmd_port
    conn, err := net.Dial("tcp", port)
    if err != nil {
        fmt.Println("ERROR", err)
        os.Exit(1)
    }

    var my_random_number float64 = nextTime(rate) * 1000000
    var my_random_int int = int(my_random_number)
    var int_message int64 = time.Now().UnixNano()
    byte_message := make([]byte, 8)

    go func(conn net.Conn) {
        buf := make([]byte, 8)
        for {
            _, err = io.ReadFull(conn, buf)
            now := time.Now().UnixNano()
            if err != nil {
                return
            }
            last := int64(binary.LittleEndian.Uint64(buf))
            fmt.Println((now - last) / 1000)
        }
    }(conn)

    for {
        my_random_number = nextTime(rate) * 1000000
        my_random_int = int(my_random_number)
        time.Sleep(time.Microsecond * time.Duration(my_random_int))
        int_message = time.Now().UnixNano()
        binary.LittleEndian.PutUint64(byte_message, uint64(int_message))
        conn.Write(byte_message)
    }
}
So I tried to run all my Go goroutines by calling go client() in main, so that I do not run multiple instances from the Linux command line. I thought it might be a better idea, and it basically is: the number of threads in the operating system no longer climbs toward 700 or so. But the throughput is still low, and it seems it doesn't employ the full capability of the underlying hardware. Here is the code I ran in the second approach:
func main() {
    //runtime.GOMAXPROCS(2) // set maximum number of processes to be used by this application
    args := os.Args[1:]
    rate_int, _ := strconv.Atoi(args[0])
    client_size, _ := strconv.Atoi(args[1])
    port := args[2]
    for i := 0; i <= client_size; i++ {
        go client.Main(rate_int, port)
    }
    select {} // block forever; the original busy-waited here with "for true {}"
}
I was wondering what the best practice is for reaching high throughput. I have always heard that Go is lightweight and performant and quite comparable with C/C++ pthreads; yet in terms of performance I still see C/C++ being far better than Go. I might be doing something really wrong here, so I would be happy if anybody can help me achieve high throughput with Go.
This is a quick rework of the OP's code.
As the original source code is working, it does not provide a solution; however, it illustrates token-bucket usage and a few other small Go tips.
It reuses default values similar to the OP's source code.
It demonstrates that you do not need two files/programs to provide both client and server.
It demonstrates usage of the flag package.
It shows how to parse a unix nano timestamp appropriately using time.Unix(x, y).
It shows how to take advantage of io.Copy to write-what-you-read on the same net.Conn, rather than writing manually.
Still, this is improper for production delivery.
package main

import (
    "encoding/binary"
    "flag"
    "fmt"
    "io"
    "log"
    "math"
    "math/rand"
    "net"
    "os"
    "sync/atomic"
    "time"

    "github.com/juju/ratelimit"
)

var total_rcv int64

func main() {
    var cmd_rate_int float64
    var cmd_port string
    var client_size int

    flag.Float64Var(&cmd_rate_int, "rate", 400000, "change rate of message reading")
    flag.StringVar(&cmd_port, "port", ":9090", "port to listen")
    flag.IntVar(&client_size, "size", 20, "number of clients")
    flag.Parse()

    t := flag.Arg(0)
    if t == "server" {
        server(cmd_port)

    } else if t == "client" {
        for i := 0; i < client_size; i++ {
            go client(cmd_rate_int, cmd_port)
        }
        // <-make(chan bool) // infinite wait.
        <-time.After(time.Second * 2)
        fmt.Println("total exchanged", total_rcv)

    } else if t == "client_ratelimit" {
        bucket := ratelimit.NewBucketWithQuantum(time.Second, int64(cmd_rate_int), int64(cmd_rate_int))
        for i := 0; i < client_size; i++ {
            go clientRateLimite(bucket, cmd_port)
        }
        // <-make(chan bool) // infinite wait.
        <-time.After(time.Second * 3)
        fmt.Println("total exchanged", total_rcv)
    }
}

func server(cmd_port string) {
    ln, err := net.Listen("tcp", cmd_port)
    if err != nil {
        panic(err)
    }
    for {
        conn, err := ln.Accept()
        if err != nil {
            panic(err)
        }
        go io.Copy(conn, conn) // echo back whatever is read, on the same conn
    }
}

func client(cmd_rate_int float64, cmd_port string) {
    conn, err := net.Dial("tcp", cmd_port)
    if err != nil {
        log.Println("ERROR", err)
        os.Exit(1)
    }
    defer conn.Close()

    go func(conn net.Conn) {
        buf := make([]byte, 8)
        for {
            _, err := io.ReadFull(conn, buf)
            if err != nil {
                break
            }
            // int_message := int64(binary.LittleEndian.Uint64(buf))
            // t2 := time.Unix(0, int_message)
            // fmt.Println("ROUNDTRIP", time.Now().Sub(t2))
            atomic.AddInt64(&total_rcv, 1)
        }
    }(conn)

    byte_message := make([]byte, 8)
    for {
        wait := time.Microsecond * time.Duration(nextTime(cmd_rate_int))
        if wait > 0 {
            time.Sleep(wait)
            fmt.Println("WAIT", wait)
        }
        int_message := time.Now().UnixNano()
        binary.LittleEndian.PutUint64(byte_message, uint64(int_message))
        _, err := conn.Write(byte_message)
        if err != nil {
            log.Println("ERROR", err)
            return
        }
    }
}

func clientRateLimite(bucket *ratelimit.Bucket, cmd_port string) {
    conn, err := net.Dial("tcp", cmd_port)
    if err != nil {
        log.Println("ERROR", err)
        os.Exit(1)
    }
    defer conn.Close()

    go func(conn net.Conn) {
        buf := make([]byte, 8)
        for {
            _, err := io.ReadFull(conn, buf)
            if err != nil {
                break
            }
            // int_message := int64(binary.LittleEndian.Uint64(buf))
            // t2 := time.Unix(0, int_message)
            // fmt.Println("ROUNDTRIP", time.Now().Sub(t2))
            atomic.AddInt64(&total_rcv, 1)
        }
    }(conn)

    byte_message := make([]byte, 8)
    for {
        bucket.Wait(1)
        int_message := time.Now().UnixNano()
        binary.LittleEndian.PutUint64(byte_message, uint64(int_message))
        _, err := conn.Write(byte_message)
        if err != nil {
            log.Println("ERROR", err)
            return
        }
    }
}

func nextTime(rate float64) float64 {
    return -1 * math.Log(1.0-rand.Float64()) / rate
}
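For reference, nextTime implements inverse-transform sampling of the exponential distribution, so the inter-send gaps model Poisson request arrivals at the requested rate.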
Edit: this is a pretty bad answer. Check mh-cbon's comments for the reasons.
I don't fully understand how you're trying to do this, but when I want to control the rate in Go, I usually use two nested for loops:
for ; ; time.Sleep(time.Second) {
    go func() {
        for i := 0; i < rate; i++ {
            go func() {
                // Do whatever
            }()
        }
    }()
}
I'm starting a goroutine inside each loop:
in the outer loop, to ensure there's only 1 second between iterations
in the inner loop, to ensure I can start all the requests I want
Putting this on a problem like yours, it would look something like:
package main

import (
    "net"
    "os"
    "time"
)

const (
    rate    = 100000
    address = "localhost:8090"
)

func main() {
    conn, err := net.Dial("tcp", address)
    if err != nil {
        os.Stderr.Write([]byte(err.Error() + "\n"))
        os.Exit(1)
    }
    for ; err == nil; time.Sleep(time.Second) {
        go func() {
            for i := 0; i < rate; i++ {
                go func(conn net.Conn) {
                    if _, err := conn.Write([]byte("01234567")); err != nil {
                        os.Stderr.Write([]byte("\nConnection closed: " + err.Error() + "\n"))
                    }
                }(conn)
            }
        }()
    }
}
To verify that this is actually sending the target request rate, you can have a test TCP listener like this:
package main

import (
    "fmt"
    "net"
    "os"
    "time"
)

const (
    address     = ":8090"
    payloadSize = 8
)

func main() {
    count := 0
    b := make([]byte, payloadSize)

    l, err := net.Listen("tcp", address)
    if err != nil {
        fmt.Fprintf(os.Stdout, "\nCan't listen to address %v: %v\n", address, err)
        return
    }
    defer l.Close()

    go func() {
        for ; ; time.Sleep(time.Second) {
            fmt.Fprintf(os.Stdout, "\rRate: %v/s ", count)
            count = 0
        }
    }()

    for {
        conn, err := l.Accept()
        if err != nil {
            fmt.Fprintf(os.Stderr, "\nFailed to accept connection: %v\n", err)
        }
        for {
            _, err := conn.Read(b)
            if err != nil {
                fmt.Fprintf(os.Stderr, "\nConnection closed: %v\n", err)
                break
            }
            count = count + 1
        }
    }
}
I found some issues due to not being able to write concurrently into the connection, with an "inconsistent fdMutex" error. This is caused by exceeding 0xfffff concurrent writes, which fdMutex does not support. To mitigate this issue, make sure you don't go over that number of concurrent writes. In my system it was >100k/s. This is not the 300k/s you're expecting, but my system is not prepared for that.
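One way to stay under that limit is to bound the number of in-flight writes with a counting semaphore. This is a minimal sketch, not the answer's original code; the cap of 1000 is an arbitrary assumption:

package main

import (
    "net"
    "os"
    "time"
)

const (
    rate        = 100000
    address     = "localhost:8090"
    maxInFlight = 1000 // arbitrary cap; keep well below fdMutex's limit
)

func main() {
    conn, err := net.Dial("tcp", address)
    if err != nil {
        os.Stderr.Write([]byte(err.Error() + "\n"))
        os.Exit(1)
    }
    sem := make(chan struct{}, maxInFlight) // counting semaphore
    for ; ; time.Sleep(time.Second) {
        for i := 0; i < rate; i++ {
            sem <- struct{}{} // acquire a slot before writing
            go func() {
                defer func() { <-sem }() // release the slot
                if _, err := conn.Write([]byte("01234567")); err != nil {
                    os.Stderr.Write([]byte("\nConnection closed: " + err.Error() + "\n"))
                }
            }()
        }
    }
}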
I have a function that watches a certain file via fsnotify and calls a callback when the file changes. If the callback returns false, the watching is ended:
import (
    "github.com/fsnotify/fsnotify"
    "github.com/golang/glog"
)

type WatcherFunc func(err error) bool

func WatchFileChanges(filename string, watcherFunc WatcherFunc) {
    watcher, err := fsnotify.NewWatcher()
    if err != nil {
        glog.Errorf("Got error creating watcher %s", err)
    }
    defer watcher.Close()

    done := make(chan bool)
    go func() {
        for {
            select {
            case event := <-watcher.Events:
                glog.Infof("inotify event %s", event)
                if event.Op&fsnotify.Write == fsnotify.Write {
                    glog.Infof("modified file %s, calling watcher func", event.Name)
                    if !watcherFunc(nil) {
                        close(done)
                    }
                }
            case err := <-watcher.Errors:
                glog.Errorf("Got error watching %s, calling watcher func", err)
                if !watcherFunc(err) {
                    close(done)
                }
            }
        }
    }()

    glog.Infof("Start watching file %s", filename)
    err = watcher.Add(filename)
    if err != nil {
        glog.Errorf("Got error adding watcher %s", err)
    }
    <-done
}
Then I thought it would be nice to have a test for that, so I started out with a simple test case:
import (
    "io/ioutil"
    "os"
    "testing"
)

func TestStuff(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "test")
    if err != nil {
        t.Fatal("Failed to create tmp file")
    }
    defer os.Remove(tmpfile.Name())

    watcherFunc := func(err error) bool {
        return false
    }

    WatchFileChanges(tmpfile.Name(), watcherFunc)
}
What I wanted to do here is make a few modifications to the file, collect the events in an array, then return false from the watcherFunc and assert on the array. The thing is, of course, the test just hangs and waits for events once the goroutine is started.
Is there any way to test a function like this, e.g. starting a different thread (?) that updates/modifies the file?
Is there any way to test a function like this, e.g. starting a different thread (?) that updates/modifies the file?
Of course... start a goroutine that does the updates you want.
func TestStuff(t *testing.T) {
    tmpfile, err := ioutil.TempFile("", "test")
    if err != nil {
        t.Fatal("Failed to create tmp file")
    }
    defer os.Remove(tmpfile.Name())

    watcherFunc := func(err error) bool {
        return false
    }

    go func() {
        // Do updates here
    }()

    WatchFileChanges(tmpfile.Name(), watcherFunc)
}
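As an illustration of what could go inside that goroutine, the following sketch writes to the temp file after a short delay so the watcher has time to register first (the delay and content are arbitrary assumptions):

go func() {
    time.Sleep(100 * time.Millisecond) // give the watcher time to start
    if err := ioutil.WriteFile(tmpfile.Name(), []byte("new content"), 0644); err != nil {
        t.Errorf("failed to write to tmp file: %v", err)
    }
}()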
I set up a webserver, and I use my own package where I do some writes/reads from and to files. When the server gets a TCP connection, I start a separate goroutine to handle the request for each connection. In the request handler func, I call the func DoSomething() of some_package.
Here's the code for web_server.go:
package main

import (
    sp "./some_package"
    "log"
    "net"
    "net/http"
    "os"
)

func main() {
    l, err := net.Listen("tcp", "0.0.0.0"+":"+"4567")
    if err != nil {
        log.Println("Error listening:", err.Error())
        os.Exit(1)
    }
    defer l.Close()
    log.Println("Listening on 0.0.0.0:4567")

    go func() {
        for {
            // Listen for an incoming connection.
            conn, err := l.Accept()
            if err != nil {
                log.Println("Error accepting: ", err.Error())
                os.Exit(1)
            }
            // Handle connections in a new goroutine.
            go handlerFunction(conn)
        }
    }()

    log.Printf("Setting up the Webserver...")
    err = http.ListenAndServe("0.0.0.0:"+"4568", nil)
    if err != nil {
        log.Fatal(err)
    }
}

func handlerFunction(conn net.Conn) {
    defer conn.Close()
    sp.DoSomething()
}
The function DoSomething() reads and writes to a file. You can see the code where it is declared in the package:
package some_package

import (
    "io/ioutil"
    "log"
    "os"
    "strconv"
)

func IncrementValue(pastValue string) (newValue string) {
    newValueInt, _ := strconv.Atoi(pastValue)
    return strconv.Itoa(newValueInt + 1)
}

func DoSomething() (err error) {
    initialValue := "1"
    filename := "myFile.txt"
    if _, err := os.Stat(filename); err == nil {
        someText, err := ioutil.ReadFile(filename)
        if err != nil {
            log.Printf("Error reading")
            return err
        }
        newValue := IncrementValue(string(someText))
        err = ioutil.WriteFile(filename, []byte(newValue), 0644)
        if err != nil {
            return err
        }
    } else {
        err = ioutil.WriteFile(filename, []byte(initialValue), 0644)
        if err != nil {
            return err
        }
    }
    return
}
How can I use a locking mechanism like mutex.Lock and mutex.Unlock in this case, so that a routine that is currently writing stops the others from reading until it has written to the file successfully?
Is my example suitable for concurrent reading and writing of the file?
Is this the right approach to do so? Thank you.
You can't make the reading and writing of a file concurrent (well, it's possible, but not with the access pattern you're describing). Use a single mutex to serialize all access to your file:
var fileMutex sync.Mutex

func DoSomething() {
    fileMutex.Lock()
    defer fileMutex.Unlock()
    //...
}
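Applied to the DoSomething from the question, only the locking is new; the read-increment-write body stays as it was (a sketch):

var fileMutex sync.Mutex

func DoSomething() (err error) {
    fileMutex.Lock()
    defer fileMutex.Unlock()
    // ... the existing read/increment/write logic runs here,
    // now executed by at most one goroutine at a time.
    return
}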
I need to use password-authenticated scp to download a file from a server. How do I do so using Go? I tried the following code, but it doesn't pass in the password.
package main

import (
    "os/exec"
    "time"
)

func main() {
    password := "password"
    cmd := exec.Command("scp", "admin@192.168.1.150:file", "file")
    in, err := cmd.StdinPipe()
    if err != nil {
        panic(err)
    }
    defer in.Close()

    out, err := cmd.StdoutPipe()
    if err != nil {
        panic(err)
    }
    defer out.Close()

    if err = cmd.Run(); err != nil {
        panic(err)
    }

    go func() {
        time.Sleep(10 * time.Second)
        _, err = in.Write([]byte(password + "\n"))
        if err != nil {
            panic(err)
        }
    }()
}
Edit: I ended up using the gexpect (github.com/ThomasRooney/gexpect) library.
package main

import (
    "log"

    "github.com/ThomasRooney/gexpect"
)

func main() {
    child, err := gexpect.Spawn("scp admin@192.168.1.150:file file")
    if err != nil {
        log.Fatalln(err)
    }
    child.Expect("password:")
    child.SendLine("password")
    child.Interact()
    child.Close()
}
The answer to this self-answered question might help:
Golang write input and get output from terminal process
At least, he mentions in the answer that he "was able to get ssh access working with a password", which is not mentioned explicitly in the question; that's probably why you didn't find it while searching the site.
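For completeness, here is a minimal sketch of password authentication done natively with the golang.org/x/crypto/ssh package, avoiding the external scp binary entirely. The host, user, password, and remote command are placeholders, and InsecureIgnoreHostKey is for illustration only:

package main

import (
    "io/ioutil"
    "log"

    "golang.org/x/crypto/ssh"
)

func main() {
    config := &ssh.ClientConfig{
        User:            "admin",
        Auth:            []ssh.AuthMethod{ssh.Password("password")},
        HostKeyCallback: ssh.InsecureIgnoreHostKey(), // do NOT use in production
    }
    client, err := ssh.Dial("tcp", "192.168.1.150:22", config)
    if err != nil {
        log.Fatalln(err)
    }
    defer client.Close()

    session, err := client.NewSession()
    if err != nil {
        log.Fatalln(err)
    }
    defer session.Close()

    // Fetch the remote file's contents over the session.
    out, err := session.Output("cat file")
    if err != nil {
        log.Fatalln(err)
    }
    if err := ioutil.WriteFile("file", out, 0644); err != nil {
        log.Fatalln(err)
    }
}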