Multithreading socket connections in Golang

Background
I'm creating a chatroom server in Golang. Every time a client connects to the server, I start the Client function as a new thread, so that each client gets its own thread that listens for its messages. I was able to create a simple client-server connection based on this tutorial, but now I am trying to create connections for multiple clients so that they can send messages to chatrooms.
Code explanation
By following this tutorial, it looks like I can create a thread by using go func() and wait for a connection to be made by receiving from a channel (<-newClient). I pass a bool value into the channel before calling the user function, because user runs forever and I want to accept other client connections. Each client connection will have its own user function running for it.
Problem
I don't know how to pass each user's connection variable to my other functions. I'm putting conn net in the arguments of my functions, but this is just a placeholder because I'm not sure of the proper way to do it.
Also, my go func() with the call to user() after the channel operation is my best attempt at multithreading, but I'm not sure that I'm thinking about it right.
Server.go
package main

import (
	"net"
	"fmt"
	"bufio"
	"strings"
	"container/list"
	"time"
)

type chatRoom struct {
	name     string
	messages list.List
	users    list.List
	lastUsed time.Time
}

var chatRooms *list.List // A list of chatrooms where each client can post messages; messages can be seen by all clients in the chatroom
var conn net.Conn

func user(conn net) {
	for {
		message, _ := bufio.NewReader(conn).ReadString('\n') // will listen for message to process ending in newline (\n)
		fmt.Print("Message Received:", string(message))      // output message received
		s := strings.Split(string(message), ":")
		if strings.Compare(s[0], "create") == 0 { // create a chat room
			create(conn, s[1])
		} else if strings.Compare(s[0], "list") == 0 { // list the current chatrooms
			msg = listRooms(conn)
		} else if strings.Compare(s[0], "join") == 0 { // join the user to a chat room
			join(conn, s[1])
		} else if strings.Compare(s[0], "leave") == 0 { // remove the user from a chatroom
			leave(conn, s[1])
		} else if strings.Compare(s[0], "message") == 0 { // send a message to a chatroom
			message(conn, s[1], s[2])
		}
	}
}

func main() {
	fmt.Println("Launching server...")
	this.userList = list.New()
	this.chatRooms = list.New()
	ln, _ := net.Listen("tcp", ":8081") // listen on all interfaces
	conn, _ := ln.Accept()              // accept connection on port
	for { // run loop forever (or until ctrl-c)
		go func() {
			newClient := make(chan bool)
			ln, _ := net.Listen("tcp", ":8081") // listen on all interfaces
			conn, _ := ln.Accept()              // accept connection on port
			newClient <- true
			user(conn)
		}
		<-newClient
	}
}

It is very important to understand a distinction here: this is concurrency, not multithreading, and these are goroutines, not threads. These concepts are not interchangeable. As to your core issue, there are some significant issues with your implementation:
You're starting a goroutine closing over variables that are shared across loop iterations, crucially conn. Every time you accept a connection, you overwrite the conn which is shared by every goroutine. You should instead pass the conn to your goroutine so it has its own local copy, or create a new conn variable in each loop iteration instead of re-using one.
You're starting a new listener in every loop iteration, which is going to fail because the old listener is still using that port. You don't need a new listener. Just keep calling ln.Accept on the existing listener to continue accepting new connections. Take a look at the introduction of the documentation for the net package, or check any code that uses listeners in Go for an example.
You're creating newClient inside the goroutine, then trying to reference it outside the goroutine. This won't even compile, and it's unclear what you are trying to do with this channel in the first place.
Take a look at some existing networking code - in the net or net/http libraries or some popular projects on GitHub - to see good examples of how to write a network application. Do some web searches for blog posts or tutorials or how-to's, there are tons out there. And definitely read the documentation for the packages you're using, it will help you a lot.

Related

Do I ever need to re-create listening socket?

Suppose I've created a socket, started listen()ing on it and run accept() in a loop to process incoming connections. I.e. smth like this:
s = socket();
bind(s, ...);
listen(s, ...);
loop {
    new_s = accept(s, ...);
    ... // do smth with new_s
}
For various reasons accept() can return an error, and most of these errors mean "this particular connection attempt failed, please carry on". Is there any scenario where you have to close the socket and start from scratch (i.e. make a new socket + bind + listen) in order to be (eventually) reachable by clients? Which error (returned from accept()) tells me that? I.e. should I ever structure my logic like this:
loop {
    loop {
        s = socket();
        bind(s, ...);
        listen(s, ...);
        if !error { break; }
        sleep(1second); // avoid busy loop
    }
    loop {
        new_s = accept(s, ...);
        if error {
            if error == ??? break; <--- which error code(s)?
            continue;
        }
        ... // do smth with new_s
    }
}
Notes:
Specifically I am looking at ENETDOWN (Linux) and WSAENETDOWN (Winsock2) -- looks like these happen when someone restarts the network (interface). Will my previously created socket continue accepting connections once network is up? I doubt it, but even if it is the case -- how to properly avoid busy accept loop?
Other platforms may have other error codes -- how to write a code that will work on all of them?
You don't need to recreate the listening socket if accept() fails on that listener (at least on Windows).
If one called bind on 0.0.0.0:(some port) - then you almost never need to worry about recreating the listening socket.
If one called bind on a specific IP address, and that IP address goes away, then you definitely need to recreate the listening socket (you aren't listening to anything anymore).

Node: Access Socket/Client Object Reference in ExpressJS On-Close

Get Socket Ref On Socket Close
Following this SO answer, I expect to be able to shut down the server and restart it again. To do that reliably, it sounds like I need to not just call close() on the server instance object but also on any remaining open sockets. Memory leaks are a concern, so I would like to store the socket instances in a Set and clean up expired connections as they close.
You need to
subscribe to the connection event of the server and add opened sockets to an array
keep track of the open sockets by subscribing to their close event and removing the closed ones from your array
call destroy on all of the remaining open sockets when you need to terminate the server
...
const sockets = new Set(); // use a set to easily get/delete sockets
...
function serve(compilations) {
    const location = './dist', directory = express.static(location);
    const server = app
        .use('/', log, directory)
        .get('*', (req, res) => res.redirect('/'))
        .listen(3000)
    ;
    server.on('connection', handleConnection); // subscribe to connections (#1)
    console.log(`serving...(${location})`);
    return server;
}
...
function handleSocketClose(socket, a, b, ...x) { // #param#socket === false (has-error sentinel) (#2.2)
    const closed = sockets.delete(socket);
    console.log(`HANDLE-SOCKET-CLOSE...(${sockets.size})`, closed, a, b); // outputs "HANDLE-SOCKET-CLOSE...(N) false undefined undefined"
}
function handleConnection(socket) {
    sockets.add(socket); // add socket/client to set (#1)
    console.log(`HANDLE-CONNECTION...(${sockets.size})`);
    socket.on('close', handleSocketClose); // subscribe to closures on individual sockets (#2.1)
}
From the SO answer (above), I have #1 & #3 completed. In fact, I even have, say, #2.1 done, as I am currently listening on each socket for the close event -- which does fire. The problem is, say, #2.2: "...and removing the closed ones from your array".
My expectation while implementing the handleSocketClose callback was that one of the argument values would be the socket object, itself, so that I could then cleanup the sockets collection.
Question
How in Laniakea can I obtain a reference to the socket in question at close time (without iteration and checking the connection status)?

Stop channels when ws not able to connect

I have the following code, which works OK. The issue is that when socket.Connect() fails to connect, I want to stop the process. I've tried with the following code, but it's not working, i.e. if the socket fails to connect the program still runs. What I want to happen is that if the connect fails, the process stops and the channe… what am I missing here?
func run(appName string) (err error) {
	done = make(chan bool)
	defer close(done)
	serviceURL, e := GetContext().getServiceURL(appName)
	if e != nil {
		err = errors.New("process failed" + e.Error())
		LogDebug("Exiting %v func[err =%v]", methodName, err)
		return err
	}
	url := "wss://" + serviceURL + route
	socket := gowebsocket.New(url)
	addPass(&socket, user, pass)
	socket.OnConnectError = OnConnectErrorHandler
	socket.OnConnected = OnConnectedHandler
	socket.OnTextMessage = socketTextMessageHandler
	socket.OnDisconnected = OnDisconnectedHandler
	LogDebug("In %v func connecting to URL %v", methodName, url)
	socket.Connect()
	jsonBytes, e := json.Marshal(payload)
	if e != nil {
		err = errors.New("build process failed" + e.Error())
		LogDebug("Exiting %v func[err =%v]", methodName, err)
		return err
	}
	jsonStr := string(jsonBytes)
	LogDebug("In %v Connecting to payload JSON is %v", methodName, jsonStr)
	socket.SendText(jsonStr)
	<-done
	LogDebug("Exiting %v func[err =%v]", methodName, err)
	return err
}

func OnConnectErrorHandler(err error, socket gowebsocket.Socket) {
	methodName := "OnConnectErrorHandler"
	LogDebug("Starting %v parameters [err = %v , socket = %v]", methodName, err, socket)
	LogInfo("Disconnected from server ")
	done <- true
}
The process should open one ws connection per process that runs for about 60-90 sec (like executing npm install), get the logs of the process via the web socket until it finishes, and of course handle issues that could happen, like a network error or an error running the process.
So, @Slabgorb is correct - if you look here (https://github.com/sacOO7/GoWebsocket/blob/master/gowebsocket.go#L87) you will see that the OnConnectErrorHandler is called synchronously during the execution of your call to Connect(). The Connect() function doesn't kick off a separate goroutine to handle the websocket until after the connection is fully established and the OnConnected callback has completed. So when you try to write to the unbuffered channel done, you are blocking the same goroutine that called into the run() function to begin with, and you deadlock yourself, because no goroutine will ever be able to read from the channel to unblock you.
So you could go with his solution and turn it into a buffered channel, and that will work, but my suggestion would be not to write to a channel for this sort of one-time flag behavior, but use close signaling instead. Define a channel for each condition you want to terminate run(), and in the appropriate websocket handler function, close the channel when that condition happens. At the bottom of run(), you can select on all the channels, and exit when the first one closes. It would look something like this:
package main

import "errors"

func run(appName string) (err error) {
	// first, define one channel per socket-closing-reason (DO NOT defer close these channels.)
	connectErrorChan := make(chan struct{})
	successDoneChan := make(chan struct{})
	surpriseDisconnectChan := make(chan struct{})

	// next, wrap calls to your handlers in a closure `https://gobyexample.com/closures`
	// that captures a reference to the channel you care about
	OnConnectErrorHandler := func(err error, socket gowebsocket.Socket) {
		MyOnConnectErrorHandler(connectErrorChan, err, socket)
	}
	OnDisconnectedHandler := func(err error, socket gowebsocket.Socket) {
		MyOnDisconnectedHandler(surpriseDisconnectChan, err, socket)
	}
	// ... declare any other handlers that might close the connection here

	// Do your setup logic here
	// serviceURL, e := GetContext().getServiceURL(appName)
	// . . .
	// socket := gowebsocket.New(url)
	socket.OnConnectError = OnConnectErrorHandler
	socket.OnConnected = OnConnectedHandler
	socket.OnTextMessage = socketTextMessageHandler
	socket.OnDisconnected = OnDisconnectedHandler

	// Prepare and send your message here...
	// LogDebug("In %v func connecting to URL %v", methodName, url)
	// . . .
	// socket.SendText(jsonStr)

	// now wait for one of your signalling channels to close.
	select { // this will block until one of the handlers signals an exit
	case <-connectErrorChan:
		err = errors.New("never connected :( ")
	case <-successDoneChan:
		socket.Close()
		LogDebug("mission accomplished! :) ")
	case <-surpriseDisconnectChan:
		err = errors.New("somebody cut the wires! :O ")
	}
	if err != nil {
		LogDebug("%v", err)
	}
	return err
}

// *Your* connect error handler will take an extra channel as a parameter
func MyOnConnectErrorHandler(done chan struct{}, err error, socket gowebsocket.Socket) {
	methodName := "OnConnectErrorHandler"
	LogDebug("Starting %v parameters [err = %v , socket = %v]", methodName, err, socket)
	LogInfo("Disconnected from server ")
	close(done) // signal we are done.
}
This has a few advantages:
1) You don't need to guess which callbacks happen in-process and which happen in background goroutines (and you don't have to make all your channels buffered 'just in case')
2) Selecting on the multiple channels lets you find out why you are exiting and maybe handle cleanup or logging differently.
Note 1: If you choose to use close signaling, you have to use a different channel for each source in order to avoid race conditions that might cause a channel to get closed twice from different goroutines (e.g. a timeout happens just as you get back a response, both handlers fire, and the second one to close the same channel causes a panic). This is also why you don't want to defer close all the channels at the top of the function.
Note 2: Not directly relevant to your question, but -- you don't need to close every channel - once all the handles to it go out of scope, the channel will get garbage collected whether or not it has been closed.
Ok, what is happening is the channel is blocking when you try to add something to it. Try initializing the done channel with a buffer (I used 1) like this:
done = make(chan bool, 1)

Troubles with gitlab scraping via golang

I'm a newbie in programming and I need help. I'm trying to write a GitLab scraper in Golang. Something goes wrong when I try to get information about projects in multithreading mode.
Here is the code:
func (g *Gitlab) getAPIResponce(url string, structure interface{}) error {
	responce, responce_error := http.Get(url)
	if responce_error != nil {
		return responce_error
	}
	ret, _ := ioutil.ReadAll(responce.Body)
	if string(ret) != "[]" {
		err := json.Unmarshal(ret, structure)
		return err
	}
	return errors.New(error_emptypage)
}

...

func (g *Gitlab) GetProjects() {
	projects_chan := make(chan Project, g.LatestProjectID)
	var waitGroup sync.WaitGroup
	queue := make(chan struct{}, 50)
	for i := g.LatestProjectID; i > 0; i-- {
		url := g.BaseURL + projects_url + "/" + strconv.Itoa(i) + g.Token
		waitGroup.Add(1)
		go func(url string, channel chan Project) {
			queue <- struct{}{}
			defer waitGroup.Done()
			var oneProject Project
			err := g.getAPIResponce(url, &oneProject)
			if err != nil {
				fmt.Println(err.Error())
			}
			fmt.Printf(".")
			channel <- oneProject
			<-queue
		}(url, projects_chan)
	}
	go func() {
		waitGroup.Wait()
		close(projects_chan)
	}()
	for project := range projects_chan {
		if project.ID != 0 {
			g.Projects = append(g.Projects, project)
		}
	}
}
And here is the output:
$ ./gitlab-auditor
latest project = 1532
Gathering projects...
.......................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................Get https://gitlab.example.com/api/v4/projects/563&private_token=SeCrEt_ToKeN: unexpected EOF
Get https://gitlab.example.com/api/v4/projects/558&private_token=SeCrEt_ToKeN: unexpected EOF
..Get https://gitlab.example.com/api/v4/projects/531&private_token=SeCrEt_ToKeN: unexpected EOF
Get https://gitlab.example.com/api/v4/projects/571&private_token=SeCrEt_ToKeN: unexpected EOF
.Get https://gitlab.example.com/api/v4/projects/570&private_token=SeCrEt_ToKeN: unexpected EOF
..Get https://gitlab.example.com/api/v4/projects/467&private_token=SeCrEt_ToKeN: unexpected EOF
Get https://gitlab.example.com/api/v4/projects/573&private_token=SeCrEt_ToKeN: unexpected EOF
................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
Every time it's different projects that fail, but their IDs are around 550. When I curl the links from the output, I get normal JSON back. When I run this code with queue := make(chan struct{}, 1) (i.e. single-threaded), everything is fine.
What can it be?
I would say this is not a very clear way to achieve concurrency.
What seems to be happening here is:
you create a buffered channel with a capacity of 50.
then you fire up 1532 goroutines.
the first 50 of them enqueue themselves and start processing; each time one does <-queue and frees up some space, a random goroutine from the rest manages to get onto the queue.
as people say in the comments, most likely you hit some limit around the time the blast has made it to around id 550, and GitLab's API gets angry at you and rate-limits.
then another goroutine is fired that will close the channel to notify the main goroutine.
the main goroutine reads the messages.
The talk Go Concurrency Patterns as well as the blog post Concurrency in Go might help.
Personally, I rarely use buffered channels. For your problem I would go like this:
define a number of workers.
have the main goroutine fire up the workers, each listening on a channel of ints, doing the API call, and writing to a channel of projects.
have the main goroutine send the project numbers to be fetched on the channel of ints and read from the channel of projects.
maybe rate-limit by firing a ticker and having main read from it before it sends the next request?
main closes the ints channel to notify the workers to die.
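A runnable sketch of that worker-pool shape, under the assumption that fetchProject stands in for the real API call (the rate-limiting ticker is left out for brevity):

```go
package main

import (
	"fmt"
	"sync"
)

// fetchProject is a hypothetical stand-in for g.getAPIResponce.
func fetchProject(id int) string {
	return fmt.Sprintf("project-%d", id)
}

func main() {
	const numWorkers = 5
	ids := make(chan int)        // main sends project numbers here
	results := make(chan string) // workers send fetched projects here

	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for id := range ids { // workers exit when ids is closed
				results <- fetchProject(id)
			}
		}()
	}

	go func() {
		for i := 1; i <= 20; i++ {
			ids <- i
		}
		close(ids)     // tell the workers to die
		wg.Wait()      // wait until they have all finished
		close(results) // then end the range loop below
	}()

	count := 0
	for range results {
		count++
	}
	fmt.Println("fetched", count, "projects")
}
```

With 5 workers, at most 5 requests are ever in flight at once, so no buffered channels are needed and the server never sees a burst of 1500 simultaneous connections.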

Implementing the concept of "events" (with notifiers/receivers) in Golang?

I'm wondering what is the proper way to handle the concept of "events" (with notifiers/receivers) in Golang. I suppose I need to use channels but not sure about the best way.
Specifically, I have the program below with two workers. Under certain conditions, "worker1" goes in and out of a "fast mode" and notifies this via channels. "worker2" can then receive this event. This works fine, but the two workers are tightly coupled: in particular, if worker2 is not running, worker1 gets stuck waiting when writing to the channel.
What would be the best way in Golang to implement this logic? Basically, one worker does something and notifies any other workers that it has done so. Whether other workers listen to this event or not must not block worker1. Ideally, any number of workers could listen to this event.
Any suggestion?
var fastModeEnabled = make(chan bool)
var fastModeDisabled = make(chan bool)

func worker1() {
	mode := "normal"
	for {
		// under some conditions:
		mode := "fast"
		fastModeEnabled <- true
		// later, under different conditions:
		mode := "normal"
		fastModeDisabled <- true
	}
}

func worker2() {
	for {
		select {
		case <-fastModeEnabled:
			fmt.Println("Fast mode started")
		case <-fastModeDisabled:
			fmt.Println("Fast mode ended")
		}
	}
}

func main() {
	go worker2()
	go worker1()
	for {}
}
func main() {
go worker2()
go worker1()
for {}
}
Use a non-blocking write to the channel. This way if anyone is listening they receive it. If there is no one listening it doesn't block the sender, although the event is lost.
You could use a buffered channel so that at least some events are buffered if you need that.
You implement a non-blocking send by using the select keyword with a default case. The default makes it non-blocking. Without the default case a select will block until one of its channels becomes usable.
Code snippet:
select {
case ch <- event:
    sent = true
default:
    sent = false
}
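Wrapped into a runnable sketch (the trySend helper and channel names are illustrative):

```go
package main

import "fmt"

// trySend performs a non-blocking send: it reports whether the event was
// delivered (a receiver or buffer slot was ready) or dropped.
func trySend(ch chan string, event string) bool {
	select {
	case ch <- event:
		return true
	default:
		return false // nobody listening and no buffer space: drop, don't block
	}
}

func main() {
	events := make(chan string, 1) // one buffered slot

	fmt.Println(trySend(events, "fast mode on"))  // true: buffer has room
	fmt.Println(trySend(events, "fast mode off")) // false: buffer full, no receiver
	fmt.Println(<-events)                         // fast mode on
}
```

Here worker1 would call trySend instead of a plain send, so it never blocks on slow or absent listeners; the trade-off is that events can be lost when nobody is ready.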
