I am trying to test my game server with WebSockets, but I am having problems using multiple clients.
Connecting every single one of them works fine, but when I start the game and send the first messages over the sockets, they throw a "ConnectionClosed" error.
The server code looks like this:
main :: IO ()
main = do
  state <- newMVar []
  putStrLn "listening"
  WS.runServer "127.0.0.1" 9000 $ \pen -> do
    c <- WS.acceptRequest pen
    putStrLn "user connected"
    cs <- takeMVar state
    let cs' = c : cs
    case Vector.fromList cs' :: Maybe (Vector.Vector 4 WS.Connection) of
      Just v -> do
        let us = Vector.imap
                   (\i x -> websocketUser x ("websocket user " ++ show i))
                   v
        putMVar state []
        g <- exampleGame us
        _ <- playGame g
        return ()
      Nothing -> putMVar state cs'
To test this, I opened 4 Chrome tabs and connected each one of them to the server. For every tab I connect, the server prints the "user connected" line, and after 4 connections the game should start, but instead all sockets get disconnected. It does work fine if I only connect one client, so I think the problem is that I am connecting all clients from the same machine. However, I don't know how to test this without using multiple devices. I already tried using different browsers and ports for each client.
My question:
Is this caused by connecting all clients from the same machine, with Chrome or the server reusing the existing connections or something similar? If so, how can I solve this so I can test my game locally?
Or is there something wrong with my Haskell code?
Libraries used: the websockets library and vector-sized.
See the documentation for the ServerApp type: once the IO action finishes, the underlying socket is closed. So, for each of your first three clients, the server accepts a connection, stores it in the state, and then the action finishes, closing the connection. Only the fourth client keeps its connection open, and it can't do anything useful with the other three connections, which have by then been closed.
If you replace the last line with:
Nothing -> do
  putMVar state cs'
  threadDelay (10 ^ 9) -- wait a while (threadDelay takes microseconds)
then that will probably get all four clients connected.
To fix this "for real", you could have the first three connections wait forever and then arrange for the fourth thread to kill them when the game is over.
However, I'm not sure this is the right architecture. Instead of having only the fourth connection thread running and polling all four connections, you probably want each connection thread to enter a loop that processes incoming messages from its client. These threads can modify a shared game state and/or directly broadcast messages to the other clients (like the sample "chat" program in the websockets documentation), or else queue up incoming messages for a separate game thread to process.
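A minimal sketch of that per-connection loop (the queue, ClientMsg, and gameLoop are my own illustrative names, not part of the question's code):

```haskell
-- Sketch only: each connection thread stays alive and feeds a shared
-- queue that a separate game thread consumes.
import Control.Concurrent.STM
  (TQueue, atomically, readTQueue, writeTQueue)
import Control.Monad (forever)
import qualified Network.WebSockets as WS

data ClientMsg = ClientMsg Int WS.DataMessage

serverApp :: TQueue ClientMsg -> Int -> WS.ServerApp
serverApp queue clientId pending = do
  conn <- WS.acceptRequest pending
  -- Looping here keeps the ServerApp action (and thus the socket) alive.
  forever $ do
    msg <- WS.receiveDataMessage conn
    atomically $ writeTQueue queue (ClientMsg clientId msg)

-- The game thread drains the queue instead of polling connections.
gameLoop :: TQueue ClientMsg -> IO ()
gameLoop queue = forever $ do
  ClientMsg i _msg <- atomically (readTQueue queue)
  putStrLn ("message from websocket user " ++ show i)
```

This keeps each socket open for the lifetime of its client while the game logic lives in one place.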
Related
I would like to reuse an HTTPS connection for many requests, to keep latency as low as possible for every request to the same host. Using the Haskell package req, if I send too many requests in a short period of time, it seems I reach a connection limit for the Manager and a new connection is started, showing higher latency. I can't change the Manager's connection limit in the req package to fix this. Instead, the package documentation recommends using withReqManager to reuse connections, but I can't work out how to use this function. Could someone explain how to use it so that a connection is always reused for an explicit series of requests, please?
Another example where the connection is not reused is when too much time passes between requests to the same host. When I delay a request by 5 seconds the connection is reused, but if I delay by 60 seconds it is not. If someone could provide an example using withReqManager to reuse the connection every time I run ttst, I'd really appreciate it.
lttest :: IO ()
lttest = do
    ttst
    threadDelay 5000000
    ttst
    threadDelay 60000000
    ttst
  where
    ttst = do
      metm <- getCurrentTime
      runReq defaultHttpConfig { httpConfigCheckResponse = \_ _ _ -> Nothing } $ do
        v <- req GET (https "ifconfig.me") NoReqBody lbsResponse mempty
        liftIO $ print (responseBody v :: Data.ByteString.Lazy.ByteString)
      metm2 <- getCurrentTime
      print (diffUTCTime metm2 metm)
Edit: I think I may have found a way to ensure that the connection is reused, but it requires a small hack and not using the req package. As @WillemVanOnsem commented, the server will likely close the connection after some time, so I have to send a dummy request every few seconds to the same host to keep the connection alive. But then I still need a way to keep the connection from being dropped when many requests are sent over a short period. The wreq package has a module called Network.Wreq.Session. This module allows you to initialize a Session and do all of your requests on the same connection by passing the same Session to each request. So far this seems to be working. An important note: your dummy request, or any other request using the same Session, should not occur at the same time as another one. If two ever do occur at the same time, the connection won't be reused that time.
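For reference, a minimal sketch of the Session approach described above (the URL and delay values are just examples, and sessionDemo is my own name):

```haskell
-- Sketch of the Network.Wreq.Session workaround described above.
import Control.Concurrent (forkIO, threadDelay)
import Control.Monad (forever, void)
import qualified Network.Wreq.Session as Sess

sessionDemo :: IO ()
sessionDemo = do
  sess <- Sess.newSession
  -- Keep-alive: a dummy request every few seconds stops the server
  -- from closing the idle connection.
  _ <- forkIO $ forever $ do
    void $ Sess.get sess "https://ifconfig.me"
    threadDelay 5000000 -- 5 s, below the server's idle timeout
  -- Real requests passed the same Session reuse the connection.
  r <- Sess.get sess "https://ifconfig.me"
  print r
```

As noted above, the dummy request must not fire at the same time as a real one, so a robust version would need some coordination (e.g. an MVar acting as a lock around the Session).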
The WebSockets library contains an open issue about sending messages from multiple threads.
As an example I took a look at websocket-shootout, and noticed a forked thread for receiveData.
void $ fork $ silentLoop (Unagi.readChan readEnd >>= Ws.sendTextData conn)
silentLoop $ do
  msg <- Ws.receiveData conn
  case parseMsg msg of
    Nothing -> Ws.sendClose conn ("Invalid message" :: LByteString)
    Just Echo -> Ws.sendTextData conn msg
    Just (Broadcast res) -> do
      Unagi.writeChan writeEnd msg
      Ws.sendTextData conn res
I was under the impression from the open issue that this would cause issues.
Would it be safe to assume that it's only unsafe to use sendTextData from more than one thread?
In my actual backend server, I'm creating 3 threads per connection:
Ping thread via withPingThread
"Consumer" thread, where it polls with receiveData like the above example
"Producer" thread, where it polls from a TQueue of messages for a given connection, and sends the message via sendTextData.
This thread allows multiple threads to queue up messages for a single connection, while only a single thread (this one) sends text data to the client (except that the consumer thread can also send text data in response to receiveData).
Are there any obvious mistakes with my approach?
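The producer thread described in point 3 might look roughly like this (all names here are mine, not from my actual backend):

```haskell
-- Sketch: one writer thread per connection serialises all sends.
import Control.Concurrent.STM
  (TQueue, atomically, readTQueue, writeTQueue)
import Control.Monad (forever)
import qualified Data.Text as T
import qualified Network.WebSockets as WS

-- Only this thread calls sendTextData on the connection.
producer :: WS.Connection -> TQueue T.Text -> IO ()
producer conn queue = forever $ do
  msg <- atomically (readTQueue queue)
  WS.sendTextData conn msg

-- Any other thread enqueues instead of touching the connection.
send :: TQueue T.Text -> T.Text -> IO ()
send queue = atomically . writeTQueue queue
```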
Note that the reported issue is only a problem if compression is used. The websocket-shootout example uses Ws.defaultConnectionOptions which means compression is disabled. As long as you also leave compression disabled, you shouldn't run into any problems with this issue.
The first line of documentation of the Network module reads:
This module is kept for backwards-compatibility. New users are encouraged to use Network.Socket instead.
The Network library has functions that make it convenient to send and receive Strings, for instance:
h <- connectTo "localhost" (PortNumber 9090)
-- ...
line <- hGetLine h
Without using this library (and using Network.Socket instead), the code above will become something like:
addrinfos <- getAddrInfo Nothing (Just "localhost") (Just "9090")
let serveraddr = head addrinfos
sock <- socket (addrFamily serveraddr) Stream defaultProtocol
connect sock (addrAddress serveraddr)
-- ...
msg <- recv sock size
-- What should `size` be? The line above probably has to be repeated
-- until all the data has been received.
This is quite low level and requires encoding/decoding Strings. So my question is: if all I want to do is send and receive strings (with some encoding) over a socket, what are the alternatives for accomplishing this task, given that Network is not an option?
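For completeness, the closest I've found in Network.Socket itself is socketToHandle, which yields an ordinary Handle so hGetLine/hPutStrLn keep working (sketch, assuming a line-oriented protocol; connectLine is my own name):

```haskell
-- Sketch: get a Handle from a Socket so hGetLine still works.
import Network.Socket
import System.IO

connectLine :: HostName -> ServiceName -> IO Handle
connectLine host port = do
  let hints = defaultHints { addrSocketType = Stream }
  -- Resolve the address, open and connect a stream socket...
  addr:_ <- getAddrInfo (Just hints) (Just host) (Just port)
  sock <- socket (addrFamily addr) (addrSocketType addr) (addrProtocol addr)
  connect sock (addrAddress addr)
  -- ...then wrap it in a buffered Handle.
  h <- socketToHandle sock ReadWriteMode
  hSetBuffering h LineBuffering
  return h

-- usage:
--   h <- connectLine "localhost" "9090"
--   line <- hGetLine h
```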
EDIT: although pipes and conduits are good options, the project I'm working on makes heavy use of these Network functions, so I ended up developing a library for sending and receiving Text using functions similar to those in the Network library.
I want to put together the basics of asynchronous UDP IPC in Haskell. For this, the sender/receiver should issue e.g. an asynchronous receive (or send, depending on which side you view it from) thread and carry on with other tasks.
This might involve defining a new data type that consists of optional message/data serial numbers and some sort of buffer, so that the send thread can stop sending when it gets a notification from the receiver that it cannot keep up.
I aim to make this as lightweight and asynchronous as possible.
I have tried a number of things, such as starting a new receive thread for every packet (I took this approach from a paper about multiplayer online games), but this ground almost everything to a halt.
Below is my innocent first take on this. Any help on e.g. creating buffers, creating serial numbers, or a DCCP implementation in Haskell (which I could not find) is appreciated. I would not like to get into opinionated discussions about UDP vs. TCP etc.
My snippet stops working once something gets out of sync, e.g. when no data arrives any more or when less data arrives than expected. As said, I am looking for some lightweight (featherweight :D) way of syncing the send and receive threads, or an example of such.
main = withSocketsDo $ do
  s <- socket AF_INET Datagram defaultProtocol
  hostAddr <- inet_addr host
  done <- newEmptyMVar
  let p = B.pack "ping"
  thread <- forkIO $ receiveMessages s done
  forM_ [0 .. 10000] $ \i ->
    sendAllTo s (B.pack "ping") (SockAddrInet port hostAddr)
  takeMVar done
  killThread thread
  sClose s
  return ()

receiveMessages :: Socket -> MVar () -> IO ()
receiveMessages socket done = do
  forM_ [0 .. 10000] $ \i -> do
    r <- recvFrom socket 1024
    print r -- this is a placeholder to make the fun complete
  putMVar done ()
If you don't trust your messenger, you can never agree on anything -- not even a single bit like "are we done yet"!
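That said, a pragmatic stop-and-wait scheme is still useful in practice: retransmit each datagram until an acknowledgement arrives, with a timeout so neither side blocks forever. A sketch (the names, the 100 ms value, and the "ack" wire format are all mine):

```haskell
-- Sketch: imperfect-but-practical stop-and-wait over UDP.
import qualified Data.ByteString.Char8 as B
import Network.Socket (Socket, SockAddr)
import Network.Socket.ByteString (recvFrom, sendTo)
import System.Timeout (timeout)

sendReliably :: Socket -> SockAddr -> B.ByteString -> IO ()
sendReliably sock addr payload = loop
  where
    loop = do
      _ <- sendTo sock payload addr
      -- wait up to 100 ms for an ack, otherwise retransmit
      mack <- timeout 100000 (recvFrom sock 1024)
      case mack of
        Just (ack, _) | ack == B.pack "ack" -> return ()
        _                                   -> loop
```

This never lets either thread block indefinitely, which addresses the "stops working once something gets out of sync" symptom, even though (per the point above) it cannot guarantee agreement.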
I've worked my way through Don Stewart's Roll your own IRC bot tutorial, and am playing around with some extensions to it. My current code is essentially the same as the "The monadic, stateful, exception-handling bot in all its glory"; it's a bit too long to paste here unless someone requests it.
Being a Comcast subscriber, it's particularly important that the bot be able to reconnect after periods of poor connectivity. My approach is to simply time the PING requests from the server, and if it goes without seeing a PING for a certain time, to try reconnecting.
So far, the best solution I've found is to wrap the hGetLine in the listen loop with System.Timeout.timeout. However, this seems to require defining a custom exception so that the catch in main can call main again, rather than return (). It also seems quite fragile to specify a timeout value for each individual hGetLine.
Is there a better solution, perhaps something that wraps an IO a like bracket and catch so that the entire main can handle network timeouts without the overhead of a new exception type?
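For concreteness, the timeout wrapper I have in mind looks roughly like this (the 240 s window and the names are arbitrary):

```haskell
-- Sketch: wrap each read in a timeout and convert expiry into a
-- custom exception that main's catch can use to reconnect.
import Control.Exception (Exception, throwIO)
import System.IO
import System.Timeout (timeout)

data NetTimeout = NetTimeout deriving Show
instance Exception NetTimeout

getLineWithTimeout :: Handle -> IO String
getLineWithTimeout h = do
  -- timeout takes microseconds; give up after 240 s without a line
  mline <- timeout (240 * 1000000) (hGetLine h)
  case mline of
    Nothing   -> throwIO NetTimeout
    Just line -> return line
```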
How about running a separate thread that performs all the reading and writing and takes care of periodically reconnecting the handle?
Something like this
input :: Chan Char
output :: Chan Char

putChar c = writeChan output c

keepAlive = forever $ do
  h <- connectToServer
  catch
    (forever $ do
       c <- readChan output
       timeout 4000 (hPutChar h c)
       return ())
    (\_ -> return ())
The idea is to encapsulate all the difficulty with periodically reconnecting into a separate thread.