Close WebSocket Threadsafe - multithreading

Using System.Net.WebSockets.WebSocket I have a read loop in one thread:
var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
From another thread I write:
lock (ws)
    ws.SendAsync(new ArraySegment<byte>(buffer), WebSocketMessageType.Text, true, CancellationToken.None).Wait();
This works fine.
My question is, how can I cleanly close this connection?
Any attempt to close results in an exception:
await ws.CloseAsync(WebSocketCloseStatus.NormalClosure, "Bye", CancellationToken.None);
InvalidOperationException: 'Concurrent reads are not supported.'
I've tried passing a CancellationToken to ws.ReceiveAsync so that I could call ws.CloseAsync from inside the receive loop. That didn't interrupt the ReceiveAsync call; it only returned when the next message arrived.
Is it possible to cleanly close the socket from outside the read loop?
I could implement a "Close connection" message, but that seems like overkill when the WebSocket protocol already has the concept of sending a close message.

I don't see a concurrent read in this code - if you are calling ReceiveAsync only from a single thread, and you await the result of the operation before starting the next one, you should not get this exception. Where is it thrown?
Suggestion for the writing part: if you use a lock and a blocking write (with .Wait()) you are blocking a whole thread (which could otherwise be running other work), which might not be what you want. Better to use a SemaphoreSlim instead, where you can do:
await semaphore.WaitAsync();
try
{
    await ws.SendAsync(new ArraySegment<byte>(buffer), WebSocketMessageType.Text, true, CancellationToken.None);
}
finally
{
    semaphore.Release();
}
If the text of the error message is not precise and it actually complains about a concurrent Send and Close, then you could use the same semaphore to protect the Close call as well:
await semaphore.WaitAsync();
try
{
    await ws.CloseAsync(WebSocketCloseStatus.NormalClosure, "Bye", CancellationToken.None);
}
finally
{
    semaphore.Release();
}

In that case, you'll need to call CloseOutputAsync and handle the CloseSent message type in your reads. – Stephen Cleary Mar 14 at 13:49
You can call CloseOutputAsync while another thread is waiting on ReceiveAsync.
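To make that concrete, here is a minimal sketch of a shutdown initiated outside the read loop (the method shapes are my assumptions, not code from the question): one thread starts the closing handshake with CloseOutputAsync, and the read loop, which may be blocked in ReceiveAsync at that moment, finishes the handshake when it sees the Close frame.
// Hedged sketch: CloseOutputAsync only writes the close frame and does not
// read, so it is safe to call while ReceiveAsync is pending on another thread.
// If you guard writes with a semaphore (as suggested above), take it here too.
async Task ShutdownAsync(WebSocket ws)
{
    await ws.CloseOutputAsync(WebSocketCloseStatus.NormalClosure, "Bye", CancellationToken.None);
}

async Task ReadLoopAsync(WebSocket ws, byte[] buffer)
{
    while (ws.State == WebSocketState.Open || ws.State == WebSocketState.CloseSent)
    {
        var result = await ws.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
        if (result.MessageType == WebSocketMessageType.Close)
            break; // the peer answered our close frame; the handshake is complete
        // ... handle data frames ...
    }
}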

How do I avoid a race condition with Node.js's process.send?

What exactly happens when a child process (created by child_process.fork()) in Node sends a message to its parent (process.send()) before the parent has an event handler for the message (child.on("message",...))? (It seems, at least, like there must be some kind of buffer.)
In particular, I'm faced with what seems like an unavoidable race condition - I cannot install a message handler on a child process until after I've finished the call to fork, but the child could potentially send me (the parent) a message right away. What guarantee do I have that, assuming a particularly horrible interleaving of OS processes, I will receive all messages sent by my child?
Consider the following example code:
parent.js:
const child_process = require("child_process");
const child_module = require.resolve("./child");
const run = async () => {
    console.log("parent start");
    const child = child_process.fork(child_module);
    await new Promise(resolve => setTimeout(resolve, 40));
    console.log("add handler");
    child.on("message", (m) => console.log("parent receive:", m));
    console.log("parent end");
};
run();
child.js:
console.log("child start");
process.send("123abc");
console.log("child end");
In the above, I'm hoping to simulate a "bad interleaving" by preventing the message handler from being installed for a few milliseconds (suppose that a context switch takes place immediately after the fork, and that some other processes run for a while before the parent's node.js process can be scheduled again). In my own testing, the parent seems to "reliably" receive the message with numbers << 40ms (e.g. 20ms), but for values >35ms, it's flaky at best, and for values >> 40ms (e.g. 50 or 60), the message is never received. What's special about these numbers - just how fast the processes are being scheduled on my machine?
It seems to be independent of whether the handler is installed before or after the message is sent. For example, I've observed both of the following executions with the timeout set to 40 milliseconds. Notice that in each one, the child's "end" message (indicating that the process.send() has already happened) comes before "add handler". In one case, the message is received, but in the next, it's lost. It's possible, I suppose, that buffering of the standard output of these processes could cause these outputs to misrepresent the true execution - is that what's going on here?
Execution A:
parent start
child start
child end
add handler
parent end
parent receive: 123abc
Execution B:
parent start
child start
child end
add handler
parent end
In short - is there a solution to this apparent race condition? I seem to be able to "reliably" receive messages as long as I install a handler "soon" enough - but am I just getting lucky, or is there some guarantee that I'm getting? How do I ensure, without relying on luck, that this code will always work (barring cosmic rays, spilled coffee, etc...)? I can't seem to find any detail about how this is supposed to work in the Node documentation.
What exactly happens when a child process (created by child_process.fork()) in Node sends a message to its parent (process.send()) before the parent has an event handler for the message (child.on("message",...))? (It seems, at least, like there must be some kind of buffer.)
First off, the arrival of a message from another process goes into the nodejs event queue. It won't be processed until the current nodejs code finishes whatever it was doing and returns control back to the event loop, so that the next event in the event queue can be processed. If that moment arrives before there is any listener for that incoming event, then it is received and then thrown away. The message arrives, the code looks for any registered event handlers, and if there are none, then it's done. It's the same as if you call eventEmitter.emit("someMsg", data) and there are no listeners for "someMsg". But read on, there is hope for your specific situation.
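To illustrate that last point, a small self-contained sketch (mine, not from the answer): an emit with no registered listener is simply dropped.
const EventEmitter = require("events");
const emitter = new EventEmitter();

emitter.emit("someMsg", "lost"); // no listener registered yet: silently dropped

emitter.on("someMsg", (data) => console.log("received:", data));
emitter.emit("someMsg", "kept"); // prints "received: kept"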
In particular, I'm faced with what seems like an unavoidable race condition - I cannot install a message handler on a child process until after I've finished the call to fork, but the child could potentially send me (the parent) a message right away. What guarantee do I have that, assuming a particularly horrible interleaving of OS processes, I will receive all messages sent by my child?
Fortunately, due to the single-threaded, event-driven nature of nodejs, this is not a problem. You can install the message handler before there's any chance of the message arriving and being processed. Even though the child may be started up and running independently, using other CPUs or interleaved with your process, the single-threaded nature and the event-driven architecture help you solve this problem.
If you do something like this:
const child = child_process.fork(child_module);
child.on("message", (m) => console.log("parent receive:", m));
Then you are guaranteed that your message handler will be installed before there's any chance of an incoming message being processed and you will not miss it. This is because the interpreter is busy running these two lines of code and does not return control back to the event loop until after these two lines of code are run. Therefore, no incoming message from the child_module can get processed before your child.on(...) handler is installed.
Now, if you purposely return to the event loop before installing your event handler, as your code does with the await here:
const run = async () => {
    console.log("parent start");
    const child = child_process.fork(child_module);
    // this await allows events in the event queue to be processed
    // while this function is suspended waiting for the await
    await new Promise(resolve => setTimeout(resolve, 40));
    console.log("add handler");
    child.on("message", (m) => console.log("parent receive:", m));
    console.log("parent end");
};
run();
then you have introduced a race condition of your own making, which can be avoided by just installing the event handler BEFORE the await, like this:
const run = async () => {
    console.log("parent start");
    // no events will be processed before these next three statements run
    const child = child_process.fork(child_module);
    console.log("add handler");
    child.on("message", (m) => console.log("parent receive:", m));
    await new Promise(resolve => setTimeout(resolve, 40));
    console.log("parent end");
};
run();

Run NodeJS event loop / wait for child process to finish

I'll start with a general description of the problem, then some more detail on why the usual approaches don't work. If you would like to read these abstracted explanations, go on. At the end I explain the greater problem and the specific application, so if you would rather read that, jump to "Actual application".
I am using a node.js child process to do some computationally intensive work. The parent process does its work, but at some point in the execution it reaches a place where it must have the information from the child process before continuing. Therefore, I am looking for a way to wait for the child process to finish.
My current setup looks somewhat like this:
importantDataCalculator = fork("./runtime");
importantDataCalculator.on("message", function (msg) {
    if (msg.type === "result") {
        importantData = msg.data;
    } else if (msg.type === "error") {
        importantData = null;
    } else {
        throw new Error("Unknown message from dataGenerator!");
    }
});
and somewhere else:
function getImportantData() {
    while (importantData === undefined) {
        // wait for the importantDataGenerator to finish
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}
So when the parent process starts, it executes the first bit of code, spawning a child process to calculate the data, and goes on doing its own bit of work. When the time comes that it needs the result from the child process to continue, it calls getImportantData(). So the idea is that getImportantData() blocks until the data is calculated.
However, the way I used doesn't work. I think this is because the while-loop prevents the event loop from executing. And since the event loop does not execute, no message from the child process can be received, and thus the condition of the while-loop cannot change, making it an infinite loop.
Of course, I don't really want to use this kind of while-loop. What I would rather do is tell node.js "execute one iteration of the event loop, then get back to me". I would do this repeatedly, until the data I need was received, and then continue the execution where I left off by returning from the getter.
I realize that this poses the danger of reentering the same function several times, but the module I want to use this in does almost nothing on the event loop except for waiting for this message from the child process and sending out other messages reporting its progress, so that shouldn't be a problem.
Is there a way to execute just one iteration of the event loop in Node.js? Or is there another way to achieve something similar? Or is there a completely different approach to achieve what I'm trying to do here?
The only solution I could think of so far is to change the calculation in such a way that I introduce yet another process. In this scenario, there would be the process calculating the important data, a process calculating the bits of data for which the important data is not needed, and a parent process for these two, which just waits for data from the two child processes and combines the pieces when they arrive. Since it does not have to do any computationally intensive work itself, it can just wait for events from the event loop (= messages) and react to them, forwarding the combined data as necessary and storing pieces of data that cannot be combined yet.
However, this introduces yet another process and even more inter-process communication, i.e. more overhead, which I would like to avoid.
Edit
I see that more detail is needed.
The parent process (let's call it process 1) is itself a process spawned by another process (process 0) to do some computationally intensive work. Actually, it just executes some code over which I don't have control, so I cannot make it work asynchronously. What I can do (and have done) is make the code that is executed regularly call a function to report its progress and provide partial results. This progress report is then sent back to the original process via IPC.
But in rare cases the partial results are not correct, so they have to be modified. To do so I need some data I can calculate independently of the normal calculation. However, this calculation could take several seconds; thus, I start another process (process 2) to do this calculation and provide the result to process 1 via an IPC message. Now processes 1 and 2 are happily calculating their stuff, and hopefully the corrective data calculated by process 2 is finished before process 1 needs it. But sometimes one of the early results of process 1 needs to be corrected, and in that case I have to wait for process 2 to finish its calculation. Blocking the event loop of process 1 is theoretically not a problem, since the main process (process 0) would not be affected by it. The only problem is that by preventing the further execution of code in process 1 I am also blocking the event loop, which prevents it from ever receiving the result from process 2.
So I need to somehow pause the further execution of code in process 1 without blocking the event loop. I was hoping that there was a call like process.runEventLoopIteration that executes an iteration of the event loop and then returns.
I would then change the code like this:
function getImportantData() {
    while (importantData === undefined) {
        process.runEventLoopIteration();
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}
thus executing the event loop until I have received the necessary data but NOT continuing the execution of the code that called getImportantData().
Basically what I'm doing in process 1 is this:
function callback(partialDataMessage) {
    if (partialDataMessage.needsCorrection) {
        getImportantData();
        // use data to correct message
        process.send(correctedMessage); // send corrected result to main process
    } else {
        process.send(partialDataMessage); // send unmodified result to main process
    }
}

function executeCode(code) {
    run(code, callback); // the callback will be called from time to time when the code produces new data
    // this call is synchronous, run is blocking until the calculation is finished
    // so if we reach this point we are done
    // the only way to pause the execution of the code is to NOT return from the callback
}
Actual application/implementation/problem
I need this behaviour for the following application. If you have a better approach to achieve this feel free to propose it.
I want to execute arbitrary code and be notified about what variables it changes, what functions are called, what exceptions occur etc. I also need the location of these events in the code to be able to display the gathered information in the UI next to the original code.
To achieve this, I instrument the code and insert callbacks into it. I then execute the code, wrapping the execution in a try-catch block. Whenever the callback is called with some data about the execution (e.g. a variable change) I send a message to the main process telling it about the change. This way, the user is notified about the execution of the code, while it is running. The location information for the events generated by these callbacks is added to the callback call during the instrumentation, so that is not a problem.
The problem appears when an exception occurs. I also want to notify the user about exceptions in the tested code. Therefore, I wrapped the execution of the code in a try-catch, and any exceptions that get out of the execution are caught and sent to the user interface. But the location of the errors is not correct. An Error object created by node.js has a complete call stack, so it knows where it occurred. But this location is relative to the instrumented code, so I cannot use this location information as-is to display the error next to the original code. I need to transform this location in the instrumented code into a location in the original code. To do so, after instrumenting the code, I calculate a source map to map locations in the instrumented code to locations in the original code. However, this calculation might take several seconds. So, I figured, I would start a child process to calculate the source map while the execution of the instrumented code is already started. Then, when an exception occurs, I check whether the source map has already been calculated, and if it hasn't, I wait for the calculation to finish to be able to correct the location.
Since the code to be executed and watched can be completely arbitrary, I cannot trivially rewrite it to be asynchronous. I only know that it calls the provided callback, because I instrumented the code to do so. I also cannot just store the message and return to continue the execution of the code, checking back during the next call whether the source map is finished, because continuing the execution of the code would also block the event loop, preventing the calculated source map from ever being received in the execution process. Or if it is received, then only after the code to execute has completely finished, which could be quite late or never (if the code to execute contains an infinite loop). But before I receive the source map I cannot send further updates about the execution state. Combined, this means I would only be able to send the corrected progress messages after the code to execute has finished (which might be never), which completely defeats the purpose of the program (to enable the programmer to watch what the code does while it executes).
Temporarily surrendering control to the event loop would solve this problem. However, that does not seem to be possible. The other idea I have is to introduce a third process which controls both the execution process and the sourceMapGeneration process. It receives progress messages from the execution process and if any of the messages needs correction it waits for the sourceMapGeneration process. Since the processes are independent, the controlling process can store the received messages and wait for the sourceMapGeneration process while the execution process continues executing, and as soon as it receives the source map, it corrects the messages and sends all of them off.
However, this would not only require yet another process (overhead), it also means I have to transfer the code once more between processes, and since the code can have thousands of lines, that in itself can take some time; so I would like to move it around as little as possible.
I hope this explains why I cannot and didn't use the usual "asynchronous callback" approach.
Adding a third ( :) ) solution to your problem, now that you've clarified what behavior you seek: I suggest using Fibers.
Fibers let you do coroutines in nodejs. Coroutines are functions that allow multiple entry/exit points. This means you will be able to yield control and resume it as you please.
Here is a sleep function from the official documentation that does exactly that: sleep for a given amount of time, then perform actions.
var Fiber = require("fibers"); // from the node-fibers package

function sleep(ms) {
    var fiber = Fiber.current;
    setTimeout(function() {
        fiber.run();
    }, ms);
    Fiber.yield();
}

Fiber(function() {
    console.log('wait... ' + new Date);
    sleep(1000);
    console.log('ok... ' + new Date);
}).run();

console.log('back in main');
You can place the code that does the waiting for the resource in a function, causing it to yield and then run again when the task is done.
For example, adapting your example from the question:
var pausedExecution, importantData;

function getImportantData() {
    while (importantData === undefined) {
        pausedExecution = Fiber.current;
        Fiber.yield();
        pausedExecution = undefined;
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}

function callback(partialDataMessage) {
    if (partialDataMessage.needsCorrection) {
        var theData = getImportantData();
        // use data to correct message
        process.send(correctedMessage); // send corrected result to main process
    } else {
        process.send(partialDataMessage); // send unmodified result to main process
    }
}

function executeCode(code) {
    // setup child process to calculate the data
    importantDataCalculator = fork("./runtime");
    importantDataCalculator.on("message", function (msg) {
        if (msg.type === "result") {
            importantData = msg.data;
        } else if (msg.type === "error") {
            importantData = null;
        } else {
            throw new Error("Unknown message from dataGenerator!");
        }
        if (pausedExecution) {
            // execution is waiting for the data
            pausedExecution.run();
        }
    });
    // wrap the execution of the code in a Fiber, so it can be paused
    Fiber(function () {
        runCodeWithCallback(code, callback); // the callback will be called from time to time when the code produces new data
        // this callback is synchronous and blocking,
        // but it will yield control to the event loop if it has to wait for the child-process to finish
    }).run();
}
Good luck! I always say it is better to solve one problem in 3 ways than to solve 3 problems the same way. I'm glad we were able to work out something that worked for you. Admittedly, this was a pretty interesting question.
The rule of asynchronous programming is, once you've entered asynchronous code, you must continue to use asynchronous code. While you can continue to call the function over and over via setImmediate or something of the sort, you still have the issue that you're trying to return from an asynchronous process.
Without knowing more about your program, I can't tell you exactly how you should structure it, but by and large the way to "return" data from a process that involves asynchronous code is to pass in a callback; perhaps this will put you on the right track:
function getImportantData(callback) {
    importantDataCalculator = fork("./runtime");
    importantDataCalculator.on("message", function (msg) {
        if (msg.type === "result") {
            callback(null, msg.data);
        } else if (msg.type === "error") {
            callback(new Error("Data could not be generated."));
        } else {
            callback(new Error("Unknown message from sourceMapGenerator!"));
        }
    });
}
You would then use this function like this:
getImportantData(function(error, data) {
    if (error) {
        // handle the error somehow
    } else {
        // `data` is the data from the forked process
    }
});
I talk about this in a bit more detail in one of my screencasts, Thinking Asynchronously.
What you are running into is a very common scenario that skilled programmers who are starting with nodejs often struggle with.
You're correct. You can't do this the way you are attempting (loop).
The main process in node.js is single threaded and you are blocking the event loop.
The simplest way to resolve this is something like:
function getImportantData() {
    if (importantData === undefined) { // not set yet
        setImmediate(getImportantData); // try again on the next event loop cycle
        return; // stop this attempt
    }
    if (importantData === null) {
        throw new Error("Data could not be generated.");
    } else {
        // we should have proper data now
        return importantData;
    }
}
What we are doing is re-attempting to process the data on the next iteration of the event loop, using setImmediate.
This introduces a new problem though: your function returns a value. Since the data will not be ready yet, the value you are returning is undefined. So you have to code reactively. You need to tell your code what to do when the data arrives.
This is typically done in node with a callback:
function getImportantData(err, whenDone) {
    if (importantData === undefined) { // not set yet
        // try again on the next event loop cycle, passing both callbacks along
        setImmediate(getImportantData.bind(null, err, whenDone));
        return; // stop this attempt
    }
    if (importantData === null) {
        err("Data could not be generated.");
    } else {
        // we should have proper data now
        whenDone(importantData);
    }
}
This can be used in the following way:
getImportantData(function(err) {
    throw new Error(err); // error handling function callback
}, function(data) { // this is whenDone in our case
    // perform actions on the important data
});
Your question (updated) is very interesting; it appears to be closely related to a problem I had with asynchronously catching exceptions. (Also, Brandon and I had an interesting discussion about it! It's a small world.)
See this question on how to catch exceptions asynchronously. The key concept is that you can use (assuming nodejs 0.8+) nodejs domains to constrain the scope of an exception.
This will allow you to easily get the location of the exception, since you can surround asynchronous blocks with an atry/catch. I think this should solve the bigger issue here.
You can find the relevant code in the linked question. The usage is something like:
atry(function() {
    setTimeout(function() {
        throw "something";
    }, 1000);
}).catch(function(err) {
    console.log("caught " + err);
});
Since you have access to the scope of atry, you can get the stack trace there, which would let you skip the more complicated source-map usage.
Good luck!

How to stop (or terminate) MPI_Recv after some particular time when there is a deadlock in MPI?

I am trying to detect deadlocks in MPI. Is there any method by which we can jump out of a function like MPI_Recv after a particular time?
MPI_Recv is a blocking function and will just sit there until it receives the data it is waiting for, so if you are looking to have it time out and error if things lock up then I don't think that's the one for you.
You could look into using MPI_Irecv, which is the non-blocking version. You could then emulate the blocking behaviour of MPI_Recv using MPI_Wait or MPI_Test.
If you use a combination of MPI_Irecv and MPI_Test you could make a snippet that waits to receive for a specified length of time, then errors if it hasn't. Rough example:
MPI_Irecv(..., &request); // start a receive request, non-blocking
time_t start_time = time(NULL); // get start time
MPI_Test(&request, &gotData, ...); // test: have we got it yet?
// loop until we have received, or taken too long
while (!gotData && difftime(time(NULL), start_time) < TIMEOUT_TIME) {
    // wait a bit.
    MPI_Test(&request, &gotData, ...); // test again
}
// By now we either have received the data, or taken too long, so...
if (!gotData) {
    // we must have timed out
    MPI_Cancel(&request);
    MPI_Request_free(&request);
    // throw an error
}
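For completeness, here is a fuller, self-contained version of that sketch (the message source, tag, and timeout are made-up values for illustration, and a real program would pick them to match its communication pattern):
#include <mpi.h>
#include <stdio.h>
#include <time.h>

#define TIMEOUT_TIME 10.0 /* seconds; an arbitrary choice for this sketch */

int main(int argc, char **argv)
{
    int data = 0, gotData = 0;
    MPI_Request request;
    MPI_Status status;

    MPI_Init(&argc, &argv);

    /* start a non-blocking receive: one int from rank 0, tag 0 */
    MPI_Irecv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &request);

    time_t start_time = time(NULL);
    MPI_Test(&request, &gotData, &status);
    /* poll until the message arrives or we exceed the timeout */
    while (!gotData && difftime(time(NULL), start_time) < TIMEOUT_TIME) {
        MPI_Test(&request, &gotData, &status);
    }

    if (!gotData) {
        /* timed out: cancel the pending receive and report the suspected deadlock */
        MPI_Cancel(&request);
        MPI_Request_free(&request);
        fprintf(stderr, "MPI_Irecv timed out after %.0f seconds\n", TIMEOUT_TIME);
    }

    MPI_Finalize();
    return 0;
}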

Scala: wake up sleeping thread

In Scala, how can I tell a thread: sleep t seconds, or until you receive a message? I.e., sleep at most t seconds, but wake up in case t is not over and you receive a certain message.
The answer depends greatly on what the message is. If you're using Actors (either the old variety or the Akka variety) then you can simply state a timeout value on receive. (React isn't really running until it gets a message, so you can't place a timeout on it.)
// Old style
receiveWithin(1000) {
    case msg: Message => // whatever
    case TIMEOUT => // Handle timeout
}

// Akka style
context.setReceiveTimeout(1.second)
def receive = {
    case msg: Message => // whatever
    case ReceiveTimeout => // handle timeout
}
Otherwise, what exactly do you mean by "message"?
One easy way to send a message is to use the Java concurrent classes made for exactly this kind of thing. For example, you can use a java.util.concurrent.SynchronousQueue to hold the message, and the receiver can call the poll method, which takes a timeout:
import java.util.concurrent.{SynchronousQueue, TimeUnit}

// Common variable
val q = new SynchronousQueue[String]

// Waiting thread: blocks for at most one second
val msg = q.poll(1000, TimeUnit.MILLISECONDS)

// Sending thread will also block until the receiver is ready to take it
q.offer("salmon", 1000, TimeUnit.MILLISECONDS)
An ArrayBlockingQueue is also useful in these situations (if you want the senders to be able to pack messages in a buffer).
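For example, a quick sketch (the capacity and timeout are arbitrary values of mine):
import java.util.concurrent.{ArrayBlockingQueue, TimeUnit}

val q = new ArrayBlockingQueue[String](16) // senders can buffer up to 16 messages

// Sending thread: returns immediately while the buffer has room
q.offer("salmon")

// Waiting thread: sleeps at most 1 second, or until a message arrives
val msg = q.poll(1000, TimeUnit.MILLISECONDS) // null if the timeout expires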
Alternatively, you can use condition variables.
val monitor = new AnyRef
var messageReceived: Boolean = false

// The waiting thread...
def waitUntilMessageReceived(timeout: Int): Boolean = {
    monitor synchronized {
        // The time-out handling here is simplified for the purpose
        // of exhibition. The "wait" may wake up spuriously for no
        // apparent reason. So in practice, this would be more complicated,
        // actually.
        while (!messageReceived) monitor.wait(timeout * 1000L)
        messageReceived
    }
}

// The thread which sends the message...
def sendMessage: Unit = monitor synchronized {
    messageReceived = true
    monitor.notifyAll()
}
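Putting the two together might look like this (the 100 ms delay is an arbitrary value of mine):
// Another thread delivers the "message" after 100 ms...
new Thread(new Runnable {
    def run(): Unit = { Thread.sleep(100); sendMessage }
}).start()

// ...so this call returns true well before its 5-second timeout expires.
val woken = waitUntilMessageReceived(5)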
Check out Await. If you have some Awaitable objects then that's what you need.
Instead of making it sleep for a given time, make it only wake up on a Timeout() msg and then you can send this message prematurely if you want it to "wake up".
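With the old-style actors shown above, a sketch of that idea might look like this (the WakeUp message and the 5-second timeout are my inventions, not from the answer):
import scala.actors.Actor._
import scala.actors.TIMEOUT

case object WakeUp

val sleeper = actor {
    receiveWithin(5000) {
        case WakeUp  => // woken up prematurely by a message
        case TIMEOUT => // slept the full 5 seconds
    }
}

// From another thread, end the sleep early:
sleeper ! WakeUp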

TcpClient and StreamReader blocks on Read

Here's my situation:
I'm writing a chat client to connect to a chat server. I create the connection using a TcpClient and get a NetworkStream object from it. I use a StreamReader and StreamWriter to read and write data back and forth.
Here's what my read looks like:
public string Read()
{
    StringBuilder sb = new StringBuilder();
    try
    {
        int tmp;
        while (true)
        {
            tmp = StreamReader.Read();
            if (tmp == 0)
                break;
            else
                sb.Append((char)tmp);
            Thread.Sleep(1);
        }
    }
    catch (Exception ex)
    {
        // log exception
    }
    return sb.ToString();
}
That works fine and dandy. In my main program I create a thread that continually calls this Read method to see if there is data. An example is below.
private void Listen()
{
    try
    {
        while (IsShuttingDown == false)
        {
            string data = Read();
            if (!string.IsNullOrEmpty(data))
            {
                // do stuff
            }
        }
    }
    catch (ThreadInterruptedException ex)
    {
        // log it
    }
}
...
Thread listenThread = new Thread(new ThreadStart(Listen));
listenThread.Start();
This works just fine. The problem comes when I want to shut down the application. I receive a shut down command from the UI, and tell the listening thread to stop listening (that is, stop calling this read function). I call Join and wait for this child thread to stop running. Like so:
// tell the thread to stop listening and wait for a sec
IsShuttingDown = true;
Thread.Sleep(TimeSpan.FromSeconds(1.00));

// if we've reached here and the thread is still alive,
// interrupt it and tell it to quit
if (listenThread.IsAlive)
    listenThread.Interrupt();

// wait until thread is done
listenThread.Join();
The problem is it never stops running! I stepped into the code and the listening thread is blocking because the Read() method is blocking. Read() just sits there and doesn't return. Hence, the thread never gets a chance to sleep that 1 millisecond and then get interrupted.
I'm sure if I let it sit long enough I'd get another packet and the thread would get a chance to sleep (if it's an active chatroom, or I get a ping from the server). But I don't want to depend on that. If the user says shut down, I want to shut it down!!
One alternative I found is to use the DataAvailable property of NetworkStream so that I could check it before I called StreamReader.Read(). This didn't work because it was undependable and I lost data when reading packets from the server. (Because of that I wasn't able to log in correctly, etc., etc.)
Any ideas on how to shutdown this thread gracefully? I'd hate to call Abort() on the listening thread...
Really the only answer is to stop using Read and switch to using asynchronous operations (i.e. BeginRead). This is a harder model to work with, but it means no thread is blocked (and you don't need to dedicate a thread, a very expensive resource, to each client even if the client is not sending any data).
By the way, using Thread.Sleep in concurrent code is a bad smell (in the Refactoring sense); it usually indicates deeper problems (in this case, that you should be doing asynchronous, non-blocking operations).
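To sketch what that change might look like (the class shape, field names, and text encoding are my assumptions, not code from the question): start an asynchronous read, handle the data in the callback, and re-arm; shutdown then simply closes the stream, which completes the pending read with no thread to interrupt.
// Hedged sketch of a BeginRead-based receive loop.
private NetworkStream stream; // obtained from TcpClient.GetStream()
private readonly byte[] buffer = new byte[1024];

private void StartRead()
{
    stream.BeginRead(buffer, 0, buffer.Length, OnRead, null);
}

private void OnRead(IAsyncResult ar)
{
    int count;
    try
    {
        count = stream.EndRead(ar); // returns 0 when the remote side closes
    }
    catch (Exception) // ObjectDisposedException/IOException after a local Close()
    {
        return; // we are shutting down; just stop
    }
    if (count == 0)
        return;
    string data = Encoding.ASCII.GetString(buffer, 0, count); // encoding assumed
    // ... do stuff with data ...
    StartRead(); // re-arm for the next chunk
}

public void Shutdown()
{
    stream.Close(); // completes the pending BeginRead immediately
}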
Are you actually using System.IO.StreamReader and System.IO.StreamWriter to send and receive data from the socket? I wasn't aware this was possible. I've only ever used the Read() and Write() methods on the NetworkStream object returned by the TcpClient's GetStream() method.
Assuming this is possible, StreamReader returns -1 when the end of the stream is reached, not 0. So it looks to me like your Read() method is in an infinite loop.
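If that is indeed the bug, the minimal correction to the loop in Read() would be:
tmp = StreamReader.Read();
if (tmp == -1)
    break; // StreamReader.Read() signals end of stream with -1, not 0
else
    sb.Append((char)tmp);
Note that this only fixes the end-of-stream test; Read() still blocks between messages, which is the shutdown problem the BeginRead suggestion above addresses.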
