I have a client application that receives a video stream from a server via a UDP or TCP socket.
Originally, when it was written for .NET 2.0, the code used BeginReceive/EndReceive and IAsyncResult.
The client displays each video in its own window and also uses its own thread for communicating with the server.
However, since the client is supposed to stay up for a long period of time, and there may be 64 video streams running simultaneously, there is a "memory leak" of IAsyncResult objects, which are allocated each time the data-receive callback is called.
This eventually causes the application to run out of memory, because the GC can't release the blocks in time. I verified this using the VS 2010 Performance Analyzer.
So I modified the code to use SocketAsyncEventArgs and ReceiveFromAsync (UDP case).
However, I still see a growth in memory blocks at:
System.Net.Sockets.Socket.ReceiveFromAsync(class System.Net.Sockets.SocketAsyncEventArgs)
I've read all the samples and posts about implementing this, and still have no solution.
Here's what my code looks like:
// class data members
private byte[] m_Buffer = new byte[UInt16.MaxValue];
private SocketAsyncEventArgs m_ReadEventArgs = null;
private IPEndPoint m_EndPoint; // local endpoint from the caller
Initializing:
m_Socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
m_Socket.Bind(m_EndPoint);
m_Socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReceiveBuffer, MAX_SOCKET_RECV_BUFFER);
//
// initialize the socket event args structure.
//
m_ReadEventArgs = new SocketAsyncEventArgs();
m_ReadEventArgs.Completed += new EventHandler<SocketAsyncEventArgs>(readEventArgs_Completed);
m_ReadEventArgs.SetBuffer(m_Buffer, 0, m_Buffer.Length);
m_ReadEventArgs.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
m_ReadEventArgs.AcceptSocket = m_Socket;
Starting the read process:
bool waitForEvent = m_Socket.ReceiveFromAsync(m_ReadEventArgs);
if (!waitForEvent)
{
readEventArgs_Completed(this, m_ReadEventArgs);
}
Read completion handler:
private void readEventArgs_Completed(object sender, SocketAsyncEventArgs e)
{
if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success)
{
//
// we got error on the socket or connection was closed
//
Close();
return;
}
try
{
// try to process a new video frame if enough data was read
base.ProcessPacket(m_Buffer, e.Offset, e.BytesTransferred);
}
catch (Exception ex)
{
// log the error
}
bool willRaiseEvent = m_Socket.ReceiveFromAsync(e);
if (!willRaiseEvent)
{
readEventArgs_Completed(this, e);
}
}
Basically the code works fine and I see the video streams perfectly, but this leak is a real pain.
Did I miss anything?
Many thanks!
Instead of recursively calling readEventArgs_Completed when !willRaiseEvent, use a goto (or an equivalent loop) to return to the top of the method. I noticed I was slowly chewing up stack space when I had a pattern similar to yours.
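For illustration, here is a minimal sketch of the loop form of that suggestion, reusing the identifiers from the question's code (the loop is equivalent to the goto version):

private void readEventArgs_Completed(object sender, SocketAsyncEventArgs e)
{
    do
    {
        if (e.BytesTransferred == 0 || e.SocketError != SocketError.Success)
        {
            // we got an error on the socket or the connection was closed
            Close();
            return;
        }
        try
        {
            // try to process a new video frame if enough data was read
            base.ProcessPacket(m_Buffer, e.Offset, e.BytesTransferred);
        }
        catch (Exception)
        {
            // log the error
        }
    }
    // ReceiveFromAsync returns false when it completed synchronously:
    // iterate here instead of adding a stack frame per completion.
    while (!m_Socket.ReceiveFromAsync(e));
}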
I'm connecting to Azure Redis, and Azure shows me the number of open connections to my Redis server. I've got the following C# code that encloses all my Redis sets and gets. Should this be leaking connections?
using (var connectionMultiplexer = ConnectionMultiplexer.Connect(connectionString))
{
lock (Locker)
{
redis = connectionMultiplexer.GetDatabase();
}
var o = CacheSerializer.Deserialize<T>(redis.StringGet(cacheKeyName));
if (o != null)
{
return o;
}
lock (Locker)
{
// get lock but release if it takes more than 60 seconds to complete to avoid deadlock if this app crashes before release
//using (redis.AcquireLock(cacheKeyName + "-lock", TimeSpan.FromSeconds(60)))
var lockKey = cacheKeyName + "-lock";
if (redis.LockTake(lockKey, Environment.MachineName, TimeSpan.FromSeconds(10)))
{
try
{
o = CacheSerializer.Deserialize<T>(redis.StringGet(cacheKeyName));
if (o == null)
{
o = func();
redis.StringSet(cacheKeyName, CacheSerializer.Serialize(o),
TimeSpan.FromSeconds(cacheTimeOutSeconds));
}
return o; // the lock is released by the finally block below
}
finally
{
redis.LockRelease(lockKey, Environment.MachineName);
}
}
return o;
}
}
}
You can keep the connectionMultiplexer in a static variable and not create it for every get/set. That will keep one connection to Redis open at all times and make your operations faster.
Update:
Please have a look at the StackExchange.Redis basic usage docs:
https://github.com/StackExchange/StackExchange.Redis/blob/master/Docs/Basics.md
"Note that ConnectionMultiplexer implements IDisposable and can be disposed when no longer required, but I am deliberately not showing using statement usage, because it is exceptionally rare that you would want to use a ConnectionMultiplexer briefly, as the idea is to re-use this object."
It works nicely for me, keeping a single connection to Azure Redis (sometimes it creates 2 connections, but that is by design). Hope it helps.
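A minimal sketch of that approach, using a lazily initialized static holder (the class name and connection string below are illustrative, not from the question's code):

using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // One multiplexer per process; it is designed to be shared and reused.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("your-cache.redis.cache.windows.net,password=...")); // placeholder

    public static ConnectionMultiplexer Connection
    {
        get { return LazyConnection.Value; }
    }
}

Callers then use RedisConnection.Connection.GetDatabase() for every get/set instead of creating a new multiplexer each time.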
I would suggest trying the Close (or CloseAsync) method explicitly. In a test setting you may be using different connections for different test cases and not want to share a single multiplexer. A search of public code using the Redis client shows a pattern of Close followed by Dispose calls.
Note that in the XML method documentation of the Redis client, the Close method is described as doing more:
//
// Summary:
// Close all connections and release all resources associated with this object
//
// Parameters:
// allowCommandsToComplete:
// Whether to allow all in-queue commands to complete first.
public void Close(bool allowCommandsToComplete = true);
//
// Summary:
// Close all connections and release all resources associated with this object
//
// Parameters:
// allowCommandsToComplete:
// Whether to allow all in-queue commands to complete first.
[AsyncStateMachine(typeof(<CloseAsync>d__183))]
public Task CloseAsync(bool allowCommandsToComplete = true);
...
//
// Summary:
// Release all resources associated with this object
public void Dispose();
And then I looked up the code for the client and found it here:
https://github.com/StackExchange/StackExchange.Redis/blob/master/src/StackExchange.Redis/ConnectionMultiplexer.cs
And we can see the Dispose method calling Close (not the usual overridable protected Dispose(bool)), furthermore with the wait for connections to close set to true. It appears to be an atypical dispose-pattern implementation: by attempting all those closures and waiting on them, it risks running into an exception, even though the Dispose method contract is supposed to never throw one.
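For example, a test teardown following that Close-then-Dispose pattern might look like this (assuming connectionMultiplexer is the instance under test):

// Close first, letting queued commands finish, then release the object.
connectionMultiplexer.Close(allowCommandsToComplete: true);
connectionMultiplexer.Dispose();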
I have a BlackBerry app that sends data over a web service when a button has its state set to ON. When the button is ON, a timer is started which runs continuously in the background at fixed intervals. The method for the HttpConnection is called as follows:
if(C0NNECTION_EXTENSION==null)
{
Dialog.alert("Check internet connection and try again");
return;
}
else
{
confirmation=PostMsgToServer(encryptedMsg);
}
The method PostMsgToServer is as follows:
public static String PostMsgToServer(String encryptedGpsMsg) {
//httpURL= "https://prerel.track24c4i.com/track24prerel/service/spi/post?access_id="+DeviceBoardPassword+"&IMEI="+IMEI+"&hex_data="+encryptedGpsMsg+"&device_type=3";
httpURL= "https://t24.track24c4i.com/track24c4i/service/spi/post?access_id="+DeviceBoardPassword+"&IMEI="+IMEI+"&hex_data="+encryptedGpsMsg+"&device_type=3";
//httpURL= "http://track24.unit1.overwatch/track24/service/spi/post?access_id="+DeviceBoardPassword+"&IMEI="+IMEI+"&hex_data="+encryptedGpsMsg+"&device_type=3";
try {
String C0NNECTION_EXTENSION = checkInternetConnection();
if(C0NNECTION_EXTENSION==null)
{
Dialog.alert("Check internet connection and try again");
return null;
}
else
{
httpURL=httpURL+C0NNECTION_EXTENSION+";ConnectionTimeout=120000";
//Dialog.alert(httpURL);
HttpConnection httpConn;
httpConn = (HttpConnection) Connector.open(httpURL);
httpConn.setRequestMethod(HttpConnection.POST);
DataOutputStream _outStream = new DataOutputStream(httpConn.openDataOutputStream());
byte[] request_body = httpURL.getBytes();
for (int i = 0; i < request_body.length; i++) {
_outStream.writeByte(request_body[i]);
}
DataInputStream _inputStream = new DataInputStream(
httpConn.openInputStream());
StringBuffer _responseMessage = new StringBuffer();
int ch;
while ((ch = _inputStream.read()) != -1) {
_responseMessage.append((char) ch);
}
String res = (_responseMessage.toString());
responce = res.trim();
// close the streams as well as the connection so handles are not leaked
_outStream.close();
_inputStream.close();
httpConn.close();
}
}catch (Exception e) {
//Dialog.alert("Connection Time out");
}
return responce;
}
My question: the app freezes whenever the method is called, i.e. whenever the timer fires and has to send data to the web service. The app freezes at times for a few seconds and at times for a considerable amount of time, appearing to the user as if the handset has hung. Can this be solved? Kindly help!
You are running your networking operation on the Event Thread, i.e. the same thread that processes your application's UI interactions. Networking is a blocking operation, so this effectively stops your UI. Doing this on the Event Thread is not recommended and, to be honest, I'm surprised it is not causing your application to be terminated, as this is often what the OS will do if it thinks the application has blocked the Event Thread.
The way to solve this is to start your network processing on a separate Thread. This is generally the easy part; the difficult parts are:
1. blocking the user from doing anything else while waiting for the response (assuming you need to do this)
2. updating the User Interface with the results of your networking processing
I think the second of these issues is discussed in this thread:
adding-field-from-a-nonui-thread-throws-exception-in-blackberry
Since it appears you are trying to do this update at regular intervals in the background, I don't think the first is an issue. For completeness, you can search SO for answers, including this one:
blackberry-please-wait-screen-with-time-out
There is more information regarding the Event Thread here:
Event Thread
I know related questions are asked in other places but mine is different :)
I'm using a basic HttpClient and an HttpPost to send stuff to a third-party service. I'm using this in a scenario where I have JMS listeners using a single bean to post stuff. I didn't think this was a problem, since the basic HttpClient uses SingleClientConnManager and the javadoc says
This connection manager maintains only one active connection at a time. Even though this class is thread-safe it ought to be used by one execution thread only.
(thread-safe is key here.) But when I have two simultaneous requests, I get the classic
java.lang.IllegalStateException: Invalid use of SingleClientConnManager: connection still allocated.
Why do I get that? I don't clean up anything, since the basic client does that according to the docs.
my bean constructor:
HttpParams params = new BasicHttpParams();
params.setParameter(CoreConnectionPNames.CONNECTION_TIMEOUT, SMS_SOCKET_TIMEOUT);
params.setParameter(CoreConnectionPNames.SO_TIMEOUT, SMS_SOCKET_TIMEOUT);
params.setParameter(CoreProtocolPNames.HTTP_CONTENT_CHARSET,
encoding);
params.setParameter(CoreProtocolPNames.HTTP_ELEMENT_CHARSET,
encoding);
httpclient = new DefaultHttpClient(params);
poster = new HttpPost(mtUrl);
poster.setHeader("Content-type", contentType);
responseHandler = new BasicResponseHandler();
my code to run a post call:
public String[] sendMessage(MtMessage mess) throws MtSendException, MtHandlingException {
StringEntity input;
try {
String postBody = assembleMessagePostBody(mess);
input = new StringEntity(postBody);
poster.setEntity(input);
ResponseHandler<String> responseHandler = new BasicResponseHandler();
String response = httpclient.execute(poster, responseHandler);
return new String[]{extractResponseMessageId(response)};
} catch(HttpResponseException ee){
throw new MtSendException(ee.getStatusCode(), ee.getMessage(), false);
} catch (IOException e) {
throw new MtSendException(0, e.getMessage(), false);
} finally{
}
}
I thought that although sendMessage could be called from multiple JMS listener threads at once, it would be thread safe, since the connection manager is thread safe. I guess I could just make the sendMessage() method synchronized.
If anyone has any input, I'd be most thankful.
SingleClientConnectionManager is fully thread safe in the sense that when used by multiple execution threads its internal state is synchronized and is always consistent. This does not change the fact that it can dispense a single connection only. So, if two threads attempt to lease a connection, only one can succeed, while the other is likely to get 'java.lang.IllegalStateException: Invalid use of SingleClientConnManager'
You should be using a pooling connection manager if your application needs to execute requests concurrently.
In the following thread, UDP packets are read from clients until the boolean field Run is set to false.
If Run is set to false while the Receive method is blocking, it stays blocked forever (unless a client sends data, which will make the thread loop and check for the Run condition again).
while (Run)
{
IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
byte[] data = udpClient.Receive(ref remoteEndPoint); // blocking method
// process received data
}
I usually get around the problem by setting a timeout on the server. It works fine, but it seems like a patchy solution to me.
udpClient.Client.ReceiveTimeout = 5000;
while (Run)
{
try
{
IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
byte[] data = udpClient.Receive(ref remoteEndPoint); // blocking method
// process received data
}
catch(SocketException ex) {} // timeout reached
}
How would you handle this problem? Is there any better way?
Use UdpClient.Close(). That will terminate the blocking Receive() call. Be prepared to catch the ObjectDisposedException; it signals to your thread that the socket was closed.
You could do something like this:
private bool run;
public bool Run
{
get
{
return run;
}
set
{
run = value;
if(!run)
{
udpClient.Close();
}
}
}
This allows you to close the client once whatever condition is met to stop your connection from listening. An exception will likely be thrown, but I don't believe it will be a SocketTimeoutException, so you'll need to handle that.
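A minimal sketch of the receiving loop under that scheme, reusing the question's udpClient and Run fields (the exact exception can vary with timing, hence both handlers):

while (Run)
{
    try
    {
        IPEndPoint remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        byte[] data = udpClient.Receive(ref remoteEndPoint); // blocks until data arrives or Close() is called
        // process received data
    }
    catch (ObjectDisposedException)
    {
        break; // Close() was called: the socket is gone, leave the loop
    }
    catch (SocketException)
    {
        break; // closing mid-receive can also surface as a SocketException
    }
}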
Here's my situation:
I'm writing a chat client to connect to a chat server. I create the connection using a TcpClient and get a NetworkStream object from it. I use a StreamReader and StreamWriter to read and write data back and forth.
Here's what my read looks like:
public string Read()
{
StringBuilder sb = new StringBuilder();
try
{
int tmp;
while (true)
{
tmp = StreamReader.Read();
if (tmp == 0)
break;
else
sb.Append((char)tmp);
Thread.Sleep(1);
}
}
catch (Exception ex)
{
// log exception
}
return sb.ToString();
}
That works fine and dandy. In my main program I create a thread that continually calls this Read method to see if there is data. An example is below.
private void Listen()
{
try
{
while (IsShuttingDown == false)
{
string data = Read();
if (!string.IsNullOrEmpty(data))
{
// do stuff
}
}
}
catch (ThreadInterruptedException ex)
{
// log it
}
}
...
Thread listenThread = new Thread(new ThreadStart(Listen));
listenThread.Start();
This works just fine. The problem comes when I want to shut down the application. I receive a shutdown command from the UI and tell the listening thread to stop listening (that is, stop calling this Read function). I call Join and wait for this child thread to stop running, like so:
// tell the thread to stop listening and wait for a sec
IsShuttingDown = true;
Thread.Sleep(TimeSpan.FromSeconds(1.00));
// if we've reach here and the thread is still alive
// interrupt it and tell it to quit
if (listenThread.IsAlive)
listenThread.Interrupt();
// wait until thread is done
listenThread.Join();
The problem is it never stops running! I stepped into the code and the listening thread is blocking because the Read() method is blocking. Read() just sits there and doesn't return. Hence, the thread never gets a chance to sleep that 1 millisecond and then get interrupted.
I'm sure if I let it sit long enough I'd get another packet and the thread would get a chance to sleep (if it's an active chatroom, or I get a ping from the server). But I don't want to depend on that. If the user says shut down, I want to shut it down!
One alternative I found is to use the DataAvailable property of NetworkStream so that I could check it before calling StreamReader.Read(). This didn't work because it was undependable, and I lost data when reading packets from the server. (Because of that I wasn't able to log in correctly, etc.)
Any ideas on how to shut down this thread gracefully? I'd hate to call Abort() on the listening thread...
Really the only answer is to stop using Read and switch to asynchronous operations (i.e. BeginRead). This is a harder model to work with, but it means no thread is blocked, and you don't need to dedicate a thread (a very expensive resource) to each client even if the client is not sending any data.
By the way, using Thread.Sleep in concurrent code is a bad smell (in the Refactoring sense); it usually indicates deeper problems (in this case, that you should be doing asynchronous, non-blocking operations).
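A minimal sketch of the BeginRead approach (the buffer size, method names, and encoding are illustrative; it assumes the NetworkStream obtained from the question's TcpClient):

// requires: using System; using System.IO; using System.Net.Sockets; using System.Text;
private readonly byte[] m_ReadBuffer = new byte[4096];

private void StartRead(NetworkStream stream)
{
    stream.BeginRead(m_ReadBuffer, 0, m_ReadBuffer.Length, OnRead, stream);
}

private void OnRead(IAsyncResult ar)
{
    NetworkStream stream = (NetworkStream)ar.AsyncState;
    try
    {
        int bytesRead = stream.EndRead(ar);
        if (bytesRead == 0)
            return; // the server closed the connection
        // NOTE: a production client should use a System.Text.Decoder here,
        // since a multi-byte character can be split across two reads.
        string data = Encoding.UTF8.GetString(m_ReadBuffer, 0, bytesRead);
        // hand `data` off for processing, then issue the next read
        stream.BeginRead(m_ReadBuffer, 0, m_ReadBuffer.Length, OnRead, stream);
    }
    catch (ObjectDisposedException)
    {
        // the stream was closed during shutdown; simply stop reading
    }
    catch (IOException)
    {
        // likewise: a close during a pending read surfaces here
    }
}

To shut down, close the TcpClient; the pending read completes immediately and EndRead throws, so no thread is ever left blocked.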
Are you actually using System.IO.StreamReader and System.IO.StreamWriter to send and receive data from the socket? I wasn't aware this was possible. I've only ever used the Read() and Write() methods on the NetworkStream object returned by the TcpClient's GetStream() method.
Assuming this is possible, StreamReader.Read() returns -1 when the end of the stream is reached, not 0. So it looks to me like your Read() method is in an infinite loop.
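If that is the case, the end-of-stream check in the question's Read() method would need to compare against -1; a sketch of the corrected loop:

int tmp;
while ((tmp = StreamReader.Read()) != -1) // -1, not 0, signals end of stream
{
    sb.Append((char)tmp);
}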