Dart WebSocket memory leak

I am using WebSockets to receive protocol buffers and I am experiencing a memory leak. The leak occurs regardless of the size and frequency of the incoming buffers.
The protobufs are received as Blobs, but the same leak was present when receiving them as an ArrayBuffer. Currently all I have implemented is a packet handler that sets the Blob to null in an attempt to trigger garbage collection.
My listen call:
ws.onMessage.listen(handlePacket);
My event handler:
void handlePacket(message) { message = null; }
I don't fully understand whether the Stream of MessageEvents in the WebSocket is a queue that never dequeues processed events, but it appears that all the memory allocated for the incoming events fails to be garbage collected. Any help is appreciated.
EDIT
Client side code:
void _openSocket() {
  if (ws == null) {
    ws = new WebSocket('ws://localhost:8080/api/ws/open');
    // ws.binaryType = "arraybuffer";
  }
}

void _closeSocket() {
  if (ws != null) {
    ws.close();
    print("socket closed");
    ws = null;
  }
}

void _openStream(String fieldName, [_]) {
  // Check if we need to open the socket
  _openSocket();
  // Request the proper data
  Map ask = {"Request": "Stream", "Field": fieldName};
  if (ws.readyState == 0) {
    ws.onOpen.listen((_) {
      ws.send(JSON.encode(ask));
    });
  } else {
    ws.send(JSON.encode(ask));
  }
  activeQuantities++;
  if (activeQuantities == 1) {
    _listen();
  }
}

// Receive data from the socket
_listen() {
  ws.onError.listen((_) {
    print("Error");
  });
  ws.onClose.listen((_) {
    print("Close");
  });
  ws.onMessage.listen(handlePacket);
}

void handlePacket(message) {
  message = null;
}

It looks like Dartium is expected to leak memory here, but when compiling with dart2js and running in Chrome the memory did eventually get garbage-collected, albeit after showing the same symptoms as in Dartium: https://github.com/dart-lang/sdk/issues/26660

Related

The connection was inactive for more than the allowed 60000 milliseconds and is closed by container

I have an Azure Function that sends a message to a Service Bus queue. Since a recent deployment, I see this exception occurring frequently: The connection was inactive for more than the allowed 60000 milliseconds and is closed by container.
I looked into this GitHub issue: https://github.com/Azure/azure-service-bus-java/issues/280 and it says this is a warning. Is there a way to increase this timeout? Or any suggestions on how to resolve this? Here is my code:
namespace Repositories.ServiceBusQueue
{
    public class MembershipServiceBusRepository : IMembershipServiceBusRepository
    {
        private readonly QueueClient _queueClient;

        public MembershipServiceBusRepository(string serviceBusNamespacePrefix, string queueName)
        {
            var msiTokenProvider = TokenProvider.CreateManagedIdentityTokenProvider();
            _queueClient = new QueueClient($"https://{serviceBusNamespacePrefix}.servicebus.windows.net", queueName, msiTokenProvider);
        }

        public async Task SendMembership(GroupMembership groupMembership, string sentFrom = "")
        {
            if (groupMembership.SyncJobPartitionKey == null) { throw new ArgumentNullException("SyncJobPartitionKey must be set."); }
            if (groupMembership.SyncJobRowKey == null) { throw new ArgumentNullException("SyncJobRowKey must be set."); }

            foreach (var message in groupMembership.Split().Select(x => new Message
            {
                Body = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(x)),
                SessionId = groupMembership.RunId.ToString(),
                ContentType = "application/json",
                Label = sentFrom
            }))
            {
                await _queueClient.SendAsync(message);
            }
        }
    }
}
This could be due to a deadlock in the thread pool; check whether you are calling an async method from a sync method (for example by blocking on it with .Result or .Wait()).
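For illustration, a sketch of that anti-pattern and its fix (the wrapper methods below are hypothetical, not part of the posted code):

// Risky: blocking on the async send from synchronous code can exhaust the
// thread pool and deadlock, which in turn leaves the Service Bus connection idle.
public void SendMembershipBlocking(IMembershipServiceBusRepository repo, GroupMembership membership)
{
    repo.SendMembership(membership).GetAwaiter().GetResult();
}

// Preferred: stay async all the way up the call chain (e.g. an async Azure Function).
public async Task SendMembershipAsync(IMembershipServiceBusRepository repo, GroupMembership membership)
{
    await repo.SendMembership(membership);
}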

Native Messaging stops working after receiving chunked response

I made two extensions, one for Firefox and one for Chrome (I will post only the Firefox code here; if the Chrome code is needed, let me know, but it is extremely similar). They perform some actions via calls to a native host app. One of these calls returns a message larger than 1 MB. According to https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging, "The maximum size of a single message from the application is 1 MB". According to chrome native messaging: how to receive > 1MB, data larger than 1 MB must be split into chunks of less than 1 MB and sent as several messages. I made a Java host app and implemented a chunking solution: the data is split and several responses are sent, each carrying one chunk in a JSON string. The sending and the responses work; the chunks are joined into an input on the web page and the action succeeds. However, after performing this action, every other call fails. If I go to the Windows Task Manager and kill the native host process, everything works again: the action with the chunked response works once more, but the next calls fail silently, and the Firefox browser console gives me nothing. I guess there is something wrong with my solution: it works, but oddly only once, and then every call fails and does nothing. Any help is appreciated. Tell me if the question is unclear or a big mess and I will improve it. Sorry, but I can't say what the actions are, i.e. what I'm trying to do; if it is really necessary to know, I will ask my manager to explain it here.
Background.js
function sendNativeMessage(message) {
  port.postMessage(message);
}

function onDisconnected() {
  msg = '{"message": "Failed to connect: ' + browser.runtime.lastError + '"}';
  browser.tabs.query({active: true, currentWindow: true}, function(tabs) {
    browser.tabs.sendMessage(tabs[0].id, msg, function(response) {});
  });
  port = null;
}

function connect() {
  var hostName = "com.companyname.hostname";
  port = browser.runtime.connectNative(hostName);
  port.onMessage.addListener(function(msg) {
    browser.tabs.query({active: true, currentWindow: true}, function(tabs) {
      browser.tabs.sendMessage(tabs[0].id, JSON.stringify(msg), function(response) {});
    });
  });
  port.onDisconnect.addListener(onDisconnected);
}

browser.runtime.onMessageExternal.addListener(
  function(request, sender, sendResponse) {
    if (port == null) {
      connect();
    }
    sendNativeMessage(request);
  });
Inject.js
// Prevent code being injected twice (probably because of click listeners on the web page)
if (document.readyState !== "complete") {
  browser.runtime.onMessage.addListener(function(msg, sender, sendResponse) {
    var obj = JSON.parse(msg);
    if (!obj.success) {
      (...)
    }
    if (obj.action == "doSomething") {
      if (obj.last == "true") {
        // do something
      } else {
        var value = $("#input").val().trim() + obj.value.trim();
        $("#input").val(value.trim());
      }
    }
    var s = document.createElement('script');
    // TODO: add "script.js" to web_accessible_resources in manifest.json
    s.src = browser.runtime.getURL('script.js');
    s.onload = function() {
      this.remove();
    };
    (document.head || document.documentElement).appendChild(s);
  });
}
script.js
class extensionName {
  getValues() {
    window.postMessage({ type: "FROM_PAGE", text: { action: "getValues" } }, "*");
  }

  // This is the function whose response exceeds 1 MB.
  dosomething(parameter1, parameter2, parameter3, parameter4) {
    window.postMessage({ type: "FROM_PAGE", text: { action: "dosomething",
      parameter1Key: parameter1, parameter2Key: parameter2,
      parameter3Key: parameter3, parameter4Key: parameter4 } }, "*");
  }
}
Native app in Java (aux is an instance of an auxiliary class that has a sendMessage function that sends messages according to the native messaging protocol)
switch (parameter1) {
    case "getValues":
        json = functions.getValues();
        aux.sendMessage(json);
        break;
    case "doSomething":
        functions.doSomething(...);
        break;
    (...)

public String doSomething(...) {
    (...)
    Matcher m = Pattern.compile(".{1,565048}").matcher(valueMoreThan1Mb);
    String chunk = "";
    while (m.find()) {
        System.setOut(originalStream); // let the stream output values
        chunk = valueMoreThan1Mb.substring(m.start(), m.end());
        // a JSON string is built here with the needed parameters: action, chunked value, ...
        json = "{\"success\":\"true\",\"action\":\"sign\",\"last\":\"false\",\"value\":\"" + chunk + "\"}"; // Sorry for this. This code will be improved when everything is working.
        aux.sendMessage(json); // the auxiliary class sends the message according to the native messaging protocol
        System.setOut(dummyStream); // keep 3rd-party library output from corrupting the native messaging port
    }
    System.setOut(originalStream); // let the stream output values
    json = "{\"success\":\"true\",\"action\":\"sign\",\"last\":\"true\"}"; // Sorry for this. This code will be improved when everything is working.
    aux.sendMessage(json);
    System.setOut(dummyStream);
    return "";
}
Auxiliar.java
public class Auxiliar {

    public int getInt(byte[] bytes) {
        // little-endian: byte 0 is the least significant byte
        return (bytes[3] << 24) & 0xff000000 | (bytes[2] << 16) & 0x00ff0000
             | (bytes[1] << 8) & 0x0000ff00 | (bytes[0] << 0) & 0x000000ff;
    }

    public String readMessage(InputStream in) throws IOException {
        byte[] b = new byte[4];
        readFully(in, b);
        int size = getInt(b);
        //log(String.format("The size is %d", size));
        if (size == 0) {
            throw new InterruptedIOException("Blocked communication");
        }
        b = new byte[size];
        readFully(in, b);
        return new String(b, "UTF-8");
    }

    // InputStream.read() may return fewer bytes than requested, so loop until the buffer is full.
    private void readFully(InputStream in, byte[] buffer) throws IOException {
        int offset = 0;
        while (offset < buffer.length) {
            int read = in.read(buffer, offset, buffer.length - offset);
            if (read == -1) {
                throw new EOFException("Stream closed before the full message was read");
            }
            offset += read;
        }
    }

    public boolean sendMessage(String message) {
        try {
            // The length prefix must be the UTF-8 byte count, not the character count.
            byte[] payload = message.getBytes("UTF-8");
            System.out.write(getBytes(payload.length));
            System.out.write(payload);
            System.out.flush();
            return true;
        } catch (IOException e) {
            e.printStackTrace();
            return false;
        }
    }

    // Use this in case the other method gets deprecated
    public byte[] getBytes(int length) {
        byte[] bytes = new byte[4];
        bytes[0] = (byte) (length & 0xFF);
        bytes[1] = (byte) ((length >> 8) & 0xFF);
        bytes[2] = (byte) ((length >> 16) & 0xFF);
        bytes[3] = (byte) ((length >> 24) & 0xFF);
        return bytes;
    }
}

Exception when exceeding 64K in Azure WCF relay

I have this code, which I implemented from this Azure WCF relay sample.
I am getting this exception when sending a message bigger than 64K (with a smaller message it works fine):
System.ServiceModel.CommunicationException: 'The maximum message size quota for incoming messages has been exceeded for the remote channel. See the server logs for more details.
The quota is supposedly unlimited for NetTcpRelayBinding according to this quota web page.
Here is my code:
class WCFRelay
{
    [ServiceContract(Namespace = "urn:ps")]
    interface IProblemSolver
    {
        [OperationContract]
        int Test(byte[] bytes);
    }

    class ProblemSolver : IProblemSolver
    {
        public int Test(byte[] bytes)
        {
            return bytes.Length;
        }
    }

    interface IProblemSolverChannel : IProblemSolver, IClientChannel { }

    public static void CreateClient()
    {
        var cf = new ChannelFactory<IProblemSolverChannel>(
            new NetTcpRelayBinding(),
            new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", "...", "solver")));
        cf.Endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("RootManageSharedAccessKey", "...")
        });
        using (var ch = cf.CreateChannel())
        {
            // if its 50K its ok - if its 70K i get exception
            Console.WriteLine(ch.Test(new byte[1000 * 70]));
        }
    }

    public static void CreateServer()
    {
        ServiceHost sh = new ServiceHost(typeof(ProblemSolver));
        sh.AddServiceEndpoint(
            typeof(IProblemSolver), new NetTcpRelayBinding(),
            ServiceBusEnvironment.CreateServiceUri("sb", "...", "solver"))
          .Behaviors.Add(new TransportClientEndpointBehavior
          {
              TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("RootManageSharedAccessKey", "...")
          });
        sh.Open();
        while (true)
        {
            Thread.Sleep(1000);
        }
        Console.WriteLine("Press ENTER to close");
        Console.ReadLine();
        sh.Close();
    }
}
Based on your description, I checked this issue and found the cause: when you construct the NetTcpRelayBinding, the default value for MaxBufferSize and MaxReceivedMessageSize is 64K.
You could set MaxBufferSize, MaxReceivedMessageSize, and MaxBufferPoolSize to larger values when constructing the NetTcpRelayBinding instance, on both the server and the client side.
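For example, a minimal sketch of that change (the 10 MB limit is just an illustrative value; the same quotas need to be applied to the binding on both the client and the server side):

var binding = new NetTcpRelayBinding
{
    MaxReceivedMessageSize = 10 * 1024 * 1024, // raised from the 64K default
    MaxBufferSize = 10 * 1024 * 1024,
    MaxBufferPoolSize = 10 * 1024 * 1024
};

// Client side
var cf = new ChannelFactory<IProblemSolverChannel>(
    binding,
    new EndpointAddress(ServiceBusEnvironment.CreateServiceUri("sb", "...", "solver")));

// Server side
sh.AddServiceEndpoint(
    typeof(IProblemSolver), binding,
    ServiceBusEnvironment.CreateServiceUri("sb", "...", "solver"));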

Servicestack RabbitMQ: Infinite loop fills up dead-letter-queue when RabbitMqProducer cannot redeclare temporary queue in RPC-pattern

When I declare a temporary reply queue to be exclusive (e.g. anonymous queue (exclusive=true, autodelete=true) in rpc-pattern), the response message cannot be posted to the specified reply queue (e.g. message.replyTo="amq.gen-Jg_tv8QYxtEQhq0tF30vAA") because RabbitMqProducer.PublishMessage() tries to redeclare the queue with different parameters (exclusive=false), which understandably results in an error.
Unfortunately, the erroneous call to channel.RegisterQueue(queueName) in RabbitMqProducer.PublishMessage() seems to nack the request message in the incoming queue, so that when ServiceStack.Messaging.MessageHandler.DefaultInExceptionHandler tries to acknowledge the request message (to remove it from the incoming queue), the message just stays on top of the incoming queue and gets processed all over again. This procedure repeats indefinitely and produces one DLQ message per iteration, which slowly fills up the dead-letter queue.
I am wondering:
whether ServiceStack correctly handles the case in which ServiceStack.RabbitMq.RabbitMqProducer cannot declare the response queue
whether ServiceStack.RabbitMq.RabbitMqProducer must always declare the response queue before publishing the response
whether it wouldn't be better to have a configuration flag that omits all exchange and queue declaration calls (outside of the first initialization); the RabbitMqProducer would then just assume every queue/exchange is properly set up and simply publish the message.
(At the moment our client just declares its response queue with exclusive=false and everything works fine. But I'd really like to use RabbitMQ's built-in temporary queues.)
MQ-Client Code, requires simple "SayHello" service:
const string INQ_QUEUE_NAME = "mq:SayHello.inq";
const string EXCHANGE_NAME = "mx.servicestack";

var factory = new ConnectionFactory() { HostName = "192.168.179.110" };
using (var connection = factory.CreateConnection())
{
    using (var channel = connection.CreateModel())
    {
        // Create temporary queue and set up bindings.
        // This works (because "mq:tmp:" stops RabbitMqProducer from redeclaring the response queue):
        string responseQueueName = "mq:tmp:SayHello_" + Guid.NewGuid().ToString() + ".inq";
        channel.QueueDeclare(responseQueueName, false, false, true, null);

        // This does NOT work (RabbitMqProducer tries to declare the queue again => error):
        //string responseQueueName = Guid.NewGuid().ToString() + ".inq";
        //channel.QueueDeclare(responseQueueName, false, false, true, null);

        // This does NOT work either (RabbitMqProducer tries to declare the queue again => error):
        //var responseQueueName = channel.QueueDeclare().QueueName;

        // Publish a simple SayHello request to the standard ServiceStack exchange ("mx.servicestack")
        // with routing key "mq:SayHello.inq":
        var props = channel.CreateBasicProperties();
        props.ReplyTo = responseQueueName;
        channel.BasicPublish(EXCHANGE_NAME, INQ_QUEUE_NAME, props, Encoding.UTF8.GetBytes("{\"ToName\": \"Chris\"}"));

        // Consume the response from the response queue
        var consumer = new QueueingBasicConsumer(channel);
        channel.BasicConsume(responseQueueName, true, consumer);
        var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();

        // Print the result: should be "Hello, Chris!"
        Console.WriteLine(Encoding.UTF8.GetString(ea.Body));
    }
}
Everything seems to work fine when RabbitMqProducer does not try to declare the queues, like that:
public void PublishMessage(string exchange, string routingKey, IBasicProperties basicProperties, byte[] body)
{
    const bool MustDeclareQueue = false; // new config parameter??
    try
    {
        if (MustDeclareQueue && !Queues.Contains(routingKey))
        {
            Channel.RegisterQueueByName(routingKey);
            Queues = new HashSet<string>(Queues) { routingKey };
        }
        Channel.BasicPublish(exchange, routingKey, basicProperties, body);
    }
    catch (OperationInterruptedException ex)
    {
        if (ex.Is404())
        {
            Channel.RegisterExchangeByName(exchange);
            Channel.BasicPublish(exchange, routingKey, basicProperties, body);
        }
        throw;
    }
}
The issue was addressed in ServiceStack v4.0.32 (fixed in this commit).
The RabbitMqProducer no longer tries to redeclare temporary queues and instead assumes that the reply queue already exists (which solves my problem).
(The underlying cause of the infinite loop, the wrong error handling while publishing the response message, probably still exists.)
Edit: Example
The following basic MQ client (which does not use the ServiceStack MQ client and instead depends directly on RabbitMQ's .NET library; it does use ServiceStack.Text for serialization) can perform generic RPCs:
public class MqClient : IDisposable
{
    ConnectionFactory factory = new ConnectionFactory()
    {
        HostName = "192.168.97.201",
        UserName = "guest",
        Password = "guest",
        //VirtualHost = "test",
        Port = AmqpTcpEndpoint.UseDefaultPort,
    };

    private IConnection connection;
    private string exchangeName;

    public MqClient(string defaultExchange)
    {
        this.exchangeName = defaultExchange;
        this.connection = factory.CreateConnection();
    }

    public TResponse RpcCall<TResponse>(IReturn<TResponse> reqDto, string exchange = null)
    {
        using (var channel = connection.CreateModel())
        {
            string inq_queue_name = string.Format("mq:{0}.inq", reqDto.GetType().Name);
            string responseQueueName = channel.QueueDeclare().QueueName;

            var props = channel.CreateBasicProperties();
            props.ReplyTo = responseQueueName;

            var message = ServiceStack.Text.JsonSerializer.SerializeToString(reqDto);
            channel.BasicPublish(exchange ?? this.exchangeName, inq_queue_name, props, UTF8Encoding.UTF8.GetBytes(message));

            var consumer = new QueueingBasicConsumer(channel);
            channel.BasicConsume(responseQueueName, true, consumer);
            var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            //channel.BasicAck(ea.DeliveryTag, false);

            string response = UTF8Encoding.UTF8.GetString(ea.Body);
            string responseType = ea.BasicProperties.Type;
            Console.WriteLine(" [x] New Message of Type '{1}' Received:{2}{0}", response, responseType, Environment.NewLine);

            return ServiceStack.Text.JsonSerializer.DeserializeFromString<TResponse>(response);
        }
    }

    ~MqClient()
    {
        this.Dispose();
    }

    public void Dispose()
    {
        if (connection != null)
        {
            this.connection.Dispose();
            this.connection = null;
        }
    }
}
Key points:
the client declares an anonymous queue (i.e. with an empty queue name) via channel.QueueDeclare()
the server generates the queue and returns its name (amq.gen-*)
the client adds the queue name to the message properties (props.ReplyTo = responseQueueName;)
ServiceStack automatically sends the response to the temporary queue
the client picks up the response and deserializes it
It can be used like that:
using (var mqClient = new MqClient("mx.servicestack"))
{
var pingResponse = mqClient.RpcCall<PingResponse>(new Ping { });
}
Important: you have to use ServiceStack version 4.0.32 or later.

Handling threads for multiprocessing

I'm experiencing an issue managing threads in .NET 4.0 C#, and my knowledge of threads is not sufficient to solve it, so I'm posting it here hoping that somebody can give me some advice.
The scenario is the following:
We have a Windows service on C# / .NET Framework 4.0 that (1) connects via socket to a server to get a .PCM file, (2) converts it to a .WAV file, (3) sends it via email (SMTP), and finally (4) notifies the originating server that it was successfully sent.
The server where the service is installed has 8 processors and 8 GB of RAM.
To allow parallel processing I've built the service with 4 threads, each of which performs one of the tasks mentioned above.
In the code I have classes and methods for each task, so I create threads and invoke methods as follows:
Thread eachThread = new Thread(object.PerformTask);
Inside each method there is a while loop that checks whether the socket connection is alive and keeps fetching or processing data, depending on its purpose.
while (_socket.Connected)
{
    //perform task
}
The problem is that as more services are installed (the same Windows service is replicated and pointed between two endpoints on the server to get the files via socket), the CPU consumption increases dramatically; each service keeps running and processing files, but at some point the CPU consumption gets so high that the server just collapses.
The question is: what would you suggest for handling this scenario? In general terms, what would be a good approach to handling these highly demanding processing tasks so that the server does not collapse under the CPU load?
Thanks.
PS.: If anybody needs more details on the scenario, please let me know.
Edit 1
By CPU collapse I mean that the server gets so slow that we have to restart it.
Edit 2
Here is part of the code, so you can get an idea of how it's programmed:
while (true)
{
    //starting the service
    try
    {
        IPEndPoint endPoint = conn.SettingConnection();
        string id = _objProp.Parametros.IdApp;
        using (socket = conn.Connect(endPoint))
        {
            while (!socket.Connected)
            {
                _log.SetLog("INFO", "Conectando socket...");
                socket = conn.Connect(endPoint);
                //if the connection failed, wait 5 seconds for a new try.
                if (!socket.Connected)
                {
                    Thread.Sleep(5000);
                }
            }
            proInThread = new Thread(proIn.ThreadRun);
            conInThread = new Thread(conIn.ThreadRun);
            conOutThread = new Thread(conOut.ThreadRun);
            proInThread.Start();
            conInThread.Start();
            conOutThread.Start();
            proInThread.Join();
            conInThread.Join();
            conOutThread.Join();
        }
    }
}
Edit 3
Thread 1
while (_socket.Connected)
{
    try
    {
        var conn = new AppConection(ref _objPropiedades);
        try
        {
            string message = conn.ReceiveMessage(_socket);
            lock (((ICollection)_queue).SyncRoot)
            {
                _queue.Enqueue(message);
                _syncEvents.NewItemEvent.Set();
                _syncEvents.NewResetEvent.Set();
            }
            lock (((ICollection)_total_rec).SyncRoot)
            {
                _total_rec.Add("1");
            }
        }
        catch (SocketException ex)
        {
            //log exception
        }
        catch (IndexOutOfRangeException ex)
        {
            //log exception
        }
        catch (Exception ex)
        {
            //log exception
        }
        //message received
    }
    catch (Exception ex)
    {
        //logging error
    }
}
//release ANY instance that could be using memory
_socket.Dispose();
log = null;
Thread 2
while (_socket.Connected)
{
    try
    {
        _syncEvents.NewItemEventOut.WaitOne();
        if (_socket.Connected)
        {
            lock (((ICollection)_queue).SyncRoot)
            {
                total_queue = _queue.Count();
            }
            int i = 0;
            while (i < total_queue)
            {
                //EMail Emails;
                string mail = "";
                lock (((ICollection)_queue).SyncRoot)
                {
                    mail = _queue.Dequeue();
                    i = i + 1;
                }
                try
                {
                    conn.SendMessage(_socket, mail);
                    _syncEvents.NewResetEvent.Set();
                }
                catch (SocketException ex)
                {
                    //log exception
                }
            }
        }
        else
        {
            //log exception
            _syncEvents.NewAbortEvent.Set();
            Thread.CurrentThread.Abort();
        }
    }
    catch (InvalidOperationException e)
    {
        //log exception
    }
    catch (Exception e)
    {
        //log exception
    }
}
//release ANY instance that could be using memory
_socket.Dispose();
conn = null;
log = null;
Thread 3
while (_socket.Connected)
{
    int total_queue = 0;
    try
    {
        _syncEvents.NewItemEvent.WaitOne();
        lock (((ICollection)_queue).SyncRoot)
        {
            total_queue = _queue.Count();
        }
        int i = 0;
        while (i < total_queue)
        {
            if (mgthreads.GetThreatdAct() < mgthreads.GetMaxThread())
            {
                string message = "";
                lock (((ICollection)_queue).SyncRoot)
                {
                    message = _queue.Dequeue();
                    i = i + 1;
                }
                count++;
                lock (((ICollection)_queueO).SyncRoot)
                {
                    app.SetParameters(_socket, _id, message, _queueO, _syncEvents, _total_Env, _total_err);
                }
                Thread producerThread = new Thread(app.ThreadJob)
                {
                    Name = "ProducerThread_" + DateTime.Now.ToString("ddMMyyyyhhmmss"),
                    Priority = ThreadPriority.AboveNormal
                };
                producerThread.Start();
                producerThread.Join();
                mgthreads.IncThreatdAct(producerThread);
            }
            mgthreads.DecThreatdAct();
        }
        mgthreads.DecThreatdAct();
    }
    catch (InvalidOperationException e)
    {
    }
    catch (Exception e)
    {
    }
    Thread.Sleep(500);
}
//release ANY instance that could be using memory
_socket.Dispose();
app = null;
log = null;
mgthreads = null;
Thread 4
MessageVO mesVo = fac.ParseMessageXml(_message);
I would lower the thread priority and have all threads pass through a semaphore that limits concurrency to Environment.ProcessorCount. This is not a perfect solution, but it sounds like it would be enough in this case and it is an easy fix; see the sketch below.
Edit: Thinking about it, you have to fold the 10 services into one single process, because otherwise you won't have centralized control over the threads that are running. If you have 10 independent processes, they cannot coordinate.
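A minimal sketch of that throttling idea, assuming one worker method per file (the names here are hypothetical, not taken from the original service):

// A shared semaphore caps how many jobs run at once, so extra work queues up
// instead of saturating all 8 cores; lower priority keeps the box responsive.
private static readonly SemaphoreSlim _throttle = new SemaphoreSlim(Environment.ProcessorCount);

public void ProcessFile(string pcmFile)
{
    _throttle.Wait();
    try
    {
        Thread.CurrentThread.Priority = ThreadPriority.BelowNormal;
        // fetch the .PCM file, convert it to .WAV, send the email, notify the server...
    }
    finally
    {
        _throttle.Release();
    }
}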
There should normally be no collapse because of high CPU usage. While a thread is waiting for something remote to happen (for instance, for the remote server to respond to a request), it uses no CPU. But while it is actually doing something, it uses CPU accordingly. None of the tasks you mention is inherently CPU-heavy (saving a PCM file as WAV requires no complex algorithm), so the high CPU usage looks like a sign of a programming error.
