C# Using SslStream.WriteAsync at high speeds

I am facing an annoying problem with SslStream.WriteAsync. Here is the code:
public void Send(PacketWriter writer)
{
    var buffer = writer.GetWorkspace();
    _sslStream.WriteAsync(buffer, 0, buffer.Length);
}
When writing the data at extremely high speed it tells me:
The BeginWrite method cannot be called when another write operation is pending
[NOTE]: By high speed I mean something like this:
for (var i = 0; i <= 1000; i++)
{
    Send(somedata);
}

You must wait until the previous asynchronous write has finished.
Change your send method to return a Task:
public Task Send(PacketWriter writer)
{
    var buffer = writer.GetWorkspace();
    return _sslStream.WriteAsync(buffer, 0, buffer.Length);
}
Make the calling method async, then await until each send operation completes:
for (var i = 0; i <= 1000; i++)
{
    await Send(somedata);
}
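If the callers cannot conveniently await each send (fire-and-forget packet sending, for example), another option is to serialize the writes yourself so that only one is ever in flight. A minimal sketch using SemaphoreSlim, assuming the same _sslStream field as above:

private readonly SemaphoreSlim _writeLock = new SemaphoreSlim(1, 1);

public async Task Send(PacketWriter writer)
{
    var buffer = writer.GetWorkspace();
    // Only one WriteAsync may be pending on a stream at a time,
    // so gate every write behind the semaphore.
    await _writeLock.WaitAsync();
    try
    {
        await _sslStream.WriteAsync(buffer, 0, buffer.Length);
    }
    finally
    {
        _writeLock.Release();
    }
}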

Maybe try this:
private static void send_data(SslStream socket_stream, byte[] data)
{
    Task task = new Task(() => { socket_stream.Write(data, 0, data.Length); });
    task.RunSynchronously();
    task.Wait();
}
This will make sure all the messages are sent in an orderly manner without any exception.

Related

C# SerialPort data received event handler

I am currently reading data from a SerialPort in WPF. I am able to read data from the SerialPort and write it to a file using a BinaryWriter.
The problem I am facing right now is: I want to analyse this data. I have developed a function which contains case structures to split the data that I read in the SerialPort data handler, the purpose being to analyse the captured data. But my program just hangs on receiving data from the SerialPort; it never gets down to analysing the data.
These are a few options which I read about in some posts and which may solve my problem:
Backgroundworker: if yes, then how do I fit it into my current program?
Threading: I tried it, but it is not working. My program hangs on writing the bytes to the file and never gets down to starting the thread which I declared.
So can anybody suggest a better option?
My code:
private void port_DataReceived(Object sender, SerialDataReceivedEventArgs e)
{
    BinaryWriter writer = new BinaryWriter(File.Open("c:\\temp\\lbus2snifflog1.txt", FileMode.Append));
    int bytes = comport.Read(buffer, 0, 4096);
    for (int i = 0; i < bytes; i++)
    {
        //writer.Write(buffer, 0, bytes); // Write the data to the file on temp folder
        data.Enqueue(buffer[i]); // Enqueue data from the buffer
        writer.Write(buffer, 0, bytes);
    }
    //writer.Write(buffer, 0, bytes);
    writer.Flush(); // Send all remaining data to the writer
    writer.Close(); // Close the writer
    /*Initialise the Thread for the Analysis*/
    Thread Analyser = new Thread(datacollection);
    Analyser.Start();
}
I think you should run the thread with IsBackground = true;
I use it like this:
new Thread(() =>
{
    Thread.CurrentThread.IsBackground = true;
    //Do what you want to run in the background here
}).Start();
or maybe this simple setting will do the job:
Analyser.IsBackground = true;
Analyser.Start();
Edit
For your case, I don't know if this is the best approach, but this may work.
You keep a ConcurrentDictionary in the main thread. In the BackgroundWorker thread you take the data from the ConcurrentDictionary, then you report the processed data back to the main thread.
Here is a working scenario.
Example:
class MyClass
{
    private ConcurrentDictionary<int, string> serialPortsQueue =
        new ConcurrentDictionary<int, string>();
    private BackgroundWorker _worker;

    public MyClass()
    {
        _worker = new BackgroundWorker()
        {
            WorkerSupportsCancellation = true,
            WorkerReportsProgress = true
        };
        _worker.DoWork += DeviceDataAcquisition_DoWork;
        _worker.ProgressChanged += DeviceDataAcquisition_ProgressChanged;
        _worker.RunWorkerCompleted += DeviceDataAcquisition_RunWorkerCompleted;
        _worker.RunWorkerAsync();

        System.Diagnostics.Debug.WriteLine($"{DateTime.Now}: begin add first data");
        serialPortsQueue.TryAdd(1, "data 1");
        serialPortsQueue.TryAdd(2, "data 2");
        serialPortsQueue.TryAdd(3, "data 3");
        serialPortsQueue.TryAdd(4, "data 4");
        serialPortsQueue.TryAdd(5, "data 5");
        serialPortsQueue.TryAdd(6, "data 6");
        serialPortsQueue.TryAdd(7, "data 7");
        System.Diagnostics.Debug.WriteLine($"{DateTime.Now}: end add first data");

        Thread.Sleep(2000);

        System.Diagnostics.Debug.WriteLine($"{DateTime.Now}: begin add second data");
        serialPortsQueue.TryAdd(8, "data 8");
        System.Diagnostics.Debug.WriteLine($"{DateTime.Now}: end add second data");
    }

    private void DeviceDataAcquisition_DoWork(object sender, DoWorkEventArgs e)
    {
        // backgroundworker thread
        if (sender is BackgroundWorker worker)
        {
            //just demo purpose
            int cnt = 0;
            while (true)
            {
                if (worker.CancellationPending)
                    break;
                if (serialPortsQueue.Count > 0)
                {
                    KeyValuePair<int, string> kv = serialPortsQueue.ElementAt(0);
                    serialPortsQueue.TryRemove(kv.Key, out string value);
                    //just demo purpose
                    // Simulate some processing
                    Thread.Sleep(1000);
                    worker.ReportProgress(0, kv);
                    //just demo purpose
                    cnt++;
                }
                //just demo purpose
                if (cnt == 8)
                    break;
            }
        }
    }

    private void DeviceDataAcquisition_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        // main thread
        if (e.UserState is KeyValuePair<int, string> kv)
        {
            System.Diagnostics.Debug.WriteLine($"{DateTime.Now}: {kv.Key} -> {kv.Value}");
        }
    }

    private void DeviceDataAcquisition_RunWorkerCompleted(object sender, RunWorkerCompletedEventArgs e)
    {
        if (sender is BackgroundWorker worker)
        {
            worker.DoWork -= DeviceDataAcquisition_DoWork;
            worker.ProgressChanged -= DeviceDataAcquisition_ProgressChanged;
            worker.RunWorkerCompleted -= DeviceDataAcquisition_RunWorkerCompleted;
            worker.Dispose(); // I think this does nothing...
            System.Diagnostics.Debug.WriteLine($"{DateTime.Now}: end backgroundworker");
        }
    }
}
You can see the results in the Output -> Debug window.
Hope this helps.

C# how to call async await in a for loop

I am developing a Quartz.NET job which runs every hour and executes the following method. I am calling a web API inside a for loop. How do I make sure I return from the GetScripts() method only after all the calls are complete? Have I done it right?
Job
public void Execute(IJobExecutionContext context)
{
    try
    {
        var scripts = _scriptService.GetScripts().GetAwaiter().GetResult();
    }
    catch (Exception ex)
    {
        _logProvider.Error("Error while executing Script Changed Notification job : " + ex);
    }
}
Service method:
public async Task<IEnumerable<ChangedScriptsByChannel>> GetScripts()
{
    var result = new List<ChangedScriptsByChannel>();
    var currentTime = _systemClock.CurrentTime;
    var channelsToProcess = _lastRunReader.GetChannelsToProcess().ToList();
    if (!channelsToProcess.Any()) return result;
    foreach (var channel in channelsToProcess)
    {
        var changedScripts = await _scriptRepository.GetChangedScriptAsync(queryString);
        if (changedScripts.Any())
        {
            result.Add(new ChangedScriptsByChannel()
            {
                ChannelCode = channel.ChannelCode,
                ChangedScripts = changedScripts
            });
        }
    }
    return result;
}
As of 8 days ago there was a formal announcement from the Quartz.NET team stating that the latest version, 3.0 Alpha 1, has full support for async and await. I would suggest upgrading to that if at all possible. This would help your approach in that you'd not have to do the .GetAwaiter().GetResult() call, which is typically a code smell.
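With that upgrade, Execute itself can be async, since IJob.Execute returns a Task in 3.0. A rough sketch of the job after upgrading; the IScriptService and ILogProvider abstractions here are assumed from the fields used in the question:

public class ScriptChangedNotificationJob : IJob
{
    private readonly IScriptService _scriptService;  // assumed injected
    private readonly ILogProvider _logProvider;      // assumed injected

    public ScriptChangedNotificationJob(IScriptService scriptService, ILogProvider logProvider)
    {
        _scriptService = scriptService;
        _logProvider = logProvider;
    }

    public async Task Execute(IJobExecutionContext context)
    {
        try
        {
            // Await directly instead of blocking with .GetAwaiter().GetResult().
            var scripts = await _scriptService.GetScripts();
        }
        catch (Exception ex)
        {
            _logProvider.Error("Error while executing Script Changed Notification job : " + ex);
        }
    }
}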
How can I use await in a for loop?
Did you mean a foreach loop? If so, you're already doing that. If not, the change isn't anything earth-shattering.
for (int i = 0; i < channelsToProcess.Count; ++i)
{
    var changedScripts =
        await _scriptRepository.GetChangedScriptAsync(queryString);
    if (changedScripts.Any())
    {
        var channel = channelsToProcess[i];
        result.Add(new ChangedScriptsByChannel()
        {
            ChannelCode = channel.ChannelCode,
            ChangedScripts = changedScripts
        });
    }
}
Doing these in either a for or foreach loop, though, runs them in a serialized fashion. Another approach would be to use LINQ and .Select to map out the desired tasks, and then utilize Task.WhenAll.
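A rough sketch of that approach, under the assumption that GetChangedScriptAsync is safe to call concurrently (queryString is kept from the original code, which does not show how it is built):

var tasks = channelsToProcess.Select(async channel =>
{
    var changedScripts = await _scriptRepository.GetChangedScriptAsync(queryString);
    return new { channel.ChannelCode, ChangedScripts = changedScripts };
});

// All channel queries run concurrently; WhenAll completes when every one has.
var perChannel = await Task.WhenAll(tasks);

var result = perChannel
    .Where(x => x.ChangedScripts.Any())
    .Select(x => new ChangedScriptsByChannel
    {
        ChannelCode = x.ChannelCode,
        ChangedScripts = x.ChangedScripts
    })
    .ToList();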

Tight Loop - Disk at 100%, Quad Core CPU at 25% usage, only 15MB/sec disk write speed

I have a tight loop which runs through a load of carts, which themselves contain around 10 event objects each, and writes them to disk as JSON via an intermediate repository (jOliver common domain rewired with GetEventStore.com):
// create ~200,000 carts, each with ~5 events
List<Cart> testData = TestData.GenerateFrom(products);
foreach (var cart in testData)
{
    count = count + (cart as IAggregate).GetUncommittedEvents().Count;
    repository.Save(cart);
}
I see the disk is at 100%, but the throughput is low (15MB/sec, ~5,000 events per second). Why is this? Things I can think of are:
Since this is single threaded, does the 25% CPU usage actually mean 100% of the one core that I am on (is there any way to show the specific core my app is running on in Visual Studio)?
Am I constrained by I/O, or by CPU? Can I expect better performance if I create my own thread pool, one thread per CPU?
How come I can copy a file at ~120MB/sec, but I can only get a throughput of 15MB/sec in my app? Is this due to the write size of lots of smaller packets?
Anything else I have missed?
The code I am using is from the geteventstore docs/blog:
public class GetEventStoreRepository : IRepository
{
    private const string EventClrTypeHeader = "EventClrTypeName";
    private const string AggregateClrTypeHeader = "AggregateClrTypeName";
    private const string CommitIdHeader = "CommitId";
    private const int WritePageSize = 500;
    private const int ReadPageSize = 500;

    IStreamNamingConvention streamNamingConvention;
    private readonly IEventStoreConnection connection;
    private static readonly JsonSerializerSettings serializerSettings = new JsonSerializerSettings { TypeNameHandling = TypeNameHandling.None };

    public GetEventStoreRepository(IEventStoreConnection eventStoreConnection, IStreamNamingConvention namingConvention)
    {
        this.connection = eventStoreConnection;
        this.streamNamingConvention = namingConvention;
    }

    public void Save(IAggregate aggregate)
    {
        this.Save(aggregate, Guid.NewGuid(), d => { });
    }

    public void Save(IAggregate aggregate, Guid commitId, Action<IDictionary<string, object>> updateHeaders)
    {
        var commitHeaders = new Dictionary<string, object>
        {
            {CommitIdHeader, commitId},
            {AggregateClrTypeHeader, aggregate.GetType().AssemblyQualifiedName}
        };
        updateHeaders(commitHeaders);

        var streamName = this.streamNamingConvention.GetStreamName(aggregate.GetType(), aggregate.Identity);
        var newEvents = aggregate.GetUncommittedEvents().Cast<object>().ToList();
        var originalVersion = aggregate.Version - newEvents.Count;
        var expectedVersion = originalVersion == 0 ? ExpectedVersion.NoStream : originalVersion - 1;
        var eventsToSave = newEvents.Select(e => ToEventData(Guid.NewGuid(), e, commitHeaders)).ToList();

        if (eventsToSave.Count < WritePageSize)
        {
            this.connection.AppendToStreamAsync(streamName, expectedVersion, eventsToSave).Wait();
        }
        else
        {
            var startTransactionTask = this.connection.StartTransactionAsync(streamName, expectedVersion);
            startTransactionTask.Wait();
            var transaction = startTransactionTask.Result;

            var position = 0;
            while (position < eventsToSave.Count)
            {
                var pageEvents = eventsToSave.Skip(position).Take(WritePageSize);
                var writeTask = transaction.WriteAsync(pageEvents);
                writeTask.Wait();
                position += WritePageSize;
            }

            var commitTask = transaction.CommitAsync();
            commitTask.Wait();
        }
        aggregate.ClearUncommittedEvents();
    }

    private static EventData ToEventData(Guid eventId, object evnt, IDictionary<string, object> headers)
    {
        var data = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(evnt, serializerSettings));
        var eventHeaders = new Dictionary<string, object>(headers)
        {
            {
                EventClrTypeHeader, evnt.GetType().AssemblyQualifiedName
            }
        };
        var metadata = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(eventHeaders, serializerSettings));
        var typeName = evnt.GetType().Name;

        return new EventData(eventId, typeName, true, data, metadata);
    }
}
It was partially mentioned in the comments, but to expand on that: as you are working fully single-threaded in the code shown (though you use async methods, you just wait on them, so effectively it is synchronous), you are suffering from latency and the overhead of context switching and the EventStore protocol round trips. Either really go the async route, but avoid waiting on the tasks and parallelize instead (EventStore likes parallelization because it can batch multiple writes), or do the batching yourself and send, for example, 20 events at a time.
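For illustration, a minimal sketch of the batched-parallel variant of the save loop from the question, assuming a hypothetical SaveAsync that returns the underlying AppendToStreamAsync task instead of calling .Wait() on it:

const int batchSize = 20;
for (var i = 0; i < testData.Count; i += batchSize)
{
    // Kick off one batch of saves concurrently...
    var batch = testData
        .Skip(i)
        .Take(batchSize)
        .Select(cart => repository.SaveAsync(cart))
        .ToList();
    // ...then wait for the whole batch before starting the next one,
    // so at most batchSize writes are in flight at a time.
    await Task.WhenAll(batch);
}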

.NET 4.0 BackgroundWorker class and unusual behavior

I am running into some strange behavior in the BackgroundWorker class that leads me to believe I don't fully understand how it works. I assumed that the following code sections were more or less equal, except for some extra features that the BackgroundWorker implements (like progress reporting, etc.):
Section 1:
void StartSeparateThread()
{
    BackgroundWorker bw = new BackgroundWorker();
    bw.DoWork += new DoWorkEventHandler(bw_DoWork);
    bw.RunWorkerAsync();
}

void bw_DoWork(object sender, DoWorkEventArgs e)
{
    //Execute some code asynchronous to the thread that owns the function
    //StartSeparateThread() but synchronous to itself.
    var SendCommand = "SomeCommandToSend";
    var toWaitFor = new List<string>(){"Various","Possible","Outputs to wait for"};
    var SecondsToWait = 30;
    //this calls a function that sends the command over the NetworkStream and waits
    //for various responses.
    var Result = SendAndWaitFor(SendCommand, toWaitFor, SecondsToWait);
}
Section 2:
void StartSeparateThread()
{
    Thread pollThread = new Thread(new ThreadStart(DoStuff));
    pollThread.Start();
}

void DoStuff()
{
    //Execute some code asynchronous to the thread that owns the function
    //StartSeparateThread() but synchronous to itself.
    var SendCommand = "SomeCommandToSend";
    var toWaitFor = new List<string>(){"Various","Possible","Outputs to wait for"};
    var SecondsToWait = 30;
    //this calls a function that sends the command over the NetworkStream and waits
    //for various responses.
    var Result = SendAndWaitFor(SendCommand, toWaitFor, SecondsToWait);
}
I was using Section 1 to run some code that sent a string over a NetworkStream and waited for a desired response string, capturing all output during that time. I wrote a function to do this that would return the NetworkStream output, the index of the sent string, and the index of the desired response string.
I was seeing some strange behavior with this, so I changed the function to only return when both the sent string and the response string were found, and when the index of the found string was greater than the index of the sent string; it would otherwise loop forever (just for testing). I would find that the function would indeed return, but that the index of both strings was -1 and the output string was null, or sometimes filled with the expected output of the previous call.
If I were to make a guess about what was happening, it would be that external functions called from within the bw_DoWork() function are run asynchronously to the thread that owns the bw_DoWork() function. As a result, since my SendAndWaitFor() function was called multiple times in succession, the second call would run before the first call finished, overwriting the results of the first call after they were returned but before they could be evaluated. This seems to make sense because the first call would always run correctly and successive calls would show the strange behavior described above, but it seems counterintuitive to how the BackgroundWorker class should behave. Also, if I were to break within the SendAndWaitFor function, things would behave properly. This again leads me to believe there is some multi-threading going on within the bw_DoWork function itself.
When I change the code in the first section above to the code of the second section, things work entirely as expected. So, can anyone who understands the BackgroundWorker class explain what could be going on? Below are some related functions that may be relevant.
Thanks!
public Dictionary<string, string> SendAndWaitFor(string sendString, List<string> toWaitFor, int seconds)
{
    var toReturn = new Dictionary<string, string>();
    var data = new List<byte>();
    var enc = new ASCIIEncoding();
    var output = "";
    var FoundString = "";

    //wait for current buffer to clear
    output = this.SynchronousRead();
    while (!string.IsNullOrEmpty(output))
    {
        output = SynchronousRead();
    }

    //output should be null at this point and the buffer should be clear.
    //send the desired data
    this.write(enc.GetBytes(sendString));

    //look for all desired strings until timeout is reached
    int sendIndex = -1;
    int foundIndex = -1;
    output += SynchronousRead();
    for (DateTime start = DateTime.Now; DateTime.Now - start < new TimeSpan(0, 0, seconds); )
    {
        //wait for a short period to allow the buffer to fill with new data
        Thread.Sleep(300);
        //read the buffer and add it to the output
        output += SynchronousRead();
        foreach (var s in toWaitFor)
        {
            sendIndex = output.IndexOf(sendString);
            foundIndex = output.LastIndexOf(s);
            if (foundIndex > sendIndex)
            {
                toReturn["sendIndex"] = sendIndex.ToString();
                toReturn["foundIndex"] = sendIndex.ToString();
                toReturn["Output"] = output;
                toReturn["FoundString"] = s;
                return toReturn;
            }
        }
    }

    //Set this to loop infinitely while debugging to make sure the function was only
    //returning above
    while (true)
    {
    }
    toReturn["sendIndex"] = "";
    toReturn["foundIndex"] = "";
    toReturn["Output"] = output;
    toReturn["FoundString"] = "";
    return toReturn;
}
public void write(byte[] toWrite)
{
    var enc = new ASCIIEncoding();
    var writeString = enc.GetString(toWrite);
    var ns = connection.GetStream();
    ns.Write(toWrite, 0, toWrite.Length);
}

public string SynchronousRead()
{
    string toReturn = "";
    ASCIIEncoding enc = new ASCIIEncoding();
    var ns = connection.GetStream();
    var sb = new StringBuilder();
    while (ns.DataAvailable)
    {
        var buffer = new byte[4096];
        var numberOfBytesRead = ns.Read(buffer, 0, buffer.Length);
        sb.AppendFormat("{0}", Encoding.ASCII.GetString(buffer, 0, numberOfBytesRead));
        toReturn += sb.ToString();
    }
    return toReturn;
}
All data to be used by a background worker should be passed in through the DoWorkEventArgs, and nothing should be pulled off of the class (or GUI interface).
In looking at your code I could not identify where the property(?) connection was being created. My guess is that connection is created on a different thread, or may be pulling read information, maybe from a GUI(?), and either one of those could cause problems.
I suggest that you create the connection instance in the DoWork event and not pull an existing one off of a different thread. Also verify that the connection does not access any info off of a GUI, but that its info is passed in as it is made.
I discuss an issue with the BackgroundWorker on my blog, C# WPF: Linq Fails in BackgroundWorker DoWork Event, which might show you where the issue lies in your code.
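To illustrate the first point, here is a minimal sketch of passing the worker its data through RunWorkerAsync and e.Argument instead of shared fields; the ConnectionInfo type, host, and port here are hypothetical:

// Hypothetical carrier for everything the worker needs.
class ConnectionInfo
{
    public string Host;
    public int Port;
}

void StartSeparateThread()
{
    var bw = new BackgroundWorker();
    bw.DoWork += bw_DoWork;
    // Hand the worker its inputs explicitly instead of letting it
    // read shared fields off the class.
    bw.RunWorkerAsync(new ConnectionInfo { Host = "device1", Port = 23 });
}

void bw_DoWork(object sender, DoWorkEventArgs e)
{
    var info = (ConnectionInfo)e.Argument;
    // Create the connection on the worker thread so no other thread
    // ever touches it.
    using (var connection = new TcpClient(info.Host, info.Port))
    {
        // ... do the send/wait work here, entirely on this thread ...
    }
}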

Should I use CRITICAL_SECTION or not?

I have a program which has a UI with which users choose how to display data and make small configurations. It also has a background procedure which continuously reads data from the network and updates the data to display.
Now I put them in one process:
background procedure:
STATE MainWindow::Rcv()
{
    DeviceMAP::iterator dev;
    for (dev = dev_map.begin(); dev != dev_map.end(); dev++)
    {
        dev->second.rcvData();//receive data from the network, the time can be ignored.
        BitLog* log = new BitLog();
        dev->second.parseData(log);
        LogItem* logItem = new LogItem();
        logItem->time = QString::fromLocal8Bit(log->rcvTime.c_str());
        logItem->name = QString::fromLocal8Bit(log->basicInfo.getName().c_str());
        logItem->PIN = QString::fromLocal8Bit(log->basicInfo.getPIN().c_str()).toShort();
        delete log;
        add_logItem(logItem);
    }
    return SUCCESS;
}
add_logItem:
void MainWindow::add_logItem(LogItem* logItem)
{
    writeToFile(logItem);
    Device* r = getDevbyPIN(QString::number(logItem->PIN));
    if (r == NULL) return;
    devInfo_inside_widget::States state = logItem->state;
    bool bool_list[portsNum_X];
    for (int i = 0; i < portsNum_X; i++)
    {
        bool_list[i] = 0;
    }
    for (int i = 0; i < portsNum; i++)
    {
        bool_list[i] = (logItem->BITS[i / 8] >> (7 - i % 8)) & 0x1;
    }
    r->refresh(state, logItem->time, bool_list);//update data inside...state, time, BITS...
    IconLabel* icl = getIConLabelByDev(r);//update data
    icl->refresh(state);
    logDisplayQueue.enqueue(logItem);//write queue here
    int size = logDisplayQueue.size();
    if (size > 100)
    {
        logDisplayQueue.dequeue();//write queue here
    }
}
The section above does not deal with any UI operations yet, but when the user pushes a radio button in the UI, the program has to filter the data in the queue and display it in the table widget:
UI operations:
void MainWindow::filter_log_display(bool bol)
{
    row_selectable = false;
    ui->tableWidget->setRowCount(0);//delete all table items
    row_selectable = true;
    int size_1 = logDisplayQueue.size() - 1;
    ui->tableWidget->verticalScrollBar()->setSliderPosition(0);
    if (size_1 + 1 < 100)
    {
        ui->tableWidget->setRowCount(size_1 + 1);
    }
    else
    {
        ui->tableWidget->setRowCount(100);//display 100 rows at most
    }
    if (bol)//filter from logDisplayQueue and display unworking-state-log rows
    {
        int index = 0;
        for (int queue_i = size_1; queue_i >= 0; queue_i--)
        {
            LogItem* logItem = (LogItem*)logDisplayQueue.at(queue_i); // read queue here
            if (logItem->state == STATE_WORK || logItem->state == STATE_UN) continue;
            QString BITS_str = bits2Hexs(logItem->BITS);
            ui->tableWidget->setItem(index, 0, new QTableWidgetItem(logItem->time));//time
            ui->tableWidget->setItem(index, 1, new QTableWidgetItem(logItem->name));//name
            ui->tableWidget->setItem(index, 2, new QTableWidgetItem(BITS_str));//BITS
            if (queue_i == oldRowItemNo) ui->tableWidget->selectRow(index);
            index++;
        }
        ui->tableWidget->setRowCount(index);
    }
    else//display all rows
    {
        for (int queue_i = size_1, index = 0; queue_i >= 0; queue_i--, index++)
        {
            LogItem* logItem = (LogItem*)logDisplayQueue.at(queue_i); //read queue here
            QString BITS_str = bits2Hexs(logItem->BITS);
            finish = clock();
            ui->tableWidget->setItem(index, 0, new QTableWidgetItem(logItem->time));//time
            ui->tableWidget->setItem(index, 1, new QTableWidgetItem(logItem->name));//name
            ui->tableWidget->setItem(index, 2, new QTableWidgetItem(BITS_str));//BITS
            if (queue_i == oldRowItemNo) ui->tableWidget->selectRow(index);
        }
    }
}
So the queue is quite small and the background procedure runs quite frequently (nearly 500 times per second). That is, the queue will be written 500 times in one second, but only displayed from time to time by the user.
I want to split the functions into two threads and run them together: one to receive and update data, one to display.
If I do not use any lock or mutex, the user may get wrong data, but if I force the write-data procedure to enter and leave a critical section every time, it will be a heavy overhead. :)
Should I use CRITICAL_SECTION or something else? Any related suggestions? (My words could be verbose for you :), I only hope for some hints :)
I'd put the "Rcv" function in another QObject-derived class, run it under a QThread other than the main GUI thread, and connect a "logItemAdded(LogItem* item)" signal to the main window's "add_logItem(LogItem* item)" slot.
Just as a quick and dirty hint, my conceptual code follows.
#include <QObject>

class Logger : public QObject
{
    Q_OBJECT
public:
    Logger(QObject* parent = 0);
    virtual ~Logger();

signals:
    void logItemAdded(LogItem* logItem);

public slots:

protected:
    void Rcv()
    {
        // ...
        // was "add_logItem(logItem)"
        emit logItemAdded(logItem);
    }
};

MainWindow::MainWindow(...)
{
    Logger* logger = new Logger;
    // setup your logger
    QThread* thread = new QThread;
    logger->moveToThread(thread);
    connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
    thread->start();
}
Hope this helps.
Good luck.
