QTcpServer and QTcpSocket: can't read the file sent - Qt 4.7

Good morning, I'm looking for an example of sending a file from one PC to another with QTcpSocket. I tried to write my own code: the user chooses a file (of any type) from his hard drive and sends it to the TCP server, and the server then forwards the file to the other clients. But I have a problem: when I choose a file and send it, the client side reports that the file is being sent, while the server side shows that the file is not received with its total byte count.
Any suggestions, please? This is the function that sends the file on the client side:
void FenClient::on_boutonEnvoyer_2_clicked()
{
    QString nomFichier = lineEdit->text();
    QFile file(nomFichier);
    if (!file.open(QIODevice::ReadOnly))
    {
        qDebug() << "Error: the file can't be opened!";
        return;
    }

    QByteArray bytes = file.readAll();

    // Serialize a size placeholder, then the file name, then the contents.
    QByteArray block;
    QDataStream out(&block, QIODevice::WriteOnly);
    out << quint32(0);
    out << nomFichier;
    out << bytes;

    // Seek back and overwrite the placeholder with the real payload size.
    out.device()->seek(0);
    out << quint32(block.size() - sizeof(quint32));

    qDebug() << "Status: sending...";
    listeMessages->append("status : sending the file...");
    socket->write(block);
}
and the server side:
void FenServeur::datarecieved()
{
    QTcpSocket *socket = qobject_cast<QTcpSocket *>(sender());
    if (socket == 0)
    {
        qDebug() << "no Socket!";
        return;
    }

    forever
    {
        QDataStream in(socket);
        if (blockSize == 0)
        {
            // Wait until at least the 4-byte size prefix has arrived.
            if (socket->bytesAvailable() < (int)sizeof(quint32))
            {
                qDebug() << "waiting: bytesAvailable() < sizeof(quint32)";
                return;
            }
            in >> blockSize;
        }
        if (socket->bytesAvailable() < blockSize)
        {
            qDebug() << "data not received with its total bytes";
            return;
        }
        qDebug() << "!!!!!!";

        QString nameFile;
        QByteArray dataOut;
        in >> nameFile >> dataOut;

        QFile fileOut(nameFile);
        fileOut.open(QIODevice::WriteOnly);
        fileOut.write(dataOut);
        fileOut.close();
        blockSize = 0;
    }
}
void FenServeur::sendToAll(const QString &message)
{
    QByteArray paquet;
    QDataStream out(&paquet, QIODevice::WriteOnly);
    out << (quint32) 0;
    out << message;
    out.device()->seek(0);
    out << (quint32) (paquet.size() - sizeof(quint32));

    for (int i = 0; i < clients.size(); i++)
    {
        clients[i]->write(paquet);
    }
}
So I can't write the file that the server received into a new file. Any suggestions please, and thanks in advance!

Your code is waiting for the other side, but the other side is waiting for you. Any protocol that allows both sides to wait for each other is fundamentally broken.
TCP allows the sender to wait for the receiver but does not allow the receiver to wait for the sender. This makes sense because not allowing the sender to wait for the receiver requires an unlimited amount of buffering. Thus for any application layered on top of TCP, the receiver may not wait for the sender.
But you do:
if (socket->bytesAvailable() < blockSize)
{
    qDebug() << "data not received with its total bytes";
    return;
}
Here, you are waiting for the sender to make progress (bytesAvailable to increase) before you are willing to receive (pull data from the socket). But the sender is waiting for you to make progress before it is willing to send more data. This causes a deadlock. Don't do this.
Receive as much data as you can, as soon as you can, whenever you can. Never insist on receiving more data over the network before you will pull already received data from the network stack.
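For illustration, here is a minimal sketch of a receiver built that way: it drains the socket unconditionally on every readyRead and waits for missing bytes in its own buffer instead of in the network stack. The members m_buffer (a QByteArray) and m_blockSize (a quint32) are hypothetical, not taken from the question's code.

void FenServeur::datarecieved()
{
    QTcpSocket *socket = qobject_cast<QTcpSocket *>(sender());
    if (!socket)
        return;

    // Always pull everything the stack has for us, immediately.
    m_buffer.append(socket->readAll());

    forever
    {
        if (m_blockSize == 0)
        {
            if (m_buffer.size() < (int)sizeof(quint32))
                return;                        // size prefix not complete yet
            QDataStream sizeStream(m_buffer);
            sizeStream >> m_blockSize;
        }
        if (m_buffer.size() < (int)sizeof(quint32) + (int)m_blockSize)
            return;                            // block not complete yet; data stays buffered locally

        // A complete block is buffered: deserialize it.
        QDataStream in(m_buffer);
        quint32 sizePrefix;
        QString fileName;
        QByteArray fileData;
        in >> sizePrefix >> fileName >> fileData;

        QFile out(fileName);
        if (out.open(QIODevice::WriteOnly))
            out.write(fileData);

        m_buffer.remove(0, (int)sizeof(quint32) + (int)m_blockSize);
        m_blockSize = 0;
    }
}

Because already-received bytes are consumed immediately, the sender can always make progress, and the mutual wait described above cannot occur.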

Related

TCP packets sent in a particular sequence are not received accordingly

For an online game I connect multiple clients to a server, and the sequence of messages is crucial to the game logic. For example, to start a new game I want all clients to agree first.
The problem is that my messages go through, but I don't receive them in the right sequence (the sender is also a receiver).
class network : public QTcpSocket

void network::doSend(const MessageType msgType, QString msgReceiver, QString msgText) {
    ...
    if (this->write(msgText.toUtf8()) != msgText.toUtf8().length())
        qWarning() << "Not all data have been sent";
    this->waitForBytesWritten(5000);
#ifdef QT_DEBUG
    qDebug() << "sent" << QVariant::fromValue(msgType).toString() << "from" << m_sName;
#endif
}

void network::doReadyRead() {
    ...
    case nwSyncNewGame: emit onSyncNewGame(aLastMessage); break;
    ...
#ifdef QT_DEBUG
    qDebug() << "received" << aLastMessage["MessageType"].toString() << "from" << aLastMessage["Sender"].toString();
#endif
}
Both sending and receiving are handled in the main thread, like this:

QObject::connect(m_pNetwork, SIGNAL(onSyncNewGame(QVariantMap)),
                 this, SLOT(doNetworkSyncNewGame(QVariantMap)));

void GamePlay::syncNewGame(QVariantMap aConfig) {
    m_pNetwork->doSend(network::nwPoll, "group", "");
    ...
    m_pNetwork->doSend(network::nwSyncNewGame, "group", configData.join("\a"));
}

void GamePlay::doNetworkSyncNewGame(QVariantMap aMsg) {
    emit applyConfig(aMsg);
    emit newGame(aMsg["IsLoading"].toBool());
}
sent "nwPoll" from "Scotty"
sent "nwSyncNewGame" from "Scotty"
sent "nwAnswer" from "Scotty"
received "nwSyncNewGame" from "Scotty"
received "nwRefresh" from "Scotty"
received "nwPoll" from "Scotty"
Where shall I look for a solution?
I was sending more than one packet to the socket from a single function in the main thread, and the writes ended up in a buffer. I restructured the code and everything is fine now. Questions about forcing packets to be sent immediately are covered in several other topics.
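The underlying point is that TCP is a byte stream: several write() calls can coalesce, and one readyRead can deliver several logical messages at once. A common way to keep logical messages intact and in order is to length-prefix each message and split the stream on the receiving side. This is a sketch of that general technique, not the poster's actual fix; m_rxBuffer and handleMessage are illustrative names.

void network::sendFramed(const QByteArray &payload)
{
    QByteArray frame;
    QDataStream out(&frame, QIODevice::WriteOnly);
    out << quint32(payload.size());        // 4-byte length prefix
    frame.append(payload);
    this->write(frame);
}

// Called from the readyRead slot; m_rxBuffer is a member QByteArray.
void network::doReadyRead()
{
    m_rxBuffer.append(this->readAll());
    while (m_rxBuffer.size() >= (int)sizeof(quint32))
    {
        QDataStream in(m_rxBuffer);
        quint32 len;
        in >> len;
        if (m_rxBuffer.size() < (int)sizeof(quint32) + (int)len)
            break;                         // this frame is not complete yet
        QByteArray payload = m_rxBuffer.mid(sizeof(quint32), (int)len);
        m_rxBuffer.remove(0, (int)sizeof(quint32) + (int)len);
        handleMessage(payload);            // messages come out one at a time, in order
    }
}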

Virtual COM Port STM32 and Qt Serial Port

My aim is to communicate over USB CDC HS between an STM32 and an Ubuntu 20.04 PC, from a Qt app created in Qt Creator.
So far I've managed to run communication via UART and everything works fine. Then I decided to switch to USB, and I can still read the incoming data, but only in CuteCom; in my Qt app nothing appears.
To be honest I have no idea what is going on or where to look for the mistake. Here is the code:
void MainWindow::on_pushButtonConnect_clicked()
{
    if (ui->comboBoxDevices->count() == 0) {
        this->addToLogs("No devices found.");
        return;
    }

    QString portName = ui->comboBoxDevices->currentText().split(" ").first();
    this->device->setPortName(portName);
    this->device->setBaudRate(QSerialPort::Baud115200);
    this->device->setDataBits(QSerialPort::Data8);
    this->device->setParity(QSerialPort::NoParity);
    this->device->setStopBits(QSerialPort::OneStop);
    this->device->setFlowControl(QSerialPort::NoFlowControl);

    if (device->open(QIODevice::ReadWrite)) {
        this->addToLogs("Port opened. Setting the connection params...");
        this->addToLogs("UART enabled.");
        qDebug() << "Writing down the parameters...";
        qDebug() << "Baud rate:" << this->device->baudRate();
        qDebug() << "Data bits:" << this->device->dataBits();
        qDebug() << "Stop bits:" << this->device->stopBits();
        qDebug() << "Parity:" << this->device->parity();
        qDebug() << "Flow control:" << this->device->flowControl();
        qDebug() << "Read buffer size:" << this->device->readBufferSize();
        qDebug() << "Port name:" << this->device->portName();
        connect(this->device, SIGNAL(readyRead()), this, SLOT(readFromPort()));
    } else {
        this->addToLogs("The port can not be opened.");
    }
}
And the readFromPort() function:
void MainWindow::readFromPort()
{
    while (this->device->canReadLine()) {
        QString line = this->device->readLine();
        qDebug() << line;
        QString terminator = "\r";
        int pos = line.lastIndexOf(terminator);
        qDebug() << line.left(pos);
        this->addToLogs(line.left(pos));
    }
}
Do you have any idea what might be wrong or not set properly? Would be thankful for all help.
As it turned out, I had wrapped the read in an if condition using canReadLine(). When I commented the whole condition out, leaving just the read, everything worked fine.
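A plausible explanation, with a sketch of the fix: QIODevice::canReadLine() only reports true once a '\n' has arrived, but this device terminates its lines with '\r', so the guard never fires even though data is waiting. Buffering readAll() and splitting on '\r' yourself avoids the problem; m_rxBuffer is an illustrative member, not from the original code.

void MainWindow::readFromPort()
{
    // Take whatever arrived, whether or not a '\n' is in it.
    m_rxBuffer.append(this->device->readAll());
    int pos;
    while ((pos = m_rxBuffer.indexOf('\r')) != -1)
    {
        QString line = QString::fromUtf8(m_rxBuffer.left(pos));
        m_rxBuffer.remove(0, pos + 1);     // drop the line and its '\r' terminator
        this->addToLogs(line);
    }
}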

Asio on Linux stalls in epoll()

We're experiencing a problem with asynchronous operation of standalone (non-Boost) Asio 1.10.6 on Linux, which is demonstrated using the following test app:
#define ASIO_STANDALONE
#define ASIO_HEADER_ONLY
#define ASIO_NO_EXCEPTIONS
#define ASIO_NO_TYPEID
#include "asio.hpp"
#include <chrono>
#include <iostream>
#include <list>
#include <map>
#include <thread>
static bool s_freeInboundSocket = false;
static bool s_freeOutboundSocket = false;
class Tester
{
public:
Tester(asio::io_service& i_ioService, unsigned i_n)
: m_inboundStrand(i_ioService)
, m_listener(i_ioService)
, m_outboundStrand(i_ioService)
, m_resolver(i_ioService)
, m_n(i_n)
, m_traceStart(std::chrono::high_resolution_clock::now())
{}
~Tester()
{}
void TraceIn(unsigned i_line)
{
m_inboundTrace.emplace_back(i_line, std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now() - m_traceStart));
}
void AbortIn(unsigned i_line)
{
TraceIn(i_line);
abort();
}
void TraceOut(unsigned i_line)
{
m_outboundTrace.emplace_back(i_line, std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::high_resolution_clock::now() - m_traceStart));
}
void AbortOut(unsigned i_line)
{
TraceOut(i_line);
abort();
}
void DumpTrace(std::map<unsigned, unsigned>& o_counts)
{
std::cout << "## " << m_n << " ##\n";
std::cout << "-- " << m_traceStart.time_since_epoch().count() << "\n";
std::cout << "- in - - out -\n";
auto in = m_inboundTrace.begin();
auto out = m_outboundTrace.begin();
while ((in != m_inboundTrace.end()) || (out != m_outboundTrace.end()))
{
if (in == m_inboundTrace.end())
{
++o_counts[out->first];
std::cout << " " << out->first << " : " << out->second.count() << "\n";
++out;
}
else if (out == m_outboundTrace.end())
{
++o_counts[in->first];
std::cout << in->first << " : " << in->second.count() << "\n";
++in;
}
else if (out->second < in->second)
{
++o_counts[out->first];
std::cout << " " << out->first << " : " << out->second.count() << "\n";
++out;
}
else
{
++o_counts[in->first];
std::cout << in->first << " : " << in->second.count() << "\n";
++in;
}
}
std::cout << std::endl;
}
//////////////
// Inbound
void Listen(uint16_t i_portBase)
{
m_inboundSocket.reset(new asio::ip::tcp::socket(m_inboundStrand.get_io_service()));
asio::error_code ec;
if (m_listener.open(asio::ip::tcp::v4(), ec)
|| m_listener.bind(asio::ip::tcp::endpoint(asio::ip::tcp::v4(), i_portBase+m_n), ec)
|| m_listener.listen(-1, ec))
{
AbortIn(__LINE__); return;
}
TraceIn(__LINE__);
m_listener.async_accept(*m_inboundSocket,
m_inboundStrand.wrap([this](const asio::error_code& i_error)
{
OnInboundAccepted(i_error);
}));
}
void OnInboundAccepted(const asio::error_code& i_error)
{
TraceIn(__LINE__);
if (i_error) { AbortIn(__LINE__); return; }
asio::async_read_until(*m_inboundSocket, m_inboundRxBuf, '\n',
m_inboundStrand.wrap([this](const asio::error_code& i_err, size_t i_nRd)
{
OnInboundReadCompleted(i_err, i_nRd);
}));
}
void OnInboundReadCompleted(const asio::error_code& i_error, size_t i_nRead)
{
TraceIn(__LINE__);
if (i_error.value() != 0) { AbortIn(__LINE__); return; }
if (bool(i_error)) { AbortIn(__LINE__); return; }
if (i_nRead != 4) { AbortIn(__LINE__); return; } // "msg\n"
std::istream is(&m_inboundRxBuf);
std::string s;
if (!std::getline(is, s)) { AbortIn(__LINE__); return; }
if (s != "msg") { AbortIn(__LINE__); return; }
if (m_inboundRxBuf.in_avail() != 0) { AbortIn(__LINE__); return; }
asio::async_read_until(*m_inboundSocket, m_inboundRxBuf, '\n',
m_inboundStrand.wrap([this](const asio::error_code& i_err, size_t i_nRd)
{
OnInboundWaitCompleted(i_err, i_nRd);
}));
}
void OnInboundWaitCompleted(const asio::error_code& i_error, size_t i_nRead)
{
TraceIn(__LINE__);
if (i_error != asio::error::eof) { AbortIn(__LINE__); return; }
if (i_nRead != 0) { AbortIn(__LINE__); return; }
if (s_freeInboundSocket)
{
m_inboundSocket.reset();
}
}
//////////////
// Outbound
void Connect(std::string i_host, uint16_t i_portBase)
{
asio::error_code ec;
auto endpoint = m_resolver.resolve(asio::ip::tcp::resolver::query(i_host, std::to_string(i_portBase+m_n)), ec);
if (ec) { AbortOut(__LINE__); return; }
m_outboundSocket.reset(new asio::ip::tcp::socket(m_outboundStrand.get_io_service()));
TraceOut(__LINE__);
asio::async_connect(*m_outboundSocket, endpoint,
m_outboundStrand.wrap([this](const std::error_code& i_error, const asio::ip::tcp::resolver::iterator& i_ep)
{
OnOutboundConnected(i_error, i_ep);
}));
}
void OnOutboundConnected(const asio::error_code& i_error, const asio::ip::tcp::resolver::iterator& i_endpoint)
{
TraceOut(__LINE__);
if (i_error) { AbortOut(__LINE__); return; }
std::ostream(&m_outboundTxBuf) << "msg" << '\n';
asio::async_write(*m_outboundSocket, m_outboundTxBuf.data(),
m_outboundStrand.wrap([this](const asio::error_code& i_error, size_t i_nWritten)
{
OnOutboundWriteCompleted(i_error, i_nWritten);
}));
}
void OnOutboundWriteCompleted(const asio::error_code& i_error, size_t i_nWritten)
{
TraceOut(__LINE__);
if (i_error) { AbortOut(__LINE__); return; }
if (i_nWritten != 4) { AbortOut(__LINE__); return; } // "msg\n"
TraceOut(__LINE__);
m_outboundSocket->shutdown(asio::socket_base::shutdown_both);
asio::async_read_until(*m_outboundSocket, m_outboundRxBuf, '\n',
m_outboundStrand.wrap([this](const asio::error_code& i_error, size_t i_nRead)
{
OnOutboundWaitCompleted(i_error, i_nRead);
}));
}
void OnOutboundWaitCompleted(const asio::error_code& i_error, size_t i_nRead)
{
TraceOut(__LINE__);
if (i_error != asio::error::eof) { AbortOut(__LINE__); return; }
if (i_nRead != 0) { AbortOut(__LINE__); return; }
if (s_freeOutboundSocket)
{
m_outboundSocket.reset();
}
}
private:
//////////////
// Inbound
asio::io_service::strand m_inboundStrand;
asio::ip::tcp::acceptor m_listener;
std::unique_ptr<asio::ip::tcp::socket> m_inboundSocket;
asio::streambuf m_inboundRxBuf;
asio::streambuf m_inboundTxBuf;
//////////////
// Outbound
asio::io_service::strand m_outboundStrand;
asio::ip::tcp::resolver m_resolver;
std::unique_ptr<asio::ip::tcp::socket> m_outboundSocket;
asio::streambuf m_outboundRxBuf;
asio::streambuf m_outboundTxBuf;
//////////////
// Common
unsigned m_n;
const std::chrono::high_resolution_clock::time_point m_traceStart;
std::vector<std::pair<unsigned, std::chrono::nanoseconds>> m_inboundTrace;
std::vector<std::pair<unsigned, std::chrono::nanoseconds>> m_outboundTrace;
};
static int Usage(int i_ret)
{
std::cout << "[" << i_ret << "]" << "Usage: example <nThreads> <nConnections> <inboundFree> <outboundFree>" << std::endl;
return i_ret;
}
int main(int argc, char* argv[])
{
if (argc < 5)
return Usage(__LINE__);
const unsigned nThreads = unsigned(std::stoul(argv[1]));
if (nThreads == 0)
return Usage(__LINE__);
const unsigned nConnections = unsigned(std::stoul(argv[2]));
if (nConnections == 0)
return Usage(__LINE__);
s_freeInboundSocket = (*argv[3] == 'y');
s_freeOutboundSocket = (*argv[4] == 'y');
const uint16_t listenPortBase = 25000;
const uint16_t connectPortBase = 25000;
const std::string connectHost = "127.0.0.1";
asio::io_service ioService;
std::cout << "Creating." << std::endl;
std::list<Tester> testers;
for (unsigned i = 0; i < nConnections; ++i)
{
testers.emplace_back(ioService, i);
testers.back().Listen(listenPortBase);
testers.back().Connect(connectHost, connectPortBase);
}
std::cout << "Starting." << std::endl;
std::vector<std::thread> threads;
for (unsigned i = 0; i < nThreads; ++i)
{
threads.emplace_back([&]()
{
ioService.run();
});
}
std::cout << "Waiting." << std::endl;
for (auto& thread : threads)
{
thread.join();
}
std::cout << "Stopped." << std::endl;
return 0;
}
void DumpAllTraces(std::list<Tester>& i_testers)
{
std::map<unsigned, unsigned> counts;
for (auto& tester : i_testers)
{
tester.DumpTrace(counts);
}
std::cout << "##############################\n";
for (const auto& count : counts)
{
std::cout << count.first << " : " << count.second << "\n";
}
std::cout << std::endl;
}
#if defined(ASIO_NO_EXCEPTIONS)
namespace asio
{
namespace detail
{
template <typename Exception>
void throw_exception(const Exception& e)
{
abort();
}
} // namespace detail
} // namespace asio
#endif
We compile as follows (the problem only occurs in optimised builds):
g++ -o example -m64 -g -O3 --no-exceptions --no-rtti --std=c++11 -I asio-1.10.6/include -lpthread example.cpp
We're running on Debian Jessie. uname -a reports Linux <hostname> 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux.
The problem appears under both GCC (g++ (Debian 4.9.2-10) 4.9.2) and Clang (Debian clang version 3.5.0-10 (tags/RELEASE_350/final) (based on LLVM 3.5.0)).
[EDITED TO ADD: It also happens on Debian Stretch Linux <hostname> 4.6.0-1-amd64 #1 SMP Debian 4.6.1-1 (2016-06-06) x86_64 GNU/Linux with g++ (Debian 6.2.1-5) 6.2.1 20161124.]
In summary, the test app does the following:
We create N connections, each consisting of an inbound (listening) end and an outbound (connecting) end. Each inbound listener is bound to a unique port (starting at 25000), and each outbound connector uses a system-selected originating port.

The inbound end performs an async_accept, and on completion issues an async_read. When the read completes it issues another async_read that we expect to return eof. When that completes, we either free the socket immediately, or leave it as-is (with no pending async operations) to be cleaned up by the relevant destructors at program exit. (Note that the listener socket is always left as-is, with no pending accept, until exit.)

The outbound end performs an async_connect, and on completion issues an async_write. When the write completes it issues a shutdown (specifically, shutdown(both)) followed by an async_read that we expect to return eof. On completion, we once again either leave the socket as-is, with no pending operations, or we free it immediately. Any error or unexpected receive data results in an immediate abort() call.

The test app lets us specify the number of worker threads for the io_service, the total number of connections to create, and flags controlling whether inbound and outbound sockets respectively are freed or left as-is.

We run the test app repeatedly, specifying 50 threads and 1000 connections:

while ./example 50 1000 n y >out.txt ; do echo -n . ; done
If we specify that all sockets are left as-is, the test loop runs indefinitely. To avoid muddying the waters with SO_REUSEADDR considerations, we take care that no sockets are in TIME_WAIT state from a previous test run before we start the test, otherwise the listens can fail. But with this caveat satisfied, the test app runs literally hundreds, even thousands of times with no error. Similarly, if we specify that inbound sockets (but NOT outbound sockets) should be explicitly freed, all runs fine.
However, if we specify that outbound sockets should be freed, the app stalls after a variable number of executions - sometimes ten or fewer, sometimes a hundred or more, usually somewhere in between.
Connecting to the stalled process with GDB, we see that the main thread is waiting to join the worker threads, all but one of the worker threads are idle (waiting on an Asio internal condition variable), and that one worker thread is waiting in Asio's call to epoll(). The internal trace instrumentation verifies that some of the sockets are waiting on async operations to complete - sometimes the initial (inbound) accept, sometimes the (inbound) data read, and sometimes the final inbound or outbound reads that normally complete with eof.
In all cases, the other end of the connection has successfully done its bit: if the inbound accept is still pending, we see that the corresponding outbound connect has successfully completed, along with the outbound write; likewise if the inbound data read is pending, the corresponding outbound connect and write have completed; if the inbound EOF read is pending, the outbound shutdown has been performed, and likewise if the outbound EOF read is pending, the inbound EOF read has completed due to the outbound shutdown.
Examining the process's /proc/N/fdinfo shows that the epoll file descriptor is indeed waiting on the file descriptors indicated by the instrumentation.
Most puzzlingly, netstat shows nonzero RecvQ sizes for the waiting sockets - that is, sockets for which there is a read operation pending are shown to have receive data or close events ready to read. This is consistent with our instrumentation, in that it shows that write data has been delivered to the inbound socket, but has not yet been read (or alternatively that the outbound shutdown has issued a FIN to the inbound side, but that the EOF has not yet been 'read').
This leads me to suspect that Asio's epoll bookkeeping - in particular its edge-triggered event management - is getting out of sync somewhere due to a race condition. Clearly this is more than likely due to incorrect operations on my part, but I can't see where the problem would be.
All insights, suggestions, known issues, and pointing-out-glaring-screwups would be greatly appreciated.
[EDITED TO ADD: Using strace to capture kernel calls interferes with execution such that the stall doesn't happen. Using sysdig doesn't have this effect, but it currently doesn't capture the parameters of the epoll_wait and epoll_ctl syscalls. Sigh.]
This appears to have been resolved by the maintainer of Asio; see https://github.com/chriskohlhoff/asio/issues/180 and the fix in https://github.com/chriskohlhoff/asio/commit/669e6b8b9de1309927b29d8b6be3630cc69c07ac.

Using boost::thread to start/stop logging data

I'm currently trying to log real-time data using boost::thread and a check box. When I check the box, the logging thread starts; when I uncheck it, the logging thread stops. The problems arise when I check/uncheck repeatedly and very fast: the program crashes, some files aren't logged, and so on. How can I write a reliable, thread-safe program where these problems don't occur when checking/unchecking quickly and repeatedly? I also don't want to use join(), since that temporarily stops the data input coming from the main thread. Below is a code snippet:
// Main thread
if (m_loggingCheckBox->isChecked())
{
    ...
    if (m_ThreadLogData.InitializeReadThread(socketInfo)) // opens the socket
        // If the socket is opened and can be read, start the thread.
        m_ThreadLogData.StartReadThread();
    else
        std::cout << "Did not initialize thread\n";
}
else if (!m_loggingCheckBox->isChecked())
{
    m_ThreadLogData.StopReadThread();
}
void ThreadLogData::StartReadThread()
{
    //std::cout << "Thread started." << std::endl;
    m_stopLogThread = false;
    m_threadSendData = boost::thread(&ThreadLogData::LogData, this);
}

void ThreadLogData::StopReadThread()
{
    m_stopLogThread = true;
    m_ReadDataSocket.close_socket(); // close the socket
    if (ofstreamLogFile.is_open())
    {
        ofstreamLogFile.flush(); // flush the log file before closing it
        ofstreamLogFile.close(); // close the log file
    }
    m_threadSendData.interrupt(); // interrupt the thread
    //m_threadSendData.join(); // commented out since this temporarily stops data input
}
// Secondary thread
bool ThreadLogData::LogData()
{
    ...
    while (!m_stopLogThread)
    {
        try {
            // log the data to an output file
            ...
            boost::this_thread::interruption_point();
        } catch (boost::thread_interrupted& interruption) {
            std::cout << "ThreadLogData::LogData(): Caught Interruption thread." << std::endl;
            StopReadThread();
        } catch (...) {
            std::cout << "ThreadLogData::LogData(): Caught Something." << std::endl;
            StopReadThread();
        }
    } // end while()
    return true; // note: the original snippet fell off the end of this bool function without a return
}
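For comparison, here is a minimal sketch of a start/stop pattern that avoids these races (assuming C++11 for std::atomic; Logger, run(), and the members are illustrative names, not the poster's code). Because the flag is polled every loop iteration, the join() in stop() returns almost immediately and does not stall the main thread for long:

#include <atomic>
#include <boost/thread.hpp>

class Logger
{
public:
    void start()
    {
        stop();                        // make start idempotent: never two threads at once
        m_stop.store(false);
        m_thread = boost::thread(&Logger::run, this);
    }

    void stop()
    {
        m_stop.store(true);
        if (m_thread.joinable())
            m_thread.join();           // returns quickly: run() checks m_stop every pass
    }

private:
    void run()
    {
        while (!m_stop.load())
        {
            // ... read one sample from the socket and append it to the log file ...
        }
        // Flush/close the file here, in the logging thread itself, so no
        // other thread ever touches the stream while it is being written.
    }

    std::atomic<bool> m_stop{false};
    boost::thread m_thread;
};

The key points: only the logging thread touches the file, start() and stop() are idempotent, and the old thread is always joined before a new one starts, so rapid toggling can never leave two threads writing at once.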

boost::asio::write does not seem to work while boost::asio::read is outstanding

I am using boost 1.52.0 32 bit libraries with OpenSSL 32 bit libraries with unmanaged Visual C++ 2008 for a new client I am writing to communicate with an existing server. My test machine uses Windows 8. I am using synchronous reads and writes. The code is built into a DLL that is accessed from C#, but all asio calls are done on unmanaged threads created with boost::thread_group.
What I have discovered is that when a synchronous read is waiting for data, then a synchronous write taking place in another thread appears to be blocked and will not go out - at least with the way I have things coded. So my question is - should a synchronous write be able to be completely executed while a synchronous read is waiting for data in another thread?
I have verified that I can write data out successfully when there is no pending read in another thread. I did this by freezing the read thread right before it was about to read; the write thread then wrote a message out. When I thawed the read thread, it successfully read the response from the server about the message that was sent.
The following method is called by the create_thread method to handle reading messages off the wire from the server:
void SSLSocket::ProcessServerRequests()
{
// This method is responsible for processing requests from a server.
Byte *pByte;
int ByteCount;
size_t BytesTransferred;
boost::system::error_code Err;
Byte* pReqBuf;
string s;
stringstream ss;
//
try
{
ss << "ProcessServerRequests: Worker thread: " << Logger::NumberToString(boost::this_thread::get_id()) << " started.\n";
Log.LogString(ss.str(), LogInfo);
// Enable the handlers for the handshaking.
IOService->run();
// Wait for the handshake to be successfully completed.
do
{
Sleep(50);
} while (!HandShakeReady);
//
sClientIp = pSocket->lowest_layer().remote_endpoint().address().to_string();
uiClientPort = pSocket->lowest_layer().remote_endpoint().port();
ReqAlive = true;
// If the thread that handles sending msgs to all servers has not been created yet, then create that one.
// This thread is created just once to handle all outbound msgs to all servers.
WorkerThreads.create_thread(boost::bind(&SSLSocket::SendWorkerThread));
// Loop until the user quits, or an error is detected. The read method should wait until there is something to read.
do
{
pReqBuf = BufMang.GetPtr(MsgLenBytes);
boost::asio::read(*pSocket, boost::asio::buffer(pReqBuf, MsgLenBytes), boost::asio::transfer_exactly(MsgLenBytes), Err);
if (Err)
{
s = Err.message();
if ((s.find("short r")) == string::npos)
{
ss.str("");
ss << "SSLSocket::ProcessServerRequests: read(1) error = " << Err.message() << "\n. Terminating.\n\n";
Log.LogString(ss.str(), LogError);
}
Terminate();
// Notify the client that an error has been encountered and the program needs to shut down. TBD.
}
else
{
// Get the number of bytes in the message.
pByte = pReqBuf;
B2I.B.B1 = *pByte++;
B2I.B.B2 = *pByte++;
B2I.B.B3 = *pByte++;
B2I.B.B4 = *pByte;
ByteCount = B2I.IntVal;
pReqBuf = BufMang.GetPtr(ByteCount);
// Do a synchronous read which will hang until the entire message is read off the wire.
BytesTransferred = boost::asio::read(*pSocket, boost::asio::buffer(pReqBuf, ByteCount), boost::asio::transfer_exactly(ByteCount), Err);
ss.str("");
ss << "SSLSocket::ProcessServerRequests: # bytes rcvd = " << Logger::NumberToString(BytesTransferred).c_str() << " from ";
ss << sClientIp.c_str() << " : " << Logger::NumberToString(uiClientPort) << "\n";
Log.LogString(ss.str(), LogDebug2);
Log.LogBuf(pReqBuf, (int)BytesTransferred, DisplayInHex, LogDebug3);
if ((Err) || (ByteCount != BytesTransferred))
{
if (Err)
{
ss.str("");
ss << "ProcessServerRequests:read(2) error = " << Err.message() << "\n. Terminating.\n\n";
}
else
{
ss.str("");
ss << "ProcessServerRequests:read(3) error - BytesTransferred (" << Logger::NumberToString(BytesTransferred).c_str() <<
") != ByteCount (" << Logger::NumberToString(ByteCount).c_str() << "). Terminating.\n\n";
}
Log.LogString(ss.str(), LogError);
Terminate();
// Notify the client that an error has been encountered and the program needs to shut down. TBD.
break;
}
// Call the C# callback method that will handle the message.
Log.LogString("SSLSocket::ProcessServerRequests: sending msg to the C# client.\n\n", LogDebug2);
CallbackFunction(this, BytesTransferred, (void*)pReqBuf);
}
} while (ReqAlive);
Log.LogString("SSLSocket::ProcessServerRequests: worker thread done.\n", LogInfo);
}
catch (std::exception& e)
{
stringstream ss;
ss << "SSLSocket::ProcessServerRequests: threw an error - " << e.what() << ".\n";
Log.LogString(ss.str(), LogError);
}
}
The following method is called by the create_thread method to handle sending messages to the server:
void SSLSocket::SendWorkerThread()
{
// This method handles sending msgs to the server. It is called upon 1st time class initialization.
//
DWORD WaitResult;
Log.LogString("SSLSocket::SendWorkerThread: Worker thread " + Logger::NumberToString(boost::this_thread::get_id()) + " started.\n", LogInfo);
// Loop until the user quits, or an error of some sort is thrown.
try
{
do
{
// If there are one or more msgs that need to be sent to a server, then send them out.
if (SendMsgQ.Count() > 0)
{
Message* pMsg = SendMsgQ.Pop();
// Byte* pBuf = pMsg->pBuf;
const Byte* pBuf = pMsg->pBuf;
SSLSocket* pSSL = pMsg->pSSL;
int BytesInMsg = pMsg->BytesInMsg;
boost::system::error_code Error;
unsigned int BytesTransferred = boost::asio::write(*pSSL->pSocket, boost::asio::buffer(pBuf, BytesInMsg), Error);
string s = "SSLSocket::SendWorkerThread: # bytes sent = ";
s += Logger::NumberToString(BytesInMsg).c_str();
s += "\n";
Log.LogString(s, LogDebug2);
Log.LogBuf(pBuf, BytesInMsg, DisplayInHex, LogDebug3);
if (Error)
{
Log.LogString("SSLSocket::SendWorkerThread: error sending message - " + Error.message() + "\n", LogError);
}
}
else
{
// Nothing to send, so go into a wait state.
WaitResult = WaitForSingleObject(hEvent, INFINITE);
if (WaitResult != 0L)
{
Log.LogString("SSLSocket::SendWorkerThread: WaitForSingleObject event error. Code = " + Logger::NumberToString(GetLastError()) + ". \n", LogError);
}
}
} while (ReqAlive);
Log.LogString("SSLSocket::SendWorkerThread: Worker thread " + Logger::NumberToString(boost::this_thread::get_id()) + " done.\n", LogInfo);
}
catch (std::exception& e)
{
stringstream ss;
ss << "SSLSocket::SendWorkerThread: threw an error - " << e.what() << ".\n";
Log.LogString(ss.str(), LogError);
}
}
So, if a synchronous write should be able to execute while a synchronous read is pending in another thread, can someone please tell me what my code is doing wrong?
An Asio socket is not thread-safe, so you may not access the same socket from different threads.
Use async_read and async_write instead.
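A minimal sketch of that advice in the shape of the question's class, for boost 1.52-era Asio. Strand (an io_service::strand member), pReadBuf, OnRead, and OnWrite are hypothetical names; pSocket, Byte, and MsgLenBytes come from the question. The point is that every handler is funneled through one strand, so the read chain and the write chain never touch the stream concurrently:

// Hypothetical members assumed: boost::asio::io_service::strand Strand;
// Byte* pReadBuf; handlers OnRead/OnWrite taking (error_code, size_t).
void SSLSocket::StartRead()
{
    boost::asio::async_read(*pSocket,
        boost::asio::buffer(pReadBuf, MsgLenBytes),
        Strand.wrap(boost::bind(&SSLSocket::OnRead, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
}

void SSLSocket::Send(const Byte* pBuf, int nBytes)
{
    boost::asio::async_write(*pSocket,
        boost::asio::buffer(pBuf, nBytes),
        Strand.wrap(boost::bind(&SSLSocket::OnWrite, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred)));
}

With this shape, a pending read no longer blocks writes. The usual rule is to keep at most one async_write outstanding per stream, so OnWrite would pop the next message from SendMsgQ and call Send again.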
