Problem with named event of length 4*n-1 using CreateEvent() and OpenEvent() - multithreading

I have two applications: one creates a named event using CreateEvent() and the other opens the same event using OpenEvent(), as follows:
Application A.exe:
DLSRemoteConnnectionRqstEvent = CreateEvent(NULL, FALSE, FALSE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
if (GetLastError() != 0)
{
DLSRemoteConnnectionRqstEvent = OpenEvent(EVENT_ALL_ACCESS, TRUE,(LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
}
else
{
cout<<"DLS_REMOTE_CONNECTION_RQST_EVENT created"<<endl;
}
Application B.exe :
DLSRemoteConnnectionRqstEvent =OpenEvent(EVENT_ALL_ACCESS, TRUE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
if (GetLastError() != 0 && INVALID_HANDLE_VALUE == m_DLSRemoteConnnectionRqstEvent)
{
DLSRemoteConnnectionRqstEvent = CreateEvent(NULL, FALSE, FALSE, (LPCWSTR)DLS_REMOTE_CONNECTION_RQST_EVENT);
}
else
{
cout<<"m_DLSRemoteConnnectionRqstEvent opened"<<endl;
}
The event name is defined in a common header file as below:
#define CS_REMOTE_CONNECTION_RQST_EVENT "DLS_REMOTE_CONNECTION_RQST_MSG_TYPE"
Application A is able to create the event successfully, but Application B is not able to open the event, getting a handle value of NULL.
I have tested a few scenarios and found that if the event name has a length of 4*n-1, then OpenEvent() always gives me a NULL handle value; if the event name is shorter or longer than 4*n-1, my application works fine.
Please help me understand why my application behaves like this when the event name length is 4*n-1.
Other events created and opened similarly in the applications work fine, as their name lengths are not 4*n-1.
CreateEvent() and OpenEvent() should work with event names of any length.

Your LPCWSTR typecasts are wrong. You can't cast narrow strings into wide strings like you are doing. Use a wide string literal to begin with, eg:
#define CS_REMOTE_CONNECTION_RQST_EVENT L"DLS_REMOTE_CONNECTION_RQST_MSG_TYPE"
...
CreateEventW(..., CS_REMOTE_CONNECTION_RQST_EVENT);
...
OpenEventW(..., CS_REMOTE_CONNECTION_RQST_EVENT);
That being said, using CreateEvent() and OpenEvent() the way you are is causing a race condition. Since it is clear that either application can create the event if it doesn't already exist, you should just use CreateEvent() in both applications and let it avoid the race condition for you. There is no reason to use OpenEvent() in this code at all.
Also, your error checking is wrong. Application A is not checking that a failure actually occurred before looking for a failure error code. CreateEvent() can report non-zero error codes in both success and failure conditions. Application B is at least trying to check if OpenEvent() failed, but it is doing so incorrectly since OpenEvent() returns NULL on failure, not INVALID_HANDLE_VALUE.
Try this instead:
Application A.exe and B.exe:
#define CS_REMOTE_CONNECTION_RQST_EVENT L"DLS_REMOTE_CONNECTION_RQST_MSG_TYPE"
...
DLSRemoteConnnectionRqstEvent = CreateEventW(NULL, FALSE, FALSE, CS_REMOTE_CONNECTION_RQST_EVENT);
if (NULL == DLSRemoteConnnectionRqstEvent)
{
// error handling...
}
else
{
if (GetLastError() == ERROR_ALREADY_EXISTS)
cout << "DLS_REMOTE_CONNECTION_RQST_EVENT opened" << endl;
else
cout << "DLS_REMOTE_CONNECTION_RQST_EVENT created" << endl;
}

Related

nlohmann json has a string member called name, how can I check it is null or valid string

I know that I have this:
json var;
var["thirdName"].get<std::string>().c_str();
It is used in C++. The protocol says this member is mandatory, but lots of people don't have a third name.
I get an exception if it is nullptr, because I can say
var["thirdName"] = nullptr;
How can I easily check if it is valid or not?
I've found only one very complex form.
You can use the count method:
if (var.count("thirdName") > 0) {
...
}
Personally, I would omit the > 0:
if (var.count("thirdName")) {
...
}

"this" argument in boost bind

I am writing multi-threaded server that handles async read from many tcp sockets. Here is the section of code that bothers me.
void data_recv (void) {
socket.async_read_some (
boost::asio::buffer(rawDataW, size_t(648*2)),
boost::bind ( &RPC::on_data_recv, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW
void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
if ( rawDataW[bytesRx-1] == ENDMARKER ) { // <-- this code is fine
process_and_write_rawdata_to_file
}
else {
read_socket_until_endmarker // <-- HELP REQUIRED!!
process_and_write_rawdata_to_file
}
}
Nearly always the async_read_some reads in data including the endmarker, so it works fine. Rarely, the endmarker's arrival is delayed in the stream and that's when my program fails. I think it fails because I have not understood how boost bind works.
My first question:
I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
My second question:
In the on_data_recv() method, how do I read data from the same socket that was read in the data_recv() method? In other words, how do I pass the socket as an argument from the calling method to the handler when the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" will be appreciated.
My first question: I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well) start_accept() is a member function. The bind function is conveniently designed such that when you pass a pointer to a member function as its first argument, it interprets its second argument as the object that the member function is applied to.
So while a code like this:
void foo(int x) { ... }
bind(foo, 3)();
Is equivalent to just calling foo(3)
Code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- notice the &Bar:: before foo
Would be equivalent to calling bar.foo(3).
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice & again
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)
When this object is invoked inside Asio it will be equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
For the second part, it is not clear to me how you're working with multiple threads: do you run io_service.run() from more than one thread (possible, but I think beyond your experience level)? It might be that you're confusing async IO with multithreading. I'm going to assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
socket.async_read_some (
boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
boost::bind ( &RPC::on_data_recv, this,
startPos,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW
void on_data_recv (size_t startPos,
boost::system::error_code ec,
std::size_t bytesRx) {
// TODO: Check ec
if (rawDataW[startPos + bytesRx-1] == ENDMARKER) {
process_and_write_rawdata_to_file
}
else {
// TODO: Error if startPos + bytesRx == 648*2
data_recv(startPos + bytesRx);
}
}
Notice though that the above code still has problems, the main one being that if the other side sent two messages quickly one after another, we could receive (in one async_read_some call) the full first message + part of the second message, and thus missing the ENDMARKER from the first one. Thus it is not enough to only test whether the last received byte is == to the ENDMARKER.
I could go on and modify this function further (I think you might get the idea on how), but you'd be better off using async_read_until which is meant exactly for this purpose.

Autosar Software Component

I have been reading several AUTOSAR documents. For now, my concern is just developing a software component. I have two software component designs; take a look at the picture below.
Explanation:
I get data from ports 1 and 2. Each port corresponds to a RunnableEntity that runs when new data has arrived. That RunnableEntity then sets the data into an InterRunnableVariable. The main RunnableEntity, RunnableEntity 1, processes the InterRunnableVariables to produce an output.
The data arrives freely at the port and waits in the buffer to be processed. Then a single RunnableEntity processes the data with the help of common global variables (the purpose of the global variables is the same as that of the InterRunnableVariables).
My questions are,
Will designs 1 and 2 work?
If designs 1 and 2 both work, which one do you prefer in terms of processing time, implementation effort, etc.?
Is the code right? How should the events and the InterRunnableVariables be handled?
Thank you for your help.
====================Adding Code After Comment========================
Design 1
/* Runnable Entity 1*/
/* Event : TimeEvent 25ms */
void re1(void){
data_output out;
irv irv1 = Rte_IrvIread_re1_irv1();
irv irv2 = Rte_IrvIread_re1_irv2();
irv irv3 = Rte_IrvIread_re1_irv3();
out = DataProcess(&irv1,&irv2,&irv3);
Rte_Write_re1_port3_out(out);
}
/* Runnable Entity 2*/
/* Event : DataReceiveErrorEvent on port1 */
void re2(void){
irv irv2 = Rte_IrvIread_re1_irv2();
modify(&irv2);
Rte_IrvIwrite_re1_irv2(irv2);
}
/* Runnable Entity 3*/
/* Event : DataReceiveEvent on port1 */
void re3(void){
data_input1 in;
Std_ReturnType status;
irv irv1 = Rte_IrvIread_re1_irv1();
status = Rte_Receive_re1_port1_input(&in);
if (status == RTE_E_OK) {
modify(&irv1,in);
Rte_IrvIwrite_re1_irv1(irv1);
}
}
/* Runnable Entity 4*/
/* Event : DataReceiveEvent on port2 */
void re4(void){
data_input2 in;
Std_ReturnType status;
irv irv3 = Rte_IrvIread_re1_irv3();
status = Rte_Receive_re1_port2_input2(&in);
if (status == RTE_E_OK) {
modify(&irv3,in);
Rte_IrvIwrite_re1_irv3(irv3);
}
}
Design 2
/*Global Variable*/
global_variable1 gvar1; /* Equal with InterVariable 1 in Design 1*/
global_variable2 gvar2; /* Equal with InterVariable 2 in Design 1*/
global_variable3 gvar3; /* Equal with InterVariable 3 in Design 1*/
/* Runnable Entity 1*/
/* Event : TimeEvent 25ms */
void re1(void){
data_output out;
GetData1()
GetData2()
out = GetOutputWithGlobalVariable();
Rte_Write_re1_port3_out(out);
}
/* Get Data 1*/
void getData1(){
Std_ReturnType status; /* uint8 */
data_input1 in;
do {
status = Rte_Receive_re1_port1_input1(&in);
if (status == RTE_E_OK) {
modifyGlobalVariable(in);
}
} while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
if(status == RTE_E_LOST_DATA){
modifyGlobalVariableWhenError();
}
return;
}
/* Get Data 2*/
void getData2(){
Std_ReturnType status; /* uint8 */
data_input2 in;
do {
status = Rte_Receive_re1_port2_input2(&in);
if (status == RTE_E_OK) {
modifyGlobalVariable2(in);
}
} while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
return;
}
I think both solutions are possible. The main difference is that in the first solution the generated Rte manages the global buffers, whereas in the second design you have to take care of the buffers yourself.
Especially if you have multiple runnables accessing the same buffer, the Rte will either generate interrupt locks to protect data consistency, or it will optimize the locks out if the task contexts in which the RunnableEntities run cannot interrupt each other.
Even if you have only one RunnableEntity as shown in the second design, it might happen that the TimingEvent activates the RunnableEntity and the DataReceivedEvent does as well (although I don't understand why you left out the DataReceivedEvent in the second design). In this case the RunnableEntity is running in two different contexts accessing the same data.
To make it short: My proposal is to use interrunnable variables and let the Rte handle the data consistency, initialization etc.
It might be a little bit more effort to create the software component description, but then you just need to use the generated IrvRead/IrvWrite functions and you are done.
I actually prefer the first one here.
The second one depends a bit on your SWC description, since that contains the specification of the Port Data Access. This definition determines whether the RTE creates a blocking or a non-blocking Rte_Receive.
[SWS_Rte_01288] A non-blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics. (SRS_Rte_00051)
[SWS_Rte_07638] The RTE Generator shall reject configurations where a VariableDataPrototype with ‘event’ semantics is referenced by a VariableAccess in the dataReceivePointByValue role. (SRS_Rte_00018)
[SWS_Rte_01290] A blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics that is, in turn, referenced by a DataReceivedEvent and the DataReceivedEvent is referenced by a WaitPoint. (SRS_Rte_00051)
On the other hand, I'm not sure what happens with your blocking Rte_Receive vs. your TimingEvent-based RunnableEntity call.
Also consider the following:
RTE_E_LOST_DATA actually means you lost data because incoming data overflowed the queue (Rte_Receive only exists with swImplPolicy = queued; with swImplPolicy != queued you get Rte_Read instead). This is not an explicit Std_ReturnType value, but a flag added to that return value -> OverlayedError.
RTE_E_TIMEOUT would be for blocking Rte_Receive
RTE_E_NO_DATA would be for non-blocking Rte_Receive
you should then check as:
Std_ReturnType status;
status = Rte_Receive_..(<instance>, <parameters>);
if (Rte_HasOverlayedError(status)) {
// Handle e.g. RTE_E_LOST_DATA
}
// not applicable to Rte_Receive: if (Rte_IsInfrastructureError(status)) { }
else {
/* handle application error with the error code in status */
status = Rte_ApplicationError(status);
}

Coded UI - "Continue on failure" for Assertions

I'm using SpecFlow with Coded UI to create automated tests for a WPF application.
I have multiple assertions inside a "Then" step, and a couple of them fail. When an assertion fails, the test case fails and execution stops. I want my test case to run to the end, and when the last step has been performed, if any assertions failed during the execution, I want the whole test case to fail.
I have found only a partial solution:
try
{
Assert.IsTrue(condition)
}
catch(AssertFailedException ex)
{
Console.WriteLine("Assert failed, continuing the run");
}
In this case the execution runs to the end, but the test case is marked as passed.
Thanks!
Make a List of Exceptions. Whenever an exception is encountered, catch it and put it in the list.
Create a method with the AfterScenario attribute and check whether the list contains exceptions. If it does, assert a failure with the stringified list of exceptions as the message. This way you don't lose valuable exception information, and the check always happens at the end because of the AfterScenario attribute.
One approach is to declare a bool thisTestFailed and initialize it to false. Within the catch blocks add the statement thisTestFailed = true; then near the end of the test add code such as:
if ( thisTestFailed ) {
Assert.Fail("A suitable test failed message");
}
Another approach is to convert a series of Assert... statements into a series of if tests followed by one Assert. There are several ways of doing that. One way is:
bool thisTestFailed = false;
if ( ... the first assertion ... ) { thisTestFailed = true; }
if ( ... another assertion ... ) { thisTestFailed = true; }
if ( ... and another assertion ... ) { thisTestFailed = true; }
if ( thisTestFailed ) {
Assert.Fail("A suitable test failed message");
}

How to set channel gain in OpenAL?

I tried
alBufferf (myChannelId, AL_MAX_GAIN (and AL_GAIN), volumeValue);
and got error 0xA002.
As Isaac has said, you probably want to be setting gain on your sources:
alSourcef(sourceID, AL_GAIN, volume);
To avoid receiving mysterious error codes in the future, you should get into the habit of polling for errors after calls you think may fail / calls you are trying to debug.
This way, you'd know immediately that "0xA002" is "AL_INVALID_ENUM".
To do this with OpenAL you call alGetError(), which clears and returns the most recent error:
ALenum ALerror = AL_NO_ERROR;
ALerror = alGetError();
std::cout << getALErrorString(ALerror) << std::endl;
You'll need to write something like this to take an error code and return/print a string
std::string getALErrorString(ALenum err) {
switch(err) {
case AL_NO_ERROR: return std::string("AL_NO_ERROR - (No error)."); break;
case AL_INVALID_NAME: return std::string("AL_INVALID_NAME - Invalid name parameter passed to AL call."); break;
case AL_INVALID_ENUM: return std::string("AL_INVALID_ENUM - Invalid enum parameter passed to AL call."); break;
case AL_INVALID_VALUE: return std::string("AL_INVALID_VALUE - Invalid value parameter passed to AL call."); break;
case AL_INVALID_OPERATION: return std::string("AL_INVALID_OPERATION"); break;
case AL_OUT_OF_MEMORY: return std::string("AL_OUT_OF_MEMORY"); break;
default: return std::string("AL Unknown Error."); break;
};
}
You can look up exactly what an error code means for a specific function call in the OpenAL Programmer's Guide.
For example, on page 39 you can see AL_INVALID_ENUM on alSourcef means "The specified parameter is not valid".
0xA002 is AL_INVALID_ENUM (an illegal enum error) on Linux.
You got that because it's impossible to modify the gain of a buffer. There's no such thing.
What you can do is set the AL_GAIN attribute either on the listener (applying it to all sources in the current context) or on a particular source.
