I have been reading several AUTOSAR documents. For now, my concern is just developing a software component. I have two software component designs; take a look at the picture below.
Explanation:
Design 1: I receive data on port 1 and port 2. Each port corresponds to a RunnableEntity that runs whenever new data arrives. That RunnableEntity then writes the data to an InterRunnableVariable. The main RunnableEntity, RunnableEntity 1, processes the InterRunnableVariables to produce an output.
Design 2: The data arrive freely at the ports and wait in a buffer to be processed. A single RunnableEntity then processes the data with the help of common global variables (the global variables serve the same purpose as the InterRunnableVariables).
My questions are:
Will designs 1 and 2 work?
If both designs are valid, which one do you prefer with respect to processing time, implementation effort, etc.?
Is the code below right? How should events and InterRunnableVariables be handled?
Thank you for your help.
====================Adding Code After Comment========================
Design 1
/* Runnable Entity 1 */
/* Event: TimingEvent, 25 ms */
void re1(void){
    data_output out;
    irv irv1 = Rte_IrvIread_re1_irv1();
    irv irv2 = Rte_IrvIread_re1_irv2();
    irv irv3 = Rte_IrvIread_re1_irv3();
    out = DataProcess(&irv1, &irv2, &irv3);
    Rte_Write_re1_port3_out(out); /* pass the computed output to the RTE */
}

/* Runnable Entity 2 */
/* Event: DataReceiveErrorEvent on port1 */
void re2(void){
    irv irv2 = Rte_IrvIread_re1_irv2();
    modify(&irv2);
    Rte_IrvIwrite_re1_irv2(irv2);
}

/* Runnable Entity 3 */
/* Event: DataReceivedEvent on port1 */
void re3(void){
    data_input1 in;
    Std_ReturnType status;
    irv irv1 = Rte_IrvIread_re1_irv1();
    status = Rte_Receive_re1_port1_input1(&in);
    if (status == RTE_E_OK) {
        modify(&irv1, in);
        Rte_IrvIwrite_re1_irv1(irv1);
    }
}

/* Runnable Entity 4 */
/* Event: DataReceivedEvent on port2 */
void re4(void){
    data_input2 in;
    Std_ReturnType status;
    irv irv3 = Rte_IrvIread_re1_irv3();
    status = Rte_Receive_re1_port2_input2(&in);
    if (status == RTE_E_OK) {
        modify(&irv3, in);
        Rte_IrvIwrite_re1_irv3(irv3);
    }
}
Design 2
/* Global variables */
global_variable1 gvar1; /* Equivalent to InterRunnableVariable 1 in Design 1 */
global_variable2 gvar2; /* Equivalent to InterRunnableVariable 2 in Design 1 */
global_variable3 gvar3; /* Equivalent to InterRunnableVariable 3 in Design 1 */

/* Runnable Entity 1 */
/* Event: TimingEvent, 25 ms */
void re1(void){
    data_output out;
    getData1();
    getData2();
    out = GetOutputWithGlobalVariable();
    Rte_Write_re1_port3_out(out);
}

/* Get Data 1 */
void getData1(void){
    Std_ReturnType status; /* uint8 */
    data_input1 in;
    do {
        status = Rte_Receive_re1_port1_input1(&in);
        if (status == RTE_E_OK) {
            modifyGlobalVariable(in);
        }
    } while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
    if (status == RTE_E_LOST_DATA) { /* data was lost because the receive queue overflowed */
        modifyGlobalVariableWhenError();
    }
    return;
}
/* Get Data 2 */
void getData2(void){
    Std_ReturnType status; /* uint8 */
    data_input2 in;
    do {
        status = Rte_Receive_re1_port2_input2(&in);
        if (status == RTE_E_OK) {
            modifyGlobalVariable2(in);
        }
    } while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
    return;
}
I think both solutions are possible. The main difference is that in the first solution the generated RTE will manage the global buffers, whereas in the second design you have to take care of the buffers yourself.
Especially if you have multiple runnables accessing the same buffer, the RTE will either generate interrupt locks to protect data consistency, or it will optimize the locks out if the task contexts in which the RunnableEntities run cannot interrupt each other.
Even if you have only one RunnableEntity, as shown in the second design, it might happen that both the TimingEvent and a DataReceivedEvent activate the RunnableEntity (although I don't understand why you left out the DataReceivedEvent in the second design). In that case the RunnableEntity runs in two different contexts that access the same data.
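To illustrate what "taking care of the buffers yourself" means in the second design: if more than one context can touch the globals, every access has to be guarded by hand, for example with an RTE exclusive area. A minimal sketch, assuming a configured ExclusiveArea (the name GvarProtection and the helper process() are made up for illustration):

void modifyGlobalVariable(data_input1 in)
{
    Rte_Enter_GvarProtection(); /* lock generated by the RTE for the ExclusiveArea */
    gvar1 = process(in);        /* process() stands in for the application logic */
    Rte_Exit_GvarProtection();  /* release the lock */
}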
To make it short: my proposal is to use inter-runnable variables and let the RTE handle data consistency, initialization, etc.
It might be a little more effort to create the software component description, but then you just need to use the generated IrvRead/IrvWrite functions and you are done.
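For reference, a sketch of what the explicit IRV API would look like for the receive runnable of Design 1. The exact function names come out of your RTE generator and SWC description, so treat these as illustrative:

/* Runnable Entity 3, rewritten with the explicit IRV API (illustrative names) */
void re3(void)
{
    data_input1 in;
    Std_ReturnType status;
    status = Rte_Receive_re1_port1_input1(&in);
    if (status == RTE_E_OK) {
        irv irv1 = Rte_IrvRead_re3_irv1();  /* the RTE guarantees a consistent read */
        modify(&irv1, in);
        Rte_IrvWrite_re3_irv1(irv1);        /* the RTE guarantees a consistent write */
    }
}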
I actually prefer the first one here.
The second one depends a bit on your SWC description, since that is where the Port Data Access is specified. That definition determines whether the RTE creates a blocking or a non-blocking Rte_Receive.
[SWS_Rte_01288] A non-blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics. (SRS_Rte_00051)
[SWS_Rte_07638] The RTE Generator shall reject configurations where a VariableDataPrototype with ‘event’ semantics is referenced by a VariableAccess in the dataReceivePointByValue role. (SRS_Rte_00018)
[SWS_Rte_01290] A blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics that is, in turn, referenced by a DataReceivedEvent and the DataReceivedEvent is referenced by a WaitPoint.
(SRS_Rte_00051)
On the other hand, I'm not sure what happens with a blocking Rte_Receive inside your TimingEvent-triggered RunnableEntity.
Also consider the following:
RTE_E_LOST_DATA actually means you lost data because incoming data overflowed the queue (Rte_Receive only exists for swImplPolicy = queued; if swImplPolicy != queued you get Rte_Read instead). It is not an explicit Std_ReturnType value but a flag added on top of the return value (an OverlayedError).
RTE_E_TIMEOUT would be returned by a blocking Rte_Receive.
RTE_E_NO_DATA would be returned by a non-blocking Rte_Receive.
You should then check the status like this:
Std_ReturnType status;
status = Rte_Receive_..(<instance>, <parameters>);
if (Rte_HasOverlayedError(status)) {
    // Handle e.g. RTE_E_LOST_DATA
}
// Rte_IsInfrastructureError(status) is not relevant for Rte_Receive
else {
    // Handle the application error with the error code in status
    status = Rte_ApplicationError(status);
}
Related
I know TCP_SYN_RECV, but what is the meaning of TCP_NEW_SYN_RECV? What is the difference between them?
https://github.com/torvalds/linux/blob/5924bbecd0267d87c24110cbe2041b5075173a25/include/net/tcp_states.h
enum {
TCP_ESTABLISHED = 1,
TCP_SYN_SENT,
TCP_SYN_RECV,
TCP_FIN_WAIT1,
TCP_FIN_WAIT2,
TCP_TIME_WAIT,
TCP_CLOSE,
TCP_CLOSE_WAIT,
TCP_LAST_ACK,
TCP_LISTEN,
TCP_CLOSING, /* Now a valid state */
TCP_NEW_SYN_RECV,
TCP_MAX_STATES /* Leave at the end! */
};
I also saw the following code, "sk->sk_state == TCP_NEW_SYN_RECV"; why not use "sk->sk_state == TCP_SYN_RECV" instead?
https://github.com/torvalds/linux/blob/8fa3b6f9392bf6d90cb7b908e07bd90166639f0a/net/ipv4/tcp_ipv4.c#L16485
if (sk->sk_state == TCP_NEW_SYN_RECV) {
struct request_sock *req = inet_reqsk(sk);
struct sock *nsk;
I've found this:
TCP_SYN_RECV state is currently used by fast open sockets.
Initial TCP requests (the pseudo sockets created when a SYN is received)
are not yet associated to a state. They are attached to their parent,
and the parent is in TCP_LISTEN state.
This commit adds TCP_NEW_SYN_RECV state, so that we can convert
TCP stack to a different scheme gradually.
Source (the author's commit): http://git.kernel.org/linus/10feb428a504
I am writing a multi-threaded server that handles asynchronous reads from many TCP sockets. Here is the section of code that bothers me.
void data_recv (void) {
socket.async_read_some (
boost::asio::buffer(rawDataW, size_t(648*2)),
boost::bind ( &RPC::on_data_recv, this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW
void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if (rawDataW[bytesRx-1] == ENDMARKER) { // <-- this code is fine
        process_and_write_rawdata_to_file
    }
    else {
        read_socket_until_endmarker // <-- HELP REQUIRED!!
        process_and_write_rawdata_to_file
    }
}
Nearly always, async_read_some reads data up to and including the end marker, so it works fine. Rarely, the end marker's arrival is delayed in the stream, and that is when my program fails. I think it fails because I have not understood how boost::bind works.
My first question:
I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
My second question:
In the on_data_recv() method, how do I read data from the same socket that was read in the data_recv() method? In other words, how do I pass the socket as an argument from the calling method to the handler when the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" will be appreciated.
My first question: I am confused by this boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well), start_accept() is a member function. bind is conveniently designed so that when its first argument is a pointer to a member function (written with &, e.g. &Bar::foo), it treats its second argument as the object on which that member function is called.
So while a code like this:
void foo(int x) { ... }
bind(foo, 3)();
Is equivalent to just calling foo(3)
Code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- notice the & before Bar::foo
Would be equivalent to calling bar.foo(3).
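For completeness, a self-contained version of the two forms above (using boost::bind; std::bind behaves the same way here):

#include <boost/bind.hpp>
#include <iostream>

void foo(int x) { std::cout << "free function: " << x << "\n"; }

struct Bar {
    void foo(int x) { std::cout << "member function: " << x << "\n"; }
};

int main() {
    boost::bind(foo, 3)();              // calls foo(3)
    Bar bar;
    boost::bind(&Bar::foo, &bar, 3)();  // calls bar.foo(3)
}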
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice & again
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)
When this functor is invoked inside Asio, it is equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
For the second part, it is not clear to me how you're working with multiple threads. Do you run io_service.run() from more than one thread (possible, but I think beyond your experience level)? It might be that you're confusing async I/O with multithreading. I'm going to assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
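In other words, the setup I am assuming looks roughly like this; the RPC constructor signature is hypothetical, and the point is that a single io_service.run() on one thread drives all the asynchronous handlers:

#include <boost/asio.hpp>

int main() {
    boost::asio::io_service io_service;
    RPC rpc(io_service);   // hypothetical constructor: your class owning the socket
    rpc.data_recv();       // queue the first asynchronous read
    io_service.run();      // this single thread runs all completion handlers
    return 0;
}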
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
socket.async_read_some (
boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
boost::bind ( &RPC::on_data_recv, this,
startPos,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
} // RPC::data_recvW
void on_data_recv (size_t startPos,
boost::system::error_code ec,
std::size_t bytesRx) {
// TODO: Check ec
if (rawDataW[startPos + bytesRx-1] == ENDMARKER) {
process_and_write_rawdata_to_file
}
else {
// TODO: Error if startPos + bytesRx == 648*2
data_recv(startPos + bytesRx);
}
}
Notice, though, that the above code still has problems. The main one is that if the other side sends two messages quickly one after another, we could receive (in one async_read_some call) the full first message plus part of the second, so the first message's ENDMARKER would not be the last byte received. It is therefore not enough to test only whether the last received byte equals ENDMARKER.
I could go on and modify this function further (I think you get the idea of how), but you'd be better off using async_read_until, which is meant exactly for this purpose.
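To give you a starting point, here is a sketch of the async_read_until variant, assuming ENDMARKER is a single char and that you add a boost::asio::streambuf member (called buf_ here) to your class; it replaces your two member functions:

void data_recv() {
    boost::asio::async_read_until(
        socket, buf_, ENDMARKER,
        boost::bind(&RPC::on_data_recv, this,
                    boost::asio::placeholders::error,
                    boost::asio::placeholders::bytes_transferred));
}

void on_data_recv(const boost::system::error_code& ec, std::size_t bytesRx) {
    if (ec) { /* handle the error */ return; }
    // bytesRx counts the bytes up to and including the first ENDMARKER; anything
    // received beyond it stays in buf_ and is reused by the next async_read_until.
    std::istream is(&buf_);
    std::string message;
    std::getline(is, message, ENDMARKER);
    // process `message` / write it to file, then start the next read
    data_recv();
}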
Consider an operation with a standard asynchronous interface:
std::future<void> op();
Internally, op needs to perform a (variable) number of asynchronous operations to complete; the number of these operations is finite but unbounded, and depends on the results of the previous asynchronous operations.
Here's a (bad) attempt:
/* An object of this class will store the shared execution state in its members;
 * the asynchronous op is its member function. */
class shared
{
private:
    // shared state
private:
    // Actually does some operation (asynchronously).
    void do_op()
    {
        ...
        // Might need to launch more ops.
        if(...)
            launch_next_ops();
    }
public:
    // Launches the next ops.
    void launch_next_ops()
    {
        ...
        std::async(&shared::do_op, this);
    }
};

std::future<void> op()
{
    shared s;
    s.launch_next_ops();
    // Return some future of s used for the entire operation.
    ...
    // s destructed - delayed BOOM!
}
The problem, of course, is that s goes out of scope, so later methods will not work.
To amend this, here are the changes:
class shared : public std::enable_shared_from_this<shared>
{
private:
    /* The member now takes a shared pointer to itself; hopefully
     * this will keep it alive. */
    void do_op(std::shared_ptr<shared> p); // [*]

    void launch_next_ops()
    {
        ...
        std::async(&shared::do_op, this, shared_from_this());
    }
};

std::future<void> op()
{
    std::shared_ptr<shared> s{new shared{}};
    s->launch_next_ops();
    ...
}
(Aside from the weirdness of an object calling its own method with a shared pointer to itself,) the problem is with the line marked [*]. The compiler (correctly) warns that it's an unused parameter.
Of course, it's possible to fool it somehow, but is this an indication of a fundamental problem? Is there any chance the compiler will optimize away the argument and leave the method with a dead object? Is there a better alternative to this entire scheme? I don't find the resulting code the most intuitive.
No, the compiler will not optimize away the argument. Indeed, that's irrelevant as the lifetime extension comes from shared_from_this() being bound by decay-copy ([thread.decaycopy]) into the result of the call to std::async ([futures.async]/3).
If you want to avoid the warning of an unused argument, just leave it unnamed; compilers that warn on unused arguments will not warn on unused unnamed arguments.
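That is, the declaration marked [*] simply becomes:

// The parameter exists only to keep *this alive; leaving it unnamed avoids the warning.
void do_op(std::shared_ptr<shared> /* self */);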
An alternative is to make do_op static, meaning that you have to use its shared_ptr argument; this also addresses the duplication between this and shared_from_this. Since this is fairly cumbersome, you might want to use a lambda to convert shared_from_this to a this pointer:
std::async([](std::shared_ptr<shared> const& self){ self->do_op(); }, shared_from_this());
If you can use C++14 init-captures this becomes even simpler:
std::async([self = shared_from_this()]{ self->do_op(); });
1) std::call_once
A a;
std::once_flag once;
void f ( ) {
call_once ( once, [ ] { a = A {....}; } );
}
2) function-level static
A a;
void f ( ) {
static bool b = ( [ ] { a = A {....}; } ( ), true );
}
For your example usage, hmjd's answer fully explains that there is no difference (except for the additional global once_flag object needed in the call_once case). However, the call_once approach is more flexible, since the once_flag object isn't tied to a single scope. As an example, it could be a class member and be used by more than one function:
class X {
    std::once_flag once;
    void doSomething() {
        std::call_once(once, []{ /* init ...*/ });
        // ...
    }
    void doSomethingElse() {
        std::call_once(once, []{ /* alternative init ...*/ });
        // ...
    }
};
Now, depending on which member function is called first, the initialization code can be different (but the object will still only be initialized once).
So for simple cases a local static works nicely (if supported by your compiler) but there are some less common uses that might be easier to implement with call_once.
Both code snippets have the same behaviour, even in the presence of exceptions thrown during initialization.
This conclusion is based on (my interpretation of) the following quotes from the C++11 standard (draft N3337):
1. Section 6.7 "Declaration statement", clause 4, states:
The zero-initialization (8.5) of all block-scope variables with static storage duration (3.7.1) or thread storage duration (3.7.2) is performed before any other initialization takes place. Constant initialization (3.6.2) of a block-scope entity with static storage duration, if applicable, is performed before its block is first entered. An implementation is permitted to perform early initialization of other block-scope variables with static or thread storage duration under the same conditions that an implementation is permitted to statically initialize a variable with static or thread storage duration in namespace scope (3.6.2). Otherwise such a variable is initialized the first time control passes through its declaration; such a variable is considered initialized upon the completion of its initialization. If the initialization exits by throwing an exception, the initialization is not complete, so it will be tried again the next time control enters the declaration. If control enters the declaration concurrently while the variable is being initialized, the concurrent execution shall wait for completion of the initialization.88 If control re-enters the declaration recursively while the variable is being initialized, the behavior is undefined.
This means that in:
void f ( ) {
static bool b = ( [ ] { a = A {....}; } ( ), true );
}
b is guaranteed to be initialized once only, meaning the lambda is executed (successfully) once only, meaning a = A {...}; is executed (successfully) once only.
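A small self-contained illustration of the "tried again" wording above; init() is a made-up initializer that fails on its first call:

#include <stdexcept>

int init() {
    static int attempts = 0;
    if (++attempts == 1) throw std::runtime_error("first attempt fails");
    return 42;
}

int& value() {
    static int v = init();  // if init() throws, the initialization is retried on the next call
    return v;
}

int main() {
    try { value(); } catch (const std::exception&) { /* first call: init() throws */ }
    int x = value();        // second call: v is initialized exactly once, x == 42
    (void)x;
}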
2. Section 30.4.4.2 "Function call_once" states:
An execution of call_once that does not call its func is a passive execution. An execution of call_once that calls its func is an active execution. An active execution shall call INVOKE (DECAY_COPY ( std::forward(func)), DECAY_COPY (std::forward(args))...). If such a call to func throws an exception the execution is exceptional, otherwise it is returning. An exceptional execution shall propagate the exception to the caller of call_once. Among all executions of call_once for any given once_flag: at most one shall be a returning execution; if there is a returning execution, it shall be the last active execution; and there are passive executions only if there is a returning execution.
This means that in:
void f ( ) {
call_once ( once, [ ] { a = A {....}; } );
}
the lambda argument to std::call_once is executed (successfully) once only, meaning a = A {...}; is executed (successfully) once only.
In both cases a = A{...}; is executed (successfully) once only.
I've been trying to start writing a plug-in for a program called "EuroScope" for quite some time and I still can't do anything. I even read a C++ book and nothing; it's too difficult to start.
The question I'm going to ask is a little bit specific and it's going to be difficult to explain, but I'm tired of trying to solve this on my own, so here it comes.
I have a class that I imported, with a bunch of function prototypes, in the header called "EuroScopePlugIn.h".
My main .cpp is this:
void CPythonPlugInScreen::meu()
{
//loop over the planes
EuroScopePlugIn::CAircraft ac;
EuroScopePlugIn::CAircraftFlightPlan acfp;
CString str;
CPythonPlugIn object;
for(ac=GetPlugIn()->AircraftSelectFirst();
ac.IsValid();
ac=GetPlugIn()->AircraftSelectNext(ac))
{
EuroScopePlugIn::CAircraftPositionData acpos=ac.GetPosition();
const char *c=ac.GetCallsign();
object.printtofile_simple_char(*c);
object.printtofile_simple_int(ac.GetState());
};
object.printtofile_simple_int(ac.GetVerticalSpeed());
object.printtofile_simple_int(acfp.GetFinalAltitude());
cout<<acfp.GetAlternate();
}
the "printtofile_simple_int" and "printtofile_simple_char" are defined is the class CPythonPlugIn like this:
void printtofile_simple_int(int n){
    ofstream textfile;
    textfile.open("FP_simple_int.txt");
    textfile << n;
    textfile.close();
}
So I open the program and load the .dll I created with Build->Solution, and it does nothing: the .txt files aren't even created, and even the cout produces nothing.
I will give you some of the prototype info from the header file "EuroScopePlugIn.h" in case you need it to understand my micro-program. If you need more, ask me and I'll put it here.
//---GetPlugIn-----------------------------------------------------
inline CPlugIn * GetPlugIn ( void )
{
return m_pPlugIn ;
} ;
&
CAircraft AircraftSelectFirst ( void ) const ;
//-----------------------------------------------------------------
// Return :
// An aircraft object instance.
//
// Remark:
// This instance is only valid inside the block you are querying.
// Do not save it to a static place or into a member variables.
// Subsequent use of an invalid extracted route reference may
// cause ES to crash.
//
// Description :
// It selects the first AC in the list.
//-----------------------------------------------------------------
&
int GetFinalAltitude ( void ) const ;
//-----------------------------------------------------------------
// Return :
// The final requested altitude.
//-----------------------------------------------------------------
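To illustrate that remark with my own code (an untested sketch that only uses the functions already shown above): every use of the CAircraft instance stays inside the loop that queries it, and nothing is stored for use after the loop.

void CPythonPlugInScreen::meu()
{
    CPythonPlugIn object;
    for (EuroScopePlugIn::CAircraft ac = GetPlugIn()->AircraftSelectFirst();
         ac.IsValid();
         ac = GetPlugIn()->AircraftSelectNext(ac))
    {
        // The instance is only valid inside this block, so do all the work here.
        object.printtofile_simple_char(*ac.GetCallsign());  // first character of the callsign
        object.printtofile_simple_int(ac.GetState());
        object.printtofile_simple_int(ac.GetVerticalSpeed());
    }
}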
Please, I need help getting started with plug-in development; from that point on, with trial and error, I'll be on my way. I'm just finding it extremely hard to start...
Thank you very much for the help.