I know about TCP_SYN_RECV, but what is the meaning of TCP_NEW_SYN_RECV? What is the difference between them?
https://github.com/torvalds/linux/blob/5924bbecd0267d87c24110cbe2041b5075173a25/include/net/tcp_states.h
enum {
    TCP_ESTABLISHED = 1,
    TCP_SYN_SENT,
    TCP_SYN_RECV,
    TCP_FIN_WAIT1,
    TCP_FIN_WAIT2,
    TCP_TIME_WAIT,
    TCP_CLOSE,
    TCP_CLOSE_WAIT,
    TCP_LAST_ACK,
    TCP_LISTEN,
    TCP_CLOSING,    /* Now a valid state */
    TCP_NEW_SYN_RECV,
    TCP_MAX_STATES  /* Leave at the end! */
};
Also, I saw the following code: "sk->sk_state == TCP_NEW_SYN_RECV". Why not use "sk->sk_state == TCP_SYN_RECV" instead?
https://github.com/torvalds/linux/blob/8fa3b6f9392bf6d90cb7b908e07bd90166639f0a/net/ipv4/tcp_ipv4.c#L16485
if (sk->sk_state == TCP_NEW_SYN_RECV) {
    struct request_sock *req = inet_reqsk(sk);
    struct sock *nsk;
I've found this:
TCP_SYN_RECV state is currently used by fast open sockets.
Initial TCP requests (the pseudo sockets created when a SYN is received)
are not yet associated to a state. They are attached to their parent,
and the parent is in TCP_LISTEN state.
This commit adds TCP_NEW_SYN_RECV state, so that we can convert
TCP stack to a different scheme gradually.
Source (the author's commit): http://git.kernel.org/linus/10feb428a504
I have a piece of data
type data struct {
    // all good data here
    ...
}
This data is owned by a manager and used by other threads for reading only. The manager needs to periodically update the data. How do I design the threading model for this? I can think of two options:
1.
type manager struct {
    // acquire the read lock when other threads read the data;
    // acquire the write lock when the manager wants to update.
    lock sync.RWMutex
    // pointer to the data
    p *data
}
2.
type manager struct {
    // copy the pointer when other threads want to use the data.
    // When the manager updates, just change p to point to the new data.
    p *data
}
Does the second approach work? It seems I don't need any lock. If other threads get a pointer to the old data, it should be fine even if the manager updates the original pointer. Since Go is garbage-collected, the old data will be released automatically once all other threads have finished reading it. Am I correct?
Your first option is fine and perhaps the simplest to implement. However, it could lead to poor performance with many readers, as the manager could struggle to obtain the write lock.
As the comments on your question have stated, your second option (as-is) can cause a race condition and lead to unpredictable behaviour.
You could implement your second option by using atomic.Value. This would allow you to store the pointer to some data struct and atomically update this for the next readers to use. For example:
import "sync/atomic"

// Data shared with readers
type data struct {
    // all the fields
}

// Manager
type manager struct {
    v atomic.Value
}

// Method used by readers to obtain a fresh copy of data to
// work with, e.g. inside a loop
func (m *manager) Data() *data {
    return m.v.Load().(*data)
}

// Internal method called to set new data for readers
func (m *manager) update() {
    d := &data{
        // ... set values here
    }
    m.v.Store(d)
}
I am writing a multi-threaded server that handles async reads from many TCP sockets. Here is the section of code that bothers me.
void data_recv (void) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)),
        boost::bind ( &RPC::on_data_recv, this,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recv

void on_data_recv (boost::system::error_code ec, std::size_t bytesRx) {
    if ( rawDataW[bytesRx-1] == ENDMARKER ) { // <-- this code is fine
        process_and_write_rawdata_to_file
    }
    else {
        read_socket_until_endmarker // <-- HELP REQUIRED!!
        process_and_write_rawdata_to_file
    }
}
Nearly always, async_read_some reads in data including the end marker, so it works fine. Rarely, the end marker's arrival is delayed in the stream, and that is when my program fails. I think it fails because I have not understood how boost::bind works.
My first question:
I am confused by this Boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
My second question:
In the on_data_recv() method, how do I read data from the same socket that was read in the data_recv() method? In other words, how do I pass the socket as an argument from the calling method to the handler, given that the handler is executed in another thread? Any help in the form of a few lines of code that can fit into my "read_socket_until_endmarker" would be appreciated.
My first question: I am confused by this Boost tutorial example, in which "this" does not appear in the handler declaration. (Please see the code of start_accept() in the example.) How does this work? Does the compiler ignore the "this"?
In the example (and I'm assuming this holds for your functions as well), start_accept() is a member function. The bind function is conveniently designed such that when you pass a pointer to a member function (formed with &) as its first argument, it treats its second argument as the object on which to invoke that member function.
So while code like this:
void foo(int x) { ... }
bind(foo, 3)();
Is equivalent to just calling foo(3)
Code like this:
struct Bar { void foo(int x); };
Bar bar;
bind(&Bar::foo, &bar, 3)(); // <--- notice the &Bar:: before foo
Would be equivalent to calling bar.foo(3).
And thus as per your example
boost::bind ( &RPC::on_data_recv, this, // <--- notice & again
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred)
When this object is invoked inside Asio, it will be equivalent to calling this->on_data_recv(error, size). Check out this link for more info.
For the second part, it is not clear to me how you're working with multiple threads. Do you run io_service.run() from more than one thread (possible, but I think it is beyond your experience level)? It might be the case that you're confusing async I/O with multithreading. I'm going to assume that is the case, and if you correct me I'll change my answer.
The usual and preferred starting point is to have just one thread running the io_service.run() function. Don't worry, this will allow you to handle many sockets asynchronously.
If that is the case, your two functions could easily be modified as such:
void data_recv (size_t startPos = 0) {
    socket.async_read_some (
        boost::asio::buffer(rawDataW, size_t(648*2)) + startPos,
        boost::bind ( &RPC::on_data_recv, this,
            startPos,
            boost::asio::placeholders::error,
            boost::asio::placeholders::bytes_transferred));
} // RPC::data_recv

void on_data_recv (size_t startPos,
                   boost::system::error_code ec,
                   std::size_t bytesRx) {
    // TODO: Check ec
    if (rawDataW[startPos + bytesRx - 1] == ENDMARKER) {
        process_and_write_rawdata_to_file
    }
    else {
        // TODO: Error if startPos + bytesRx == 648*2
        data_recv(startPos + bytesRx);
    }
}
Note, though, that the above code still has a problem: if the other side sent two messages in quick succession, we could receive (in one async_read_some call) the full first message plus part of the second, and thus miss the ENDMARKER of the first one. So it is not enough to test only whether the last received byte equals ENDMARKER.
I could go on and modify this function further (I think you get the idea of how), but you'd be better off using async_read_until, which is meant for exactly this purpose.
I have been reading several AUTOSAR documents. For now, my concern is just developing a software component. I have two software component designs; take a look at the two designs described below.
Explanation:
Design 1: I get data from ports 1 and 2. Each port corresponds to a RunnableEntity that runs when new data arrives. That RunnableEntity then writes the data to an InterRunnableVariable. The main RunnableEntity, RunnableEntity 1, processes the InterRunnableVariables to produce an output.
Design 2: The data arrives freely at the ports and waits in a buffer to be processed. A single RunnableEntity then processes the data with the help of common global variables (the global variables serve the same purpose as the InterRunnableVariables).
My questions are:
Will designs 1 and 2 work?
If both designs work, which one do you prefer with respect to processing time, implementation effort, etc.?
Is the code right? How should I handle the events and the InterRunnableVariables?
Thank you for your help.
====================Adding Code After Comment========================
Design 1
/* Runnable Entity 1 */
/* Event: TimingEvent, 25 ms */
void re1(void){
    data_output out;
    irv irv1 = Rte_IrvIread_re1_irv1();
    irv irv2 = Rte_IrvIread_re1_irv2();
    irv irv3 = Rte_IrvIread_re1_irv3();
    out = DataProcess(&irv1, &irv2, &irv3);
    Rte_Write_re1_port3_out(out);
}
/* Runnable Entity 2 */
/* Event: DataReceiveErrorEvent on port1 */
void re2(void){
    irv irv2 = Rte_IrvIread_re1_irv2();
    modify(&irv2);
    Rte_IrvIwrite_re1_irv2(irv2);
}
/* Runnable Entity 3 */
/* Event: DataReceiveEvent on port1 */
void re3(void){
    data_input1 in;
    Std_ReturnType status;
    irv irv1 = Rte_IrvIread_re1_irv1();
    status = Rte_Receive_re1_port1_input(&in);
    if (status == RTE_E_OK) {
        modify(&irv1, in);
        Rte_IrvIwrite_re1_irv1(irv1);
    }
}
/* Runnable Entity 4 */
/* Event: DataReceiveEvent on port2 */
void re4(void){
    data_input2 in;
    Std_ReturnType status;
    irv irv3 = Rte_IrvIread_re1_irv3();
    status = Rte_Receive_re1_port2_input2(&in);
    if (status == RTE_E_OK) {
        modify(&irv3, in);
        Rte_IrvIwrite_re1_irv3(irv3);
    }
}
Design 2
/* Global variables */
global_variable1 gvar1; /* Equivalent to InterRunnableVariable 1 in Design 1 */
global_variable2 gvar2; /* Equivalent to InterRunnableVariable 2 in Design 1 */
global_variable3 gvar3; /* Equivalent to InterRunnableVariable 3 in Design 1 */
/* Runnable Entity 1 */
/* Event: TimingEvent, 25 ms */
void re1(void){
    data_output out;
    getData1();
    getData2();
    out = GetOutputWithGlobalVariable();
    Rte_Write_re1_port3_out(out);
}
/* Get Data 1 */
void getData1(){
    Std_ReturnType status; /* uint8 */
    data_input1 in;
    do {
        status = Rte_Receive_re1_port1_input1(&in);
        if (status == RTE_E_OK) {
            modifyGlobalVariable(in);
        }
    } while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
    if (status != RTE_E_LOST_DATA) {
        modifyGlobalVariableWhenError();
    }
    return;
}
/* Get Data 2 */
void getData2(){
    Std_ReturnType status; /* uint8 */
    data_input2 in;
    do {
        status = Rte_Receive_re1_port2_input2(&in);
        if (status == RTE_E_OK) {
            modifyGlobalVariable2(in);
        }
    } while (status != RTE_E_NO_DATA && status != RTE_E_LOST_DATA);
    return;
}
I think both solutions are possible. The main difference is that in the first design the generated RTE will manage the global buffers, whereas in the second design you have to take care of the buffers yourself.
Especially if you have multiple runnables accessing the same buffer, the RTE will either generate interrupt locks to protect data consistency, or it will optimize the locks out if the task contexts in which the RunnableEntities run cannot interrupt each other.
Even if you have only one RunnableEntity, as shown in the second design, it might happen that the TimingEvent and the DataReceivedEvent both activate the RunnableEntity (although I don't understand why you left out the DataReceivedEvent in the second design). In this case the RunnableEntity runs in two different contexts accessing the same data.
To make it short: my proposal is to use inter-runnable variables and let the RTE handle the data consistency, initialization, etc.
It might be a little bit more effort to create the software component description, but then you just need to use the generated IrvRead/IrvWrite functions and you are done.
I actually prefer the first one here.
The second one depends a bit on your SWC description, since that is where the port data access is specified. This definition determines whether the RTE creates a blocking or a non-blocking Rte_Receive.
[SWS_Rte_01288] A non-blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics. (SRS_Rte_00051)
[SWS_Rte_07638] The RTE Generator shall reject configurations where a VariableDataPrototype with ‘event’ semantics is referenced by a VariableAccess in the dataReceivePointByValue role. (SRS_Rte_00018)
[SWS_Rte_01290] A blocking Rte_Receive API shall be generated if a VariableAccess in the dataReceivePointByArgument role references a required VariableDataPrototype with ‘event’ semantics that is, in turn, referenced by a DataReceivedEvent and the DataReceivedEvent is referenced by a WaitPoint. (SRS_Rte_00051)
On the other hand, I'm not sure what happens with your blocking Rte_Receive in combination with your TimingEvent-based RunnableEntity activation.
Also consider the following:
RTE_E_LOST_DATA actually means you lost data because incoming data overflowed the queue (Rte_Receive only exists with swImplPolicy = queued; if swImplPolicy != queued you get Rte_Read instead). This is not an explicit Std_ReturnType value, but a flag added to that return value (an OverlayedError).
RTE_E_TIMEOUT would be for blocking Rte_Receive
RTE_E_NO_DATA would be for non-blocking Rte_Receive
You should then check it as follows:
Std_ReturnType status;
status = Rte_Receive_..(<instance>, <parameters>);
if (Rte_HasOverlayedError(status)) {
    /* Handle e.g. RTE_E_LOST_DATA */
}
/* not with Rte_Receive - if (Rte_IsInfrastructureError(status)) { } */
else {
    /* handle application error with error code status */
    status = Rte_ApplicationError(status);
}
I have some code which looks like:
static int devname_read(struct cdev *dev, struct uio *uio, int ioflag)
{
    int error = modify_state();
    return (error);
}
The issue here is that modify_state() operates on global state, when what it really should operate on is per-open(2) state. In other words, readers should not conflict with each other, and nothing should persist once the device is close(2)d.
How can I associate state with the file-descriptor or related identifier?
You probably want to use cdevpriv; see http://www.freebsd.org/cgi/man.cgi?devfs_set_cdevpriv.
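For illustration, here is a minimal sketch of how that might look, assuming modify_state() can be reworked to take the per-open state as a parameter. The devname_state struct, its counter field, the devname_open handler, and the use of the M_TEMP malloc type are illustrative placeholders, not parts of the original driver:
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/conf.h>
#include <sys/malloc.h>
#include <sys/uio.h>

/* Hypothetical per-open state; the fields are placeholders. */
struct devname_state {
    int counter;
};

static void
devname_state_dtor(void *data)
{
    /* Runs automatically when the file descriptor is closed. */
    free(data, M_TEMP);
}

static int
devname_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
{
    struct devname_state *st;
    int error;

    st = malloc(sizeof(*st), M_TEMP, M_WAITOK | M_ZERO);
    /* Associate the state with this open file description. */
    error = devfs_set_cdevpriv(st, devname_state_dtor);
    if (error != 0)
        free(st, M_TEMP);
    return (error);
}

static int
devname_read(struct cdev *dev, struct uio *uio, int ioflag)
{
    struct devname_state *st;
    int error;

    error = devfs_get_cdevpriv((void **)&st);
    if (error != 0)
        return (error);
    /* Assumed change: modify_state() now works on per-open state. */
    error = modify_state(st);
    return (error);
}
The destructor registered with devfs_set_cdevpriv() is invoked automatically when the descriptor is closed, which gives you the "nothing persists after close(2)" behaviour for free.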
I need to read certain statistics from the iw_statistics structure. Here's the code:
struct net_device *dev;
struct iw_statistics *wi_stats;
dev = first_net_device(&init_net);
while (dev)
{
    if (strncmp(dev->name, "wlan", 4) == 0)
    {
        if (dev->wireless_handlers->get_wireless_stats(dev) != NULL) // <--- here's where the code crashes.
        {
            wi_stats = dev->wireless_handlers->get_wireless_stats(dev);
            printk(KERN_INFO "wi_stats = dev->wireless_handlers->get_wireless_stats(dev); worked!!! :D\n");
        }
    }
}
I'm working on Linux kernel 2.6.35 and I'm writing a kernel module. What am I doing wrong here?
Looks like the wireless_handlers struct is NULL... Just because a net device has its name field filled in doesn't mean it's configured.
This is where wireless_handlers is declared (in struct net_device):
#ifdef CONFIG_WIRELESS_EXT
/* List of functions to handle Wireless Extensions (instead of ioctl).
* See <net/iw_handler.h> for details. Jean II */
const struct iw_handler_def * wireless_handlers;
/* Instance data managed by the core of Wireless Extensions. */
struct iw_public_data * wireless_data;
#endif
You should check the config option CONFIG_WIRELESS_EXT: if it's not set, the wireless_handlers pointer is never set, so you'll be dereferencing a NULL pointer and your module will crash.
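As a rough sketch, a defensive version of the loop might look like the following. Note that the original loop never advances dev, so the next_net_device() call, the NULL checks, and the log message are assumptions about the intended behaviour rather than code from the question; locking of the device list is also omitted for brevity:
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/netdevice.h>
#include <net/iw_handler.h>

struct net_device *dev;
struct iw_statistics *wi_stats;

for (dev = first_net_device(&init_net); dev != NULL; dev = next_net_device(dev)) {
    if (strncmp(dev->name, "wlan", 4) != 0)
        continue;
    /* Skip devices without wireless extensions or without a stats handler. */
    if (dev->wireless_handlers == NULL ||
        dev->wireless_handlers->get_wireless_stats == NULL)
        continue;
    wi_stats = dev->wireless_handlers->get_wireless_stats(dev);
    if (wi_stats != NULL)
        printk(KERN_INFO "got iw_statistics for %s\n", dev->name);
}
With the NULL checks in place, devices whose drivers do not register wireless extension handlers are simply skipped instead of causing an oops.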
You should check that dev->wireless_handlers is not NULL. Can you paste the actual code snippet? What is the error you get?