I am writing a simple client-server app using AF_UNIX sockets, but my code does not work. When I try to send to the socket I get a "Transport endpoint is not connected" error. Any advice?
SERVER:
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));
addr.sun_family = AF_UNIX;
strcpy(addr.sun_path + 1, "example");
addr.sun_path[0] = 0;

int mysock = socket(AF_UNIX, SOCK_DGRAM, 0);
if ((bind(mysock, (struct sockaddr *)&addr, sizeof(addr))) < 0)
{
    perror("bind() error");
    return false;
}

if (send(mysock, path, sizeof(path), 0) < 0)
{
    perror("send");
}
CLIENT:
struct sockaddr_un addr;
memset(&addr, 0, sizeof(addr));

int mysock = socket(AF_UNIX, SOCK_DGRAM, 0);
if (mysock < 0)
{
    perror("socket() error");
    return false;
}

addr.sun_family = AF_UNIX;
strcpy(addr.sun_path + 1, "example");
addr.sun_path[0] = 0;
if ((connect(mysock, (struct sockaddr *)&addr, sizeof(addr))) < 0)
{
    perror("connects() error");
    return false;
}

recv(mysock, buf, sizeof(buf), 0);
printf("%s\n", buf);
You haven't connected the server side. Binding a socket to an address establishes the socket's local address. However, immediately after binding the socket you do a send, but you haven't specified a destination, i.e. where is the data supposed to go?
Furthermore, Unix domain datagram sockets are different from other datagram sockets in that both sides need to establish a local address before bidirectional data transfer can occur.
So each side needs to create a socket and bind it to an address of its choosing. The client side can then either connect to the server's address (which permanently establishes the destination address), or it can use sendto to specify the destination address for each buffer.
The server will typically use recvfrom to receive the data together with the client's address, then use sendto to return the response to the client.
For the sake of clarity, here is an example in Python 3. Server code:
import socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.bind(b'\x00server') # Our address
data, addr = sock.recvfrom(1024)
print("Data:", data)
print("Client Address:", addr)
sock.sendto(data, addr)
Client code:
import socket
sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
sock.bind(b'\x00client') # Our address
sock.connect(b'\x00server') # Server's address
data = b"Hello"
sock.send(data)
print("Sent", data)
rdata, saddr = sock.recvfrom(1024)
print("ReturnedData:", rdata)
print("ServerAddr returned:", saddr)
Regarding the "Transport endpoint is not connected" error: you can't use send() on an unconnected datagram socket. You need to either connect() it or use sendto(). This is all documented.
NB What does 'Linux abstract socket' mean? I don't see anything abstract about your code. You are also lacking error checking on recv(), which needs to be recvfrom() if the socket is unconnected.
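For instance, a rough sketch of the two options for the server's reply, assuming peer/peerlen hold the client's address as returned by recvfrom and buf/len are the data to send (these names are placeholders, not from the question's code):

/* Option 1: establish a default destination once, then plain send() works */
if (connect(mysock, (struct sockaddr *)&peer, peerlen) < 0)
    perror("connect");
if (send(mysock, buf, len, 0) < 0)
    perror("send");

/* Option 2: name the destination explicitly on every datagram */
if (sendto(mysock, buf, len, 0, (struct sockaddr *)&peer, peerlen) < 0)
    perror("sendto");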
I have written a Linux application program that receives UDP packets transmitted from a desktop with a fixed and known IP address on the network. I am using a raw socket to receive packets on my system and filter the received packets based on the source address.
The problem I am facing is that the program runs fine for some time and I get all the required packets, but after a couple of hours the application stops getting any packets. If I run the command
tcpdump -i eth0 src 192.168.20.48
on my system, I see that the system continues to receive the expected packets, but I am not sure what is causing my program to stop receiving them.
Below is the code snippet used to open a raw socket, receive packets, and filter out the UDP packets transmitted from the known IP address.
int main()
{
    int sockfd;
    int one = 1;
    struct timeval tv;
    socklen_t len;
    int bytes;
    unsigned char tsptr[2048];
    struct sockaddr_in cliaddr;
    struct iphdr *iph;
    int result = 0;
    char source_add[50];
    char expected_source_add[50];

    len = sizeof(struct sockaddr_in);

    // Creating socket file descriptor
    if ((sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_UDP)) < 0) {
        BRH_PERROR("socket creation failed");
        return 1;
    }

    tv.tv_sec = 30;
    tv.tv_usec = 0;
    setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR|SO_REUSEPORT, &one, sizeof(one));
    setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, (char *)&tv, sizeof(tv));

    strcpy(expected_source_add, "192.168.20.48");

    while (1) {
        /* Read fixed data count from socket */
        bytes = recvfrom(sockfd, tsptr, 1500, MSG_WAITALL, (struct sockaddr *)&cliaddr, &len);
        iph = (struct iphdr *)tsptr;
        // get only UDP packets
        if (iph->protocol != 17) continue;
        strcpy(source_add, inet_ntoa(cliaddr.sin_addr));
        result = strcmp(expected_source_add, source_add);
        /* receive data from expected IP address only */
        if (result == 0) {
            // Consume the packet
        }
    }
    return 0;
}
Any clue as to why the packet receive stops in my application, even though tcpdump shows that packets are being received on the interface, would be helpful.
From the code you have posted I cannot see anything that would cause the problem you describe, so I think you should do something like the following:
1. Use wireshark or tcpdump to check whether the NIC is still receiving packets successfully.
2. Beyond this program, do you use any buffer or message queue, and are they working correctly?
3. Use tools to check whether there is a memory leak.
4. Write a log at every step, especially around recvfrom and strcmp, as in the sketch below.
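For example, a rough sketch of point 4 around the receive call (a fragment to drop into your loop; the errno checks assume #include <errno.h>):

/* Sketch only: reset the value-result length and log every outcome of
 * recvfrom() instead of using the result unconditionally. */
len = sizeof(cliaddr);   /* recvfrom may shrink this; reset it each iteration */
bytes = recvfrom(sockfd, tsptr, 1500, MSG_WAITALL,
                 (struct sockaddr *)&cliaddr, &len);
if (bytes < 0) {
    if (errno == EAGAIN || errno == EWOULDBLOCK) {
        /* the 30-second SO_RCVTIMEO expired with nothing received */
        fprintf(stderr, "recvfrom timed out\n");
        continue;
    }
    perror("recvfrom");   /* log the real error before deciding what to do */
    continue;
}
fprintf(stderr, "recvfrom: %d bytes from %s\n", bytes, inet_ntoa(cliaddr.sin_addr));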
I have a server that collects TCP data from different clients on a certain port. I have a scenario where, whenever a client creates a TCP connection and then remains idle for more than, let's say, 30 minutes, I need to close that connection.
I have learned about TCP keep-alive for detecting whether the peer is dead or not, and most of the examples I found use it on the client side. Can it similarly be used on the server side to poll whether a connection is still active?
Furthermore, on Linux there is sysctl.conf, a configuration file where these values can be edited, but that seems to tear down every TCP connection after a certain period of inactivity. What I need is for particular connections from a device to be destroyed after a certain idle time, not for every connection on the port to be closed.
I am using Ubuntu to build the server that collects the TCP connections. Can I use TCP keep-alives in the server code to find an inactive client and close that particular client? Or is there some other way to implement such a feature on the server side?
And while going through the web I found mention of
getsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &optval, &optlen)
but this getsockopt is for the main TCP connection, and setting it here seems to apply to the whole connection to the server.
However, what I need is per specific client. I have the event server code below; client_fd is accepted there, and I need to close this client_fd if the next data from this client is not received within a certain time.
void event_server(EV_P_ struct ev_io *w, int revents) {
    int flags;
    struct sockaddr_in6 addr;
    socklen_t len = sizeof(addr);
    int client_fd;

    // since ev_io is the first member,
    // watcher `w` has the address of the
    // start of the _sock_ev_serv struct
    struct _sock_ev_serv* server = (struct _sock_ev_serv*) w;
    server->socket_len = len;

    for (;;) {
        if ((client_fd = accept(server->fd, (struct sockaddr*) &addr, &len)) < 0) {
            switch (errno) {
                case EINTR:
                case EAGAIN:
                    break;
                default:
                    zlog_info(_c, "Error accepting connection from client \n");
                    //perror("accept");
            }
            break;
        }

        char ip[INET6_ADDRSTRLEN];
        inet_ntop(AF_INET6, &addr.sin6_addr, ip, INET6_ADDRSTRLEN);
        char *dev_ip = get_ip(ip);
        server->device_ip = dev_ip;
        zlog_debug(_c, "The obtained ip is %s and dev_ip is %s", ip, dev_ip);

        /** check for the cidr address for config_ip **/
        char *config_ip;
        config_ip = get_config_ip(dev_ip, _client_map);
        zlog_debug(_c, "The _config ip for dev_ip:%s is :%s", dev_ip, config_ip);
        if (config_ip == NULL) {
            zlog_debug(_c, "Connection attempted from unregistered IP: %s", dev_ip);
            zlog_info(_c, "Connection attempted from unregistered IP : %s", dev_ip);
            AFREE(server->device_ip);
            continue;
        }

        json_t *dev_config;
        dev_config = get_json_object_from_json(_client_map, config_ip);
        if (dev_config == NULL) {
            zlog_debug(_c, "Connection attempted from unregistered IP: %s", dev_ip);
            zlog_info(_c, "Connection attempted from unregistered IP : %s", dev_ip);
            AFREE(server->device_ip);
            continue;
        }

        if ((flags = fcntl(client_fd, F_GETFL, 0)) < 0 || fcntl(client_fd, F_SETFL, flags | O_NONBLOCK) < 0) {
            zlog_error(_c, "fcntl(2)");
        }

        struct _sock_ev_client* client = malloc(sizeof(struct _sock_ev_client));
        client->device_ip = dev_ip;
        client->server = server;
        client->fd = client_fd;
        // ev_io *watcher = (ev_io*)calloc(1, sizeof(ev_io));
        ev_io_init(&client->io, event_client, client_fd, EV_READ);
        ev_io_start(EV_DEFAULT, &client->io);
    }
}
TCP keep-alives are not there to detect idle clients but to detect dead connections, i.e. a client that crashed without closing the connection, a dead line, etc. If the client is merely idle but not dead, the connection is still open: any keep-alive probe (which is an empty packet) sent to the client will be answered with an ACK, so keep-alive will not report the connection as dead.
To detect idle clients, instead use either a receive timeout (SO_RCVTIMEO) or a timeout with select, poll or similar functions.
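For example, a minimal sketch of the SO_RCVTIMEO approach on a blocking accepted socket, using the 30-minute threshold from the question (buf is a placeholder receive buffer; with a non-blocking libev loop you would more likely arm a per-client timer, which falls under the "similar functions" above):

/* Sketch only: make a blocking recv() on this particular client give up
 * after 30 minutes of inactivity. */
struct timeval idle = { .tv_sec = 30 * 60, .tv_usec = 0 };
if (setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &idle, sizeof(idle)) < 0)
    perror("setsockopt(SO_RCVTIMEO)");

ssize_t n = recv(client_fd, buf, sizeof(buf), 0);
if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
    /* no data for 30 minutes: close only this idle client */
    close(client_fd);
}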
I have implemented the following mechanism to detect idleness based on socket I/O activity.
My socket is wrapped in a class, something like UserConnection. This class has one extra attribute, lastActivityTime. Whenever I get a read or a write on this socket, I update this attribute.
I also have a background "reaper" thread, which iterates over all UserConnection objects and checks lastActivityTime. If the current time minus lastActivityTime is greater than a configured threshold, such as 15 seconds, I close the idle connection.
In your case, while iterating over the UserConnection objects, you can apply your 30-minute inactivity threshold to the client_fd in question and close the idle connection. A rough sketch follows.
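A rough C version of that idea, assuming a hypothetical user_connection struct and a reap_idle function driven by a periodic timer or background thread (the names are illustrative, not taken from the question's code):

#include <stddef.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical wrapper: one per accepted client. */
struct user_connection {
    int    fd;               /* -1 when the slot is unused */
    time_t last_activity;    /* updated on every successful read/write */
};

/* Call this periodically from a reaper thread or timer. */
static void reap_idle(struct user_connection *conns, size_t n, time_t max_idle)
{
    time_t now = time(NULL);
    for (size_t i = 0; i < n; i++) {
        if (conns[i].fd >= 0 && now - conns[i].last_activity > max_idle) {
            close(conns[i].fd);   /* close only this idle client */
            conns[i].fd = -1;
        }
    }
}

Calling reap_idle(clients, nclients, 30 * 60) would implement the 30-minute rule from the question.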
I am writing a Linux kernel module that redirects to the localhost webserver a packet that was originally being forwarded through this machine (which is acting as a bridge). It also redirects the reply back to the client. The client is oblivious to the redirection. So there are two parts:
1. All packets forwarded through the bridge to some webserver outside are redirected to the local webserver.
2. The output of the localhost webserver is channelled back to the original client.
I am able to do the second part through the nf_hook NF_INET_LOCAL_OUT:
unsigned int snoop_hook_reply( unsigned int hooknum, struct sk_buff *skb,
                               const struct net_device *in, const struct net_device *out,
                               int(*okfn)( struct sk_buff * ) )
{
    int offset, len;
    struct ethhdr *ethh;
    struct iphdr *iph;
    struct tcphdr *tcph;
    bool flag = false;
    struct net_device *eth1_dev, *lo_dev;

    if (!skb) return NF_ACCEPT;

    iph = ip_hdr(skb);
    if (!iph) return NF_ACCEPT;

    skb_set_transport_header(skb, iph->ihl * 4);
    tcph = tcp_hdr(skb);

    /* skip lo packets */
    if (iph->saddr == iph->daddr) return NF_ACCEPT;

    if (tcph->dest == htons(80))
        flag = true;
    if (flag != true)
        return NF_ACCEPT;

    // correct the IP checksum
    iph->check = 0;
    ip_send_check(iph);

    // correct the TCP checksum
    offset = skb_transport_offset(skb);
    len = skb->len - offset;
    tcph->check = 0;
    if (skb->len > 60) {
        tcph->check = csum_tcpudp_magic((iph->saddr), (iph->daddr), len, IPPROTO_TCP, csum_partial((unsigned char *)tcph, len, 0));
    }
    else {
        tcph->check = ~csum_tcpudp_magic((iph->saddr), (iph->daddr), len, IPPROTO_TCP, 0);
    }

    // send to dev
    eth1_dev = dev_get_by_name(&init_net, "eth1");
    lo_dev = dev_get_by_name(&init_net, "lo");
    skb->dev = eth1_dev;

    ethh = (struct ethhdr *) skb_push(skb, ETH_HLEN);
    skb_reset_mac_header(skb);
    skb->protocol = ethh->h_proto = htons(ETH_P_IP);
    memcpy(ethh->h_source, eth1_dev->dev_addr, ETH_ALEN);
    memcpy(ethh->h_dest, d_mac, ETH_ALEN); // d_mac is the MAC of the gateway

    dev_queue_xmit(skb);
    return NF_STOLEN;
}
The above code works perfectly for me. One issue is that later on I will mangle the packet, so I will probably need to create a new sk_buff.
I am not able to do the first part through NF_INET_PRE_ROUTING: I cannot push the packet/sk_buff up to the webserver process through the TCP/IP stack. I tried using dev_queue_xmit() with skb->dev set to both eth1 and lo. I can see the packets hitting lo or eth1 in tcpdump, but they never reach the localhost webserver. Can anyone help me with this, or point me to a similar answered question? I believe that instead of dev_queue_xmit() I need to call some receive-side function. Also, when packets arrive at NF_INET_PRE_ROUTING the Ethernet headers are already there, so I am not building them.
I have already accomplished the above tasks in a variety of ways, first using raw sockets, then using nf_queue; now I want to see the performance of this method.
Thanks
If you want to receive the packet locally, you cannot call dev_queue_xmit() on eth1, as the packet will simply be sent out. You probably need to call netif_rx() after pointing skb->dev at eth1/lo.
One more point: if the destination IP is not your local host's IP, then you need to prevent the packet from being routed again, otherwise there is no point to your interception.
To achieve this, either modify the packet's destination IP to the eth1/lo IP, or
fool the IP layer by using skb_dst_set() to set rth->dst.input = ip_local_deliver so that the packet is accepted as a local packet.
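A rough sketch of what the netif_rx() path might look like inside your PRE_ROUTING hook, assuming the IP/TCP fields and checksums have already been rewritten as in your LOCAL_OUT hook (this is an outline, not tested code):

/* Sketch only: hand the rewritten packet back to the local receive path
 * instead of transmitting it out of an interface. */
skb->dev = lo_dev;               /* or eth1_dev, whichever you decide to use */
skb->pkt_type = PACKET_HOST;     /* mark it as destined for this host */
skb->protocol = htons(ETH_P_IP); /* data currently points at the IP header */
netif_rx(skb);                   /* queue it for local protocol processing */
return NF_STOLEN;                /* we have taken ownership of the skb */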
I have been trying to write a small program for Linux to detect a client connection on a port, say 8080, and, upon a connection, close the socket and execvp some program.
I set up the socket for the port.
After that I do a select to wait for incoming client connections.
if (select(listener + 1, &master, NULL, NULL, NULL) == -1)
{
    perror("Server-select() error!");
    exit(1);
}

printf("Close socket...\n");
close(listener);
After this I execvp a program that should then read the data on the port.
This all works fine, but the client that tries to connect always fails the first time because, I guess, the data sent from the client to the program is lost when I close the socket.
Is there any way to wait for port connections without losing the data sent?
I was thinking of something like not acknowledging the connection.
When I do accept() as suggested:
{
    struct sockaddr_in clientName = { 0 };
    int slaveSocket, clientLength = sizeof(clientName);

    (void) memset(&clientName, 0, sizeof(clientName));

    slaveSocket = accept(listener, (struct sockaddr *) &clientName, &clientLength);
    if (-1 == slaveSocket)
    {
        perror("accept()");
        exit(1);
    }
}

printf("Close socket...\n");
close(listener);

if ((child = fork()) == 0) {        /* Child process. */
    printf("Child: PID of Child = %ld\n", (long) getpid());
    execvp(argv[2], &argv[2]);      /* arg[0] has the command name. */
    /* If the child process reaches this point, then execvp must have failed. */
    fprintf(stderr, "Child process could not do execvp.\n");
    exit(1);
} else {                            /* Parent process. */
    if (child == (pid_t) (-1)) {
        fprintf(stderr, "Fork failed.\n");
        exit(1);
    } else {
        c = wait(&cstatus);         /* Wait for child to complete. */
        printf("Parent: Child %ld exited with status = %d\n", (long) c, cstatus);
    }
}
The executed shell program fails with:
bind() error (port number: 8554): Address already in use
So I guess I need to release the port somehow?
See this example of how to do it correctly: http://www.linuxjournal.com/files/linuxjournal.com/linuxjournal/articles/023/2333/2333l1.html
You don't close your listening socket. You call accept() to get a new fd for the incoming connection, then fork. After forking, the child may close the listening socket and use the accepted socket to transfer data; the parent process just closes the accepted socket and continues listening. A minimal sketch follows.
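A minimal sketch of that pattern, assuming listener is already bound and listening (serve_forever is a hypothetical name, and the comment marks where an execvp would go):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Sketch only: accept first, then fork; the child owns the accepted
 * connection while the parent keeps listening. */
static void serve_forever(int listener)
{
    for (;;) {
        struct sockaddr_in client;
        socklen_t clientlen = sizeof(client);
        int conn = accept(listener, (struct sockaddr *)&client, &clientlen);
        if (conn < 0) {
            perror("accept");
            continue;
        }

        pid_t child = fork();
        if (child == 0) {
            close(listener);          /* child: the listener stays with the parent */
            /* If the child is going to execvp a program that must read the
             * client's data, dup2(conn, STDIN_FILENO) here first. */
            close(conn);
            _exit(0);
        }
        close(conn);                  /* parent: drop the connection, keep listening */
    }
}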
As you aren't accepting the incoming connection, it must get closed when you close the listening socket. You need to call accept() first.
Is it possible that accept() (on Red Hat Enterprise 4 / Linux kernel 2.6) returns the same socket value for different TCP connections from the same process of the same application on the same machine?
I was so surprised when I checked the log file and found that many connections had the same socket value on the server side!! How is that possible?!
By the way, I am using a blocking TCP socket to listen.
int main() {
    int fd, clientfd, len, clientlen;
    struct sockaddr_in address, clientaddress;

    fd = socket(PF_INET, SOCK_STREAM, 0);
    ....
    memset(&address, 0, sizeof address);
    address.sin_family = AF_INET;
    address.sin_port = htons(port);
    ....
    bind(fd, (struct sockaddr *)&address, sizeof address);
    listen(fd, 100);
    do {
        clientfd = accept(fd, (struct sockaddr *)&clientaddress, &clientlen);
        if (clientfd < 0) {
            ....
        }
        printf("clientfd = %d", clientfd);
        switch (fork()) {
            case 0:
                // do something else
                exit(0);
            default:
                ...
        }
    } while (1);
}
My question is: why does printf("clientfd = %d"); print the same number for different connections?!
If the server runs in multiple processes (like Apache with the mpm worker model), then every process has its own file descriptor numbering, starting from 0.
In other words, it is quite possible that different processes will get exactly the same socket file descriptor number. However, the fd number does not really mean anything by itself: the descriptors still refer to different underlying socket objects and different TCP connections.
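A tiny illustration of that point (not your code): after a fork(), each process allocates descriptor numbers independently, so two processes can report the same fd number for completely different open files or connections.

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    fork();                               /* parent and child both continue here */
    int fd = open("/dev/null", O_RDONLY); /* each process gets its lowest free fd */
    printf("pid %ld got fd %d\n", (long)getpid(), fd);
    return 0;
}

Both processes will typically print the same fd number (often 3), even though the two descriptors are independent.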
The socket is just a number. It is a handle to a data structure in the kernel.
BTW, TCP uses IP; look up the RFC.
That printf() doesn't print any FD at all. It's missing an FD parameter. What you are seeing could be a return address or any other arbitrary junk on the stack.
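In other words, a corrected call would pass the descriptor explicitly, for example:

printf("clientfd = %d\n", clientfd);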