OK, so this is exactly the opposite of what everyone asks about in network programming. Usually, people ask how to make a broken socket work; I, on the other hand, am looking to break a working one.
I currently have sockets working fine and want them to break, to re-create a problem we are seeing. I am not sure how to go about intentionally making a socket fail with a bad read. The trick is this: the socket needs to be a working, established connection, and then it must fail for whatever reason.
I'm writing this in C, and the drivers run on a Linux system. The sockets are handled by a non-IP Level 3 protocol implemented in a Linux device driver. I have full access to the whole code base; I just need to find a way to tease out a failure.
Any ideas?
Can you modify your kernel? You could introduce a method to induce errors at the network stack level.
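If patching the kernel is too invasive, one userspace trick worth trying first is to have the test peer abort the connection instead of closing it cleanly. This is only a sketch, and it assumes a TCP-style peer; a non-IP Level 3 stack may need its own equivalent hook:

/* Test-peer sketch: accept() a connection as usual, then abort it so the
   side under test sees a failed read on an established socket.
   Assumes TCP; a non-IP stack may need its own error-injection hook. */
#include <sys/socket.h>
#include <unistd.h>

static void abort_connection(int connfd)
{
    /* l_onoff=1, l_linger=0: close() sends a reset instead of a clean FIN */
    struct linger lg = { .l_onoff = 1, .l_linger = 0 };
    setsockopt(connfd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg);
    close(connfd);  /* the peer's next recv() fails, typically ECONNRESET */
}

With TCP, the zero-linger close() emits an RST, so the side under test gets a hard failure on its next read of an otherwise established connection.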
One classic trick is to unplug the network cable.
I'm writing an RSS reader, and I want to gracefully handle situations where the internet connection is unavailable. What's a good way (on Linux) to test program behavior in the absence of internet without pulling the cord and/or RF-killing everything?
As User noted in a comment, you could test this by mocking your network access library: at the point where the library would normally access the network, you modify the behavior to read a local file instead. This post describes the technique in more detail, and includes code samples for Python and urllib.
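On Linux you can also fake the failure below the library, without touching your code, by intercepting the socket calls with an LD_PRELOAD shim. A minimal sketch (the netdown.c name is made up, and this blocks every connection, including localhost, so refine the check if your tests need local sockets):

/* netdown.c -- hypothetical LD_PRELOAD shim that makes every connect()
   fail with ENETUNREACH, simulating a missing internet connection.
   Build: gcc -shared -fPIC -o netdown.so netdown.c
   Run:   LD_PRELOAD=./netdown.so ./my_rss_reader */
#include <errno.h>
#include <sys/socket.h>

int connect(int sockfd, const struct sockaddr *addr, socklen_t addrlen)
{
    (void)sockfd; (void)addr; (void)addrlen;
    errno = ENETUNREACH;  /* "Network is unreachable" */
    return -1;
}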
From https://bbs.archlinux.org/viewtopic.php?id=83384:
ifconfig eth0 down/up? (or another interface instead of eth0)
I can't test this, as I'm at work and need the Internet for stuff currently running, but hopefully this at least points you in the right direction.
I have an embedded system that can be treated as an Access Point. A program runs on that system and performs network communication with devices connected to that Access Point: it sends UDP packets containing diagnostic information (a data structure) and receives commands. The problem is that sometimes some fields of the outgoing data structure are not filled with data (e.g. they contain zeroes or garbage). I need those fields to be filled correctly every time, and I know what values should be put there.
Another task I need to accomplish is to filter incoming packets that reach this program (I know which ports it listens on). Usually I need to simply pass them through, but occasionally (e.g. when I get certain information from sensors) it is necessary to completely replace them with new packets that I would generate.
I have several ideas, varying from some smart usage of iptables and pcap to writing my own kernel module. I do not have the sources of the embedded application, so I cannot embed this functionality in its code. Performance is crucial here, and I'd like to hear your suggestions: what should I go for? Writing my own kernel module seems like the best solution to me, but I have no experience in network hacking, so maybe there are other ways better suited to this problem. Any opinion will be highly appreciated!
One standard approach is to use libnetfilter_queue to intercept and modify packets directly. You should at least try this before attempting to write your own kernel modules.
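A minimal sketch of the libnetfilter_queue approach, using the API of recent library versions (the queue number 0 and UDP port 9000 in the iptables rule are made-up examples; error handling is omitted):

/* Inspect and (optionally) rewrite queued packets from userspace.
   Assumes a rule such as:
     iptables -A OUTPUT -p udp --dport 9000 -j NFQUEUE --queue-num 0 */
#include <arpa/inet.h>   /* ntohl */
#include <stdint.h>
#include <sys/socket.h>
#include <linux/netfilter.h>  /* NF_ACCEPT */
#include <libnetfilter_queue/libnetfilter_queue.h>

static int cb(struct nfq_q_handle *qh, struct nfgenmsg *nfmsg,
              struct nfq_data *nfa, void *data)
{
    struct nfqnl_msg_packet_hdr *ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;
    unsigned char *payload;
    int len = nfq_get_payload(nfa, &payload);

    /* Patch the payload here before re-injecting; handing the (possibly
       modified) buffer back to nfq_set_verdict issues the new packet. */
    return nfq_set_verdict(qh, id, NF_ACCEPT,
                           len < 0 ? 0 : (uint32_t)len, payload);
}

int main(void)
{
    struct nfq_handle *h = nfq_open();
    struct nfq_q_handle *qh;
    char buf[4096];
    int fd, n;

    nfq_bind_pf(h, AF_INET);
    qh = nfq_create_queue(h, 0, &cb, NULL);       /* queue-num 0 */
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);  /* copy the whole packet */

    fd = nfq_fd(h);
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        nfq_handle_packet(h, buf, n);

    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}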
You could do it in userspace: just write a relay server that receives the packets, changes them, and sends them out again. You would have to configure the application to use localhost as the destination IP (or configure your system so that it owns the target address). It's a typical "man-in-the-middle" setup.
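A bare-bones sketch of such a relay for UDP (the ports and forward address are placeholders for this example):

/* Minimal UDP man-in-the-middle relay: listen on LISTEN_PORT, optionally
   rewrite the datagram, and forward it to FWD_ADDR:FWD_PORT. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#define LISTEN_PORT 9000
#define FWD_ADDR    "192.168.1.50"
#define FWD_PORT    9001

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in me = { .sin_family = AF_INET,
                              .sin_addr.s_addr = htonl(INADDR_ANY),
                              .sin_port = htons(LISTEN_PORT) };
    struct sockaddr_in to = { .sin_family = AF_INET,
                              .sin_port = htons(FWD_PORT) };
    char buf[2048];
    ssize_t n;

    inet_pton(AF_INET, FWD_ADDR, &to.sin_addr);
    bind(s, (struct sockaddr *)&me, sizeof me);

    while ((n = recv(s, buf, sizeof buf, 0)) >= 0) {
        /* ... inspect or patch buf here before forwarding ... */
        sendto(s, buf, n, 0, (struct sockaddr *)&to, sizeof to);
    }
    close(s);
    return 0;
}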
Sorry for the rather long post.
I need some input regarding a project that I am going to undertake.
I am trying to make an application that collects kernel debugging information from a guest Linux OS located inside a VMware virtual machine and sends it to a host OS efficiently.
So far, I have found a similar project, but written for Windows [1].
The author of that project wrote a DLL that is loaded into memory and replaces the implementation of the KdSendPacket and KdReceivePacket functions, to use the VMware GuestRpc [2] mechanism instead of the slow serial port.
The data is then sent to a debugging application on the host (Kd or WinDbg) through a named pipe.
The author claims a speed-up of up to 45% by avoiding the serial-port transmission.
I am trying to achieve something similar, but for Linux, to make the debugging process a little faster than using the serial port.
My concrete questions are:
Do any similar applications exist?
I didn't manage to find any.
Would such an application be worthwhile, compared to netconsole [3], for example?
What method of intercepting printk messages would you suggest?
Is there an equivalent of KdSendPacket/KdReceivePacket on Linux?
[1]. http://virtualkd.sysprogs.org/dox/operation.html
[2]. http://articles.sysprogs.org/kdvmware/guestrpc.shtml
[3]. http://www.kernel.org/doc/Documentation/networking/netconsole.txt
Using the serial port is really suboptimal; even the (virtual) network would be preferable to that. But getting back to host-guest IPC channels, VMware's VMCI comes to mind.
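As a rough sketch of what the guest side could look like over a vsock socket (this assumes a kernel and glibc with AF_VSOCK support on top of the VMCI transport; the port number is arbitrary):

/* Guest-side sketch: stream debug data to the host over a vsock socket. */
#include <linux/vm_sockets.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    struct sockaddr_vm addr;

    memset(&addr, 0, sizeof addr);
    addr.svm_family = AF_VSOCK;
    addr.svm_cid    = VMADDR_CID_HOST;  /* talk to the hypervisor host */
    addr.svm_port   = 1234;             /* arbitrary example port */

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
        write(fd, "printk data...\n", 15);
    close(fd);
    return 0;
}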
Many approaches can be used to achieve your goal; the methods below can be applied if the network is connected:
Use a syslog service and transfer the logs over the network to your server:
syslogd and syslog-ng both seem to support sending logs to a log server, with some filter criteria.
Directly call TCP/UDP socket functions in your kernel module to send your collected data back to the server (see the sketch after this list).
As for other approaches, you could write an application on the host machine that calls the hypervisor's shared-memory access functions to read the memory buffer of your kernel module. However, while the Xen and KVM hypervisors both support such APIs, I am not sure whether VMware has this kind of library.
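Here is a minimal sketch of the in-kernel socket idea (assumes a recent kernel where sock_create_kern() takes a network namespace; the 192.168.1.10:5140 destination is a placeholder):

/* Kernel-module sketch: send collected data to a server over UDP. */
#include <linux/in.h>
#include <linux/inet.h>
#include <linux/module.h>
#include <linux/net.h>
#include <linux/uio.h>
#include <net/sock.h>

static struct socket *sock;
static char payload[] = "hello from the kernel\n";

static int __init dbg_init(void)
{
    struct sockaddr_in srv = {
        .sin_family = AF_INET,
        .sin_port   = htons(5140),                            /* placeholder */
        .sin_addr   = { .s_addr = in_aton("192.168.1.10") },  /* placeholder */
    };
    struct msghdr msg = { .msg_name = &srv, .msg_namelen = sizeof srv };
    struct kvec vec = { .iov_base = payload, .iov_len = sizeof payload - 1 };
    int err;

    err = sock_create_kern(&init_net, AF_INET, SOCK_DGRAM, IPPROTO_UDP, &sock);
    if (err)
        return err;
    kernel_sendmsg(sock, &msg, &vec, 1, vec.iov_len);
    return 0;
}

static void __exit dbg_exit(void)
{
    sock_release(sock);
}

module_init(dbg_init);
module_exit(dbg_exit);
MODULE_LICENSE("GPL");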
I need to close all ongoing Linux TCP sockets as soon as the Ethernet interface drops (i.e. the cable is disconnected, the interface is down'ed, and so on).
Hacking into /proc does not seem to do the trick, and I have not found any valuable ioctls.
Doing it by hand at the application level is not what I want; I'm really looking for a brutal and global way of doing it.
Has anyone experienced this before and is willing to share their findings?
The brutal way that avoids application-level coding is hacking your kernel to activate TCP keepalive with a low timeout for all your connections.
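For reference, the same keepalive idea applied per socket from userspace, without the kernel hack, looks like this (the timing values are arbitrary examples, and it only covers sockets you own):

/* Aggressive TCP keepalive so a dead link is detected within seconds. */
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void enable_fast_keepalive(int fd)
{
    int on = 1, idle = 1, intvl = 1, cnt = 3;

    setsockopt(fd, SOL_SOCKET,  SO_KEEPALIVE,  &on,    sizeof on);
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE,  &idle,  sizeof idle);  /* s before 1st probe */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl); /* s between probes */
    setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT,   &cnt,   sizeof cnt);   /* probes before drop */
}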
This is rarely needed and often wouldn't work. TCP is a data-transfer protocol; unless there is data loss, nothing should be done. Think twice about why you would ever need that.
Otherwise, you can try to periodically poll the interface(s) and check for the UP flag. If an interface loses the UP flag, then the OS has already reacted to the cable being unplugged and down'ed the interface. See man 7 netdevice, SIOCGIFFLAGS, for more.
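A small sketch of that polling check ("eth0" is just an example name):

/* Query an interface's flags via SIOCGIFFLAGS (see man 7 netdevice). */
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int iface_is_up(const char *name)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int up = 0;

    memset(&ifr, 0, sizeof ifr);
    strncpy(ifr.ifr_name, name, IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFFLAGS, &ifr) == 0)
        up = !!(ifr.ifr_flags & IFF_UP);  /* add IFF_RUNNING for a carrier check */
    close(fd);
    return up;
}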
Network drivers also generate an event when the cable is plugged or unplugged, but I'm not sure whether you can access that as a user. You might want to check udev, as its documentation explicitly mentions network interfaces.
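If polling is too coarse, one way to catch those events from userspace is an rtnetlink socket subscribed to link notifications. A sketch, with error handling omitted:

/* Listen for link up/down events on an rtnetlink socket. */
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    struct sockaddr_nl sa;
    char buf[4096];
    int n;

    memset(&sa, 0, sizeof sa);
    sa.nl_family = AF_NETLINK;
    sa.nl_groups = RTMGRP_LINK;  /* subscribe to link-state changes */
    bind(fd, (struct sockaddr *)&sa, sizeof sa);

    while ((n = recv(fd, buf, sizeof buf, 0)) > 0) {
        struct nlmsghdr *nh;
        for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, n); nh = NLMSG_NEXT(nh, n)) {
            if (nh->nlmsg_type == RTM_NEWLINK || nh->nlmsg_type == RTM_DELLINK) {
                struct ifinfomsg *ifi = NLMSG_DATA(nh);
                printf("link index %d flags 0x%x\n", ifi->ifi_index, ifi->ifi_flags);
            }
        }
    }
    close(fd);
    return 0;
}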
The situation is this: I have a USB device (a custom device I'm trying to talk to) with two endpoints, one writing to the device, one reading from the device. Both are bulk transfers. Every communication transaction takes the form of (1) Write a command to the device (2) Read the response. I'm using libusb (version 0.1 rather than the 1.0 beta) to actually perform the communications.
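For context, the transaction pattern with libusb 0.1 looks roughly like this (the OUT endpoint 0x01 and the 1000 ms timeout are assumptions for the example; 0x81 is the IN endpoint from the question):

/* Sketch: write a command to the OUT endpoint, read the reply from IN 0x81. */
#include <usb.h>  /* libusb 0.1 */

int do_transaction(usb_dev_handle *dev, char *cmd, int cmd_len,
                   char *resp, int resp_len)
{
    int n = usb_bulk_write(dev, 0x01, cmd, cmd_len, 1000 /* ms */);
    if (n < 0)
        return n;
    /* This is the call that fails with -22 (EINVAL) in the question. */
    return usb_bulk_read(dev, 0x81, resp, resp_len, 1000 /* ms */);
}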
On Windows, all is well. I can connect the device, claim the interface and communicate happily. However, in Ubuntu (a standard Hardy desktop install), whilst I can connect to the device and write to it, all read operations fail with the error "error submitting URB: Invalid argument" reported from libusb (error code -22).
If I check /var/log/messages I see a warning message logged for the same time as the read was attempted: "sysfs: duplicate filename 'usbdev4.3_ep81' can not be created" - which tallies with the device (it is indeed on that bus and it's endpoint 81 I'm trying to read from).
So... anyone seen a similar problem using libusb, or have any idea how to fix it?
Turns out it was a misconfiguration in the descriptors on the device itself. lsusb -v showed an extra, never-used interface with a single isochronous endpoint 0x81. Since it was never used (and, as far as I could see, had never been tested, so quite possibly was not even defined correctly), I removed it from the device descriptors completely (in the firmware).
And now I have a fully working device. Why Linux refused to read from the device while Windows worked fine I don't know, but it definitely sent me on a wild goose chase.
I haven't used libusb in quite some time -- but the sysfs error indicates that this is likely to be a kernel problem rather than a libusb one, so I'd start by trying to track that one down. (Not much point in trying to work with libusb until you're sure your kernel is talking to the device correctly).
Does the patch at http://kerneltrap.org/mailarchive/linux-usb-devel/2007/10/17/345922 apply to your kernel? (If so, does it fix the issue?)
I had to do some hacking on udev rules to get the device node created with the right permissions for libusb to work. Like so:
SUBSYSTEM=="usb" ATTRS{idVendor}=="0a81", ATTRS{idProduct}=="0701", \
MODE="0666" SYMLINK+="missile_launcher"
(This was a USB missile launcher I was writing a driver for.)
Also, this snippet was required so as not to clash with the kernel's usbhid driver:
#ifdef LIBUSB_HAS_DETACH_KERNEL_DRIVER_NP
    /* Detach the kernel driver (usbhid) from the device's interfaces
       so libusb can claim them. (Linux-specific hack.) */
    usb_detach_kernel_driver_np(launcher, 0);
    usb_detach_kernel_driver_np(launcher, 1);
#endif
I'm not sure how this relates to your problem, but at least these are two possible points of failure that might be involved.
You can try WinDriver. It's a commercial tool, but it has a free, full-function evaluation (time-limited). If the problem is reproducible with WinDriver too, it might be the device's fault or your protocol's. You did not give enough information to determine or analyze more than that.