I'm trying to find a UPnP device in the urn:dslforum-org namespace with this code:
com_ptr<IUPnPDeviceFinder> upnp_device_finder;
CoCreateInstance(CLSID_UPnPDeviceFinder, NULL, CLSCTX_ALL,
                 IID_IUPnPDeviceFinder, (LPVOID*)&upnp_device_finder);
if (upnp_device_finder.empty()) return;

com_ptr<IUPnPDevices> upnp_devices;
upnp_device_finder->FindByType(L"urn:dslforum-org:device:InternetGatewayDevice:1",
                               0, &upnp_devices);
if (upnp_devices.empty()) return;

long num_devices = 0;
upnp_devices->get_Count(&num_devices);
assert(num_devices != 0);
In Wireshark I can see my device responding, but num_devices is always 0. Another InternetGatewayDevice on the same LAN is found via:
FindByType(L"urn:schemas-upnp-org:device:InternetGatewayDevice:1", 0, &upnp_devices);
So the only difference I can see is the namespace. Does the Windows 8.1 IUPnPDeviceFinder only find devices in the urn:schemas-upnp-org namespace? Am I missing something? Any ideas?
I am trying to establish an automated serial connection with a medical Bluetooth device.
For example, the device is paired as COM port 20. The problem with this device is that once it is turned off, it drops the Bluetooth connection, leaving the COM port in an inaccessible state (i.e., when I try to open it, CreateFile fails and GetLastError() returns 0x79, ERROR_SEM_TIMEOUT).
So I tried polling for the COM port in the main thread, as shown below:
{
    ......
    do
    {
        hSerial = CreateFileA(pcomport,
                              GENERIC_READ | GENERIC_WRITE,
                              0,
                              0,
                              OPEN_EXISTING,
                              IO_ATTRIBUTE,
                              0);
        if (hSerial == INVALID_HANDLE_VALUE)
        {
            result = GetLastError();
            if (ERROR_FILE_NOT_FOUND == result)
            {
                // Serial port does not exist. Inform user and exit for now...
                printf("spc [OpenSerialPort]: unable to detect COM port: %s\n", pcomport);
                return hSerial;
            }
            // Some other error occurred. Inform user and retry.
            // (No CloseHandle() here: the handle is INVALID_HANDLE_VALUE.)
            printf("spc [OpenSerialPort]: unknown error 0x%08x at %s\n", result, pcomport);
        }
    } while (hSerial == INVALID_HANDLE_VALUE);
    ....
}
This seems to work fine. But the problem I am facing is that when I try to open another device's COM port, where the Bluetooth connection is maintained, it fails (maybe because of the polling with a Win32 API).
I couldn't find any method on MSDN to detect when the connection is established.
How can I automate this scenario? That is, whenever the device is on, the serial port should be opened and processing should continue in the next cycle.
My app creates a tap interface, and everything works well. But on FreeBSD, when it exits, the tap interface remains. To delete it, I have to manually run this command:
sudo ifconfig tap0 destroy
But I'd like to do this programmatically within my application. Where can I find the docs for SIOCIFDESTROY? Here is what I've tried when my app exits:
struct ifreq ifr;
memset(&ifr, '\0', sizeof(ifr));
strcpy(ifr.ifr_name, "tap0");
int sock = socket(PF_INET, SOCK_STREAM, 0);
err = ioctl(sock, SIOCIFDESTROY, &ifr);
At this point, err is zero, but the tap interface still exists when the app ends. Anyone know what else I might be missing?
The tricky part was finding documentation describing the parameter to pass to ioctl(). I never did find anything decent to read.
Turns out a completely blank ifreq with just the tap interface name set is all that is needed. In addition to the code I included in the question, note that I close the tap device's file descriptor before destroying the interface itself; I can only imagine that is also relevant:
close(device_fd);
struct ifreq ifr;
memset(&ifr, '\0', sizeof(ifr));
strcpy(ifr.ifr_name, "tap0");
int sock = socket(PF_INET, SOCK_STREAM, 0);
err = ioctl(sock, SIOCIFDESTROY, &ifr);
I'm trying to receive IEEE 1722 packets via a raw Ethernet socket on Ubuntu Linux.
The socket itself works fine; I receive every other packet (ARP, TCP, SSDP, ...) flowing around the network, with the exception of the IEEE 1722 packets. They are somehow ignored by my read calls and I don't understand why - maybe someone has an idea.
The packets are 802.1Q frames with a VLAN tag and EtherType 0x22f0.
Neither switching from ETH_P_ALL to ETH_P_8021Q nor to htons(0x22f0) helps; if I change it, I don't receive anything at all.
Here's my code - does anyone see what's wrong?
Creating the socket:
m_socket = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
if (m_socket < 0)
{
    LOGERROR("EthRawSock", "Start(): SOCK_RAW creation failed! error: %d", errno);
    m_socket = NULL;
    return ErrorFileOpen;
}

struct ifreq ifr;
memset(&ifr, 0, sizeof(ifr));
strcpy(ifr.ifr_name, m_sznic.ptrz());
if (ioctl(m_socket, SIOCGIFINDEX, &ifr) < 0) {
    LOGERROR("EthRawSock", "Start(): ioctl() SIOCGIFINDEX failed! error: %d (NIC: %s)", errno, ifr.ifr_name);
    return ErrorFileOpen;
}

struct sockaddr_ll sll;
memset(&sll, 0, sizeof(sll));
sll.sll_family = AF_PACKET;
sll.sll_ifindex = ifr.ifr_ifindex;
sll.sll_protocol = htons(0x22f0);
if (bind((int)m_socket, (struct sockaddr *) &sll, sizeof(sll)) < 0) {
    LOGERROR("EthRawSock", "Start(): bind() failed! error: %d", errno);
    return ErrorFileOpen;
}

if (ioctl(m_socket, SIOCGIFHWADDR, &ifr) < 0)
{
    LOGERROR("EthRawSock", "Start(): SIOCGIFHWADDR failed! error: %d", errno);
    return ErrorFileOpen;
}

struct packet_mreq mr;
memset(&mr, 0, sizeof(mr));
mr.mr_ifindex = sll.sll_ifindex;
mr.mr_type = PACKET_MR_PROMISC;
if (setsockopt(m_socket, SOL_PACKET, PACKET_ADD_MEMBERSHIP, &mr, sizeof(mr)) < 0) {
    LOGERROR("EthRawSock", "Start(): setsockopt() PACKET_ADD_MEMBERSHIP failed! error: %d", errno);
    return ErrorFileOpen;
}
Reading via:
nsize = read(m_socket,m_recv_buffer,ETH_FRAME_LEN);
My two cents:
AVTP streams run in tagged frames, which means you won't find EtherType 0x22f0 at the expected offset (12 octets from the start of the packet, just after the destination and source MAC addresses) - it will be 4 octets after that. The EtherType of a VLAN-tagged frame is normally 0x8100.
Have you tried Wireshark - or tshark - on this interface? Wireshark should be able to capture those packets fine - not sure if you need to enable anything, though. If I'm not mistaken, all network ports must support 802.1AS. IEEE 1722 requires hardware support, and I think it would be impossible to help you out without knowing how this was set up.
Code snippet below; basically, I grab the active VT and issue a KDGETLED ioctl against that terminal for the current state of the Caps Lock/Num Lock/Scroll Lock keys, and I always get result = 0, regardless of the state of the lock keys.
I've tried this on multiple Linux boxes, all running variants of Ubuntu (e.g. Mint). I've tried other fds for the KDGETLED command, such as "/dev/tty", "/dev/console", 0, etc. I run into the same problem with KDGKBLED. Are others experiencing the same issue, am I doing something silly, am I running into poorly written drivers, or is it something else?
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/kd.h>
#include <linux/vt.h>

int fd;
struct vt_stat stat;

fd = open("/dev/tty0", O_RDONLY);
if (ioctl(fd, VT_GETSTATE, &stat) == -1) {
    fprintf(stderr, "Error on VT_GETSTATE\n");
    exit(1);
}
close(fd);

char tty[128];
sprintf(tty, "/dev/tty%d", stat.v_active);
printf("Query tty: %s\n", tty);

char result;
fd = open(tty, O_RDWR | O_NDELAY, 0);
if (ioctl(fd, KDGETLED, &result) == -1) {
    fprintf(stderr, "Error on KDGETLED\n");
    exit(1);
}
close(fd);
printf("LED flag state: %d\n", result);
Thanks, in advance, to all who review my question.
Check out the driver code, especially the struct file_operations instance for that driver, and look at the function assigned to its .ioctl member; if that is poorly coded (I've seen a lot of sloppy ioctl handlers), then that is definitely your issue.
In this case I am pretty sure it is the driver's fault. As long as the ioctl command compiles, everything else, especially error handling and input checking, is the driver's job.
I am working on a Linux server that listens for UDP messages as part of a discovery protocol. My code for listening follows:
rcd = ::select(
    socket_handle + 1,
    &wait_list,
    0, // no write
    0, // no error
    &timeout);
if(rcd > 0)
{
    if(FD_ISSET(socket_handle, &wait_list))
    {
        struct sockaddr address;
        socklen_t address_size = sizeof(address);
        len = ::recvfrom(
            socket_handle,
            rx_buff,
            max_datagram_size,
            0, // no flags
            &address,
            &address_size);
        if(len > 0 && address.sa_family == AF_INET)
        {
            struct sockaddr_in *address_in =
                reinterpret_cast<struct sockaddr_in *>(&address);
            event_datagram_received::cpost(
                this,
                rx_buff,
                rcd,
                ntohl(address_in->sin_addr.s_addr),
                ntohs(address_in->sin_port));
        }
    }
}
In the meantime, I have written a windows client that transmits the UDP messages. I have verified using wireshark that the messages are being transmitted with the right format and length (five bytes). However, when I examine the return value for recvfrom(), this value is always one. The size of my receive buffer (max_datagram_size) is set to 1024. The one byte of the packet that we get appears to have the correct value. My question is: why am I not getting all of the expected bytes?
In case it matters, my Linux server is running under Debian 5 within a VirtualBox virtual machine.
nos answered my question in the first comment: I was using the wrong variable to report the buffer length. The call to cpost() passes rcd, the return value of ::select() (the number of ready descriptors, which is always one here), where it should pass len, the byte count returned by ::recvfrom().