SW1: 6E SW2: 00 error on selecting applet - javacard

I am trying to implement my first Java Card applet, and when selecting the applet in the CREF simulator I get the status word SW1: 6E SW2: 00.
The tools that I am using are:
Eclipse
JDK 7
JCDK 2.2.2
Eclipse JCDE
This is the definition of my applet:
import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;

public class Card extends Applet {

    /******************** Constants ************************/
    public static final byte CLA_CARDAPPLET = (byte) 0xB0;
    public static final byte INS_INCREMENT_COUNTER = 0x00;
    public static final byte INS_DECREMENT_COUNTER = 0x01;
    public static final byte INS_CHECK_COUNTER = 0x02;
    public static final byte INS_INITIALIZE_COUNTER = 0x03;

    /*********************** Variables ***************************/
    private byte counter;

    /************ Constructor **************/
    private Card() {
        counter = 0;
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) throws ISOException {
        new Card().register();
    }

    public void process(APDU apdu) throws ISOException {
        byte[] buffer = apdu.getBuffer();
        if (buffer[ISO7816.OFFSET_CLA] != CLA_CARDAPPLET) {
            ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED);
        }
        switch (buffer[ISO7816.OFFSET_INS]) {
        case INS_INCREMENT_COUNTER:
            counter++;
            break;
        case INS_DECREMENT_COUNTER:
            counter--;
            break;
        case INS_CHECK_COUNTER:
            buffer[0] = counter;
            apdu.setOutgoingAndSend((short) 0, (short) 1);
            break;
        case INS_INITIALIZE_COUNTER:
            apdu.setIncomingAndReceive();
            counter = buffer[ISO7816.OFFSET_CDATA];
            break;
        default:
            break;
        }
    }
}
To simulate the Java Card, I follow these steps:
Execute cref -o card.eeprom
Upload the applet: myPackage > Java Card Tools > Deploy
Execute cref -i card.eeprom -o card.eeprom
Initialize the card by running create-Card.script
Execute cref -i card.eeprom -o card.eeprom
Select the card by running select-Card.script
The auto-generated content of the script select-Card.script is:
powerup;
// select Card applet
0x00 0xA4 0x04 0x00 0xb 0x01 0x02 0x03 0x04 0x05 0x06 0x07 0x08 0x09 0x00 0x01 0x7F;
powerdown;
Here 0x01 0x02 0x03 0x04 0x05 0x06 0x07 0x08 0x09 0x00 0x01 is the AID of the Card applet, 0xb is its length (Lc), and the header 0x00 0xA4 0x04 0x00 is the standard SELECT-by-AID command, with CLA = 0x00.
The selection returns SW1: 6E SW2: 00, and according to scard.org the code 6E 00 means that the class does not exist or is not supported. But which class, and how do I make it recognizable?

You are getting 0x6E00 because of the line below:
if (buffer[ISO7816.OFFSET_CLA] != CLA_CARDAPPLET) {
    ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED);
}
When you select the applet, the SELECT APDU is delivered to your applet's process() method, and your code checks its class byte, which is 0x00 for the SELECT command rather than 0xB0.
Add the lines below at the top of the process(APDU apdu) method:
if (selectingApplet()) {
    return;
}
This makes the applet return SW 0x9000 when it receives the SELECT command.
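Putting it together, a minimal sketch of the corrected process() method (the rest of the applet unchanged):

public void process(APDU apdu) throws ISOException {
    // Let the SELECT command through before enforcing the proprietary CLA;
    // the JCRE then answers it with SW 0x9000.
    if (selectingApplet()) {
        return;
    }
    byte[] buffer = apdu.getBuffer();
    if (buffer[ISO7816.OFFSET_CLA] != CLA_CARDAPPLET) {
        ISOException.throwIt(ISO7816.SW_CLA_NOT_SUPPORTED);
    }
    // ... dispatch on buffer[ISO7816.OFFSET_INS] as before ...
}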

Related

Arduino to PC secure Bluetooth connection (cryptography)

I'm using an Arduino Uno to build a smoke detection system, which works. Since this is an IoT project, I have to establish a secure connection with another device (I was thinking of my smartphone or my PC), and I'm using an HC-05 Bluetooth module to do it. The idea is:
Send the smoke sensor data to the Arduino IDE, encrypt it and display the encrypted data on the serial monitor (this works);
Connect the Arduino to my smartphone using the HC-05 and the app "Makerslab BT Demo" (already done);
Decrypt the value of the sensor when I press "1" on the app and display it;
Decrypt the value of the sensor when there's danger and display a "danger message" (that's what I have to do now).
This is my code in the Arduino IDE:
#include <AES.h>
#include <AESLib.h>
#include <AES_config.h>
#include <xbase64.h>
#include <SoftwareSerial.h>

SoftwareSerial BT(1, 0);
#define VCC2 5

int smokeA0 = A0;
int buzzer = 11;
AES aes;
byte cipher[400];
char b64[400];
float sensorValue;
//char a;

void do_encrypt(String msg, String key_str, String iv_str) {
    byte iv[16];
    memcpy(iv, (byte *)iv_str.c_str(), 16);
    int blen = base64_encode(b64, (char *)msg.c_str(), msg.length());
    aes.calc_size_n_pad(blen);
    int len = aes.get_size(); // zero padding
    byte plain_p[len];
    for (int i = 0; i < blen; ++i) plain_p[i] = b64[i];
    for (int i = blen; i < len; ++i) plain_p[i] = '\0';
    // AES-128-CBC encryption
    int blocks = len / 16;
    aes.set_key((byte *)key_str.c_str(), 16);
    aes.cbc_encrypt(plain_p, cipher, blocks, iv);
    // use the base64 encoder to encode the encrypted data:
    base64_encode(b64, (char *)cipher, len);
    Serial.println(String((char *)b64));
}

void setup() {
    pinMode(buzzer, OUTPUT);
    pinMode(smokeA0, INPUT);
    pinMode(VCC2, OUTPUT);
    digitalWrite(VCC2, HIGH);
    BT.begin(9600);
    BT.println(F("Hi! Press \"1\" to know the sensor value."));
    Serial.begin(115200);
    Serial.println(F("gas sensor is warming up!"));
    delay(2000); // allow the sensor to warm up
    noTone(buzzer);
}

void loop() {
    String key_str = "aaaaaaaaaaaaaaaa"; // 16 bytes
    String iv_str = "aaaaaaaaaaaaaaaa"; // 16 bytes
    sensorValue = analogRead(smokeA0);
    String msg = String(sensorValue, 3);
    do_encrypt(msg, key_str, iv_str);
    /* if(BT.available()){
        a = (BT.read());
        if(a == '1'){
            Serial.print(F("Air quality: "));
        }
    } */
    if (sensorValue > 300) {
        Serial.print(F(" | Danger!"));
        BT.print(F(" | Danger!"));
        tone(buzzer, 1000, 2000);
    }
    else {
        noTone(buzzer);
    }
    Serial.println(F(""));
    delay(2000);
}
I'm not sure AES is the right choice of security protocol, but the encryption works. How can I decrypt the data?
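On the Arduino side, decryption is the mirror image of do_encrypt(). Here is a minimal sketch; it assumes the AES library's cbc_decrypt() mirrors cbc_encrypt() and that xbase64 provides base64_decode(output, input, inputLen), so check both names against your library headers:

void do_decrypt(char *b64_cipher, int b64_len, String key_str, String iv_str) {
    byte iv[16];
    memcpy(iv, (byte *)iv_str.c_str(), 16); // CBC needs the same IV used to encrypt
    // undo the outer base64 layer to recover the raw ciphertext
    int len = base64_decode((char *)cipher, b64_cipher, b64_len);
    byte plain[len + 1];
    aes.set_key((byte *)key_str.c_str(), 16);
    aes.cbc_decrypt(cipher, plain, len / 16, iv); // AES-128-CBC
    plain[len] = '\0'; // the zero padding terminates the string
    // the plaintext was base64-encoded before encryption, so decode once more
    int blen = base64_decode(b64, (char *)plain, strlen((char *)plain));
    b64[blen] = '\0';
    Serial.println(b64); // the original sensor value
}

Whichever side decrypts (the Arduino for testing, or the phone/PC app in the real setup) needs the same key and IV; a hard-coded key with a constant IV is fine for a demo but weak in practice.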

Input/output errors in rusb write_interrupt on Windows

I'm building a small "macro" style keyboard using Rust, STM32 chips, usb-device and usbd-hid. The device is correctly detected by my PC, the manufacturer strings etc. are visible in Device Manager, and I can use the device as a keyboard.
However, when I try to write from host to device using rusb or hidapi, it fails with an I/O error. When I debug the C code, it gives an "Access Denied" error when it tries to WriteFile to the USB device. On further research, it seems this is a limitation of Windows, in that I can't access HID mice and keyboards because they're system reserved:
the native Windows HID driver is supported by libusb, but there are some limitations, such as not being able to access HID mice and keyboards since they are system reserved
However, I can't find any documentation or examples of how to work around this. It seems I should be able to add another configuration, interface or endpoint in my USB descriptors that isn't exclusively held by Windows. Anybody have any hints on where to start?
My current USB device configuration (using usbd-hid's macro, which I'm happy to abandon if I need to) looks like this:
#[gen_hid_descriptor(
    (collection = APPLICATION, usage_page = GENERIC_DESKTOP, usage = KEYBOARD) = {
        (usage_page = KEYBOARD, usage_min = 0xE0, usage_max = 0xE7) = {
            #[packed_bits 8] #[item_settings data,variable,absolute] modifier=input;
        };
        (usage_min = 0x00, usage_max = 0xFF) = {
            #[item_settings constant,variable,absolute] reserved=input;
        };
        (usage_page = LEDS, usage_min = 0x01, usage_max = 0x05) = {
            #[packed_bits 5] #[item_settings data,variable,absolute] leds=output;
        };
        (usage_page = KEYBOARD, usage_min = 0x00, usage_max = 0xDD) = {
            #[item_settings data,array,absolute] keycodes=input;
        };
        (usage_page = 0xFF17, usage_min = 0x01, usage_max = 0xFF) = {
            #[item_settings data,variable,absolute] command=output;
        };
        (usage_page = 0xFF17, usage_min = 0x01, usage_max = 0xFF) = {
            #[item_settings data,variable,absolute] data=output;
        };
    }
)]
#[allow(dead_code)]
pub struct CustomKeyboardReport {
    pub modifier: u8,
    pub reserved: u8,
    pub leds: u8,
    pub keycodes: [u8; 6],
    pub command: u8,
    pub data: u8,
}
Which produces a descriptor that looks like this:
0x05 0x01 0x09 0x06 0xA1 0x01 0x05 0x07
0x19 0xE0 0x29 0xE7 0x15 0x00 0x25 0x01
0x75 0x01 0x95 0x08 0x81 0x02 0x19 0x00
0x29 0xFF 0x26 0xFF 0x00 0x75 0x08 0x95
0x01 0x81 0x03 0x05 0x08 0x19 0x01 0x29
0x05 0x25 0x01 0x75 0x01 0x95 0x05 0x91
0x02 0x95 0x03 0x91 0x03 0x05 0x07 0x19
0x00 0x29 0xDD 0x26 0xFF 0x00 0x75 0x08
0x95 0x06 0x81 0x00 0x06 0x17 0xFF 0x19
0x01 0x29 0xFF 0x95 0x01 0x91 0x02 0x06
0x17 0xFF 0x19 0x01 0x29 0xFF 0x91 0x02
0xC0
I resolved this by creating a separate report:
#[gen_hid_descriptor(
    (collection = LOGICAL, usage_page = VENDOR_DEFINED_START, usage = 0x00) = {
        (usage_page = 0xFF17, usage_min = 0x01, usage_max = 0xFF) = {
            #[item_settings data,array,absolute] command=output;
        };
    }
)]
pub struct CommandReport {
    pub command: [u8; 2],
}
Then I created a separate HID interface with only an OUT endpoint:
// for the keyboard
let hid = HIDClass::new(&alloc, CustomKeyboardReport::desc(), 10);
// for comms from the host -> device
let command = HIDClass::new_ep_out(&alloc, CommandReport::desc(), 10);
In my poll function I check both:
bus.poll(&mut [&mut hid, &mut command])
And then I can read the data by pulling raw output:
let mut buffer: [u8; 64] = [0; 64];
match command.pull_raw_output(&mut buffer) {
    Ok(size) => handleCommand(buffer, size),
    Err(UsbError::WouldBlock) => {
        // no pending data
    }
    Err(err) => panic!("Error receiving data {:?}", err),
}
As there is only one report ID, I did not have to prepend it to the data I sent from the host using libusb.
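For completeness, a host-side sketch using the hidapi crate instead of raw libusb. The VID/PID below are placeholders, and it assumes hidapi's current Rust API; on Windows each top-level collection shows up as its own HID device, so we pick the one with the vendor-defined usage page rather than the system-reserved keyboard interface:

use hidapi::HidApi;

const VID: u16 = 0x1209; // placeholder: your device's vendor ID
const PID: u16 = 0x0001; // placeholder: your device's product ID

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = HidApi::new()?;
    for info in api.device_list() {
        // select the vendor-defined collection (usage page 0xFF17)
        if info.vendor_id() == VID && info.product_id() == PID && info.usage_page() == 0xFF17 {
            let device = info.open_device(&api)?;
            // hidapi expects the report ID as the first byte; 0x00 when the report has none
            device.write(&[0x00, 0x01, 0x02])?;
            return Ok(());
        }
    }
    Err("vendor interface not found".into())
}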

BlueZ, Bluetooth Management API: Read Controller Information Command, bluetooth_version, human-readable

What human-readable version number does the bluetooth_version returned by the Read Controller Information Command of the BlueZ Bluetooth Management API map to?
E.g. my controller returns 0x08. Is this Bluetooth 4.1 or 5.2 or 1.1 or ... ?
I can't find any info on this in the mgmt-api.txt file. Google searches for "bluetooth version binary to string" didn't turn up anything helpful either, and the spec didn't turn up anything for "0x08" or "bluetooth version". Searching for "version" alone is pointless in there, as each page header contains that word...
New insight
btmon seems to know...
# MGMT Event: Command Comp.. (0x0001) plen 283 {0x0003} [hci0]
11:04:18.712443
Read Controller Information (0x0004) plen 280
Status: Success (0x00)
Address: 00:25:CA:2A:08:38 (OUI 00-25-CA)
Version: Bluetooth 4.2 (0x08)
I don't know if or where the Bluetooth version mapping is documented.
However, such a mapping can be found in the BlueZ lib/hci.c source file:
/* Version mapping */
static hci_map ver_map[] = {
    { "1.0b", 0x00 },
    { "1.1",  0x01 },
    { "1.2",  0x02 },
    { "2.0",  0x03 },
    { "2.1",  0x04 },
    { "3.0",  0x05 },
    { "4.0",  0x06 },
    { "4.1",  0x07 },
    { "4.2",  0x08 },
    { "5.0",  0x09 },
    { "5.1",  0x0a },
    { NULL }
};
I also found a mapping in monitor/packet.c:
void packet_print_version(const char *label, uint8_t version,
                          const char *sublabel, uint16_t subversion)
{
    const char *str;

    switch (version) {
    case 0x00:
        str = "Bluetooth 1.0b";
        break;
    case 0x01:
        str = "Bluetooth 1.1";
        break;
    case 0x02:
        str = "Bluetooth 1.2";
        break;
    case 0x03:
        str = "Bluetooth 2.0";
        break;
    case 0x04:
        str = "Bluetooth 2.1";
        break;
    case 0x05:
        str = "Bluetooth 3.0";
        break;
    case 0x06:
        str = "Bluetooth 4.0";
        break;
    case 0x07:
        str = "Bluetooth 4.1";
        break;
    case 0x08:
        str = "Bluetooth 4.2";
        break;
    case 0x09:
        str = "Bluetooth 5.0";
        break;
    case 0x0a:
        str = "Bluetooth 5.1";
        break;
    default:
        str = "Reserved";
        break;
    }

    if (sublabel)
        print_field("%s: %s (0x%2.2x) - %s %d (0x%4.4x)",
                    label, str, version,
                    sublabel, subversion, subversion);
    else
        print_field("%s: %s (0x%2.2x)", label, str, version);
}
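So 0x08 is Bluetooth 4.2, matching what btmon printed, and the Bluetooth SIG Assigned Numbers document continues the sequence (0x0b = 5.2, 0x0c = 5.3). If you link against libbluetooth you don't need to copy the table yourself. A minimal sketch, assuming the lmp_vertostr() helper that BlueZ declares in bluetooth/hci_lib.h (it consults the same ver_map and returns an allocated string):

#include <stdio.h>
#include <stdint.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/hci.h>
#include <bluetooth/hci_lib.h>

int main(void)
{
    uint8_t version = 0x08;            /* value from Read Controller Information */
    char *str = lmp_vertostr(version); /* expected to yield "4.2" */
    printf("Bluetooth %s (0x%02x)\n", str ? str : "unknown", version);
    bt_free(str);                      /* the helper returns an allocated string */
    return 0;
}

Compile with: gcc btver.c -o btver -lbluetooth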

How to initiate USB bulk transfers in usb-vhci

EDIT: I am reformulating this question, as I've managed to get the basics to work but still experience problems.
I'm trying to emulate a USB device (bar code scanner) for testing purposes using usb-vhci, and I'm having some problems.
To give some context: the device is a CDC abstract modem, and the client - a java program - communicates with it over the serial line using AT commands.
Basically, I've got my device up and running, it registers itself correctly and I'm able to receive commands from and respond to the client.
The main problem appears to be that as soon as the device starts up or receives a bulk transfer from the host, it triggers an ongoing stream of bulk and interrupt IN transfers (massive amounts; my usbmon log grows to 100 MB in a few seconds).
First at startup, where it keeps spewing out (mainly) bulk IN transfers until I receive the SET_CONTROL_LINE_STATE request, at which point they stop. Then, when the client sends commands (AT commands via the serial device), it starts again.
I'm guessing this is because I'm not responding correctly to some transfer, but I can't figure out what it is.
I've been comparing the usbmon output of my device with that of the real device, but so far I haven't been able to detect any difference that would explain why my emulated device behaves like this and the real one doesn't.
I basically started out with the example code found in libusb_vhci/examples/virtual_device2.c and adapted it to mimic the actual device. First off, the descriptors:
const uint8_t dev_desc[] = {
    /* Device Descriptor */
    0x12,       //bLength 18
    0x01,       //bDescriptorType 1 (DEVICE)
    0x00, 0x02, //bcdUSB 2.00
    0x02,       //bDeviceClass 2 Communications
    0x00,       //bDeviceSubClass 0
    0x00,       //bDeviceProtocol 0
    0x40,       //bMaxPacketSize0 64
    0x5a, 0x06, //idVendor 065a
    0x02, 0xa0, //idProduct a002
    0x00, 0x01, //bcdDevice 1.00
    0x00,       //iManufacturer 0
    0x01,       //iProduct 1
    0x00,       //iSerial 0
    0x01        //bNumConfigurations 1
};
const uint8_t conf_desc[] = {
    /* Configuration Descriptor */
    0x09,       //bLength 9
    0x02,       //bDescriptorType 2 (CONFIGURATION)
    0x43, 0x00, //wTotalLength 67 ??
    0x02,       //bNumInterfaces 2
    0x01,       //bConfigurationValue 1
    0x00,       //iConfiguration 0
    0x80,       //bmAttributes (Bus Powered) 0x80
    250,        //bMaxPower 250 * 2mA = 500mA
    /* Interface Descriptor 0 */
    0x09,       //bLength 9
    0x04,       //bDescriptorType 4 (INTERFACE)
    0x00,       //bInterfaceNumber 0
    0x00,       //bAlternateSetting 0
    0x01,       //bNumEndpoints 1
    0x02,       //bInterfaceClass 2 Communications
    0x02,       //bInterfaceSubClass 2 Abstract (modem)
    0x00,       //bInterfaceProtocol 0 None
    0x00,       //iInterface 0
    /* CDC Header */
    0x05,       //bLength 5
    0x24,       //bDescriptorType CDC_CS_INTERFACE
    0x00,       //bDescriptorSubtype CDC_HEADER
    0x10, 0x01, //bcdCDC 1.10
    /* CDC Call Management */
    0x05,       //bLength 5
    0x24,       //bDescriptorType CDC_CS_INTERFACE
    0x01,       //bDescriptorSubtype CDC_CALL_MANAGEMENT
    0x01,       //bmCapabilities 0x01
    0x00,       //bDataInterface 0
    /* CDC ACM */
    0x04,       //bLength 4
    0x24,       //bDescriptorType CDC_CS_INTERFACE
    0x02,       //bDescriptorSubtype CDC_ABSTRACT_CONTROL_MANAGEMENT
    0x02,       //bmCapabilities 0x02
    /* CDC Union */
    0x05,       //bLength 5
    0x24,       //bDescriptorType CDC_CS_INTERFACE
    0x06,       //bDescriptorSubtype CDC_UNION
    0x00,       //bMasterInterface 0
    0x01,       //bSlaveInterface 1
    /* Endpoint Descriptor */
    0x07,       //bLength 7
    0x05,       //bDescriptorType 5 (ENDPOINT)
    0x83,       //bEndpointAddress 0x83 EP 3 IN
    0x03,       //bmAttributes 3 (Interrupt)
    0x40, 0x00, //wMaxPacketSize 0x0040 1x 64 bytes
    0x0a,       //bInterval 10
    /* Interface Descriptor 1 */
    0x09,       //bLength 9
    0x04,       //bDescriptorType 4 (INTERFACE)
    0x01,       //bInterfaceNumber 1
    0x00,       //bAlternateSetting 0
    0x02,       //bNumEndpoints 2
    0x0a,       //bInterfaceClass 10 CDC Data
    0x00,       //bInterfaceSubClass 0
    0x00,       //bInterfaceProtocol 0
    0x00,       //iInterface 0
    /* Endpoint Descriptor */
    0x07,       //bLength 7
    0x05,       //bDescriptorType 5 (ENDPOINT)
    0x01,       //bEndpointAddress 0x01 EP 1 OUT
    0x02,       //bmAttributes 2 (Bulk)
    0x40, 0x00, //wMaxPacketSize 0x0040 1x 64 bytes
    0x00,       //bInterval 0
    /* Endpoint Descriptor */
    0x07,       //bLength 7
    0x05,       //bDescriptorType 5 (ENDPOINT)
    0x82,       //bEndpointAddress 0x82 EP 2 IN
    0x02,       //bmAttributes 2 (Bulk)
    0x40, 0x00, //wMaxPacketSize 0x0040 1x 64 bytes
    0x00        //bInterval 0
};
const uint8_t str0_desc[] = {
    0x04,       //bLength 4
    0x03,       //bDescriptorType 3 (STRING)
    0x09, 0x04  //bLanguage 0409 US
};
/* "Opticon USB Barcode Reader" in UTF-16LE; bLength 0x36, type 0x03 */
const uint8_t *str1_desc = (uint8_t *)
    "\x36\x03O\0p\0t\0i\0c\0o\0n\0 \0U\0S\0B\0 \0B\0a\0r\0c\0o\0d\0e\0 \0R\0e\0a\0d\0e\0r";
The main function is the same as in the example; the process_urb() function is what has mainly changed. The control section is largely intact, but I've added handling for some additional setup packets:
uint8_t rt = urb->bmRequestType;
uint8_t r = urb->bRequest;
if(rt == 0x00 && r == URB_RQ_SET_CONFIGURATION)
{
    devlog("URB_RQ_SET_CONFIGURATION\n");
    urb->status = USB_VHCI_STATUS_SUCCESS;
}
else if(rt == 0x00 && r == URB_RQ_SET_INTERFACE)
{
    devlog("URB_RQ_SET_INTERFACE\n");
    urb->status = USB_VHCI_STATUS_SUCCESS;
}
else if(rt == 0x21 && r == 0x20)
{
    devlog("URB_CDC_SET_LINE_CODING\n");
    urb->status = USB_VHCI_STATUS_SUCCESS;
}
else if(rt == 0x21 && r == 0x22)
{
    devlog("URB_CDC_SET_CONTROL_LINE_STATE\n");
    urb->status = USB_VHCI_STATUS_SUCCESS;
}
else if(rt == 0x80 && r == URB_RQ_GET_DESCRIPTOR)
{
    int l = urb->wLength;
    uint8_t *buffer = urb->buffer;
    devlog("GET_DESCRIPTOR ");
    switch(urb->wValue >> 8)
    {
    case 0:
        puts("WTF_DESCRIPTOR");
        urb->status = USB_VHCI_STATUS_SUCCESS;
        break;
    case 1:
        puts("DEV_DESC");
        if(dev_desc[0] < l) l = dev_desc[0];
        memcpy(buffer, dev_desc, l);
        urb->buffer_actual = l;
        urb->status = USB_VHCI_STATUS_SUCCESS;
        break;
    case 2:
        puts("CONF_DESC");
        if(conf_desc[2] < l) l = conf_desc[2];
        memcpy(buffer, conf_desc, l);
        urb->buffer_actual = l;
        urb->status = USB_VHCI_STATUS_SUCCESS;
        break;
    case 3:
        devlog(" Reading string %d\n", urb->wValue & 0xff);
        switch(urb->wValue & 0xff)
        {
        case 0:
            if(str0_desc[0] < l) l = str0_desc[0];
            memcpy(buffer, str0_desc, l);
            urb->buffer_actual = l;
            urb->status = USB_VHCI_STATUS_SUCCESS;
            break;
        case 1:
            if(str1_desc[0] < l) l = str1_desc[0];
            memcpy(buffer, str1_desc, l);
            urb->buffer_actual = l;
            urb->status = USB_VHCI_STATUS_SUCCESS;
            break;
        default:
            devlog(" Trying to read unknown string: %d\n", urb->wValue & 0xff);
            urb->status = USB_VHCI_STATUS_STALL;
            break;
        }
        break;
    default:
        devlog(" UNKNOWN: wValue=%d (%d)\n", urb->wValue, urb->wValue >> 8);
        urb->status = USB_VHCI_STATUS_STALL;
        break;
    }
}
else
{
    devlog("OTHER bmRequestType %x bRequest %x\n", rt, r);
    urb->status = USB_VHCI_STATUS_STALL;
}
The main issue is in handling the non-control transfers, though. Here's my current implementation:
/* handle non-control transfers */
if(!usb_vhci_is_control(urb->type)) {
    /* BULK OUT transfer, i.e. a command from the client */
    if (usb_vhci_is_bulk(urb->type) && usb_vhci_is_out(urb->epadr)) {
        int cmd = get_at_command(urb->buffer, urb->buffer_actual);
        if (cmd == COMMAND_Z1) {
            /* request for version; need to wait for the BULK IN transfer */
            last_command = cmd;
        }
        urb->status = USB_VHCI_STATUS_SUCCESS;
        return;
    }
    /* BULK IN transfer: use it to respond to any buffered commands */
    if (usb_vhci_is_bulk(urb->type) && usb_vhci_is_in(urb->epadr)) {
        if (last_command) {
            /* send version */
            memcpy(urb->buffer, VERSION_STR, strlen(VERSION_STR));
            urb->buffer_actual = strlen(VERSION_STR);
            last_command = 0;
            urb->status = USB_VHCI_STATUS_SUCCESS;
            return;
        }
    }
    urb->status = USB_VHCI_STATUS_SUCCESS;
    return;
}
Here's a snippet of the usbmon log I get as my device is starting up:
ffff880510727900 266671312 S Bi:5:002:2 -115 128 <
ffff880510727f00 266671315 C Bi:5:002:2 0 0
ffff880510727f00 266671316 S Bi:5:002:2 -115 128 <
ffff880510727cc0 266671319 C Ii:5:002:3 0:8 0
ffff880510727cc0 266671321 S Ii:5:002:3 -115:8 64 <
ffff880514d80900 266671323 S Co:5:002:0 s 21 22 0000 0000 0000 0
ffff880510727780 266671324 C Bi:5:002:2 0 0
ffff880510727780 266671325 S Bi:5:002:2 -115 128 <
ffff8805101096c0 266671329 C Bi:5:002:2 0 0
ffff8805101096c0 266671333 S Bi:5:002:2 -115 128 <
ffff8805107273c0 266671339 C Bi:5:002:2 0 0
ffff8805107273c0 266671344 S Bi:5:002:2 -115 128 <
ffff880510109b40 266671348 C Bi:5:002:2 0 0
ffff880510109b40 266671350 S Bi:5:002:2 -115 128 <
ffff880510109000 266671354 C Bi:5:002:2 0 0
ffff880510109000 266671357 S Bi:5:002:2 -115 128 <
ffff880510727d80 266671360 C Bi:5:002:2 0 0
ffff880510727d80 266671361 S Bi:5:002:2 -115 128 <
ffff880510109a80 266671363 C Bi:5:002:2 0 0
ffff880510109c00 266671370 C Bi:5:002:2 0 0
...
So, this is where I'm stuck. I've got a nearly functioning device, but the massive number of transfers chokes my system, rendering it useless. Any help or info would be greatly appreciated!
It seems I have been able to resolve most of my issues now, and the problem was indeed that I was not responding correctly to events.
After doing some more detailed analysis of the usbmon output for the real device, I noticed that it was responding to the superfluous interrupt transfers with -ENOENT, whereas I was responding with 0 (i.e. success). Some more digging into the usb-vhci code revealed that this error code corresponds to USB_VHCI_STATUS_CANCELED, and once I started responding with this I got the same behavior from my device as from the real one. Essentially, I added this to the non-control section of process_urb:
/* if we have an INTERRUPT transfer */
if (usb_vhci_is_int(urb->type)) {
    urb->status = USB_VHCI_STATUS_CANCELED;
    return;
}
I'm not entirely out of the woods yet, though. The same thing seems to apply to bulk IN transfers: I'm getting a ton of them during startup (they stop as soon as setup is complete), which again does not appear to be the case for the real device, and the real device again responds to these superfluous transfers with -ENOENT. I tried doing the same, and it appears to work fine: the additional transfers stop and the device behaves just like the real one. Unfortunately, it also results in my device no longer being able to send data back to the client. I modified my bulk IN handling code as follows:
/* if we have a BULK IN transfer */
if (usb_vhci_is_bulk(urb->type) && usb_vhci_is_in(urb->epadr)) {
    if (last_command) {
        // send version
        memcpy(urb->buffer, VERSION_STR, strlen(VERSION_STR));
        urb->buffer_actual = strlen(VERSION_STR);
        last_command = 0;
        urb->status = USB_VHCI_STATUS_SUCCESS;
    } else {
        urb->status = USB_VHCI_STATUS_CANCELED;
    }
    return;
}
I figure this should work: if I received a command in the previous bulk OUT transfer, I should be able to use the IN transfer to respond (as I've been doing all along), and if there was no command I just respond with -ENOENT. For some reason this does not work, though, and I'm not sure why.
Another thing I noticed in the trace from the real device: although it does respond to these bulk transfers with -ENOENT, it sends the response more than 10 seconds (!) after receiving the request! Not sure what that's all about, but if anyone has an idea I'd be most grateful.
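One possible reading of that 10-second gap (an assumption on my part, not something the trace proves): in usbmon, -ENOENT usually marks a URB that the host itself unlinked, which would mean the real device never completes the superfluous IN URBs at all; it just holds them pending until data exists or until the host gives up and cancels them. A sketch of that approach is below; the work-type and field names follow libusb_vhci as I recall them and should be verified against libusb_vhci.h:

/* Sketch only, not a drop-in fix: instead of completing superfluous bulk IN
   URBs right away, park them, complete them when response data arrives, and
   give them back as canceled when the host unlinks them. */
#define MAX_PENDING 16
static struct usb_vhci_urb pending_in[MAX_PENDING]; /* parked IN URBs; their buffers must stay valid */
static int n_pending;

static void handle_work(int fd, struct usb_vhci_work *work) {
    switch (work->type) {
    case USB_VHCI_WORK_TYPE_PROCESS_URB:
        if (usb_vhci_is_bulk(work->work.urb.type) &&
            usb_vhci_is_in(work->work.urb.epadr) && !last_command) {
            /* no response data yet: park the URB instead of giving it back */
            if (n_pending < MAX_PENDING)
                pending_in[n_pending++] = work->work.urb;
            return; /* note: no usb_vhci_giveback() here */
        }
        process_urb(&work->work.urb);
        usb_vhci_giveback(fd, &work->work.urb);
        break;
    case USB_VHCI_WORK_TYPE_CANCEL_URB:
        /* the host gave up on a URB; this is what usbmon shows as -ENOENT */
        for (int i = 0; i < n_pending; i++) {
            if (pending_in[i].handle == work->work.handle) {
                pending_in[i].status = USB_VHCI_STATUS_CANCELED;
                usb_vhci_giveback(fd, &pending_in[i]);
                pending_in[i] = pending_in[--n_pending];
                break;
            }
        }
        break;
    default:
        break;
    }
}

With this structure, a bulk IN URB completes immediately only when there is buffered response data; otherwise it stays parked, which would match the long-delayed -ENOENT completions in the real device's trace.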

QThread crashes the program?

I implemented QThread like this, but the program crashes when it runs.
I've searched and seen posts saying this is not the correct way to use QThread, but I cannot find the reason for the crashes: all I do is trigger on_Create_triggered(), and I guarantee the mutex is locked and unlocked properly.
I have tested the program for two days (only by printing results with std::cerr << ...;) but still cannot find the reason. My guess is that the thread may wait too long for the lock and cause the program to crash (which doesn't sound reasonable...) :)
My code:
Background.h
class Background : public QThread
{
    Q_OBJECT
public:
    Background(int& val, DEVMAP& map, QQueue<LogInfoItem*>& queue, QList<DEV*>& devlist, QList<IconLabel*>& icllist, QMutex& m)
        : val_i(val), DevMap(map), LogInfoQueue(queue), DevInfoList(devlist), IconLabelList(icllist), mutex(m)
    {}
    ~Background();

protected:
    void run(void);

private:
    DEVMAP& DevMap;
    QQueue<LogInfoItem*>& LogInfoQueue;
    QList<DEV*>& DevInfoList;
    QList<IconLabel*>& IconLabelList;
    int& val_i;
    QMutex& mutex;
    void rcv();
};
Background.cpp
#include "background.h"
Background::~Background()
{
LogFile->close();
}
void Background::run(void)
{
initFile();
while(1)
{
msleep(5);
rcv();
}
}
void Background::rcv()
{
mutex.lock();
...
...//access DevMap, LogInfoQueue, DevInfoList, IconLabelList and val_i;
...
mutex.unlock();
}
MainWindow (MainWindow has Background* back as a member):
void MainWindow::initThread()
{
    back = new Background(val_i, dev_map, logDisplayQueue, devInfoList, iconLabelList, mutex);
    back->start();
}

void MainWindow::on_Create_triggered()
{
    mutex.lock();
    ...
    ... // access DevMap, LogInfoQueue, DevInfoList, IconLabelList and val_i
    ...
    mutex.unlock();
}
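For reference, this is the worker-object pattern the posts mentioned above recommend; a minimal sketch, and (as it turned out, see below) orthogonal to the actual crash:

// No QThread subclass: a plain QObject worker is moved to a QThread.
class Worker : public QObject
{
    Q_OBJECT
public slots:
    void doWork()
    {
        // runs in the worker thread; guard shared data with the mutex as before
    }
};

// e.g. in MainWindow::initThread():
QThread* thread = new QThread(this);
Worker* worker = new Worker; // no parent, so it can be moved to the thread
worker->moveToThread(thread);
connect(thread, &QThread::started, worker, &Worker::doWork);
connect(thread, &QThread::finished, worker, &QObject::deleteLater);
thread->start();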
I have found the reason, and it is more subtle.
(I used some code written by others and believed it was not broken; I was totally wrong! :) )
The broken code:
#define DATABUFLEN 96

typedef struct Para // totally 100 bytes
{
    UINT8 type;
    UINT8 len;
    UINT8 inType;
    UINT8 inLen;
    UINT8 value[DATABUFLEN]; // 96 bytes here
} ERRORTLV;

class BitState
{
public:
    UINT8 dataBuf[DATABUFLEN];
    ......
};
And the function using it:
bool BitState::rcvData() // this function writes past the end of the array
{
    UINT8 data[12] =
    {
        0x72, 0x0A, 0x97, 0x08,
        0x06, 0x0A, 0x0C, 0x0F,
        0x1E, 0x2A, 0x50, 0x5F,
    }; // only 12 bytes
    UINT32 dataLen = 110;
    memcpy(this->dataBuf, data, dataLen); // copies 110 bytes to dataBuf,
    // yet no error or warning from the compiler, and no runtime error reveals the overrun
}

bool BitState::parseData(BitLog* bitLog) // casts dataBuf to Para*, but only uses 0x08 + 4 = 12 bytes of dataBuf
{
    Para* para_tmp;
    if(*(this->dataBuf) == 0x77)
    {
        para_tmp = (ERRORTLV*)this->dataBuf;
    }
    if(para_tmp->type != 0x72 || para_tmp->inType != 0x97 || (para_tmp->len - para_tmp->inLen) != 2) // inLen == 0x08
    {
        return false;
    }
    else
    {
        // parse dataBuf according to Para's structure
        this->bitState.reset();
        for(int i = 0; i < para_tmp->inLen; i++) // inLen == 0x08 only !!!
        {
            this->bitState[para_tmp->value[i] - 6] = 1;
        }
        if(this->bitState.none())
            this->setState(NORMAL);
        else
            this->setState(FAULT);
        QString currentTime = (QDateTime::currentDateTime()).toString("yyyy.MM.dd hh:mm:ss.zzz");
        string sysTime = string((const char *)currentTime.toLocal8Bit());
        this->setCurTime(sysTime);
        this->addLog(sysTime, bitLog);
    }
    return true;
}

bool BitState::addLog(std::string sysTime, BitLog* bitLog) // this function is right
{
    bitLog->basicInfo = this->basicInfo; // not in dataBuf, already allocated and initialized (right)
    bitLog->bitState = this->bitState;   // state is set by setState(..)
    bitLog->rcvTime = sysTime;           // time
    return true;
}
Generally speaking, the program allocates 96 bytes for a byte array, but uses memcpy(...) to copy 110 bytes into it, and later uses only 12 bytes of the array.
All kinds of crashes appear, which is confusing and frustrating... :( :( :(
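For what it's worth, a minimal guard for the overrun described above (names taken from the snippet; the real fix is simply never to copy more than the destination holds):

bool BitState::rcvData()
{
    UINT8 data[12] =
    {
        0x72, 0x0A, 0x97, 0x08,
        0x06, 0x0A, 0x0C, 0x0F,
        0x1E, 0x2A, 0x50, 0x5F,
    };
    UINT32 dataLen = sizeof(data);      // 12, instead of the hard-coded 110
    if(dataLen > sizeof(this->dataBuf)) // never write past dataBuf
        dataLen = sizeof(this->dataBuf);
    memcpy(this->dataBuf, data, dataLen);
    return true;
}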
