I'm trying to connect a Newport AGILIS AG-UC8 controller to a Linux machine via USB.
Internally, this controller has an FT232R USB-to-serial chip that should accept simple text commands, such as "VE\r\n" to print the version.
This works fine with the provided Windows driver and software, but on Linux (tested with pyserial and others) the chip doesn't answer. Basically, I initialized the device with the values that the Windows device manager showed and then just sent the command:
ser = serial.Serial(port='/dev/ttyUSB0', baudrate=9600, parity=serial.PARITY_NONE, stopbits=serial.STOPBITS_ONE, bytesize=serial.EIGHTBITS)
ser.write(b"VE\r\n")
I did some debugging with Wireshark USB captures:
Windows packet (working, from the provided software):
0000 1b 00 20 9a 40 55 8a ad ff ff 00 00 00 00 09 00 .. .@U..........
0010 00 01 00 03 00 02 03 04 00 00 00 56 45 0d 0a ...........VE..
Linux packet (not working, from pyserial):
0000 80 dd 35 82 e5 99 ff ff 53 03 02 08 02 00 2d 00 ..5.....S.....-.
0010 b6 08 41 60 00 00 00 00 3a 6f 00 00 8d ff ff ff ..A`....:o......
0020 04 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
0040 56 45 0d 0a VE..
As you can see, the packet on Windows is much smaller and does not contain the padding zeros. These zeros follow immediately after the data-length field (04), so I am wondering whether they are what confuses the controller.
Is there a way to send the data without the zero padding? Preferably in pyserial, but for now I'd be happy with any solution that lets me talk to my device.
Answering your question, but not necessarily solving your problem: those extra zeros are an artifact of the sniffer capturing traffic between the kernel and libusb. They are part of the per-URB capture header and never reach the device, which is also why your capture looks different between the two environments.
Specifically, they are the values stuffed into the pcap_usb_setup structure that sits after the URB and data-length fields. There's no particular way, or reason, to try to eliminate them.
/*
 * Header prepended by the Linux kernel to each event.
 * Appears at the front of each packet in DLT_USB_LINUX captures.
 */
typedef struct _usb_header {
    uint64_t id;
    uint8_t event_type;
    uint8_t transfer_type;
    uint8_t endpoint_number;
    uint8_t device_address;
    uint16_t bus_id;
    char setup_flag;   /* if != 0, the URB setup header is not present */
    char data_flag;    /* if != 0, no URB data is present */
    int64_t ts_sec;
    int32_t ts_usec;
    int32_t status;
    uint32_t urb_len;
    uint32_t data_len; /* amount of URB data really present in this event */
    pcap_usb_setup setup;
} pcap_usb_header;

/*
 * USB setup header as defined in the USB specification.
 * Appears at the front of each Control S-type packet in DLT_USB captures.
 */
typedef struct _usb_setup {
    uint8_t bmRequestType;
    uint8_t bRequest;
    uint16_t wValue;
    uint16_t wIndex;
    uint16_t wLength;
} pcap_usb_setup;
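For reference, here is the Linux packet from your capture mapped onto those fields (my own annotation; the 64-byte header in your capture appears to be the extended DLT_USB_LINUX_MMAPPED variant, which appends four more 32-bit fields after data_len):

80 dd 35 82 e5 99 ff ff   id
53                        event_type ('S' = submit)
03                        transfer_type (3 = bulk)
02                        endpoint_number
08                        device_address
02 00                     bus_id
2d                        setup_flag ('-' = no setup header)
00                        data_flag (0 = data present)
b6 08 41 60 00 00 00 00   ts_sec
3a 6f 00 00               ts_usec
8d ff ff ff               status (-115 = -EINPROGRESS)
04 00 00 00               urb_len
04 00 00 00               data_len
00 00 ... 00 00           setup (unused for bulk) and extended fields, all zero
56 45 0d 0a               "VE\r\n"

The last four bytes are the only thing the device ever sees, and they are identical to the Windows capture.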
I want to set up GlobalPlatform Secure Channel Protocol 01. My trace is:
Send: 80 50 00 00 08 00 00 00 00 00 00 00 00
Recv: 00 00 00 00 00 00 00 00 00 00 FF 02 00 02 0E 5A 8F F4 57 DD 35 5C 49 A6 8B 15 E9 A5 9000
So I have:
Card challenge = 00 02 0E 5A 8F F4 57 DD
Host challenge = 00 00 00 00 00 00 00 00
According to SCP01 (derivation scheme image):
Derivation data = 8F F4 57 DD 00 00 00 00 00 02 0E 5A 00 00 00 00
IV=0000000000000000
c_ENC: 404142434445464748494A4B4C4D4E4F
According to this image and an online 3DES tool:
session s_ENC= C72F032C8BAD55D4D2579295CCF0A6CA
Now:
host-auth data = card challenge + host challenge + padding
host-auth = 00020E5A8FF457DD00000000000000008000000000000000
s_ENC=C72F032C8BAD55D4D2579295CCF0A6CA
IV=0000000000000000
===========
result= 93CC77E144488A031BFFCCC62EB3B5C233A485F8255FE90E
Host cryptogram= 33A485F8255FE90E
But when I send:
848200000833A485F8255FE90E
I get error 0x6700 in method SDInstruction at the line
short len = sc.processSecurity(apdu);
public void process(APDU apdu) throws ISOException {
    if (selectingApplet()) {
        return;
    }
    byte[] buffer = apdu.getBuffer();
    switch (buffer[ISO7816.OFFSET_INS]) {
    case ISO7816.INS_SELECT:
        select();
        return;
    case INS_INIT_UPDATE:
    case INS_EXT_AUTH:
        SDInstruction(apdu);
        break;
    }
}

private void SDInstruction(APDU apdu)
{
    byte[] buf = apdu.getBuffer();
    byte cla = buf[ISO7816.OFFSET_CLA];
    byte ins = buf[ISO7816.OFFSET_INS];
    apdu.setIncomingAndReceive();
    if (ins == INS_INIT_UPDATE)
        sc = GPSystem.getSecureChannel();
    short len = sc.processSecurity(apdu);
    apdu.setOutgoing();
    apdu.setOutgoingLength(len);
    apdu.sendBytes(ISO7816.OFFSET_CDATA, (short) len);
}
Your card is using SCP02 and not SCP01.
Given the response to the INITIALIZE UPDATE command:
00 00 00 00 00 00 00 00 00 00 FF 02 00 02 0E 5A 8F F4 57 DD 35 5C 49 A6 8B 15 E9 A5 9000
The "FF 02" part is the "Key Information", which contains:
"Key Version Number" -- in your trace 0xFF
"Secure Channel Protocol Identifier" -- in your trace it is 0x02 indicating SCP02
See the Global Platform Card Specification for further reference (sections describing the INITIALIZE UPDATE command).
So you need to establish the secure channel with the card according to the SCP02.
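If you want to check this programmatically, here is a minimal sketch of splitting the 28-byte INITIALIZE UPDATE response (status word stripped; offsets per the GlobalPlatform spec, variable names are mine):

/* Split an SCP02 INITIALIZE UPDATE response into its fields. */
#include <stdio.h>

int main(void)
{
    unsigned char resp[28] = {
        0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* key diversification data */
        0xFF, 0x02,                                                 /* key info: version, SCP id */
        0x00, 0x02, 0x0E, 0x5A, 0x8F, 0xF4, 0x57, 0xDD,             /* sequence counter + card challenge */
        0x35, 0x5C, 0x49, 0xA6, 0x8B, 0x15, 0xE9, 0xA5              /* card cryptogram */
    };

    printf("key version number: 0x%02X\n", resp[10]); /* 0xFF */
    printf("SCP identifier:     0x%02X\n", resp[11]); /* 0x02 -> SCP02 */
    return 0;
}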
Some additional (random) notes:
be sure to check the "i" secure channel parameter encoded inside the "Card Recognition Data" (tag '64') as well
you might want to look at the method GlobalPlatform.openSecureChannel() and the inner class GlobalPlatform.SCP0102Wrapper in the GlobalPlatformPro tool source code
Good luck!
According to the GlobalPlatform specification, the EXTERNAL AUTHENTICATE command has to include the host cryptogram as well as the MAC. Both are 8 bytes long; hence, the command data field should be 16 bytes in total.
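A minimal sketch of a correctly sized command (the cryptogram and MAC bytes below are placeholders, and P1 = 0x00 assumes no additional security level):

/* EXTERNAL AUTHENTICATE APDU skeleton -- placeholder data, not a real trace */
unsigned char ext_auth[5 + 16] = {
    0x84, 0x82, 0x00, 0x00,                         /* CLA, INS, P1 (security level), P2 */
    0x10,                                           /* Lc = 16 */
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, /* 8-byte host cryptogram */
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00  /* 8-byte C-MAC */
};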
If you want to implement the generation of this MAC value yourself, you can follow the description in the GlobalPlatform spec. But I suggest you make use of an available open-source implementation. For example, GPJ is a Java implementation of the GlobalPlatform specification and has all the commands that you need. Take a look at the class GlobalPlatformService, where you will find the implementation of the secure channel protocol. GPDroid (github.com/mobilesec/secure-element-gpdroid) is a wrapper for this project on Android.
I am trying to understand how VFIO works with PCI.
I have read https://www.kernel.org/doc/Documentation/vfio.txt, and I am writing a test based on it.
At some point, my code looks like this:
/* Get PCI device information (number of regions and interrupts...) */
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_INFO, &device_info);
if (ret) {
    printf(" %s cannot get device info, error %i (%s)\n",
           pci_addr, errno, strerror(errno));
    close(vfio_dev_fd);
    return -1;
}
printf("found %d regions in the device\n", device_info.num_regions);
What I do not understand is that the last printf shows 9 (nine) regions!
Looking at the config space of my PCI device I can see:
cat /sys/bus/pci/devices/0000\:06\:00.0/config | hd
00000000 e4 14 81 16 00 04 10 00 10 00 00 02 10 00 00 00 |................|
00000010 04 00 de f3 00 00 00 00 04 00 df f3 00 00 00 00 |................|
00000020 00 00 00 00 00 00 00 00 00 00 00 00 28 10 6e 02 |............(.n.|
00000030 00 00 00 00 48 00 00 00 00 00 00 00 0a 01 00 00 |....H...........|
00000040
I am new to PCI, but my interpretation of this is that there are only two BARs on this board (0 and 2). And I thought that the maximum number of regions (BARs?) on PCI was 6.
There is nevertheless a hint that I am misunderstanding something: the kernel doc mentioned above states:
For PCI devices, config space is a region
So that could lead to 3 regions for the board I am looking at... Still it says 9...
What is a PCI region for vfio-pci? How are they numbered and accessed?
Thanks.
From vfio.h:
/*
* The VFIO-PCI bus driver makes use of the following fixed region and
* IRQ index mapping. Unimplemented regions return a size of zero.
* Unimplemented IRQ types return a count of zero.
*/
enum {
    VFIO_PCI_BAR0_REGION_INDEX,
    VFIO_PCI_BAR1_REGION_INDEX,
    VFIO_PCI_BAR2_REGION_INDEX,
    VFIO_PCI_BAR3_REGION_INDEX,
    VFIO_PCI_BAR4_REGION_INDEX,
    VFIO_PCI_BAR5_REGION_INDEX,
    VFIO_PCI_ROM_REGION_INDEX,
    VFIO_PCI_CONFIG_REGION_INDEX,
    /*
     * Expose VGA regions defined for PCI base class 03, subclass 00.
     * This includes I/O port ranges 0x3b0 to 0x3bb and 0x3c0 to 0x3df
     * as well as the MMIO range 0xa0000 to 0xbffff. Each implemented
     * range is found at it's identity mapped offset from the region
     * offset, for example 0x3b0 is region_info.offset + 0x3b0. Areas
     * between described ranges are unimplemented.
     */
    VFIO_PCI_VGA_REGION_INDEX,
    VFIO_PCI_NUM_REGIONS
};
So the nine regions are the six BARs plus three fixed extras: the additional indices provide access to the configuration space, the option ROM, and VGA-specific I/O (the legacy ports and the 0xa0000...0xbffff framebuffer window). Indices your device does not implement, such as the unused BARs, simply report a size of zero.
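You can see this from your test program by querying each index with VFIO_DEVICE_GET_REGION_INFO. A minimal sketch, reusing the vfio_dev_fd and device_info from your snippet:

/* Print the size and file offset of every region the device exposes.
 * Unimplemented regions (here BAR1 and BAR3-BAR5) report size 0. */
#include <linux/vfio.h>
#include <stdio.h>
#include <sys/ioctl.h>

static void dump_regions(int vfio_dev_fd, unsigned int num_regions)
{
    for (unsigned int i = 0; i < num_regions; i++) {
        struct vfio_region_info reg = { .argsz = sizeof(reg), .index = i };

        if (ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg))
            continue;
        printf("region %u: size 0x%llx at offset 0x%llx\n", i,
               (unsigned long long)reg.size, (unsigned long long)reg.offset);
    }
}

The offset tells you where a region lives within the device file descriptor, i.e. what you pass to pread()/pwrite() or mmap() on vfio_dev_fd to access that BAR or the config space.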
There are some threads on how to use BlueZ as an iBeacon or BLE peripheral.
But when I use a BLE scanner (a BLE central application on Android), it shows the BlueZ peripheral as dual-mode.
What should I do to disable classic (BR/EDR) mode in BlueZ?
Since you are referring to that thread, you probably use hcitool to set the advertisement data and options.
You need to adjust the Flags field of the advertising data (see the Bluetooth Core Specification Supplement, Part A, p. 12). The original value 0x1A has bit 3 ("Simultaneous LE and BR/EDR, Controller") set; for an LE-only peripheral you clear that bit and set bit 2 ("BR/EDR Not Supported") instead.
So the original 0x1A changes to 0x16:
Change
sudo hcitool -i hci0 cmd 0x08 0x0008 1e 02 01 1a 1a ff 4c 00 02 15 e2 c5 6d b5 df fb 48 d2 b0 60 d0 f5 a7 10 96 e0 00 00 00 00 c5 00 00 00 00 00 00 00 00 00 00 00 00 00
to
sudo hcitool -i hci0 cmd 0x08 0x0008 1e 02 01 16 1a ff 4c 00 02 15 e2 c5 6d b5 df fb 48 d2 b0 60 d0 f5 a7 10 96 e0 00 00 00 00 c5 00 00 00 00 00 00 00 00 00 00 00 00 00
Running btmon in another shell while executing the commands allows you to observe what's going on exactly.
I have a node application which runs on a Raspberry Pi and keeps track of a bunch of UPnP players (Sonos), which I would like to control through a physical remote. I have a couple of air mouses, which have small keyboards as well as volume buttons that I would like to use.
I have tried to get a grip on how to read physical keystrokes on a Linux machine, and have come to the conclusion that I need to read events from the input device, which in my case would be:
/dev/input/by-id/usb-Dell_Dell_QuietKey_Keyboard-event-kbd
How to find the device and stuff like that is not a problem; the real issue is how to interpret the data that you read from it.
I know that you would receive a C struct, like this:
struct input_event {
    struct timeval time;
    unsigned short type;
    unsigned short code;
    unsigned int value;
};
But I'm not sure how I would go about reading this from node. My second option would be an external app that is triggered by pre-defined keystrokes and then invokes an HTTP request against my node application: a Python script or some native daemon. I have looked at some hotkey daemons, but none of them worked.
It would of course be nice if I could contain it all within node somehow.
EDIT: So I did some testing, and made a simple snippet:
var fs = require('fs');
var buffer = new Buffer(16);
fs.open('/dev/input/by-id/usb-HJT_Air_Mouse-event-kbd', 'r', function (err, fd) {
while (true) {
fs.readSync(fd, buffer, 0, 16, null);
console.log(buffer)
}
});
This outputs something like this (for space):
<Buffer a4 3e 5b 51 ab cf 03 00 04 00 04 00 2c 00 07 00>
<Buffer a4 3e 5b 51 c3 cf 03 00 01 00 39 00 01 00 00 00>
<Buffer a4 3e 5b 51 cb cf 03 00 00 00 00 00 00 00 00 00>
<Buffer a4 3e 5b 51 ba 40 06 00 04 00 04 00 2c 00 07 00>
<Buffer a4 3e 5b 51 cd 40 06 00 01 00 39 00 00 00 00 00>
<Buffer a4 3e 5b 51 d2 40 06 00 00 00 00 00 00 00 00 00>
I realize that the first four bytes are some sort of timestamp, and the following 3 bytes could be something like a micro/millisecond thing.
Another odd thing is that not all keypresses produce output, but a subsequent press might send twice as much data, and sometimes it starts blasting out data that only stops after further keypresses (or after about 20 seconds or so). I'm not really sure how to interpret that. I have tried to read the source of this daemon, https://github.com/baskerville/shkd/blob/master, but C is not my strongest language and I can't identify how he handles it (or if it should even be handled). And that daemon didn't even work for me (I compiled it on a Raspberry Pi).
Well, let's have a look at that struct.
struct input_event {
    struct timeval time;
    unsigned short type;
    unsigned short code;
    unsigned int value;
};
A struct timeval has this structure:
struct timeval
{
    __time_t tv_sec;        /* Seconds. */
    __suseconds_t tv_usec;  /* Microseconds. */
};
Those time types are defined as:
typedef signed long time_t;
typedef signed long suseconds_t;
On a 32-bit platform like the Raspberry Pi, a signed long is 4 bytes (the C standard only guarantees a minimum; on 64-bit Linux it would be 8 bytes), so the first 8 bytes are a timestamp. Next, you have a type and a code. Both are short, so in practice they're 2 bytes each. Now there's just the value left, and that's an int again, which will be four bytes. A compiler could theoretically add padding between the fields here, but since every field is naturally aligned, it won't.
So, first chop the bytes you've read into chunks of 4+4+2+2+4=16 bytes. Each of those chunks is an event. This fits your sample data. Next, extract the values from the buffer as little-endian values, which is the byte order of the Pi's ARM CPU (an x86 PC is little-endian too, so the same code would work there), and interpret the values. For instructions on how to do that, read http://www.mjmwired.net/kernel/Documentation/input/event-codes.txt. The values of the constants aren't written down there, but you can usually find those using grep -R NAME_OF_CONSTANT /usr/include.
Let's chop up
<Buffer a4 3e 5b 51 ab cf 03 00 04 00 04 00 2c 00 07 00>
as an example.
<Buffer a4 3e 5b 51 ab cf 03 00 04 00 04 00 2c 00 07 00>
        |  tv_sec   |  tv_usec  | type| code|   value  |
tv_sec in hex is 0x515b3ea4 (reversed order because it's little endian), which is 1364934308 in decimal. A simple unix time converter reports that this means 02.04.2013 - 22:25:08. Looks good!
tv_usec is 0x0003cfab=249771, so actually, the event happened 249771 microseconds after that time.
Type is 0x0004=4. /usr/include/linux/input.h tells us that this is an EV_MSC event.
Given the type, we can also see that the code, 0x0004=4, means MSC_SCAN.
The value is 0x0007002c, which turns up nowhere in input.h. That is expected for MSC_SCAN: the value is the raw hardware scan code, and this one looks like a USB HID usage (page 0x07 "Keyboard", usage 0x2C, which is the spacebar) rather than a kernel constant.
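If you want to cross-check the decoding outside of node, here is a minimal C sketch that reads the same structs straight from the device node (the path is the one from the question) and lets the compiler do the unpacking:

/* Read input events and print their fields. */
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/by-id/usb-HJT_Air_Mouse-event-kbd", O_RDONLY);

    if (fd < 0)
        return 1;
    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        /* For EV_KEY events, value 1 = press, 0 = release, 2 = autorepeat */
        printf("%ld.%06ld type=%hu code=%hu value=%d\n",
               (long)ev.time.tv_sec, (long)ev.time.tv_usec,
               ev.type, ev.code, (int)ev.value);
    }
    close(fd);
    return 0;
}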
I think what you're looking for is fs.createReadStream, so you can install some event handlers.
You can parse input events into structs by using the Buffer.readX routines:
var i = 0;
while ((buf.length - i) >= 16) {
    var event = {
        tssec:  buf.readUInt32LE(i + 0),
        tsusec: buf.readUInt32LE(i + 4),
        type:   buf.readUInt16LE(i + 8),
        code:   buf.readUInt16LE(i + 10),
        value:  buf.readUInt32LE(i + 12)
    };
    // do something with the event here, e.g. check for type EV_KEY
    i += 16;
}
I am trying to issue a SCSI READ(10) and WRITE(10) to an SSD. I use this example code as a reference/base.
This is my scsi read:
#define READ_REPLY_LEN 32
#define READ_CMDLEN 10

void scsi_read()
{
    unsigned char Readbuffer[ SCSI_OFF + READ_REPLY_LEN ];
    unsigned char cmdblk [ READ_CMDLEN ] =
        { 0x28,           /* command */
          0,              /* lun/reserved */
          0,              /* lba */
          0,              /* lba */
          0,              /* lba */
          0,              /* lba */
          0,              /* reserved */
          0,              /* transfer length */
          READ_REPLY_LEN, /* transfer length */
          0 };            /* reserved/flag/link */

    memset(Readbuffer, 0, sizeof(Readbuffer));
    memcpy(cmd + SCSI_OFF, cmdblk, sizeof(cmdblk));
    /*
     * +------------------+
     * | struct sg_header | <- cmd
     * +------------------+
     * | copy of cmdblk   | <- cmd + SCSI_OFF
     * +------------------+
     */
    if (handle_scsi_cmd(sizeof(cmdblk), 0, cmd,
                        sizeof(Readbuffer) - SCSI_OFF, Readbuffer)) {
        fprintf(stderr, "read failed\n");
        exit(2);
    }
    hex_dump(Readbuffer, sizeof(Readbuffer));
}
And this is my scsi write:
void scsi_write(void)
{
    unsigned char Writebuffer[SCSI_OFF];
    unsigned char cmdblk[] =
        { 0x2A, /* 0: command */
          0,    /* 1: lun/reserved */
          0,    /* 2: LBA */
          0,    /* 3: LBA */
          0,    /* 4: LBA */
          0,    /* 5: LBA */
          0,    /* 6: reserved */
          0,    /* 7: transfer length */
          0,    /* 8: transfer length */
          0 };  /* 9: control */

    memset(Writebuffer, 0, sizeof(Writebuffer));
    memcpy(cmd + SCSI_OFF, cmdblk, sizeof(cmdblk));

    cmd[SCSI_OFF + sizeof(cmdblk) + 0] = 'A';
    cmd[SCSI_OFF + sizeof(cmdblk) + 1] = 'b';
    cmd[SCSI_OFF + sizeof(cmdblk) + 2] = 'c';
    cmd[SCSI_OFF + sizeof(cmdblk) + 3] = 'd';
    cmd[SCSI_OFF + sizeof(cmdblk) + 4] = 'e';
    cmd[SCSI_OFF + sizeof(cmdblk) + 5] = 'f';
    cmd[SCSI_OFF + sizeof(cmdblk) + 6] = 'g';
    cmd[SCSI_OFF + sizeof(cmdblk) + 7] = 0;
    /*
     * +------------------+
     * | struct sg_header | <- cmd
     * +------------------+
     * | copy of cmdblk   | <- cmd + SCSI_OFF
     * +------------------+
     * | data to write    |
     * +------------------+
     */
    if (handle_scsi_cmd(sizeof(cmdblk), 8, cmd,
                        sizeof(Writebuffer) - SCSI_OFF, Writebuffer)) {
        fprintf(stderr, "write failed\n");
        exit(2);
    }
}
In the following example I do:
scsi read
scsi write
scsi read
and I print the hexdumps of the data that is written (scsi write) and read (scsi read):
Read(10)
[0000] 00 00 00 44 00 00 00 44 00 00 00 00 00 00 00 00 ...D...D ........
[0010] 00 2C 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0020] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0030] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0040] 00 00 00 00 ....
Write(10):
[0000] 00 00 00 00 00 00 00 24 00 00 00 00 00 00 00 00 ........ ........
[0010] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0020] 00 00 00 00 2A 00 00 00 00 00 00 00 00 00 41 62 ........ ......Ab
[0030] 63 64 65 66 67 00 cdefg.
Read(10):
[0000] 00 00 00 44 00 00 00 44 00 00 00 00 00 00 00 00 ...D...D ........
[0010] 04 00 20 00 70 00 02 00 00 00 00 0A 00 00 00 00 ....p... ........
[0020] 04 00 00 00 41 62 63 64 65 66 67 00 00 00 00 00 ....Abcd efg.....
[0030] 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ........ ........
[0040] 00 00 00 00 ....
After running the three commands again, I should read Abcdefg with the first read, right? But running them again changes nothing. You could now assume that the memory I use still holds the data from previous functions, but I get the same result even though I run memset(Readbuffer, 0, sizeof(Readbuffer)) before the scsi_read() happens.
I assumed that the LBA I try to write to is maybe write-protected and that I am reading a cache. But iterating over LBA addresses from 0x00-0xFF changes nothing; I still read the same data (Abcdefg).
Do you know an example implementation of SCSI reads or writes with the SCSI generic interface?
In SCSI, the units of the LBA and the transfer length are in blocks, sometimes called sectors. This is almost always 512 bytes. So, you can't read or write just 32 bytes. At a minimum, you'll have to do 512 bytes == one block. This one point is most of what you need to fix.
Your transfer length is zero in your scsi_write implementation, so it's not actually going to write any data.
You should use different buffers for the CDB and the write/read data. I suspect that confusion about these buffers is leading your implementation to write past the end of one of your statically-allocated arrays and over your ReadBuffer. Run it under valgrind and see what shows up.
And lastly, a lot could go wrong in whatever is in handle_scsi_cmd. It can be tricky to set up the data transfer... in particular, make sure you're straight on which way the data is going in the I/O header's dxfer_direction: SG_DXFER_TO_DEV for write, SG_DXFER_FROM_DEV for read.
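To illustrate that last point, here is a minimal sketch (mine, not taken from your reference code) of an SG_IO READ(10) that fetches one 512-byte block from LBA 0 through a /dev/sg* node:

/* READ(10) of one block via the SG_IO ioctl. */
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>

int read10_lba0(int fd)
{
    unsigned char cdb[10] = { 0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0 }; /* LBA 0, 1 block */
    unsigned char data[512];
    unsigned char sense[32];
    struct sg_io_hdr io;

    memset(&io, 0, sizeof(io));
    io.interface_id = 'S';
    io.cmd_len = sizeof(cdb);
    io.cmdp = cdb;
    io.dxfer_direction = SG_DXFER_FROM_DEV; /* read: device -> host */
    io.dxfer_len = sizeof(data);            /* exactly one block */
    io.dxferp = data;
    io.mx_sb_len = sizeof(sense);
    io.sbp = sense;
    io.timeout = 5000;                      /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0 || io.status != 0) {
        fprintf(stderr, "READ(10) failed\n");
        return -1;
    }
    return 0;
}

For a WRITE(10) you would fill data first and use SG_DXFER_TO_DEV instead.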
Check out this example of how to do a read(16). This is more along the lines of what you're trying to accomplish.
https://github.com/hreinecke/sg3_utils/blob/master/examples/sg_simple16.c