USB HID Codes mystery - python-3.x

I'm currently writing out USB HID codes in Python 3:
NULL_CHAR = chr(0)

def write_report(report):
    with open('/dev/hidg0', 'rb+') as fd:
        fd.write(report.encode())

# Press SHIFT + a = A
write_report(chr(32)+NULL_CHAR+chr(4)+NULL_CHAR*5)

# Release all keys
write_report(NULL_CHAR*8)
My question:
Looking at the standard "USB HID Usage Tables" from USB.org (v1.12, 10-21-2004, https://www.usb.org/document-library/hid-usage-tables-112) ...
I know from actually running the example above (using the Usage ID in decimal, FYI) that chr(32) acts as a SHIFT, but the HID table says that decimal 32 is Keyboard < 3 > or < # >.
What's up with that?
Am I using the wrong table? Should I be using ASCII codes instead of USB HID keyboard codes?
Is there a better/more accurate table of codes?
Because if chr(32) really is a SHIFT, what then would be < 3 >?
Is there an easier way to type function keys in combination (F1+F3+F5), as well as full lines of text, and issue them out over USB than the way I'm currently doing it?
Any help from the pros here to clear this up would be greatly appreciated!

Not sure if you still need the answer, but I figured it out after a day of headache over the same thing.
The information you're sending must always be transferred as 8 bytes.
So from your example:
write_report(chr(32)+NULL_CHAR+chr(4)+NULL_CHAR*5)
chr(32) is byte 0 (the modifier byte), then comes a null char at byte 1 (reserved), then the actual key code at byte 2, then another 5 null chars to make it 8.
Modifiers follow a different system from the actual key codes:
LCTRL 0x01
LSHIFT 0x02
LALT 0x04
LMETA 0x08
RCTRL 0x10
RSHIFT 0x20
RALT 0x40
RMETA 0x80
So the reason for 32 is that it's the decimal equivalent of 0x20, i.e. Right Shift. It comes from the modifier table above, not from the usage table you were reading.
Hope this helps you, or anyone else looking for this answer.
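Putting the two tables together, a small helper can build any 8-byte report, including the F1+F3+F5 combination from the question. This is a minimal sketch: the function name is my own, and the usage IDs (F1 = 0x3A, F3 = 0x3C, F5 = 0x3E, 'a' = 0x04) are taken from the keyboard page of the HID Usage Tables.

```python
LSHIFT = 0x02  # modifier bit from the table above, not a usage ID

def make_report(modifiers=0, *keys):
    """Pack a report: 1 modifier byte, 1 reserved byte, up to 6 key codes."""
    if len(keys) > 6:
        raise ValueError("a boot-protocol report carries at most 6 keys")
    padded = list(keys) + [0] * (6 - len(keys))
    return bytes([modifiers, 0] + padded)

# SHIFT + a -> 'A'
assert make_report(LSHIFT, 0x04) == b'\x02\x00\x04\x00\x00\x00\x00\x00'

# F1 + F3 + F5 pressed together, no modifiers
assert make_report(0, 0x3A, 0x3C, 0x3E) == b'\x00\x00\x3a\x3c\x3e\x00\x00\x00'

# Release all keys
assert make_report() == b'\x00' * 8
```

Each report would then be written to /dev/hidg0 as in the question, followed by the all-zero release report.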

Related

Understanding the code in rtc_interrupt

I need to understand the code in the "real time clock" function rtc_interrupt. The code is:
rtc_irq_data += 0x100;
rtc_irq_data &= ~0xff;
rtc_irq_data |= (CMOS_READ(RTC_INTR_FLAGS) & 0xF0);
I am unable to understand why it is += 0x100, and what the rest of the code does.
From the Book "Linux kernel development", from Robert Love, that snippet of code has the following comment(s):
/*
* Can be an alarm interrupt, update complete interrupt,
* or a periodic interrupt. We store the status in the
* low byte and the number of interrupts received since
* the last read in the remainder of rtc_irq_data.
*/
As for rtc_irq_data += 0x100;: per the comment, the number of interrupts received is counted in everything above the low byte, hence the 0x100. Adding 0x100 adds one to that counter (one more interrupt received) while leaving the low byte untouched.
As for the second line, rtc_irq_data &= ~0xff;: rtc_irq_data is bitwise-ANDed with the complement of 0xff (for a 32-bit value, 0xffffff00). The high part of the integer is kept and the low byte is cleared. So, supposing this was the first time being called, the value would now be guaranteed to be 0x0100.
The last part, rtc_irq_data |= (CMOS_READ(RTC_INTR_FLAGS) & 0xF0);, ORs the current RTC status into the now-cleared low byte. Hence the comment "We store the status in the low byte".
As for the AND with 0xF0 in (CMOS_READ(RTC_INTR_FLAGS) & 0xF0): consulting the original AT-compatible RTC datasheet, INTR_FLAGS is Register C, a register byte where only the upper 4 bits are used: b7 = IRQF, b6 = PF, b5 = AF, b4 = UF.
For b3 to b0:
The unused bits of Status Register 1 are read as "0s". They cannot be written.
(From the RTC datasheet.)
Hence, as good standard coding practice, the AND with 0xF0 makes sure the lower 4 bits are ignored.
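The three steps above can be simulated in a few lines of Python; the status values fed in below are invented purely for the demonstration.

```python
# Simulate the rtc_irq_data update: interrupt count in the high bytes,
# latest Register C status in the low byte.

def rtc_update(rtc_irq_data, status_register_c):
    rtc_irq_data += 0x100                      # one more interrupt counted
    rtc_irq_data &= ~0xff                      # clear the low (status) byte
    rtc_irq_data |= status_register_c & 0xF0   # keep only the used upper 4 bits
    return rtc_irq_data

data = 0
data = rtc_update(data, 0xC5)   # IRQF + PF set; the low nibble is discarded
assert data == 0x01C0           # count = 1, status = 0xC0
data = rtc_update(data, 0x90)
assert data == 0x0290           # count = 2, status = 0x90
```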

Trouble displaying signed unsigned bytes with python

I have a weird problem! I wrote client/server Python code that talks over a serial Bluetooth link, sending and receiving byte frames (for example: [0x73, 0x87, 0x02, ...]).
Everything works; sending and receiving work very well!
The problem is the display of my frames. I noticed that bytes from 0 to 127 are displayed correctly, but from 128 on, the byte is displayed with a C2 (194) added in front of it. For example, for [0x73, 0x7F, 0x87, 0x02, 0x80, ...] == [115, 127, 135, 2, 128, ...], the hex display shows 73 7F C2 87 2 C2 80 ...; notice that a C2 byte appears from nowhere!
Since it starts at 128, I think it's a signed (-128 to 127) / unsigned (0 to 255) problem.
Does anyone have any idea about this problem?
Thank you
0xc2 and 0xc3 are byte values that appear when encoding character values between U+0080 and U+00FF as UTF-8. Something on the transmission side is trying to send text instead of bytes, and something in the middle is (properly) converting the text to UTF-8 bytes before sending. The fix is to send bytes instead of text in the first place.
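The effect is easy to reproduce in Python: code points U+0080 through U+00FF encode to two UTF-8 bytes, led by 0xC2 or 0xC3, while keeping the data as bytes avoids the problem entirely.

```python
# Characters below U+0080 encode to a single UTF-8 byte...
assert '\x7f'.encode('utf-8') == b'\x7f'
# ...but U+0080..U+00FF encode to two bytes, starting with 0xC2 or 0xC3.
assert '\x80'.encode('utf-8') == b'\xc2\x80'
assert '\x87'.encode('utf-8') == b'\xc2\x87'
assert '\xff'.encode('utf-8') == b'\xc3\xbf'

# The fix: keep the frame as bytes end to end instead of decoding to text.
frame = bytes([0x73, 0x7F, 0x87, 0x02, 0x80])
assert frame.hex() == '737f870280'   # no stray 0xC2 anywhere
```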

Arduino: need assistance in understanding <keyboard.h> library

I have a Leonardo/Micro device that should emulate a keyboard.
I would like to modify the library. The reason is that I want to be able to send raw scancodes, whereas the library does some preparation.
I looked in the source code, also of the HID library, but have difficulty understanding the following points:
Keyboard_::begin() and Keyboard_::end() are supposed to start and stop keyboard emulation, but they have empty bodies; https://www.arduino.cc/en/Reference/KeyboardBegin
KeyReport is especially mysterious:
What exactly happens to the KeyReport? I lost track in the USB_Send function in HID.cpp. Couldn't find where it comes from.
What are modifiers, and what are they doing?
Is the number of keys sent limited to 6, or could it theoretically be arbitrary?
I will try to answer your questions the best I can. Let me know if you still have questions:
Keyboard_::begin() and Keyboard_::end() are supposed to start and stop keyboard emulation, but they have empty bodies
I believe those are just placeholders in case any initialization or cleanup would need to be done. The other libraries have the same functions (e.g. the Mouse library). I suspect they are there for consistency and just in case they are needed.
KeyReport is especially mysterious.
typedef struct
{
    uint8_t modifiers;
    uint8_t reserved;
    uint8_t keys[6];
} KeyReport;
KeyReport is the data structure that represents the USB message sent to the host computer.
The modifiers member is an 8-bit unsigned integer that contains various flags (e.g. Left Shift, Left Ctrl, Left Alt, etc.)
The reserved member is an 8-bit unsigned integer that is not used, but must be there.
The keys member is an array of six 8-bit unsigned integers that represent the keys that are currently pressed.
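For illustration only (this is not part of the Arduino core), the same 8-byte layout can be packed in Python with struct; the helper name is my own:

```python
import struct

def pack_key_report(modifiers, keys):
    """Pack the KeyReport layout: 1 modifier byte, 1 reserved byte, 6 key codes."""
    padded = list(keys) + [0] * (6 - len(keys))
    return struct.pack('8B', modifiers, 0, *padded)

# Left Shift (modifier bit 0x02) held while the 'a' usage ID (0x04) is pressed
report = pack_key_report(0x02, [0x04])
assert report == b'\x02\x00\x04\x00\x00\x00\x00\x00'
assert len(report) == 8   # the report is always exactly 8 bytes
```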
What exactly happens to the keyreport? I lost track in USB_Send function in HID.cpp.
It gets sent to the host computer.
What are modifiers, and what are they doing?
Some keys are “regular” keys (e.g. A, B, 1, 2, #, etc.). Other keys are modifiers (e.g. Shift, Ctrl, Alt). The modifier keys set flags in KeyReport.modifiers. For example, the Left Shift key is 0x02.
Is the number of keys sent limited to 6, or could it theoretically be arbitrary?
The number of “regular” keys that can be pressed simultaneously is 6, but you can also have modifier keys pressed at the same time (Shift, Alt, Ctrl, etc.).
FYI: I have been able to add additional keys (e.g. the numeric keypad keys) by adding new key definitions to the USBAPI.h file:
#define KEY_NUMPAD_DIVIDE 0xDC
#define KEY_NUMPAD_MULTIPLY 0xDD
#define KEY_NUMPAD_MINUS 0xDE
#define KEY_NUMPAD_PLUS 0xDF
#define KEY_NUMPAD_ENTER 0xE0
#define KEY_NUMPAD_1 0xE1
#define KEY_NUMPAD_2 0xE2
#define KEY_NUMPAD_3 0xE3
#define KEY_NUMPAD_4 0xE4
#define KEY_NUMPAD_5 0xE5
#define KEY_NUMPAD_6 0xE6
#define KEY_NUMPAD_7 0xE7
#define KEY_NUMPAD_8 0xE8
#define KEY_NUMPAD_9 0xE9
#define KEY_NUMPAD_0 0xEA
#define KEY_NUMPAD_DEL 0xEB

Bluetooth data shown as 0X80 instead of 0X00

I have been using a Bluetooth module (HC-05) with an Atmega8 (both A and L) microcontroller to transmit data to my Android device. In the following code an 8-bit signed (or unsigned, it doesn't make any difference) value is sent over Bluetooth to be displayed on the device; this value starts at 0x00 and is incremented on every iteration:
#define F_CPU 1000000
#define BAUD 9600
#define MYUBRR (F_CPU/16/BAUD-1)

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    uint8_t data = 0;
    UBRRH = (MYUBRR >> 8); // set the high bits of UBRR
    UBRRL = MYUBRR;        // set the low byte of UBRR
    UCSRB = (1 << TXEN);   // transmit enable
    UCSRC = ((1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0)); // URSEL = USART register select (R/W); UCSZ1 and UCSZ0 set for 8-bit data size
    while (1)
    {
        UDR = data;                     // load data into the USART data register (8-bit); it is transmitted immediately
        while (!(UCSRA & (1 << UDRE))); // wait until the data is sent and the UDRE flag is set
        _delay_ms(200);                 // after some time
        data++;                         // increment data
    }
    return 0;
}
On the Android end, the "Bluetooth spp Pro" app displays the received data on screen.
Following is the configuration of the receive mode (data is displayed as hex values):
The data received should start at 0x00 and go up to 0xFF; instead it starts at 0x80 and increments up to 0xFF in a very unfamiliar manner.
Referring to the above image: the pattern I observed is that the tens (high-nibble) digit starts at 8 while the units (low-nibble) digit runs from 0 to F; in the next loop the tens digit becomes 9 and the units again run 0 to F; after that, instead of incrementing as expected, the tens digit goes back to 8, then to 9 again. After these four cycles of two repeating words, the tens digit increments to A with the units running 0 to F, and then the same strange tens-digit pattern reappears for A and B, then for C and D, and later for E and F.
So my concern is:
Why is the device showing 80 for 00? Since it works correctly for the units place, why does the tens place not behave as expected?
Thanks!!!
Edit:
This problem is neither Android-version nor device-manufacturer specific.
The problem was with voltage levels. Operating the microcontroller circuit at 3.2 V and the Bluetooth module at 3.8 V solved the problem, and the data is transmitted as expected. However, I am unable to come up with an explanation for this.
Please help.
It can be observed clearly by varying the potentiometer of the voltage regulator: when I keep it below 3.20 V the data is transmitted cleanly, and as the voltage level crosses 3.20 V the tens place of the data starts getting corrupted, up to the point of complete data corruption, where the output becomes a constant 0xFE at 3.8 V.

Modbus: convert hex value to dword in nodejs

Specification given in the manual of meter which uses modbus as communication protocol:
“dword” refers to 32-bit unsigned integer using two data addresses and 4 bytes
of memory with high word at the front and low word at the end, it varies from 0
to 4294967295. rx = high word *65536+low word
I have this hex value: 00003ef5,
where the high word = 0000
and the low word = 3ef5,
so rx would be 3ef5, which converted to decimal gives 16117.
For now I have used this:
var rx = hexValue.substr(0,4)*65536 + hexValue.substr(4,8);
console.log(parseInt(rx,16));
Is there a way to convert my hex value directly to a dword using the Node.js Buffer library, or any other method better than this?
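The question asks about Node.js, but the arithmetic itself is language-independent; here is the same conversion checked in Python (to match the other snippets in this thread; variable names are my own):

```python
# "dword" = high word * 65536 + low word, both 16-bit unsigned,
# high word first (big-endian), as the meter manual specifies.
hex_value = "00003ef5"

high = int(hex_value[0:4], 16)   # 0x0000
low = int(hex_value[4:8], 16)    # 0x3ef5
rx = high * 65536 + low
assert rx == 16117

# Equivalent: parse all 8 hex digits as one big-endian 32-bit value.
assert int(hex_value, 16) == 16117
assert int.from_bytes(bytes.fromhex(hex_value), 'big') == 16117
```

In Node.js itself, the equivalent one-step conversion would be Buffer.from(hexValue, 'hex').readUInt32BE(0).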
