I am using an FPGA to implement a USB 1.0 PHY and application layer. I am debugging the design with an ordinary Logitech K120 keyboard. I can already send a SETUP transaction that lights the NUM, CAPS, and SCROLL lock LEDs with no problem. But when I send an INTERRUPT IN token to poll the device for a report, the bus just waits and then fails with a timeout. SET_CONFIGURATION has been issued with configuration value 1.
The INTERRUPT IN token packet is as follows (every field is sent LSB first in wire order):

< SYNC  >< IN PID >< ADR0 ><EP1>< CRC5>
00000001  10010110  0000000 1000  00101

(SYNC, then the IN PID 0x69 as it appears on the wire, address 0, endpoint 1, and the five CRC bits.)
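As a sanity check on the CRC field, here is a minimal C sketch of the USB token CRC5 (polynomial x^5 + x^2 + 1, shift register seeded with all ones, remainder complemented); the helper name usb_crc5 is mine, not from any library. For address 0, endpoint 1 it yields 0b00101, matching the packet above, so the CRC itself looks correct:

#include <stdint.h>
#include <stdio.h>

/* USB token CRC5 over the 11 ADDR+ENDP bits, fed LSB first (wire
 * order). The complemented remainder goes on the wire starting from
 * its bit 4. */
static uint8_t usb_crc5(uint16_t addr_endp)
{
    uint8_t reg = 0x1f;                      /* seed: 0b11111 */

    for (int i = 0; i < 11; i++) {
        uint8_t bit = (addr_endp >> i) & 1;  /* next wire bit */
        uint8_t msb = (reg >> 4) & 1;

        reg = (reg << 1) & 0x1f;
        if (bit ^ msb)
            reg ^= 0x05;                     /* x^5 + x^2 + 1 */
    }
    return ~reg & 0x1f;                      /* complement the remainder */
}

int main(void)
{
    uint16_t field = 0 | (1u << 7);          /* address 0, endpoint 1 */

    printf("CRC5 = 0x%02x\n", usb_crc5(field)); /* prints 0x05 = 0b00101 */
    return 0;
}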
Why does it not work?
I would really appreciate it if you would help.
I am using the dwapb GPIO driver under aarch64 Linux. Port A of the GPIO device supports interrupts. In my configuration, the SDHC host acts as the consumer of the GPIO, i.e., the SDHC's interrupt signal is connected to pin 7 on port A, so interrupts from the SDHC take the following path:
sdhc -> gpio -> gic -> cpu.
The SDHC uses level-sensitive triggering (the default GPIO configuration). The default interrupt polarity is active-low. With the default settings, after enabling the interrupt on the GPIO pin, the pin voltage is 3.3 V (measured with a multimeter). That seems reasonable: the pin idles high while waiting for active-low interrupt signals.
The problem arises when I configure the polarity as active-high. In that case, right after enabling the pin, the voltage is also 3.3 V, which causes an immediate IRQ storm in the kernel. The voltage on the pin should evidently be 0 after the interrupt is enabled, but I am not able to get that to happen.
I tried the following steps to enable the IRQ as active-high (a code sketch of these steps follows the list):
1. Change the pin direction to "output"
2. Change the pin value to 0
3. Change the pin direction to "input"
4. Enable the interrupt on the pin
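Here is a rough sketch of those four steps using the legacy in-kernel GPIO API; the pin number, label, and handler are placeholders (a real driver would normally get the GPIO from the device tree):

#include <linux/gpio.h>
#include <linux/interrupt.h>

#define SDHC_GPIO 7    /* placeholder: pin 7 on port A */

static irqreturn_t sdhc_irq(int irq, void *dev_id)
{
        /* acknowledge and handle the SDHC interrupt here */
        return IRQ_HANDLED;
}

static int sdhc_enable_active_high_irq(void)
{
        int ret;

        ret = gpio_request(SDHC_GPIO, "sdhc-int");
        if (ret)
                return ret;

        gpio_direction_output(SDHC_GPIO, 0);  /* steps 1 and 2: drive low */
        gpio_direction_input(SDHC_GPIO);      /* step 3: back to input */

        return request_irq(gpio_to_irq(SDHC_GPIO), sdhc_irq,
                           IRQF_TRIGGER_HIGH, /* step 4: level, active-high */
                           "sdhc-int", NULL);
}

Note that once the pin is switched back to input (step 3), its level is set by whatever drives it externally (the SDHC output or a pull resistor), not by the value written in step 2.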
I was hoping that steps 1-3 would keep the pin voltage at 0 after step 4, but that does not seem to be the case. No matter what value I set before step 4, after step 4 the pin voltage changes to 3.3 V (measured with a multimeter).
Is there a way to get active-high mode to work properly?
[Screen grab from Wireshark showing traffic when the problem occurs]
Short question: referring to the Wireshark image, what is causing the Master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that utilizes the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but which may reside in the TI WL18xx hardware/firmware. Intermittently, after a second Bluetooth Low Energy device is connected, the response times for writes and notifications on one of the characteristics of one of the connected devices become very long.
Details
The host product device is running our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of Bluez5. The wifi/bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5. It is based on the TI WL1831. The sensor devices that we connect to are our own and we use a custom command protocol which uses two characteristics to perform command handshakes. These devices work very well on a number of other platforms, including Mac, Windows, Linux, Chrome, etc.
The workflow that causes the problem is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, we start to see command/response times become hundreds of times longer on the initially connected device.
The second device communication continues at expected speeds.
This problem only occurs with the first device about 30% of the time we follow this workflow.
Traces
Here is a short snippet of the problem, taken from a trace log that mixes our library debug output with btmon traces.
Everything seems fine until line 4102 of the log, where we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most cmds should be under 30 milliseconds. Response times were short when we opened and sent a bunch of cmds to device A, and they remained short when we opened and sent a bunch of cmds to device B. But on the first subsequent cmd sent to device A, the response time was more than 3 seconds!
Note that a Bluetooth radio can only do one thing at a time: receive or transmit, on one single frequency. If you have two connections and two connection events fall at the same time, the firmware must decide which one to prioritize and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try other connection parameters to see if something improves. You can also try a Bluetooth dongle from a different manufacturer.
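If you want to experiment with connection parameters from a BlueZ host, one option is the HCI LE Connection Update command, exposed by libbluetooth as hci_le_conn_update(). A minimal sketch (link with -lbluetooth); the connection handle and the timing values below are placeholders, not recommendations:

#include <stdint.h>
#include <stdio.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/hci.h>
#include <bluetooth/hci_lib.h>

int main(void)
{
    /* Placeholder parameters: intervals are in 1.25 ms units and the
     * supervision timeout is in 10 ms units, per the HCI spec. */
    uint16_t handle       = 0x0401;  /* e.g. handle 1025 from the trace */
    uint16_t min_interval = 24;      /* 30 ms */
    uint16_t max_interval = 40;      /* 50 ms */
    uint16_t latency      = 0;
    uint16_t sup_timeout  = 200;     /* 2 s */

    int dev_id = hci_get_route(NULL);
    int dd = hci_open_dev(dev_id);
    if (dd < 0) {
        perror("hci_open_dev");
        return 1;
    }

    if (hci_le_conn_update(dd, handle, min_interval, max_interval,
                           latency, sup_timeout, 1000) < 0)
        perror("hci_le_conn_update");

    hci_close_dev(dd);
    return 0;
}

Widening the connection intervals (or adding slave latency) gives the controller's scheduler more room to interleave the two connections' events.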
I am trying to enable DDR on an SD card (spec above 2.0). The procedure in the specification is as follows (a rough code sketch of the switching sequence appears after the list):
Execute CMD0 to put the card in the idle state.
Execute CMD8 to ask about the voltage requirements.
Execute ACMD41 with the S18R bit set and check S18A in the reply to see whether the card supports voltage switching: checked, and the card has the functionality.
Now execute CMD11; if the card replies with a response, the voltage-switching sequence has started and the CMD and DATA lines should go low: checked, and yes they do.
Stop the clock.
Program the voltage-switch register (to 1.8 V) and wait 5 ms.
Start the clock: the card should come up at SDR12 with 1.8 V signaling, the CMD and DATA lines should go high, and a cmd_done interrupt should be received: NOT RECEIVED.
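For illustration, here is a rough sketch of that CMD11 sequence; every helper here (send_cmd, cmd_done, sd_clock_enable, sd_set_signaling_1v8, delay_ms) is a hypothetical stand-in for the actual host controller driver's accessors:

#include <stdbool.h>
#include <stdint.h>

extern void send_cmd(uint8_t idx, uint32_t arg); /* issue CMDn */
extern bool cmd_done(void);                      /* response received? */
extern void sd_clock_enable(bool on);
extern void sd_set_signaling_1v8(void);          /* program the 1.8 V switch */
extern void delay_ms(unsigned ms);

static int sd_voltage_switch(void)
{
    send_cmd(11, 0);            /* CMD11: start the voltage switch */
    if (!cmd_done())
        return -1;              /* card did not respond */

    /* CMD and DAT lines go low here. */
    sd_clock_enable(false);     /* stop the clock */
    sd_set_signaling_1v8();     /* switch signaling to 1.8 V */
    delay_ms(5);                /* wait at least 5 ms */
    sd_clock_enable(true);      /* restart the clock at SDR12 */

    /* CMD and DAT lines should now go high and a cmd_done
     * interrupt should arrive for the next command. */
    return 0;
}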
Any pointers regarding this would be helpful. The card status register shows that there is a data transfer in progress and that the card is not present; after this I cannot execute any CMD (the cmd_done interrupts are not received).
For the sake of helping others: the process explained above is correct. The problem was on the board side, i.e., there was no 1.8 V regulator connected on the board. So first make sure that the SoC or the board actually provides that supply. In the case of MMC, DDR mode can be enabled at 3 V, so the above only applies to SD.
I followed a tutorial for illuminating an LED on a Raspberry Pi via the GPIO pins, so that the LED turns on when an iBeacon is detected. I need to alter the script so that the LED goes off again when the iBeacon is no longer detected.
The script at the moment is:
#!/bin/bash
gpio mode 1 out
trap "gpio write 1 0" exit
while read line
do
    if [[ `echo $line | grep "2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6 1 1"` ]]; then
        gpio write 1 1
    fi
done
Which is being called by:
$ beacon scan -b | ./scriptName
The output of beacon scan is:
pi@pibeacon ~ $ sudo beacon scan
BLE Beacon Scan ...
iBeacon UUID: 92AB49BE-4127-42F4-B532-90FAF1E26491 MAJOR: 1 MINOR: 1 POWER: -59 RSSI: -62
iBeacon UUID: 92AB49BE-4127-42F4-B532-90FAF1E26491 MAJOR: 1 MINOR: 1 POWER: -59 RSSI: -65
iBeacon UUID: 92AB49BE-4127-42F4-B532-90FAF1E26491 MAJOR: 1 MINOR: 1 POWER: -59 RSSI: -65
The output updates continuously for as long as the iBeacon is detected, and simply stops when the iBeacon is no longer detected.
The aim is to have a script run all the time and use the output of the beacon scan command to determine whether the LED should be on or off: if the iBeacon is detected the LED should be on, and if the iBeacon is then moved out of range the LED should turn off again. The existing script turns the LED on once, and then the only way to reset the situation is to stop the script and start it again.
Thanks
One way you could accomplish this with your existing code is to set a variable to a timestamp inside your if statement. Then, outside the if statement (but inside the while loop), you can compare the current time to that timestamp. If enough time has passed since the beacon was detected (say 5 seconds), your code can turn off the LED (see the sketch below).
The disadvantage of this approach is that if no beacons are detected at all, your code will block on the read line statement. So this is only workable if you know for sure at least one beacon will always be around to keep your program running. This sort of programming is not ideally suited to a simple bash script, because you really need two threads to handle this. But if you want to keep your same basic toolset, this is a decent option.
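A minimal sketch of that approach, keeping the same toolset; the UUID and the 5-second window are taken from the question, and date +%s supplies the timestamps:

#!/bin/bash
gpio mode 1 out
trap "gpio write 1 0" exit
last_seen=0
while read line
do
    if [[ $line == *"2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6 1 1"* ]]; then
        last_seen=$(date +%s)   # remember when the beacon was last seen
        gpio write 1 1
    fi
    # Outside the if: turn the LED off once the beacon has been quiet
    # for more than 5 seconds.
    if (( $(date +%s) - last_seen > 5 )); then
        gpio write 1 0
    fi
done

As noted above, read still blocks when no scanner output arrives at all, so the LED can only go off on the next line that is read.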
I worked out a (bad?) solution and thought I'd share it here. It has the effect that while the beacon is detected the light flashes, and when the beacon goes out of range the light stops flashing. I set this code to run on startup of the Pi, and it has fulfilled the function I needed (a very rough proof-of-concept prototype!).
I used the very good Radius Networks Development Kit (which is where the original script is from) and would highly recommend it if anyone else is interested in messing about with iBeacons.
#!/bin/bash
gpio mode 1 out
trap "gpio write 1 0" exit
while read line
do
    if [[ $line = *"2F234454-CF6D-4A0F-ADF2-F4911BA9FFA6 1 1"* ]]; then
        gpio write 1 1
    fi
    gpio write 1 0
done
It's a weird scenario. We use an x86 SoC as the host and an STM8S MCU as the client. They are connected with two wires between GPIOs (SoC side) and the RST and SWIM pins (MCU side); this way we can reflash the MCU firmware over the SWIM protocol.
Here is the problem. After we enter SWIM mode (the entry sequence is easy: high/low transitions driven by GPIO operations at low frequency), we need to send data to the MCU. But the protocol requires sending data at high frequency (16 MHz clock). I have tried a simple GPIO loop with no delay:
for (i = 0; i < 10; i++) {
    gpio_set_value(SWIM, 0);
    gpio_set_value(SWIM, 1);
}
But on the oscilloscope each transition is delayed about 2 µs, for both the rising and falling edges. I think it may be the I/O speed limit or some other system reason. According to the SWIM protocol, the minimum pulse width for sending data is about 0.25 µs.
Are there any ways to speed up the GPIO operations, or is there some special method to meet this requirement?
BTW: we are using Linux on the SoC.
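One common way to cut the per-edge overhead is to bypass the gpiolib call path entirely and write the GPIO controller's set/clear registers directly, e.g. by mmap()ing them from user space through /dev/mem. A sketch; GPIO_BASE, the register offsets, and the bit position are placeholders that must come from the SoC datasheet:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* Placeholder addresses: look up the real GPIO controller base and
 * the set/clear register offsets in the SoC datasheet. */
#define GPIO_BASE 0xfed80000UL
#define GPIO_SET  (0x10 / 4)   /* word offset of the "set bits" register */
#define GPIO_CLR  (0x14 / 4)   /* word offset of the "clear bits" register */
#define SWIM_BIT  (1u << 7)

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    volatile uint32_t *gpio = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Each edge is now a single MMIO store, with no syscall or
     * gpiolib locking per transition. */
    for (int i = 0; i < 10; i++) {
        gpio[GPIO_CLR] = SWIM_BIT;
        gpio[GPIO_SET] = SWIM_BIT;
    }

    munmap((void *)gpio, 4096);
    close(fd);
    return 0;
}

Even then a user-space loop can be preempted mid-bit, so for strict SWIM timing it is common to do the bit-banging in a small kernel driver with interrupts disabled around each frame.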