ECU woken up by a CAN error frame (Automotive) - autosar

I am encountering an issue where the ECU is woken up by a CAN error frame.
The testing team reported this issue to me.
I am wondering how an error frame can wake up an ECU that is in sleep mode. How can that happen?
If anyone knows this issue or has encountered it, please help me.
I really appreciate your willingness to support!

Wake-Up Pattern in CAN / CAN FD Networks
So, a wake-up pattern is a dominant phase longer than 5 µs. With 500 kbit/s CAN, the bit time is 2 µs, so that is about 2.5 bit times.
An active error flag is defined as six dominant bits transmitted by the ECU detecting the failure; at 12 µs, that is plenty of time to satisfy a >5 µs wake-up pattern.
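To make the timing arithmetic concrete, here is a minimal C sketch of the same calculation (the >5 µs threshold is typical for wake-up-capable transceivers but is device-specific, so check your transceiver's datasheet):

#include <stdio.h>

int main(void)
{
    const double bitrate_bps   = 500000.0;           /* 500 kbit/s CAN             */
    const double bit_time_us   = 1e6 / bitrate_bps;  /* 2 us per bit               */
    const double error_flag_us = 6 * bit_time_us;    /* six dominant bits: 12 us   */
    const double wakeup_min_us = 5.0;                /* assumed wake-up threshold  */

    printf("error flag lasts %.0f us; wake-up needs > %.0f us dominant -> %s\n",
           error_flag_us, wakeup_min_us,
           error_flag_us > wakeup_min_us ? "transceiver wakes up" : "no wake-up");
    return 0;
}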

I am not sure about your hardware design, but there is nothing wrong with a wake-up in this case. The wake-up decision is usually made by the CAN transceiver, which is not part of the microcontroller (MCU). The transceiver always wakes up when it detects a wake-up pattern (check its datasheet) and then signals the MCU through a hardware pin such as INH or RX/TX.
At that point the MCU needs to wake up fast enough to check what the first CAN message is; all the transceiver knew was that the bus activity looked like a wake-up pattern.
So the expectation that the MCU will not wake up on a CAN error frame is also wrong:
how would it know it was an error frame if it had not woken up yet?
The correct expectation concerns what the ECU does after waking: it should first check whether it received a valid CAN frame and a valid wake-up reason, and if not, it should be put back to sleep. Please also check the AUTOSAR Network Management specification for more information.
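To illustrate, here is a minimal C sketch of that post-wake-up check. All names are hypothetical; in a real AUTOSAR stack this logic is spread across EcuM, CanSM and CanNm rather than living in one function:

#include <stdbool.h>

typedef struct {
    unsigned id;
    unsigned char dlc;
    unsigned char data[8];
} can_frame_t;

/* Hypothetical helpers, not AUTOSAR APIs. */
extern bool receive_first_frame(can_frame_t *frame, unsigned timeout_ms);
extern bool is_valid_wakeup_reason(const can_frame_t *frame);
extern void start_normal_operation(void);
extern void enter_sleep_again(void);

void on_transceiver_wakeup(void)
{
    can_frame_t frame;

    /* The transceiver cannot tell an error frame from a valid frame; it only
     * saw a wake-up pattern. The MCU must validate the wake-up reason. */
    if (receive_first_frame(&frame, 100u) && is_valid_wakeup_reason(&frame)) {
        start_normal_operation();   /* valid wake-up: stay awake           */
    } else {
        enter_sleep_again();        /* error frame or glitch: sleep again  */
    }
}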

Related

Decoding Bluetooth signal and packets using GnuRadio

I am currently working on a project that aims to detect and decode Bluetooth packets (I use a HackRF One for the detection). I have made a GnuRadio flowgraph to demodulate the Bluetooth signal, and I am trying to decode the packets visually by searching for a Bluetooth frame in a binary file.
Unfortunately, I have not succeeded in recovering a clear view of the Bluetooth signal. To be precise, I am fairly sure that I detect Bluetooth on my sinks, but when I send this into Clock Recovery and Binary Slicer blocks, I am unable to recover interesting data from the binary file (especially the MAC address of the sending device, which is part of a Bluetooth packet). Moreover, I would like to know which network layer (physical, transport, baseband...) is intercepted in this type of process. In my case, I aim to intercept baseband-layer packets.
Additionally, I am interested in knowing how to use gr-bluetooth, because I can't find much documentation about this block. I think it could be interesting for the development of my project.
Could you please give me your view on this problem? I am stuck at this stage without knowing the exact origin of my issue. (Here is my flowgraph GnuRadio_Flowgraph and a screenshot of one of my Bluetooth detections, Detected signal at 2.402GHz.)
Thank you very much,
You probably need an Ubertooth instead: https://www.sparkfun.com/products/10573
I read that Bluetooth's frequency hopping is spread wider than the HackRF can capture (the hop sequence covers 79 channels of 1 MHz, roughly 80 MHz, while a HackRF samples at most about 20 MHz at once), so at best you're going to miss about 75% of frames if you only have one HackRF connected.

Contiki OS on Zolertia Z1 - Conflicting activation of phidget and battery sensors?

I built a small game controller for the Z1.
I have a process reading values from a joystick sensor. It works fine.
Then I added a second process, reading the value of the battery sensor every 5 minutes. But it makes the joystick stop working: the value does not update anymore!
I found a workaround: when I have to read the battery value, I deactivate the phidget_sensor, activate the battery_sensor, read the value, and then deactivate the battery_sensor and reactivate the phidget_sensor.
But I would like to know why I cannot have both sensors activated at the same time.
Thanks
The ADC is the analogue-to-digital converter, basically the component that gives you the voltage level of an analogue sensor, which can then be translated into a meaningful value.
What happens is that the battery sensor driver and the phidget driver each configure the ADC on their own when they start, thus overwriting each other's ADC configuration.
The expected use of both of these components is exactly how you are using them: enable, measure, then disable. This way you ensure that the ADC is configured the way your application expects at all times. If you want this done in a single operation, then I'm afraid you will probably need to modify the phidget driver to include it.
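For reference, here is a minimal Contiki-style sketch of that enable/measure/disable sequence. SENSORS_ACTIVATE and SENSORS_DEACTIVATE are the standard Contiki macros; phidget_sensor is the name used in the question, so substitute your platform's actual sensor object (on the Z1, the phidgets driver).

#include "lib/sensors.h"
#include "dev/battery-sensor.h"

/* phidget_sensor stands in for the joystick's sensor object from the question. */
extern const struct sensors_sensor phidget_sensor;

static int read_battery_exclusive(void)
{
    int value;

    SENSORS_DEACTIVATE(phidget_sensor);  /* release the ADC configuration       */
    SENSORS_ACTIVATE(battery_sensor);    /* battery driver reconfigures the ADC */
    value = battery_sensor.value(0);
    SENSORS_DEACTIVATE(battery_sensor);
    SENSORS_ACTIVATE(phidget_sensor);    /* restore the joystick's ADC setup    */

    return value;
}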
I hope this is the answer you expected, since you are asking why this happens.

Interrupt behavior of a 6502 in standalone test vs. in a Commodore PET

I am building a Commodore PET on an FPGA. I've implemented my own 6502 core in Kansas Lava (code is available at https://github.com/gergoerdi/mos6502-kansas-lava), and by putting enough IO around it (https://github.com/gergoerdi/eightbit-kansas-lava) I was able to boot the original Commodore PET ROM on it, get a blinking cursor and start typing.
However, after typing in the classic BASIC program
10 PRINT "HELLO WORLD"
20 GOTO 10
it crashes after a while (after several seconds) with
?ILLEGAL QUANTITY ERROR IN 10
Because my code has fairly reasonable per-opcode test coverage, and it passes AllSuiteA, I thought I would look into tests for more complicated behaviour, which is how I arrived at Klaus Dormann's interrupt test suite. Running it in the Kansas Lava simulator pointed out a ton of bugs in my original interrupt implementation:
The I flag was not set when entering the interrupt handler
The B flag was all over the place
IRQ interrupts were completely ignored unless I was unset when they arrived (the correct behaviour seems to be to queue interrupts that arrive while I is set, and still handle them when it gets unset)
After fixing these, I can now successfully run the Klaus Dormann test, so I was hoping that, with some luck, loading my machine back onto the real FPGA would make the BASIC crash go away.
However, the new version, with all these interrupt bugs fixed and passing the interrupt test in the simulator, now fails to respond to keyboard input or even blink the cursor on the real FPGA. Note that both keyboard input and cursor blinking are done in response to an external IRQ (connected to the screen VBlank signal), so this means the fixed version somehow broke all interrupt handling...
I am looking for any kind of vague suggestions what could be going wrong or how I could begin to debug this.
The full code is available at https://github.com/gergoerdi/mos6502-kansas-lava/tree/interrupt-rewrite; the offending commit (the one that fixes the test and breaks the PET) is 7a09b794af. I realize this is the exact opposite of a minimal reproducible example, but the change itself is tiny, and because I have no idea where it goes wrong, and reproducing the problem requires a machine featureful enough to boot the stock Commodore PET ROM, I don't know how I could shrink it...
Added:
I managed to reproduce the same issue on the same hardware with a very simple (dare I say minimal) ROM instead of the stock PET ROM:
        .org $C000
reset:
        ;; Initialize PIA
        LDY #$07
        STY $E813
        LDA #30
        STA $42
        STA $8000
        CLI
        JMP *

irq:
        CMP $E812       ; ACK IRQ
        DEC $42
        BNE endirq
        LDX $8000
        INX
        STX $8000
        LDA #30
        STA $42
endirq: RTI

        .res $FFFC-*
        .org $FFFC
resetv: .addr reset
irqv:   .addr irq
Interrupts aren't queued; the interrupt line is sampled on the penultimate cycle of each instruction, and if it is active then, and I is unset, a jump to interrupt occurs next instead of a fetch/decode. Could the confusion be that IRQ is level-triggered, not edge-triggered, and is usually held active for a period rather than a single cycle? So clearing I will cause an interrupt to occur immediately if one is still being signalled. It looks like on the PET the interrupt is held active until the CPU acknowledges it?
Also notice the semantics: SEI and CLI adjust the flag in their final cycle, but the decision on whether to jump to interrupt was made the cycle before. So if an interrupt arrives just as a SEI completes, you'll still enter the interrupt routine, but with I set. If an interrupt is active when you hit a CLI, the processor will perform the instruction after the CLI before branching.
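To restate that sampling rule as code, here is a minimal C model (hypothetical structure and names, not taken from the mos6502-kansas-lava core):

#include <stdbool.h>

typedef struct {
    bool irq_line;   /* level-triggered IRQ input, held active by the device */
    bool i_flag;     /* the interrupt-disable flag                           */
    bool take_irq;   /* decision latched on the penultimate cycle            */
} cpu_t;

/* Runs on the penultimate cycle of every instruction. */
void sample_irq(cpu_t *cpu)
{
    /* The level is sampled NOW; nothing is queued. If the device has
     * already released the line, the interrupt is simply missed. */
    cpu->take_irq = cpu->irq_line && !cpu->i_flag;
}

/* Runs after the final cycle of every instruction. */
void end_of_instruction(cpu_t *cpu)
{
    if (cpu->take_irq) {
        /* ...push PC and P, set I, jump through the IRQ vector
         * instead of fetching the next opcode. */
    }
    /* SEI/CLI change i_flag on their final cycle, i.e. AFTER sample_irq()
     * has already run, which yields the one-instruction delay noted above. */
}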
I'm on a phone so it's difficult to assess more thoroughly than to offer those platitudes; I'll try to review properly later. Is any of that helpful?

Bit Bang with SPI (fwrite, write performance)

I have bit-bang code that lets me send about 4 MB of data over SPI lines. It is embedded code for custom hardware running a Linux kernel.
The problem is that it takes a VERY long time (4 hours), most likely because the kernel is doing other work. Basically my code is something like this (approximately):
unsigned char data = 0xFF;
int i;

BB_SPI_Init();
SPI_start();                        /* activates chip select (enable) */
for (i = 0; i < 8; i++) {
    /* drive MOSI with the current MSB */
    if (data & 0x80) {
        gpio_set_value(SPI_MOSI, 1);
    } else {
        gpio_set_value(SPI_MOSI, 0);
    }
    /* pulse the clock */
    gpio_set_value(SPI_CLK, 0);
    gpio_set_value(SPI_CLK, 1);
    data <<= 1;
}
SPI_stop();                         /* deactivates chip select (disable) */
So it is a very simple bit-bang, but I noticed that if I use write() to send data to the Linux GPIO handler /sys/class/gpio/gpioXX/value (where XX is any GPIO number), it takes 4 hours.
But if I use fwrite() to send to the same device, it takes 3 hours.
BUT if I use write() only for the enables (SPI_stop() and SPI_start()) and fwrite() for driving MOSI and CLK, it only takes 1 hour and 30 minutes.
So, with that as a base, could someone explain to me how that happens? My guess is that it is the way the threads are handled, and that every software cycle resolves two threads (fwrite() and write()) instead of only one of the functions being used, but I'm still investigating. Can someone give me more information? Is there a better way to handle this?
FYI
I can't use the kernel SPI driver because the hardware is connected to GPIOs and it is a mandatory requirement to use bit-bang, but I accept any suggestion.
Thanks in advance
EDIT
Hey guys, thanks for your comments. It turns out I had a (very dumb) problem: I was creating the file descriptor every time I was about to send data to /sys/class/gpio/gpioXX/value, and that is why it was slow. I also turned off some other programs, and the transfer skyrocketed to 3 minutes instead of 1 hour 30 minutes (with write()). Thanks, and sorry about it.
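For anyone hitting the same thing, here is a minimal sketch of that fix (the GPIO number and iteration count are placeholders): open the sysfs value file once and reuse the descriptor for every edge.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Open the sysfs value file ONCE, outside the bit-banging loop. */
    int fd = open("/sys/class/gpio/gpio42/value", O_WRONLY); /* 42: placeholder */
    if (fd < 0)
        return 1;

    for (int i = 0; i < 1000000; i++) {
        /* Reuse the same descriptor instead of reopening it per write. */
        write(fd, "1", 1);
        write(fd, "0", 1);
    }

    close(fd);
    return 0;
}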
I think the spi-bitbang driver is the best solution if you are looking for performance. Doing the bit-banging from user space is painful because you make at least 3 system calls for each bit of data, and a system call is an expensive operation.
FYI: I can't use the kernel SPI driver because the hardware is connected to GPIOs and it is a mandatory requirement to use bit-bang, but I accept any suggestion.
That is exactly why the spi-bitbang driver exists. You can easily configure it to work with your GPIOs.
Then, once you have the spi-bitbang driver, you can write a char device that accepts your entire block of data as input and transfers it in kernel space. With this solution you will get the maximum performance a bit-banged interface can offer.
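As a rough illustration of the user-space side, if the GPIO-backed controller ends up exposed through the standard spidev char device (an assumption about your kernel configuration; the device node is a placeholder), one ioctl pushes a whole buffer per syscall instead of several syscalls per bit. Note that spidev caps a single transfer at its buffer size (4096 bytes by default, the bufsiz module parameter), so a 4 MB block must be sent in chunks.

#include <fcntl.h>
#include <linux/spi/spidev.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int send_chunk(int fd, const uint8_t *buf, uint32_t len)
{
    struct spi_ioc_transfer xfer;
    memset(&xfer, 0, sizeof(xfer));
    xfer.tx_buf = (unsigned long)buf;  /* kernel copies from this buffer */
    xfer.len    = len;                 /* at most the spidev bufsiz      */

    /* One syscall per chunk, instead of 3+ syscalls per bit. */
    return ioctl(fd, SPI_IOC_MESSAGE(1), &xfer) < 0 ? -1 : 0;
}

/* Usage sketch: int fd = open("/dev/spidev0.0", O_RDWR); then call
 * send_chunk() over the 4 MB buffer in at most 4096-byte slices. */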

Xilinx Virtex5 Simple I/O

I'm using a Virtex 5 FPGA and want to have a few +5/0 I/O pins to communicate with a microcontroller. The only peripherals I've used on the board so far are pushbuttons and switches, and no one I've asked seems to know the simplest way to do this I/O. I've looked around the board specification but haven't found any simple way of doing it. I would appreciate any advice you might have.
This is not an easy thing to do. If you don't have the schematic of the board, then you need to get a voltmeter with some fine-pitch probes and reverse-engineer the board.
It is pretty easy if you have 2 boards; with one board it can be really hard, since the BGA signals may not be connected to a via and therefore not available on the bottom of the board, and even if they are, you don't know exactly which pin they are connected to. But with some luck you can find them, since a via can only be connected to the 4 possible pins surrounding it!
The first thing you need to do is identify your chip and find the BGA footprint of the IC on Xilinx's website.
If your board has some buttons already, then if you are lucky those signals may be routed to pins of the FPGA that are available on the bottom of your board. Here are the things you need to do:
Make sure you have good ESD protection while performing these tests
Put your voltmeter into continuity ('buzzer') mode
Check the pins of your connector and find out how they are connected; see if there are pull-up and/or pull-down resistors on the board
When you find the 'active' pin of your connector, start connecting the other probe to the vias one by one
When you hear a buzz, make a note of the position (guess or measure the distance between the side of the IC and the location of the via)
Identify the 4 possible pins that the signal can be connected to
Write a design that routes all 4 of those signals to ChipScope
In ChipScope, capture all 4 signals and see which one has the right connection!
Alternatively, you can create a design with inputs only, capture all the inputs into a memory block, and add trigger logic that captures all the signals whenever any input changes; after lots of work and analysis, you will find the correct pins.
Anyway, these are just crazy ideas, since this is a really difficult thing to do without having the PCB info for the board.
Good luck with your hacking.
