Will the cmm script below result in any DAP signal toggling in the waveform? Will the GTL connect code cause any DAP signal toggles? - emulation

system.down
system.config.ahbaccessport 0.
system.mode prepare
I want to understand which commands will, and which will not, result in any activity on the DAP signals in the waveform.
I am trying these things on emulation.

Related

How can I reset my SoC from a cmm script using the TRACE32 RISC-V debugger? I don't have TRST or SRST lines connected to the SoC

My cmm script is something like this :
..start of cmm script
""GTL config and GTL connect""
""some JTAG.SHIFT operations""
JTAG.PIN DISable
system.mode prepare ;Need a reset here. I am trying this to reset my SoC
WAIT 10.MS ;this WAIT command is not working, no 10ms delay observed
system.down
system.mode prepare
...end of cmm script
The normal approach to resetting your RISC-V from the debugger, without having a dedicated reset line, is the following:
SYStem.Option.ResetMode NDMRST
SYStem.Up
SYStem.Option.ResetMode NDMRST configures TRACE32 to perform all system resets via the ndmreset bit of the dmcontrol debug register in the RISC-V debug module (so without a physical SRST pin).
SYStem.Up then actually performs the reset before it connects the debugger to the core. You can also call SYStem.Up from the up-state, just to reset your system.
However, it looks like you want to do the reset in prepare-state or even down-state. I guess you should contact Lauterbach support on how to do that.

Is it always necessary to call gpio_free()?

Is it always necessary to free a GPIO pin after you have requested it? What does freeing a GPIO pin do and what happens if you don't call gpio_free()?
Calling gpio_free() disconnects the pin from power (GND or 3V3) if it is set as an output. I have written a C program which needs a Ctrl+C keyboard interrupt to quit, so it cannot free the pins it uses; this means there is still voltage on the pins when the program quits. When I restart it, it resets the pins, so not calling gpio_free() does not stop any code from working, but the pins stay connected to power after the program quits.
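For context, gpio_free() is the counterpart of gpio_request(): it releases the claim on the pin so another driver or program can request it later. Below is a minimal sketch of the pairing, assuming the Linux kernel's legacy integer-based GPIO API (which is where gpio_free() lives); the GPIO number and label are made up for illustration:

#include <linux/module.h>
#include <linux/gpio.h>

#define MY_GPIO 17  /* hypothetical GPIO number, board-specific */

static int __init my_init(void)
{
        int ret = gpio_request(MY_GPIO, "my-led");  /* claim the pin */
        if (ret)
                return ret;
        gpio_direction_output(MY_GPIO, 1);          /* drive it high */
        return 0;
}

static void __exit my_exit(void)
{
        gpio_set_value(MY_GPIO, 0);  /* stop driving the pin */
        gpio_free(MY_GPIO);          /* release the claim */
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");

In kernel code, skipping gpio_free() leaves the pin marked busy, so a later gpio_request() on the same pin fails; the pin itself just keeps its last state, which matches the behaviour described above.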

Enabling DDR in SD/MMC causes problems? CMD11 gives a response but the voltage switch won't complete

I am trying to enable DDR in SD (spec above 2.0). The procedure in the specification is as follows:
Execute CMD0 to put the card in the idle state
Execute CMD8 to ask about the voltage requirements
Execute ACMD41 with the S18 bit enabled and look for S18 in the reply to see if the card has voltage switch functionality: checked, and the card has the functionality
Now execute CMD11; if the card replies with a response, the voltage switching sequence is started and the CMD and DATA lines should go low: checked, and yes they do
Stop the clock
Program the voltage switch register (to 1.8 V) and wait 5 ms
Start the clock: the card should start at SDR12 speed with 1.8 V signaling; the CMD and DATA lines should go high and a cmd_done interrupt should be received: NOT RECEIVED
Any pointers regarding this would be helpful... the card status register shows that a data transfer is in progress and that the card is not present. After this I cannot execute any CMD (the cmd_done interrupts are not received).
For the sake of helping others: the process explained above is correct. The problem was on the board side, i.e. there were no 1.8 V regulators connected on the board. So first make sure that the SoC or the board has those regulators available. In the case of MMC, DDR mode can be enabled at 3 V, so the above case only applies to SD.
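For reference, the sequence from the question could be sketched in host-driver C roughly as below. Every sdhc_* helper is hypothetical and stands in for whatever your host controller actually exposes:

#include <stdint.h>

/* Hypothetical host-controller helpers; replace with your driver's API. */
int  sdhc_send_cmd(uint8_t idx, uint32_t arg, uint32_t *resp);
void sdhc_clock_enable(int on);
void sdhc_set_signaling_1v8(void);   /* program the voltage switch register */
void sdhc_delay_ms(unsigned ms);

int sd_switch_to_1v8(void)
{
        uint32_t resp;

        if (sdhc_send_cmd(11, 0, &resp))  /* CMD11: VOLTAGE_SWITCH */
                return -1;
        /* The card now drives CMD and DAT low. */
        sdhc_clock_enable(0);             /* stop the SD clock */
        sdhc_set_signaling_1v8();         /* switch the pads to 1.8 V */
        sdhc_delay_ms(5);                 /* wait at least 5 ms */
        sdhc_clock_enable(1);             /* restart the clock at SDR12 */
        /* CMD/DAT should now go high and cmd_done should fire; if the
           board has no 1.8 V regulator, this is where it hangs. */
        return 0;
}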

Interrupt behavior of a 6502 in standalone test vs. in a Commodore PET

I am building a Commodore PET on an FPGA. I've implemented my own 6502 core in Kansas Lava (code is available at https://github.com/gergoerdi/mos6502-kansas-lava), and by putting enough IO around it (https://github.com/gergoerdi/eightbit-kansas-lava) I was able to boot the original Commodore PET ROM on it, get a blinking cursor and start typing.
However, after typing in the classic BASIC program
10 PRINT "HELLO WORLD"
20 GOTO 10
it crashes after a while (after several seconds) with
?ILLEGAL QUANTITY ERROR IN 10
Because my code has fairly reasonable per-opcode test coverage, and it passes AllSuiteA, I thought I would look into tests for more complicated behaviour, which is how I arrived at Klaus Dormann's interrupt testsuite. Running it in the Kansas Lava simulator has pointed out a ton of bugs in my original interrupt implementation:
The I flag was not set when entering the interrupt handler
The B flag was all over the place
IRQ interrupts were completely ignored unless I was unset when they arrived (the correct behaviour seems to be to queue interrupts when I is set and when it gets unset, they should still be handled)
After fixing these, I can now successfully run the Klaus Dormann test, so I was hoping that, by loading my machine back onto the real FPGA, with some luck the BASIC crash would go away.
However, the new version, with all these interrupt bugs fixed and passing the interrupt test in the simulator, now fails to respond to keyboard input or even to blink the cursor on the real FPGA. Note that both keyboard input and cursor blinking are done in response to an external IRQ (connected to the screen VBlank signal), so this means the fixed version somehow broke all interrupt handling...
I am looking for any kind of vague suggestions what could be going wrong or how I could begin to debug this.
The full code is available at https://github.com/gergoerdi/mos6502-kansas-lava/tree/interrupt-rewrite, the offending commit (the one that fixes the test and breaks the PET) is 7a09b794af. I realize this is the exact opposite of a minimal viable reproduction, but the change itself is tiny and because I have no idea where it goes wrong, and because reproducing the problem requires a machine featureful enough to boot the stock Commodore PET ROM, I don't know how I could shrink it...
Added:
I managed to reproduce the same issue on the same hardware with a very simple (dare I say minimal) ROM instead of the stock PET ROM:
.org $C000
reset:
        ;; Initialize PIA
        LDY #$07
        STY $E813
        LDA #30
        STA $42
        STA $8000
        CLI
        JMP *

irq:
        CMP $E812       ; ACK irq
        DEC $42
        BNE endirq
        LDX $8000
        INX
        STX $8000
        LDA #30
        STA $42
endirq: RTI

.res $FFFC-*
.org $FFFC
resetv: .addr reset
irqv:   .addr irq
Interrupts aren't queued; the interrupt line is sampled on the penultimate cycle of each instruction, and if it is active then and I is unset, a jump to the interrupt sequence occurs next instead of a fetch/decode. Could the confusion be that IRQ is level-triggered, not edge-triggered, and is usually held asserted for a period, not a single cycle? So clearing I will cause an interrupt to occur immediately if one is still being signalled. It looks like on the PET the interrupt is held active until the CPU acknowledges it?
Also notice the semantics: SEI and CLI adjust the flag in their final cycle, while the decision on whether to jump to interrupt was made the cycle before. So if an interrupt comes in during a SEI, you'll enter the interrupt routine with I set. And if an interrupt is active when you hit a CLI, the processor will perform the instruction after the CLI before branching.
I'm on a phone so it's difficult to assess more thoroughly than to offer those platitudes; I'll try to review properly later. Is any of that helpful?
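To make the sampling rule concrete, here is a rough C sketch of how an emulator core might model it; the structure and names are invented for illustration, not taken from the Kansas Lava code:

#include <stdbool.h>

struct cpu {
        bool flag_i;       /* interrupt-disable flag */
        bool irq_line;     /* current level of the IRQ input */
        bool irq_pending;  /* decision latched before the final cycle */
};

/* Run on the penultimate cycle of every instruction: level-triggered,
   no queue; we just look at the line right now. */
static void sample_irq(struct cpu *c)
{
        c->irq_pending = c->irq_line && !c->flag_i;
}

/* Run after the final cycle of every instruction. */
static void end_of_instruction(struct cpu *c)
{
        if (c->irq_pending) {
                /* push PC and P, set I, fetch the vector at $FFFE/$FFFF */
        } else {
                /* normal fetch/decode of the next opcode */
        }
}

Because a device like the PET's PIA keeps the line asserted until the handler acknowledges it, no explicit queue is needed: a CLI while the line is active simply lets the next sample take the interrupt, one instruction later.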

Arduino - Use serial print in the background

I would like to know if it is somehow possible to handle a Serial.println() on an Arduino Uno without holding up the main program.
Basically, I'm using the Arduino to charge a 400 V capacitor: the main program opens and closes the gate of a MOSFET for 15 and 20 microseconds respectively. I also have a voltage divider connected to the capacitor, which I use to measure the voltage on the capacitor while it is being charged. I use analogRead() to get the raw value on the pin, multiply the value by the required ratio, and try to print that value to the serial console at the end of each cycle. The problem, though, is that for the capacitor to charge quickly the delays need to be very small (in the range of microseconds), and the serial print command takes much longer than that to execute and therefore holds up the entire program.
My question therefore is whether it would somehow be possible to run the command on something like a different "thread" without holding up the main cycle. Is the 8-bit AVR capable of something like this?
Thanks
Arduino does not support multithreading, but there are a few libraries that allow you to do the equivalent:
Arduino Multi-Threading Library
ArduinoThread
For more information, you can also visit this question: Does Arduino support threading?
The basic Arduino serial print functions are blocking: they watch the TX-ready flag, then load the next byte to transmit. This means that if you send "Hello World", the print function blocks for as long as it takes the UART to send those characters at your selected baud rate.
One way to deal with this is to use a high baud rate so you don't wait long.
The better way is to use interrupt-driven transmit. This way your string of data is buffered, and each character is sent when the UART can accept it. The call to send data just loads the TX buffer, loads the first character into the UART to start the process, then returns.
This buffered approach will still block if you fill the TX buffer, since the send method then has to wait to load more into the buffer. A good serial library might give you an easy way to check the buffer status so you could implement your own extra buffering.
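As a sketch of what such an interrupt-driven transmitter looks like on the Uno's ATmega328P, in plain avr-libc C (register names are from the datasheet; the buffer size is an arbitrary choice):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define TX_BUF_SIZE 64

static volatile uint8_t tx_buf[TX_BUF_SIZE];
static volatile uint8_t tx_head, tx_tail;

void uart_init(void)
{
        UBRR0 = 103;                    /* 9600 baud at 16 MHz */
        UCSR0B = (1 << TXEN0);          /* enable the transmitter */
        sei();                          /* global interrupt enable */
}

void uart_putc(char c)
{
        uint8_t next = (uint8_t)((tx_head + 1) % TX_BUF_SIZE);
        while (next == tx_tail)
                ;                       /* blocks only when the buffer is full */
        tx_buf[tx_head] = (uint8_t)c;
        tx_head = next;
        UCSR0B |= (1 << UDRIE0);        /* kick the data-register-empty IRQ */
}

ISR(USART_UDRE_vect)
{
        if (tx_head == tx_tail) {
                UCSR0B &= (uint8_t)~(1 << UDRIE0);  /* buffer empty: IRQ off */
        } else {
                UDR0 = tx_buf[tx_tail];             /* send the next byte */
                tx_tail = (uint8_t)((tx_tail + 1) % TX_BUF_SIZE);
        }
}

uart_putc() returns as soon as the byte is queued, and the data-register-empty interrupt drains the buffer one byte at a time, so the main loop only stalls when the buffer is full.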
Here is a library that claims to be interrupt driven. I've not used it myself. Let us know if it works for you.
https://code.google.com/p/arduino-buffered-serial/
I found this post: "Serial output is now interrupt driven. Serial.print() just stuffs the data in a buffer. Interrupts have to happen for that data to actually be shifted out." (posted Nov 11, 2012). I have found that to be true, BUT if you try to print more than the internal serial buffer will hold, it will block until the buffer is at least partially emptied.
