Interrupt behavior of a 6502 in standalone test vs. in a Commodore PET - emulation

I am building a Commodore PET on an FPGA. I've implemented my own 6502 core in Kansas Lava (code is available at https://github.com/gergoerdi/mos6502-kansas-lava), and by putting enough IO around it (https://github.com/gergoerdi/eightbit-kansas-lava) I was able to boot the original Commodore PET ROM on it, get a blinking cursor and start typing.
However, after typing in the classic BASIC program
10 PRINT "HELLO WORLD"
20 GOTO 10
it crashes after a while (after several seconds) with
?ILLEGAL QUANTITY ERROR IN 10
Because my code has fairly reasonable per-opcode test coverage, and it passes AllSuiteA, I thought I would look into tests for more complicated behaviour, which is how I arrived at Klaus Dormann's interrupt testsuite. Running it in the Kansas Lava simulator has pointed out a ton of bugs in my original interrupt implementation:
The I flag was not set when entering the interrupt handler
The B flag was all over the place
IRQs were completely ignored unless I was clear when they arrived (the correct behaviour seems to be that an interrupt arriving while I is set should still be handled once I gets cleared)
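For reference, the entry behaviour the test exercises boils down to a few steps. Here is a minimal C sketch of the flag handling (illustrative only, not the Kansas Lava code; Cpu, push8 and the flag constants are made up for the example):

#include <stdint.h>

typedef struct { uint16_t pc; uint8_t p, s; uint8_t mem[0x10000]; } Cpu;

#define FLAG_I 0x04   /* interrupt disable */
#define FLAG_B 0x10   /* "break" bit, only meaningful in the pushed copy */
#define FLAG_U 0x20   /* unused bit, always pushed as 1 */

static void push8(Cpu *c, uint8_t v) { c->mem[0x0100 + c->s--] = v; }

void irq_enter(Cpu *c)
{
    push8(c, c->pc >> 8);                  /* PCH */
    push8(c, c->pc & 0xFF);                /* PCL */
    push8(c, (c->p | FLAG_U) & ~FLAG_B);   /* B is clear in the pushed status for IRQ/NMI,
                                              set only for BRK/PHP */
    c->p |= FLAG_I;                        /* I is set on entry, after the push */
    c->pc = c->mem[0xFFFE] | (c->mem[0xFFFF] << 8);   /* IRQ/BRK vector */
}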
After fixing these, I can now successfully run the Klaus Dormann test, so I was hoping that, with some luck, the BASIC crash would go away once I loaded my machine back onto the real FPGA.
However, the new version, with all these interrupt bugs fixed and passing the interrupt test in the simulator, now fails to respond to keyboard input or even just blink the cursor on the real FPGA. Note that both keyboard input and cursor blinking are done in response to an external IRQ (connected to the screen VBlank signal), so this means the fixed version has somehow broken all interrupt handling...
I am looking for suggestions, however vague, on what could be going wrong or how I could begin to debug this.
The full code is available at https://github.com/gergoerdi/mos6502-kansas-lava/tree/interrupt-rewrite; the offending commit (the one that fixes the test and breaks the PET) is 7a09b794af. I realize this is the exact opposite of a minimal viable reproduction, but the change itself is tiny, and because I have no idea where it goes wrong, and because reproducing the problem requires a machine featureful enough to boot the stock Commodore PET ROM, I don't know how I could shrink it...
Added:
I managed to reproduce the same issue on the same hardware with a very simple (dare I say minimal) ROM instead of the stock PET ROM:
        .org $C000
reset:
        ;; Initialize PIA 1: writing $07 to CRB ($E813) enables the CB1
        ;; (screen retrace) interrupt
        LDY #$07
        STY $E813
        LDA #30
        STA $42         ; zero-page counter of IRQs seen
        STA $8000       ; first character of screen memory
        CLI
        JMP *           ; spin forever; all further work happens in the IRQ handler
irq:
        CMP $E812       ; read PIA 1 port B to ACK the IRQ
        DEC $42
        BNE endirq
        LDX $8000       ; every 30th interrupt, bump the character on screen
        INX
        STX $8000
        LDA #30
        STA $42
endirq: RTI
        .res $FFFC-*
        .org $FFFC
resetv: .addr reset
irqv:   .addr irq

Interrupts aren't queued; the interrupt line is sampled on the penultimate cycle of each instruction, and if it is active at that point and I is clear, the next thing that happens is a jump to the interrupt sequence instead of a fetch/decode. Could the confusion be that IRQ is level-triggered, not edge-triggered, and is usually held active for a period rather than a single cycle? So clearing I will cause an interrupt to be taken immediately if the line is still asserted. It looks like on the PET the interrupt line is held active until the CPU acknowledges it?
Also note the semantics: SEI and CLI adjust the flag in their final cycle, but the decision on whether to jump to the interrupt sequence was made the cycle before. So if an interrupt arrives during a SEI, you'll still enter the interrupt routine, with I set; and if an interrupt is active when you hit a CLI, the processor will execute the instruction after the CLI before branching.
I'm on a phone so it's difficult to assess more thoroughly than to offer those platitudes; I'll try to review properly later. Is any of that helpful?
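To make the timing above concrete, here is a rough instruction-level approximation in C, reusing the hypothetical Cpu, FLAG_I and irq_enter from the sketch in the question (illustrative only; execute_one is a placeholder, and the real hardware makes the decision on the penultimate cycle rather than per instruction):

void execute_one(Cpu *c);   /* placeholder: runs a single instruction, including CLI/SEI */

void cpu_step(Cpu *c, int irq_line)      /* irq_line: a level, held until acknowledged */
{
    uint8_t i_before = c->p & FLAG_I;    /* the take-IRQ decision uses the old I... */
    execute_one(c);                      /* ...so CLI/SEI only affect the *next* decision */
    if (irq_line && !i_before)
        irq_enter(c);                    /* taken before the next fetch/decode */
}

With this model, an IRQ arriving during SEI is still taken (and the handler is entered with I set), after a CLI the following instruction executes before the interrupt is taken, and because irq_line is a level that stays asserted until the PET's PIA is acknowledged, nothing ever needs to be "queued".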

Related

ECU waked up by the Can error frame (Automotive)

I am encountering an issue where the ECU is woken up by a CAN error frame.
I got a report about this issue from the testing team.
I am wondering why an error frame can wake up the ECU while it is in sleep mode. How can this happen?
If anyone knows about this issue or has encountered it before, please help.
I really appreciate your willingness to support!
Wake-Up Pattern in CAN / CAN FD Networks
A wakeup pattern is essentially a dominant level lasting longer than 5 µs. With 500 kbit/s CAN the bit time is 2 µs, so that is about 2.5 bit times.
An active error frame is defined as six dominant bits transmitted by the ECU that detects the failure. At six bit times, that is plenty of time to satisfy a >5 µs wakeup pattern.
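A quick back-of-the-envelope check of those numbers (the bit rate and threshold are the values assumed in this thread, not taken from any specific transceiver datasheet):

#include <stdio.h>

int main(void)
{
    double bitrate_bps   = 500000.0;           /* 500 kbit/s */
    double bit_time_us   = 1e6 / bitrate_bps;  /* 2 us per bit */
    double error_flag_us = 6 * bit_time_us;    /* six dominant bits = 12 us */
    double t_wake_us     = 5.0;                /* assumed wakeup-pattern threshold */

    printf("error flag: %.0f us, wakeup threshold: >%.0f us -> %s\n",
           error_flag_us, t_wake_us,
           error_flag_us > t_wake_us ? "transceiver wakes up" : "too short");
    return 0;
}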
I am not sure about your hardware design, but there is nothing wrong with a wakeup in this case. The wakeup decision is usually made by the CAN transceiver or standalone CAN controller, which is not part of the microcontroller (MCU). It always wakes up when it detects a wakeup pattern (check the transceiver/controller datasheet) and then signals the MCU through hardware pins (INH, RX/TX).
At that point the MCU needs to wake up fast enough to inspect the first CAN message; all that is known so far is that something looked like a CAN frame.
So the expectation that the MCU will not wake up on a CAN error frame is also wrong: how would it know it was an error frame if it had not woken up yet?
The correct expectation concerns what the ECU does afterwards: it should first check whether there is a valid CAN frame and a valid wakeup reason, and otherwise go back to sleep. Please also check the AUTOSAR Network Management specification for more information.

Minimum time between falling and rising edge to detect a rising edge on a GPIO on STM32H7

On my STM32H753 I've enabled an interrupt on the rising edge of one of the GPIOs. Once I get the interrupt (provided, of course, that the handler acknowledges it in the EXTI peripheral), and the signal goes low again, I will be able to get another interrupt on the following rising edge.
My question is: what is the minimum duration between the falling edge and the rising edge for the latter to be detected by EXTI? The datasheet specifies many characteristics of the I/Os, in particular the voltage levels at which the input is considered low or high, but I didn't find this timing.
Thank you
For the electrical part, you need to refer to your MCU's datasheet.
However, I believe what you actually need is the software side:
You will be able to handle a new GPIO IRQ (EXTI) as soon as you've acknowledged the previous one by clearing the IRQ pending flag, either directly or via the HAL APIs.
If two edges occur and you have not yet cleared the pending flag, they will be handled as a single IRQ. How long this takes in practice depends on the clock speed you're using and the complexity of your EXTI IRQ handler routine.
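On the software side, a minimal sketch with the STM32 HAL looks like this (pin and IRQ handler names are examples for illustration; adapt them to your wiring):

#include "stm32h7xx_hal.h"

/* assuming the input is wired to a pin in the 10..15 group */
void EXTI15_10_IRQHandler(void)
{
    /* checks the pending flag, clears it, then calls HAL_GPIO_EXTI_Callback() */
    HAL_GPIO_EXTI_IRQHandler(GPIO_PIN_13);
}

void HAL_GPIO_EXTI_Callback(uint16_t GPIO_Pin)
{
    if (GPIO_Pin == GPIO_PIN_13) {
        /* handle the rising edge; until the pending flag is cleared above,
           further edges are merged into the one already pending */
    }
}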

2 Questions about Risc-V-Privileged-Spec-v1.7

Page 16, Table 3.1:
Base field in mcpuid: RV32I RV32E RV64I RV128I
What is "RV32E"?
Is there an "E" extension?
ECALL (page 30) says nothing about the behavior of the pc.
Meanwhile, mepc (page 28) and mbadaddr (page 29) state that "mepc will point to the beginning of the instruction". I think ECALL should set mepc to the end of the causing instruction so that an ERET would go to the next instruction. Is that right?
As answered by CliffordVienna, RV32E ("embedded") is a new base ISA which uses 16 registers and makes some of the counter registers optional.
I would not recommend implementing an RV32E core, as it is probably an unnecessary over-optimization in core size that limits your ability to use the large body of RV*I code. But if performance is not needed, you really need the core to be a tad smaller, the core is not connected to a memory hierarchy that would dominate the area/power anyway, and you are willing to deal with the tool-chain headaches... then maybe an RV32E core is appropriate.
ECALL is treated like an exception, and will redirect the PC to the appropriate trap handler based on the current privilege level. MEPC will be set to the current PC of the ecall instruction.
You can verify this behavior by analyzing the Berkeley RV64G Rocket processor (https://github.com/ucb-bar/rocket/blob/master/src/main/scala/csr.scala), or by looking at the Spike ISA simulator (starting here: https://github.com/riscv/riscv-isa-sim/blob/master/riscv/insns/scall.h). Careful: as of 2015 Jun 27 the code is still in flux regarding the Privileged Spec.
If we look at how Spike handles eret ("sret": https://github.com/riscv/riscv-isa-sim/blob/master/riscv/insns/sret.h) for example, we have to be a bit careful. The PC is set to "mepc", but it's the trap handler's job to advance the PC by 4. We can see that done, for example, by the proxy kernel in some of the handler functions here (https://github.com/riscv/riscv-pk/blob/master/pk/handlers.c).
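To illustrate that division of labour, here is a sketch only (not the actual riscv-pk code; the trapframe layout is made up for the example): the hardware leaves the saved PC pointing at the ecall itself, and the handler advances it before returning.

typedef struct {
    unsigned long regs[32];   /* saved x0..x31 */
    unsigned long epc;        /* saved PC: points at the ecall instruction */
} trapframe_t;

void handle_syscall(trapframe_t *tf)
{
    /* ... dispatch on the syscall-number register and write the result back ... */
    tf->epc += 4;   /* so eret/sret resumes at the instruction after the ecall */
}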
A draft of the RV32E (embedded) spec can be found here (via isa-dev mailing list):
https://lists.riscv.org/lists/arc/isa-dev/2015-06/msg00022/rv32e.pdf
It's RV32I with 16 instead of 32 registers and without the counter instructions.

x86 reserved EFLAGS bit 1 == 0: how can this happen?

I'm using the Win32 API to stop/start/inspect/change thread state. Generally works pretty well. Sometimes it fails, and I'm trying to track down the cause.
I have one thread that is forcing context switches on other threads by:
thread stop
fetch processor state into windows context block
read thread registers from windows context block to my own context block
write thread registers from another context block into windows context block
restart thread
This works remarkably well... but ... very rarely, context switches seem to fail.
(Symptom: my multithreaded system blows sky high, executing at strange places with strange register content).
The context control is accomplished by:
if ((suspend_count=SuspendThread(WindowsThreadHandle))<0)
{ printf("TimeSlicer Suspend Thread failure");
...
}
...
Context.ContextFlags = (CONTEXT_INTEGER | CONTEXT_CONTROL | CONTEXT_FLOATING_POINT);
if (!GetThreadContext(WindowsThreadHandle,&Context))
{ printf("Context fetch failure");
...
}
ContextSwap(&Context);   // does the context swap
if (ResumeThread(WindowsThreadHandle)<0)
{ printf("Thread resume failure");
...
}
None of the print statements ever get executed. I conclude that Windows thinks the context operations all happened reliably.
Oh, yes, I do know when a thread being stopped is not computing [e.g., in a system function] and won't attempt to stop/context switch it. I know this because each thread that does anything other-than-computing sets a thread specific "don't touch me" flag, while it is doing other-than-computing. (Device driver programmers will recognize this as the equivalent of "interrupt disable" instructions).
So, I wondered about the reliability of the content of the context block.
I added a variety of sanity tests on various register values pulled out of the context block; you can actually decide that ESP is OK (within bounds of the stack area defined in the TIB), PC is in the program that I expect or in a system call, etc. No surprises here.
I decided to check that the condition-code bits (EFLAGS) were being read out properly; if this were wrong, it would cause a switched task to take a "wrong branch" when its state was restored. So I added the following code to verify that the purported EFLAGS value contains bits consistent with EFLAGS as described in the Intel reference manual (http://en.wikipedia.org/wiki/FLAGS_register).
mov eax, Context.EFlags[ebx] ; ebx points to Windows Context block
mov ecx, eax ; check that we seem to have flag bits
and ecx, 0FFFEF32Ah ; where we expect constant flag bits to be
cmp ecx, 000000202h ; expected state of constant flag bits
je #f
breakpoint ; trap if unexpected flag bit status
##:
On my Win 7 AMD Phenom II X6 1090T (hex core), it traps occasionally with a breakpoint, with ECX = 0200h. It fails the same way on my Win 7 Intel i7 system. I would ignore this, except it hints that the EFLAGS value isn't being stored correctly, as I suspected.
According to my reading of the Intel (and also the AMD) reference manuals, bit 1 is reserved and always has the value "1". Not what I see here.
Obviously, MS fills the context block by doing complicated things on a thread stop. I expect them to store the state accurately. This bit isn't stored correctly.
If they don't store this bit correctly, what else don't they store?
Any explanations for why the value of this bit could/should be zero sometimes?
EDIT: My code dumps the registers and the stack on catching a breakpoint.
The stack area contains the context block as a local variable.
Both EAX, and the value in the stack at the proper offset for EFLAGS in the context block contain the value 0244h. So the value in the context block really is wrong.
EDIT2: I changed the mask and comparison values to
and ecx, 0FFFEF328h ; was FFEF32Ah where we expect flag bits to be
cmp ecx, 000000200h
This seems to run reliably with no complaints. Apparently Win7 doesn't maintain bit 1 of EFLAGS as documented, and it appears not to matter.
Still interested in an explanation, but apparently this is not the source of my occasional context switch crash.
Microsoft has a long history of squirreling away a few bits in places that aren't really used. Raymond Chen has given plenty of examples, e.g. using the low bit(s) of a pointer that is guaranteed to be aligned, so those bits are otherwise always zero.
In this case, Windows might have needed to store some of its own thread state in the existing CONTEXT structure, and decided to use an otherwise unused bit in EFLAGS. You couldn't do anything with that bit anyway, and Windows will get it back when you call SetThreadContext.
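In practice that means a sanity check on a saved context should treat bit 1 as "don't care", along the lines of the EDIT2 mask above. A C sketch of the same check (mask and expected values taken from the question):

#include <windows.h>

static int eflags_looks_sane(const CONTEXT *ctx)
{
    const DWORD mask     = 0xFFFEF328;   /* constant/reserved bits, with bit 1 excluded */
    const DWORD expected = 0x00000200;   /* IF set, everything else in the mask clear */
    return (ctx->EFlags & mask) == expected;
}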

Emulation: Unconditional jumps and PC increment through CPU cycles

I'm writing a simple GB emulator (wow, now that's something new, isn't it), since I'm really taking my first steps in emulation.
What I don't seem to understand is how to correctly implement the CPU cycle and unconditional jumps.
Consider the instruction JP nn (unconditional jump to the memory address given by the operand), e.g. JP 1000h. If I have a basic loop of:
increment PC
read opcode
execute command
Then after the JP opcode has been read and the command executed (read 1000h from memory and set PC = 1000h), the PC gets incremented and becomes 1001h, thus resulting in bad emulation.
tl;dr How do you emulate jumps in emulators, so that PC value stays correct, when having cpu loops that increment PC?
The PC should be incremented as an 'atomic' operation every time it is used to return a byte. This means immediate operands as well as opcodes.
In your example, the PC would be used three times: once for the opcode and twice for the two operand bytes. By the time the CPU has fetched the three bytes and is in a position to load the PC, the PC is already pointing at the next opcode after the second operand, but since actually executing the instruction reloads the PC, it doesn't matter.
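A sketch of that scheme in C (illustrative only; the Gb struct and flat memory model are simplified placeholders):

#include <stdint.h>

typedef struct { uint16_t pc; uint8_t mem[0x10000]; } Gb;

static uint8_t fetch8(Gb *g)                /* every fetch advances PC, operands included */
{
    return g->mem[g->pc++];
}

static uint16_t fetch16(Gb *g)              /* little-endian 16-bit operand */
{
    uint16_t lo = fetch8(g);
    return lo | (uint16_t)(fetch8(g) << 8);
}

void gb_step(Gb *g)
{
    uint8_t op = fetch8(g);                 /* PC now points at the first operand */
    switch (op) {
    case 0xC3:                              /* JP nn */
        g->pc = fetch16(g);                 /* reading nn moved PC past the operands,
                                               but the jump simply overwrites it */
        break;
    /* ... other opcodes ... */
    }
}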
Move increment PC to the end of the loop, and have it performed conditionally depending on the opcode?
I know next to nothing about emulation, but two obvious approaches spring to mind.
Instead of hardcoding PC += 1 into the main loop, let the evaluation of each opcode return the next PC value (or the offset, or a flag saying whether to increment it, etc.). Then the difference between jumps and other opcodes (their effect on the program counter) is definable along with everything else about them.
Knowing that the main loop will always increment the PC by 1, just have the implementation of jumps set the PC to target - 1 rather than target.
