is there a way to configure spike (riscv simulator) for entry PC etc.? - riscv

I'm verifying a RISC-V CPU that supports Machine mode only, and I want to run my generated program on the Spike simulator.
I'm struggling to find any documentation about it.
How can I set the initial PC to match my DUT's first PC?
How can I configure other parameters like 'mvendorid', etc.?
Currently I'm working without pk and I'm getting "terminate called after throwing an instance of trap_load_access_fault".
When I work with pk, the program enters an endless loop and the first PC doesn't look related to the ELF.
Any suggestions?

Spike provides an option "--pc" which overrides the default entry point; running with '--pc DUT_FIRST_PC' should solve your problem.
When Spike starts simulation, its first PC is DEFAULT_RSTVEC (0x1000), where Spike places a small "trampoline program". After the trampoline finishes, Spike jumps to your program's entry point and goes on; that jump target is what the '--pc' option overrides.
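A minimal invocation sketch (the address, ISA string, memory layout, and ELF name below are placeholders for your DUT's actual values, not taken from the question):

```shell
# Override the post-trampoline entry PC to match the DUT's reset address
# (0x80000000, rv32im, and the memory region are placeholders):
spike --pc=0x80000000 --isa=rv32im -m0x80000000:0x100000 my_test.elf
```

The '-m' option lets you place a RAM region at the same base address your DUT uses, so the bare-metal ELF loads without pk.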

Related

Running a machine code program on Vice Emulator (Vic-20)

Does anyone know how to get a machine code program running on the VICE VIC-20 emulator? I keep getting the emulator doing a reset every time I try it.
If I use the following code it should print an "A" on the screen, but I don't see anything. The program is loaded at memory location $1001 for an unexpanded VIC-20:
LDA #$01    ; screen code $01 = "A"
STA $1E00   ; first cell of screen memory on an unexpanded VIC-20
RTS
Thanks
Allan

"clocksource tsc unstable" shown when the linux kernel boots up

I am booting a Linux kernel on a full-system simulator, and I'd like to run my benchmark on the booted system. However, during boot it shows me this message: "clocksource tsc unstable", and occasionally it hangs at the beginning. Sometimes it lets me run my benchmark but then it hangs partway through, since the application never finishes and seems stuck. Any idea how to fix this issue?
Thanks.
This suggests that the kernel didn't manage to calibrate the TSC (Time Stamp Counter) properly, i.e. the value is stale. This usually happens in a VM. The way to avoid it is to pass a predefined lpj (loops per jiffy) value as a kernel parameter (lpj=). Try it; hopefully the issue will be fixed!
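As a sketch, assuming a GRUB-based setup (the lpj number below is a placeholder; take the real value from the "Calibrating delay loop... lpj=N" line of a successful boot log):

```shell
# /etc/default/grub — pin loops-per-jiffy so the kernel skips calibration:
GRUB_CMDLINE_LINUX="lpj=4980736"
# Then regenerate the GRUB config (e.g. update-grub) and reboot.
```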

kernel hangs up at boot indefinitely

I have configured the kernel's SLOB allocator to implement a best-fit algorithm. I built and installed the kernel image so that I can boot from it next time. Now when I try to boot this kernel it hangs indefinitely; the cursor does not even blink. The following messages are printed before the hang:
[0.000325] pid_max: default: 32768 minimum: 301
[0.001461] Security Framework initialized
[0.002108] AppArmor: AppArmor initialized
After these messages the boot hangs indefinitely. I would like to know some kernel debugging tricks that would help me navigate the problem, or some good reading.
I have also configured kdb but do not know how to use it in such a condition. Any help is appreciated!
Additional details:
I have modified the slob_page_alloc function to implement the best-fit algorithm, which is in turn called by the slob_alloc function. I am using v3.6.2.
Basically, you will need to stub out (or mock up) the external routines called by the best-fit algorithm code so that the best-fit code can be dropped into a test program. Then use some kind of C unit-test suite and a C coverage tool to help ensure that you have carefully tested all branches and all states of the code. (Unfortunately, I have no suggestions for such tools at this time.)

How to debug ARM Linux kernel (msleep()) lock up?

I am first of all looking for debugging tips. If someone can point out the one line of code to change or the one peripheral config bit to set to fix the problem, that would be terrific. But that's not what I'm hoping for; I'm looking more for how to go about debugging it.
Googling "msleep hang linux kernel site:stackoverflow.com" yields 13 answers and none is on point, so I think I'm safe to ask.
I rebuilt an ARM Linux kernel for an embedded TI AM1808 ARM processor (Sitara/DaVinci?). I see all of the boot log up to the login: prompt coming out of the serial port, but trying to log in gets no response; it doesn't even echo what I typed.
After lots of debugging I arrived at the kernel and added debugging code between lines 828 and 830 (yes, kernel version is 2.6.37). This is the point in kernel mode just before '/sbin/init' is called:
http://lxr.linux.no/linux+v2.6.37/init/main.c#L815
Right before line 830 I added a forever loop with a printk and I see the results. I have let it run for about a couple of hours and it counts to about 2 million. Sample line:
dbg:init/main.c:1202: 2088430
So it has spit out 60 million bytes without a problem.
However, if I add msleep(1000) to the loop, it prints only once, i.e. msleep() does not return.
Details:
Adding a conditional printk at line 4073 in the scheduler, conditioned on a flag that gets set at the start of the forever test loop described above, shows that schedule() is no longer called once it hangs:
http://lxr.linux.no/linux+v2.6.37/kernel/sched.c#L4064
The only selections under .config/'Device Drivers' are:
Block devices
I2C support
SPI support
The kernel and its ramdisk are loaded using uboot/TFTP.
I don't believe it tries to use the Ethernet.
Since all of this happens before '/sbin/init', very little should be happening.
More details:
I have a very similar board with the same CPU. I can run the same uImage and the same ramdisk and it works fine there. I can login and do the usual things.
I have run a memory test (64 MB total; limit the kernel to 32 MB and test the other 32 MB; it's a single-chip DDR2) and found no problem.
One board uses UART0 and the other UART2, but the boot log comes out of both, so that should not be the problem.
Any debugging tips are greatly appreciated.
I don't have an appropriate JTAG so I can't use that.
If msleep doesn't return or doesn't make it to schedule, then in order to debug we can follow the call stack.
msleep calls schedule_timeout_uninterruptible(timeout), which calls schedule_timeout(timeout), which in the default case exits without calling schedule if the timeout in jiffies passed to it is < 0, so that is one thing to check.
If the timeout is positive, then setup_timer_on_stack(&timer, process_timeout, (unsigned long)current); is called, followed by __mod_timer(&timer, expire, false, TIMER_NOT_PINNED); before calling schedule.
If we aren't getting to schedule, then something must be happening in either setup_timer_on_stack or __mod_timer.
The call trace for setup_timer_on_stack is: setup_timer_on_stack calls setup_timer_on_stack_key, which calls init_timer_on_stack_key, which is either external (if CONFIG_DEBUG_OBJECTS_TIMERS is enabled) or calls init_timer_key(timer, name, key), which calls debug_init followed by __init_timer(timer, name, key).
__mod_timer first calls timer_stats_timer_set_start_info(timer), then a whole lot of other function calls.
I would advise starting by putting a printk or two in schedule_timeout, probably on either side of the setup_timer_on_stack call or on either side of the __mod_timer call.
This problem has been solved.
With liberal use of printk it was determined that schedule() indeed switches to another task: the idle task. In this instance, being an embedded Linux, the original code base I copied from installed an idle task. That idle task is apparently not appropriate for my board; it locked up the CPU and thus caused the hang. Commenting out the call to the idle task
http://lxr.linux.no/linux+v2.6.37/arch/arm/mach-davinci/cpuidle.c#L93
works around the problem.

Address space identifiers using qemu for i386 linux kernel

Friends, I am working on an in-house architectural simulator which is used to simulate the timing effect of code running under different architectural parameters like cores, memory hierarchy, and interconnects.
I am working on a module that takes the actual trace of a running program from an emulator like PinTool or qemu-linux-user and feeds this trace to the simulator.
Till now my approach was this:
1) Take an objdump of the binary executable and parse this information.
2) Now the emulator only has to feed me an instruction pointer and other info like the load/store address.
Such approaches work only if the program content is known.
But now I am trying to take traces of an executable running on top of a standard Linux kernel. The problem is that the base kernel image does not contain the code for LKMs (Loadable Kernel Modules). Also, the daemons are not known when the kernel starts.
So, my approach to this is:
1) Use qemu to emulate a machine.
2) When an instruction is encountered for the first time, parse it and save the info for later.
3) Create a helper function which sends the IP and load/store address when an instruction is executed.
I am stuck at step 2. How do I differentiate between different processes from qemu, which is just an emulator and does not know anything about the guest OS?
I can modify the scheduler of the guest OS, but I am really not able to figure out the way forward.
Sorry if the question is lengthy. I know I could have abstracted some parts, but I felt they help explain the context of the problem.
In the first case, using qemu-linux-user to perform user-mode emulation of a single program, the task is quite easy, because memory is linear and there is no virtual memory involved in the emulator. The second case of whole-system emulation is a lot more complex, because you basically have to parse the addresses out of the kernel structures.
If you can get the virtual addresses directly out of QEMU, your job is a bit easier; then you just need to identify the process, and everything else works just like in the single-process case. You might be able to get the PID by faking a system call to get_pid().
Otherwise, this all seems quite similar to debugging a system from a physical memory dump. There are some tools for that task. They are probably too slow to run on every instruction, but you can look there for hints.