LPC1800 with S25FL256SA on SPIFI interface problem

An LPC1800 is interfaced with an external flash memory, S25FL064P1F, over the SPIFI interface. The code stored in the external flash S25FL064P1F is read and executed properly. To increase the memory size we replaced the S25FL064P1F with an S25FL256SA1F00 on the board and stored the same code in it. We are observing that code execution speed has dropped significantly with the new flash memory S25FL256SA1F00.
I want to know what needs to be done to increase the code execution speed from the new flash memory S25FL256SA1F00.
Please help me.
Thank you!
The code execution speed from the new external flash S25FL256S has improved significantly after increasing the SPIFI clock.
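For reference, switching the SPIFI base clock from the default IRC to PLL1 on the LPC1800 looks roughly like this (a minimal sketch using the CMSIS register names from the LPC18xx headers; the CLK_SEL and AUTOBLOCK field values are from UM10430, and the safe maximum frequency depends on which read opcode is used, since plain READ 0x03 is rated much lower than the fast/quad reads on this flash):

    #include "LPC18xx.h"                   /* CMSIS device header providing LPC_CGU */

    void spifi_clock_init(void)
    {
        /* BASE_SPIFI_CLK: CLK_SEL (bits 28:24) = 0x09 selects PLL1,
         * AUTOBLOCK (bit 11) blocks the clock glitchlessly during the
         * switch, PD (bit 0) = 0 leaves the output running. */
        LPC_CGU->BASE_SPIFI_CLK = (0x09u << 24) | (1u << 11);
    }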
There is one more observation regarding LPC1800 communication over SPIFI with the old flash S25FL064P and the new flash S25FL256S.
a. When the LPC1800 communicates with the S25FL064P, data appears on all four data lines IO0, IO1, IO2, IO3, which means communication takes place in quad mode.
b. When the LPC1800 communicates with the S25FL256S, data appears only on the two data lines IO0 and IO1, which means communication does not take place in quad mode. This is because the S25FL256S defaults to single- or dual-I/O operation. To get data on IO2 and IO3, quad mode needs to be enabled in the flash IC through its configuration register.
The LPC1800 SPIFI is currently working in memory mode, which is confirmed by the SPIFI status register bit MCINIT reading 1. To send the quad-mode configuration command to the S25FL256S, I aborted SPIFI memory mode by setting the RESET bit in the SPIFI status register to 1 and then stored the command in the SPIFI command register. But because terminating memory mode with the RESET bit also terminates code execution from flash, the code that stores the command in the SPIFI command register is never executed, and quad mode is never set in the S25FL256S.
I want to know how I can send a command to the flash after aborting memory mode. In which sequence do the SPIFI registers need to be configured?
Please help.
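A sequence that matches the two datasheets is sketched below (not verified on hardware). The key point is that the whole routine must execute from internal SRAM, because once memory mode is aborted, the SPIFI address window, and hence the code in flash, is no longer accessible. Register and bit names follow UM10430 and the CMSIS LPC18xx headers; the ".ramfunc" section name is an assumption about the linker script, and the dummy-cycle count must be checked against the S25FL256S latency configuration:

    #include "LPC18xx.h"                    /* CMSIS device header (assumed) */

    #define STAT_MCINIT   (1u << 0)
    #define STAT_CMD      (1u << 1)
    #define STAT_RESET    (1u << 4)

    /* SPIFI_CMD bit fields per UM10430 */
    #define DATALEN(n)    ((uint32_t)(n) & 0x3FFF)
    #define DOUT          (1u << 15)
    #define INTLEN(n)     ((uint32_t)(n) << 16)
    #define FIELDFORM(f)  ((uint32_t)(f) << 19) /* 0 = all serial, 1 = quad data */
    #define FRAMEFORM(f)  ((uint32_t)(f) << 21) /* 1 = opcode only, 4 = opcode + 3-byte addr */
    #define OPCODE(op)    ((uint32_t)(op) << 24)

    __attribute__((section(".ramfunc")))    /* must execute from SRAM, not SPIFI */
    void spifi_enable_quad(void)
    {
        /* 1. Abort memory mode and wait for the reset to complete. */
        LPC_SPIFI->STAT = STAT_RESET;
        while (LPC_SPIFI->STAT & STAT_RESET) { }

        /* 2. Write Enable (0x06): opcode-only frame, no data. */
        LPC_SPIFI->CMD = OPCODE(0x06) | FRAMEFORM(1) | FIELDFORM(0);
        while (LPC_SPIFI->STAT & STAT_CMD) { }

        /* 3. Write Registers (0x01): two output bytes, SR1 then CR1.
           CR1 bit 1 is QUAD; writing SR1 = 0 also clears block protection. */
        LPC_SPIFI->CMD = OPCODE(0x01) | FRAMEFORM(1) | FIELDFORM(0)
                       | DOUT | DATALEN(2);
        *(volatile uint8_t *)&LPC_SPIFI->DATA = 0x00;  /* SR1 */
        *(volatile uint8_t *)&LPC_SPIFI->DATA = 0x02;  /* CR1: QUAD = 1 */
        while (LPC_SPIFI->STAT & STAT_CMD) { }

        /* 4. Poll SR1 (Read Status, 0x05) until WIP (bit 0) clears. */
        uint8_t sr1;
        do {
            LPC_SPIFI->CMD = OPCODE(0x05) | FRAMEFORM(1) | FIELDFORM(0) | DATALEN(1);
            sr1 = *(volatile uint8_t *)&LPC_SPIFI->DATA;
            while (LPC_SPIFI->STAT & STAT_CMD) { }
        } while (sr1 & 0x01);

        /* 5. Re-enter memory mode with a quad read command. 0x6B (Quad Output
           Read, 3-byte address, 8 dummy clocks = 1 intermediate byte) only
           reaches the first 16 MB; the 4-byte variant 0x6C covers all 32 MB. */
        LPC_SPIFI->MCMD = OPCODE(0x6B) | FRAMEFORM(4) | FIELDFORM(1) | INTLEN(1);
        while (!(LPC_SPIFI->STAT & STAT_MCINIT)) { }
    }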

Related

What is the proper Linux suspend/resume kernel reentry point when the processor is powered down and back up?

I am working on expanding the suspend and resume capability for Linux running on a Xilinx Zynq SoC that has dual ARM A9s, using the Xilinx 2014.4 PetaLinux distribution based on the 3.17 kernel.
The distribution currently supports suspend and resume without removing power to the processors. I am working on expanding the support to include removing power to the processors while the DDR memory is in self refresh mode, then performing a warm boot and having the first stage boot loader (FSBL) hand control back to the Linux kernel in the correct spot to continue with the resume process.
I am looking for what routine to have the first stage boot loader call. I think it might be cpu_resume in arch/arm/kernel/sleep.S, but I am not sure and would like to verify that it is the correct routine to be calling.
I am jumping to this from the FSBL after a suspend and warm boot, and single-stepping through the code in a debugger. It gets past the point of turning the MMU back on, but the debugger loses its connection near the end of that routine.
In an effort to see if this is the correct entry point, I have modified pm.c in arch/arm/mach-zynq to call soft_restart(virt_to_phys(cpu_resume)) just after the resume has happened in the case where the processors are not powered off, but are woken back up with an interrupt. The soft_restart routine turns off the MMUs and clears the caches, and I am using it to get the processors into a state similar to what they would be in if they had been power cycled. This test mostly works, and gets much further than calling cpu_resume from the FSBL. The cpu_resume routine expects to be called with the cache and MMU off.
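For context, the test modification described above looks roughly like this (a sketch assuming the 3.17-era structure of arch/arm/mach-zynq/pm.c; the function names are approximate, but soft_restart(virt_to_phys(cpu_resume)) is the same idiom the mainline kernel uses in arch/arm/kernel/hibernate.c):

    #include <linux/suspend.h>
    #include <asm/memory.h>                /* virt_to_phys() */
    #include <asm/suspend.h>               /* cpu_suspend(), cpu_resume */
    #include <asm/system_misc.h>           /* soft_restart() */

    static int zynq_pm_suspend(unsigned long arg)
    {
        /* ... existing suspend/WFI sequence, unchanged ... */
        return 0;
    }

    static int zynq_pm_enter(suspend_state_t state)
    {
        int ret = cpu_suspend(0, zynq_pm_suspend);

        /* Test only: soft_restart() turns off the MMU and caches and jumps to
         * a physical address, so cpu_resume is entered in roughly the state
         * the FSBL would hand it after a warm boot. */
        soft_restart(virt_to_phys(cpu_resume));

        return ret;                        /* not reached in this test */
    }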
I have read the Documentation/power/devices.txt file, and anything else I could find.
I have already done a version of this that uses the no OS flow to put the DDR into self refresh and power cycle the processors, and tested it on hardware. That flow does not use virtual memory.
Thank you in advance,
John

Control V4L2/VB2 Buffer Allocation?

I am trying to write a V4L2-compliant driver for a special camera device I have, but the device doesn't seem particularly friendly with V4L2's buffer system. Instead of separately allocated buffers, it wants a single contiguous block of memory capable of holding a set number of buffers (usually 4), and it then provides a status register telling you which is the latest (updated after each frame is DMA'ed to the host). So it basically needs only a single large DMA-allocated memory chunk to work with, not four most likely separate ones.
How can I use this with V4L? Everything I see about VIDIOC_CREATE_BUFS, VIDIOC_REQBUFS, and such does internal allocation of the buffers, and I can't get anything V4L-based (like qv4l2) to work without a successful QBUF and DQBUF that uses their internal structure.
How can this be done?
Just for completeness, I finally found a solution in the "meye" driver. I removed everything VB2 and wrote my own reqbuf, querybuf, qbuf, and dqbuf, along with my own mmap routines to handle the allocation. And it all works!
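The core of the allocation side is simple once videobuf2 is out of the picture: one coherent DMA block, carved into fixed-size buffers, with each buffer's offset into the block reported back through VIDIOC_QUERYBUF for mmap. A minimal sketch (the struct and sizes are made up for illustration):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    #define NUM_BUFS 4
    #define BUF_SIZE (640 * 480 * 2)      /* assumed frame size */

    struct cam_dev {
        struct device *dev;
        void *vaddr;                      /* kernel virtual address of the block */
        dma_addr_t dma;                   /* bus address programmed into the device */
    };

    static int cam_alloc_buffers(struct cam_dev *cam)
    {
        cam->vaddr = dma_alloc_coherent(cam->dev, NUM_BUFS * BUF_SIZE,
                                        &cam->dma, GFP_KERNEL);
        if (!cam->vaddr)
            return -ENOMEM;
        /* Buffer i lives at cam->vaddr + i * BUF_SIZE; i * BUF_SIZE is what
         * our VIDIOC_QUERYBUF reports in m.offset, and our mmap handler maps
         * the requested offset within this one block. */
        return 0;
    }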

Why do we need a bootloader in an embedded device?

I'm working with an embedded Linux kernel on an ARM Cortex-A8.
I know how the bootloader works and what job it's doing. But I've got a question: why do we need a bootloader, and why was the bootloader born?
Why can't we directly load the kernel into RAM from flash memory, without a bootloader? What would happen if we did? In fact, the processor will not support it, but why do we follow this procedure?
In the context of Linux, the boot loader is responsible for some predefined tasks. As this question is arm tagged, I think that ARM booting might be a useful resource. Specifically, the boot loader was/is responsible for setting up an ATAG list that describes the amount of RAM, a kernel command line, and other parameters. One of the most important parameters is the machine type. With device trees, an entire description of the board is passed. This makes a stock ARM Linux impossible to boot without some code to set up the parameters as described.
The parameters allow one generic Linux kernel to support multiple devices. For instance, an ARM Debian kernel can support hundreds of different board types. U-Boot or another boot loader can dynamically determine this information, or it can be hard-coded for the board.
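For a feel of what "setting up an ATAG list" means in practice, here is a minimal sketch (the tag IDs are the real kernel values from arch/arm/include/uapi/asm/setup.h; the RAM base, size, and command line are made-up examples). The boot loader builds this list in RAM, then jumps to the kernel with r0 = 0, r1 = machine type, and r2 = the list's address:

    #include <stdint.h>
    #include <string.h>

    #define ATAG_CORE    0x54410001
    #define ATAG_MEM     0x54410002
    #define ATAG_CMDLINE 0x54410009
    #define ATAG_NONE    0x00000000

    /* Each tag is a {size-in-words, tag-id} header followed by its payload. */
    static uint32_t *p;

    static void tag_core(void) { p[0] = 2; p[1] = ATAG_CORE; p += 2; }

    static void tag_mem(uint32_t start, uint32_t size)
    {
        p[0] = 4; p[1] = ATAG_MEM; p[2] = size; p[3] = start; p += 4;
    }

    static void tag_cmdline(const char *cmd)
    {
        uint32_t words = (strlen(cmd) + 4) / 4;   /* payload incl. NUL, in words */
        p[0] = 2 + words; p[1] = ATAG_CMDLINE;
        strcpy((char *)&p[2], cmd);
        p += 2 + words;
    }

    static void tag_end(void) { p[0] = 0; p[1] = ATAG_NONE; }

    void setup_atags(void *base)              /* conventionally RAM base + 0x100 */
    {
        p = base;
        tag_core();                           /* ATAG_CORE must come first */
        tag_mem(0x80000000, 64 << 20);        /* example: 64 MiB at 0x80000000 */
        tag_cmdline("console=ttyS0,115200 root=/dev/mmcblk0p2");
        tag_end();
    }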
You might also like to look at the bootloader info page here at Stack Overflow.
A basic system might be able to set up the ATAGs and copy NOR flash to SRAM. However, it is usually a little more complex than this. Linux needs RAM set up, so you may have to initialize an SDRAM controller first. If you use NAND flash, you have to handle bad blocks, and the copy may be a little more complex than a memcpy().
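As an illustration of why a NAND copy is more than a memcpy(), a loader typically walks the blocks and skips the ones marked bad (a sketch; nand_block_is_bad() and nand_read_block() stand in for chip-specific code, and the image is assumed to have been written skipping bad blocks in the same way):

    #include <stdbool.h>
    #include <stdint.h>

    #define BLOCK_SIZE (128 * 1024)

    extern bool nand_block_is_bad(uint32_t block);          /* assumed helper */
    extern void nand_read_block(uint32_t block, void *dst); /* assumed helper */

    void nand_load(uint32_t first_block, uint32_t nblocks, uint8_t *dst)
    {
        for (uint32_t blk = first_block; nblocks != 0; blk++) {
            if (nand_block_is_bad(blk))
                continue;                 /* image was written skipping bad blocks */
            nand_read_block(blk, dst);
            dst += BLOCK_SIZE;
            nblocks--;
        }
    }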
Linux often has latent driver bugs where a driver assumes that a clock is already initialized. For instance, if U-Boot always initializes an Ethernet clock for a particular machine, the Linux Ethernet driver may have neglected to set up this clock. This can be especially true with clock trees.
Some systems require boot image formats that are not supported by Linux; for example, a special header that can initialize hardware immediately, such as configuring the device to read the initial code from. Additionally, there is often hardware that should be configured immediately; a boot loader can do this quickly, whereas the normal structure of Linux may delay it significantly, resulting in I/O conflicts, etc.
From a pragmatic perspective, it is simpler to use a boot loader. However, there is nothing to prevent you from altering Linux's source to boot directly; although it may be like pasting the boot loader code directly onto the start of Linux.
See also: Coreboot, U-Boot, and Wikipedia's comparison. Barebox is a lesser-known but well-structured and modern boot loader for ARM. RedBoot is also used in some ARM systems; RedBoot partitions are supported in the kernel tree.
A boot loader is a computer program that loads the main operating system or runtime environment for the computer after completion of the self-tests.
^ From Wikipedia Article
So basically bootloader is doing just what you wanted - copying data from flash into operating memory. It's really that simple.
If you want to know more about bootstrapping the OS, I highly recommend you read the linked article. Apart from self-tests, the boot phase also consists of checking peripherals and some other things. Skipping them makes sense only on very simple embedded devices, and that's why their bootloaders are even simpler:
Some embedded systems do not require a noticeable boot sequence to begin functioning and when turned on may simply run operational programs that are stored in ROM.
The same source
The primary bootloader is usually built into the silicon and loads the first user code that will be run in the system.
The bootloader exists because there is no standardized protocol for loading the first code, since it is chip dependent. Sometimes the code can be loaded through a serial port, a flash memory, or even a hard drive. It is the bootloader's function to locate it.
Once the user code is loaded and running, the bootloader is no longer used, and the correctness of the system's execution becomes the user's responsibility.
In the embedded Linux chain, the primary bootloader sets up and runs U-Boot. U-Boot then finds the Linux kernel and loads it.
Why can't we directly load the kernel into RAM from flash memory, without a bootloader? What would happen if we did? In fact, the processor will not support it, but why do we follow this procedure?
Bartek, Artless, and Felipe all give parts of the picture.
Every embedded processor type (e.g., 386EX, Cortex-A53, EM5200) will do something automatically when it is reset or powered on. Sometimes that something is different depending on whether the power is cycled or the device is reset. Some embedded processors allow you to change that something based on voltages applied to different pins when the device is powered or reset.
Regardless, there is a limited amount of something that a processor can do, because of the physical space on-processor required to define that something, whether it is on-chip FLASH, instruction micro-code, or some other mechanism.
This limit means that the something is
fixed purpose, does one thing as quickly as possible.
limited in scope and capability, typically loading a small block of code (often a few kilobytes or less) into a fixed memory location and executing from the start of the loaded code.
unmodifiable.
So what a processor does in response to reset or power-cycle cannot be changed, and cannot do very much, and we don't want it to automatically copy hundreds of megabytes or gigabytes into memory which may not exist or may not be initialized, and which could take a looooong time.
So....
We set up a small program which is smaller than the smallest size permitted across all of the devices we are going to use. That program is stored wherever the something needs it to be.
Sometimes the small program is U-Boot. Sometimes even U-Boot is too big for initial load, so the small program then in turn loads U-Boot.
The point is that whatever gets loaded by the something, is modifiable as needed for a particular system. If it is U-Boot, great, if not, it knows where to load the main operating system or where to load U-Boot (or some other bootloader).
U-Boot (speaking of bootloaders in general) then configures a minimal set of devices, memory, chip settings, etc., to enable the main OS to be loaded and started. The main OS init takes care of any additional configuration or initialization.
So the sequence is:
Processor power-on or reset
Something loads initial boot code (or U-Boot style embedded bootloader)
Initial boot code (may not be needed)
U-Boot (or other general embedded bootloader)
Linux init
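A sketch of what the "initial boot code" stage in the sequence above might amount to on a simple system: copy a second-stage image (such as U-Boot) out of memory-mapped NOR flash into RAM and jump to it. All addresses and sizes here are made up for illustration:

    #include <stdint.h>

    #define NOR_BASE    ((const uint32_t *)0x08000000) /* assumed flash window */
    #define LOAD_ADDR   ((uint32_t *)0x20000000)       /* assumed RAM target */
    #define IMAGE_WORDS (256 * 1024 / 4)               /* 256 KiB second stage */

    void first_stage(void)
    {
        /* (On real hardware, SDRAM controller init would come first.) */
        for (uint32_t i = 0; i < IMAGE_WORDS; i++)
            LOAD_ADDR[i] = NOR_BASE[i];

        /* Jump to the entry point of the copied image. */
        ((void (*)(void))LOAD_ADDR)();
    }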
The kernel requires the hardware on which you are working to be in a particular state. All the hardware you use needs to be checked for its state and initialized for further operation. This is one of the main reasons to use a boot loader in an embedded (or any other) environment, apart from its use for loading a kernel image into RAM.
When you turn on a system, the RAM is also not in a useful state (fully initialized and ready to use) for us to load the kernel into it. Therefore, we cannot load a kernel directly (to answer your question), and thus arises the need for a construct to initialize it.
Apart from what is stated in all the other answers, which is correct, in some cases the system has to go through different execution modes; take TrustZone on secure ARM chips as an example. It is still possible to consider this a sort of hardware initialization, but what makes it peculiar is that there are additional limitations (e.g., available memory) that make it impractical, if not impossible, to do everything in a single binary, so multiple bootloader stages are used.
Furthermore, for security reasons, each stage is signed and can perform its job only if it meets the security requirements.

Does Windows have an interrupt context?

I have recently started reading Linux Kernel Development by Robert Love and I am Love-ing it!
Please read the below excerpt from the book to better understand my questions:
A number identifies interrupts and the kernel uses this number to execute a specific interrupt handler to process and respond to the interrupt. For example, as you type, the keyboard controller issues an interrupt to let the system know that there is new data in the keyboard buffer. The kernel notes the interrupt number of the incoming interrupt and executes the correct interrupt handler. The interrupt handler processes the keyboard data and lets the keyboard controller know it is ready for more data...
Now I dual-boot on my machine, and sometimes (in fact, many times) when I type something on Windows, I find myself doing it in what I call Night Crawler mode. This is when I am typing and I don't see anything on the screen, and later, after a while, the entire text comes up in one flash; probably the buffer just spits everything out.
Now I don't see this happening on Linux. Is it because of the interrupt context present in Linux and the absence of it in Windows?
BTW, I am still not sure if there is an interrupt context in Windows; Google didn't give me any relevant results for that.
All OSes have an interrupt context; it's a feature/constraint of the CPU architecture -- basically, this is "just the way things work" with computer hardware. Different OSes (and drivers within an OS) make different choices about what work, and how much work, to do in the interrupt before returning, though. That may be related to your Windows experience, or it may not. There is a lot of code involved in getting a key press translated into screen output, and interrupt handling is only a tiny part.
A number identifies interrupts and the kernel uses this number to execute a specific interrupt handler to process and respond to the interrupt. For example, as you type, the keyboard controller issues an interrupt to let the system know that there is new data in the keyboard buffer. The kernel notes the interrupt number of the incoming interrupt and executes the correct interrupt handler. The interrupt handler processes the keyboard data and lets the keyboard controller know it is ready for more data
This is a pretty poor description. Things might be different now with USB keyboards, but this seems to discuss what would happen with an old PS/2 connection, where an "8042"-compatible chipset on your motherboard signals on an IRQ line to the CPU, which then executes whatever code is at the address stored in location 9 in the interrupt table (traditionally an array of pointers starting at address 0 in physical memory, though from memory you could change the address, and last time I played with this stuff PCs still had <1MB RAM and used different memory layout modes).
That dispatch process has nothing to do with the kernel... it's the way the hardware works. (The keyboard controller could be asked not to generate interrupts, allowing OS/driver software to "poll" it regularly to see if there happened to be new event data available, but it'd be pretty crazy to use that really).
Still, the code address from the interrupt table will point into the kernel or keyboard driver, and the kernel/driver code will read the keyboard event data from the keyboard controller's I/O port. For these hardware interrupt handlers, a primary goal is to get the data from the device and store it into a buffer as quickly as possible: both to ensure a return from the interrupt to whatever processing was happening, and because the keyboard controller can only handle one event at a time; it needs to be read off into the buffer before the next event arrives.
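To make that concrete, a bare-metal x86 IRQ1 handler in the spirit described above might look like this (a sketch; in a real system this function is reached through an interrupt-gate stub that saves registers and ends with iret):

    #include <stdint.h>

    #define KBD_DATA_PORT 0x60            /* 8042 output buffer */
    #define PIC1_CMD      0x20
    #define PIC_EOI       0x20

    static uint8_t ring[256];
    static volatile uint8_t head, tail;   /* consumer drains from tail later */

    static inline uint8_t inb(uint16_t port)
    {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    static inline void outb(uint16_t port, uint8_t v)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(v), "Nd"(port));
    }

    void keyboard_isr(void)               /* reached via vector 9 (IRQ1) */
    {
        ring[head++] = inb(KBD_DATA_PORT);/* read now: controller holds one event */
        outb(PIC1_CMD, PIC_EOI);          /* acknowledge the interrupt controller */
    }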
It's then up to the OS/driver either to provide some kind of input-availability signal to application software, or to wait for the application software to attempt to read more keyboard input, but it can do so in a "whenever you're ready" fashion. Either way, once an application has time to read and start responding to the input, things can happen that mean it takes an unexpectedly long amount of time: the extra keystroke could trigger some complex repagination algorithm that takes a long time to run, or the keystroke could make the program execute code that has been swapped out to disk (check Wikipedia for "virtual memory"), in which case the program can continue to run only after the hard disk has read part of it back into memory.
There are thousands of such edge cases involving window movement, graphics clipping algorithms, etc. that could account for the keyboard-handling code taking a long time to complete, and if other keystrokes have happened meanwhile, they'll be read by the keyboard driver into that buffer, then only "perceived" by the application after the slow/blocking processing completes. It may well be that the processing consequent to all the keystrokes then in the buffer completes much more quickly: for example, if part of the program was swapped in from disk, that part may be ready to process the remaining keystrokes.
Why would Linux do better at this than Windows? Mainly because the operating system, drivers, and applications tend to be "leaner and meaner"... less bloated software (like C++ vs. C# .NET), less wasted memory, and so less swapping and fewer delays.

Linux Paging and interrupt handler

Dear Sir/Madam,
I am trying to implement a ReadyBoost-like feature in Linux for my final-year undergraduate project. I was just researching and found out that whenever a page fault occurs, the CPU raises interrupt 14. So I need your guidance on the following scheme I am thinking of:
I will create an interrupt handler which will be activated when the interrupt occurs.
This handler can extract the linear address of the fault from the cr2 register, and we can use the Linux page tables to get the physical address.
Do you think this will be a feasible scheme?
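For step 2, I was thinking of a fragment like this (a sketch; reading cr2 this way mirrors what the kernel's own x86 fault handler does):

    /* The CPU latches the faulting linear address into CR2 before taking
     * vector 14, so the handler just reads it back. */
    static inline unsigned long read_cr2(void)
    {
        unsigned long addr;
        __asm__ volatile ("mov %%cr2, %0" : "=r"(addr));
        return addr;
    }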
Also, any tutorial on the same will be highly appreciated.
Thanks to all in advance.
Regards
Isn't "ReadyBoost" simply implemented by running mkswap followed by swapon on the /dev/sd* device special file for the flash disk? As far as I'm aware, all the necessary kernel-side support is in place.
We're not going to do your assignment for you.
IIUC, ReadyBoost is not the same as swap, @caf. It is about caching disk content on a faster medium to make random disk accesses faster. Linux will never page disk-backed pages out to swap; they will just be dropped and reread from the disk. Only anonymous pages go to swap.
Also, ReadyBoost data is mirrored to disk, so the USB drive can be removed at any time, and it is also encrypted, so that if the key is removed and analysed on another system, nothing is disclosed.
So, @R-The_Master, you could implement something like ReadyBoost for Linux. But it has basically nothing to do with int 14.
